Glenn Grossman and Yusef Khan | Io-Tahoe ActiveDQ Intelligent Automation


 

>>From around the globe, it's theCUBE, presenting ActiveDQ Intelligent Automation for data quality, brought to you by Io-Tahoe.

>>Welcome to the sixth episode of the Io-Tahoe data automation series on theCUBE. We're going to start off with a segment on how to accelerate the adoption of Snowflake, with Glenn Grossman, enterprise account executive at Snowflake, and Yusef Khan, head of data services at Io-Tahoe. Gentlemen, welcome.

>>Good afternoon, good morning, good evening, Dave.

>>Good to see you, Dave. Good to see you.

>>Okay, Glenn, let's start with you. TheCUBE hosted the Snowflake Data Cloud Summit in November, and we heard from customers; I love the tagline, "zero to Snowflake in 90 minutes," very quickly. And of course you want to make it simple and attractive for enterprises to move data and analytics onto the Snowflake platform. But help us understand: once the data is there, how is Snowflake helping to achieve savings compared to a data lake?

>>Absolutely, Dave, it's a great question. You know, it starts off with the notion of what we coined in the industry as t-shirt-size pricing. You don't necessarily always need the performance of a high-end sports car when you're just trying to get some groceries and drive down the street at 20 mph. T-shirt-size pricing lets you align the warehouse to your operational workload, to support the business and the value you need from it. You don't need data every second of every day; it might be once a day, or once a week. Through that t-shirt-size pricing, we can align the performance to the environmental needs of the business, to the drivers, the key performance indicators that produce the insight for better decisions, and that allows us to control cost. So to my point: you don't always need the performance of a Ferrari. Maybe you need the performance and gas mileage of a Honda Civic to deliver the value to the business, knowing that the entire performance landscape is available at a moment's notice. That's what gets us away from "how much is it going to cost me?" in a data-lake type of environment.

>>Got it, thank you for that. Yusef, where does Io-Tahoe fit into this equation? What's unique about the approach you're taking toward this notion of mobilizing data on Snowflake?

>>Well, Dave, in the first instance we profile the data itself, at the data level, not just at the level of metadata, and we do that wherever the data lives. It could be structured data, semi-structured data, or unstructured data, and it could be on premise, in the cloud, or on some kind of SaaS platform. We profile the data at the source systems that are feeding Snowflake, within Snowflake itself, and in the end applications and reports that the Snowflake environment is serving. So what we've done here is take our machine-learning discovery technology and make Snowflake itself the repository for knowledge and insights on data, and that is pretty unique. Automation in the form of RPA is applied to the data before, after, and within Snowflake, and the ultimate outcome is that business users can have a much greater degree of confidence that the data they're using can be trusted. The other thing we do, which is unique, is employ Data RPA to proactively detect and recommend fixes to data quality, which removes the manual time, effort, and cost it takes to fix data quality issues if they're left unchecked and untouched.
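Yusef's point about profiling actual data values, not just metadata, can be made concrete with a small sketch. This is not Io-Tahoe's implementation, just a minimal illustration in Python with pandas; the file name, columns, and PII patterns are assumptions for the example.

```python
import re
import pandas as pd

# Illustrative patterns for flagging columns that look sensitive.
PII_PATTERNS = {
    "email": re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"),
    "us_ssn": re.compile(r"^\d{3}-\d{2}-\d{4}$"),
}

def profile(df: pd.DataFrame) -> pd.DataFrame:
    """Profile real values, not just metadata: nulls, cardinality, PII hints."""
    rows = []
    for name in df.columns:
        series = df[name]
        sample = series.dropna().astype(str).head(1000)
        tags = [label for label, pattern in PII_PATTERNS.items()
                if len(sample) > 0 and sample.str.match(pattern).mean() > 0.8]
        rows.append({
            "column": name,
            "null_pct": round(series.isna().mean() * 100, 1),
            "distinct": series.nunique(),
            "candidate_pii": ", ".join(tags) or "-",
        })
    return pd.DataFrame(rows)

# Profile an extract pulled from any source system feeding Snowflake.
print(profile(pd.read_csv("customers.csv")))
```

The same profile can be run at the source, inside Snowflake, and against reporting extracts, which is the before-during-after coverage Yusef describes.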
>>So that's key; two things there. Trust: nobody's going to use data that isn't trusted. But also context: if you think about it, we've contextualized our operational systems, but not our analytics systems, so there's a big step forward there. Glenn, I wonder if you can tell us how customers are managing data quality when they migrate to Snowflake, because there's a lot of baggage in traditional data warehouses and data lakes and data hubs. Maybe you can talk about why this is a challenge for customers, and, for instance, whether you can proactively address some of the challenges customers face.

>>We certainly can. You know, legacy data sources always come with inherent data quality issues. Even with master data management and data stewardship programs over the last almost two decades, you still have systemic data issues: you have siloed data, you have operational data stores and data marts; it became a hodgepodge. When organizations are starting their journey to migrate to the cloud, one of the first things we do is that inspection of the data, first and foremost even looking to retire legacy data sources that aren't used across the enterprise but stayed in place because they were part of long-running, on-premise operational technology. When we look at data pipelines as we onboard a customer, we want to do QA, quality assurance, so that we can, as our ultimate goal, eliminate the garbage-in, garbage-out scenarios that we've been plagued with over the last 40 or 50 years of data in general. So we have to take an inspection approach. Traditionally it was ETL; in the world of Snowflake it's really ELT: we're extracting, we're loading, we're inspecting, and then we're transforming out to the business, so these routines can be done once, and again give business value back to making decisions around the data, instead of spending all that time re-architecting the data pipeline to serve the business.
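The ELT pattern Glenn describes (load first, inspect, then transform inside Snowflake) looks roughly like the sketch below. It is a hedged example using the snowflake-connector-python package; the account details, stage, table names, and the load_ts column are invented, and the warehouse resize echoes Glenn's t-shirt-sizing point.

```python
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="...",
    warehouse="LOAD_WH", database="RAW", schema="PUBLIC",
)
cur = conn.cursor()

# T-shirt sizing in practice: a small warehouse is fine for routine loads.
cur.execute("ALTER WAREHOUSE LOAD_WH SET WAREHOUSE_SIZE = 'XSMALL'")

# Load: land staged files as-is, with no upfront transformation.
cur.execute("COPY INTO raw_orders FROM @landing_stage/orders/ FILE_FORMAT = (TYPE = CSV)")

# Inspect: a cheap sanity check before transforming.
cur.execute("SELECT COUNT(*), COUNT(DISTINCT order_id) FROM raw_orders")
total, distinct = cur.fetchone()
print(f"{total} rows loaded, {total - distinct} possible duplicates")

# Transform: do the work inside Snowflake, close to the data.
cur.execute("""
    CREATE OR REPLACE TABLE analytics.public.orders AS
    SELECT order_id, customer_id, TRY_TO_DATE(order_date) AS order_date, amount
    FROM raw_orders
    QUALIFY ROW_NUMBER() OVER (PARTITION BY order_id ORDER BY load_ts DESC) = 1
""")
conn.close()
```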
>>Got it, thank you, Glenn. Yusef, of course, Snowflake is renowned for ease; customers tell me all the time it's so easy to spin up a data warehouse, it helps with security, it simplifies everything. But getting started is one thing; adoption is also key. So I'm interested in the role Io-Tahoe plays in accelerating adoption for new customers.

>>Absolutely, David. As Glenn said, every migration to Snowflake is going to have a business case, and that is going to be partly about reducing spend on legacy IT: servers, storage, licenses, support, all those good things that CIOs want to be able to turn off entirely, ultimately. What Io-Tahoe does is help discover all the legacy, undocumented silos that have built up, as Glenn says, on the data estate over a period of time, build intelligence around those silos, and help reduce those legacy costs sooner by accelerating the whole process. Obviously, the quicker that IT and CDOs can turn off legacy data sources, the more funding and resources are available to them to manage the new Snowflake-based data estate on the cloud. Turning off the old and building the new go hand in hand, to make sure the numbers stack up, the program is delivered, and the benefits are delivered. So what we're doing here with Io-Tahoe is improving the customer's ROI by accelerating their ability to adopt Snowflake.

>>Great. And I mean, we're talking a lot about data quality here, but in a lot of ways that's table stakes; like I said, if you don't trust the data, nobody's going to use it. Glenn, I look at Snowflake and I see, obviously, the ease of use and the simplicity; you guys are nailing that. The data-sharing capabilities, I think, are really exciting, because everybody talks about sharing data, but then we talk about data as an asset and everyone wants to hold onto it. So sharing is something I see as a paradigm shift, and you guys are enabling that. What are some of the things beyond data quality that are notable, that customers are excited about, that maybe you're excited about?

>>David, I think you just teed it up. It's this massive data-sharing play, part of the Data Cloud platform. Just as of last year we had a little over 100 vendors in our data marketplace; that number today is well over 450. It is all about democratizing and sharing data in a world that is no longer held back by FTPs and CSVs, and by the organization having to take that data and ingest it into its systems. If you're a Snowflake customer and you want to subscribe to an S&P data source, as an example, go subscribe to it; it's in your account. There was no data engineering, no physical lift of data, and that becomes the most important thing when we talk about getting broader insights. Data quality? The data has already been inspected by your vendor; it's just available in your account. It's obviously a very simplistic thing to describe; behind the scenes is what our founders created to make it very, very easy to democratize data, not only internally with private sharing, but with this notion of the marketplace for sharing across your customers. The marketplace is certainly top of mind for all of my customers. Another thing you might have heard out of the recent cloud summit is the introduction of Snowpark, and where all this data is heading: AI and ML. Along with our partners at Io-Tahoe and RPA automation, the question is, what do we do with all this data? How do we apply the algorithms to it? We'll be able to run R and Python scripts and Java libraries directly inside Snowflake, which allows you to accelerate even faster, where people traditionally found us, when we started off eight years ago, just a data warehousing platform.
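Glenn's Snowpark point, pushing code to the data rather than pulling data out, can be sketched with the Snowpark Python API. Note that at the time of this interview Snowpark was newly announced; this sketch assumes the later snowflake-snowpark-python package, and the connection details and table names are illustrative.

```python
from snowflake.snowpark import Session
from snowflake.snowpark.functions import col, sum as sum_

session = Session.builder.configs({
    "account": "my_account", "user": "my_user", "password": "...",
    "warehouse": "ANALYTICS_WH", "database": "ANALYTICS", "schema": "PUBLIC",
}).create()

# This dataframe expression compiles to SQL and executes inside Snowflake;
# no rows leave the platform until collect() is called.
revenue = (
    session.table("orders")
    .filter(col("amount") > 0)
    .group_by("region")
    .agg(sum_("amount").alias("revenue"))
)
for row in revenue.collect():
    print(row["REGION"], row["REVENUE"])
session.close()
```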
>>Yeah, I think we're on the cusp of a new way of thinking about data. Obviously simplicity is a starting point, but data by its very nature is decentralized. You talk about democratizing data; I like this idea of a global mesh. It's a very powerful concept, and it's early days, but a key part of this is automation and trust. Yusef, you've worked with Snowflake and you're bringing ActiveDQ to the market. What are customers telling you so far?

>>Well, David, the feedback so far has been great, which is brilliant. Firstly, there's a point about speed and acceleration, the speed to insight, really. Where you have inherent data quality issues, whether that's with data that was on premise and is being brought into Snowflake, or data on Snowflake itself, we're able to show the customer results and help them understand their data quality better within day one, which is a fantastic acceleration. Related to that, there's the cost and effort to get that insight: it's a massive productivity gain versus what you see where customers have been struggling to remediate legacy data and legacy decisions made over the past couple of decades, so the cost and effort is much lower than it would otherwise have been. Thirdly, there's confidence and trust: CDOs and CIOs get demonstrable results showing they've improved data quality across a whole range of use cases, for business users in marketing and customer services, for commercial teams, for financial teams. So there's a very quick growth in confidence and credibility as the projects get moving. And finally, really all the use cases for Snowflake depend on data quality, whether it's data science or the kind of Snowpark applications Glenn has talked about; all those use cases work better when we accelerate the ROI for our joint customers by very quickly pushing out these data quality insights. And I think one of the things Snowflake has recognized is that for CIOs to really adopt it enterprise-wide, as well as the great technology Snowflake offers, it's about cleaning up that legacy data estate, freeing up the budget for CIOs to spend on the new, modern data estate that lets them mobilize their data with Snowflake.

>>So you're seeing this sort of progression: we're simplifying the analytics from a tech perspective; you bring in federated governance, which brings more trust; then you bring in the automation of the data quality piece, which is fundamental; and now you can really start to, as you guys are saying, democratize, scale, and share data. Very powerful. Guys, thanks so much for coming on the program. Really appreciate your time.

>>Thank you. I appreciate it as well.

Published Date : Apr 29 2021


Yusef Khan, Io Tahoe | Enterprise Data Automation


 

>>From around the globe, it's theCUBE, with digital coverage of Enterprise Data Automation, an event series brought to you by Io-Tahoe.

>>Everybody, we're back. We're talking about enterprise data automation; the hashtag is #DataAutomated, and we're going to really dig into data migrations. Data migrations are risky, they're time consuming, and they're expensive. Yusef Khan is here; he's the head of partnerships and alliances at Io-Tahoe, coming again from London. Hey, good to see you, Yusef. Thanks very much.

>>Thank you.

>>So your role is interesting. We're talking about data migrations, and you're head of partnerships. What is your role specifically, and how is it relevant to what we're going to talk about today?

>>I work with various businesses, such as cloud companies, systems integrators, and companies that sell operating systems and middleware, all of whom are often quite well embedded within a company's IT infrastructure and have existing relationships. Because what we do fundamentally makes migrating to the cloud and data migration easier, a lot of businesses are interested in partnering with us, and we're interested in partnering with them.

>>Let's set up the problem a little bit, and then I want to get into some of the data. I said that migrations are risky, time consuming, and expensive. They're oftentimes a blocker for organizations to really get value out of data. Why is that?

>>I think all migrations have to start with knowing the facts about your data. You can try to do this manually, but when you have an organization that may have been going for decades or longer, it will probably have a pretty large legacy data estate: everything from on-premise mainframes, to stuff that may already be in the cloud, to hundreds if not thousands of applications and potentially hundreds of different data stores. Now, their understanding of what they have is often quite limited, because you can try to draw manual maps, but they're outdated very quickly; every time the data changes, the manual map is out of date. And people leave organizations over time, so the tribal knowledge that gets built up is limited as well. You can try to tackle that manually: you might need a DBA, a data analyst, or a business analyst to go in and explore the data for you. But doing that manually is very, very time consuming; it can take teams of people months and months. Or you can use automation, as Webster Bank did with Io-Tahoe, and they managed to do this with a relatively small team, in a timeframe of days.
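The "knowing the facts about your data" step Yusef describes is, at its simplest, an automated metadata sweep across every reachable source. Here is a toy sketch using SQLAlchemy; the connection URLs and source names are invented, and a real discovery run would also profile values, not just structure.

```python
from sqlalchemy import create_engine, inspect

# Hypothetical source systems; each needs the matching driver installed.
SOURCES = {
    "crm": "postgresql://user:pw@crm-db:5432/crm",
    "billing": "oracle+oracledb://user:pw@billing-db:1521/?service_name=BILL",
}

inventory = []
for source, url in SOURCES.items():
    insp = inspect(create_engine(url))
    for schema in insp.get_schema_names():
        for table in insp.get_table_names(schema=schema):
            columns = insp.get_columns(table, schema=schema)
            inventory.append((source, schema, table, len(columns)))

# The inventory becomes the starting point for scoping the migration.
for source, schema, table, ncols in inventory:
    print(f"{source:>8} {schema}.{table} ({ncols} columns)")
```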
>>Yeah, we talked to Paula from Webster Bank; awesome discussion. So I want to dig into this migration. Let's pull up a graphic and talk about what a typical migration project looks like. What you see here is very detailed; I know it's a bit of an eye test, but let me call your attention to some of the key aspects, and then, Yusef, I want you to chime in. At the top, you see that area graph; that's operational risk for a typical migration project, and you can see the timeline and the milestones. The blue bar is the time to test, so you can see the second step, data analysis, taking 24 weeks: very time consuming. We won't dig into the fine print in the middle, though there's some real good detail there. Go down to the bottom: that's labor intensity, and you can see that high is that sort of brown, and a number of phases, data analysis, data staging, data prep, the trial, the implementation, post-implementation fixes, and the transition to BAU, which is business as usual, are all very labor intensive. So what are your takeaways from this typical migration project? What do we need to know, Yusef?

>>I think the key thing is that when you don't understand your data up front, it's very difficult to scope and set up a project, because you go to business stakeholders and decision makers and say, okay, we want to migrate these data stores, we want to put them in the cloud, most often; but actually you probably don't know how much data is there, how many applications it relates to, the relationships between the data, or the flow of the data, the direction in which data moves between different data stores and tables. So you start from a position of pretty high risk, and to alleviate that risk you end up stacking a project team with lots and lots of people to do the next phase, which is analysis. So you've set up a project with a pretty high cost: the bigger the project, the more people, the heavier the governance. Then you're in the phase of lots and lots of manual analysis, which, as we all know, means trying to relate data that's in different data stores, relating individual tables and columns: very, very time consuming and expensive. If you're hiring in resource from consultants or systems integrators externally, you might need to buy or use third-party tools as well, and, as I said earlier, the people who understood some of those systems may have left a while ago. So you're in a high-risk, high-cost situation from the off, and the same problems persist as the project develops. What we're able to do at Io-Tahoe is automate a lot of this process from the very beginning, because we can do the initial data discovery run, for example, automatically; you very quickly have an automated, validated data map, and the data flow has been generated automatically: much less time and effort, and much less cost, dramatically so.

>>Okay, so I want to bring back that first chart, and call your attention again to the area graph, the blue bars, and, down below, the labor intensity. And now let's bring up the same chart, but with automation injected: "Accelerated by Io-Tahoe." Okay, great, and we're going to talk about this, but look what happens to the operational risk: a dramatic reduction in that graph. Then look at the blue bars: data analysis went from 24 weeks down to four weeks. And then look at the labor intensity: data analysis, data staging, data prep, the trial, post-implementation fixes, and the transition to BAU were all high labor intensity, and we've now attacked that and gone to low labor intensity. Explain how that magic happened.
>>Take the example of a data catalog. Every large enterprise wants some kind of repository where they put all their understanding about their data, a catalog of the estate, if you like. Imagine trying to do that manually. You need to go into every individual data store; you need a DBA or business analyst for each data store; they need to extract the data, table by table, individually, and cross-reference that with other data stores, schemas, and tables. You'd probably end up with the mother of all Excel spreadsheets, and it would be a very, very difficult exercise to do. In fact, one of our reflections as we automate lots of these things is that automation doesn't just accelerate the work; in some cases it makes it possible at all for enterprise customers with legacy systems. Take banks, for example: they quite often end up staying on mainframe systems they've had in place for decades, not migrating away from them, because they're not able to actually do the work of understanding the data, de-duplicating the data, deleting data that isn't relevant, and then confidently going forward to migrate. So they stay where they are, with all the attendant problems of systems that are out of support. To go back to the data catalog example: whatever you discover in data discovery has to persist in a tool like a data catalog, and so we automate data catalogs; we can populate others, but we have our own. The only alternative to this kind of automation is to build out a very large project team of business analysts, DBAs, project managers, and process analysts, together with data stewards, to make sure the process of gathering data is correct, to put it in the repository, to validate it, et cetera. We've gone into organizations and seen them ramp up teams of 20 or 30 people, at costs of two, three, four million pounds a year, on a timeframe of 15 to 20 years, just to try to get a data catalog done. That's something we can typically do in a timeframe of months, if not weeks, and the difference is automation. If you do what I've just described in the manual way, you make migrations to the cloud prohibitively expensive: whatever saving you might make from shutting down your legacy data stores will get eaten up by the cost of doing it, unless you take the more automated approach.

>>Okay, so the automated approach reduces risk because you're going to stay on the project plan; it's all those out-of-scope surprises that come with manual processes that kill you in the rework. And that data catalog: people are afraid that their family jewels, their data, won't make it through to the other side. So that's something you're addressing, and you're also not boiling the ocean: you're taking the pieces that are critical, and the stuff you don't need, you don't have to pay to process.

>>That's a very good point. One of the other things we do, and we have specific features for it, is automatically analyze data for duplication at a row or record level, and for redundancy at a column level. So, as you say, before you go into a migration you can understand: actually, this stuff is replicated; we don't need it. Quite often, if you put data in the cloud, you're paying for storage and for compute time, and any duplicated data in there is pure cost you should take out before you migrate. Again, if you try to work out what's duplicated manually across tens or hundreds of data stores, it takes months if not years; use machine learning to do it automatically and it's much, much quicker.
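The duplication and redundancy checks Yusef mentions can be approximated in a few lines. This is a deliberately naive sketch in pandas; real products use machine learning and fuzzier matching, and the file and column names here are invented.

```python
import pandas as pd

def duplicate_rows(df: pd.DataFrame) -> pd.DataFrame:
    """Exact record-level duplicates: every column identical."""
    return df[df.duplicated(keep=False)]

def column_overlap(a: pd.Series, b: pd.Series) -> float:
    """Share of distinct values in column a that also appear in column b."""
    values_a, values_b = set(a.dropna()), set(b.dropna())
    return len(values_a & values_b) / len(values_a) if values_a else 0.0

crm = pd.read_csv("crm_customers.csv")
billing = pd.read_csv("billing_customers.csv")

print(f"{len(duplicate_rows(crm))} exactly duplicated CRM rows")

# Column-level redundancy across two extracts: candidates for consolidation.
for col_a in crm.columns:
    for col_b in billing.columns:
        overlap = column_overlap(crm[col_a], billing[col_b])
        if overlap > 0.9:
            print(f"crm.{col_a} ~ billing.{col_b} ({overlap:.0%} overlap)")
```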
>>There's one other thing I'd say about the costs and benefits of Io-Tahoe. Every organization we work with has a lot of existing sunk cost in its IT: ERP systems like Oracle, or data lakes, which they've spent good time and money investing in. What we do, by enabling them to transition everything to their strategic future repositories, is accelerate the value of those investments and the time to value. So we're trying to help people get value out of their existing investments in the data estate, and close down the things they don't need, to enable them to move to a brighter future.

>>And I think as well, you know, once you're able to go live, and this is a journey, we know that, you're infusing a data mindset, a data-oriented culture. I know it's somewhat of a buzzword, but when you see it in organizations, you know it's real. What happens is you dramatically reduce the end-to-end cycle time of going from data to actual insights. Data's plentiful, but insights aren't, and that is what's going to drive competitive advantage over the next decade and beyond.

>>Yeah, definitely, and you can only really do that if you get your data estate cleaned up in the first place. I've worked with and managed teams of data scientists, data engineers, and business analysts, people who are pushing out dashboards and trying to build machine-learning applications, and the biggest frustration for lots of them, the thing they spend far too much time on, is trying to work out what the right data is and cleaning data, which really isn't what you want highly paid data scientists doing with their time. But if you sort out your data estate first, get rid of duplication, and perhaps migrate to a cloud store where things are really accessible and it's easy to build connections and use native machine-learning tools, you're well on the way up the data maturity curve, and you can start to use some of those more advanced applications.

>>What are some of the prerequisites, maybe the top two or three, that I need to understand as a customer to really be successful here? Is it skill sets? Is it mindset, leadership buy-in? What do I absolutely need to have to make this successful?

>>Well, leadership is obviously key, to set the vision for people. One of the great things about Io-Tahoe, though, is that you can use your existing staff to do this work. If you use an automation platform, there's no need to hire expensive people; it's a no-code solution that works out of the box. You just connect to source, and your existing staff can use it; it's very intuitive, with an easy-to-use interface. There's no need to invest vast amounts with large consultants who may well charge the earth. And you have a bit of an advantage if you've got existing staff who are close to the data, subject matter experts or users, because they can very easily learn how to use the tool, then go in and write their own data quality rules, and really make a contribution from day one. When we go into organizations, one of the great things about the whole experience is that we can get tangible results back within the day, usually within an hour or two, and be able to say: okay, we've started to map relationships; here's the data map of the data we've analyzed; here are insights into where the sensitive data is, because it's automated, because it's running algorithms over the data. That's more than people really expect.
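Yusef's point about business users writing their own data quality rules suggests a simple declarative shape: rules are named predicates, and the engine just evaluates and reports. A hedged sketch follows; the rule set and column names are invented, and a no-code tool would express the same idea through a UI rather than Python.

```python
import pandas as pd

# Each rule is a human-readable name plus a predicate over the frame.
RULES = [
    ("customer_id is never null", lambda df: df["customer_id"].notna()),
    ("balance is non-negative",   lambda df: df["balance"] >= 0),
    ("state is a known code",     lambda df: df["state"].isin(["CT", "NY", "MA"])),
]

def run_rules(df: pd.DataFrame) -> None:
    for name, rule in RULES:
        failures = int((~rule(df)).sum())
        status = "OK" if failures == 0 else "FAIL"
        print(f"[{status:4}] {name}: {failures} failing rows")

run_rules(pd.read_csv("accounts.csv"))
```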
>>And you know this because you're dealing with the ecosystem. We're entering a new era of data, and many organizations, to your point, just don't have the resources to do what Google, Amazon, Facebook, and Microsoft did over the past decade to become data-dominant, trillion-dollar-market-cap companies. Incumbents need to rely on technology companies to bring that automation, that machine intelligence, to them so they can apply it. They don't want to be AI inventors; they want to apply it to their businesses. That's what was so difficult in the early days of so-called big data: there was just too much complexity out there. Now companies like Io-Tahoe are bringing the tooling and platforms that allow companies to really become data driven. Your final thoughts, please, Yusef.

>>That's a great point, Dave. In a way, it brings us back to where we began, in terms of partnerships and alliances. I completely agree. We're at a really exciting point where we can take platforms like Io-Tahoe, go into enterprises, and help them really leverage the value of these types of machine-learning algorithms. We work with all the major cloud providers, AWS, Microsoft Azure, Google Cloud Platform, IBM and Red Hat, and others, and for us the key thing is that we want to be the best in the world at enterprise data automation. We don't aspire to be a cloud provider or even a workflow provider; what we want to do is really help customers with their data, with automated data functionality, in partnership with those other businesses, so we can leverage the great work they've done in the cloud, on workflows, on virtual assistants, and in other areas, and help customers leverage those investments as well. At heart, we're targeted at just being the best enterprise data automation business in the world.

>>Massive opportunities, not only for technology companies, but for those organizations that can apply technology for business advantage. Yusef Khan, thanks so much for coming on theCUBE.

>>I appreciate it.

>>All right, and thank you for watching, everybody. We'll be right back after this short break.

Published Date : Jun 23 2020


Enterprise Data Automation | Crowdchat


 

>>From around the globe, it's theCUBE, with digital coverage of Enterprise Data Automation, an event series brought to you by Io-Tahoe.

>>Welcome, everybody, to Enterprise Data Automation, a co-created digital program on theCUBE with support from Io-Tahoe. My name is Dave Vellante, and today we're using the hashtag #DataAutomated. You know, organizations really struggle to get more value out of their data; time to the data-driven insights that drive cost savings or new revenue opportunities simply takes too long. So today we're going to talk about how organizations can streamline their data operations through automation, machine intelligence, and really simplifying data migrations to the cloud. We'll be talking to technologists, visionaries, hands-on practitioners, and experts who are not just talking about streamlining their data pipelines; they're actually doing it. So keep it right there. We'll be back shortly with Ajay Vohora, the CEO of Io-Tahoe, to kick off the program. You're watching theCUBE, the leader in digital global coverage. We're right back after this short break.

>>Innovation, impact, influence. Welcome to theCUBE. Disruptors, developers, and practitioners learn from the voices of leaders who share their personal insights from the hottest digital events around the globe. Enjoy the best this community has to offer on theCUBE, your global leader in high-tech digital coverage.

>>From around the globe, it's theCUBE, with digital coverage of enterprise data automation, an event series brought to you by Io-Tahoe.

>>Okay, we're back. Welcome back to Data Automated. Ajay Vohora is CEO of Io-Tahoe. Ajay, good to see you. How are things in London?

>>Thanks, Dave, doing well. The customers that I speak to day in, day out, that we partner with, are busy adapting their businesses to serve their customers. It's very much a game of ensuring that we can serve our customers to help their customers, and the adaptation that's happening here is about being more agile; you've got to be more flexible. There's a lot of pressure on data, a lot of demand on data, to deliver more value to the business and to those customers.

>>As I said, we've been talking about DataOps a lot, the idea being DevOps applied to the data pipeline. But talk about enterprise data automation. What is it to you, and how is it different from DataOps?

>>DevOps, you know, has been great for breaking down the silos between different roles and functions and bringing people together to collaborate, and we definitely see those tools, those methodologies, those processes, that kind of thinking, lending itself to data; with data, it's exciting. What we look to do is build on top of that with data automation. It's the nuts and bolts of the algorithms, the models behind the machine learning, the functions; that's where we invest our R&D, bringing that in to build on top of the methods and ways of thinking that break down silos, and injecting that automation into the business processes that drive a business to serve its customers. It's a layer beyond DevOps and DataOps; the way I think about it, it's the automation behind a new dimension. We've come a long way in the last few years. We started out automating some of those simple-to-codify, high-impact data-related tasks across the data estate in a cost-effective way, tasks like classifying data, and a lot of our original patents and the value we built up is very much around that.
>>Love to get into the tech a little bit, in terms of how it works. I think we have a graphic here that gets into that, so, guys, if you bring that up.

>>Sure. Right there in the middle, at the heart of what we do, is the intellectual property we've built up over time, which takes from heterogeneous data sources, your Oracle relational database, your mainframe, and increasingly the APIs and devices that produce data, and creates the ability to automatically discover that data and classify it, and, after it's classified, to form relationships across those different source systems, silos, and lines of business. Once we've automated that, we can start to do some cool things: putting context and meaning around that data. So it's moving now from being data driven to, increasingly, a world where we have really smart people in our customer organizations who want to do the advanced knowledge tasks, data scientists, and quants in some of the banks we work with. The onus is then on putting everything we've done there with automation, classifying the data, understanding the relationships, the quality, the policies you can apply to it, and putting it all in context. Once a professional using data can put that data in context and search across the entire enterprise estate, they can start to do some exciting things and piece together the tapestry, that fabric, across different systems: it could be a CRM, an ERP system such as SAP, and some of the newer cloud databases we work with; Snowflake is a great one. If I look back maybe five years ago, we had a prevalence of data lake technologies at the cutting edge; those are converging into the cloud platforms we work with, Google and AWS. And very much, as you said, those manual attempts to grasp such a complex challenge at scale quickly run out of steam, because once you've got your fingers on the details of what's in your data estate, it's changed: you've onboarded a new customer, you've signed up a new partner, a customer has adopted a new product you've just launched, and that slew of data keeps coming. So to keep pace with it, the only answer really is some form of automation.
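Ajay's "forming relationships across silos" can be reduced to a naive sketch: propose join-key candidates wherever one table's values are largely contained in another table's unique column. This is an illustration, not Io-Tahoe's algorithm; real discovery adds machine learning, profiling, and context, and the files here are invented.

```python
from itertools import product
import pandas as pd

def key_candidates(tables: dict, threshold: float = 0.95):
    """Yield (foreign_key, primary_key) candidates across tables."""
    for (name_a, tab_a), (name_b, tab_b) in product(tables.items(), repeat=2):
        if name_a == name_b:
            continue
        for col_a, col_b in product(tab_a.columns, tab_b.columns):
            values_a = set(tab_a[col_a].dropna())
            values_b = set(tab_b[col_b].dropna())
            # col_a looks like a foreign key into col_b if nearly all of its
            # values exist in col_b and col_b's values are unique.
            if values_a and tab_b[col_b].is_unique \
                    and len(values_a & values_b) / len(values_a) >= threshold:
                yield f"{name_a}.{col_a}", f"{name_b}.{col_b}"

tables = {
    "orders": pd.read_csv("orders.csv"),
    "customers": pd.read_csv("customers.csv"),
}
for fk, pk in key_candidates(tables):
    print(f"{fk} -> {pk}")
```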
>>You're working with AWS, you're working with Google, you've got Red Hat and IBM as partners. What is attracting those folks to your ecosystem, and give us your thoughts on the importance of ecosystem.

>>That's fundamental. When I came in as CEO of Io-Tahoe, one of the trends I wanted us to be part of was being open, having an open architecture, and that comes from something close to my heart: as a CIO, you've got a budget and a vision, and you've already made investments into your organization, some of them pretty long-term bets going out five or ten years, sometimes with a CRM system, training up your people, getting everybody working together around a common business platform. What I wanted to ensure is that we could openly interoperate, using the APIs that were available, to leverage the investment and cost that has already gone into managing an organization's IT, and to serve its business users too. So part of the reason we've been able to be successful with partners like Google and AWS, and increasingly a number of technology players, Red Hat, MongoDB is another one we're doing a lot of good work with, and Snowflake, is that those investments have been made by the organizations that are our customers, and we want to make sure we're adding to that, so they're leveraging the value they've already committed to.

>>Yeah, and maybe you could give us some examples of the ROI and the business impact.

>>The ROI, David, is built upon the three things I mentioned. It's a combination of leveraging the existing investment in the existing estate, whether that's on Microsoft Azure, AWS, Google, or IBM, and putting it to work, because the customers we work with have made those choices. On top of that, it's ensuring that the automation works right down to the level of the data, at the column level or the file level; we don't just deal with metadata. It's about being very specific, at the most granular level. So once we've run our processes and the automation, classification, tagging, applying the policies an organization's different compliance and regulatory needs demand of the data, everything that happens downstream from that is ready to serve a business outcome. With Io-Tahoe, we can run those processes within hours of getting started, build that picture, visualize it, and bring it to life. The ROI that comes right off the bat is finding data that should have been deleted, data that was copied, and being able to give the architect, whether we're working on GCP or a migration to any other cloud such as AWS or a multi-cloud landscape, the right map.

>>Ajay, thanks so much for coming on theCUBE and sharing your insights and your experiences. Great to have you.

>>Thank you, David. Look forward to speaking again.

>>Now we want to bring in the customer perspective. We have a great conversation with Paula D'Amico, senior vice president of data architecture at Webster Bank. So keep it right there.

>>Io-Tahoe: data automated. Improve efficiency, drive down costs, and make your enterprise data work for you. We're on a mission to enable our customers to automate the management of data to realize maximum strategic and operational benefits. We envisage a world where data users consume accurate, up-to-date, unified data distilled from many silos to deliver transformational outcomes. Activate your data and avoid manual processing: accelerate data projects by enabling non-IT resources and data experts to consolidate, categorize, and master data. Automate your data operations: power digital transformations by automating a significant portion of data management through human-guided machine learning. Get value from the start: increase the velocity of business outcomes with complete, accurate data curated automatically for data visualization tools and analytic insights. Improve the security and quality of your data: data automation improves security by reducing the number of individuals who have access to sensitive data, and it can improve quality; many companies report double-digit error reduction in data entry and other repetitive tasks.
Trust the way data works for you: data automation by Io-Tahoe learns as it works, and can augment business user behavior. It learns from exception handling, and scales up or down as needed to prevent system or application overloads or crashes. It also allows innate knowledge to be socialized rather than individualized: no longer will your company struggle when the employee who knows how a report is done retires or takes another job; the work continues without the need for detailed information transfer. Continue supporting the digital shift: perhaps most importantly, data automation allows companies to begin making moves toward a broader, more aspirational transformation, on a small scale that is easy to implement and manage and delivers quick wins. Digital is the buzzword of the day, but many companies recognize that it is a complex strategy that requires time and investment. Once you get started with data automation, the digital transformation is initiated, and leaders and employees alike become more eager to invest time and effort in a broader digital transformation agenda.

>>Everybody, we're back, and this is Dave Vellante. We're covering the whole notion of automating data in the enterprise, and I'm really excited to have Paula D'Amico here. She's senior vice president of enterprise data architecture at Webster Bank. Good to see you. Thanks for coming on.

>>Nice to see you too, yes.

>>So let's start with Webster Bank. You guys are kind of a regional bank, I think New York, New England, I believe headquartered out of Connecticut, but tell us a little bit about the bank.

>>Yeah, Webster Bank is regional: Boston, and again in New York, very focused on Westchester and Fairfield County. It's a really highly rated regional bank for this area; it holds quite a few awards for being supportive of the community, and it's really moving forward technology-wise. Currently we have a small group that is working toward moving into a more futuristic, more data-driven data warehouse; that's our first item. The other item is to drive new revenue by anticipating what customers do when they go to the bank, or when they log in, so we can give them the best offer. The only way to do that is to have timely, accurate, complete data on the customer, and on what's really of great value to them, so you have something to offer.

>>At the top level, what are some of the key business drivers catalyzing your desire for change?

>>The ability to give the customer what they need at the time they need it. What I mean by that is that we have customer interactions in multiple ways, right? I want the customer to be able to walk into a bank, or go online, and see the same format, have the same look and feel, and also be offered the next best offer for them.

>>Part of it is really the cycle time, the end-to-end cycle time, that you're compressing. And then there are, if I understand it, residual benefits that are pretty substantial from a revenue opportunity.

>>Exactly. It's to drive new customers to new opportunities, to enhance risk management, and to optimize the banking process, and then, obviously, to create new business. The only way we're going to be able to do that is if we can look at the data right when the customer walks in the door, or right when they open up their app.

>>Do you see the potential to increase the data sources, and hence the quality of the data, or is that sort of premature?

>>Oh, no, exactly right. Right now we ingest a lot of flat files from the mainframe-type running systems we've had for quite a few years. But now that we're moving to the cloud, off-prem from on-prem, moving into, say, an S3 bucket where the data can land, we can process that data and get it out faster, using real-time tools to move it into a place where something like Snowflake can utilize it, or we can give it out to our marketplace. The data scientists are out in the lines of business right now, which is great, because I think that's where data science belongs. What we're working toward now is giving them more self-service, giving them the ability to access the data in a more robust way, and it's a single source of truth, so they're not pulling the data down into their own Tableau dashboards and then pushing it back out. I have data engineers, data architects, database administrators, and traditional data warehousing people. And some customers I have, business customers in the lines of business, just want to subscribe to a report; they don't want to do any data science work, and we still have to provide that. So we still want to provide some kind of report regimen where they wake up in the morning, open their email, and there's the report they need, which is great, and it works out really well. And one of the things, and this is why we purchased Io-Tahoe: I wanted the ability to give the lines of business the ability to search within the data, and to review the data flows and data redundancy and things like that, to help me clean up the data, and also to give it to the data analysts. They'd be asked for a certain report, and it used to be: okay, four weeks; we're going to go look at the data, and then we'll come back and tell you what we can do. Now, with Io-Tahoe, they're able to look at the data, and in one or two days they can go back and say: yes, we have the data, this is where it is, and these are the data flows we've found. It also shows what I call the birth of a column: where the column was created, where it went live as a teenager, and then where it went to die, in the archive.
>>In researching Io-Tahoe, it seems like one of the strengths of the platform is the ability to visualize data, the data structure, and actually dig into it, but also to see it, and that speeds things up and gives everybody additional confidence. And then the other piece is essentially infusing AI, machine intelligence, into the data pipeline; that's really how you're attacking automation, right?

>>Exactly. So, let's say I have seven lines of business that are asking me questions, and one of the questions they'll ask is: we want to know if this customer is okay to contact, right? And there are different avenues: you can go online and say do not contact me; you can go to the bank and say, I don't want email, but I'll take texts and I want phone calls; all that information. So seven different lines of business ask me that question in different ways: one says okay-to-contact, another phrases it differently, and each project, before I got there, used to be siloed. So one analyst would spend 100 hours to do that analytical work, and then another analyst would do another 100 hours on the other project. Well, now I can do it all at once: I can run those types of searches and say, yes, we already have that documentation, here it is, and this is where you can find where the customer has said, you know, I don't want to get emails from you, or I've subscribed to get emails from you. I'm using Io-Tahoe's automation right now to bring in the data and start analyzing the data flows, to make sure I'm not missing anything and that I'm not bringing over redundant data. The data warehouse I'm working off is on-prem; it's an Oracle database, and it's 15 years old, so it has extra data in it, things we don't need anymore, and Io-Tahoe is helping me shake out the extra data that does not need to be moved into my S3. So it's saving me money as I move off-prem.
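Paula's landing-zone flow, flat-file extracts landing in S3 with Snowflake ingesting from there, might look like the following sketch. It assumes boto3 and snowflake-connector-python, plus an external stage already defined over the bucket; every name here is invented.

```python
import boto3
import snowflake.connector

# 1. Land the nightly flat-file extract in the S3 landing zone.
s3 = boto3.client("s3")
s3.upload_file("extracts/accounts_20200623.csv",
               "bank-landing-zone",
               "accounts/accounts_20200623.csv")

# 2. Have Snowflake ingest from the external stage that points at the bucket.
conn = snowflake.connector.connect(account="...", user="...", password="...",
                                   warehouse="INGEST_WH", database="STAGING",
                                   schema="PUBLIC")
conn.cursor().execute("""
    COPY INTO staging.public.accounts
    FROM @s3_landing_stage/accounts/
    FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
""")
conn.close()
```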
>>What's your vision for your data-driven organization?

>>I want the bankers to be able to walk around with an iPad in their hands and access data for a customer really fast, and be able to give them the best deal they can get. I want Webster to be right there on top, able to add new customers and to serve our existing customers, who may have had bank accounts there since they were 12 years old. I want them to have the best experience with our bankers.

>>That's really what I want as a banking customer: I want my bank to know who I am, anticipate my needs, and create a great experience for me, and then let me go on with my life. So that's a great story. Love your experience, your background, and your knowledge. Can't thank you enough for coming on theCUBE.

>>No, thank you very much, and you guys have a great day.

>>Next we'll talk with Lester Waters, the CTO of Io-Tahoe. Lester takes us through the key considerations of moving to the cloud.

>>The entire platform. Automated data discovery: data discovery is the first step to knowing your data; auto-discover data across any application on any infrastructure, and identify all unknown data relationships across the entire siloed data landscape. Smart data catalog: know how everything is connected; understand everything in context; regain ownership and trust in your data, and maintain a single source of truth across cloud platforms, SaaS applications, reference data, and legacy systems; empower business users to quickly discover and understand the data that matters to them with a smart data catalog, continuously updated, ensuring business teams always have access to the most trusted data available. Automated data mapping and linking: automate the identification of unknown relationships within and across data silos throughout the organization; build your business glossary automatically, using in-house common business terms, vocabulary, and definitions; discovered relationships appear as connections or dependencies between data entities such as customer, account, address, and invoice, and these data entities have many discovered properties at a granular level. Data signals dashboards: get up-to-date feeds on the health of your data for faster, improved data management; see trends, view history, compare versions, and get accurate and timely visual insights from across the organization. Automated data flows: automatically capture every data flow to locate all the dependencies across systems; visualize how they work together collectively, and know who within your organization has access to data; understand the source and destination for all your business data, with comprehensive data lineage constructed automatically during the data discovery phase, and continuously load results into the smart data catalog. ActiveDQ, automated data quality assessments: powered by ActiveDQ, ensure data is fit for consumption and meets the needs of enterprise data users, and keep information about the current data quality state readily available for faster, improved decision making. Data policy governor: automate data governance end to end over the entire data lifecycle, with automation, instant transparency, and control; automate data policy assessments with glossaries, metadata, and policies for sensitive data discovery that automatically tag, link, and annotate with metadata to provide enterprise-wide search for all lines of business. Self-service knowledge graph: digitize and search your enterprise knowledge; turn multiple siloed data sources into machine-understandable knowledge on a single data canvas, and search and explore data content across systems, including ERP, CRM, billing systems, and social media, to fuel data pipelines.
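The knowledge-graph and lineage ideas in that feature tour reduce to a graph of systems, tables, and reports that can be traversed in both directions. Here is a miniature sketch using the networkx package; the nodes and edges are invented examples, not Io-Tahoe's model.

```python
import networkx as nx

g = nx.DiGraph()
g.add_edge("crm.customers", "warehouse.dim_customer", kind="feeds")
g.add_edge("billing.invoices", "warehouse.fact_invoice", kind="feeds")
g.add_edge("warehouse.dim_customer", "report.churn_dashboard", kind="feeds")
g.add_edge("warehouse.fact_invoice", "report.churn_dashboard", kind="feeds")

# Impact analysis: everything downstream of a source we plan to retire.
print(nx.descendants(g, "crm.customers"))

# Lineage: every upstream dependency of a business report.
print(nx.ancestors(g, "report.churn_dashboard"))
```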
You figure out what to get rid of, or actually get rid of it. What's next? >>Yes, that would be the next step: figure out what you need and what you don't need. Oftentimes I've found there are obsolete columns of data in your databases that you just don't need, or tables that have been superseded by other tables in your database, so you've got to understand what's being used and what's not. From that, you can decide: I'm going to leave this stuff behind, I'm going to archive this stuff because I might need it for data retention, or I'm just going to delete it because I don't need it at all. >>We're plowing through your steps here. What's next on the journey? >>The next one, in a nutshell: preserve your data format. Don't boil the ocean here, to use a cliché. You want to do a certain degree of lift and shift, because you've got application dependencies on that data and on the data format: the tables, the columns, and the way they're named. So to some degree you are going to be doing a lift and shift, but it's an intelligent lift and shift. >>The data lives in silos, so how do you deal with that problem? Is that part of the journey? >>That's a great point, because you're right: the data silos happen because this business unit is chartered with this task and another business unit has that task, and that's how you get those instantiations of the same data occurring in multiple places. So as part of your cloud migration, you really want to plan for where there's an opportunity to consolidate your data, because that means there will be less to manage and less data to secure, and it will have a smaller footprint, which means reduced costs. >>Maybe you could address data quality. Where does that fit in on the journey? >>That's a very important point. First of all, you don't want to bring your legacy issues with you. As the point I made earlier: if you've got data quality issues, this is a good time to find, identify, and remediate them. But that can be a laborious task, and doing it manually will take a lot of work, so the opportunity to use tools to automate that process will really help you find those outliers. >>What's next? I think I've counted six. What's the lucky seven? >>Lucky seven: involve your business users. Really, when you think about it, your data is in silos, and part of this migration to the cloud is an opportunity to break down those silos. The silos that naturally occur mirror the business, and you've got to break the cultural barriers that sometimes exist between business units. For example, I always advise that there's an opportunity here to consolidate your sensitive data, your PII, personally identifiable information. If three different business units each have their own source of truth for it, there's an opportunity to consolidate that into one. >>Well, great advice, Lester. Thanks so much. I mean, it's clear that the capex investments in data centers are generally not a good investment for most companies. Really appreciate it: Lester Waters, CTO of Io-Tahoe. Let's watch this short video and we'll come right back. >>Use cases: data migration.
Accelerate digitization of the business by providing automated data migration workflows that save time in achieving project milestones; eradicate operational risk and minimize labor-intensive manual processes that demand costly overhead. Data quality: drain the data swamp and re-establish trust in the data to enable data science and data analytics. Data governance: ensure that business and technology understand critical data elements and have control over the enterprise data landscape. Data analytics enablement: data discovery to enable data scientists and data analytics teams to identify the right data sets through self-service for business demands or analytical reporting, from the advanced to the complex. Regulatory compliance: government-mandated data privacy requirements such as GDPR, CCPA, ePR, and HIPAA. Data lake management: identify lake contents, clean up, and manage ongoing activity. Data mapping and knowledge graph: create BKG models on business enterprise data with automated mapping to a specific ontology, enabling semantic search across all sources in the data estate. DataOps: scale as a foundation to automate data management processes. >>Are you interested in test-driving the Io-Tahoe platform? Kickstart the benefits of data automation for your business through the Io-Tahoe Labs program, a flexible, scalable sandbox environment on the cloud of your choice, with setup, service, and support provided by Io-Tahoe. Click on the link and connect with a data engineer to learn more and see Io-Tahoe in action. >>Everybody, we're back. We're talking about enterprise data automation; the hashtag is #DataAutomated, and we're going to really dig into data migrations. Data migrations are risky, they're time-consuming, and they're expensive. Yusef Khan is here. He's the head of partnerships and alliances at Io-Tahoe, coming again from London. Hey, good to see you, Yusef. Thanks very much. >>Thank you. >>So let's set up the problem a little bit, and then I want to get into some of the data. I said that migrations are risky, time-consuming, and expensive. They're oftentimes a blocker for organizations to really get value out of data. Why is that? >>I think all migrations have to start with knowing the facts about your data. You can try to do this manually, but when you have an organization that may have been going for decades or longer, it will probably have a pretty large legacy data estate: everything from on-premise mainframes to stuff that may already be in the cloud, probably hundreds if not thousands of applications, and potentially hundreds of different data stores. >>So I want to dig into this migration, and let's pull up a graphic that shows what a typical migration project looks like. What you see here is very detailed; I know it's a bit of an eye test, but let me call your attention to some of the key aspects, and then, Yusef, I want you to chime in. At the top here you see that area graph: that's operational risk for a typical migration project, and you can see the timeline and the milestones. The blue bar is the time to test, and you can see the second step, data analysis, is 24 weeks, so very time-consuming. We won't dig into the fine print in the middle, though there's some real good detail there, but go down to the bottom.
That's labor intensity along the bottom, and you can see high is that sort of brown color, across a number of phases: data analysis, data staging, data prep, the trial, the implementation, post-implementation fixes, and the transition to BAU, which is business as usual. >>The key thing is, when you don't understand your data up front, it's very difficult to scope and set up a project, because you go to business stakeholders and decision makers and you say, okay, we want to migrate these data stores, we want to put them in the cloud most often, but actually you probably don't know how much data is there. You don't necessarily know how many applications it relates to or the relationships between the data. You don't know the flow of the data, the direction in which the data is going between different data stores and tables. So you start from a position of pretty high risk, and to address that risk you stack your project team with lots and lots of people to do the next phase, which is analysis. So you set up a project with a pretty high cost: a big project, more people, heavier governance. Then, in the phase where they're trying to do lots and lots of manual analysis, manual processes, as we all know, on top of trying to relate data that's in different data stores, relating individual tables and columns, it's very time-consuming and expensive. If you're hiring in resource from consultants or systems integrators externally, you might need to buy or use third-party tools. As I said earlier, the people who understood some of those systems may have left a while ago, so you're in an even higher-risk, higher-cost situation from the off, and the same issues persist through the project. What we're doing at Io-Tahoe is automating a lot of this process from the very beginning, because we can do the initial data discovery run, for example, automatically. You very quickly have an automatically validated data map, and the data flow has been generated automatically: much less time and effort, and much lower cost. >>Yeah. And now let's bring up the same chart, but with automation injected, so you now see the process accelerated by Io-Tahoe. Okay, great, and we're going to talk about this, but look what happens to the operational risk: a dramatic reduction in that graph. Then look at those blue bars: data analysis went from 24 weeks down to four weeks. And then look at the labor intensity: data analysis, data staging, data prep, trialling, post-implementation fixes, and the transition to BAU all went from high labor intensity to low labor intensity. Explain how that magic happened. >>Take the example of a data catalog. Every large enterprise wants to have some kind of repository where they put all their understanding about their data, an enterprise data catalog if you like. Imagine trying to do that manually: you need to go into every individual data store, you need a DBA and a business analyst for each data store, they need to do an extract of the data and audit the tables individually, and they need to cross-reference that with other data stores, schemas, and tables, probably with the mother of all Excel spreadsheets. It would be a very, very difficult exercise to do.
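To make that cross-referencing concrete, here is a minimal, hypothetical sketch of the kind of metadata harvesting a discovery tool automates: one pass over several stores, one catalog, and crude candidate links between silos. This is not Io-Tahoe's implementation; the connection URLs are placeholders, the name-matching heuristic is an illustrative assumption, and real discovery engines profile the data values themselves with machine learning rather than relying on column names.

```python
# Hypothetical sketch: harvest schema metadata from several data stores and
# flag columns that look related across silos. Illustrative only; a real
# discovery engine profiles the data values themselves, not just the names.
from collections import defaultdict
from sqlalchemy import create_engine, inspect

# Assumed connection strings; replace with your own stores.
STORES = {
    "warehouse": "oracle+cx_oracle://user:pass@wh-host/ORCL",
    "crm": "postgresql://user:pass@crm-host/crm",
}

def harvest_catalog(stores):
    """Return {store: {table: [column names]}}, the raw catalog entries."""
    catalog = {}
    for name, url in stores.items():
        insp = inspect(create_engine(url))
        catalog[name] = {
            table: [col["name"] for col in insp.get_columns(table)]
            for table in insp.get_table_names()
        }
    return catalog

def candidate_links(catalog):
    """Group columns by normalized name as crude cross-silo link candidates."""
    seen = defaultdict(list)
    for store, tables in catalog.items():
        for table, columns in tables.items():
            for col in columns:
                key = col.lower().replace("_", "")
                seen[key].append(f"{store}.{table}.{col}")
    return {key: cols for key, cols in seen.items() if len(cols) > 1}

if __name__ == "__main__":
    catalog = harvest_catalog(STORES)
    for key, cols in candidate_links(catalog).items():
        print(key, "->", cols)
```

Even this toy version makes the scaling argument: adding a store is one more connection string, whereas the manual equivalent is another DBA, another analyst, and another tab in the spreadsheet.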
In fact, one of our reflections as we automate lots of these things is that it doesn't just accelerate the work; in some cases it makes it possible at all for enterprise customers with legacy systems. Take banks, for example. They quite often end up staying on mainframe systems that they've had in place for decades, not migrating away from them, because they're not able to actually do the work of understanding the data, de-duplicating the data, deleting data that isn't relevant, and then confidently going forward to migrate. So they stay where they are, with all the attendant problems of systems that are out of support. The biggest frustration for lots of them, and the thing that they spend far too much time doing, is trying to work out what the right data is and cleaning data, which really isn't what you want highly paid data scientists doing with their time. But if you sort out your data in the first place, get rid of duplication, and migrate to a cloud store where things are really accessible, it's easy to build connections and to use native machine learning tools, and as you move up the maturity curve you can start to use some of the more advanced applications. >>Massive opportunities, not only for technology companies but for those organizations that can apply technology for business advantage. Yusef Khan, thanks so much for coming on the Cube. Much appreciated.
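The de-duplication step Yusef describes can be sketched mechanically. Below is a simplified, hypothetical illustration that fingerprints tables by hashing a normalized sample of rows so that exact redundant copies surface before anything is migrated. The connection URL and table names are placeholders, and a production tool would compare statistical profiles rather than exact hashes, which would also catch near-duplicates.

```python
# Hypothetical sketch: flag exact duplicate tables before a migration by
# hashing a normalized sample of their contents. Simplified on purpose;
# real discovery tools compare data profiles, catching near-duplicates too.
import hashlib

import pandas as pd
from sqlalchemy import create_engine

def table_fingerprint(engine, table, sample_rows=1000):
    """Hash a deterministic, order-independent sample of a table."""
    df = pd.read_sql_query(f"SELECT * FROM {table}", engine).head(sample_rows)
    df = df.astype(str)                          # make rows sortable
    df = df.reindex(sorted(df.columns), axis=1)  # column-order independent
    df = df.sort_values(by=list(df.columns)).reset_index(drop=True)
    return hashlib.sha256(df.to_csv(index=False).encode()).hexdigest()

def find_duplicate_tables(engine, tables):
    """Bucket tables by fingerprint; any bucket with >1 entry is a candidate."""
    buckets = {}
    for table in tables:
        buckets.setdefault(table_fingerprint(engine, table), []).append(table)
    return {fp: ts for fp, ts in buckets.items() if len(ts) > 1}

if __name__ == "__main__":
    # Placeholder URL for an aging on-prem warehouse like the one above.
    engine = create_engine("oracle+cx_oracle://user:pass@legacy-host/ORCL")
    dups = find_duplicate_tables(engine, ["customers", "customers_copy_2014"])
    print(dups)  # archive or delete these buckets before moving data to S3
```

Each duplicate bucket is data that never needs to leave the data center, which is exactly where the storage savings described in the Webster example come from.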

Published Date : Jun 23 2020



Partha Seetala & Radhesh Menon, Robin.io | CUBEconversations, March 2019


 

>>From our studios in the heart of Silicon Valley, Palo Alto, California, it is a Cube conversation. >>And welcome to another Cube conversation from our wonderful Palo Alto studios in beautiful Palo Alto, California. As we do with every Cube conversation, we're going to talk about an important topic with smart people who can provide some good clues and guidance as to how the industry is going to move forward, and we're going to do that today too. Specifically, we're going to talk about the enormous amount of interest in Kubernetes as a technology for making possible the whole microservices approach to application development. But one of the challenges is that Kubernetes has been specifically built to be stateless, which means that it's not necessarily aware of its underlying data. Now, that is okay for certain classes of application, but the typical enterprise does want to ensure that its data can remain stateful, that it has the level of protection required, et cetera, which creates a new need within the industry: how do we marry stateful capabilities, stateful storage capabilities, with Kubernetes? To have that conversation, we've got great guests here. Partha Seetala is a co-founder and CTO of Robin.io, and Radhesh Menon is the CMO of Robin.io. Partha, Radhesh, welcome to the Cube. >>Great to be here. >>All right, so Radhesh, we start with you. Why don't you give us a quick update on Robin.io? >>Sure. Robin.io, as you were alluding to, is addressing a super important problem that is in front of us, which is that cloud-native technologies, especially containers and Kubernetes, are becoming the default way in which enterprises are choosing to innovate. But at the same time, there's a whole swath of applications that were architected just five years ago which all need to get the same benefits of agility, portability, and efficiency from cloud-native technologies. Robin helps bridge that, and I hope to talk more about that. >>Excellent. So Partha, let's start with you and talk about this problem, this impedance mismatch between applications that require some stateful assurance about the data and Kubernetes, which tends to be stateless. How does that impact the way applications get built and deployed? >>Sure. As you mentioned, Kubernetes is a platform that originated for stateless workloads, and people have adopted it; it's the fastest-growing open-source project, we know about that. But when you look at a stateless workload, it actually depends on state from somewhere. It's basically computing something, on state that's coming either from the network or on state that's stored inside a big data lake or inside a database. Now, if you look at the problem itself, developers have gotten used to the agility benefits that Kubernetes has to offer, mostly the infrastructure-as-code kind of constructs it offers. However, the agility is not complete if you do not also bring the stateful workloads into the Kubernetes fold. As an example, think about somebody who's trying to build an entire pipeline across ingest, process, and visualize. If you're saying that, in order to put this entire stack together, this entire pipeline together, I still have to do something non-agile outside Kubernetes and then marry that with something inside Kubernetes,
that's not true agility, actually. So more and more we're seeing developers and DevOps teams basically saying, okay, I want to have the entire stack developed and deployed on an agile platform like Kubernetes. And of course that comes with a bunch of challenges that need to be addressed, and hopefully we'll talk about that today. >>Well, as you said, the state has to be maintained somewhere. State may be maintained somewhere up in the cloud, but there are going to be circumstances where, because of data locality issues, you want local control. You have latency considerations and a number of other issues that make you want to be able to locate state close to the Kubernetes cluster. Is that really what we're talking about here? >>That's one aspect of it, essentially the performance, and maybe governance, reasons why you want to colocate state and stateless. But the other reason, as I was saying, is that if you want to deploy a stack, the stack is comprised of many components, stateless as well as stateful, and you're talking about the birth of an entire application that the developer is going to push onto this platform. So it's not just about the data locality and all that; it's also about enabling the entire stack to be deployed in one shot. >>So you want a simpler, more manageable stack. So what's the solution? What do people have to do to get access to both more performant stateful applications, Kubernetes clusters that, for the record, have some degree of data locality concerns, and that dream of increasingly simple stacks? What has to happen differently? >>Sure, and there are two aspects to this. The first one, I would say, is that a platform that is going to offer this on top of Kubernetes has to guarantee the persistence needs, whether in terms of reliability or in terms of performance SLAs. It has to guarantee those, so you have to get those onto the platform first. But beyond that, there are many, many data platforms, data applications and workloads, that predate Docker and Kubernetes. If you don't bring them into the fold, you really are not solving the real business challenges that people have today. So beyond just providing a persistence layer to Kubernetes pods, you need to have a way in which you can take complex platforms such as MongoDB, Cassandra, Elastic, Oracle RAC, Cloudera, these kinds of workloads, and bring them onto a platform that is architected for microservices. Because these platforms, these workloads, were not designed for microservices. So how do you marry them onto a platform such as Kubernetes that is designed as a microservices platform? You've got to solve that, and that is exactly what Robin has done. We have taken an approach where you can take complex workloads and platforms and make them run on a microservices platform like Kubernetes, starting with the storage subsystem, which is one of our core competences. >>So I could conceivably imagine an Oracle database being rendered as a container inside a Kubernetes cluster and positioned as a service, being orchestrated by that Kubernetes instance. What... >>If I could jump in: you don't have to imagine. We have customers in production there.
One thing I want to contextualize is that our roots are in solving this hard problem of applications that haven't been designed for containers: containerizing them and being able to manage them gracefully in Kubernetes. Partha just gave the example of Oracle RAC as a service. We also have customers with, let's say, multiple petabytes of data running big-data services, covering big, large enterprises as well. Now, from that lineage, what we're also offering is for the set of customers who have already picked their Kubernetes, which might be OpenShift, might be PKS, might be GKE. For those customers we have an offering called Robin Storage, which brings powerful data management capabilities. So there are two offerings: the platform offering, which is Kubernetes plus storage plus networking plus application bundles for some of the demanding workloads we just talked about, and then Robin Storage, a new offering which can add the magic of data management and advanced data management capabilities to any Kubernetes. >>Well, let's talk about that for one second. When I think of data management capabilities, I'm thinking not just of I/O being written back and forth between some media and some application; I'm thinking in terms of data protection and security. So give us a sense of the scope of the services that are part of this solution you're talking about. >>Yeah, I'll start, and Partha can chime in as well. The first context you need to have is that all these data management capabilities exist in a world where hybrid is the norm for implementation. Nine out of 10 customers are looking at implementing on-prem along with public cloud. So in that context, any of the capabilities we're talking about, whether it's being able to take snapshots, being able to move that snapshot to be offered as a backup in the cloud, or the ability to clone and rehydrate applications, these are all capabilities that need to operate in a hybrid-cloud context. That's number one. The second thing is that, rather than just solving the storage-level problem of taking snapshots, being able to bring application and data together is a big game changer. Partha, can you add a little bit more on the application-plus-data point? >>Absolutely. If you look at the data services that Radhesh talked about, snapshots and clones and backups, those constructs have existed in the storage industry for almost three decades, so there's nothing new about that. But if you look at applying them to workloads that are running in Kubernetes, you've got to uplevel them, because a storage-level snapshot is still a volume- or LUN-level snapshot. What a developer or DevOps team needs is the ability to take an entire workload, say a MongoDB cluster, and snapshot that cluster: I want to keep different states, even if the topology of the application is changing. And that is something that Robin has innovated on, because we recognized, and I come from a storage background, I was a distinguished engineer at Veritas and was fortunate to build many data platforms there, we recognized that snapshotting just the storage does not deliver the promise of agility that Kubernetes offers. You've got to uplevel it into applications, and for the very first time, in fact, we're introducing concepts such as: you go to a MongoDB cluster and
you say, I want to snapshot this cluster. We understand the topology that this cluster has, how many shards, how many replicas, which services and volumes sit underneath, and we perform a snapshot that is an application-level snapshot. The benefit of an application-level snapshot is that if another developer wants to clone it and run queries against it, you don't have to go to the storage admin and say, give me clones of these hundred volumes. They just say, clone this MongoDB cluster, and within minutes you have an up-and-running MongoDB cluster, fully functional, and you can start querying right away. The other thing is true application portability. You have these periodic snapshots, so let's say you run out of capacity in your data center, which is on-premises, and you would like to burst into a different cloud and run a clone in GKE, because that's where the capacity is. Our snapshots, the way they are implemented and architected, allow you to port an entire application, along with its topology, metadata, and data, so that you can go and stand up a fully functional, ready-to-use MongoDB cluster in GKE in the cloud. >>Now, you talk about GKE, Google Kubernetes Engine, on GCP, Google Cloud Platform. Obviously, when you think about Kubernetes, that's kind of the mothership when you come right down to it. How do your platform and GKE and GCP work together? >>So the first thing is that we have a partnership which is led by engineering-to-engineering engagement. Part of it is centered around a standard set of APIs whereby the advanced data management capabilities that we're talking about can be brought into the Kubernetes world itself, with, of course, GKE as the implementation footprint. So that's one area that we've been collaborating on. The second is that, from a Google perspective, the preferred storage for running enterprise workloads, stateful workloads, the data-intensive workloads we've been talking about, is Robin Storage, and we're definitely pretty excited by the fact that, after rigorous technical evaluation, Google has chosen Robin Storage as the preferred storage for these demanding workloads. So from both these standpoints, moving the state of the art of what it means to provide data management capabilities to Kubernetes, and providing a solution that works today for customers who are embracing GKE, both on-prem and in the cloud, to bring stateful workloads, we're working with Google and are pretty excited about that. Partha, can you add further color on the engineering partnership? >>Absolutely. As Radhesh mentioned, for the Google platform we are the preferred storage solution. Now, can we just rewind back a little bit there? There are about 25 to 30 different storage vendors providing storage for Kubernetes. So what is it that is so special that it let us get to this point? We took a fundamentally different approach when we saw this problem for GKE, for Kubernetes. You could have started with several open-source storage solutions that are out there and built on top of them. There are companies that take BTRFS, for example, and build on that; there are companies that take Ceph and build on that. We fundamentally said:
Listen, if you want to elevate the experience from storage up to applications, like the example I took earlier of snapshotting and migrating a MongoDB cluster, then if your storage stack is unaware of the application, which means the storage stack is unaware of the topology of the application, can you really do application-consistent snapshots? You can't. All you can do is snapshot individual volumes. Now, if the storage stack is not aware of the application topology, can you actually deliver application-level quality of service? And if you can't do that, can you really guarantee noisy-neighbor elimination? You have to do all those things right if you really want to run data platforms; those are the core things you need to get right. So we took the view that it will not cut it to build a storage stack on top of BTRFS, for example, or on Ceph. We took a ground-up approach and said: look, if you want to build a storage stack that is cloud-native, Kubernetes-native, what would that look like, and how would the primitives be exposed so that it can deliver the entire experience to applications? So architecturally we are very superior compared to the other players out there, and the proof is that we got picked. Now, that's one aspect. The other aspect is that the approach we have taken to expose these primitives, around snapshots and backup and portability and all that, is very clean and very pragmatic in how it works with both born-in-the-cloud as well as prior workloads. Because of that, we're also collaborating with the Google engineers to come up with a set of APIs that we're planning to standardize around Kubernetes, so that you have a very standard set of APIs through which you can trigger these data management calls. That's the other engineering-to-engineering collaboration: creating standardized APIs based on the knowledge we have gained, because we have field deployments, like Radhesh talked about, Oracle RAC in the field, and people deploying multiple petabytes of storage in a single Kubernetes Robin cluster. All that learning, all the experience we have had, has contributed towards this joint engineering effort to create standardized data management. >>So Robin.io has delivered a piece of technology for handling stateful Kubernetes clusters that has been validated by Google, that can be used now, and that is the basis for further engineering work to move this more into the mainstream in the future. Very exciting stuff. Partha, Radhesh, thanks very much for being here in the Cube. >>Thank you. >>Thank you. >>And once again, I want to thank Partha Seetala, who is the co-founder and CTO of Robin.io, and Radhesh Menon, who is the CMO of Robin.io. I'm Peter Burris. Thanks very much for watching this Cube conversation. Until next time.
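For readers who want to anchor Partha's volume-level versus application-level distinction in something runnable, here is a small, hypothetical sketch using the standard CSI snapshot API (snapshot.storage.k8s.io/v1) through the official Kubernetes Python client. It snapshots every PVC carrying a given app label, which is precisely the per-volume mechanism he argues is insufficient; Robin's application-topology-aware snapshots are proprietary and not reproduced here. The label selector and snapshot class name are assumptions.

```python
# Hypothetical sketch: snapshot each PVC belonging to one app with the
# upstream CSI VolumeSnapshot API. This is the volume-level mechanism the
# conversation contrasts with application-level snapshots, which would also
# capture the app's topology and metadata.
from kubernetes import client, config

def snapshot_app_volumes(namespace="default", app_label="app=mongodb",
                         snapshot_class="csi-snapclass"):  # assumed names
    config.load_kube_config()
    core = client.CoreV1Api()
    crds = client.CustomObjectsApi()
    pvcs = core.list_namespaced_persistent_volume_claim(
        namespace, label_selector=app_label)
    for pvc in pvcs.items:
        body = {
            "apiVersion": "snapshot.storage.k8s.io/v1",
            "kind": "VolumeSnapshot",
            "metadata": {"name": f"{pvc.metadata.name}-snap"},
            "spec": {
                "volumeSnapshotClassName": snapshot_class,
                "source": {"persistentVolumeClaimName": pvc.metadata.name},
            },
        }
        # VolumeSnapshot is a CRD, so it goes through the custom-objects API.
        crds.create_namespaced_custom_object(
            group="snapshot.storage.k8s.io", version="v1",
            namespace=namespace, plural="volumesnapshots", body=body)
        print("requested snapshot for", pvc.metadata.name)

if __name__ == "__main__":
    snapshot_app_volumes()
```

Note the gap this sketch leaves open: each snapshot is independent and knows nothing about shard membership or replica state, so restoring a consistent MongoDB cluster from these still requires the application-level coordination the conversation describes.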

Published Date : Apr 9 2019

