

The Road to Autonomous Database Management: How Domo is Delivering SLAs for Less


 

Hello everybody, and thank you for joining us today at the virtual Vertica BDC 2020. Today's breakout session is entitled "The Road to Autonomous Database Management: How Domo is Delivering SLAs for Less." My name is Sue LeClair, I'm the director of marketing at Vertica, and I'll be your host for this webinar. Joining me is Ben White, senior database engineer at Domo. But before we begin, I want to encourage you to submit questions or comments during the virtual session. You don't have to wait, just type your question or comment in the question box below the slides and click Submit. There will be a Q&A session at the end of the presentation, and we'll answer as many questions as we're able to during that time. Any questions that we aren't able to address, we'll do our best to answer offline. Alternatively, you can visit the Vertica forums to post your questions there after the session; our engineering team is planning to join the forums to keep the conversation going. Also, as a reminder, you can maximize your screen by clicking the double-arrow button in the lower right corner of the slide. And yes, this virtual session is being recorded and will be available to view on demand this week; we'll send you a notification as soon as it's ready. Now let's get started. Ben, over to you.

Greetings everyone, and welcome to our virtual Vertica Big Data Conference 2020. Had we been in Boston, the song you would have heard playing in the intro would have been "Boogie Nights" by Heatwave. If you've never heard of it, it's a great song. To fully appreciate that song the way I do, you have to believe that I am a genuine database whisperer. Then you have to picture me at 3 a.m., on my laptop, tailing a Vertica log, getting myself all psyched up. Now, as cool as that may sound, 3 a.m. boogie nights are not sustainable. They don't scale. In fact, today's discussion is really all about how Domo engineered the end of 3 a.m.
Boogie Nights. Again, I am Ben White, senior database engineer at Domo, and as we heard, the topic today is the road to autonomous database management and how Domo is delivering SLAs for less. The title is a mouthful; in retrospect I probably could have come up with something snazzier, but it is, I think, honest. For me, the most honest word in that title is "road." When I hear that word, it evokes thoughts of the journey and how important it is to just enjoy it. When you truly embrace the journey, often you look up and wonder: how did we get here, where are we, and of course, what's next? Now, I don't intend to come across as too deep, so I'll submit there's nothing particularly prescient in simply noticing the elephant in the room when it comes to database autonomy; my opinion is merely, and perhaps more accurately, my observation. For context, imagine a place where thousands and thousands of users submit millions of ad-hoc queries every hour. Now imagine someone promised all these users that we could deliver BI leverage at cloud scale in record time. I know what many of you must be thinking: who in the world would do such a thing? Of course, that news was well received, and after the cheers from executives and business analysts everywhere, and the chants of "keep calm and query on," finally started to subside, someone turns and asks, "That's possible? We can do that, right?" Except this is no imaginary place; this is a very real challenge we faced at Domo. Through imaginative engineering, Domo continues to redefine what's possible. The beautiful minds at Domo truly embrace the database engineering paradigm that one size does not fit all. That little philosophical nugget is one I picked up while reading the white papers and books of some guy named Stonebraker. So, to understand how I, and by extension Domo, came to truly value analytic database administration, look no further than that philosophy and what embracing it would mean. It meant, really, that while others were engineering
skyscrapers, we would endeavor to build data neighborhoods with a diverse topology of database configurations. This is where our journey at Domo really gets under way. Without any purposeful intent to define our destination, not necessarily thinking about database-as-a-service or anything like that, we had planned this ecosystem of clusters capable of efficiently performing varied workloads. We achieved this with custom configurations for node count, resource pools, configuration parameters, etc. But it also meant concerning ourselves with the unintended consequences of our ambition: the impact of increased DDL activity on the catalog, system overhead in general. What would be the management requirements of an ever-evolving infrastructure? We would be introducing multiple points of failure; what are the advantages, the disadvantages? Those types of discussions and considerations really helped to define the basic characteristics of our system. The databases themselves needed to be trivial, redundant, potentially ephemeral, customizable, and above all scalable, and we'll get more into that later. With this knowledge of what we were getting into, automation would have to be an integral part of development; one might even say automation would become the first point of interest on our journey. Now, using popular DevOps tools like SaltStack, Terraform, and ServiceNow, everything would be automated, and I mean everything: from larger multi-step tasks like database designs, database cluster creation, and reboots, to smaller routine tasks like license updates, moveouts, and projection refreshes. All of this cool automation certainly made it easier for us to respond to problems within the ecosystem, but these methods alone still left our database administration reactionary, and reacting to an unpredictable stream of slow-query complaints is not a good way to manage a database. In fact, that's exactly how 3 a.m.
Boogie Nights happen, and again, I understand there was a certain appeal to them, but ultimately managing that level of instability is not sustainable. Earlier I mentioned an elephant in the room, which brings us to the second point of interest on our road to autonomy: analytics, more specifically analytic database administration. Why are analytics so important, not just in this case but generally speaking? I mean, we have a whole conference set up to discuss it; Domo itself is self-service analytics. The answer is curiosity. Analytics is the means by which we feed the insatiable human curiosity, and that really is the impetus for analytic database administration. Analytics is also the part of the road I like to think of as a bridge: the bridge, if you will, from automation to autonomy. And with that in mind, I say to you, my fellow engineers, developers, and administrators, that as conductors of the symphony of data we call analytics, we have proven to be capable producers of analytic capacity. We take pride in that, and rightfully so. The challenge now is to become more conscientious consumers. In some way, shape, or form, many of you already employ some level of analytics to inform your decisions, but far too often we are using data that would be categorized as lagging. Perhaps you're monitoring slow queries in the Management Console; better still, maybe you consult the Workload Analyzer. How about a logging and alerting system like Sumo Logic? If you're lucky, you have Domo, where you can monitor and alert on query metrics like this. These are all examples of analytics that help inform our decisions. At Domo, the incorporation of analytics into database administration is very organic, in other words, pretty much company mandated. As a company that provides BI leverage at cloud scale, it makes sense that we would want to use our own product to be better at the business of Domo. Domo adoption stretches across the entire company, and everyone uses Domo to deliver insights into the hands of the people that need it, when they
need it most. So it should come as no surprise that we have, from the very beginning, used our own product to make informed decisions as they relate to the application back end. In engineering, we call our internal system Domo for Domo. Domo for Domo, in its current iteration, uses a rules-based engine with elements of machine learning to identify and eliminate conditions that cause slow query performance. Pulling data from a number of sources, including our own, we could identify all sorts of issues: global query performance, actual query counts, success rates (for instance, as a function of query count), and of course environment timeout errors. This was a foundation, right, this recognition that we should be using analytics to be better conductors of curiosity. These types of real-time alerts were a legitimate step in the right direction for the engineering team, though we saw ourselves in an interesting position. As far as Domo for Domo, we started exploring the dynamics of using the platform not only to monitor and alert, of course, but to also triage and remediate. Just how much autonomy could we give the application, and what were the pros and cons of that? Trust is a big part of that equation: trust in the decision-making process, trust that we can mitigate any negative impacts, and trust in the very data itself. Still, much of the data came from systems that interacted directly, and in some cases indirectly, with the database. By its very nature, much of the data was past tense and limited, you know, things that had already happened, without any reference or correlation to the conditions that led to those events. Fortunately, the Vertica platform holds a tremendous amount of information about the transactions it has performed, its configurations, and the characteristics of its objects like tables, projections, containers, resource pools, etc. This treasure trove of metadata is collected in the Vertica system tables and the appropriately named Data Collector tables. As of version 9.3, there are over 190
tables that make up the system tables, while the Data Collector is a collection of 215 components. A rich collection can be found in the Vertica system tables. These tables provide a robust, stable set of views that let you monitor information about your system resources, background processes, workload, and performance, allowing you to more efficiently profile, diagnose, and correlate historical data such as load streams and query profiles to Tuple Mover operations and more. Here you see a simple query to retrieve the names and descriptions of the system tables, and an example of some of the tables you'll find. The system tables are divided into two schemas: the catalog schema contains information about persistent objects, and the monitor schema tracks transient system states. Most of the tables you find there can be grouped into the following areas: system information, system resources, background processes, and workload and performance. The Vertica Data Collector extends system table functionality by gathering, retaining, and aggregating information about your database; the Data Collector makes that information available in system tables. A moment ago I showed you how to get a list of the system tables and their descriptions, but here we see how to get that information for the Data Collector tables. With data from the Data Collector tables and the system tables, we now have enough data to analyze what we would describe as conditional, or leading, data that allows us to be proactive in our system management. This was a big deal for Domo, and particularly Domo for Domo, because from here we took the critical next step, where we analyze this data for conditions we know, or suspect, lead to poor performance, and then we can suggest the recommended remediation. Really, for the first time, we were using conditional data to be proactive in our database management. We track many of the same conditions that Vertica support analyzes via scrutinize, like tables with too many projections, or non-partitioned
fact tables, which can negatively affect query performance, and, like Vertica, we suggest that if the table has a date or timestamp column, we recommend partitioning by month. We also track catalog sizes as a percentage of total memory, and we alert on thresholds and trigger remediations. Requests per hour is a very important metric in determining when to trigger our scaling solution, and tracking memory usage over time allows us to adjust resource pool parameters to achieve the optimal performance for the workload. Of course, the Workload Analyzer is a great example of analytic database administration. I mean, from here one can easily see the logical next step, where we were able to execute these recommendations, manually or automatically, via some configuration parameter. Now, when I started preparing for this discussion, this slide made a lot of sense as far as the logical next iteration for the Workload Analyzer. I left it in because, together with the next slide, it really illustrates how firmly Vertica has its finger on the pulse of the database engineering community. In the 10.0 Management Console, ta-da, we have the updated Workload Analyzer. They've added a column to show tuning commands, and the Management Console allows the user to select and run certain recommendations; currently the tuning commands are things like analyze statistics, but you can see where this is going. For us, using Domo with our Vertica connector, we were able to pull the metadata from all of our clusters. We constantly analyze that data for any number of known conditions, and we build the recommendations into scripts so that we can either execute the actions immediately or save them for later manual execution, and as you would expect, those actions are triggered by thresholds that we can set. From the moment Eon Mode was released to beta, our team began working on a serviceable auto-scaling solution. The elastic nature of Eon Mode's separation of storage and compute clearly lent itself to our ecosystem's
requirement for scalability. In building our system, we worked hard to overcome many of the obstacles that came with the more rigid architecture of Enterprise Mode, but with the introduction of Eon Mode we now have a practical way of giving our ecosystem at Domo the architectural elasticity our model requires. Using analytics, we can now scale our environment to match demand. What we've built is a system that scales without adding management overhead or unnecessary cost, all the while maintaining optimal performance. Well, really, this is just our journey up to now, which begs the question: what's next? For us, we expand the use of Domo for Domo within our own application stack, and maybe more importantly, we continue to build logic into the tools we have by bringing machine learning and artificial intelligence to our analysis and decision making. To further illustrate those priorities, we announced support for Amazon SageMaker Autopilot at our Domopalooza conference just a couple of weeks ago. For Vertica, the future must include in-database autonomy, and the enhanced capabilities in the new Management Console, to me, are a clear nod to that future. In fact, with a streamlined and lightweight database design process, all the pieces should be in place for Vertica to deliver autonomous database management itself. We'll see. Well, I would like to thank you for listening, and now of course we will have a Q&A session, hopefully a very robust one. Thank you. [Applause]
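The system-table and Data Collector lookups Ben describes, plus the Workload Analyzer call, might look roughly like the following sketch. The table, column, and function names follow the Vertica documentation of that era; everything else here (the choice of the open-source `vertica_python` driver, the connection settings, the helper structure) is my assumption, since the session shows only slides, not code.

```python
# Hypothetical sketch of the queries discussed in the session.
# SQL identifiers follow the Vertica docs; driver and connection
# details are illustrative assumptions.

# List the system tables along with their descriptions.
SYSTEM_TABLES_SQL = """
    SELECT table_schema, table_name, table_description
    FROM v_catalog.system_tables
    ORDER BY table_schema, table_name;
"""

# List the Data Collector components and what each one records.
DATA_COLLECTOR_SQL = """
    SELECT DISTINCT component, description
    FROM v_monitor.data_collector
    ORDER BY component;
"""

# Workload Analyzer: built-in function that returns tuning recommendations.
WORKLOAD_ANALYZER_SQL = "SELECT ANALYZE_WORKLOAD('');"


def run_checks(conn_info):
    """Run the three queries against a cluster and return the result sets."""
    # Imported here so the query text above can be read without the
    # driver installed.
    import vertica_python

    results = {}
    with vertica_python.connect(**conn_info) as conn:
        cur = conn.cursor()
        for name, sql in (("system_tables", SYSTEM_TABLES_SQL),
                          ("data_collector", DATA_COLLECTOR_SQL),
                          ("workload_analyzer", WORKLOAD_ANALYZER_SQL)):
            cur.execute(sql)
            results[name] = cur.fetchall()
    return results


# Placeholder connection settings -- adjust for a real cluster before
# calling run_checks(CONN_INFO).
CONN_INFO = {"host": "localhost", "port": 5433,
             "user": "dbadmin", "password": "", "database": "domo"}
```

In a setup like the one described, the Workload Analyzer output would feed the rules engine that decides whether a recommendation is applied automatically or queued for manual review.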

Published Date : Mar 31 2020


Marc Crespi, ExaGrid Systems | VeeamON 2019


 

>> Live from Miami Beach, Florida, it's theCUBE covering VeeamON 2019. Brought to you by Veeam. >> Welcome back to Miami, everybody. This is Dave Vellante with Peter Burris. We're here at day one at VeeamON 2019. This is theCUBE's 3rd year of doing VeeamON. We started in New Orleans, it was a great show. Last year was Chicago, and here, Miami at the Fontainebleau hotel. Marc Crespi is here, he's the vice president of sales engineering for the Americas at ExaGrid Systems. Hello Marc, good to see you again. >> Good to see you. >> Thanks for coming on. So, give us the update. What's happening with ExaGrid? You guys got new headquarters in Marlborough. Marlborough's happening these days, right? We got the new shopping plaza, and the mayor's going crazy, so give us the update on ExaGrid. >> Yes, so we just moved into a beautiful new headquarters in Marlborough and share it with some great other companies. The company continues to grow rapidly, double-digit growth year over year, one of the few companies in this category that's growing that quickly. So everything's great. >> What's driving the growth? >> Well, customers are looking to fix the economics of backup. They've been spending too much money on it for a lot of years, so they look at products now, they want them to be simple, easy to use, and very cost-effective, and we drive that trend very hard. >> Yeah, I mean, what you just described, simple, easy to use, and cost-effective, really doesn't describe backup for the past 20 years. So what are you doing specifically to make it simple, cost-effective, and easy to use? >> Well, first of all, by working with companies like Veeam. Veeam is a very easy-to-use product, it's very intuitive, and then our product integrates very well with it, so the products work together very well and make for a very simple solution. >> What do you see as other big trends in backup? Veeam showed a slide today, 15 billion dollars.
A big chunk of that, maybe close to half of it, was backup and recovery, and there's all kinds of other stuff: data management, analytics, etc. What do you see, obviously cloud, you talked about the big superpowers, what are the big trends that are driving your business and, more importantly, your customers' transformation? >> Well, customers are looking to reduce the amount of data that they actually have to move. So, incremental technology is a really big one. Veeam's a pioneer in that, obviously, doing incremental backups, and that saves time and effort, saves space; along with data deduplication, it really makes for a cost-effective storage solution. >> Talk a little bit more about why you're growing, how you sort of uniquely compete in the marketplace with some of the big whales. >> Sure, so our most unique feature is our architecture, and it has both technical aspects and economic aspects. Because we're a scale-out architecture, meaning that with every capacity increase of your data, we're not just adding storage, we're adding compute power, network, memory, etc., so that we keep the backup times very, very, very low. That also makes for a very cost-effective architecture, because what we've done is you can scale out pretty much infinitely, and we've also eliminated the concept of end-of-life products. So we never force our customers into mandatory refreshes, so their economics are very predictable over a long period of time. >> What do you see as the biggest use cases today that are driving your business? I mean, obviously, backup and recovery, I talked earlier about some of these emerging data management, cloud obviously, is this big, Edge, you seeing much going on there. What are some of those workloads and use cases that you see? >> I think probably one of the biggest use cases these days is what I would call instant recoveries, meaning that rather than doing a traditional restore, which could take a long number of minutes, to hours.
Customers will actually run production workloads off of the backup target as a way to get users back productive more quickly than would've been done in the past. >> Yeah, and that's key because you see, with RPOs and RTOs, companies putting more and more pressure on the IT groups to shrink those times, presuming you're seeing that in conjunction with digital, digital business, digital transformation. You talked about architecture before. What about your architecture, and maybe your partnership with Veeam, allows customers to shrink those RPO and RTO times? >> I think the other aspect of our architecture that's very unique is what we call adaptive deduplication. One of the things we looked at when we architected the product was that deduplication is obviously a very effective technology, but what are the potential cons, things that would make it less effective in backup. And one of the things we realized was if you put deduplication in the middle of the backup window, and do deduplication while the backups are running, then you could interfere with the speed of disk. So we do something called adaptive deduplication, which means that we allow the objects from the backup software to land, and then we deduplicate and replicate them in parallel, but we make sure that we're not throttling the backups. So, we provide disk speeds even though we use deduplication. >> Okay. So, that's an example of one of the things you're doing to sort of improve it. How about Veeam integration? Is there anything specific there that you're doing that we should know about? >> Well, part of it is because of adaptive deduplication and because we maintain complete copies of backups, we uniquely support Veeam instant recovery like no other vendor can. Furthermore, we run what's called the Veeam Data Mover, which is actually Veeam technology that runs inside of our appliance and sets up an optimized communication protocol with the Veeam software that allows us to do a number of great things.
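ExaGrid's actual implementation is proprietary, but the land-first, deduplicate-afterwards idea Marc describes can be sketched in a toy form like this. The fixed-size chunking, the class layout, and all names here are illustrative assumptions, not ExaGrid's design:

```python
import hashlib

CHUNK_SIZE = 4096  # fixed-size chunking for illustration; real appliances use smarter schemes


def chunks(data, size=CHUNK_SIZE):
    """Split a byte string into fixed-size chunks."""
    for i in range(0, len(data), size):
        yield data[i:i + size]


class PostProcessDedupStore:
    """Toy model of a landing-zone-then-dedupe backup target: backups
    land whole (full-speed ingest), and deduplication runs afterwards,
    off the ingest path."""

    def __init__(self):
        self.landing = {}   # backup_id -> raw bytes (undeduplicated copy)
        self.store = {}     # chunk digest -> chunk bytes (each unique chunk once)
        self.recipes = {}   # backup_id -> ordered list of chunk digests

    def ingest(self, backup_id, data):
        # No dedup work during the backup window.
        self.landing[backup_id] = data

    def dedupe(self, backup_id):
        # Post-process step: chunk, hash, and keep only unique chunks.
        data = self.landing.pop(backup_id)
        recipe = []
        for chunk in chunks(data):
            digest = hashlib.sha256(chunk).hexdigest()
            self.store.setdefault(digest, chunk)
            recipe.append(digest)
        self.recipes[backup_id] = recipe

    def restore(self, backup_id):
        if backup_id in self.landing:
            # Instant-recovery path: a full, undeduplicated copy still exists.
            return self.landing[backup_id]
        # Otherwise rehydrate from the chunk store via the recipe.
        return b"".join(self.store[d] for d in self.recipes[backup_id])
```

Because each backup lands whole before deduplication runs, ingest proceeds at disk speed, and the most recent backup can be served straight from the landing zone without rehydrating chunks, which is roughly why this layout pairs well with instant-recovery workflows.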
>> Wait, double-click on that. So, is it an efficient protocol, or are there other sorts of accelerators that you've got in there? >> The protocol is optimized, and then we do some other acceleration around how you do synthetic fulls and things of that sort that are unique to the Data Mover. >> And you have news with Veeam this week, do you not? >> Yes, we do. We're announcing something called ExaGrid Backup with Veeam, and what it is, in a nutshell, is the ability for a customer to purchase both technologies from their preferred reseller by just ordering one part number. So it dramatically simplifies the acquisition of the two technologies and allows customers to simplify the buying process. >> So Veeam, I know, is all channel sales. How about you guys? How do you go to market? >> We also are, yes. >> So, talk more about your go-to-market. What do you have? Like, an overlay sales force that helps facilitate? You got partners? Maybe you can talk more about your ecosystem. >> Well, we have a worldwide sales force, and our salespeople, the people that do the selling, work directly with our partners, so we don't have a specialized channel workforce, but we have a specialized channel strategy, and our entire sales team is very well trained on the channel, how to work with the channel, and make them happy and successful. >> So, backup for a long time was kind of an afterthought. It was non-differentiated. You just did what you needed to make sure the devices could be recovered.
And then secondly, go back to the use cases, where do you guys see yourselves fitting into that increasingly federated backup capability? >> Well, I certainly do agree with it. I mean, it's always been a necessity, but now even with things like Ransomware and the cryptoviruses, and things of that sort, it's even more important than it's ever been. It's no longer just data loss, etc. So, we fit into that trend and we'll continue to fit into that trend by continuing to drive the economics through the floor. Customers want that level of protection, it's a little bit like insurance. You need the protection, but you don't want to pay a dollar more than you have to, right? So you want to put it on an economic diet, and the way our technology evolves, we come out with denser, faster systems at a lower cost per terabyte just about every year. And we'll continue to do that. >> So do you anticipate then that there's going to be specialized use cases or are you just going after taking costs out of the equation? >> It's not so specialized because it's very horizontal. Everybody does it and everybody backs up all their data. So, we don't specialize in any one area of the data center like database or anything of that sort. We go wherever the customer needs us to go inside their data center. >> It's in the data center, sorry David, it's in the data center. >> In the data center, we also have a cloud offering, we have partners that will offer disaster recovery as a service, so they'll have data centers that manage on behalf of the customers, and we also have an offering that goes into Amazon web services. And, shortly, we'll be coming out with one for Azure. >> And that is what? A software based offering that uses the cloud as a target? >> Correct, it's a virtual appliance that you can replicate into the cloud. >> All right. 
We don't have much time left tonight, we have a really important topic to cover, which is, we talked about last year, but I want to bring it up again, which is sports. >> Yup. Why don't we talk Boston sports, we could talk about Warriors. I got a question for you, but- >> I'll watch >> I asked you last year, and I think it was May, we were in Chicago, I said "Would you have traded Tom Brady?" At a time when the sentiment was, he was done. And you said "No way, absolutely not." You, Peter McKay, and Patrick Osmond all said emphatically no, you made the right call. So good job. >> Thank you. >> Your thoughts? >> Would never trade him. He can play until he's 100 for all I care. As long as he keeps performing at such a high level, why would you lose him? >> And then, of course, the Red Sox, 108 wins, that was an amazing gift that they gave us. So, I don't know if you're a baseball fan. >> I am. >> All right, I got to ask you, Peter. Are the Warriors the greatest basketball team in the history of basketball? >> Well, let's see... >> Brendan says yes. >> They are the best basketball team at a time of the most competitive NBA. Some of the rules have changed, but the athletes are better, they're more conditioned, they are more knowledgeable by how to play this game, and they are the best team in basketball without Kevin Durant and without Boogie Cousins. >> Yeah. >> So ... hard to argue. >> They're sweeping Portland without Durant which is pretty amazing. So Brendan, for years, has been trying to tell me that. You know, Brendan is our local basketball genius so, I don't know. >> Now, would the Warriors have beaten say a Bill Russell Celtics team with the Celtics- Bill Russell Celtics team rules? Maybe not. >> Yeah, I don't know. I would say I'm starting to come around to Brendan's way of thinking. But, Marc, we'll give you the last word here. 
VeeamON 2019, great venue here in Miami, very hip, hip company, hip venue, ExaGrid growing, double digit growth rate, so congratulations on that. Your final thoughts? >> Just great to be here, I always like coming to Veeam events, they're always very well attended, I get to meet a lot of customers and really enjoy it. >> Marc Crespi, thanks very much for coming to theCUBE. It's great to see you again. >> Thank you. >> All right, keep it right there everybody. Peter and I will be back with our next guest right after this short break. This is VeeamON 2019 and you're watching theCUBE.

Published Date : May 21 2019


Amber Hameed, Dollar Shave Club | Adobe Summit 2019


 

>> Live, from Las Vegas. It's theCUBE, covering Adobe Summit 2019. Brought to you by Adobe. >> Hey welcome back everyone, this is theCUBE's live coverage at Adobe Summit here in Las Vegas. I'm John Furrier, host of theCUBE with Jeff Frick, co-host for the next two days' live coverage. Our next guest is Amber Hameed, vice-president of Information Systems at the Dollar Shave Club. Welcome to theCUBE, thanks for coming on. >> It's great to be here. >> So I love your title, we were talking about it before the camera came on. It's not IT, it's Information Systems. Why is that different, tell us, what about the title? >> I think, everything from a technology point of view, there's no such thing as a purist anymore. I think it's really important to understand every aspect of the business as a technologist, to really evolve with the technology itself. In the role that I play at Dollar Shave Club, I have the fortune of actually working very closely with all aspects of our business, from marketing, to fintech, to data, to technology, which is where our IT function sits: it's essentially embedded and ingrained within the entire holistic approach to technology. So it's not isolated anymore. And when we look at technologists, we look at how they actually interface with all of the aspects of business processes first. That's how we understand what the needs of the business are, to then cater the innovation and the technology to it. >> So is there a VP of IT, Information Technology? 'Cause IT is kind of a word that makes people think of the data center, or cloud, or buying equipment. It's a different role, right, I mean that's not you. >> It is. If you look at the Information Systems evolution, you will see that more and more systems are geared towards business needs, and less and less towards pure-play technology. So back in the day, you had a CTO role in an organization, which was focused on infrastructure, networks, technology, as DevOps is considered to be.
Information Systems is actually focused more on the business itself, how do we enable marketing, how do we enable finance, how do we enable digital technology as a platform. But not so much as how do we develop a technology platform, that's part and parcel of what the business solution proposes, that drives how the technology operates. >> So what's old is new is coming back, Jeff, remember MIS, Management Information Systems? >> You don't want to remember this John. (laughing) >> Data Processing Systems Department. But if you think about it, we're doing Management Information Systems and we're processing a lot of data, kind of just differently, it's all with cloud now, so it's kind of important. >> That's exactly right. So technology's one aspect of bringing information together. So data is one aspect of it, business processes is another aspect of it and your resources, the way your teams are structured, are part and parcel of the strategy of any technology platform. >> Right, well what you're involved in, the topic of this show, is really not using that to so much support the business, but to be the business. And to take it to another level, to actually not support the product, but support the experience of the customer with your brand that happens to be built around some products, some of which are used for Shaving. So it's a really different way and I would imagine, except for actually holding the products in their hands, 99% of the customer engagement with your Dollar Shave Club is electronic. >> Well I mean our customer experience is a very, very unique combination within Dollar Shave Club. And that makes it even more challenging as a technologist to be able to cater, and bring that experience to what we call our members. 
So when we talk about a 360-degree approach from a technology platform point of view, we're taking into account the interaction with the customer from the time we identify them, who they are, who our segmented market is, to the time they actually interact with us in any capacity, whether that's looking at our content, whether it's coming to our site, whether it's looking at our app, and then actually how we service them once we acquire them. So there's a big focus, an arm of our customer strategy, that's focused on the customer experience itself, once they are acquired, once they become part of the club. And it's that small community experience, that we want to give them, that's integral to our brand. >> You guys have all the elements of what the CEO of Adobe said on stage: we moved from an old software model, we were too slow, now we're fast, new generation of users, reimagining the product experience. You guys did that, that was an innovation. How do you keep that innovation going, because you're a direct-to-consumer company, but you got a club and a member model, you've got to constantly be raising the bar on capabilities and value to your members. What's the secret sauce, how do you guys do that? >> It's exactly right. So as I mentioned, it's an evolving challenge. We have to keep our business very, very agile obviously, 'cause our time to market is essential. How quickly the consumers actually change their minds, you know, so we have to target them, we have to be effective in that targeting. And how quickly we actually deliver personalized content to them, that they can relate to, is integral to it. When we look at our technology stack, we consider ourselves to be, you know, a cut above the others, because we want to be on the bleeding edge of the technology stack no matter what we do. We have an event-driven architecture. We invested quite a bit in our data infrastructure.
I happen to be overseeing our data systems platform, and when I started with the organization, that was our central focus. In fact, before we invested in Adobe as a stack, which is helping us tremendously to drive some of the 360-degree view of customer centralization, we actually built our entire data architecture first, in order to make the Adobe products a success. And it was that architecture and platform that then enabled a very successful implementation of Adobe Audience Manager going forward. >> How do you do that? Because this is one of the things that keeps coming up in the themes of every event we cover, all the different conversations with experts: people are trying to crack the code on the data architecture. I've heard people say it's a moving train, it's really hard. It is hard, how did you guys pull it off? Did you take kind of a slow approach? Was it targeted, was there a methodology to it? Can you explain? >> Yeah, so essentially, you know, as you can imagine, being a consumer driven organization, we have data coming in from all aspects. From all of our applications, what we call first-party data. We also have what we call second-party data, which is essentially information that we are using with our external marketers. They're using our information to channel, and we're using all of that channeled information back in, to then use that and make other strategic decisions. It was really, really important for us to set up an architecture that is the core foundation of any sort of a data organization that you want to set up. The other big challenge is the resources: as you can imagine, this is a very competitive environment for data resources, so how do you keep them interested, how do you bring them to your brand to work on your data architecture? It is to make sure that you're providing them with the latest and greatest opportunities to take advantage of. So we're actually a big data organization, we run heavily on an AWS stack.
We have bleeding edge technology stacks that resources are actually interested in getting their hands into, and learning and building on their skill sets. So when you take that ingredient in, the biggest driver, once you have that architecture set up, is how do you get your organization to be a data-driven organization? And that is when you start the adoption process, slowly. You start delivering the insights, you start bringing your business along and explaining what those insights look like. >> I'm just curious, what are some of the KPIs that you guys take a look at, that probably a traditional marketer that graduated from P&G thirty years ago, you know, wasn't really thinking about, that are really fundamentally different than just simply sales, and revenue, and profitability, and some of those things. >> Well I mean, I don't think there's a magic bullet, but I think there are things that are key drivers in our business, obviously, because we're a subscription model, we are an industry disruptor there, and we started out by really looking at what the value is that we can bring to our customers. So when we put them on a subscription model, it was very important for us to look at how much we're spending in the acquisition of that customer, so our CPA, and what we call the Golden Rule, and then how we are delivering on those. And the key KPI there is the LTV, the longevity, which is the lifetime value of a customer. So we're very proud to have a pretty substantial customer base; these members have been with us for over six years. And the way we keep them interested is refreshing all of that information that we're providing to them, in a very personalized way.
>> How much do you think, in terms of the information that they consume to stay engaged with the company, what percentage of the value, you know, is the actual razor blade, or the actual product and the use of that, versus all the kind of ancillary material, the content, the being part of a club, and those other things? I would imagine it's a much higher percentage on the latter than most people think. >> Exactly right, so our members are, we get this feedback constantly. I mean, once they get into it, usually a large customer base, we have over three million subscribers of our MEL Magazine, which is independent content delivery, from our site. And when people come and read the magazine, they automatically, they don't know at first, that it's part of the Dollar Shave Club umbrella. But once they get interested and they find out that it is, they automatically are attracted to the site and they land on it, so that's one arm that essentially targets through original content. The other aspect of it is, once you are a member, every shipment that you receive actually has an original content insert in it. So the idea is that when you're in the bathroom, you're enjoying your products, you're also enjoying something that refreshes, keeps your mind just as healthy as your body. >> So original content's critical to your strategy? >> It is, yes. >> On engagement and then getting that data. So I got to ask you a question, this is really an earned media kind of conversation, to use the parlance of the industry. Earning that trust is hard, and I see people changing their strategies from the old way of thinking of communities, forum software, login, be locked in, to me being more open. Communities are a hard nut to crack these days. You got to earn it, you know, you can't buy community. How is the community equation changing? You guys are doing it really well, what's the formula, obviously content's one piece.
How would you share, how someone should set up their community strategy? >> Well I think it's also a lot of personal interaction. You know, we have club pros that are exclusively dedicated to our members, and meeting our member needs. And it's world-class customer service. And from a technology point of view, we have to make sure that our club pros understand our customers holistically. They understand how they've previously interacted with us. They understand what they like. We also do member surveys and profile reviews with our members on a regular basis. We do what we call social scraping, so we understand what they're talking about, when they're talking in social media about our brand. And all of that is part of the technology stack. So we gather all this information, synthesize it, and provide it to our club pros. So when a member calls in, that information needs to be available to them, to interact properly and adequately. >> So it's intimacy involved, I can get an alignment. >> Absolutely yeah, it's hardcore customer service: the right information, at the right time, in the right hands of our club pros. >> So here's a trick question for you, share a best practice in the industry. >> I think the idea of best practices is sort of, kind of, on its way out. I think it's what we call evolving practices. I think that the cornerstone of every team, every culture, every company, is how you're learning constantly from the experiences that you're having with your customers. And you bring in a notion and it quickly goes out the door, based on feedback that you've received from your customers, or an interaction you've had. So you have to constantly keep on evolving what are true and tested best practices.
>> And that begs a question then: best practices used to be a term, like boilerplate, standards, but when you have personalization at the micro-targeted level, personalization is the best practice, and it's not really a practice, it's unique to everybody. >> That's true, and I think it's sort of, kind of, a standing ground, it's a foundation. It gives you somewhere to start, but I think you'd be hard-pressed to say that that is going to be the continuation of your experience. I think it's going to change and evolve drastically, especially in the world that we live in, which is highly digitized. Customer experiences and their attention span are so limited, that you cannot give them stale best practices, you have to keep changing.
>> So the other really key piece is the subscription piece. A, it's cool that it's a club, right, it's not just a subscription, you're part of the club, but subscriptions are such a powerful tool, to force you to continue to think about value, continue to deliver value, to continue to innovate, because you're taking money every month and there's an option for them to opt out every month. I wonder, how hard is that, to kind of get into people's heads that have not worked in that way? You know, I've worked in a product, we ship a new product once a year, we send it out, you know, okay we're working on the next PRD and MRD. Versus, you guys are almost more like a video game. Let's talk about video games, because a competitor will come out with a feature, suddenly, tomorrow, and you're like, ah, stop everything. Now we need to, you know, we need to feature match that. So it's a very different kind of development cycle, as you said, you've got to move. >> Yeah, exactly right. So there are different things that we deliver with every interaction with our customers. So one of the key ingredients is, obviously, we have an evolving brand and the content, a physical product of our brand.
So the area where we focus in on a lot, is that we can't just bombard you, with products, after products, after products. We have to be able to cater to your needs specifically. So when we're listening in to people and what they're talking about in their own personal grooming, personal care needs, we're also going out there and finding information and content to constantly allow them, to hear in on what their questions are all about. What their needs are on a daily basis. How do men interact with grooming products in general, when they go into a retail brick-and-mortar environment, versus when they are online. So all of that is the core ingredient that when we are actually positioning our technology around it. When it comes to innovation, my personal approach to innovation is, the people that are working for you in your organization, whether they're marketers, whether they're technologists, it's very, very important to keep them intrigued. So I personally have introduced what we call an innovation plan. And what that does, is as part of our roadmap delivery for technology, I allow my team members to think about, what they would want to do in the next phase of what they want to to deliver, outside of what they do everyday as their main job. That gets their creativity going and it adds a lot of value to the brand itself. >> And it's great for retention, 'cause innovative people want to solve hard problems, they want to work with other innovative people. So you got to kind of keep that going, you know, so the company wins. >> Exactly, and the company is very approachable when it comes to lunch-and-learn opportunities and essentially learning days. So you keep your resources, and your team's really, really invigorated and working on core things, that are important to the business. >> Amber, thank you so much for coming on, and sharing these amazing insights. >> Thank you, I appreciate it. >> I'll give you the final word, just a final parting word. 
Share an experience of something, that you've learned over your journey, as VP of Information. Something that you, maybe some scar tissue, something that was a bump in the road, that, a failure that you overcame and you grew from. >> I think as is as a female technologist, I think I would say, and I would encourage most women out there is that it's really important to focus on your personal brand. It's really important to understand what you stand for, what your message is and one of the things that I have learned is that takes a village, it takes a community of people, to really help you grow and really staying strong and connected to your resources, whether they are working with you directly, whether they're reporting to you, you learn constantly from them. And just to be open and approachable, and be able to be open to learning, and then evolving as you grow. >> Amber thank you for sharing. >> Great advice. >> Thank you so much. >> It's theCUBE live coverage here in Las Vegas, for Adobe Summit 2019. I'm John Furrier with Jeff Frick, stay with us. After this short break we'll be right back. (upbeat music)

Published Date : Mar 26 2019



Joel Horwitz, WANdisco | CUBEConversation, January 2019


 

(soaring orchestral music) >> Everyone, welcome to this CUBE Conversation here at Palo Alto, California. I'm John Furrier, host of theCUBE. We are here with Joel Horwitz, who's the CMO of WANdisco. Joel, great to see you, formerly of IBM, we've known you for many years, we've had great conversations when you were at IBM, rising star, now at WANdisco, congratulations. >> Thank you, yeah, it's really great to be at WANdisco, and great to be here with theCUBE. >> So we've had many conversations, again, goin' back, you were a rising star in data, you know the cloud real well. Why WANdisco, why leave IBM for WANdisco, what attracted you to the opportunity? >> Yeah, really three things. First and foremost, the people. I've known the WANdisco team now for years. Back in my Hadoop days, when I was at Datameer, I used to hang out with the WANdisco team at Data After Dark in New York, which was great, and they had the best marketing there at the time. Two, the product, I mean I won't join a company unless the product is really legit, and they have an absolutely great technology, and they are applying it to some really tough problems. And third is just the potential, really, the potential of this company is not even close to being tapped. So there's a ton of runway there, and so, for me, I'm just totally grateful, and totally honored, to be a part of WANdisco. >> What's the tailwind for them, that wave that they're on, if you will, because you mentioned there's a lot of runway or headroom, a lot of market growth. Certainly cloud, David Richards will talk about that. But what attracted you, 'cause you knew the cloud game too. >> Yeah, yeah. >> IBM made a big run at the cloud. >> Yeah well, I came in, at IBM, through the data door, so to speak, and then I walked through the cloud door as well, while I was there. And the reality is that data continues to be the lifeblood of an enterprise, no matter what.
And so, what I saw in WANdisco was that they had technology that allowed large enterprises to, frankly, replicate or manage their data across Hadoop clusters, from cluster to cluster. And then we ended up, when I spoke with you last, with David here, we also recognized the opportunity that, just as copying large-scale data from one Hadoop cluster to another is challenging, it's really not that different from copying data from, say, HDFS to an object storage or S3, a pretty similar problem. And so that's why, just this past week, we announced live data for multicloud. >> Explain live data for multicloud, I've read it in the news, got some buzz, it's this great trend, live. We're doing a lot of live videos on theCUBE, live implies real time. Data's data. Multicloud is clearly becoming one of those enterprise categories. >> Yeah. >> First it was public cloud, then hybrid cloud. >> Yeah. >> Now it's multicloud. How does live data fit into multicloud? >> Yeah, so multicloud, and live data, as I just mentioned, we have live data for Hadoop, so that's fairly obvious, so if you're going multi-cluster you can do that. As well as from, even on-prem, data center to data center, so, multi-site if you will. But multicloud is a really interesting phrase that's kind of cropped up this year. We're seeing it used quite a lot. The focus in multicloud has been mainly on applications. And so, talking about, how do you have a container strategy? Or a virtualization strategy, for your applications? And so, I think of it really as a multicloud strategy, as opposed to a multicloud architecture. So we're helping our enterprise clients think about their multicloud strategy, so they're not locked in to any one vendor, so they're able to take advantage of all the great innovations that are happening, if you ask me, on the cloud first, and then ultimately come down to, at times, on-prem.
>> What are the pitfalls between multicloud strategy and multicloud architecture? You just said customers don't want to get locked in, obviously, no one wants to get locked in, multi-vendor used to be a big buzzword during that last wave of client-server computing. >> Yeah. >> Now multicloud seems like multi-vendor, what do you mean by architecture versus strategy, how do you parse that? >> Yeah, so like I said, in terms of your data, right, it all comes back to your data. If you go all in on, say, one vendor, and you're architecting for that vendor only, and you're choosing your migration, your data management tools, for a particular cloud vendor, and, said a different way, if you're only using the native tools from that vendor, then it's very difficult to ever move off of that cloud, or to take advantage of other clouds as they, for example, maybe have new IoT offerings, or have new blockchain offerings, or have new AI offerings, as many others come on the scene. And so, that's what I mean by strategy: if you choose one vendor for your certain toolset, then it's going to be very difficult to maintain arbitrage between the different vendors. >> Talk about how you guys are attacking the market. Obviously, it's clear that data has been a fundamental part of WANdisco's value proposition. Moving data around has been a top concern, even back in the Hadoop days, now it's in the cloud. >> Yeah. >> Moving data across the network, whether it's cloud to cloud, or cloud to data center, or to the edge of the network-- >> Yep. Yep. >> Is a challenge. >> You know, at IBM, when I was there in 2016, we were coming up with our strategy when I was in Corp Dev. We talked about four different areas of data: we talked about data gravity, so data has gravity. We talked about data movement, and we talked about data science. And we talked about data governance. And I still think those are relatively the four major themes around this topic of data.
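Joel's strategy-versus-architecture distinction can be made concrete in code. As a minimal sketch (the interface and class names here are hypothetical illustrations, not WANdisco's or any cloud vendor's actual API), keeping application code behind a neutral object-store interface is what keeps backends swappable:

```python
# Hypothetical sketch: program against a vendor-neutral interface so the
# backend (S3, Azure Blob, on-prem HDFS, ...) can be swapped without
# rewriting application code. Not a real WANdisco or cloud-vendor API.
from abc import ABC, abstractmethod


class ObjectStore(ABC):
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...


class InMemoryStore(ObjectStore):
    """Stand-in for any concrete vendor backend in this sketch."""

    def __init__(self):
        self._blobs = {}

    def put(self, key, data):
        self._blobs[key] = data

    def get(self, key):
        return self._blobs[key]


def replicate(src: ObjectStore, dst: ObjectStore, keys):
    # The copy loop never names a vendor, so any pair of backends works.
    for key in keys:
        dst.put(key, src.get(key))
```

Swapping clouds then means writing one new `ObjectStore` subclass, not re-architecting the pipeline; that is the "strategy, not architecture" point in code form.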
And so, absolutely data has gravity, and not just in terms of the absolute size and weight, if you will. It also has applications that depend on it, the business itself depends on it, and so the types of strategies that we've seen to migrate data, say, to the cloud, or to have a hybrid data management strategy, have been lift and shift, or to load it on to the back of, I always picture that image of the forklift lifting all those tape drives onto the airplane, you know, the IBM version of that. And that's like a century old at this point, so, we have a way to replicate data continuously, using our patented consensus technology, that's in the lifeblood of our company, which is distributed computing. And so having a way to migrate data to the cloud without disrupting your business is not just marketing speak, it's really what we are able to do for our clients. >> How do you guys go to market, how do you guys serve customers, what's the strategy? >> So, primarily we've formed a number of strategic partnerships, obviously one with IBM that I helped spearhead while I was there. We actually just recently announced that we now support Big SQL, so it's actually the first opportunity where, if you are using a database provided by IBM, you can actually replicate across different databases and still query it with Big SQL. Which is a big deal, right, it means you can still have access to your data while it's in motion, right, that's pretty cool. And then so IBM is there, and then secondly, we've formed a number of other strategic partnerships with the other cloud vendors, of course: with Alibaba we have an OEM, with Microsoft we have a preferred selling motion, and AWS, of course, we're in their marketplace.
So primarily, we sell through a number of our key partnerships, because we are fairly integrated, like I said, into the architecture of these platforms. And, just to comment more deeply on that, when you look at object storage on each of these various public cloud vendors, they may look similar on the surface, maybe they all use the same APIs or have some level of similar interaction, they look like they're the same, the pricing might be the same. We go one level deeper, and they're all very different, they're all very different flavors of object storage. And so while it might seem like, "Oh, that's trivial to work with," it really isn't, it's extremely non-trivial. So we help not only our customers solve that, we also significantly help our partners move their clients to the cloud, to their cloud, faster. >> So you basically work through people who sell your product, to the end user customer, or through their application or service. >> Yeah, that's our main route to market, I would say. The other, obviously, is our direct sales force, who's out there working with the best clients in the world. AMD is a great customer of ours, who we recently helped migrate to Microsoft Azure. And we have a number of other large enterprise customers, in retail, and finance, and media. And so really, when it comes down to it, yeah, it's those two major motions, one through the cloud vendors themselves, 'cause frankly, in most cases, they don't have this technology to do it, you know, they're trying to basically take snapshots of data, and they're struggling to convince their customers to move to their cloud. >> It becomes a key feature in platforms. >> Yes it does. >> So that's obviously what attracts sellers. What other things would attract sellers or partners, for you, what motivates them? Obviously the IP, clearly, is the number one, economics, what's the other value proposition?
>> The end goal isn't to move data to the cloud, the end goal is to move business processes to the cloud, and then be able to take advantage of the other value adds that already exist in the cloud. And so if you're saying, what's the benefit there, well, once you do that move, then you can sell into, clients with all your additional value adds. So that's really powerful, if you are stuck with this stage of "Eh, how do we actually migrate data to the cloud?" >> So IBM Think is coming up, what's your view of what's happening there, what are you guys going to be doing there, as are you, on the IBM side-- >> Yeah. >> Now you're on the other side of the table. You've been on both sides of the table. >> Yeah. >> So what's goin' on at Think, and how does WANdisco, vector, and certainly CUBE will be there. >> Yeah, we'll be there, so WANdisco is a sponsor of IBM Think as well, clearly, as I mentioned, we'll be talking about Big Replicate, which is our Hadoop replication offering, that's sold with IBM. The other one, as I mentioned, is Big SQL, so that's a new offering that we just announced this past month. So we'll be talking about that, and showing a number of great examples of how that actually works, so if you're going to be at Think, come by our booth, and check that out. In addition to that, I mean, clearly, IBM is also talking about multicloud and hybrid cloud, so hybrid data management, hybrid cloud is a big topic. You can expect to see, at IBM Think, a lot of conversations on the application side. In terms of, obviously with their acquisition of Red Hat, you can well imagine they're going to be talking a lot about the software stack, there. But I would say that, we'll be talking, and spending most of our time talking about, how to manage your data across different environments. >> Where's the product roadmap heading, I know you guys don't like to go into specifics in public- >> Yeah. 
>> Sensitive information, but, generally speaking, where's the main trendlines that you guys are going to be building on, obviously, cloud data, they'll come in together, good core competency there for WANdisco, what's next, what's the next level for you guys? >> So what's really fascinating, and I actually didn't realize this when I joined WANdisco, just to be completely transparent. WANdisco has a core piece of technology called DConE, Distributed Coordination Engine. It essentially is a form of blockchain, really, it's a consensus technology, it's an algorithm. And that's been their secret sauce since the founding of the company. And so they originally applied that to code, through source code management, and then only in this last few years they've applied it to data. So you can guess, at other areas that we might apply it to, and already this past year, we actually filed two patents, in the area of blockchain, or really, distributed ledger technology, as we're starting to hear it called in the actual enterprise that's using it. But you can expand that to any other enterprise asset, really. That's big, right, that has value, and that you want to manage across different environments, so you can imagine, lots of other assets that we could apply this to, not only code, not only data, not only ledgers, but what are the other assets? And so that's essentially what we're working on. >> Is that protectable IP the patents, so those are filed on the blockchain? >> Yeah, yeah. >> For instance? >> So DConE is certainly patented, I'm sure Jagane'll talk more about this. >> Yeah, we'll get into it. >> There's probably a handful of people in the world, and they might all be working at WANdisco at this point. (chuckles) Who actually know how that works, and it's essentially Paxos, which is a really gnarly problem to solve, a really difficult math problem. 
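For readers curious what "essentially Paxos" means in practice, here is a toy, single-round majority-quorum sketch in Python. It only illustrates the general Paxos idea (a promise phase, then an accept phase, with a value chosen once a majority of nodes accepts it); it is not WANdisco's patented DConE engine, and every class and function name below is invented for this example.

```python
# Toy majority-quorum consensus sketch (the basic Paxos-style idea).
# NOT WANdisco's DConE implementation -- an illustration only.

class Acceptor:
    """Grants promises and accepts proposals by ballot number."""
    def __init__(self):
        self.promised_ballot = -1
        self.accepted = None  # (ballot, value) once something is accepted

    def prepare(self, ballot):
        # Phase 1: promise to ignore proposals with lower ballot numbers.
        if ballot > self.promised_ballot:
            self.promised_ballot = ballot
            return True, self.accepted
        return False, None

    def accept(self, ballot, value):
        # Phase 2: accept unless a higher ballot has been promised.
        if ballot >= self.promised_ballot:
            self.promised_ballot = ballot
            self.accepted = (ballot, value)
            return True
        return False


def propose(acceptors, ballot, value):
    """A value is chosen once a majority of acceptors accept it."""
    majority = len(acceptors) // 2 + 1
    responses = [a.prepare(ballot) for a in acceptors]
    granted = [acc for ok, acc in responses if ok]
    if len(granted) < majority:
        return None  # could not gather a quorum of promises
    # If some acceptor already accepted a value, we must re-propose it,
    # which is what makes the protocol converge on a single decision.
    prior = [acc for acc in granted if acc is not None]
    if prior:
        value = max(prior)[1]  # value from the highest prior ballot
    votes = sum(a.accept(ballot, value) for a in acceptors)
    return value if votes >= majority else None


nodes = [Acceptor() for _ in range(5)]
chosen = propose(nodes, ballot=1, value="replicate-block-42")
print(chosen)  # chosen once a majority (3 of 5) accepts
```

Real engines like the one discussed in the interview run many such rounds concurrently and handle failures and recovery, which is where the genuinely hard math lives; this sketch shows only the quorum rule itself.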
And as David mentioned earlier, Google, the other smartest company in the world, published their paper on Spanner, and as you said, they used brute force, really, to solve the problem. Where we have a very elegant solution, using software, right? So it's a really great time to be at WANdisco, because I just see that there's so many applications of our technology, but, right now, we're mainly focused on what our customers are asking for. >> You've said a great quote, thanks Joe, final question for you, where do you see it going, WANdisco, what are your plans, do you have anything in mind, do you want to share anything notable, around what you're doing, and what you think WANdisco will be in a few years. >> We have an incredible team, as I mentioned, the people that are joining WANdisco, as David mentioned, I myself, not to say too much there, but, the new folks that have joined our Research and Development Team, but we've been making some great hires, to WANdisco. So I'm really excited about the team, I'm going, actually, to visit, we have a great team in Europe, in the UK, in the United Kingdom, so I'm going to go see them next week. But we have just the company culture is what drives me, I think that's just one of those hard things, really, to find. And so that's what I'm really excited about, so there's a lot of cool stuff happening there. You know, on that note, it's actually kind of funny, because on one of the articles that talked about live data for multicloud, asked the question, and her headline was "Are You Down to Boogie?" So, disco continues to be a great meme for us, with our name. (John chuckles) Unintentional, so, as a marketer, it's a pretty fun time to be at WANdisco. 
>> Seventies and eighties were great times, certainly I'm an eighties guy, Joel, thanks for comin' on, appreciate the update, Joel Horowitz, CMO, Chief Marketing Officer, WANdisco, really on a nice wave right now, cloud growth, data growth, all comin' together, real IP, lookin' forward to hearing more, what comes down the pipe for those guys, you'll see him at IBM Think. I'm John Furrier here, in the studios at Palo Alto, thanks for watching. (soaring orchestral music)

Published Date: Jan 23, 2019


SENTIMENT ANALYSIS:

ENTITIES

Entity | Category | Confidence
David | PERSON | 0.99+
Joel | PERSON | 0.99+
IBM | ORGANIZATION | 0.99+
Europe | LOCATION | 0.99+
WANdisco | ORGANIZATION | 0.99+
Alibaba | ORGANIZATION | 0.99+
Microsoft | ORGANIZATION | 0.99+
Joel Horwitz | PERSON | 0.99+
Joe | PERSON | 0.99+
John Furrier | PERSON | 0.99+
Joel Horowitz | PERSON | 0.99+
UK | LOCATION | 0.99+
2016 | DATE | 0.99+
David Richards | PERSON | 0.99+
Google | ORGANIZATION | 0.99+
AWS | ORGANIZATION | 0.99+
New York | LOCATION | 0.99+
United Kingdom | LOCATION | 0.99+
January 2019 | DATE | 0.99+
Data After Dark | ORGANIZATION | 0.99+
next week | DATE | 0.99+
Palo Alto, California | LOCATION | 0.99+
AMD | ORGANIZATION | 0.99+
Palo Alto | LOCATION | 0.99+
Are You Down to Boogie | TITLE | 0.99+
John | PERSON | 0.99+
first opportunity | QUANTITY | 0.99+
Two | QUANTITY | 0.99+
third | QUANTITY | 0.99+
Think | ORGANIZATION | 0.99+
two patents | QUANTITY | 0.99+
theCUBE | ORGANIZATION | 0.98+
one | QUANTITY | 0.98+
Datamere | ORGANIZATION | 0.98+
this year | DATE | 0.98+
First | QUANTITY | 0.98+
both sides | QUANTITY | 0.97+
each | QUANTITY | 0.97+
Multicloud | ORGANIZATION | 0.96+
Jagane | PERSON | 0.95+
IBM Think | ORGANIZATION | 0.95+
four major themes | QUANTITY | 0.94+
Big SQL | TITLE | 0.92+
Red Hat | ORGANIZATION | 0.92+
one vendor | QUANTITY | 0.92+
two majors | QUANTITY | 0.92+
Paxos | ORGANIZATION | 0.92+
DConE | ORGANIZATION | 0.91+
first | QUANTITY | 0.91+
multicloud | TITLE | 0.9+

Ben Sharma, Tony Fisher, Zaloni - BigData SV 2017 - #BigDataSV - #theCUBE


 

>> Announcer: Live from San Jose, California, it's The Cube, covering Big Data Silicon Valley 2017. (rhythmic music) >> Hey, welcome back, everyone. We're live in Silicon Valley for Big Data SV, Big Data Silicon Valley in conjunction with Strata + Hadoop. This is the week where it all happens in Silicon Valley around the emergence of Big Data as it goes to the next level. The Cube is actually on the ground covering it like a blanket. I'm John Furrier. My cohost, George Gilbert with Wikibon. And our next guest, we have two executives from Zaloni: Ben Sharma, who's the founder and CEO, and Tony Fisher, SVP of strategy. Guys, welcome back to The Cube. Good to see you. >> Thank you for having us back. >> You guys are great guests. You're in New York for Big Data NYC, and a lot is going on, certainly, here, and it's just getting kicked off with Strata + Hadoop, they got the sessions today, but you guys have already got some news out there. Give us the update. What's the big discussion at the show? >> So yeah, 2016 was a great year for us. A lot of growth. We tripled our customer base, and a lot of interest in data lake, as customers are going from, say, pilots and POCs into production implementations. And in conjunction with that, this week we launched a solution named Data Lake in a Box, appropriately, right? So what that means is we're bringing the full stack together to customers, so that we can get a data lake up and running in an eight-week time frame, with enterprise-grade data ingestion from their source systems hydrated into the data lake and ready for analytics. >> So is it a pretty big box, and is it waterproof? (all laughing) I mean, this is the big discussion now, pun intended. But the data lake is evolving, so I wanted to get your take on it. This has kind of been a theme that's been leading up and now front and center here on The Cube.
Already the data lake has changed, also we've heard, I think Dave Alante in New York said data swamp. But using the data is critical on a data lake. So as it goes to more mature model of leveraging the data, what are the key trends right now? What are you guys seeing? Because this is a hot topic that everyone is talking about. >> Well, that's a good distinction that we like to make, is the difference between a data swamp and a data lake. >> And a data lake is much more governed. It has the rigor, it has the automation, it has a lot of the concepts that people are used to from traditional architectures, only we apply them in the scale-out architecture. So we put together a maturity model that really maps out a customer's journey throughout the big data and the data lake experience. And each phase of this, we can see what the customer's doing, what their trends are and where they want to go, and we can advise to them the right way to move forward. And so a lot of the customers we see are kind of in kind of what we call the ignore stage. I'd say most of the people we talk to are just ignoring. They don't have things active, but they're doing a lot of research. They're trying to figure out what's next. And we want to move them from there. The next stage up is called store. And store is basically just the sandbox environment. "I'm going to stick stuff in there." "I'm going to hope something comes out of it." No collaboration. But then, moving forward, there's the managed phase, the automated phase, and the optimized phase. And our goal is to move them up into those phases as quickly as possible. And data lake in a box is an effort to do that, to leapfrog them into a managed data lake environment. >> So that's kind of where the swamp analogy comes in, because the data lake, the swamp is kind of dirty, where you can almost think, "Okay, the first step is store it." 
And then they get busy or they try to figure out how to operationalize it, and then it's kind of like, "Uh ..." So your point, they're trying to get to that. So you guys get 'em to that set up, and then move them quickly to value? Is that kind of the approach? >> Yeah. So, time to value is critical, right? So how do you reduce the time to insight from the time the data is produced by the data producer, till the time you can make the data available to the data consumer for analytics and downstream use cases. So that's kind of our core focus in bringing these solutions to the market. >> Dave and I often talk, and George always talks, about the value of data at the right time at the right place being the critical linchpin for the value, whether it's app-driven, or whatever. So the data lake, you never know what data in the data lake will need to be pulled out and put into either real time or an app. So you have to assume at any given moment there's going to be data value. >> Sure >> So that, conceptually, people can get that. But how do you make that happen? Because that's a really hard problem. How do you guys tackle that when a customer says, "Hey, I want to do the data lake. I've got to have the coverage. I got to know who's accessing stuff. But at the end of the day, I got to move the data to where it's valuable." >> Sure. So the approach we have taken is with an integrated platform with a common metadata layer. Metadata is the key. So, using this common metadata layer, being able to do managed ingestion from various different sources, being able to do data validation and data quality, being able to manage the life cycle of the data, being able to generate these insights about the data itself, so that you can use that effectively for data science or for downstream applications and use cases is critical, based on our experience of taking these applications from, say, a POC pilot phase into a production phase.
>> And what's the next step, once you guys get to that point with the metadata? Because, like, I get that, it's like everyone's got the metadata focus. Now, I'm the data engineer, the data NG or the geek, the supergeek and then you've got the data science, then the analysts, then there will probably be a new category, a bot or something AI will do something. But you can have a spectrum of applications on the data side. How do they get access to the metadata? Is it through the machine learning? Do you guys have anything unique there that makes that seamless or is that the end goal? >> Sure, do you want to take that? >> Yes sure, it's a multi-pronged answer, but I'll start and you can jump in. One of the things we provide as part of our overall platform is a product called Micah. And Micah is really the kind of on-ramp to the data. And all those people that you just named, we love them all, but their access to the data is through a self-service data preparation product, and key to that is the metadata repository. So, all the metadata is out there; we call it a catalog at that point, and so they can go in, look at the catalog, get a sense for the data, get an understanding for the form and function of the data, see who uses it, see where it's used, and determine if that's the data that they want, and if it is, they have the ability to refine it further, or they can put it in a shopping cart if they have access to it, they can get it immediately, they can refine it, if they don't have access to it, there's an automatic request that they can get access to it. And so it's a onramp concept, of having a card catalog of all the information that's out there, how it's being used, how it's been refined, to allow the end user to make sure that they've got the right data, they can be positioned for their ultimate application. 
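The catalog-as-on-ramp flow described here (browse the metadata, check the form and quality of a data set, then request or refine it) can be sketched as a minimal illustration. This is a hypothetical sketch, not the vendor's actual product API; every class, field, and dataset name below is invented for the example.

```python
# Hypothetical sketch of a common metadata layer with a searchable catalog.
# Names and fields are invented; this is not an actual product API.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class CatalogEntry:
    """Metadata captured at ingestion time for one data set."""
    name: str
    source_system: str
    schema: dict                  # column name -> type
    ingested_at: str = ""         # stamped when registered ("how fresh?")
    quality_score: float = 0.0    # e.g. fraction of rows passing validation
    tags: list = field(default_factory=list)

class MetadataCatalog:
    """One unified, searchable view over every registered data set."""
    def __init__(self):
        self._entries = {}

    def register(self, entry: CatalogEntry):
        # Managed ingestion stamps freshness metadata automatically.
        entry.ingested_at = datetime.now(timezone.utc).isoformat()
        self._entries[entry.name] = entry

    def search(self, tag: str):
        # Data consumers browse by tag before requesting access.
        return [e.name for e in self._entries.values() if tag in e.tags]

catalog = MetadataCatalog()
catalog.register(CatalogEntry(
    name="orders_raw",
    source_system="erp_extract",
    schema={"order_id": "string", "amount": "decimal"},
    quality_score=0.97,
    tags=["sales", "raw"],
))
print(catalog.search("sales"))  # -> ['orders_raw']
```

The point of the sketch is the shape of the idea: because every ingested data set passes through one metadata layer, a single catalog can answer the questions the speakers list (what are the attributes, what is the quality, how fresh is it) before anyone touches the data itself.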
>> And just to add to what Tony said, because we are using this common metadata layer, and capturing metadata at every instance, if you will, we are serving it up to the data consumers, using a rich catalog, so that a lot of our enterprise customers are now starting to create what they consider a data marketplace or a data portal within their organization, so that they're able to catalog not just the data that's in the data lake, but also data that's in other data stores. And provide one single unified view of these data sets, so that your data scientists can come in and see: is this a data set that I can use for my model building? What are the different attributes of this data set? What is the quality of the data? How fresh is the data? And those kinds of traits, so that they are effective in their analytical journey. >> I think that's the key thing that's interesting to me, is that you're seeing the big data explosion over the past ten years, eight years, we've been covering The Cube since the Hadoop world started. But now, it's the data set world, so it's a big data set in this market. The data sets are the key because that's what data scientists want to wrangle around with, and sling data sets with whatever tooling they want to use. Is that kind of the same trend that you guys see? >> That's correct. And also what we're seeing in the marketplace is that customers are moving from a single architecture to a distributed architecture, where they may have a hybrid environment with some things being instantiated in the cloud, some things being on-prem. So how do you now provide a unified interface across these multiple environments, in a governed way, so that the right people have access to the right data, and it's not the data swamp? >> Okay, so let's go back to the maturity model because I like that framework. So now you've just complicated the heck out of it.
Cause now you've got cloud, and then on-prem, and then now, how do you put that prism of maturity model on now hybrid, so how does that cross-connect there? And a second follow-up to that is, where are the customers on this progress bar? I'm sure they're different by customer but, so, maturity model to the hybrid, and then trends in the customer base that you're seeing? >> Alright, I'll take the second one, and then you can take the first one, okay? So, the vast majority of the people that we work with, and the people, the prospects, customers, analysts we've talked to, other industry dignitaries, they put the vast majority of the customers in the ignore stage. Really just doing their research. So a good 50% plus of most organizations are still in that stage. And then, the data swamp environment, that "I'm using it to store stuff, hopefully I'll get something good out of it." That's another 25% of the population. And so, most of the customers are there, and we're trying to move them kind of rapidly up and into a managed and automated data lake environment. The other trend along these lines that we're seeing, that's pretty interesting, is the emergence of IT in the big data world. It used to be a business user's world, and business users built these sandboxes, and business users did what they wanted to. But now, we see organizations that are really starting to bring IT into the fold, because they need the governance, they need the automation, they need the type of rigor that they're used to in other data environments, and that has been lacking in the big data environment. >> And you've got IT cracking the code on the IOT side, which has created another dimension of complexity. On the numbers of the 50% that ignore, is that profile more for Fortune 1000? >> It's larger companies, it's Fortune and Global 2000.
Got it, okay, and in terms of the hybrid maturity model, how's that, and add a third dimension, IOT, we've got a multi-dimensional chess game going here. >> I think the way we think about it is that there are different patterns of data sets coming in. So they could be batched, they could be files, or database extracts, or they could be streams, right? So as long as you think about a converged architecture that can handle these different patterns, then you can map different use cases, whether they are IOT and streaming use cases versus what we are seeing is that a lot of companies are trying to replace their operational analytics platforms with a data lake environment, and they're building their operational analytics on top of the data lake, correct? So you need to think more from an abstraction layer: how do you abstract it out? Because one of the challenges that we see customers facing is that they don't want to get sticky with one cloud service provider, because they may have multiple cloud service providers. >> John: It's a multi-cloud world right now. >> So how do you leverage that, where you have one cloud service provider in one geo, another cloud service provider in another geo, and still be able to have an abstraction layer on top of it, so that you're building applications? >> So do you guys provide that data layer across that abstraction? >> That is correct, yes, so we leverage the ecosystem, but what we do is add the data management and data governance layer; we provide that abstraction, so that you can be on-prem, you can be in cloud service provider one, or cloud service provider two. You still have the same controls and same governance functions as you build your data lake environment. >> And this is consistent with some of the Cube interviews we had all day today, and other Cube interviews, where when you had the Cloud, you're renting basically, but you own your data. You get to have a nice ...
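The abstraction-layer idea discussed here (the same governance and controls no matter which cloud's storage holds the data) can be sketched roughly as follows. This is a hypothetical illustration, not the vendor's actual API; the class names are invented, and the in-memory backend stands in for real object stores such as S3 or Azure Blob Storage.

```python
# Hypothetical sketch: one storage interface, many cloud backends,
# with governance layered above the store. Not an actual product API.
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    """One interface; the application never sees which cloud it is."""
    @abstractmethod
    def put(self, key: str, data: bytes): ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(ObjectStore):
    # Stand-in backend; a real deployment would wrap each provider's
    # object-storage SDK behind this same interface.
    def __init__(self):
        self._blobs = {}
    def put(self, key, data):
        self._blobs[key] = data
    def get(self, key):
        return self._blobs[key]

class GovernedLake:
    """Governance sits above the store, so policy is identical in
    every geo and on every cloud service provider."""
    def __init__(self, store: ObjectStore, allowed_users: set):
        self.store = store
        self.allowed = allowed_users

    def read(self, user: str, key: str) -> bytes:
        # Access control is enforced here, not in the backend,
        # so swapping providers never changes who can see what.
        if user not in self.allowed:
            raise PermissionError(f"{user} lacks access to {key}")
        return self.store.get(key)

lake = GovernedLake(InMemoryStore(), allowed_users={"analyst"})
lake.store.put("datasets/orders.parquet", b"...")
data = lake.read("analyst", "datasets/orders.parquet")
```

The design choice the speakers describe is visible in the shape of the sketch: because `GovernedLake` depends only on the abstract interface, the same controls apply whether the backend is provider one in one geo or provider two in another.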
And that metadata seems to be the key, that's the key, right? For everything. >> That's right. And now what we're seeing is that a lot of our enterprise customers are looking at bringing in some of the public cloud infrastructure into their on-prem environment as it becomes available in appliances and things like that, right? So how do you then make sure that whatever you're doing in a non-enterprise cloud environment you are also able to extend to the enterprise-- >> And the consequence to the enterprise is that the enterprise has multiple jobs, if they don't have a consistent data layer ... >> Sure, yeah. >> It's just more redundancy. >> Exactly. >> Not redundancy, duplication actually. >> Yeah, duplication and difficulty of rationalizing it together. >> So let me drill down into a little more detail on the transition between these sort of maturity phases, and then the movement into production apps. I'm curious to know, we've heard Tableau, Excel, Power BI, Qlik I guess, being-- sort of adapting to being front ends to big data. But they don't, for their experience to work they can't really handle big data sets. So you need the MPP SQL database on the data lake. And I guess the question there is, is there value to be gotten, or measurable value to be gotten, just from turning the data lake into, you know, an interactive BI kind of platform? And sort of as the first step along that maturity model. >> One of the patterns we are seeing is that the serving layer is becoming more and more mature in the data lake, so that earlier it used to be mainly batch types of workloads. Now, with MPP engines running on the data lake itself, you are able to connect your existing BI applications, whether it's Tableau, Qlik, Power BI, and others, to these engines so that you are able to get low-latency query response times and are able to slice-and-dice your data sets in the data lake itself. >> But you're essentially still, you have to sample the data.
You can't handle the full data set unless you're working with something like Zoom Data. >> Yeah, so there are physical limitations obviously. And then there are also this next generation of BI tools which work in a converged manner in the data lake itself. So there's like Zoom Data, Arcadia, and others that are able to kind of run inside the data lake itself instead of you having to have an external environment like the other BI tools, so we see that as a pattern. But if you already are an enterprise, you have on board a BI platform, how do you leverage that with the data lake as part of the next-generation architecture is a key trend that we are seeing. >> So that your metadata helps make that from swamp to curated data lake. >> That's right, and not only that what we have done, as Tony was mentioning, in our Micah product we have a self-service catalog and then we provide a shopping cart experience where you can actually source data sets into the shopping cart, and we let them provision a sandbox. And when they provision the sandbox, they can actually launch Tableau or whatever the BI tool of choice is on that sandbox, so that they can actually-- and that sandbox could exist in the data lake or it could exist on a relational data store or an MPP data store that's outside of the data lake. That's part of your modern data architecture. >> But further to your point, if people have to throw out all of their decision support applications and their BI applications in order to change their data infrastructure, they're not going to do it. >> Understood. >> So you have to make that environment work and that's what Ben's referring to with a lot of the new accelerator tools and things that will sit on top of the data lake. >> Guys, thanks so much for coming on The Cube. Really appreciate it. I'll give you guys the final word in the segment ... What do you expect this week? I mean, obviously, we've been seeing the consolidation. 
You're starting to see the swim lanes with Spark and open source, and you see the cloud and IOT colliding, there's a huge intersection with deep learning, AI is certainly hyped up now beyond all recognition but it's essentially deep learning. Neural networks meets machine learning. That's been around before, but now freely available with cloud and compute. And so kind of an interesting dynamic that's rockin' the big data world. Your thoughts on what we're going to see this week and how that relates to the industry? >> I'll take a stab at it and you may feel free to jump in. I think what we'll see is that a lot of customers that have been playing with big data for a couple of years are now getting to a point where what worked for one or two use cases now needs to be scaled out and provided at an enterprise scale. So they're looking at a managed and a governance layer to put on top of the platform. So they can enable machine learning and AI and all those use cases, because business is asking for them. Right? Business is asking for how they can bring in TensorFlow and run it on the data lake itself, right? So we see those kinds of requirements coming up more and more frequently. >> Awesome. Tony? >> What he said. >> And enterprise readiness certainly has to be table-- there's a lot of table stakes in the enterprise. It's not like, easy to get into, you can see Google kind of just putting their toe in the water with the Google cloud, TensorFlow, great highlight they got Spanner, so all these other things like latency rearing their heads again. So these are all kind of table stakes.
>> I need some machine learning and some AI, so does George and we need machine learning to watch the machine learn, and then algorithmists for algorithms. It's a crazy world, exciting time for us. >> Are we going to have a bot next time when we come here? (all laughing) >> We're going to chat off of messenger, we just came from south by southwest. Guys, thanks for coming on The Cube. Great insight and congratulations on the continued momentum. This is The Cube breakin' it down with experts, CEOs, entrepreneurs, all here inside The Cube. Big Data Sv, I'm John for George Gilbert. We'll be back after this short break. Thanks! (upbeat electronic music)

Published Date: Mar 14, 2017


SENTIMENT ANALYSIS:

ENTITIES

Entity | Category | Confidence
George Gilbert | PERSON | 0.99+
Tony Fischer | PERSON | 0.99+
one | QUANTITY | 0.99+
Tony | PERSON | 0.99+
Dave Alante | PERSON | 0.99+
Tony Fisher | PERSON | 0.99+
George | PERSON | 0.99+
Ben Sharma | PERSON | 0.99+
Dave | PERSON | 0.99+
New York | LOCATION | 0.99+
John Furrier | PERSON | 0.99+
George Gilbert | PERSON | 0.99+
John | PERSON | 0.99+
Silicon Valley | LOCATION | 0.99+
Zeloni | PERSON | 0.99+
Zaloni | PERSON | 0.99+
Silicon Valley | LOCATION | 0.99+
50% | QUANTITY | 0.99+
San Jose, California | LOCATION | 0.99+
25% | QUANTITY | 0.99+
Google | ORGANIZATION | 0.99+
eight weeks | QUANTITY | 0.99+
two executives | QUANTITY | 0.99+
first step | QUANTITY | 0.99+
Tableau | TITLE | 0.99+
eight years | QUANTITY | 0.99+
today | DATE | 0.99+
Big Data | ORGANIZATION | 0.98+
two | QUANTITY | 0.98+
this week | DATE | 0.98+
second one | QUANTITY | 0.98+
One | QUANTITY | 0.98+
first one | QUANTITY | 0.98+
each phase | QUANTITY | 0.98+
Ben | PERSON | 0.97+
NYC | LOCATION | 0.97+
20-16 | DATE | 0.97+
Cloud | TITLE | 0.97+
Strata | ORGANIZATION | 0.97+
Big Data Sv | ORGANIZATION | 0.97+
second | QUANTITY | 0.96+
two use cases | QUANTITY | 0.96+
Cube | ORGANIZATION | 0.96+
third | QUANTITY | 0.94+
The Cube | ORGANIZATION | 0.91+
single architecture | QUANTITY | 0.91+
Power | TITLE | 0.9+
Micah | LOCATION | 0.85+
Arcadia | TITLE | 0.83+
Zoom Data | TITLE | 0.83+
Big Data SV | ORGANIZATION | 0.82+
Micah | PERSON | 0.81+
Click | TITLE | 0.8+
Strata-Hadoob | TITLE | 0.8+
Zoom Data | TITLE | 0.78+
Fortune | ORGANIZATION | 0.78+
Spark | TITLE | 0.78+
Power BI | TITLE | 0.78+
#theCUBE | ORGANIZATION | 0.77+
one geo | QUANTITY | 0.76+
one single unified | QUANTITY | 0.75+
Big Data Silicon Valley | ORGANIZATION | 0.72+
Bond | ORGANIZATION | 0.72+
Hadoob | ORGANIZATION | 0.72+
POCs | ORGANIZATION | 0.67+
PRIM | TITLE | 0.66+
Data | ORGANIZATION | 0.65+
lake | ORGANIZATION | 0.6+
Pilot | ORGANIZATION | 0.58+
XL | TITLE | 0.58+
of years | QUANTITY | 0.56+
Global | ORGANIZATION | 0.55+