Predictions 2022: Top Analysts See the Future of Data
(bright music) >> In the 2010s, organizations became keenly aware that data would become the key ingredient to driving competitive advantage, differentiation, and growth. But to this day, putting data to work remains a difficult challenge for many, if not most organizations. Now, as the cloud matures, it has become a game changer for data practitioners by making cheap storage and massive processing power readily accessible. We've also seen better tooling in the form of data workflows, streaming, machine intelligence, AI, developer tools, security, observability, automation, new databases and the like. These innovations accelerate data proficiency, but at the same time, they add complexity for practitioners. Data lakes, data hubs, data warehouses, data marts, data fabrics, data meshes, data catalogs, data oceans are forming, they're evolving and exploding onto the scene. So in an effort to bring perspective to the sea of optionality, we've brought together the brightest minds in the data analyst community to discuss how data management is morphing and what practitioners should expect in 2022 and beyond. Hello everyone, my name is Dave Vellante with theCUBE, and I'd like to welcome you to a special CUBE presentation, Analyst Predictions 2022: The Future of Data Management. We've gathered six of the best analysts in data and data management who are going to present and discuss their top predictions and trends for 2022 and the first half of this decade. Let me introduce our six power panelists. Sanjeev Mohan is former Gartner Analyst and Principal at SanjMo. Tony Baer, Principal at dbInsight. Carl Olofson is well-known Research Vice President with IDC. Dave Menninger is Senior Vice President and Research Director at Ventana Research. Brad Shimmin, Chief Analyst, AI Platforms, Analytics and Data Management at Omdia. And Doug Henschen, Vice President and Principal Analyst at Constellation Research. Gentlemen, welcome to the program and thanks for coming on theCUBE today. >> Great to be here. >> Thank you. >> All right, here's the format we're going to use. As moderator, I'm going to call on each analyst separately, who then will deliver their prediction or mega trend, and then in the interest of time management and pace, two analysts will have the opportunity to comment. If we have more time, we'll elongate it, but let's get started right away. Sanjeev Mohan, please kick it off. You want to talk about governance, go ahead sir. >> Thank you Dave. I believe that data governance, which we've been talking about for many years, is now not only going to be mainstream, it's going to be table stakes. And in all the things that you mentioned, you know, the data oceans, data lakes, lake houses, data fabrics, meshes, the common glue is metadata. If we don't understand what data we have and how we are governing it, there is no way we can manage it. So we saw Informatica go public last year after a hiatus of six years. I'm predicting that this year we see some more companies go public. My bet is on Collibra, most likely, and maybe Alation, we'll see go public this year. I'm also predicting that the scope of data governance is going to expand beyond just data. It's not just data and reports. We are going to see more transformations, like Spark jobs, Python, even Airflow. We're going to see more streaming data, so the Kafka schema registry, for example. We will see AI models become part of this whole governance suite.
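To make the metadata-as-glue point concrete, here is a purely illustrative Python sketch, not any vendor's actual schema, of the kind of record a governance catalog might keep once Spark jobs, Kafka topics and AI models join tables and reports in its scope; every name and field below is hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class CatalogEntry:
    """One asset tracked by a governance catalog (hypothetical fields)."""
    name: str                # e.g. "sales.orders"
    asset_type: str          # "table", "spark_job", "kafka_topic", "ai_model"...
    owner: str
    upstream: list = field(default_factory=list)   # lineage: what it's built from
    quality_checks: list = field(default_factory=list)

def impacted_by(asset, catalog):
    """Impact analysis: every asset that lists `asset` as an upstream source."""
    return [e.name for e in catalog if asset.name in e.upstream]

orders = CatalogEntry("sales.orders", "table", "sales-domain",
                      upstream=["kafka.orders_raw", "spark_job.clean_orders"])
churn_model = CatalogEntry("churn_model_v3", "ai_model", "data-science",
                           upstream=["sales.orders"])

print(impacted_by(orders, [orders, churn_model]))   # ['churn_model_v3']
```

The point of the sketch is only that once jobs, topics and models share one metadata back plane, lineage and impact analysis become simple graph walks over these records.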
So the governance suite is going to be very comprehensive, very detailed lineage, impact analysis, and then even expand into data quality. We've already seen that happen with some of the tools, where they are buying these smaller companies and bringing in data quality monitoring and integrating it with metadata management, data catalogs, also data access governance. So what we are going to see is that once the data governance platforms become the key entry point into these modern architectures, I'm predicting that the usage, the number of users of a data catalog, is going to exceed that of a BI tool. That will take time, but we've already seen that trajectory. Right now if you look at BI tools, I would say there are a hundred users of a BI tool to one user of a data catalog. And I see that evening out over a period of time, and at some point data catalogs will really become the main way for us to access data. The data catalog will help us visualize data, but if we want to do more in-depth analysis, it'll be the jumping-off point into the BI tool, the data science tool, and that is the journey I see for the data governance products. >> Excellent, thank you. Some comments? Maybe Doug, a lot of things to weigh in on there, maybe you can comment. >> Yeah, Sanjeev, I think you're spot on with a lot of the trends. The one disagreement: I think it's really still far from mainstream. As you say, we've been talking about this for years; it's like God, motherhood, apple pie, everyone agrees it's important, but too few organizations are really practicing good governance because it's hard and because the incentives have been lacking. I think one thing that deserves mention in this context is ESG mandates and guidelines, these are environmental, social and governance regs and guidelines. We've seen the environmental regs and guidelines imposed in industries, particularly the carbon-intensive industries. We've seen the social mandates, particularly diversity, imposed on suppliers by companies that are leading on this topic. We've seen governance guidelines now being imposed by banks on investors. So these ESGs are presenting new carrots and sticks, and it's going to demand more solid data. It's going to demand more detailed reporting and solid reporting, tighter governance. But we're still far from mainstream adoption. We have a lot of, you know, best-of-breed niche players in the space. I think the signs that it's going to be more mainstream are starting with things like Azure Purview, Google Dataplex; the big cloud platform players seem to be upping the ante and starting to address governance. >> Excellent, thank you Doug. Brad, I wonder if you could chime in as well. >> Yeah, I would love to be a believer in data catalogs. But to Doug's point, I think that it's going to take some more pressure for that to happen. I recall metadata being something every enterprise thought they were going to get under control when we were working on service-oriented architecture back in the nineties, and that didn't happen quite the way we anticipated. And so to Sanjeev's point, it's because it is really complex and really difficult to do. My hope is that, you know, we won't sort of, how do I put this? Fade out into this nebula of domain catalogs that are specific to individual use cases, like Purview for getting data quality right, or for data governance and cybersecurity. And instead we have some tooling that can actually be adaptive to gather metadata to create something. And I know it's important to you, Sanjeev, and that is this idea of observability.
If you can get enough metadata without moving your data around, but understanding the entirety of a system that's running on this data, you can do a lot to help with the governance that Doug is talking about. >> So I just want to add that data governance, like many other initiatives, did not succeed; even AI went into an AI winter, but that's a different topic. A lot of these things did not succeed because, to your point, the incentives were not there. I remember when Sarbanes-Oxley had come onto the scene, if a bank did not do Sarbanes-Oxley, they were very happy to pay a million-dollar fine. That was like, you know, pocket change for them, instead of doing the right thing. But I think the stakes are much higher now. With GDPR, the flood gates opened. Now, you know, California has CCPA, but even CCPA is being outdated by CPRA, which is much more GDPR-like. So we are very rapidly entering a space where pretty much every major country in the world is coming up with its own compliance and regulatory requirements; data residency is becoming really important. And I think we are going to reach a stage where it won't be optional anymore. So whether we like it or not. And I think the reason data catalogs were not successful in the past is because we did not have the right focus on adoption. We were focused on features, and these features were disconnected, very hard for business to adopt. These were built by IT people for IT departments to take a look at technical metadata, not business metadata. Today the tables have turned. CDOs are driving this initiative, regulatory compliance is beating down hard, so I think the time might be right. >> Yeah so guys, we have to move on here. But there's some real meat on the bone here, Sanjeev. I like the fact that you called out Collibra and Alation, so we can look back a year from now and say, okay, he made the call, he stuck with it. And then the ratio of BI tools to data catalogs, that's another sort of measurement that we can take, even with some skepticism there; that's something that we can watch. And I wonder if someday we'll have more metadata than data. But I want to move to Tony Baer. You want to talk about data mesh, and speaking, you know, coming off of governance, I mean, wow, the whole concept of data mesh is decentralized data, and then governance becomes, you know, a nightmare there, but take it away, Tony. >> I'll put it this way: data mesh, you know, the idea, at least as proposed by ThoughtWorks, was put forward at least a couple of years ago, and the press has been almost uniformly uncritical. A good reason for that is all the problems that Sanjeev and Doug and Brad were just speaking about, which is that we have all this data out there and we don't know what to do about it. Now, that's not a new problem. That was a problem we had with enterprise data warehouses, it was a problem when we had Hadoop data clusters, and it's even more of a problem now that data is out in the cloud, where the data is not only in your data lake, it's not only in S3, it's all over the place. And it's also including streaming, which I know we'll be talking about later. So the data mesh was a response to that, the idea being, who are the folks that really know best about governance? It's the domain experts. So data mesh was basically an architectural pattern and a process. My prediction for this year is that data mesh is going to hit cold, hard reality.
Because if you do a Google search, basically the published work, the articles on data mesh have been largely, you know, pretty uncritical so far, basically lauding it as a very revolutionary new idea. I don't think it's that revolutionary, because we've talked about ideas like this before. Brad, you and I met years ago when we were talking about SOA and decentralizing all of this, but it was at the application level. Now we're talking about it at the data level. And now we have microservices. So there's this thought: if we're deconstructing apps in cloud native to microservices, why don't we think of data in the same way? My sense this year is that, you know, this has been a very active search, if you look at Google search trends, and that now companies, like enterprises, are going to look at this seriously. And as they look at it seriously, it's going to attract its first real hard scrutiny, it's going to attract its first backlash. That's not necessarily a bad thing. It means that it's being taken seriously. The reason why I think that you'll start to see basically the cold, hard light of day shine on data mesh is that it's still a work in progress. You know, this idea is basically a couple of years old and there are still some pretty major gaps. The biggest gap is in the area of federated governance. Now, federated governance itself is not a new issue. With federated governance, we're still figuring out how we can strike the balance between, let's say, consistent enterprise policy and consistent enterprise governance on the one hand, and the groups that understand the data on the other; how do we balance the two? There's a huge gap there in practice and knowledge. Also, to a lesser extent, there's a technology gap, which is basically in the self-service technologies that will help teams essentially govern data, you know, through the full life cycle: from selecting the data, to building the pipelines, to determining your access control, looking at quality, looking at whether the data is fresh or whether it's trending off course. So my prediction is that it will receive its first harsh scrutiny this year. You are going to see some organizations and enterprises declare premature victory when they build some federated query implementations. You're going to see vendors start to data mesh-wash their products; anybody in the data management space, whether it's basically a pipelining tool, whether it's ELT, whether it's a catalog or a federated query tool, they're all going to start promoting the fact of how they support this. Hopefully nobody's going to call themselves a data mesh tool, because data mesh is not a technology. We're going to see one other thing come out of this, and this harks back to the metadata that Sanjeev was talking about, and the catalog he was just talking about. Which is that there's going to be a renewed focus on metadata. And I think that's going to spur interest in data fabrics. Now, data fabrics are pretty vaguely defined, but if we just take the most elemental definition, which is a common metadata back plane, I think that if anybody is going to get serious about data mesh, they need to look at the data fabric, because at the end of the day, we all need to read from the same sheet of music.
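A rough Python sketch of the federated-governance idea Tony describes, written against no real data mesh tooling; the descriptor fields and policy are invented for illustration. The domain team publishes the data product; a shared, centrally defined policy checks it.

```python
# A domain-owned "data product" descriptor (all names hypothetical).
data_product = {
    "domain": "payments",                   # the team that knows the data best
    "name": "settled_transactions",
    "output_ports": ["s3://payments/settled/", "warehouse.payments.settled"],
    "sla_freshness_minutes": 15,
    "pii_fields": ["card_holder"],
    "masked_fields": ["card_holder"],
}

# Enterprise-wide policy, set once, applied to every domain's products.
enterprise_policy = {"pii_must_be_masked": True}

def passes_federated_governance(product, policy):
    """Central policy, local ownership: the platform verifies each product."""
    if policy["pii_must_be_masked"]:
        return all(f in product["masked_fields"] for f in product["pii_fields"])
    return True

print(passes_federated_governance(data_product, enterprise_policy))  # True
```

The gap Tony points to is exactly the part this sketch waves away: who writes the central policy, and how the platform enforces it consistently across domains.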
>> So thank you Tony. Dave Menninger, I mean, one of the things that people like about data mesh is it pretty crisply articulates some of the flaws in today's organizational approaches to data. What are your thoughts on this? >> Well, I think we have to start by defining data mesh, right? The term is already getting corrupted, right? Tony said it's going to see the cold, hard light of day. And there's a problem right now that there are a number of overlapping terms that are similar but not identical. So we've got data virtualization, data fabric, excuse me for a second. (clears throat) Sorry about that. Data virtualization, data fabric, data federation, right? So I think that it's not really clear what each vendor means by these terms. I see data mesh and data fabric becoming quite popular. I've interpreted data mesh as referring primarily to the governance aspects, as originally intended and specified. But that's not the way I see vendors using it. I see vendors using it much more to mean data fabric and data virtualization. So I'm going to comment on the group of those things. I think the group of those things is going to happen. They're going to happen, they're going to become more robust. Our research suggests that a quarter of organizations are already using virtualized access to their data lakes, and another half, so a total of three quarters, will eventually be accessing their data lakes using some sort of virtualized access. Again, whether you define it as mesh or fabric or virtualization isn't really the point here. But this notion that there are different elements of data, metadata and governance within an organization, they all need to be managed collectively. The interesting thing is when you look at the satisfaction rates of those organizations using virtualization versus those that are not, it's almost double: 68%, I'm sorry, 79% of organizations that were using virtualized access expressed satisfaction with their access to the data lake. Only 39% expressed satisfaction if they weren't using virtualized access. >> Oh, thank you Dave. Sanjeev, we've just got about a couple of minutes on this topic, but I know you're speaking, or maybe you've already spoken, on a panel with Zhamak Dehghani, who sort of invented the concept. Governance obviously is a big sticking point, but what are your thoughts on this? You're on mute. (panelists chuckling) >> So my message to Zhamak and to the community is, as opposed to what they've said, let's not define it further. We spent a whole year defining it; there are four principles: domain, product, data infrastructure, and governance. Let's take it to the next level. I get a lot of questions on what is the difference between data fabric and data mesh, and I'm like, I can't compare the two, because data mesh is a business concept, data fabric is a data integration pattern. How do you compare the two? You have to bring data mesh a level down. So to Tony's point, I'm on a warpath in 2022 to take it down to what does a data product look like, how do we handle shared data across domains, and governance. And I think we are going to see more of that in 2022, this operationalization of data mesh. >> I think we could have a whole hour on this topic, couldn't we? Maybe we should do that. But let's move on to Carl. So Carl, you're a database guy, you've been around that block for a while now. You want to talk about graph databases? Bring it on. >> Oh yeah. Okay, thanks.
So I regard graph database as basically the next truly revolutionary database management technology. I've been looking at forecasts for the graph database market, which of course we haven't defined yet, so obviously I have a little wiggle room in what I'm about to say. But this market will grow by about 600% over the next 10 years. Now, 10 years is a long time. But over the next five years, we expect to see gradual growth as people start to learn how to use it. The problem is not that it's not useful, it's that people don't know how to use it. So let me explain, before I go any further, what a graph database is, because some of the folks on the call may not know what it is. A graph database organizes data according to a mathematical structure called a graph. The graph has elements called nodes and edges. So a data element drops into a node, the nodes are connected by edges, the edges connect one node to another node. Combinations of edges create structures that you can analyze to determine how things are related. In some cases, the nodes and edges can have properties attached to them, which add additional informative material that makes it richer; that's called a property graph. There are two principal use cases for graph databases. There are semantic graphs, which are used to break down human language texts into semantic structures. Then you can search it, organize it and answer complicated questions. A lot of AI is aimed at semantic graphs. Another kind is the property graph that I just mentioned, which has a dazzling number of use cases. I want to just point out, as I talk about this, people are probably wondering, well, we have relational databases, isn't that good enough? So a relational database defines, it supports what I call definitional relationships. That means you define the relationships in a fixed structure. The data drops into that structure; there's a value, a foreign key value, that relates one table to another, and that value is fixed. You don't change it. If you change it, the database becomes unstable; it's not clear what you're looking at. In a graph database, the system is designed to handle change so that it can reflect the true state of the things that it's being used to track. So let me just give you some examples of use cases for this. They include entity resolution, data lineage, social media analysis, Customer 360, fraud prevention. There's cybersecurity; supply chain is a big one actually. There is explainable AI, and this is going to become important too, because a lot of people are adopting AI. But they want the system, after the fact, to say, how did the AI system come to that conclusion? How did it make that recommendation? Right now we don't have really good ways of tracking that. Machine learning in general, social networks, I already mentioned that. And then we've got, oh gosh, we've got data governance, data compliance, risk management. We've got recommendation, we've got personalization, anti-money laundering, that's another big one, identity and access management, network and IT operations is already becoming a key one, where you actually have mapped out your operation, you know, whatever it is, your data center, and you can track what's going on as things happen there, root cause analysis, fraud detection is a huge one.
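As a purely illustrative aside, here is a minimal sketch of the node-and-edge model Carl describes, using the networkx library as a stand-in for a real graph database; the data and the fraud-style question are invented.

```python
import networkx as nx

# A tiny property graph: nodes and edges both carry attributes.
g = nx.DiGraph()
g.add_node("acct_1", kind="account", owner="alice")
g.add_node("acct_2", kind="account", owner="bob")
g.add_node("device_9", kind="device")
g.add_edge("acct_1", "device_9", rel="logged_in_from")
g.add_edge("acct_2", "device_9", rel="logged_in_from")

# Fraud-style question: which accounts share a device? In a relational
# database this is a self-join on a fixed foreign key; here it is a
# neighborhood lookup that survives change in the data.
print(list(g.predecessors("device_9")))    # ['acct_1', 'acct_2']

# A recursive relationship (who reports to whom), which takes recursive
# SQL or application code in a relational system:
org = nx.DiGraph([("carol", "dan"), ("dan", "erin")])   # manager -> report
print(sorted(nx.descendants(org, "carol")))             # ['dan', 'erin']
```

In a production graph database the same questions would be asked declaratively in a language like Cypher, Gremlin or SPARQL, which is the language disparity the panel returns to below.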
A number of major credit card companies use graph databases for fraud detection, risk analysis, tracking and tracing, churn analysis, next best action, what-if analysis, impact analysis, entity resolution, and I would add one other thing, or just a few other things, to this list: metadata management. So Sanjeev, here you go, this is your engine. Because I was in metadata management for quite a while in my past life, and one of the things I found was that none of the data management technologies that were available to us could efficiently handle metadata, because of the kinds of structures that result from it, but graphs can, okay? Graphs can do things like say, this term in this context means this, but in that context it means that, okay? Things like that. And in fact, logistics management, supply chain. And also because it handles recursive relationships; by recursive relationships I mean objects that own other objects that are of the same type. You can do things like a bill of materials, you know, like a parts explosion. Or you can do an HR analysis, who reports to whom, how many levels up the chain, and that kind of thing. You can do that with relational databases, but it takes a lot of programming. In fact, you can do almost any of these things with relational databases, but the problem is you have to program it. It's not supported in the database. And whenever you have to program something, that means you can't trace it, you can't define it, you can't publish it in terms of its functionality, and it's really, really hard to maintain over time. >> Carl, thank you. I wonder if we could bring Brad in. I mean, Brad, I'm sitting here wondering, okay, is this incremental to the market? Is it disruptive, a replacement? What are your thoughts on this space? >> It's already disrupted the market. I mean, like Carl said, go to any bank and ask them, are you using graph databases to get fraud detection under control? And they'll say, absolutely, that's the only way to solve this problem. And it is, frankly. And it's the only way to solve a lot of the problems that Carl mentioned. And that is, I think, its Achilles' heel in some ways. Because, you know, it's like finding the best way to cross the seven bridges of Koenigsberg. You know, it's always going to kind of be tied to those use cases, because it's really special and it's really unique, and because it's special and it's unique, it still, unfortunately, kind of stands apart from the rest of the community that's building, let's say, AI outcomes, as a great example here. Graph databases and AI, as Carl mentioned, are like chocolate and peanut butter. But technologically, they don't know how to talk to one another; they're completely different. And you know, you can't just stand up SQL and query them. You've got to learn, what is it, Carl? SPARQL. Yeah, thank you, to actually get to the data in there. And if you're going to scale that data, that graph database, especially a property graph, if you're going to do something really complex, like try to understand, you know, all of the metadata in your organization, you might just end up with, you know, a graph database winter, like we had the AI winter, simply because you run out of performance to make the thing happen. So I think it's already disrupted, but we need to treat it like a first-class citizen in the data analytics and AI community. We need to bring it into the fold.
We need to equip it with the tools it needs to do the magic it does, and to do it not just for specialized use cases, but for everything. 'Cause I'm with Carl. I think it's absolutely revolutionary. >> Brad identified the principal Achilles' heel of the technology, which is scaling. When these things get large and complex enough that they spill over what a single server can handle, you start to have difficulties, because the relationships span things that have to be resolved over a network, and then you get network latency and that slows the system down. So that's still a problem to be solved. >> Sanjeev, any quick thoughts on this? I mean, I think metadata on the word cloud is going to be the largest font, but what are your thoughts here? >> I want to make sure people don't associate me with only metadata, so I want to talk about something slightly different. db-engines.com has done an amazing job. I think almost everyone knows that they chronicle all the major databases that are in use today. In January of 2022, there are 381 databases on their ranked list of databases. The largest category is RDBMS. The second largest category is actually divided into two: property graphs and RDF graphs. These two together make up the second largest number of databases. So talking about Achilles' heels, this is a problem. The problem is that there are so many graph databases to choose from. They come in different shapes and forms. To Brad's point, there are so many query languages. In RDBMS it's SQL, and we know the story, but here we've got Cypher, we've got Gremlin, we've got GQL, and then there are proprietary languages. So I think there's a lot of disparity in this space. >> Well, excellent. All excellent points, Sanjeev, if I must say. And that is a problem: the languages need to be sorted out and standardized. People need to have a roadmap as to what they can do with it. Because, as you say, you can do so many things, and so many of those things are unrelated, that you sort of say, well, what do we use this for? And I'm reminded of a saying I learned a bunch of years ago. Somebody said that the digital computer is the only tool man has ever devised that has no particular purpose. (panelists chuckle) >> All right guys, we've got to move on to Dave Menninger. We've heard about streaming. Your prediction is in that realm, so please take it away. >> Sure. So I like to say that historical databases are going to become a thing of the past. By that I don't mean that they're going to go away, that's not my point. I mean, we need historical databases, but streaming data is going to become the default way in which we operate with data. So in the next, say, three to five years, I would expect that data platforms, and we're using the term data platforms to represent the evolution of databases and data lakes, that the data platforms will incorporate these streaming capabilities. We're going to process data as it streams into an organization, and then it's going to roll off into the historical database. So historical databases don't go away, but they become a thing of the past. They store the data that occurred previously. And as data is occurring, we're going to be processing it, we're going to be analyzing it, we're going to be acting on it. I mean, we only ever ended up with historical databases because we were limited by the technology that was available to us. Data doesn't occur in batches. But we processed it in batches, because that was the best we could do.
And it wasn't bad, and we've continued to improve, and we've improved and we've improved. But streaming data today is still the exception. It's not the rule, right? There are projects within organizations that deal with streaming data, but it's not the default way in which we deal with data yet. And so that's my prediction, that this is going to change; we're going to have streaming data be the default way in which we deal with data, however you label it and whatever you call it. You know, maybe these databases and data platforms just evolve to be able to handle it, but we're going to deal with data in a different way. And our research shows that already: about half of the participants in our analytics and data benchmark research are using streaming data, and, you know, another third are planning to use streaming technologies. So that gets us to about eight out of 10 organizations that need to use this technology. And that doesn't mean they have to use it throughout the whole organization, but it's pretty widespread in its use today, and it has continued to grow. If you think about the consumerization of IT, we've all been conditioned to expect immediate access to information, immediate responsiveness. You know, we want to know if an item is on the shelf at our local retail store and we can go in and pick it up right now. You know, that's the world we live in, and that's spilling over into the enterprise IT world. We have to provide those same types of capabilities. So that's my prediction: historical databases become a thing of the past, streaming data becomes the default way in which we operate with data. >> All right, thank you David. Well, so what say you, Carl, the guy who has followed historical databases for a long time? >> Well, one thing actually, every database is historical, because as soon as you put data in it, it's now history. It no longer reflects the present state of things. But even if that history is only a millisecond old, it's still history. But I would say, I mean, I know you're trying to be a little bit provocative in saying this, Dave, 'cause you know as well as I do that people still need to do their taxes, they still need to do accounting, they still need to run general ledger programs and things like that. That all involves historical data. That's not going to go away unless you want to go to jail. So you're going to have to deal with that. But as far as the leading-edge functionality, I'm totally with you on that. And I'm just, you know, I'm just kind of wondering if this requires a change in the way that we perceive applications, in order to truly be manifested, a rethinking of the way applications work. Saying that an application should respond instantly, as soon as the state of things changes. What do you say about that? >> I think that's true. I think we do have to think about things differently. It's not the way we designed systems in the past. We're seeing more and more systems designed that way. But again, it's not the default. And I agree 100% with you that we do need historical databases, you know, that's clear. And even some of those historical databases will be used in conjunction with the streaming data, right? >> Absolutely. I mean, you know, let's take the data warehouse example, where you're using the data warehouse as the context and the streaming data as the present, and you're saying, here's the sequence of things that's happening right now. Have we seen that sequence before? And where? What does that pattern look like in past situations? And can we learn from that?
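A toy Python sketch of the pattern being described: act on each event as it arrives, against a rolling window, while history still accumulates in a historical store for the kind of pattern-matching Carl mentions. The thresholds and values are made up.

```python
from collections import deque

window = deque(maxlen=5)    # the streaming "present"
history = []                # rolls off into the historical database

def on_event(value):
    window.append(value)
    # Fixed rule, constantly changing data: the inversion Sanjeev
    # describes below, versus store-first-then-query.
    if sum(window) / len(window) > 100:
        print(f"alert: rolling average high after value {value}")
    history.append(value)   # the historical database doesn't go away

for reading in [90, 95, 110, 120, 130, 80]:
    on_event(reading)
```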
>> So Tony Baer, I wonder if you could comment? I mean, when you think about, you know, real-time inferencing at the edge, for instance, which is something that a lot of people talk about, a lot of what we're discussing here in this segment, it looks like it's got great potential. What are your thoughts? >> Yeah, I mean, I think you nailed it right there. You hit it right on the head. I'm going to split this one down the middle, in that I don't see that streaming becomes the default. What I see is that streaming and transaction databases and analytic data stores, you know, data warehouses, data lakes, whatever, are converging. And what allows us technically to converge is cloud native architecture, where you can basically distribute things. So you can have a node here that's doing the real-time processing, that's also doing, and this is where it leads in, maybe some of that real-time predictive analytics to take a look at, well look, we're looking at this customer journey, what's happening with what the customer is doing right now, and this is correlated with what other customers are doing. So the thing is that in the cloud, you can basically partition this, and because of the speed of the infrastructure, you can bring these together and orchestrate them in sort of a loosely coupled manner. The other part is that the use cases are demanding it, and this partly goes back to what Dave is saying. Which is that, you know, when you look at Customer 360, when you look at, let's say, smart utility products, when you look at any type of operational problem, it has a real-time component, it has an historical component, and it has a predictive component. So, you know, my sense here is that technically we can bring this together through the cloud. And I think the use case is that we can apply some real-time predictive analytics on these streams and feed this into the transactions, so that when we make a decision in terms of what to do as a result of a transaction, we have this real-time input. >> Sanjeev, did you have a comment? >> Yeah, I was just going to say that, to Dave's point, you know, we have to think of streaming very differently, because with historical databases, we used to bring the data in and store the data, and then we used to run rules on top, aggregations and all. But in the case of streaming, the mindset changes, because the rules and the inference, all of that is fixed, but the data is constantly changing. So it's a completely reversed way of thinking about and building applications. >> So Dave Menninger, there seems to be some disagreement about the default. What kind of timeframe are you thinking about? Is it end of decade that it becomes the default? Where would you pin it? >> I think around, you know, between five and 10 years, I think this becomes the reality. >> I think it's... >> It'll be more and more common between now and then, but it becomes the default. And I also want, Sanjeev, at some point, maybe in one of our subsequent conversations, we need to talk about governing streaming data, 'cause that's a whole nother set of challenges. >> We've also talked about it rather in two dimensions, historical and streaming, and there's lots of low-latency, micro-batch, sub-second processing; that's not quite streaming, but in many cases it's fast enough, and we're seeing a lot of adoption of near real-time, not quite real-time, that's good enough for many applications.
(indistinct cross talk from panelists) >> Because nobody's really taking the hardware dimension into account (mumbles). >> That'll just happen, Carl. (panelists laughing) >> So near real time, but maybe before you lose the customer, however we define that, right? Okay, let's move on to Brad. Brad, you want to talk about automation, AI, the pipeline; people feel like, hey, we can just automate everything. What's your prediction? >> Yeah, I'm an AI aficionado, so apologies in advance for that. But, you know, I think that we've been seeing automation play within AI for some time now, and it's helped us do a lot of things, especially for practitioners that are building AI outcomes in the enterprise. It's helped them to fill skills gaps, it's helped them to speed development, and it's helped them to actually make AI better. 'Cause it, you know, in some ways provides some swim lanes, and, for example, technologies like AutoML can auto-document and create the sort of transparency that we talked about a little bit earlier. But I think there's an interesting kind of convergence happening with this idea of automation. The automation that started happening for practitioners is trying to move outside of the traditional bounds of things like, I'm just trying to get my features, I'm just trying to pick the right algorithm, I'm just trying to build the right model, and it's expanding across the full life cycle of building an AI outcome, to start at the very beginning with data and to then continue on to the end, which is this continuous delivery and continuous automation of that outcome, to make sure it's right and it hasn't drifted and stuff like that. And because of that, because it's become kind of powerful, we're starting to actually see this weird thing happen where the practitioners are starting to converge with the users. And that is to say that, okay, if I'm in Tableau right now, I can stand up Salesforce Einstein Discovery, and it will automatically create a nice predictive algorithm for me, given the data that I pull in. But what's starting to happen, and we're seeing this from the companies that create business software, so Salesforce, Oracle, SAP, and others, is that they're starting to actually use these same ideals and a lot of deep learning (chuckles) to basically stand up these out-of-the-box, flip-a-switch AI outcomes at the ready for business users. And I am very much, you know, I think that's the way that it's going to go, and what it means is that AI is slowly disappearing. And I don't think that's a bad thing. I think, if anything, what we're going to see in 2022 and maybe into 2023 is this sort of rush to put this idea of disappearing AI into practice and have as many of these solutions in the enterprise as possible. You can see, for example, SAP is going to roll out this quarter this thing called adaptive recommendation services, which basically is a cold-start AI outcome that can work across a whole bunch of different vertical markets and use cases. It's just a recommendation engine for whatever you need to do in the line of business. So basically, you're an SAP user, you go to turn on your software one day, you're a sales professional, let's say, and suddenly you have a recommendation for customer churn. Boom! There it is. That's great. Well, I don't know, I think that's terrifying.
In some ways I think it is the future, that AI is going to disappear like that, but I'm absolutely terrified of it, because I think what it really does is call attention to a lot of the issues that we already see around AI, specific to this idea of what we like to call at Omdia responsible AI. Which is, you know, how do you build an AI outcome that is free of bias, that is inclusive, that is fair, that is safe, that is secure, that is auditable, et cetera, et cetera, et cetera, et cetera. It takes a lot of work to do. And so if you imagine a customer that's just a Salesforce customer, let's say, and they're turning on Einstein Discovery within their sales software, you need some guidance to make sure that when you flip that switch, the outcome you're going to get is correct. And that's going to take some work. And so I think we're going to see this move to roll this out, and suddenly there's going to be a lot of problems, a lot of pushback, that we're going to see. And some of that's going to come from GDPR and the other regulations that Sanjeev was mentioning earlier. A lot of it is going to come from internal CSR requirements within companies that are saying, "Hey, hey, whoa, hold up, we can't do this all at once. "Let's take the slow route, "let's make AI automated in a smart way." And that's going to take time. >> Yeah, so a couple of predictions there that I heard. AI simply disappears, it becomes invisible, maybe if I can restate that. And then, if I understand it correctly, Brad, you're saying there's a backlash in the near term. People will be able to say, oh, slow down. Let's automate what we can. Those attributes that you talked about are non-trivial to achieve; is that why you're a bit of a skeptic? >> Yeah. I think that we don't have any sort of standards that companies can look to and understand. And certainly within these companies, especially those that haven't already stood up an internal data science team, they don't have the knowledge to understand, when they flip that switch for an automated AI outcome, that it's going to do what they think it's going to do. And so we need some sort of standard methodology and practice, best practices, that every company that's going to consume this invisible AI can make use of. And one of the things, you know, that Google kicked off a few years back, that's picking up some momentum, and that the companies I just mentioned are starting to use, is this idea of model cards, where at least you have some transparency about what these things are doing. You know, so for the SAP example, we'd know, for example, that if it's a convolutional neural network with a long short-term memory model that it's using, we'd know that it only works on Roman-script English, and therefore me as a consumer can say, "Oh, well I know that I need to do this internationally. "So I should not just turn this on today."
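A sketch of the model card idea Brad refers to (Mitchell et al., "Model Cards for Model Reporting," 2019); the fields and the consumer-side check below are hypothetical, not SAP's or Salesforce's actual format.

```python
model_card = {
    "model": "adaptive_recommendation_v1",
    "architecture": "convolutional neural network + LSTM",
    "intended_use": "churn recommendations for sales users",
    "limitations": ["works on Roman-script English text only"],
    "training_data": "hypothetical 2019-2021 CRM interactions, en-US",
}

def safe_to_enable(card, locale):
    """The check Brad implies: read the card before flipping the switch."""
    english_only = any("English" in lim for lim in card["limitations"])
    return locale == "en-US" or not english_only

print(safe_to_enable(model_card, "de-DE"))   # False -> don't just turn it on
```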
>> Thank you. Carl, could you add anything, any context here? >> Yeah, we've talked about some of the things Brad mentioned here at IDC, in our Future of Intelligence group, regarding in particular the moral and legal implications of having a fully automated, you know, AI-driven system. Because we already know, and we've seen, that AI systems are biased by the data that they get, right? So if they get data that pushes them in a certain direction, I think there was a story last week about an HR system that was recommending promotions for White people over Black people, because in the past, you know, White people were promoted as more productive than Black people, but it had no context as to why, which is, you know, that Black people were being historically discriminated against; the system doesn't know that. So, you know, you have to be aware of that. And I think that, at the very least, there should be controls when a decision has either a moral or a legal implication. When you really need a human judgment, it could lay out the options for you, but a person actually needs to authorize that action. And I also think that we always will have to be vigilant regarding the kind of data we use to train our systems, to make sure that it doesn't introduce unintended biases. To some extent, they always will. So we'll always be chasing after them. But that's (indistinct). >> Absolutely Carl, yeah. I think that what you have to bear in mind as a consumer of AI is that it is a reflection of us, and we are a very flawed species. And so if you look at all of the really fantastic, magical-looking super models we see, like GPT-3 and the fourth one that's coming out, they're xenophobic and hateful, because the data they're built upon, and the algorithms, and the people that build them, are us. So AI is a reflection of us. We need to keep that in mind. >> Yeah, the AI is biased 'cause humans are biased. All right, great. All right, let's move on. Doug, you mentioned, you know, a lot of people said that data lake, that term, is not going to live on, but here it is, we still have some lakes here. You want to talk about lake house? Bring it on. >> Yes, I do. My prediction is that lake house, and this idea of a combined data warehouse and data lake platform, is going to emerge as the dominant data management offering. I say offering; that doesn't mean it's going to be the dominant thing that organizations have out there, but it's going to be the predominant vendor offering in 2022. Now, heading into 2021, we already had Cloudera, Databricks, Microsoft and Snowflake as proponents; in 2021, SAP, Oracle, and several of these fabric, virtualization and mesh vendors joined the bandwagon. The promise is that you have one platform that manages your structured, unstructured and semi-structured information, and it addresses both the BI analytics needs and the data science needs. The real promise there is simplicity and lower cost. But I think end users have to answer a few questions. The first is, does your organization really have a center of data gravity, or is the data highly distributed? Multiple data warehouses, multiple data lakes, on premises, cloud. If it's very distributed, and you'd have difficulty consolidating, and that's not really a goal for you, then maybe that single platform is unrealistic and not likely to add value to you. You know, also the fabric and virtualization vendors, the mesh idea, that's where, if you have this highly distributed situation, that might be a better path forward. The second question, if you are looking at one of these lake house offerings, you are looking at consolidating, simplifying, bringing it together onto a single platform: you have to make sure that it meets both the warehouse need and the data lake need. So you have vendors like Databricks and Microsoft with Azure Synapse.
They're really new to the data warehouse space, and they're having to prove that the data warehouse capabilities on their platforms can meet the scaling requirements, can meet the user and query concurrency requirements, meet those tight SLAs. And then on the other hand, you have Oracle, SAP, Snowflake, the data warehouse folks, coming into the data science world, and they have to prove that they can manage the unstructured information and meet the needs of the data scientists. I'm seeing a lot of the lake house offerings from the warehouse crowd managing that unstructured information in columns and rows, and some of these vendors, Snowflake in particular, are really relying on partners for the data science needs. So you really have to look at a lake house offering and make sure that it meets both the warehouse and the data lake requirement. >> Thank you Doug. Well, Tony, if those two worlds are going to come together, as Doug was saying, the analytics and the data science world, does there need to be some kind of semantic layer in between? I don't know. Where are you on this topic? >> (chuckles) Oh, didn't we talk about data fabrics before? A common metadata layer (chuckles). Actually, I'm almost tempted to say, let's declare victory and go home. This has actually been going on for a while. I actually agree with, you know, much of what Doug is saying there. I remember as far back as, I think it was like 2014, I was doing a study, I was still at Ovum, now Omdia, looking at all these specialized databases that were coming up, and seeing that, you know, there was overlap at the edges. But yet there was still going to be a reason, at the time, that you would have, let's say, a document database for JSON, you'd have a relational database for transactions and for data warehousing, and you had, basically, something at that time that resembled Hadoop for what we'd consider your data lake. Fast forward, and the thing is, what I was seeing at the time is that they were sort of blending at the edges. That was, say, about five to six years ago. And the lake house is essentially the current manifestation of that idea. There is a dichotomy in terms of, you know, it's the old argument: do we centralize this all, you know, in a single place, or do we virtualize? And I think it's always going to be a union, and there's never going to be a single silver bullet. I do see that there are also going to be questions, and these are points that Doug raised: what do you need for your performance characteristics? Do you need, for instance, high concurrency? Do you need the ability to do some very sophisticated joins? Or is your requirement more to be able to distribute your processing, you know, as far as possible, to essentially do a kind of brute force approach? All these approaches are valid based on the use case. I just see that the lake house is the culmination of, well, it's a relatively new term, introduced by Databricks a couple of years ago, but it's the culmination of what's been a long-time trend. And what we see in the cloud is that we're starting to see data warehouses offer checkbox items that say, "Hey, we can basically source data in cloud storage, in S3, "Azure Blob Store, you know, whatever, "as long as it's in certain formats, "like, you know, Parquet or CSV or something like that." I see that as becoming kind of a checkbox item.
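One way to picture that checkbox item, hedged as a sketch rather than any vendor's recommended setup: an engine such as DuckDB can run warehouse-style SQL directly over Parquet files sitting in object storage. The bucket and path here are hypothetical, and the httpfs extension plus pandas are assumed to be available; S3 credentials would still need to be configured.

```python
import duckdb

con = duckdb.connect()
con.execute("INSTALL httpfs")   # enables s3:// paths
con.execute("LOAD httpfs")

# Warehouse-style aggregation directly over lake files.
df = con.sql("""
    SELECT customer_id, sum(amount) AS total
    FROM read_parquet('s3://my-lake/orders/*.parquet')
    GROUP BY customer_id
""").df()
```

The lake keeps the files; the warehouse-style engine is just a query layer over them, which is why Tony calls this a checkbox rather than a product category of its own.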
So to that extent, I think that the lake house, depending on how you define it, is already reality, and in some cases maybe new terminology, but not a whole heck of a lot new under the sun. >> Yeah. And Dave Menninger, I mean, a lot of this, thank you Tony, but a lot of this is going to come down to, you know, vendor marketing, right? Some people just kind of co-opt the term; we talked about, you know, data mesh washing. What are your thoughts on this? (laughing) >> Yeah, so I used the term data platform earlier, and part of the reason I use that term is that it's more vendor-neutral. We've tried to sort of stay out of the vendor terminology patenting world, right? Whether the term lake house is what sticks or not, the concept is certainly going to stick. And we have some data to back it up. About a quarter of organizations that are using data lakes today already incorporate data warehouse functionality into it. So they consider their data lake house and data warehouse one and the same. About a quarter of organizations, a little less, but about a quarter of organizations, feed the data lake from the data warehouse, and about a quarter of organizations feed the data warehouse from the data lake. So it's pretty obvious that three quarters of organizations need to bring this stuff together, right? The need is there, the need is apparent. The technology is going to continue to converge. I like to talk about it, you know, you've got data lakes over here at one end, and I'm not going to talk about why people thought data lakes were a bad idea, because they thought you just throw stuff in a server and you ignore it, right? That's not what a data lake is. So you've got data lake people over here and you've got database people over here, data warehouse people over here; database vendors are adding data lake capabilities and data lake vendors are adding data warehouse capabilities. So it's obvious that they're going to meet in the middle. I mean, I think it's like Tony says, I think we should declare victory and go home. >> Ah, hell. So just a follow-up on that: are you saying the specialized lake and the specialized warehouse, do they go away? I mean, Tony, data mesh practitioners would say, or advocates would say, well, they could all live; it's just a node on the mesh. But based on what Dave just said, are we going to see those all morph together? >> Well, number one, as I was saying before, there's always going to be this sort of, you know, centrifugal force, or this tug of war, between do we centralize the data, do we virtualize? And the fact is, I don't think that there's ever going to be any single answer. In terms of data mesh, data mesh has nothing to do with how you physically implement the data. You could have a data mesh basically on a data warehouse. It's just that, you know, the difference being that we might use the same physical data store, but everybody's logically, you know, basically governing it differently, you know? Data mesh, in essence, is not a technology; it's processes, it's a governance process. So essentially, you know, as I was saying before, this is basically the culmination of a long-time trend; we're essentially seeing a lot of blurring. But there are going to be cases where, for instance, if I need, let's say, upserts, or I need high concurrency or something like that, there are certain things that I'm not going to be able to efficiently get out of a data lake.
Or, you know, I'm doing a system where I'm just brute forcing very fast file scanning and that type of thing. So I think there always will be some delineations, but I would agree with Dave and with Doug that we are seeing basically a confluence of requirements; the abilities of the data lake and the data warehouse need to come together, I think. >> I think what we're likely to see is organizations look for a converged platform that can handle both sides for their center of data gravity; the mesh and the fabric virtualization vendors, they're all on board with the idea of this converged platform, and they're saying, "Hey, we'll handle all the edge cases "of the stuff that isn't in that center of data gravity "but that is distributed off in a cloud "or at a remote location." So you can have that single platform for the center of your data and then bring in virtualization, mesh, what have you, for reaching out to the distributed data. >> As Dave basically said, people are happy when they've virtualized data. >> I think we're there at this point, but to Dave Menninger's point, they are converging. Snowflake has introduced support for unstructured data, so we're obviously splitting hairs here. Now what Databricks is saying is that, "Aha, but it's easier to go from data lake to data warehouse "than it is from data warehouse to data lake." So I think we're getting into semantics, but we're already seeing these two converge. >> So take somebody like AWS, they've got, what, 15 data stores? Are they going to converge those 15 data stores? This is going to be interesting to watch. All right, guys, I'm going to go down the list and do like a one-word summary each, and you guys, each of the analysts, if you would just add a very brief sort of course correction for me. So Sanjeev, I mean, governance is going to be... Maybe it's the dog that wags the tail now. I mean, it's coming to the fore, all this ransomware stuff; you really didn't talk much about security, but what's the one word in your prediction that you would leave us with on governance? >> It's going to be mainstream. >> Mainstream, okay. Tony Baer, mesh washing is what I wrote down. That's what we're going to see in 2022, a little reality check. You want to add to that?
>> Reality check, 'cause I hope that no vendor jumps the shark and calls their offering a data mesh product. >> Yeah, let's hope that doesn't happen. If they do, we're going to call them out. Carl, I mean, graph databases, thank you for sharing some high-growth metrics. I know it's early days, but magic is what I took away from that, so magic database. >> Yeah, I would actually, I've said this to people too, I kind of look at it as a Swiss Army knife of data, because you can pretty much do anything you want with it. That doesn't mean you should. I mean, there's definitely the case that if you're managing things that are in a fixed schematic relationship, probably a relational database is a better choice. There are times when a document database is a better choice. Graph can handle those things, but it may not be the best choice for that use case. But for a great many, especially the new emerging use cases I listed, it's the best choice. >> Thank you. And Dave Menninger, thank you, by the way, for bringing the data in; I like how you supported all your comments with some data points. But streaming data becomes the sort of default paradigm, if you will. What would you add? >> Yeah, I would say think fast, right? That's the world we live in, you've got to think fast. >> Think fast, love it. And Brad Shimmin. I mean, on the one hand I was saying, okay, great, I'm afraid I might get disrupted by one of these internet giants who are AI experts, so I'm going to be able to buy instead of build AI. But then again, you know, I've got some real issues; there's a potential backlash there. So give us your bumper sticker. >> I would say, going with Dave, think fast and also think slow, to reference the book that everyone talks about. I would say really that this is all about trust, trust in the idea of automation and a transparent and visible AI across the enterprise. And verify; verify before you do anything. >> And then Doug Henschen. I mean, I think the trend is your friend here on this prediction, with lake house really becoming dominant. I liked the way you set up that notion of, you know, the data warehouse folks coming at it from the analytics perspective, and then you've got the data science worlds coming together. I still feel as though there's this piece in the middle that we're missing, but I'll give you the final word. >> I think the idea of consolidation and simplification always prevails. That's why the appeal of a single platform is going to be there. We've already seen that with, you know, Hadoop platforms, and moving toward cloud, moving toward object storage, object storage becoming really the common storage point, whether it's a lake or a warehouse. And that second point, I think ESG mandates are going to come in alongside GDPR and things like that, to up the ante for good governance. >> Yeah, thank you for calling that out. Okay folks, hey, that's all the time that we have here. Your experience and depth of understanding on these key issues in data and data management were really on point, and they were on display today. I want to thank you for your contributions. Really appreciate your time. >> Enjoyed it. >> Thank you. >> Thanks for having me. >> In addition to this video, we're going to be making available transcripts of the discussion. We're going to do clips of this as well, and we're going to put them out on social media. I'll write this up and publish the discussion on wikibon.com and siliconangle.com. No doubt, several of the analysts on the panel will take the opportunity to publish written content, social commentary or both. I want to thank the power panelists, and thanks for watching this special CUBE presentation. This is Dave Vellante; be well, and we'll see you next time. (bright music)
Ben White, Domo | Virtual Vertica BDC 2020
>> Announcer: It's theCUBE covering the Virtual Vertica Big Data Conference 2020, brought to you by Vertica. >> Hi, everybody. Welcome to this digital coverage of the Vertica Big Data Conference. You're watching theCUBE and my name is Dave Vellante. It's my pleasure to invite in Ben White, who's the Senior Database Engineer at Domo. Ben, great to see you, man. Thanks for coming on. >> Great to be here. >> You know, as I said, you know, earlier when we were off-camera, I really was hoping I could meet you face-to-face in Boston this year, but hey, I'll take it, and, you know, our community really wants to hear from experts like yourself. But let's start with Domo as the company. Share with us what Domo does and what your role is there. >> Well, if I can go straight to the official line, what Domo does is we process data at BI scale; we provide BI leverage at cloud scale in record time. And so what that means is, you know, we are a business-operating system where we provide a number of analytical abilities to companies of all sizes. But we do that at cloud scale, and so I think that differentiates us quite a bit. >> So a lot of your work, if I understand it, and just in terms of understanding what Domo does, there's a lot of pressure in terms of being real-time. It's not, like... you sometimes don't know what's coming at you, so it's ad hoc. I wonder if you could sort of talk about that, confirm that, maybe add a little color to it. >> Yeah, absolutely, absolutely. That's probably the biggest challenge there is to operating Domo: it is an ad hoc environment. And certainly what that means is that you've got analysts and executives that are able to submit their own queries with very few limitations. So from an engineering standpoint, the challenge in that, of course, is that you don't have this predictable dashboard to plan for when it comes to performance planning. So it definitely presents some challenges for us, and we've done some pretty unique things, I think, to address those. >> So it sounds like your background fits well with that. I understand people have called you a database whisperer and an envelope pusher. What does that mean to a DBA in this day and age? >> The whisperer part is probably a lost art, in the sense that it's not really sustainable, right? The idea that, you know, whatever it is I'm able to do with the database, it has to be repeatable. And so that's really where analytics comes in, right? That's where pushing the envelope comes in. And in a lot of ways that's where Vertica comes in with this open architecture. And so as a person who has a reputation for saying, "I understand this is what our limitations should be, but I think we can do more," having a platform like Vertica, with such an open architecture, kind of lets you push those limits quite a bit. >> I mean, I've always felt like, you know, Vertica, when I first saw the Stonebraker architecture and talked to some of the early founders, I always felt like it was the Ferrari of databases, certainly at the time. And it sounds like you guys use it in that regard. But talk a little bit more about how you use Vertica, why, you know, why MPP, why Vertica? You know, why can't you do this with an RDBMS? Educate us, a little bit, on, sort of, the basics.
And so Vertica, the MPP platform, of course, allows us to build individual database clusters that can perform best for the workload that might be assigned to them. So the openness, the expandability, the... the ability to grow Vertica, right, as your base grows, those are all important factors when you're choosing early on, right? Without a real idea of what growth would be or what it would look like. If you were kind of throwing something up into the dark, you look at the Vertica platform and you can see, well, as I grow, I can, kind of, build with this, right? I can do some unique things with the platform in terms of this open architecture that will allow me to not have to make all my decisions today, right? (mutters) >> So, you're using Vertica, I know, at least in part, you're working with AWS as well. Can you describe sort of your environment? Do you have anything on-prem, is everything in cloud? What's your setup look like? >> Sure, we have a hybrid cloud environment where we have a significant presence in the public cloud and in our own private cloud. And so, yeah, having said that, we certainly have a really extensive presence, I would say, in AWS. So, they're definitely the partner of ours when it comes to providing the databases and the server power that we need to operate on. >> From a standpoint of engineering and architecting a database, what were some of the challenges that you faced when you had to create that hybrid architecture? What did you face and how did you overcome that? >> Well, you know, some of the... There were some things we faced in terms of, one, it made it easy that Vertica and AWS have their own... They play well together, we'll say that. And so, Vertica was designed to work on AWS. So that part of it took care of itself. Now, our own private cloud and being able to connect that to our public cloud has been a part of our own engineering abilities. And again, I don't want to make light of it, it's certainly not impossible. And so we... Some of the challenges that pertain to the database really were in the early days, that you mentioned, when we talked a little bit earlier about Vertica's most recent Eon mode. And I'm sure you'll get to that. But when I think of early challenges, some of the early challenges were the architecture of enterprise mode. When I talk about all of this, the idea that we can have unique databases or database clusters of different sizes, or this elasticity, really, if you know the enterprise architecture, that's not necessarily what the enterprise architecture gives you. So we had to do some unique things, I think, to overcome that, right, early. To get around the rigidness of enterprise.
Then think about taking that database back down and (telephone interference). All of a sudden, with Eon, right, we had this elasticity, where you could, kind of, start to think about auto-scaling, where you can go up and down, and maybe you could save some money, or maybe you could improve performance, or maybe you could meet demand at a time when customers need it most, in a real way, right? So it's definitely a game changer in that regard. >> I always love to talk to the customers because I get to, you know, I hear from the vendor what they say, and then I like to, sort of, validate it. So, you know, Vertica talks a lot about separating compute and storage, and they're not the only one, from an architectural standpoint, who does that. But Vertica stresses it. They're the only one that does that with a hybrid architecture. They can do it on-prem, they can do it in the cloud. From your experience, well, first of all, is that true? You may or may not know, but is that advantageous to you, and if so, why? >> Well, first of all, it's certainly true. Earlier, in some of the original beta testing for the on-prem Eon mode, I was able to participate in it and be aware of it. So it's certainly a reality; it's actually supported on Pure Storage with FlashBlade, and it's quite impressive. You know, who will that be for? Tough one. That's probably a question Vertica is still answering, but I think, obviously, some enterprise users that probably have some hybrid cloud, right? They have some architecture, they have some hardware, that they themselves want to make use of. We certainly would probably fit into one of their, you know, their market segments that they would say might be the ones to look at on-prem Eon mode. Again, the beauty of it is the elasticity, right? The idea that you could have this... So a lot of times... So I want to go back real quick to separating compute. >> Sure. Great. >> You know, we start by separating it. And I like to think of it maybe more as, like, the uncoupling. Because in a true way, it's not necessarily separated, because ultimately you're bringing the compute and the storage back together. But to be able to decouple it quickly, replace nodes, bring in nodes, that certainly fits, I think, what we were trying to do in building this kind of ecosystem that could respond to the unknown of a customer query or of a customer demand. >> I see, thank you for that clarification, because you're right, it's really not separating, it's decoupling. And that's important because you can scale them independently, but you still need compute and you still need storage to run your workload. But from a cost standpoint, you don't have to buy it in chunks. You can buy it in granular segments for whatever your workload requires. Is that the correct understanding? >> Yeah, and the ability to be able to reuse compute. So in the scenario of AWS, or even in the scenario of your on-prem solution, you've got this data that's safe and secure in (mumbles) communal storage, but the compute that you have, you can reuse that, right? You could have a scenario where you have some query that needs more analytical firepower, more memory, more what have you. And so you can kind of move between them, and that's important, right? That's maybe more important than can I grow them separately. Can I borrow it? Can I borrow that compute you're using for my (cuts out) and give it back?
And you can do that when you're so easily able to decouple the compute and put it where you want, right? And likewise, if you have a down period where customers aren't using it, you'd like to be able to not use that compute if you no longer require it, and give it back. 'Cause it opened the door to a lot of those things that allowed performance and process to meet up. >> I wonder if I can ask you a question. You mentioned Pure a couple of times; are you using Pure FlashBlade on-prem, is that correct? >> That is the solution that is supported by Vertica for the on-prem. (cuts out) So at this point, we have been discussing with them some of our own POCs for that. But again, we're back to the idea of how do we see ourselves using it? And so we certainly discussed the feasibility of bringing it in and giving it the (mumbles). But that's not something we're... focused on heavily right now. >> And what is Domo for Domo? Tell us about that. >> Well, it really started as this idea, even in the company, where we say we should be using Domo in our everyday business. From the sales folk to the marketing folk, right. Everybody is going to use Domo, it's a business platform. For us on the engineering team, it was kind of like, well, if we use Domo, say for instance, to be better database engineers, now we've pointed Domo at itself, right? Vertica's running in the background of Domo to some degree, and then we turn around and say, "Hey Domo, how can we be better at running you?" So it became this kind of cool thing we'd play with. We're now able to put some methods together where we can actually do that, right, where we can monitor using our platform, which is really good at processing large amounts of data and spitting out useful analytics, right. We take those analytics down, make recommended changes at the... So now you've got Domo for Domo happening, and it allows us to sit at home and work, now when we have to, and even before we had to. >> Well, you know, look. Look at us here. Right? We couldn't meet in Boston physically, we're now meeting remote. You're on a hot spot because you've got some weather in your satellite internet in Atlanta, and we're having a great conversation. So, we're here with Ben White, who's a senior database engineer at Domo. I want to ask you about some of the envelope pushing that you've done around autonomous. You hear that word thrown around a lot. Means a lot of things to a lot of different people. How do you look at autonomous? And how does it fit with Eon and some of the other things you're doing? >> You know, I... Autonomous and the idea of autonomy is something that I don't even know that I'm ready to define. And so, even in my discussion, I often mention it as a road to it. Because exactly where it is, it's hard to pin down, because there's always this idea of how much trust do you give, right, to the system, or how much is truly autonomous? How much is already being intervened on by us, the engineers? So I do hedge on using that. But on this road towards autonomy, we look at how we're using Domo. And even what that really means for Vertica, because a lot of my examples and a lot of the things that we've engineered at Domo were designed to maybe overcome something that I thought was a limitation. And so many times as we've done that, Vertica has kind of met us.
Like right after we've kind of engineered our architecture stuff that we thought could help on our side, Vertica has a release that kind of addresses it. So, the autonomy idea, the idea that we could analyze metadata, make recommendations, and then execute those recommendations without intervention, is that road to autonomy. Once the database is properly able to do that, you could see in our ad hoc environment how that would be pretty useful, where with literally millions of queries every hour, trying to figure out what's the best, you know, profile... >> You know, for- >> (overlapping) probably do a better job at that than we could. >> For years I felt like IT folks sometimes really did not want that automation; they wanted the knobs to turn. But I wonder if you can comment. I feel as though the level of complexity now, with cloud, with on-prem, with, you know, hybrid, multicloud, the scale, the speed, the real time, it just gets, the pace is just too much for humans. And so, it's almost like the industry is going to have to capitulate to the machine. And then, really trust the machine. But I'm still sensing, from you, a little bit of hesitation there, but light at the end of the tunnel. I wonder if you can comment? >> Sure. I think the light at the end of the tunnel is that even in recent months... We've really begun to incorporate more machine learning and artificial intelligence into the model, right. And back to what we're saying. So I do feel that we're getting closer to finding conditions that we don't know about. Because right now our system is kind of a rules-based system, where we've said, "Well, these are the things we should be looking for, these are the things that we think are a problem." To mature to the point where the database is recognizing anomalies and taking on pattern (mutters)... These are problems you didn't know happened. And that's kind of the next step, right. Identifying the things you didn't know. And that's the path we're on now. And it's probably more exciting even than, kind of, nailing down all the things you think you know. We figure out what we don't know yet. >> So I want to close with, I know you're a prominent member of the, a respected member of the Vertica Customer Advisory Board, and you know, without divulging anything confidential, what are the kinds of things that you want Vertica to do going forward? >> Oh, I think, some of the in-database autonomy. The ability to take some of the recommendations that we know we can derive from the metadata that already exists in the platform and start to execute some of those recommendations. And another thing we've talked about, and I've been pretty open about talking about it, is a new version of the database designer; I think that's something that I'm sure they're working on. Lightweight, something that can give us that database design without the overhead. Those are two things, I think; as they nail those, basically the database designer, as they perfect that, they'll really have all the components in play to do in-database autonomy. And I think that's, to some degree, where they're heading. >> Nice. Well, Ben, listen, I really appreciate you coming on. You're a thought leader, you're very open-minded, and Vertica is, you know, a really open community. I mean, they've always been quite transparent in terms of where they're going. It's just awesome to have guys like you on theCUBE to share with our community.
So thank you so much, and hopefully we can meet face-to-face shortly. >> Absolutely. Well, you stay safe in Boston, one of my favorite towns, and so no doubt, when the doors get back open, I'll be coming down. Or coming up, as it were. >> Take care. All right, and thank you for watching everybody. Dave Vellante with theCUBE, we're here covering the Virtual Vertica Big Data Conference. (electronic music)
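To make the elasticity Ben describes more concrete, here is a deliberately simplified, hypothetical Python sketch of the scale-up and scale-down decision he is pointing at: borrowing compute when query demand spikes and giving it back when it subsides. The thresholds and the resize_compute hook are illustrative assumptions, not Vertica's or Domo's actual API.

    # Hypothetical sketch of the elastic compute pattern described above.
    # resize_compute() is a stand-in for whatever provisioning call a
    # real Eon-style deployment would make; the thresholds are assumptions.

    def plan_nodes(queue_depth, nodes, min_nodes=3, max_nodes=12):
        """Pick a node count from current load: grow under query pressure,
        shrink (and stop paying) when demand falls off."""
        if queue_depth > nodes * 10:            # assumed pressure threshold
            return min(nodes * 2, max_nodes)    # borrow compute
        if queue_depth < nodes * 2 and nodes > min_nodes:
            return max(nodes // 2, min_nodes)   # give it back
        return nodes

    def resize_compute(target_nodes):
        # Placeholder: a real system would add or remove compute nodes here.
        print(f"scaling compute to {target_nodes} nodes")

    if __name__ == "__main__":
        nodes = 3
        for depth in [5, 120, 400, 90, 10, 4]:  # simulated query queue depths
            target = plan_nodes(depth, nodes)
            if target != nodes:
                resize_compute(target)
                nodes = target

Because the data stays put in shared storage, only the compute count changes, which is why the borrow-and-return pattern Ben describes is cheap to act on.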
Larry Lancaster, Zebrium | Virtual Vertica BDC 2020
>> Announcer: It's theCUBE! Covering the Virtual Vertica Big Data Conference 2020, brought to you by Vertica. >> Hi, everybody. Welcome back. You're watching theCUBE's coverage of the Vertica Virtual Big Data Conference. It was, of course, going to be in Boston at the Encore Hotel, win big with big data at the new casino, but obviously Coronavirus has changed all that. Our hearts go out, and we have empathy for those people who are struggling. We are going to continue our wall-to-wall coverage of this conference, and we're here with Larry Lancaster, who's the founder and CTO of Zebrium. Larry, welcome to theCUBE. Thanks for coming on. >> Hi, thanks for having me. >> You're welcome. So first question, why did you start Zebrium? >> You know, I've been dealing with machine data a long time. So for those of you who don't know what that is, you can imagine servers or whatever goes on in a data center or in a SaaS shop. There's data coming out of those servers, out of those applications, and basically you can build a lot of cool stuff on that. So there's a lot of metrics that come out and there's a lot of log files that come out. And so, I've built this... Basically spent my career building that sort of thing. So tools on top of that or products on top of that. The problem is that, since log files at least are completely unstructured, you're always doing the same thing over and over again, which is going in and understanding the data and extracting the data and all that stuff. It's very time consuming. If you've done it like five times, you don't want to do it again. So really, my idea was, at this point, with where machine learning is at, there's got to be a better way. So Zebrium was founded on the notion that we can just do all that automatically. We can take a pile of machine data, we can turn it into a database, and we can build stuff on top of that. And so the company is really all about bringing that value to the market. >> That's cool. I want to get into that, just to better understand who you're disrupting and understand that opportunity better. But before I do, tell us a little bit about your background. You got kind of an interesting background. Lot of tech jobs. Give us some color there. >> Yeah, so I started in the Valley, I guess, 20 years ago, and when my son was born I left grad school. I was in grad school over at Berkeley, in biophysics. And I realized I needed to go get a job, so I ended up starting in software and I've been there ever since. I mean, I guess I cut my teeth at NetApp, which was a storage company. And then I co-founded a business called Glassbeam, which was kind of an ETL database company. And then after that I ended up at Nimble Storage. Another company, EMC, ended up buying Glassbeam, so I went over there, and then after Nimble, which is where I built the InfoSight platform, that's where, after that, I was able to step back and take a year and a half and just go into my basement, actually, this is my kind of workspace here, and come up with the technology and actually build it, so that I could go raise money and get a team together to build Zebrium. So that's really my career in a nutshell. >> And you've got Hello Kitty over your right shoulder, which is kind of cool. >> That's right. >> And then up to the left you got your monitor, right? >> Well, I had it. It's over here, yeah. >> But it was great! Pull it out, pull it out, let me see it. So, okay, so you got that. So what do you do? You just sit there and code all night or what?
>> Yeah, that's right. So Hello Kitty's over here. I have a daughter and she set up my workspace here on this side with Hello Kitty and so on. And over on this side, I've got my recliner, where I basically lay it all the way back and then I pivot this thing down over my face and put my keyboard on my lap, and I can just sit there for like 20 hours. It's great. Completely comfortable. >> That's cool. All right, better put that monitor back or our guys will yell at me. But so, obviously, we're talking to somebody with serious coding chops, and I'll also add that the Nimble InfoSight, I think it was one of the best pickups that HP, HPE, has had in a while. And the thing that interested me about that, Larry, is the ability that the company was able to take that InfoSight and port it very quickly across its product lines. So that says to me it was a modern architecture, I'm sure APIs, microservices, and all those cool buzzwords, but the proof is in their ability to bring that IP to other parts of the portfolio. So, well done. >> Yeah, well thanks. Appreciate that. I mean, they've got a fantastic team there. And the other thing that helps is when you have the notion that you don't just build on top of the data: you extract the data, you structure it, you put that in a database, we used Vertica there for that, and then you build on top of that. Taking the time to build that layer is what lets you build a scalable platform. >> Yeah, so, why Vertica? I mean, Vertica's been around for a while. You remember, you had the old RDBMSs, Oracle, Db2, SQL Server, and then the database was kind of a boring market. And then, all of a sudden, you had all of these MPP companies come out, a spate of them. They all got acquired, including Vertica. And they've all sort of disappeared and morphed into different brands, and Micro Focus has preserved the Vertica brand. But it seems like Vertica has been able to survive the transitions. Why Vertica? What was it about that platform that was unique and interested you? >> Well, I mean, they're the first one to build what I would call a real column store that's kind of market capable, right? So there was the C-Store project at Berkeley, which Stonebraker was involved in. And then that became sort of the seed from which Vertica was spawned. So you had this idea of, let's lay things out in a columnar way. And when I say columnar, I don't just mean that the data for every column is in a different set of files. What I mean by that is it takes full advantage of things like run-length encoding, and delta encoding, and block compression, and so you end up with these massive, orders-of-magnitude savings in terms of the data that's being pulled off of storage, as well as while it's moving through the pipeline internally in Vertica's query processing. So why am I saying all this? Because, fundamentally, it was a disruptive technology. I think column stores are ubiquitous now in analytics. And I think you could name maybe a couple of projects, which are mostly open source, that do something like Vertica does, but name me another one that's actually capable of serving an enterprise as a relational database. I still think Vertica is unique in being that one. >> Well, it's interesting because you're a startup. And so a lot of startups would say, okay, we're going with a born-in-the-cloud database. Now Vertica touts that, well look, we've embraced cloud. You know, we have, we run in the cloud, we run on-prem, all different optionality.
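As an aside, to make that compression point concrete, here is a minimal Python sketch of run-length encoding, one of the column encodings mentioned above. It illustrates the general technique only; the code and the numbers are illustrative and do not reflect Vertica's actual storage format.

    # Minimal illustration of run-length encoding (RLE) on a column.
    # A sorted, low-cardinality column collapses into a few (value, count)
    # pairs, so a scan reads runs instead of individual rows.

    def rle_encode(column):
        """Collapse consecutive repeats into (value, count) pairs."""
        runs = []
        for value in column:
            if runs and runs[-1][0] == value:
                runs[-1][1] += 1
            else:
                runs.append([value, 1])
        return [(value, count) for value, count in runs]

    def rle_count(runs, predicate):
        """Count matching rows without expanding the column back out."""
        return sum(count for value, count in runs if predicate(value))

    if __name__ == "__main__":
        # One million rows, but only three distinct values once sorted.
        column = ["east"] * 400_000 + ["west"] * 350_000 + ["south"] * 250_000
        runs = rle_encode(column)
        print(runs)  # [('east', 400000), ('west', 350000), ('south', 250000)]
        print(rle_count(runs, lambda v: v == "west"))  # 350000

A query over the raw list walks a million rows; over the runs it touches three entries. That is the kind of orders-of-magnitude saving in data pulled off storage that Larry is describing.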
And you hear a lot of vendors say that, but a lot of times they're just taking their stack and stuffing it into the cloud. But, so why didn't you go with a cloud-native database? And is Vertica able to, I mean, obviously, that's why you chose it, but I'm interested, from a technologist standpoint, as to why you, again, made that choice given all these other choices out there. >> Right, I mean, again, so... As I explained, a column store, which I think is the appropriate definition, I'm not aware of another cloud-native one-- >> Hm, okay. >> I'm aware of other cloud-native transactional databases; I'm not aware of one that has the analytics performance, and I've tried some of them. So it was not like I didn't look. What I was actually impressed with, and I think what let me move forward using Vertica in our stack, is the fact that Eon really is built from the ground up to be cloud-native. And so we've been using Eon almost ever since we started the work that we're doing. So I've been really happy with the performance and with the reliability of Eon. >> It's interesting. I've been saying for years that Vertica's a diamond in the rough, and its previous owner didn't know what to do with it because it got distracted, and now Micro Focus seems to really see the value and is obviously putting some investments in there. >> Yeah >> Tell me more about your business. Who are you disrupting? Are you kind of disrupting the do-it-yourself? Or is there sort of a big whale out there that you're going to go after? Add some color to that. >> Yeah, so our broader market is monitoring software, that's kind of the high-level category. So you have a lot of people in that market right now. Some of them are entrenched, large players; Datadog would be a great example. Some of them are smaller upstarts. It's a pretty saturated market. But what's happened over the last, I'd say, two years is that there's been sort of a push towards what's called observability, in terms of at least how some of the products are architected, like Honeycomb, and how some of them are messaged. Most of them are messaged these days. And what that really means is there's been sort of an understanding that's developed that MTTR is really what people need to focus on to keep their customers happy. If you're a SaaS company, MTTR is going to be your bread and butter. And it's still measured in hours and days. And the biggest reason for that is because of what's called unknown unknowns. Because of complexity. Nowadays, applications are ten times as complex as they used to be. And what you end up with is a situation where, if something is new, if it's a known issue with a known symptom and a known root cause, then you can set up an automation for it. But the ones that really cost a lot of time, in terms of service disruption, are unknown unknowns. And now you've got to go dig into this massive mess of data. So observability is about making tools to help you do that, but it's still going to take you hours. And so our contention is, you need to automate the eyeball. The bottleneck is now the eyeball. And so you have to get away from this notion that a person's going to be able to do it infinitely more efficiently, and recognize that you need automated help. When you get an alert, it shouldn't be, "Hey, something weird's happening. Now go dig in." It should be, "Here's a root cause and a symptom." And that should be proposed to you by a system that actually does the observing. That actually does the watching.
And that's what Zebrium does. >> Yeah, that's awesome. I mean, you're right. The last thing you want is just another alert that says, "Go figure something out because there's a problem." So how does it work, Larry? In terms of what you built there. Can you take us inside the covers? >> Yeah, sure. So right now there's really two kinds of data that we're ingesting. There's metrics and there's log files. For metrics, there's actually sort of a framework that's really popular in DevOps circles especially, but it's becoming popular everywhere, which is called Prometheus. And it's a way of exporting metrics so that scrapers can collect them. And so if you go look at a typical stack, you'll find that most of the open source components, and many of the closed source components, are going to have exporters that export all their stats to Prometheus. So by supporting that stack we can bring in all of those metrics. And then there's also the log files. And so you've got host log files, in a containerized environment you've got container logs, and you've got application-specific logs, perhaps living on a host mount. And you want to pull all those back, and you want to be able to say this log that I've collected here is associated with the same container on the same host that this metric is associated with. But now what? So once you've got that, you've got a pile of unstructured logs. So what we do is we take a look at those logs and we say, let's structure those into tables, right? So where I used to have a log message, if I look in my log file and I see it says something like, X happened five times, right? Well, that event type's going to occur again, and it'll say, X happened six times, or X happened three times. So if I see that as a human being, I can say, "Oh clearly, that's the same thing." And what's interesting here is the times that X happened, and what that number read. I may want to know those numbers as a time series, the values of that column. And so you can imagine it as a table. So now I have a table for that event type, and every time it happens, I get a row. And then I have a column with that number in it. And so now I can do any kind of analytics I want, almost instantly, across my... If I have all my event types structured that way, everything changes. You can do real anomaly detection and incident detection on top of that data. So that's really how we go about doing it. How we go about being able to do autonomous monitoring in a way that's effective. >> How do you handle doing that for, like, a bespoke app? Do you have to, does somebody have to build a connector to those apps? How do you handle that? >> Yeah, that's a really good question. So you're right. If I go and install a typical log manager, there'll be connectors for different apps, and usually what that means is pulling in the stuff on the left, if you were to be looking at that log line, and it will be things like a timestamp, or a severity, or a function name, or various other things. And so the connector will know how to pull those apart, and then the stuff to the right will be considered the message, and that'll get indexed for search. And so our approach is, we actually go in with machine learning and we structure that whole thing. So there's a table. And it's going to have a column called severity, and timestamp, and function name. And then it's going to have columns that correspond to the parameters that are in that event. And it'll have a name associated with the constant parts of that event.
And so you end up with a situation where you've structured all of it automatically, so we don't need collectors. It'll work just as well on your home-grown app that has no collectors or no parsers defined or anything. It'll work immediately, just as well as it would work on anything else. And that's important, because you can't be asking people for connectors to their own applications. It just becomes, now they've got to stop what they're doing and go write code for you, for your platform, and they have to maintain it. It's just untenable. So you can be up and running with our service in three minutes. It'll just be monitoring those for you. >> That's awesome! I mean, that is really a breakthrough innovation. So, nice. Love to see that hittin' the market. Who do you sell to? What types of companies, and what role within the company? >> Well, definitely there's two main sort of pushes that we've seen, or I should say pulls. One is from DevOps folks, SRE folks. So these are people who are tasked with monitoring an environment, basically. And then you've got people who are in engineering and they have a staging environment. And what they actually find valuable is... Because when we find an incident in a staging environment, yeah, half the time it's because they're tearing everything up and it's not release ready, whatever's in stage. That's fine, they know that. But the other half the time it's new bugs, it's issues, and they're finding issues. So it's kind of diverged. You have engineering users, and they don't have titles like QA, they're Dev engineers or Dev managers, that are really interested. And then you've got DevOps and SRE people there (mumbles). >> And how do I consume your product? Is it SaaS? I sign up and you say within three minutes I'm up and running. I'm paying by the drink. >> Well, (laughs) right. So there's a couple ways. So, right. So the easiest way is if you use Kubernetes. So Kubernetes is what's called a container orchestrator. So these days, you know, Docker and containers and all that, so now container orchestrators have become, I wouldn't say ubiquitous, but they're very popular now. So it's kind of on that inflection curve. I'm not exactly sure of the penetration, but I'm going to say probably 30-40% of shops that we're interested in are using container orchestrators. So if you're using Kubernetes, basically you can install our Kubernetes chart, which basically means copying and pasting a URL and so on into your little admin panel there. And then it'll just start collecting all the logs and metrics, and then you just log in on the website. And the way you do that is just go to our website, and it'll show you how to sign up for the service, and you'll get your little API key and link to the chart, and you're off and running. You don't have to do anything else. You can add rules, you can add stuff, but you don't have to. You shouldn't have to, right? You should never have to do any more work. >> That's great. So it's a SaaS capability and I just pay for... How do you price it? >> Oh, right. So it's priced on volume, data volume. I don't want to go too much into it because I'm not the pricing guy. But what I'll say is that, as far as I know, it's as cheap or cheaper than any other log manager or metrics product. It's in that same neighborhood as the very low priced ones. Because right now, we're not trying to optimize for take. We're trying to make a healthy margin and get the value of autonomous monitoring out there. Right now, that's our priority.
>> And it's running in the cloud, is that right? AWS West-- >> Yeah, that's right. Oh, I should've also pointed out that you can have a free account; if it's less than some number of gigabytes a day, we're not going to charge. Yeah, so we run in AWS. We have a multi-tenant instance in AWS. And we have a Vertica Eon cluster behind that. And it's been working out really well. >> And on your freemium, are you using the Vertica Community Edition? Because they don't charge you for that, right? So is that how you do it or... >> No, no. We're, no, no. So, I don't want to go into that because I'm not the bizdev guy. But what I'll say is that if you're doing something that winds up being OEM-ish, you can work out the particulars with Vertica. It's not like you're going to just go pay retail, and they won't let you distinguish between test, and prod, and paid, and all that. They'll work with you. Just call 'em up. >> Yeah, and that's why I brought it up, because Vertica, they have a community edition, which is not neutered. It runs Eon, it's just that there are limits on clusters and storage. >> There's limits. >> But it's still fully functional though. >> So to your point, we want it multi-tenant. So it's big just because it's multi-tenant. We have hundreds of users on that (audio cuts out). >> And then, what's your partnership with Vertica like? Can we close on that and just describe that a little bit? >> What's it like? I mean, it's pleasant. >> Yeah, I mean (mumbles). >> You know what, so the important thing... Here's what's important. What's important is that I don't have to worry about that layer of our stack. When it comes to being able to get the performance I need, being able to get the economy of scale that I need, being able to get the absolute scale that I need, I've not been disappointed ever with Vertica. And frankly, being able to have ACID guarantees and everything else, like a normal, mature database that can join lots of tables and still be fast, that's also necessary at scale. And so I feel like it was definitely the right choice to start with. >> Yeah, it's interesting. I remember in the early days of big data a lot of people said, "Who's going to need these ACID properties and all this complexity of databases?" And of course, ACID properties and SQL became the killer features and functions of these databases. >> Who didn't see that one coming, right? >> Yeah, right. And then, so you guys have done a big seed round. You've raised a little over $6 million, and you've got the product market fit down. You're ready to rock, right? >> Yeah, that's right. So we're doing a launch; when this airs, it'll probably have been the day before. Basically, yeah. We've got people... Like literally in the last, I'd say, six to eight weeks, it's just been this sort of peak of interest. All of a sudden, everyone kind of gets what we're doing, realizes they need it, and we've got a solution that seems to meet expectations. So it's like... It's been an amazing... Let me just say this, it's been an amazing start to the year. I mean, at the same time, it's been really difficult for us, but more difficult for some other people that haven't been able to go to work over the last couple of weeks and so on. But it's been a good start to the year, at least for our business. So... >> Well, Larry, congratulations on getting the company off the ground, and thank you so much for coming on theCUBE and being part of the Virtual Vertica Big Data Conference. >> Thank you very much.
>> All right, and thank you everybody for watching. This is Dave Vellante for theCUBE. Keep it right there. We're covering wall-to-wall Virtual Vertica BDC. You're watching theCUBE. (upbeat music)
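Stepping back to the structuring approach Larry outlined, turning repeated messages like "X happened 5 times" into an event-type table with the numbers as a column, here is a rough, hypothetical Python sketch of the idea. The regex-based templating below is a stand-in assumption for illustration; Zebrium's actual system uses machine learning, which this does not reproduce.

    import re
    from collections import defaultdict

    # Hypothetical sketch: group log lines into event types by masking the
    # variable tokens (numbers here), then keep the extracted values as
    # columns of a per-event-type table.

    NUMBER = re.compile(r"\d+")

    def event_type(line):
        """Template of a line with numbers masked, e.g.
        'X happened 5 times' -> 'X happened <n> times'."""
        return NUMBER.sub("<n>", line)

    def structure(lines):
        """Return {event_type: rows}, one row per occurrence, holding the
        timestamp and that occurrence's numeric parameters."""
        tables = defaultdict(list)
        for timestamp, line in lines:
            params = [int(token) for token in NUMBER.findall(line)]
            tables[event_type(line)].append({"ts": timestamp, "params": params})
        return tables

    if __name__ == "__main__":
        log = [
            ("10:00", "X happened 5 times"),
            ("10:05", "connection to node 3 lost"),
            ("10:10", "X happened 6 times"),
            ("10:20", "X happened 3 times"),
        ]
        for etype, rows in structure(log).items():
            print(etype, [row["params"] for row in rows])

Once every event type is a table, the line "X happened <n> times" carries a numeric series ([5, 6, 3] here) that anomaly detection can run against, which is the shift from grepping strings to querying data that Larry describes.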
Jeff Healey, Vertica at Micro Focus | CUBEConversations, March 2020
>> Narrator: From theCUBE studios in Palo Alto and Boston, connecting with top leaders all around the world, this is theCUBE Conversation. >> Hi everybody, I'm Dave Vellante, and welcome to the Vertica Big Data Conference virtual. This is our digital presentation, wall-to-wall coverage actually, of the Vertica Big Data Conference. And with me is Jeff Healey, who directs product marketing at Vertica. Jeff, good to see you. >> Good to see you, Dave. Thanks for the opportunity to chat. >> You're very welcome. Now, I'm excited about the products that you guys announced, and you're hardcore into product marketing, but we're going to talk about the Vertica Big Data Conference. It's been a while since you guys had this. Obviously, new owner, new company, some changes, but that new company, Micro Focus, has announced that it's investing, I think the number was $70 million, into two areas. One was security and the other, of course, was Vertica. So we're really excited to be back at the virtual Big Data Conference. And let's hear it from you, what are your thoughts? >> Yeah, Dave, thanks. And we love having theCUBE at all of these events. We're thrilled to have the next Vertica Big Data Conference. Actually, it was a physical event; we're moving it online. We know it's going to be a big hit, because we've been doing this for some time, particularly with two of the webcast series we have every month. One is the Under the Hood Webcast Series, which is led by our engineers, and the other is what we call the Data Disruptors Webcast Series, which is all led by customers. So we're really confident this is going to be a big hit; we've seen the registration spike. We just hit 1,000, and we were planning on having about 1,000 at the physical event. It's growing and growing. We're going to see those big numbers, and it's not going to be a one-time thing. We're going to keep the conversation going, make sure there's plenty of best practices learning throughout the year. >> We've been at all the big BDCs, and the first ones were really in the heart of the big data movement, a really exciting time, and the interesting thing about this event is it was always sort of customers talking to customers. There weren't a lot of commercials; it was an intimate event. Of course I loved it because it was in our hometown. But I think you're trying to carry that theme obviously into the digital sphere. Maybe you can talk about that a little bit. >> Yeah, Dave, absolutely right. Of course, nothing replaces face-to-face, but everything that you just mentioned makes it special about the Big Data Conference, and you know, you guys have been there throughout and shown great support in talking to so many customers and leaders and what have you. We're doing the same thing, all right. So we had about 40-plus sessions planned for the physical event. We're going to run half of those, and we're not going to lose anything though, that's the key point. So what makes the Vertica Big Data Conference really special is that the only presenters that are allowed to present are either engineers, Vertica engineers or best practices engineers, and then customers. Customers that actually use the product. There's no sales or marketing pitches or anything like that. And I'll tell you, as far as the customer lineup that we have, we've got five or six already lined up as part of those 20 sessions: customers like Uber, customers like the Trade Desk, customers like Philips talking about predictive maintenance. So the list goes on and on.
You won't want to miss it if you're on the fence or if you're trying to figure out whether you want to register for this event. Best part about it, it's all free, and if you can't attend it live, there will be live Q&A chat on every single one of those sessions; we promise we'll answer every question, if we don't get to it live, as we always do. They'll all be available on demand. So no reason not to register and attend, or watch later. >> Thinking about the content over the years, in the early days of the Big Data Conference, of course, Vertica started before the whole big data meme really took off, and then as it took off, plugged right into it. But back then the discussion was a lot of: what do I do with big data, Gartner's three Vs, how do I wrangle it all, what's the best approach, and this stuff is, Hadoop is really complicated. Of course, Vertica was an alternative to RDBMSs that really couldn't scale or give that type of performance for analytical workloads, so you had your foot in that door. But now the conversation, and that's what's interesting about your theme, it's win big with data. Of course, the physical event was at the Encore, which is the new casino in Boston. But my point is, the conversation is no longer about how to wrangle all this data, you know, how to lower the cost of storing this data, how to make it go faster and actually make it work. It's really about how to turn data into insights and transform your organization and, quote unquote, win with big data. >> That's right. Yeah, that's a great point, Dave. And that's why, I mean, we chose the title really because it's about our customers and what they're able to do with our platform. And, we know, it's not just one platform; it's all of the ecosystem, all of our incredible partners. Yeah, it's funny, when I started with the organization about seven years ago, we were closing lots of deals, and I was following up on case studies, and it was like, okay, why did you choose Vertica? Well, the queries went fast. Okay, so what does that mean for your business? We knew we were kind of in the early adopter stage. And we were disrupting the data warehouse market. Now we're talking to our customers whose volumes are growing, growing and growing. And they really have these analytical use cases and, again, can talk to the value the entire organization is gaining from it. Like that's the difference between now and a few years ago, just like you were saying, when Vertica disrupted the database market, but also the data warehouse market. You can speak to our customers and they can tell you exactly what's happening, how it's moving the needle or really advancing the entire organization, regardless of the analytical use case, whether it's internet of things around predictive maintenance or customer behavior analytics, and they can speak confidently of it, more than just, hey, our queries went faster. >> You know, I've mentioned before the Micro Focus investment. I want to drill into that a bit, because the Vertica brand stands alone. It's a Micro Focus company, but Vertica has its own sort of brand awareness. The reason I've mentioned that is because if you go back to the early days of MPP databases, there was a spate of companies, startups that formed. And many, if not all, of those got acquired; some lived on with the codebase going into the cloud, but generally speaking, many of those brands have gone away. Vertica stays.
And so my point is that we've seen Vertica have staying power throughout. I think it's a function of the architecture that Stonebraker originally envisioned; you guys were early to market, had a lot of good customer traction, and you've been very responsive to a lot of the trends. Colin Mahony will talk about how you adopted and really embraced cloud, for example, and different data formats. And so you've really been able to participate in a lot of the new emerging waves that have come out in the market. And I would imagine some of that's cultural. I wonder if you could just address that in the context of BDC. >> Oh, yeah, absolutely. You hit on all the key points here, Dave. So, a lot of changes in the industry. We're in the hottest industry, the tech industry, right now. There's lots of competition. But one of the things we'll say in terms of, hey, who do you compete with? You compete with these players in the cloud, open source alternatives, traditional enterprise data warehouses. That's true, right. And one of the things we've stayed true to, and Colin has really kind of led the charge for the organization, is that we know who we are, right? So we're an analytical database platform. And we're constantly just working on that one sole source code base, to make sure that we don't provide a bunch of different technologies and databases, different types of technologies that you need to stitch together. This platform just has unbelievable universal capabilities, everything from running analytics at scale, to in-database machine learning with a different approach, to all the different types of deployment models that are supported, right. We don't go to companies and say, yeah, we take care of all your problems, but you have to stitch together all these different types of technologies. It's all based on that core Vertica engine, and we've expanded it to meet all these market needs. So what Colin knows, what he believes, and what he tells the team we lead with, is that we lead with that one core platform that can address all these analytical initiatives. So we know who we are, we continue to improve on it, regardless of the pivots and the drastic measures that some of the other competitors have taken. >> You know, I've got to ask you, so we're in the middle of this global pandemic with Coronavirus and COVID-19, and things change daily, by the hour, sometimes by the minute. I mean, every day you get up to something new. So you see a lot of forecasts, you see a lot of probability models, best case, worst case, likely case, even though nobody really knows what that likely case looks like. So there's a lot of analytics going on, and a lot of data that people are crunching; new data sources come in every day. Are you guys participating directly in that, specifically your customers? Are they using your technology? You can't use a traditional data warehouse for this. It's just, you know, too slow, too asynchronous, the process is cumbersome. What are you seeing in the customer base as it relates to this crisis? >> Sure, well, I mean, naturally, we have a lot of customers that are healthcare technology companies, companies like Cerner, companies like Philips, right, that are kind of leading the charge here. And of course, our whole motto has always been, don't throw away any of the data; there's value in that data, and you don't have to with Vertica, right? So you've got petabyte-scale types of analytics across many of our customers. Again, just a few years ago, we called those customers the petabyte club.
Now a majority of our large enterprise software companies are approaching those petabyte volumes. So it's important to be able to run those analytics at that scale and that volume. The other thing we've been seeing from some of our partners is really putting that analytics to use with visualizations. So one of the customers that's going to be presenting as part of the Vertica Big Data Conference is Domo. Domo has a really nice, stout demo around being able to track the coronavirus outbreak, where care is being given, and things like that in a visual manner, and you're seeing more of those. Well, Domo embeds Vertica, right. So that's another customer of ours. So think of Vertica as that embedded analytical engine to support those visualizations, so that just about anyone in the world can track this. And hopefully, as we see over time, cases go down and we overcome this. >> Talk a little bit more about that. Because again, the BDC has always been engineers presenting to audiences. You just mentioned the demo by Domo; you have a lot of brand names that we've interviewed on theCUBE before, but maybe you could talk a little bit more about some of the customers that are going to be speaking at the virtual event, and what people can expect. >> Sure, yeah, absolutely. So we've got Uber presenting. Just a quick fact around Uber: really, the analytical data warehouse is all Vertica, right, and it works very closely with open source or what have you. A quick stat on Uber: 14 million rides per day. What Uber is able to do is connect the riders with the drivers so that they can determine the appropriate pricing. So Uber is going to be a great session that everyone will want to tune in on. Others like the Trade Desk, right, a massive ad tech company: 10 billion ad auctions daily, it may even be per second or per minute. The amount of scale and analytical volume that they have, that they are running queries across, can really only be accomplished with a few platforms in the world, and that's Vertica. That's another hot one, the Trade Desk. Philips is going to be presenting IoT analytical workloads. We're seeing more and more of those across not only telematics, which you would expect within automotive, but predictive maintenance, which cuts across all the original manufacturers, and Philips has a long history of being able to handle sensor data and apply it to those business cases where you can improve customer satisfaction and lower costs related to services. So around their MRI machines and predictive maintenance initiative, again, Vertica is kind of that heartbeat, that analytical platform driving those initiatives. The list goes on and on. Again, the conversation is going to continue with the Data Disruptors and Under the Hood webcast series. Any customers that weren't able to present, and we had a few that just weren't able to do it, have already signed up for future months. So we're already booked out six months, and you're going to hear more and more customer stories from Vertica.com. >> Awesome, and we're going to be sharing some of those on theCUBE as well. The BDC has always been an intimate event, one of my favorites, a lot of substance, and I'm sure the online version, the virtual digital version, is going to be the same. Jeff Healey, thanks so much for coming on theCUBE and giving us a little preview of what we can expect at the Vertica BDC 2020. >> You bet. >> Thank you. >> Yeah, Dave, thanks to you and the whole CUBE team. 
Appreciate it. >> All right, and thank you for watching, everybody. Keep it right here for all the coverage of the virtual Big Data Conference 2020. You're watching theCUBE. I'm Dave Vellante, we'll see you soon.
Carey James, Jason Schroedl, & Matt Maccaux | Big Data NYC 2017
>> Narrator: Live from Midtown Manhattan, it's theCUBE, covering BigData New York City 2017. Brought to you by SiliconANGLE Media and its ecosystem sponsors. >> Hey, welcome back everyone, live in New York, it's theCUBE coverage, day three of three days of wall-to-wall coverage of BigData NYC, in conjunction with Strata Data right around the corner, a separate event from ours that we've been covering. It's our eighth year; eight years covering Hadoop's ecosystem, now big data. This is theCUBE, I'm John Furrier. Our next guests expand on the segment we just had with Matt from Dell EMC, really a consultant on the front lines. We've got Jason from BlueData and Carey from BlueTalon, two separate companies but with blue in the name, team blue. And of course, Matt from Dell EMC. Guys, welcome back to theCUBE, and let's talk about the partnerships. I know you guys have a partnership: Dell EMC leads the front lines, mostly with the customer base, and you guys come in with the secret sauce to help that solution, which I want to get to in a minute. But the big theme here this week is partnerships. And before we get into the relationship that you guys have, I want you to talk about the changes in the ecosystem, because we're seeing a couple of key things. Open source, one, and it's winning; it continues to grow. The Linux Foundation, which we cover, pointed out that the exponential growth is going to be in open-source software, with the lines of code headed toward billions in the next 10 years. So more onboarding, a clear development path. Ecosystems have worked. Now they're coming into the enterprise with suppliers, whether it's consulting, front-end, or full-stack developers coming together. How do you see ecosystems playing on both the supplier side and also the customer side? >> So we see, from the supplier side and from the customer side as well, and it kind of drives both of those conversations together, that you had the early days of, I don't want vendor lock-in, right, I want to have a disparate, virtual cornucopia of tools in the marketplace. And each individual shop was trying to develop and implement those on their own. What you're now seeing is that companies still want that diversity in the tools they utilize and work with, but they don't want the complication of having to deliver all those tools themselves. So they're looking more for partners that can actually bring an ecosystem to the table, where it's a loose coupling of vendors, but one party has the customer's best interest in mind and is able to drive through those pieces. And that's why we're driving towards partnerships, 'cause we can be a point solution, we can solve a lot of pieces, but by being part of an ecosystem with a partner that can actually deliver business value to the customer, that's where we're starting to see the traction, the movement, and the wins for us as an organization. >> BlueData, you guys have had very big successes: big data as a service, Docker containers, this is the programmer's nirvana. Infrastructure plus code, that's the DevOps ethos going mainstream. Your thoughts on partnering, 'cause you can't do it alone. >> Yeah, I mean, for us, speaking of DevOps, our software platform provides a solution for bringing a DevOps approach to data science and big data analytics. 
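To picture the "Docker containers for big data" pattern Jason is describing, here is a minimal sketch of spinning up an ephemeral, network-isolated Spark environment on whatever host Docker happens to be running on, on-prem or in a cloud VM. It is an illustration of the general technique only; the image name, environment variable, and naming scheme are assumptions, not BlueData's actual API, and a real platform adds clustering, storage mounts, security, and quotas.

```python
# Minimal sketch: one self-service, per-tenant Spark container with its own
# bridge network for coarse isolation. Requires the docker Python SDK and a
# running Docker daemon. The bitnami/spark image and SPARK_MODE variable are
# assumptions for illustration.
import docker

def spin_up_environment(tenant: str, image: str = "bitnami/spark:latest"):
    client = docker.from_env()
    # One bridge network per tenant gives basic network segmentation.
    network = client.networks.create(f"{tenant}-net", driver="bridge")
    # Launch a single-node environment; a real platform would launch a
    # master plus workers and wire in data access and authentication.
    return client.containers.run(
        image,
        name=f"{tenant}-spark",
        network=network.name,
        environment={"SPARK_MODE": "master"},
        detach=True,
    )

def tear_down(tenant: str):
    client = docker.from_env()
    client.containers.get(f"{tenant}-spark").remove(force=True)
    for net in client.networks.list(names=[f"{tenant}-net"]):
        net.remove()
```

Something like spin_up_environment("finance-team") gives a team a sandboxed engine in seconds, and tear_down() reclaims it, which is the ephemeral-cluster behavior discussed later in this segment.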
And it's a much more streamlined, elastic, and agile approach to big data analytics and data science. But to your point, we're partnered with Dell EMC because they bring together an entire solution that delivers an elastic platform for secure multi-tenant environments, for data science teams and analytics teams, across a variety of different open source tool sets. So there is a large ecosystem of open source tools out there, from Hadoop to Spark to Kafka to a variety of different data science, machine learning, and deep learning tool sets, and we provide through our platform the ability to dockerize all of those environments and make them available through self-service to the data science community, so they can get up and running quickly and start building their models and running their algorithms. And for us, it's on any infrastructure. So we work closely with Dell EMC to run it on Isilon and their infrastructure, Dell-powered servers, but you can also run it in a hybrid cloud architecture. So you could run it on Azure, and now GCP and AWS. >> So this is the agility piece for the developer. They get a lot of agility, they get their security. Dell EMC has all the infrastructure side, so you've got to partner together. Matt, pull this together. The customer wants a single pane of glass, or however you want to look at it; they don't want to deal with the nuances. You guys have got to bring it all together. They want it to work. Now, the theme I hear at BigData New York is integration is everything, right? If it doesn't integrate, the plumbing's not working. How important is it for the customer to have this smooth, seamless experience? >> It's critical. They have to be able to believe that it's going to be a seamless experience, and these are just two partners in the ecosystem. When we talk to enterprise customers, they have other vendors. They have half a dozen or a dozen other vendors solving big data problems, right? The Hadoop analytic tools, on and on and on. And when they choose a partner like us, they want to see that we are bringing other partners to the table that are going to complement or enhance capabilities that they have, but they want to see two key things. And we need to see the same things as well when we look at our partnerships. We want to see APIs, open APIs that are well-documented, so that we know these tools can play with each other. And two, these have to be organizations we can work with. At the end of the day, a customer does business with Dell EMC because they know we're going to stand behind whatever we put in front of them. >> John: They get a track record too, you're pretty solid. >> Yep, it is-- >> But I want to push on the ecosystem, not you guys; it's critical, but one thing that I've seen over my 30 years in the enterprise is ecosystems: you see bullshit and you see the real deal, right? A lot of customers are scared now, with all this FUD and new technology; it's hard to squint through what the BS is in an ecosystem. So how do you do ecosystems right in this new market? 'Cause like you said, it's not APIs, that's kind of technical; philosophy-wise you can't do the barney deals. You had Pat Gelsinger standing up on stage at VMworld, and the CEO of AWS basically flew down to stand in front of all of VMworld's customers and say, we're not doing a barney deal. Now, he didn't say barney deal, that's our old term. He said, it's not an optical deal we're doing with VMware. We got your back. 
He didn't say that, but that's my interpretation, that's what he basically said. The CEO of AWS said that. That's a partner, you know what I'm saying? So, some deals are, okay, we've got a deal on paper. What's the difference? How do you run an ecosystem, in your opinion? >> Yeah, it's not trivial. It's not an easy thing. It takes an executive, at that level, it takes a couple of executives coming together-- >> John: From the top, obviously. >> Committing. It's not just money, it's reputation, right? If you're at that level, it's about reputation, which then trickles down to the company's reputation. And so within the ecosystem, we want to sort of crawl, walk, run. Let's do some projects-- >> So you're saying reputation in communities is the number one thing. >> I think so. People are not going to go... so you will always have the bleeding edge. Someone's going to go play with a tool, they're going to see if it works-- >> Wow, reputation's everything. >> Yeah. If it fails, they're going to tell, what is the saying, if something bad happens you tell twelve people-- >> All right, so give them a compliment. What does BlueTalon do great for you guys? Explain their talent in the ecosystem. >> So BlueTalon's talent in the ecosystem, other than being just great people, we love Carey, is that they-- >> I'll get you to say something bad about him soon, but give him the compliment first. >> They have simplified the complexity of doing security, policy, and role-based security for big data. So regardless of where your data lives, regardless of whether it's Hadoop, Spark, Flink, Mongo, AWS, you define a policy once. And so if I am in front of the chief governance officer, my infrastructure doesn't have a value prop for them, but theirs does, right? The legal team, when we have to do proposals, this is what gets us through legal and compliance for GDPR: it's that centralized control that is so critical to the capability we provide for big data. If you sprawl your data everywhere, and we know data sprawls everywhere-- >> So you can rely on them, these guys. >> Absolutely. >> All right, BlueData, give them a compliment, where do they fit? >> So they have solved the problem of deploying containerized big data environments in any cloud. And the notion of ephemeral clusters for big data workloads is actually really, really hard to solve. We've seen a lot of organizations attempt to do this, and we see frameworks out there, like Kubernetes, that people are trying to build on. These guys have fixed it. We have gone through the most rigorous security audits at the biggest banks in the world, and they have signed off, because with the network segmentation and the data segmentation, it just works. >> I think I'm running a presidential debate; now you've got to say something nice about him. No, I mean, Dell EMC, we know what these guys do. But for you guys, how big is BlueTalon, company-wise? I mean, you guys are not small, but you're not massive either. >> We're not small, but we're not massive, right. We're probably around 40 resources globally, and so from our perspective, we're-- >> John: That's a great deal, working with a big gorilla in Dell EMC; they've got a lot of market share, big muscle? >> Exactly, and so for us, like we talked about earlier, right, the big thing for us is ecosystem functions. 
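The "define a policy once" idea Matt just credited to BlueTalon can be pictured with a small sketch: one central rule set and one decision function that each engine's enforcement hook calls, so a rule never has to be re-authored per platform. Every name below is a hypothetical illustration of the pattern, not BlueTalon's actual API.

```python
# Minimal sketch: centralized policies, with thin per-engine enforcers that
# all consult the same decision point. Shapes and names are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class Policy:
    resource: str              # e.g. a table.column the rule protects
    allowed_roles: frozenset   # roles that may see it in the clear
    fallback: str              # what everyone else gets: "mask" or "deny"

# Authored once, centrally; Hadoop, Spark, Mongo, etc. do not carry copies.
POLICIES = {
    "customers.ssn": Policy("customers.ssn", frozenset({"compliance"}), "mask"),
    "customers.email": Policy("customers.email", frozenset({"marketing"}), "deny"),
}

def decide(resource: str, user_roles: set) -> str:
    """Central decision point that every engine's enforcement hook calls."""
    policy = POLICIES.get(resource)
    if policy is None:
        return "allow"                 # no rule defined for this resource
    if user_roles & policy.allowed_roles:
        return "allow"
    return policy.fallback             # "mask" or "deny"

print(decide("customers.ssn", {"analyst"}))     # -> mask
print(decide("customers.ssn", {"compliance"}))  # -> allow
```

The design point is that enforcement is distributed, with each engine applying the decision locally, while the policy itself stays in one governed place.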
We do what we do really well, right: we build software that provides unified access control across multiple platforms and multiple distributions, whether it be private cloud, on-prem, or public cloud. And for us, again, it's great that we have the software, it's great that we can do those things, but if we can't actually help customers use that software to deliver value, it's useless. >> Do you guys go to market together, or do you just hold hands in front of the customer and bundle products? >> No, we go to market together. A lot of our team in enablement is not enabling our customers; it is enabling Dell EMC on the use of our software and how to do that. So we actually work with Dell EMC to train and work-- >> So you're a tight partner. There's certification involved, close relationships, you're not mailing it in. >> And then we're also involved on the customer side as well. It's not like we go, okay great, now it's sold, and throw up our hands and walk away. >> John: Well, they're counting on you for that. >> They're counting on us for the specific pieces, but we're also working with Dell EMC so that we can get that breadth, right, in their reach, so that they can go confidently to their customers and understand where we fit and where we don't fit. Because we're not everything to everybody, right, and so they have to understand those pieces to know when it works right and what the best practices are. And so again, we're 40 people; they're, I forget, there were 80,000 at one point? Maybe even more than that? But even in the services arm, there are several thousands of people in the-- >> What's the whole point of ecosystems you're getting at here? Point at the critical thing. You've got a big piece of the puzzle; it's not just that they're bundling you in. You're an active part of that, and it's an integration world, right, so he needs to rely on you to integrate with his systems. >> Yeah, we have to integrate with the other parts of the ecosystem too, so it really is a three-way integration from this perspective, where they do what they do really well, we do what we do, and they're complementary to each other, but without the services and the glue from Dell EMC-- >> So you bring Dell EMC into the deals too? >> We do. We bring Dell EMC into deals, and Dell EMC sells us through a reseller agreement, so we actually work jointly: either we bring them to a deal we've already found, or we bring services to them, or we go out and do joint development of customers. So we come out and help with the sales process and cycles to actually understand, is there a fit or is there not a fit? So it's not one-size-fits-all; it's not just, yes, we've got something on paper that we can sell you every once in a while. It really is a way to develop an ecosystem to deliver value to the customer. >> All right, so let's talk about the customer mindset real quick. How far along are they? I really don't know much, 'cause I'm really starting to probe in this area: how savvy are they about the partnership levels? I mean, you disclose it, you're transparent about it, but are customers getting that the partnering is very key? Are they drilling in, asking tough questions? Are you kind of getting them educated one way, or are they savvy about it? 
They may have been doing partners in-house, but remember, the enterprise had a generation of down-to-the-bone cutting, outsource everything, consolidation, and then, you know, go back to around 2010, the uplift in reinvestment hit, so we're kind of in this renaissance right now. So, thoughts? >> The partnership is actually the secret sauce that's part of our sales cycle. When we talk about big data outcomes and enabling self-service, customers assume, oh, okay, you guys built some software, you've got some hardware. And then when we double-click into how we make this capable, we say, oh, well, we partner with BlueTalon and BlueData and others, and they go, wait a minute, that's not your software? No, no, we didn't build that. We have scoured the market and we've found partners that we work with and we trust, and all of a sudden you can see their shoulders relax, and they realize that we're not just there to sell them more kit. We're actually there to help them solve their problems. And it is a game changer, because they deal with vendors every day. Software Vendor X, Software Vendor Y, Hardware Vendor Z. And so to have a company that they already have good relationships with bring more capabilities to them, the guard comes down and they say, okay, let's talk about how we can make this work. >> All right, so let's get to the meat of the partnership, which I want to get to 'cause I think that's fundamental. Thanks for sharing perspective on the community piece. We've been on it; we're a community brand ourselves. We're not a closed garden, we're not about restricting and censoring people at events, that's not what we're about. So you guys know that, so I appreciate you commenting on the community there. The Elastic Data Platform you guys are talking about, it's a partnership deal. You provide the EPIC software, and you guys are providing some great security in there. What is it about, what's the benefit? You're leading with the product, so take a minute to explain the product and then the roles. >> Yeah, so the Elastic Data Platform is a set of capabilities that is meant to help our enterprise customers get to that next level of self-service: data science as a service, on any cloud, with any tools, in a security-controlled manner. That's what Elastic Data Platform is. And it's meant to plug into the customer's existing investments and their existing tools and augment them, and through our services arm we tie these technologies together using their open APIs. That's why that's so critical for us, and we bring that value back to our customers. >> And you guys are providing the EPIC software? What is EPIC software? I mean, I love epic software, that's an epic... I hope it's not an epic fail, so an epic name, but epic-- >> Elastic Private Instant Clusters. It's actually an acronym for what it provides for our customers. >> John: So you're saying that EPIC stands for-- >> Elastic Private Instant Clusters. So it can run in a private cloud environment on your on-prem infrastructure, but as I said before, it can run in a hybrid architecture on the public cloud as well. But yeah, I mean, we're working closely with the Dell EMC team. They're an investor; we work closely with their services organization, their server organization, the storage organization, but they really are the glue that brings it all together. From services to software to hardware, they provide the complete solution to the customers. 
So, as I think Matt-- >> John: Multi-tenancy is a huge deal, multi-tenancy's a huge deal. >> Absolutely, yeah. Also the ability to have logical isolation between each of those different tenants, for different data science teams, different analyst teams. That's particularly true at large financial services organizations like Barclays, who spoke yesterday, as Matt alluded to earlier. They talked about the need to support a variety of different business units who each have their own unique use cases, whether it's batch processing with Hadoop, or real-time streaming and fast data with Spark, Kafka, and NoSQL databases, or whether it's deep learning and machine learning. Each of those different tenants has different needs, and so you can spin up containers using our solution for each of those tenants. >> John: Yeah, that's been a big theme this week too, among so many little things; this one relates to that one. It's the elastic nature of how people want to manage the provisioning of more resources. So, here's what we see. They're using collective intelligence, data; hey, they're data science guys, they figured it out! Whatever the usage is, they can do a virtual layer, if you will, and then based upon the use they can double down. So let the users drive it, real collaborative; that seems to be a big theme, so this helps there. The other theme has been the centralized piece. This is the GDPR hanging over everyone's head, and even though that's more of a threat, it's the hammer or the guillotine, however you look at it, there's also more enablement around centralization. So it's not just the threat of that; it's other things that are benefiting. >> Right, it's more than just the threat of GDPR and being compliant from those perspectives, right? The other big portion of this is, you do want to provide self-service. So the key to self-service is: that's great, I can create an environment, but if it takes me a long time to get data to that environment to actually be able to utilize it, or to protect the data that's in that environment by having to rewrite policies in a different place, then you don't get the benefit, right, the acceleration of the self-service. So having centralized policies with distributed enforcement gives you that elastic ability, right? Again, we can deploy the central engines on-premises, but you can protect data that's in the cloud or data that's in a private cloud, so as companies move data for their different workloads, we can put the same protections with them, and it goes immediately with them, so you don't have to manage it in multiple places. It's not like, oh, did I remember to put that rule over in this system? Oh, no, I didn't. Oh, and guess what just happened to me? You know, I got smacked with a big fine because I wasn't compliant. So compliance-- >> How about audit, too? I mean, are you checking the audit side too? >> Yeah, so audit's a great portion of that, and we do audit for a couple of reasons. One is to make sure that you are compliant, but two is to make sure you actually have the right policies defined. Are people accessing the data the way you expect them to access that data? So that's another big portion for us, and what we do from an audit perspective is data usage lineage: we actually tell you what the customer, what the user, was trying to do. 
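The audit capability being described, recording what each user tried to do and watching the usage pattern, can be sketched in a few lines; it also sets up the denied-access scenario that comes next. The event shape and threshold here are assumptions for illustration, not any product's actual log format.

```python
# Minimal sketch: flag resources where several distinct users are being
# denied access, the usage-pattern anomaly discussed in this exchange.
audit_log = [
    # (user, resource, decision) records emitted at every access attempt
    ("u1", "customers.ssn", "deny"),
    ("u2", "customers.ssn", "deny"),
    ("u3", "customers.ssn", "deny"),
    ("u4", "orders", "allow"),
]

def denied_hotspots(events, threshold: int = 3):
    """Resources with at least `threshold` distinct users denied access."""
    denied = {}
    for user, resource, decision in events:
        if decision == "deny":
            denied.setdefault(resource, set()).add(user)
    return {r: len(users) for r, users in denied.items() if len(users) >= threshold}

print(denied_hotspots(audit_log))  # -> {'customers.ssn': 3}
```

A hotspot either means the policy is wrong and legitimate users are being blocked, or the attempts themselves are the anomaly worth investigating.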
So if a customer's trying to access the data, and you see a large group trying to access a certain set of data but being denied, now you can look and say, is that truly correct? Do I want them not being-- >> John: Well, Equifax, that thing was being phished out over months and months. Not just four; that thing has been phished over 10 times. In fact, state-sponsored actors had franchises inside that organization. So they were in the VPN. So this is where the issues come in. Okay, let's just say that happened again. You would have flagged it. >> We'd flag it. >> You would have seen the access pattern and said, okay, a lot of people are cleaning us out. >> Yep, while it's happening. Right, so you get to see that usage, the lineage of the usage of the data, so you get to see that pattern as well. Not only who's trying to access, all right, 'cause protecting the perimeter, as we all know, is no longer viable. So we actually get to watch the usage pattern, so you can detect an anomaly in that type of system, and you can quickly change policies to shut down that gap, and then watch to see what happens, see who's continuing to try to hit it. >> Well, it's been a great conversation. Love that you guys are on, and great to see the Elastic Data Platform come together through the partnerships, again. As you know, we're really passionate about highlighting and understanding more about the community dynamic as it becomes more than just socialization; it's a business model for the enterprise, as it was in open source. We'll be covering that. So I'd like to go around the panel here just to end this segment. Share something that someone might not know is going on in the industry that you want to point out: an observation, an anecdote that hasn't been covered, hasn't been surfaced. It could be a haymaker, it could be something anecdotal, a personal observation. In the big data world, BigData NYC this week or beyond, what should people know about that may or may not be covered out there? >> Well, I think people pretty much should know about this one, right, but four or five years ago Hadoop was going to replace everything in the world. And two, three years ago the RDBMS groups were saying Hadoop would never make it out of the science fair. Right, we're in a world now where neither is true. It's somewhere in between. Hadoop is going to remain and continue, and the RDBMS is also going to continue. So you need to look at ecosystems that can actually allow you to cover both sides of that coin, which is what we're talking about here; those types of tools are going to continue forward together. So you have to look at your entire ecosystem and move away from siloed functions to how you actually look at an entire data protection and data usage environment. >> Matt? >> I would say that technology adoption in the enterprise is outstripping the organization's ability to keep up with it. So as we deploy new technologies, tools, and techniques to do all sorts of really amazing things, we see the organization lagging in its ability to keep up. And so policies and procedures, operating models, whatever you want to call that, put it under the data governance umbrella, I suppose. 
If those don't keep up, you're going to end up with an organization that is mismatched with the technology that's been put into place, and ultimately you can end up with a massive compliance problem. Now, that's the worst case. But even in the best case, you're going to have a really inefficient use of your resources. My favorite question to ask organizations: let's say you could put a timer on one of the data science sandboxes. What happens when the timer goes off and the data science is not done? And you've got a line of people waiting for resources. What do you do? How does the organization respond to that? It's a really simple question, but the answer's going to be very nuanced. So that's the policy, the operating model stuff, that we're talking about, that we've got to think about when we enable self-service and self-security; those things have to come hand-in-hand. >> That's the operational thinking that needs to come through. >> Okay, Jason? >> Yeah, I think even for us, I mean, this has been happening for some time now, but I think there still is this notion that the traditional way to deploy Hadoop and other big data workloads on-prem is bare metal, and that's the way it's always been done. Or, you can run it in the cloud. But what we're seeing now, what we've seen evolve over the past couple of years, is that you can run your on-prem workloads using Docker containers in a containerized environment. You can have this cloud-like experience on-prem, but you can also provide the ability to move those workloads, whether they're on-prem or in the cloud. So you can have this hybrid approach and multi-cloud approach. So I think that's fundamentally changing; it's a new dynamic, a new paradigm for big data, either on-prem or in the cloud. It doesn't have to be on bare metal anymore. And we get the same, we've been able to get-- >> It's on-prem, people want on-prem, that's where the action is, and cloud, no doubt, but right now it's the transition. Hybrid cloud's definitely going to be there. I guess my observation is the tool shed problem. You know, I said it earlier today: you don't want a tool shed full of tools you don't use anymore, or to buy a hammer that wants to turn into a lawn mower 'cause the vendor changed, pivoted. You've got to be careful what you buy, the tools. So don't think like a tool; think like a platform. And I think having a platform mentality, understanding the system, or the operating environment as you were getting at, really is a fundamental exercise that most decision makers should think about. 'Cause again, your relationship with the Elastic Data Platform proves that this operating environment is evolving; it's not about the tool. The tool has to be enabled, and if the tool is enabled in the platform, it should have a data model that falls into place; no one should have to think about it, you get the compliance, you get the Docker containers. So don't buy too many tools, and if you do, make sure they're clean and in a clean tool shed! You've got a lawn mower, I guess that's the platform. Bad analogy, but you know, I think tools have been the rage in this market, and now platforming it is something we're seeing more of. So guys, thanks so much, appreciate it. The Elastic Data Platform by Dell EMC, with the EPIC platform from BlueData and BlueTalon providing the data governance and compliance. Great stuff. With GDPR coming, BlueTalon, you guys have got a bright future, congratulations. 
All right, more CUBE coverage after this short break, live from New York, it's theCUBE. (rippling music)
Santhosh Mahendiran, Standard Chartered Bank | BigData NYC 2017
>> Announcer: Live, from Midtown Manhattan, it's theCUBE, covering Big Data New York City 2017. Brought to you by SiliconANGLE Media and its ecosystem sponsors. (upbeat techno music) >> Okay, welcome back, we're live here in New York City. It's theCUBE's presentation of Big Data NYC, our fifth year doing this event in conjunction with Strata Data, formerly Strata Hadoop, formerly Strata Conference, formerly Hadoop World; we've been there from the beginning. Eight years covering Hadoop's ecosystem, now big data. This is theCUBE, I'm John Furrier. Our next guest is Santhosh Mahendiran, who is the global head of technology analytics at Standard Chartered Bank. A practitioner in the field, here getting the data, checking out the scene, giving a presentation on your journey with data at a bank, which is big financial services, obviously an adopter. Welcome to theCUBE. >> Thank you very much. >> So we always want to know what the practitioners are doing, because at the end of the day there are a lot of vendors selling stuff here, and everyone's got their story. At the end of the day, you've got to implement. >> That's right. >> And one of the themes is data democratization, which sounds warm and fuzzy: collaborating with data, this is all good stuff, you feel good and you move into the future. But at the end of the day, it's got to have business value. >> That's right. >> And as you look at that, how do you look at the business value? 'Cause you want to be on the bleeding edge, you want to provide value and get that edge operationally. >> That's right. >> Where's the value in data democratization? How did you guys roll this out? Share your story. >> Okay, so let me start with the journey first, before I come to the value part of it, right? So, data democratization is an outcome, but the journey is something we started three years back. We had some guiding principles to start our journey. The first was to say that we believed in the three S's, which is speed, scale, and it should be really, really flexible and super fast. So one of the challenges we had was that our historical data warehouses were becoming entirely redundant, because they were RDBMS-centric and extremely disparate, and we weren't able to scale up to meet the demands of managing huge chunks of data. So the first step we took was to re-pivot and say, okay, let's embrace Hadoop. And what we mean by embracing is not just putting in a data lake; we said that all our data will land in the data lake. This journey started in 2015, and we now have close to 80% of the bank's data in the lake. It is end-of-day data right now, this data flows in on a daily basis, and we have consumers who feed off that data. Now coming to your question about-- >> So the data lake's working? >> The data lake is working, up and running. >> People like it; you've got a good spot, batch 'em all, you throw everything in the lake. 
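The "everything lands in the lake, end of day" flow he describes can be pictured as a date-partitioned landing zone that each source system writes into once per day and that downstream consumers read by partition. This is a minimal sketch of the general pattern only; the paths and source names are assumptions, not Standard Chartered's actual layout.

```python
# Minimal sketch: end-of-day batch landing into a date-partitioned lake.
from datetime import date
from pathlib import Path
import shutil

LAKE_ROOT = Path("/data/lake/raw")

def land_end_of_day(source_system: str, extract_file: Path, business_date: date):
    """Copy one system's end-of-day extract into its dated lake partition."""
    partition = LAKE_ROOT / source_system / f"dt={business_date.isoformat()}"
    partition.mkdir(parents=True, exist_ok=True)
    shutil.copy2(extract_file, partition / extract_file.name)
    return partition

# Run once per source after close of business, e.g.:
# land_end_of_day("core-banking", Path("/staging/core_eod.csv"), date.today())
```

Consumers and preparation tools then work off whole partitions rather than waiting on one-off extracts.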
Typically, data has always been either delayed or denied in most of the cases to end-users and we have end-users waiting for the data but they don't get access to the data. It was done because primarily the size of the data was too huge and it wasn't flexible enough to be shared with. So how did tools like Paxata and the data lake help us? So what we did with data democratization is basically to say that "hey we'll get end-users to access the data first in a fast manner, in a self-service manner, and something that gives operational assurance to the data, so you don't hold the data and then say that you're going to get a subset of data to play with. We'll give you the entire set of data and we'll give you the right tools which you can play with. Most importantly, from an IT perspective, we'll be able to govern it. So that's the key about democratization. It's not about just giving them a tool, giving them all data and then say "go figure it out." It's about ensuring that "okay, you've got the tools, you've got the data, but we'll also govern it," so that you obviously have control over what they're doing. >> So now you govern it, they don't have to get involved in the governance, they just have access? >> No they don't need to. Yeah, they have access. So governance works both ways. We establish the boundaries. Look at it as a referee, and then say that "okay, there are guidelines that you don't," and within the datasets that key people have access to, you can further set rules. Now, coming back to specific use cases, I can talk about two specific cases which actually helped us to move the needle. The first is on stress testing, so being a financial institution, we typically have to report various numbers to our regulators, etc. The turnaround time was extremely huge. These kind of stress testing typically involve taking huge amount-- >> What were some of the turnaround times? >> Normally it was two to three weeks, some cases a month-- >> Wow. >> So we were able to narrow it down to days, but what we essentially did was as with any stress testing or reporting, it involved taking huge amounts of data, crunching them and then running some models and then showing the output, basically a number of transformations involved. Earlier, you first couldn't access the entire dataset, so that we solved-- >> So check, that was a good step one-- >> That was step one. >> But was there automation involved in that, the Paxata piece? >> Yeah, I wouldn't say it was fully automated end-to-end, but there was definitely automation given the fact that now you got Paxata to work off the data rather than someone extracting the data and then going off and figuring what needs to be done. The ability to work off the entire dataset was a big plus. So stress testing, bringing down the cycle time. The second one use case I can talk about is again anti-money laundering, and in our financial crime compliance space. We had processes that took time to report, given the clunkiness in the various handoffs that we needed to do. But again, empowering the users, giving the tool to them and then saying "hey, this"-- >> How about know your user, because we have to anti-money launder, you need to have to know your user base, that's all set their too? >> Yeah. So the good part is know the user, know your customer, KYCs all that part is set, but the key part is making sure the end-users are able to access the data much more earlier in the life cycle and are able to play with it. 
In the case of anti-money laundering, again, a turnaround of three weeks to four weeks was shortened down to a question of days by giving them tools like Paxata, again in a structured manner, and one which we're able to govern. >> You control this, so you knew what you were doing, but you let their tools do the job? >> Correct. So look at it this way: typically, the data journey has always been IT-led. It has never been business-led. Look at what happens: you source the data, which is IT-led; then you model the data, which is IT-led; then you prepare and massage the data, which is again IT-led; and then you have tools on top of it, which is again IT-led. So the end-users get it only after the fourth stage. Now look at the generations within that. All of these life-cycle stages, apart from sourcing the data, which is typically an IT task, need to be done by the actual business users, and that's what we did. That's the progression of the generations; we're now in the third generation, as I call it, where our role is just to source the data and then say, yeah, we'll govern it, and the preparation-- >> It's really an operating system. We were talking with Aaron, Alation's co-founder; we used the analogy of a car, how this show used to be like a car show that was all about the engine, what's in the engine and the technology, and then it evolved every year. Now we're talking about the cars, now we're talking about the driver experience. At the end of the day, you just want to drive. You don't really care what's under the hood. You do but you don't, but there are those people who do care what's under the hood, so you can have the best of both worlds. You've got the engines, you set up the infrastructure, but ultimately, on the business side, you just want to drive. That's what you're getting at? >> That's right. The time-to-market and the speed to empower the users to play around with the data, rather than IT churning the data and confining access to it; that's a thing of the past. So we want more users to have faster access to data, but at the same time we govern it in a seamless manner. The word governance is still important, because it's not about just giving out the data. >> And seamless is key. >> Seamless is key. >> 'Cause if you have democratization of data, you're implying that it is community-oriented, meaning it's available, with access privileges all transparent or abstracted away from the users. >> Absolutely. >> So here's the question I want to ask you. There's been talk, I've been saying it for years, going back to 2012, that an abstraction layer, a data layer, will evolve, and that'll be the real key. And here at this show, I heard things like "intelligent information fabric that is business- and consumer-friendly." Okay, it's a mouthful, but intelligent information fabric in essence talks about an abstraction layer-- >> That's right. >> That doesn't really compromise anything but gives some enablement, creates some enabling value-- >> That's right. >> For software. How do you see that? >> As the word suggests, the earlier model was trying to build something for the end-users that was not end-user friendly. Meaning, let me just give you a simple example. You had a data model that existed. Historically, the way we have approached using data is to say, hey, I've got a model, let's fit the data into this model, without actually asking, does this model actually serve the purpose? 
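One concrete way to picture that question, a model that not only stores a value but knows what the value means, is the zipcode example Santhosh gives next: detect at ingest which country a zipcode belongs to and whether it is valid. A minimal sketch, with deliberately simplified patterns as assumptions rather than a complete validation ruleset:

```python
# Minimal sketch of "intelligent" type detection: a zipcode field that knows
# its country and validity, not just that it is a placeholder for a zipcode.
import re

ZIP_PATTERNS = {
    "US": re.compile(r"^\d{5}(-\d{4})?$"),   # 90210 or 90210-1234
    "IN": re.compile(r"^[1-9]\d{5}$"),       # six digits, no leading zero
    "SG": re.compile(r"^\d{6}$"),            # six digits
}

def classify_zipcode(value: str):
    """List the countries whose format the value matches, or None."""
    matches = [c for c, pat in ZIP_PATTERNS.items() if pat.match(value)]
    return matches or None

print(classify_zipcode("90210"))   # ['US']
print(classify_zipcode("560001"))  # ['IN', 'SG']: format alone is ambiguous
print(classify_zipcode("ABC"))     # None, flag as invalid at ingest
```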
You abstracted the model to a higher level. The whole point about intelligent data is to say... I'll give you a very simple analogy. Take the zip code. A zipcode in the US is very different from a zipcode in India, and very different from a zipcode in Singapore. So if I had the ability, as my data comes in, to say, I know it's a zipcode, and this zipcode belongs to the US, this one belongs to Singapore, and this one belongs to India; and more importantly, if I can rev it up a notch and say, this belongs to India, and this zipcode is valid; look at where I'm going with that intelligence. In the earlier model, you could only say, yeah, this is a placeholder for a zipcode. Now that makes sense, but what are you doing with it? >> Being a relational database model, it's just a field in a schema; you're taking it and abstracting it and creating value out of it. >> Precisely. So what I'm actually doing is accelerating adoption; I'm making it simpler for users to understand what the data is. So as a user, I don't need to figure out, I've got a zipcode, now is it a Singapore zipcode, an India zipcode, or what? >> So all this automation... Paxata's got a good system; we'll come back to the Paxata question in a second, I do want to drill down on that. But the big thing that I've been seeing at the show, and again, Dave Vellante, my partner, co-CEO of SiliconANGLE, we talk about this all the time. He's less bullish on Hadoop than I am. Although I love Hadoop, I think it's great, but it's not the end-all, be-all. It's a great use case. We were critical early on, and the thing we were critical of was that too much time was being spent on the engine and how things were built, not on the business value. So there was like a lull period in the business where it was just too costly-- >> That's right. >> Total cost of ownership was a huge, huge problem. >> That's right. >> So now, today, how did you deal with that? And are you measuring the TCO, or total cost of ownership? 'Cause at the end of the day it's time to value: can you be up and running in 90 days with value, can you continue to do that, and then what's the overall cost to get there? Thoughts? >> So look, I think TCO always underpins any technology investment. If someone said, I'm doing a technology investment without thinking about TCO, I don't think he's a good technology leader, so TCO is obviously a driving factor. But TCO has multiple components. One is the TCO of the solution. The other aspect is the TCO relative to the value I'm going to get out of the system. So talking from an implementation perspective, what I look at as TCO is my whole ecosystem: my hardware, my software. You spoke about Hadoop, you spoke about RDBMS; is Hadoop cheaper? I don't want to get into that debate of cheaper or not, but what I know is the ecosystem is becoming much, much cheaper than before. And when I talk about the ecosystem, I'm talking about RDBMS tools, Hadoop, BI tools, governance; I'm talking about this whole framework becoming cheaper. And it is also underpinned by the fact that hardware is becoming cheaper. So the reality is that all components in the whole ecosystem are becoming cheaper, and given that software is also becoming more open-sourced and people are open to using open-source software, the whole question of TCO becomes much more pertinent. Now coming to your point, do you measure it regularly? 
I think the honest answer is that I don't think we are doing a good job of measuring it that well, but we do have it as one of the criteria for measuring the success of our projects. The way we do it is on implementation cost: at the time of writing out our PEDs, as we call them, the Project Execution Documents, we talk about cost. We say, what's the implementation cost? What are the business cases that are going to be an outcome of this? I'll give you an example with our anti-money laundering work. I told you we reduced our cycle time from a few weeks to a few days, and that in turn means fewer people involved in the whole process; you're reducing the overhead and the operational folks involved. That itself tells you how much we're able to save. So definitely, TCO is there, and to say that-- >> And you are mindful of it, it's what you look at, it's key. TCO is on your radar, 100%, you evaluate it in your deals? >> Yes, we do. >> So Paxata, what's so great about Paxata? Obviously you've had success with them. You're a customer; what's the deal? Was it the tech, was it the automation, the team? What was the key thing that got you engaged with them, or specifically, why Paxata? >> Look, I think the key to a partnership is that there cannot be just one ingredient that makes it successful; there are multiple ingredients that make a partnership successful. We were one of the earliest adopters of Paxata. Given that we're a bank, we have multiple different systems and a lot of manual processing involved, and we saw Paxata as a good fit to govern these processes while ensuring, at the same time, that users don't lose their experience. The good thing about Paxata that we liked was obviously the simplicity and the look and feel of the tool. That's number one; simplicity was a big point. The second one is scale: the fact that it can take in millions of rows, it's not about just working off a sample of data. It can work on the entire dataset. That's very key for us. The third is that it leverages our ecosystem, so it's not about saying, okay, you give me this data, let me go figure out what to do with it; Paxata works off the data lake. The fact that it can leverage the lake that we built, and the fact that it's a simple, self-service data preparation tool which doesn't require a lot of time to bootstrap, so end-users, people like you-- >> So it makes it usable. >> It's extremely user-friendly and usable in a very short period of time. >> And that helped with the journey? >> That really helped with the journey. >> Santhosh, thanks so much for sharing. Santhosh Mahendiran, the global head of technology analytics at Standard Chartered Bank. Again, financial services, always a great early adopter, and you've got success under your belt, congratulations. Data democratization is huge, and again, it's an ecosystem; you've got all that anti-money laundering to figure out, you've got to get those reports out, a lot of heavy lifting? >> That's right. >> So thanks so much for sharing your story. >> Thank you very much. >> We'll have more coverage after this short break. I'm John Furrier, stay tuned. More live coverage in New York City, it's theCUBE.
Day 2 Kickoff - Oracle OpenWorld - #oow16 - #theCUBE
>> Announcer: Live from San Francisco, it's The Cube, covering Oracle OpenWorld 2016. Brought to you by Oracle. Now here are your hosts, John Furrier and Peter Burris. >> Hey, welcome back everyone, day two of wall-to-wall coverage, live broadcast on the internet. This is SiliconANGLE Media's The Cube, our flagship program; we go out to the events and extract the signal from the noise. I'm John Furrier, co-CEO of SiliconANGLE Media, joined by Peter Burris, head of research at SiliconANGLE Media, as well as general manager of Wikibon research. Peter, we did twelve interviews yesterday, and we've got a lineup today: a great range of content, great editorial commentary, great guests from the Oracle executive ranks. We asked them the tough questions, and we also have some great sponsored spots for our supporters. But really, it's about putting out that content, I think. And we've got some guests today around philanthropy; Oracle is doing a lot of work in philanthropy, and we're going to get some great women-in-tech content, something Oracle has a great presence in. But the big story, Day 2, is Thomas Kurian was on stage. He was not on stage for Sunday night's Larry Ellison keynote with Intel; he had a personal issue and had to mail in a video. There was speculation on whether he would be here, but Kurian is the guy, he's the product guy, running the engineering, running the product development. So we expected to hear huge announcements today. Kind of not much, I mean, as expected: infrastructure as a service, and we heard about containers, which were pre-announced by Larry, who basically announces everything. Not a lot of big game-changing news, but a lot of the blocking and tackling, a lot of the ground game for Oracle's march to the cloud. Really emphasizing the speed of standing up a data center with a browser and APIs; anyone can stand one up. All the capabilities of Oracle for standing up a virtual data center with all the isolation at the network level, giving the customer choice but a bridge to the future. Again, this was expected, but it is a key part of Oracle's strategy. They need to lock down the infrastructure; to make PaaS and SaaS work well, they've got to make the infrastructure work well. So, not a lot of surprises, as expected, on infrastructure as a service. >> Well, let's be honest about what the biggest threat to Oracle is. They've got this enormous presence in the applications business, and as we talked about yesterday, the apps business is sticky. When companies embed an application into their business, they reconfigure the entire organization around it. It's hard to rip that out. They are clearly number one, and have been for a long time, in the database marketplace, and it's not easy to pop a database out. It is hard to pop a database out. >> John: It's very sticky. >> It's very, very sticky. So, hardware? It's getting a little bit easier, but there's the possibility that the infrastructure starts to become more public in nature and starts to move up into the database space, and Oracle clearly wants to ensure that it doesn't encounter longer-term problems with the bottom eroding. So it needs to sustain that presence in infrastructure and continue to give people an option, should they want to tie infrastructure directly, or continue to tie infrastructure directly, to the database and the application.
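John's browser-and-APIs point is easiest to see as code: standing up a virtual data center reduces to declaring a spec and submitting it to a cloud control plane. The sketch below is generic and hypothetical; the endpoint, payload fields, and token are placeholders, not Oracle Cloud's actual API.

```python
# Hypothetical sketch of API-driven data-center provisioning.
# Endpoint, payload shape, and auth are illustrative placeholders only.
import requests

API = "https://cloud.example.com/v1"           # placeholder endpoint
HEADERS = {"Authorization": "Bearer <token>"}  # placeholder credential

# Declare the virtual data center: an isolated network plus compute shapes.
vdc_spec = {
    "name": "prod-vdc",
    "network": {"cidr": "10.0.0.0/16", "isolated": True},
    "compute": [{"shape": "standard-4", "count": 8}],
}

resp = requests.post(f"{API}/virtual-data-centers",
                     json=vdc_spec, headers=HEADERS, timeout=30)
resp.raise_for_status()
print("provisioned:", resp.json().get("id"))
```

The design point is that the whole data center is described declaratively and handed to the control plane, which is what makes "anyone can stand one up" credible.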
>> And we pressed Juan Loaiza yesterday, senior vice president on the development side and lead database guru. He laid out his vision of the key elements of the data center, and he said it simply: our number one priority at Oracle is to move workloads from on-prem to the cloud, seamlessly, back and forth... >> Peter: To our cloud. (laughing) >> So we'll say cloud for now, but that's ultimately what the customers want. They want to be able to move to the cloud and ultimately have all the benefits of the operating model and the agility, but also to move it back and forth; again, to Oracle, it's the Oracle Cloud. So Oracle-on-Oracle, job one is to make it really frickin' awesome: fast, low-cost. And on the infrastructure side, put pressure on the Amazon choice, but keep that choice there. So that's clear. But I want to ask you... >> Well, let me pick up on that, maybe this is what you're going to ask me. I've had a lot of experience working with CIOs over the years. Many years ago I was in a staff meeting of a CIO; she was looking out across her team, and she had a way of starting the conversation with every single one of her direct reports, head of development, head of security, head of infrastructure, just to reinforce what she wanted folks to focus on. And in every staff meeting she had, she started the conversation with the head of infrastructure with, "So, infrastructure's doing no harm this week?" And in many respects, that's becoming the message of Oracle: infrastructure's going to do no harm. We're going to make sure that on premises, you're covered. Cloud, you're covered. Hybrid, you're covered. We want to make sure that infrastructure's not a huge part of the conversation, so you can stay focused on the upper levels. >> Yeah, that's a great point. Some people call it hardening the infrastructure. Paul Maritz, in 2010 at VMworld, talked about this hardened top; he talked about Intel processors, saying, hey, you know, Intel has a hardened top, no one looks under the chip, it's totally hardened, it's proprietary code that makes stuff go fast. So I think Oracle has that same kind of mojo going on with the engineered systems; you're seeing them try to harden the top so that the infrastructure doesn't do any damage. And that brings up the point, though, about what I'm seeing. And Oracle, I would love to get your comments here and your take. I mean, look at the industry over the years: the competitive strategy is to protect and fortify your core crown jewel, and for Oracle, it's been the database. The database powers the applications, so the applications certainly bring a lot of revenue, but the database has been this sacred cow for Oracle. And you've been seeing it, although they haven't been overt about it: protecting the database, keeping it in its swim lane, keeping it here, letting things develop on the side, and ultimately it was all about the database. This show, it's interesting, you're starting to see that swim lane expand. You're starting to see Oracle recognize the fact that, hey, it's okay to be the system of record, and it's actually quite sticky to be the system of record in a high-performance database environment, while yielding territory, or turf, to other databases. That's interesting, because that takes the monolithic, siloed mentality off the table. Question is, do you see it that way? And if so, does Oracle have to adjust its competitive strategy?
>> John, I think, first off, most importantly, you're absolutely right: Oracle, the product that is named Oracle, is the Oracle RDBMS. When people twenty years ago talked about Oracle, that's what they were talking about. Oracle has been a database company; that's its roots. Some of the great conversations we had yesterday were with the database people. I don't think that's going to change. I think they're trying to extend it out, and one of the ways they're looking at doing that is by recognizing that, as we look forward over the next couple of years, the question increasingly becomes, and we talked about this a lot yesterday, how does not only the data in the Oracle database manager become increasingly relevant to other applications and other use cases in the business, but also, how do the skills associated with that database manager become more relevant and more useful to the business, as new types of data, new types of applications, and new types of business models become even more relevant to the industry? So I think what Oracle... >> Like data value? >> Like data value, the value of data, how development works. We talked yesterday, maybe it was after one of the conversations, about how we're now entering this world of big data and we still don't know what I call the body plans for business models. We're at a point where we don't know if it's going to look like a fish, or a mollusk, or something else. We know that it's not going to look like what the RDBMS world looks like. >> It's kind of like, what's your spirit animal in cloud? We don't know yet. >> We don't know yet, so Oracle has to be flexible, as their customers have to be flexible, and not presume that it's going to look exactly like it did ten years ago. >> This game has not even started. >> There you go, and I think that's the key thing, John. I think they're finally acknowledging that the new world is not going to look exactly like the old world. They have to be flexible, and they have to facilitate their customers being flexible, so that they can rearrange things in response to new developments and innovation in the industry. >> You know, I was talking to some of the people at Oracle yesterday, after the Century event, and what it's coming down to is the industry is kind of spinning towards what we call the tech athlete, the smart people. What's interesting is that Oracle has always kind of kept their smartest brains behind the firewall. It's always been a competitive advantage to serve customers on all that lock-in, and competitive advantage drives revenue; they're highly profitable. But now you're starting to see the battle between, say, Amazon Web Services and Oracle. A couple of observations from my standpoint. One, they're putting their best technical people out front. Clearly talent matters, organic growth matters, and certainly the M&A thing is always going on. Amazon always puts their technical people out. So, observation: companies are putting their best technical people out there on the front lines. We're going to compete on our people. Two, the announcement volume and velocity here at OpenWorld is probably the most I've seen of any Oracle OpenWorld in seven years. Very similar to AWS re:Invent. There are so many announcements coming out of re:Invent this year, I think it's going to be more than last year, and even last year it was raining, it was a tornado of announcements. It's hard to cover.
In the tech press, talking to some of the folks yesterday, Wall Street Journal, New York Times, CNBC guys, it's like their heads are exploding, because there is so much to cover. All this stuff's coming at the customer. And to me, that's an observation where you say, okay, what does it mean? >> Well, let's talk about the two great points you tossed out. Let's talk about the first one. On the Cube yesterday, somebody mentioned, you know, who is the industry's best CEO? Larry Ellison. Okay, Larry Ellison's 70 years old; Larry Ellison is not going to be the CTO of Oracle forever. So number one, we're starting to see some of that talent start to emerge and become more out front, because it has to. Number two, very importantly, John, talent creates and attracts talent. And I think the more that Oracle puts its talent out front in this period of significant disruption, the more likely they are to attract other talent to Oracle. >> And that's what impacts the M&A game, too. >> You betcha. >> How to integrate in. >> We talked about what's happening amongst VCs in the Valley yesterday. And who are going to be the entrepreneurs? We're going to hear about Oracle. Is Oracle going to be more aggressive at developing some of those new innovators, and maybe not having them institutionally be part of Oracle? >> Reggie Bradford yesterday talked specifically about that M&A. Again, talent brings talent, organically and inorganically. >> So putting those guys out in front is going to make Oracle a more attractive place as we go through this disruptive process. But I think the other thing you mentioned is a really crucial point. At the end of the day, Oracle is introducing a lot of stuff, as much as I've ever seen. But it's coherent. One of the things that's really interesting about this conference, or these sets of announcements, is that they're covering everything, but it's one of the most coherent sets of announcements I've ever seen from Oracle. It's not a whole bunch of product piece parts. >> John: It's not fluffy. >> It's not fluffy and it's not piece parts. It's cloud. We are bringing all this stuff, and we're driving it into the cloud. 2017's going to be a huge year, because Oracle, as you said yesterday, is putting everybody on alert. We're going to get really serious about this. >> And we have Oracle's keynote with Larry Ellison; you're going to watch it at three or one o'clock, and then we come back on the Cube for our analysis at 3:30. >> Peter: Pacific. >> Pacific time. We're going to go into great detail on the keynote, but there's one point we can't cover here, in the interest of time, since we've got to go to our next segment, and this is where you can expand on this: you mentioned business models. The developer is critical in the business model, and so is the data. Data and developer, those two things we will really, really unpack at 3:30. Of course we'll analyze the heck out of Larry Ellison's keynote, because again, everyone's up front. Call to arms, this is not a false alarm for Oracle. It is battle stations. And we are going to see which company's got the best technical people out front. Where's the meat on the bone for the products? Of course, we've got it on the Cube. >> 2017 is going to be a year where leadership matters in the tech industry. >> Peter Burris laying it down. I'm John Furrier. The Cube, all-day coverage, day two of three days. 12 videos yesterday, live broadcast again today.
We'll keep pumping it out there, that's what we do. This is the Cube, we'll be right back: day two, Oracle OpenWorld, live in San Francisco. (upbeat instrumental music)