Rich Gaston, Micro Focus | Virtual Vertica BDC 2020
(upbeat music) >> Announcer: It's theCUBE covering the virtual Vertica Big Data Conference 2020 brought to you by Vertica. >> Welcome back to the Vertica Virtual Big Data Conference, BDC 2020. You know, it was supposed to be a physical event in Boston at the Encore. Vertica pivoted to a digital event, and we're pleased that The Cube could participate because we've participated in every BDC since the inception. Rich Gaston this year is the global solutions architect for security risk and governance at Micro Focus. Rich, thanks for coming on, good to see you. >> Hey, thank you very much for having me. >> So you got a chewy title, man. You got a lot of stuff, a lot of hairy things in there. But maybe you can talk about your role as an architect in those spaces. >> Sure, absolutely. We handle a lot of different requests from the global 2000 type of organization that will try to move various business processes, various application systems, databases, into new realms. Whether they're looking at opening up new business opportunities, whether they're looking at sharing data with partners securely, they might be migrating it to cloud applications, and doing migration into a Hybrid IT architecture. So we will take those large organizations and their existing installed base of technical platforms and data, users, and try to chart a course to the future, using Micro Focus technologies, but also partnering with other third parties out there in the ecosystem. So we have large, solid relationships with the big cloud vendors, with also a lot of the big database spenders. Vertica's our in-house solution for big data and analytics, and we are one of the first integrated data security solutions with Vertica. We've had great success out in the customer base with Vertica as organizations have tried to add another layer of security around their data. So what we will try to emphasize is an enterprise wide data security approach, where you're taking a look at data as it flows throughout the enterprise from its inception, where it's created, where it's ingested, all the way through the utilization of that data. And then to the other uses where we might be doing shared analytics with third parties. How do we do that in a secure way that maintains regulatory compliance, and that also keeps our company safe against data breach. >> A lot has changed since the early days of big data, certainly since the inception of Vertica. You know, it used to be big data, everyone was rushing to figure it out. You had a lot of skunkworks going on, and it was just like, figure out data. And then as organizations began to figure it out, they realized, wow, who's governing this stuff? A lot of shadow IT was going on, and then the CIO was called to sort of reign that back in. As well, you know, with all kinds of whatever, fake news, the hacking of elections, and so forth, the sense of heightened security has gone up dramatically. So I wonder if you can talk about the changes that have occurred in the last several years, and how you guys are responding. >> You know, it's a great question, and it's been an amazing journey because I was walking down the street here in my hometown of San Francisco at Christmastime years ago and I got a call from my bank, and they said, we want to inform you your card has been breached by Target, a hack at Target Corporation and they got your card, and they also got your pin. And so you're going to need to get a new card, we're going to cancel this. Do you need some cash? 
I said, yeah, it's Christmastime so I need to do some shopping. And so they worked with me to make sure that I could get that cash, and then get the new card and the new pin. And being a professional in the inside of the industry, I really questioned, how did they get the pin? Tell me more about this. And they said, well, we don't know the details, but you know, I'm sure you'll find out. And in fact, we did find out a lot about that breach and what it did to Target. The impact that $250 million immediate impact, CIO gone, CEO gone. This was a big one in the industry, and it really woke a lot of people up to the different types of threats on the data that we're facing with our largest organizations. Not just financial data; medical data, personal data of all kinds. Flash forward to the Cambridge Analytica scandal that occurred where Facebook is handing off data, they're making a partnership agreement --think they can trust, and then that is misused. And who's going to end up paying the cost of that? Well, it's going to be Facebook at a tune of about five billion on that, plus some other finds that'll come along, and other costs that they're facing. So what we've seen over the course of the past several years has been an evolution from data breach making the headlines, and how do my customers come to us and say, help us neutralize the threat of this breach. Help us mitigate this risk, and manage this risk. What do we need to be doing, what are the best practices in the industry? Clearly what we're doing on the perimeter security, the application security and the platform security is not enough. We continue to have breaches, and we are the experts at that answer. The follow on fascinating piece has been the regulators jumping in now. First in Europe, but now we see California enacting a law just this year. They came into a place that is very stringent, and has a lot of deep protections that are really far-reaching around personal data of consumers. Look at jurisdictions like Australia, where fiduciary responsibility now goes to the Board of Directors. That's getting attention. For a regulated entity in Australia, if you're on the Board of Directors, you better have a plan for data security. And if there is a breach, you need to follow protocols, or you personally will be liable. And that is a sea change that we're seeing out in the industry. So we're getting a lot of attention on both, how do we neutralize the risk of breach, but also how can we use software tools to maintain and support our regulatory compliance efforts as we work with, say, the largest money center bank out of New York. I've watched their audit year after year, and it's gotten more and more stringent, more and more specific, tell me more about this aspect of data security, tell me more about encryption, tell me more about money management. The auditors are getting better. And we're supporting our customers in that journey to provide better security for the data, to provide a better operational environment for them to be able to roll new services out with confidence that they're not going to get breached. With that confidence, they're not going to have a regulatory compliance fine or a nightmare in the press. And these are the major drivers that help us with Vertica sell together into large organizations to say, let's add some defense in depth to your data. And that's really a key concept in the security field, this concept of defense in depth. 
We apply that to the data itself by changing the actual data element of Rich Gaston, I will change that name into Ciphertext, and that then yields a whole bunch of benefits throughout the organization as we deal with the lifecycle of that data. >> Okay, so a couple things I want to mention there. So first of all, totally board level topic, every board of directors should really have cyber and security as part of its agenda, and it does for the reasons that you mentioned. The other is, GDPR got it all started. I guess it was May 2018 that the penalties went into effect, and that just created a whole Domino effect. You mentioned California enacting its own laws, which, you know, in some cases are even more stringent. And you're seeing this all over the world. So I think one of the questions I have is, how do you approach all this variability? It seems to me, you can't just take a narrow approach. You have to have an end to end perspective on governance and risk and security, and the like. So are you able to do that? And if so, how so? >> Absolutely, I think one of the key areas in big data in particular, has been the concern that we have a schema, we have database tables, we have CALMS, and we have data, but we're not exactly sure what's in there. We have application developers that have been given sandbox space in our clusters, and what are they putting in there? So can we discover that data? We have those tools within Micro Focus to discover sensitive data within in your data stores, but we can also protect that data, and then we'll track it. And what we really find is that when you protect, let's say, five billion rows of a customer database, we can now know what is being done with that data on a very fine grain and granular basis, to say that this business process has a justified need to see the data in the clear, we're going to give them that authorization, they can decrypt the data. Secure data, my product, knows about that and tracks that, and can report on that and say at this date and time, Rich Gaston did the following thing to be able to pull data in the clear. And that could be then used to support the regulatory compliance responses and then audit to say, who really has access to this, and what really is that data? Then in GDPR, we're getting down into much more fine grained decisions around who can get access to the data, and who cannot. And organizations are scrambling. One of the funny conversations that I had a couple years ago as GDPR came into place was, it seemed a couple of customers were taking these sort of brute force approach of, we're going to move our analytics and all of our data to Europe, to European data centers because we believe that if we do this in the U.S., we're going to violate their law. But if we do it all in Europe, we'll be okay. And that simply was a short-term way of thinking about it. You really can't be moving your data around the globe to try to satisfy a particular jurisdiction. You have to apply the controls and the policies and put the software layers in place to make sure that anywhere that someone wants to get that data, that we have the ability to look at that transaction and say it is or is not authorized, and that we have a rock solid way of approaching that for audit and for compliance and risk management. And once you do that, then you really open up the organization to go back and use those tools the way they were meant to be used. 
We can use Vertica for AI, we can use Vertica for machine learning, and for all kinds of really cool use cases that are being done with IoT, with other kinds of cases that we're seeing that require data being managed at scale, but with security. And that's the challenge, I think, in the current era: how do we do this in an elegant way? How do we do it in a way that's future-proof when CCPA comes in? How can I lay this on as another layer of audit responsibility and control around my data so that I can satisfy those regulators as well as the folks over in Europe and Singapore and China and Turkey and Australia? It goes on and on. Each jurisdiction out there is now requiring audit. And like I mentioned, the audits are getting tougher. And if you read the news, the GDPR example I think is classic. They told us in 2016, it's coming. They told us in 2018, it's here. They're telling us in 2020, we're serious about this, and here are the fines, and you better be aware that we're coming to audit you. And when we audit you, we're going to be asking some tough questions. If you can't answer those in a timely manner, then you're going to be facing some serious consequences, and I think that's what's getting attention. >> Yeah, so the whole big data thing started with Hadoop, and Hadoop is open, it's distributed, and it just created a real governance challenge. I want to talk about your solutions in this space. Can you tell us more about Micro Focus Voltage? I want to understand what it is, and then get into sort of how it works, and then I really want to understand how it's applied to Vertica. >> Yeah, absolutely, that's a great question. First of all, we were the originators of format-preserving encryption. We developed some of the core basic research out of Stanford University that then became the company Voltage; that's a brand name we still apply even though we're part of Micro Focus. So the lineage still goes back to Dr. Boneh down at Stanford, one of my buddies there, and he's still at it, doing amazing work in cryptography and keeping the industry, and the science of cryptography, moving forward. It's a very deep science, and we all want to have it peer-reviewed, we all want it to be attacked, we all want it to be proved secure, so that we're not selling something to a major money center bank that is potentially risky because it's obscure and we're private. So we have an open standard. For six years, we worked with the Department of Commerce to get our standard approved by NIST, the National Institute of Standards and Technology. They initially said, well, AES-256 is going to be fine. And we said, well, it's fine for certain use cases, but for your database, you don't want to change your schema, you don't want to have this increase in storage costs. What we want is format-preserving encryption. And what that does is turn my name, Rich, into a four-letter ciphertext. It can be reversed. The mathematics of that are fascinating, and really deep and amazing. But we really make that very simple for the end customer because we produce APIs. These application programming interfaces can be accessed by applications in C or Java, C#, other languages, but they can also be accessed in a microservices manner via REST and web service APIs. And that's the core of our technical platform.
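As an aside, the property Rich describes, that the ciphertext keeps the shape of the plaintext and can be reversed, is easy to see in a toy form. The sketch below is not FF1, not from the interview, and not the Voltage product; it is a minimal, insecure Feistel-style illustration over digit strings (think card numbers or SSNs rather than names), written only to show what "format-preserving and reversible" means.

```python
import hmac
import hashlib

def _round_value(key: bytes, half: str, round_no: int, width: int) -> int:
    # Keyed pseudorandom round function over (round number || half of the string).
    mac = hmac.new(key, bytes([round_no]) + half.encode(), hashlib.sha256).digest()
    return int.from_bytes(mac[:8], "big") % (10 ** width)

def toy_fpe_encrypt(key: bytes, digits: str, rounds: int = 10) -> str:
    # Turns one digit string into another digit string of the same length.
    assert digits.isdigit() and len(digits) >= 2
    half = len(digits) // 2
    left, right = digits[:half], digits[half:]
    for i in range(rounds):
        if i % 2 == 0:   # even rounds perturb the left half using the right half
            f = _round_value(key, right, i, len(left))
            left = str((int(left) + f) % 10 ** len(left)).zfill(len(left))
        else:            # odd rounds perturb the right half using the left half
            f = _round_value(key, left, i, len(right))
            right = str((int(right) + f) % 10 ** len(right)).zfill(len(right))
    return left + right

def toy_fpe_decrypt(key: bytes, digits: str, rounds: int = 10) -> str:
    # Runs the rounds in reverse, subtracting instead of adding.
    half = len(digits) // 2
    left, right = digits[:half], digits[half:]
    for i in reversed(range(rounds)):
        if i % 2 == 0:
            f = _round_value(key, right, i, len(left))
            left = str((int(left) - f) % 10 ** len(left)).zfill(len(left))
        else:
            f = _round_value(key, left, i, len(right))
            right = str((int(right) - f) % 10 ** len(right)).zfill(len(right))
    return left + right

key = b"demo-key-not-for-production"   # placeholder key, illustration only
ct = toy_fpe_encrypt(key, "4111111111111111")
print(ct, len(ct))               # same length, still all digits
print(toy_fpe_decrypt(key, ct))  # round-trips back to the original
```

Real format-preserving encryption, like the FF1 standard Rich mentions shortly, works over arbitrary alphabets and has a formal security analysis; this toy has neither, and only illustrates the shape-preserving, reversible behavior.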
We have an appliance-based approach, so we take a SecureData appliance, we'll put it on-prem, we'll make 50 of them if you're a big company like Verizon and you need to have these co-located around the globe, no problem; we can scale to the largest enterprise needs. But our typical customer will install several appliances and get going with a couple of environments like QA and Prod to be able to start getting encryption going inside their organization. Once the appliances are set up and installed, it takes just a couple of days of work for a typical technical staff to get done. Then you're up and running and able to plug in the clients. Now what are the clients? Vertica's a huge one. Vertica's one of our most powerful client endpoints because you're able to now take that API and put it inside Vertica, and it's all open on the internet. You can go and look at Vertica.com/secure data; you get all of our documentation on it, and you understand how to use it very quickly. The APIs are super simple; they require three parameter inputs. It's a really basic approach to being able to protect and access data. And then it gets very deep from there, because you have data like credit card numbers, very different from a street address, and we want to take a different approach to that. We have data like birthdate, and we want to be able to do analytics on dates. We have deep approaches to managing analytics on protected data, like dates, without having to put it in the clear. So we've maintained a lead in the industry in terms of being an innovator of the FF1 standard; what we call FF1 is format-preserving encryption. We license that to others in the industry, per our NIST agreement. So we're the owner, we're the operator of it, and others use our technology. And we're the original founders of that, and so we continue to sort of lead the industry by adding additional capabilities on top of FF1 that really differentiate us from our competitors. Then you look at our API presence. We can definitely run in Hadoop, but we also run in open systems. We run on mainframe, we run on mobile. So anywhere in the enterprise or in the cloud, anywhere you want to be able to put secure data and be able to access the protected data, we're going to be there and be able to support you. >> Okay, so let's say, I've talked to a lot of customers this week, and let's say I'm running in Eon mode. I've got some workload running in AWS and I've got some on-prem. I'm going to take an appliance, or multiple appliances, and put it on-prem, but that will also secure my cloud workloads as part of a sort of shared responsibility model, for example? Or how does that work? >> No, that's absolutely correct. We're really flexible in that we can run on-prem or in the cloud. As far as our crypto engine goes, the key management is really hard stuff, cryptography is really hard stuff, and we take care of all that; we've baked all that in, and we can run that for you as a service either in the cloud or on-prem on your own small VMs. So it's really a lightweight footprint for running the infrastructure. When I look at an organization like you just described, it's a classic example of where we fit, because we will be able to protect that data. Let's say you're ingesting it from a third party, or from an operational system; you have a website that collects customer data. Someone has now registered as a new customer, and they're going to do e-commerce with you. We'll take that data, and we'll protect it right at the point of capture.
And we can now flow that through the organization and decrypt it at will on any platform that you have, that you need us to be able to operate on. So let's say you wanted to pick up that customer data from the operational transaction system; let's throw it into Eon, let's throw it into the cloud, let's do analytics there on that data, and we may need some decryption. We can place SecureData wherever you want to be able to service that use case. In most cases, what you're doing is a simple, tiny little atomic fetch across a protected tunnel, your typical TLS tunnel. And once that key is then cached within our client, we maintain all that technology for you. You don't have to know about key management or caching; we're good at that, that's our job. And then you'll be able to make those API calls to access or protect the data, and apply the authorization and authentication controls that you need to be able to service your security requirements. So you might have third parties having access to your Vertica clusters. That is a special need, and we have the ability to say employees can get X and the third party can get Y, and that's a really interesting use case we're seeing for shared analytics on the internet now. >> Yeah, for sure, so you can set the policy however you want. You know, I have to ask you: in a perfect world, I would encrypt everything, but part of the reason why people don't is because of performance concerns. Can you talk about that? You touched upon it I think recently with your sort of atomic access, but, and I know it's Vertica, it's the Ferrari, etc., anything that slows it down is going to be a concern. Are customers concerned about that? What are the performance implications of running encryption on Vertica? >> Great question there as well, and what we see is that we want to be able to apply scale where it's needed. And so if you look at the ingest platforms that we find, Vertica is commonly connected up to something like Kafka, maybe StreamSets, maybe NiFi; there are a variety of different technologies that can route that data and pipe that data into Vertica at scale. SecureData is architected to go along with that architecture at the node, at the executor, or at the lowest operator level. And what I mean by that is that we don't have a bottleneck where everything has to go through one process or one box or one channel to be able to operate. We don't put an interceptor in between your data coming and going; that's not our approach, because those approaches are fragile and they're slow. So we typically want to focus on integrating our APIs natively within those pipeline processes that come into Vertica. Within the Vertica ingestion process itself, you can simply apply our protection when you do the COPY command in Vertica. So it's a really basic, simple use case that everybody is typically familiar with in Vertica land: copy the data and put it into Vertica, and you simply say protect as part of the load. So my first name is coming in as part of this ingestion; I'll simply put the protect keyword in the syntax right in SQL. It's nothing other than just an extension of SQL, very, very simple for the developer, easy to read, easy to write. And then you're going to provide the parameters that you need to say, oh, the name is protected with this kind of a format, to differentiate between a credit card number and an alphanumeric string, for example. So once you do that, you then have the ability to decrypt.
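To make that ingest-side syntax, and the access call discussed just below, a bit more concrete, here is a hedged sketch of what the pattern can look like from a client. The UDx names VoltageSecureProtect and VoltageSecureAccess, the format identifiers, the table, and the connection details are all assumptions made for illustration; an actual installation's function names and parameters should be taken from its own documentation.

```python
# A rough sketch, assuming the SecureData UDx functions are installed in Vertica
# under the names used below; treat every name and parameter here as illustrative.
import vertica_python

conn_info = {
    "host": "vertica.example.com",   # placeholder connection details
    "port": 5433,
    "user": "dbadmin",
    "password": "********",
    "database": "analytics",
}

protect_on_ingest = """
    COPY customers (
        ssn_raw        FILLER VARCHAR(11),
        first_name_raw FILLER VARCHAR(64),
        ssn        AS VoltageSecureProtect(ssn_raw        USING PARAMETERS format='ssn'),
        first_name AS VoltageSecureProtect(first_name_raw USING PARAMETERS format='alphanum')
    )
    FROM LOCAL '/tmp/customers.csv' DELIMITER ',';
"""

access_for_authorized_job = """
    SELECT VoltageSecureAccess(ssn USING PARAMETERS format='ssn') AS ssn_clear
    FROM customers
    LIMIT 10;
"""

with vertica_python.connect(**conn_info) as conn:
    cur = conn.cursor()
    cur.execute(protect_on_ingest)           # data lands encrypted, format preserved
    cur.execute(access_for_authorized_job)   # decrypt only where policy allows it
    print(cur.fetchall())
```

The point of the shape is the one Rich makes: protection rides along with the normal COPY and SELECT paths at each node, rather than passing through a separate interceptor.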
Now, on decrypt, let's look at a couple different use cases. First within Vertica, we might be doing select statements within Vertica, we might be doing all kinds of jobs within Vertica that just operate at the SQL layer. Again, just insert the word "access" into the Vertica select string and provide us with the data that you want to access, that's our word for decryption, that's our lingo. And we will then, at the Vertica level, harness the power of its CPU, its RAM, its horsepower at the node to be able to operate on that operator, the decryption request, if you will. So that gives us the speed and the ability to scale out. So if you start with two nodes of Vertica, we're going to operate at X number of hundreds of thousands of transactions a second, depending on what you're doing. Long strings are a little bit more intensive in terms of performance, but short strings like social security number are our sweet spot. So we operate very very high speed on that, and you won't notice the overhead with Vertica, perse, at the node level. When you scale Vertica up and you have 50 nodes, and you have large clusters of Vertica resources, then we scale with you. And we're not a bottleneck and at any particular point. Everybody's operating independently, but they're all copies of each other, all doing the same operation. Fetch a key, do the work, go to sleep. >> Yeah, you know, I think this is, a lot of the customers have said to us this week that one of the reasons why they like Vertica is it's very mature, it's been around, it's got a lot of functionality, and of course, you know, look, security, I understand is it's kind of table sticks, but it's also can be a differentiator. You know, big enterprises that you sell to, they're asking for security assessments, SOC 2 reports, penetration testing, and I think I'm hearing, with the partnership here, you're sort of passing those with flying colors. Are you able to make security a differentiator, or is it just sort of everybody's kind of got to have good security? What are your thoughts on that? >> Well, there's good security, and then there's great security. And what I found with one of my money center bank customers here in San Francisco was based here, was the concern around the insider access, when they had a large data store. And the concern that a DBA, a database administrator who has privilege to everything, could potentially exfil data out of the organization, and in one fell swoop, create havoc for them because of the amount of data that was present in that data store, and the sensitivity of that data in the data store. So when you put voltage encryption on top of Vertica, what you're doing now is that you're putting a layer in place that would prevent that kind of a breach. So you're looking at insider threats, you're looking at external threats, you're looking at also being able to pass your audit with flying colors. The audits are getting tougher. And when they say, tell me about your encryption, tell me about your authentication scheme, show me the access control list that says that this person can or cannot get access to something. They're asking tougher questions. That's where secure data can come in and give you that quick answer of it's encrypted at rest. It's encrypted and protected while it's in use, and we can show you exactly who's had access to that data because it's tracked via a different layer, a different appliance. And I would even draw the analogy, many of our customers use a device called a hardware security module, an HSM. 
Now, these are fairly expensive devices that are invented for military applications and adopted by banks. And now they're really spreading out, and people say, do I need an HSM? Well, with secure data, we certainly protect your crypto very very well. We have very very solid engineering. I'll stand on that any day of the week, but your auditor is going to want to ask a checkbox question. Do you have HSM? Yes or no. Because the auditor understands, it's another layer of protection. And it provides me another tamper evident layer of protection around your key management and your crypto. And we, as professionals in the industry, nod and say, that is worth it. That's an expensive option that you're going to add on, but your auditor's going to want it. If you're in financial services, you're dealing with PCI data, you're going to enjoy the checkbox that says, yes, I have HSMs and not get into some arcane conversation around, well no, but it's good enough. That's kind of the argument then conversation we get into when folks want to say, Vertica has great security, Vertica's fantastic on security. Why would I want secure data as well? It's another layer of protection, and it's defense in depth for you data. When you believe in that, when you take security really seriously, and you're really paranoid, like a person like myself, then you're going to invest in those kinds of solutions that get you best in-class results. >> So I'm hearing a data-centric approach to security. Security experts will tell you, you got to layer it. I often say, we live in a new world. The green used to just build a moat around the queen, but the queen, she's leaving her castle in this world of distributed data. Rich, incredibly knowlegable guest, and really appreciate you being on the front lines and sharing with us your knowledge about this important topic. So thanks for coming on theCUBE. >> Hey, thank you very much. >> You're welcome, and thanks for watching everybody. This is Dave Vellante for theCUBE, we're covering wall-to-wall coverage of the Virtual Vertica BDC, Big Data Conference. Remotely, digitally, thanks for watching. Keep it right there. We'll be right back right after this short break. (intense music)
Colin Mahony, Vertica at Micro Focus | Virtual Vertica BDC 2020
>> Announcer: It's theCUBE covering the Virtual Vertica Big Data Conference 2020, brought to you by Vertica. >> Hello, everybody, welcome to the new normal. You're watching theCUBE and its remote coverage of the Vertica big data event, gone digital, gone virtual. My name is Dave Vellante, and I'm here with Colin Mahony, who's a senior vice president at Micro Focus and the GM of Vertica. Colin, well, strange times, but the show goes on. Great to see you again. >> Good to see you too, Dave. Yeah, strange times indeed. Obviously, safety first for everyone, so we made a decision to go virtual. I think it was absolutely the right call. We made it in advance of how things have transpired, but we're making the best of it, and I appreciate your time here, going virtual with us. >> Well, Colin, we're super excited to be here. As you know, theCUBE has been at every single BDC since its inception. It's a great event. You just presented the keynote to your audience. You know, it was remote, you didn't have that live vibe, and you have a lot of fans in the Vertica community, but could you feel the love? >> Yeah, you know, it's hard to feel the love virtually, but I'll tell you what, the silver lining in all this is that the reach we have for this event now is much broader than it would have been. As you know, we brought this event back; it's been a few years since we've done it. We were super excited to do it, obviously, in Boston, where it was supposed to be on location, but there wouldn't have been as many people that could participate. So the silver lining in all of this is that I think there's a lot of love out there we're getting, too. We have a lot of participants who otherwise would not have been able to participate in this, both live as well as through a lot of these assets that we're going to have available. So, you know, it's out there. We've got amazing customers and practitioners with Vertica. So many have been with us for a long time, and we, of course, have a lot of new customers as well that we're welcoming, so it's exciting. >> Well, it's been a while since you've had the BDC event, and a lot has transpired. You're now part of Micro Focus, but I know you, and I know the Vertica team; you guys have not stopped. You've kept the innovation going. We've been following the announcements, but bridge the gap between the last time we had coverage of this event and where we are today. A lot has changed. >> Oh yeah, a lot has changed. I mean, you know, it's the software industry, right? So nothing stays the same; we constantly have to keep going. Probably the only thing that stays the same is the name Vertica. And, you know, we just announced Vertica 10, which is a phenomenal release for us. So, you know, overall, the organization continues to grow. The dedication and commitment to this great platform, Vertica, continues with every single release we do, as you know, and this hasn't changed. It's always about performance and scale and adding a whole bunch of new capabilities on that front. But it's also about our main road map and the direction that we're going towards. And I think one of the things that has been great about it is that we've stayed true to that from day one. We haven't tried to deviate too much and get into things that are too far outside our box. But we've really done, I think, a great job of extending Vertica into places where people need a lot of help.
And with Vertica 10, we know we're going to talk more about that, but we've done a lot of that. It's super exciting for our customers, and all of this, of course, is driven by our customers. But back to the Big Data Conference: you know, everybody has been saying this for years, it's one of the best conferences we've been to, because really it's developers giving tech talks, it's customers giving talks. And we had more customers that wanted to give talks than we had slots to fill this year at the event, which is another benefit of going virtual; we can accommodate a little bit more, although obviously it's still a tight schedule. But it really is an opportunity for our community to come together and talk about not just Vertica, but how to deal with data. You know, we know the volumes aren't slowing down, we know the complexity isn't slowing down, and the things that people want to do with AI and machine learning are moving forward at a rapid pace as well. There's a lot to talk about and share, and that's a really huge part of what we try to do with it. >> Well, let's get into some of that. Your customers are making bets, Micro Focus is actually making a bet on Vertica, and I want to get your perspective on the waves that you're riding and where you're placing your bets. >> Yeah, no, it's great. So, you know, one of the waves we've been riding for a long time, obviously, is that Vertica started out as a SQL platform for analytics, as a SQL database engine, a relational engine. But we always knew that was just sort of table stakes. People were going to trust us to put enormous amounts of data in our platform, and what we owe everyone is lots of analytics to take advantage of that data, and lots of tools and capabilities to shape that data and get it into the right format, for operational reporting but also, in this day and age, for machine learning and for some pretty advanced regressions and other techniques. So a huge part of Vertica 10 is just doubling down on that commitment to what we call in-database machine learning and AI. And to do that, you know, we know that we're not going to come up with the world's best algorithms, nor is that our focus. Our advantage is that we have this massively parallel platform to ingest, store, manage, and analyze the data. So we made some announcements about incorporating PMML models into the product, and we continue to deepen our Python integration, building off of a new open source project we started with Uber, which has been a great customer and partner; that's one of our great talks here at the event. So, you know, we're continuing to do that, and it turns out that when it comes to anything analytics or machine learning, so much of what you have to do is actually prepare the data, shape the data, get the data in the right format, apply the model, fit the model, test the model, operationalize the model, and Vertica is a great platform to do that. So that's a huge bet that we're continuing to ride on and take advantage of. And then there are some of the other things we've just been seeing; I'll take object storage as an example. I think Hadoop, and what it went through, ultimately was a huge part of this, but there's just a massive disruption going on in the world around object storage. You know, we made several bets on S3 early; we created Vertica Eon Mode, which separates compute and storage.
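As a side note on that object storage point, the pattern Colin describes, querying data where it already lives rather than reloading it, can be sketched roughly as follows. This is a hedged illustration, not something from the interview: the bucket, paths, columns, and connection details are placeholders, and it assumes a deployment where Vertica already has credentials for the object store.

```python
# Hedged sketch: query Parquet files in place from S3 via a Vertica external table.
import vertica_python

conn_info = {"host": "vertica.example.com", "port": 5433,
             "user": "dbadmin", "password": "********", "database": "analytics"}

ddl = """
    CREATE EXTERNAL TABLE web_events (
        user_id   INT,
        event_ts  TIMESTAMP,
        url       VARCHAR(2048)
    ) AS COPY FROM 's3://example-bucket/events/*.parquet' PARQUET;
"""

query = """
    SELECT DATE_TRUNC('day', event_ts) AS day, COUNT(*) AS events
    FROM web_events
    GROUP BY 1
    ORDER BY 1;
"""

with vertica_python.connect(**conn_info) as conn:
    cur = conn.cursor()
    cur.execute(ddl)     # metadata only; the Parquet files stay where they are
    cur.execute(query)   # Vertica reads the files at query time
    for day, events in cur.fetchall():
        print(day, events)
```

The external table is catalog metadata only, so dropping it later does not touch the underlying files.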
And so for us, that separation is not just about being able to take advantage of cloud economics, as we do, or the economics of object storage. It's also about being able to truly isolate workloads and start to set up the sort of platform where you can do very autonomous things in the database, where the database can actually start self-analyzing, without impacting many operational workloads. And so that continues with our partnership with Pure Storage on premise; we just announced that we're supporting Eon Mode on Google Cloud now, in addition to Amazon, and we've got HDFS now being supported by our Eon Mode. So we continue to ride on that mega trend as well, just the clouds in general, whether it's a public cloud or a private cloud on premise. Giving our customers the flexibility and choice to run wherever it makes sense for them is something that we are very committed to from a flexibility standpoint. There are a lot of lock-in products out there, and there are a lot of cloud-only products, now more than ever. We're hearing from our customers that they want that flexibility to be able to run anywhere, and they want the ease of use and simplicity of native cloud experiences, which we're giving them as well. >> I want to stay in that architectural component for a minute. Talk about separating compute from storage; it's not just about economics. I mean, part of it is that you can granularly scale compute separate from storage, as opposed to in chunks, so it's more efficient, but you're saying there are other advantages for operational and workload specificity. What is unique about Vertica in this regard? Many others separate compute from storage; what's different about Vertica? >> Yeah, I think, you know, there are a lot of differences in how we do it. It's one thing if you're a cloud-native company and you do it with a shared catalog, a key-value store that all of your customers are using and are on the same one; frankly, that's probably more of a security concern than anything. But it's another thing when you give that capability to each customer on their own: they're fully protected, they're not sharing it with any other customers. And that's something we hear a lot from our customers. They want to be able to separate compute and storage, but they want to be able to do it in their own environment, so that they know no one else is sharing their data catalog and there's no single point of failure. So that's one huge advantage that we have, and frankly, I think it just comes from being a company that's operating on premise and up in the cloud. I think another huge advantage for us is that we don't know what object storage platform is going to win, nor do we necessarily have to. We designed Eon Mode so that it's an SDK. We started with S3, but it could be anything: HDFS, S3, who knows what object storage formats are going to be there. And then finally, beyond just the object storage, we're really one of the only database companies that actually allows our customers to natively operate on data in very different formats, like Parquet and ORC, if you're familiar with those from the Hadoop community. So we not only embrace this kind of object storage disruption, but we really embrace the different data formats. And what that means is our customers have data pipelines that are fully automated, putting this information in different places. They don't have to completely reload everything to take advantage of the Vertica analytics. We can go where the data is, connect into it, and we offer them a lot of different ways to take advantage of those analytics. So there are a couple of unique differences with Vertica, and again, I think our real advantage, in many ways from not being a cloud-only platform, is that we're very good at operating in different environments, with different formats, and with changing formats over time. And I think a lot of the other companies out there, particularly many of the SaaS companies, are scrambling; they even have challenges moving from, say, an Amazon environment to a Microsoft Azure environment with their offering, because they've got so many unique band-aids, excuse me, in the background just holding the system up that are native to any one of those. >> Good, I'm going to summarize what I'm hearing from you: it's the Ferrari of databases that we've always known, it's object-store agnostic, and it's the cloud experience that you can bring on-prem or to virtually any cloud, all the popular clouds, hybrid, you know, AWS, Azure, now Google, or on-prem, and in a variety of different data formats. And that combination, I think, is unique in the marketplace. Before we get into the news, I want to ask you about data silos. You mentioned HDFS; that's where you and I met, back in the early days of big data. You know, in some respects Hadoop helped break down the silos by distributing the data and leaving it in place, and in other respects it created data lakes, which became silos. And so we have yet all these other silos, and people are trying to get to a digital transformation, meaning putting data at their core, virtually, obviously, and leaving it in place. What are your thoughts on that in terms of Vertica being a silo buster? How does Vertica play there? >> Yeah, and you're absolutely right. I think even if you look at Hadoop, for all the new data that gets into Hadoop, in many ways it's created yet another large island of data that many organizations are struggling with, because it's separate from their core traditional data warehouse and it's separate from some of the operational systems that they have. And so there might be a lot of data in there, but they're still struggling with how to break it out of that large silo, or combine it again. I think some of the things that Vertica does, and part of the announcements in Vertica 10, is migration tools to make it really easy if you do want to move data from one platform into Vertica. But you don't have to move it; you can actually take advantage of a lot of the data where it resides with Vertica, especially in the Hadoop world, with our external table support and our ability to read formats like ORC and Parquet natively. So we're very pragmatic about how our customers go about this. Many of them tried it with Hadoop and realized that didn't work, but very few customers want to do a wholesale change, to just say we're going to throw everything out, we're going to get rid of our data warehouse, we're going to hit the pause button and go from there. It's just not possible to do that. So we've spent a lot of time investing in the product to really work with them, to go where the data is, and then seamlessly migrate when it makes sense to migrate. You mentioned the performance of Vertica, and you talked about the variety. It definitely is. And one other thing that we're really proud of is that it actually is not a gas guzzler either. One of the things that we're seeing with a lot of the other cloud databases is that, pound for pound, you can get on a tenth of the hardware with Vertica running up there and get over 10x the performance. We're seeing that a lot. So it's not just about the performance; it's about the efficiency as well. And I think that efficiency is really important when it comes to silos, because there's just only so much horsepower out there, and it's easy for companies to play tricks and throw lots of servers at the environment. But so many organizations are in the cloud now, and frankly, looking at the bills they're getting from these cloud workloads that are running, they're really conscious of that. >> Yeah, the big energy companies love the gas guzzlers, and so does a lot of cloud compute. But let's get into the news. Vertica 10, you shared it with the audience in your keynote, one of the highlights of the day. What do we need to know? >> Yeah, so, you know, again, doubling down on these mega trends, I'll start with machine learning and AI. We've done a lot of work to integrate so that you can take native PMML models, bring them into Vertica, run them massively parallel, and help shape your data and prepare it, do all the work that we know is required for true machine learning, for all the hype that there is around it. People want to do a lot of unsupervised machine learning, whether it's for healthcare, fraud detection, financial services. So we've doubled down on that. We now also support things like TensorFlow, and, you know, as I mentioned, we're not going to come up with the best algorithms; our job is really to ensure that the algorithms people are coming up with can be incorporated, and that we can run them against massive data sets super efficiently. So that's number one. Number two, on object storage: we continue to support more object storage platforms for Eon Mode. In the cloud, we're expanding to GCP, Google's cloud, beyond just Amazon, and we're now also supporting HDFS with Eon. Of course, we continue to have a great relationship with our partner Pure Storage on premise as well, and we continue to invest in Eon Mode especially. I'm not going to go through all the different things here, but it's not just sort of, hey, you support this and then you move on; there are so many different things that we learn about API calls, and how to save our customers money, and tricks on performance. And then the third area: we definitely continue to build on that flexibility of deployment, which is related to Eon Mode, as I described, but it's also about simplicity. It's also about some of the migration tools that we've announced to make it easy to go from one platform to another. We have a great road map on ease of use, on security, on performance and scale. I mean, for us, those are the things that we're working on every single release. We probably don't talk about them as much as we need to, but obviously they're critically important. And so we constantly look at every component in this product. You know, version 10 is a huge release for any product, especially an analytic database platform, and so we're just constantly revisiting some of the code base and figuring out how we can do it in new and better ways, and that's a big part of 10 as well. >> I'm glad you brought up the machine intelligence, the machine learning and AI piece, because we would agree, and one of the things we've noticed is that the new innovation cocktail is not being driven by Moore's Law anymore. It's really a combination: you've collected all this data over the last 10 years through Hadoop and other data stores, object stores, etcetera, and now you're applying machine intelligence to that, and then you've got the cloud for scale. And of course we talked about you bringing the cloud experience, whether it's on-prem or hybrid, etcetera. The reason why I think this is important, and I wanted to get your take on this, is because you do see a lot of emerging analytic databases that are cloud native. Yes, they do suck up a lot of compute, but they also add a lot of value. And I really wanted to understand how you guys play in that new trend, that sort of cloud database, high performance, bringing in machine learning and AI and ML tools, and then driving, you know, turning data into insights. And from what I'm hearing, you play directly in that, and your differentiation is a lot of the things that we talked about, including the ability to do that on-prem and in the cloud and across clouds. >> Yeah, I mean, I think that's a great point. We are a great cloud database. We run very well on the three major clouds, and you could argue some of the other clouds as well in other parts of the world. If you talk to our customers, and we have hundreds of customers who are running Vertica in the cloud, the experience is very good. It can always be better; we've invested a lot in taking advantage of the native cloud ecosystem, so that provisioning and managing Vertica is seamless when you're in that environment, and we'll continue to do that. But Vertica, excuse me, as a cloud platform is phenomenal. And, you know, there's a lot of confusion out there. I think there are a lot of marketing dollars spent; I won't name many of the companies here, you know who they are, you know, the cloud-native data warehouses, and it's true, they're software as a service. But if you talk to a lot of our customers, they're getting very good and very similar experiences with Vertica. We stop short of saying we're software as a service, because ultimately our customers have that control and flexibility; they're putting Vertica on whichever cloud they want to run it on and managing it. Stay tuned on that; I think you'll hear more from us about that going even further. But, you know, we do really well in the cloud, and with Eon, you know, this has really been a sort of two-and-a-half-year endeavor for us, and so much of Eon was designed around the cloud, around cloud data lakes, S3, and the separation of compute and storage. And if you look at the work that we're doing around containerization and a lot of these other elements, it just takes that to the next level. There's a lot of great work there, so I think we're going to continue to get better at cloud, but I would argue that we're already, and have been for some time, very good at being a cloud analytic data platform. >> Well, since you opened the door, I've got to ask you: I hear you from a performance and architectural perspective, but you're also alluding to, I think, something else. I don't know what you can share with us; you said stay tuned on that, but I think you're talking about optionality, maybe different consumption models. Am I getting that right? And can you share your thoughts on that? >> You're in the right area, and actually I'm glad you brought that up, because I think a huge part of cloud also has nothing to do with the technology; I think it's how you end up consuming the product. Some companies want to rent the product, and they want to rent it for a certain period of time, and so we allow our customers to do that. We have incredibly flexible models for how you provision and purchase our product, and I think that helps a lot. You know, I am opening the door a little bit, but look, we have customers that ask us whether we can offer them platforms, all in; we've had customers come to us and say, please take over our systems and offer something as a distribution. As I said, though, I think one thing that we've been really good at is focusing on what is our core and where we really offer value. But I can tell you that we introduced something called the Vertica Advisor Tool this year. One of the things the Advisor Tool does is collect information from our customer environments, on premise or in the cloud, and we run it through our own machine learning; we analyze the customer's environment and we make some recommendations automatically. And a lot of our customers have said to us, you know, it's funny, we've tried managed services, we've tried SaaS offerings, and you guys blow them away in terms of your ability to help us, like automatically managing the Vertica environment and the system; why don't you guys just take this product and convert it into a SaaS offering? So I won't go much further than that, but you can imagine that there's a lot of innovation and a lot of thought going into how we can do that. But there's no reason that we have to wait to do that, and being able to offer our on-premise customers that same sort of experience from a managed capability is something that we spend a lot of time thinking about as well. So again, it's back to the automation, the ease of use, the going above and beyond. It's really exciting to have an analytic platform, because we can do so much automation of it ourselves; just like we're doing with the Advisor Tool, we're drinking our own Kool-Aid, or champagne, however you want to say it, to in fact tune up and solve some optimization for our customers automatically. And I think you're going to see that continue, and I think that could work really well in a bunch of different models. >> Colin, just on a personal note, I've always enjoyed our conversations; I've learned a lot from you over the years. I'm bummed that we can't hang out in Boston, but hopefully soon this will blow over. I loved last summer when we got together; we had the Vertica throwback with Stonebraker, Palmer, Lynch, and Mahony, and we did a great series, and that was a lot of fun. So it's really a pleasure, and thanks so much. Stay safe out there, and we'll talk to you soon. >> Yeah, you too, Dave, stay safe. I really appreciate the opportunity, and, you know, this is what it's all about; it's a lot of fun. I know we're going to see each other in person soon, and it's the people in the community that really make this happen, so I'm looking forward to that. But I really appreciate it. >> All right, and thank you, everybody, for watching. This is theCUBE's coverage of the Vertica Big Data Conference, gone virtual, gone digital. I'm Dave Vellante. We'll be right back right after this short break. >> Yeah.
Ben White, Domo | Virtual Vertica BDC 2020
>> Announcer: It's theCUBE covering the Virtual Vertica Big Data Conference 2020, brought to you by Vertica. >> Hi, everybody. Welcome to this digital coverage of the Vertica Big Data Conference. You're watching theCUBE and my name is Dave Volante. It's my pleasure to invite in Ben White, who's the Senior Database Engineer at Domo. Ben, great to see you, man. Thanks for coming on. >> Great to be here and here. >> You know, as I said, you know, earlier when we were off-camera, I really was hoping I could meet you face-to-face in Boston this year, but hey, I'll take it, and, you know, our community really wants to hear from experts like yourself. But let's start with Domo as the company. Share with us what Domo does and what your role is there. >> Well, if I can go straight to the official what Domo does is we provide, we process data at BI scale, we-we-we provide BI leverage at cloud scale in record time. And so what that means is, you know, we are a business-operating system where we provide a number of analytical abilities to companies of all sizes. But we do that at cloud scale and so I think that differentiates us quite a bit. >> So a lot of your work, if I understand it, and just in terms of understanding what Domo does, there's a lot of pressure in terms of being real-time. It's not, like, you sometimes don't know what's coming at you, so it's ad-hoc. I wonder if you could sort of talk about that, confirm that, maybe add a little color to it. >> Yeah, absolutely, absolutely. That's probably the biggest challenge it is to being, to operating Domo is that it is an ad hoc environment. And certainly what that means, is that you've got analysts and executives that are able to submit their own queries with out very... With very few limitations. So from an engineering standpoint, that challenge in that of course is that you don't have this predictable dashboard to plan for, when it comes to performance planning. So it definitely presents some challenges for us that we've done some pretty unique things, I think, to address those. >> So it sounds like your background fits well with that. I understand your people have called you a database whisperer and an envelope pusher. What does that mean to a DBA in this day and age? >> The whisperer part is probably a lost art, in the sense that it's not really sustainable, right? The idea that, you know, whatever it is I'm able to do with the database, it has to be repeatable. And so that's really where analytics comes in, right? That's where pushing the envelope comes in. And in a lot of ways that's where Vertica comes in with this open architecture. And so as a person who has a reputation for saying, "I understand this is what our limitations should be, but I think we can do more." Having a platform like Vertica, with such an open architecture, kind of lets you push those limits quite a bit. >> I mean I've always felt like, you know, Vertica, when I first saw the stone breaker architecture and talked to some of the early founders, I always felt like it was the Ferrari of databases, certainly at the time. And it sounds like you guys use it in that regard. But talk a little bit more about how you use Vertica, why, you know, why MPP, why Vertica? You know, why-why can't you do this with RDBMS? Educate us, a little bit, on, sort of, the basics. >> For us it was, part of what I mentioned when we started, when we talked about the very nature of the Domo platform, where there's an incredible amount of resiliency required. 
And so Vertica, the MPP platform, of course, allows us to build individual database clusters that can perform best for the workload that might be assigned to them. So the open, the expandable, the... The-the ability to grow Vertica, right, as your base grows, those are all important factors, when you're choosing early on, right? Without a real idea of how growth would be or what it will look like. If you were kind of, throwing up something to the dark, you look at the Vertica platform and you can see, well, as I grow, I can, kind of, build with this, right? I can do some unique things with the platform in terms of this open architecture that will allow me to not have to make all my decisions today, right? (mutters) >> So, you're using Vertica, I know, at least in part, you're working with AWS as well, can you describe sort of your environment? Do you give anything on-prem, is everything in cloud? What's your set up look like? >> Sure, we have a hybrid cloud environment where we have a significant presence in public files in our own private cloud. And so, yeah, having said that, we certainly have a really an extensive presence, I would say, in AWS. So, they're definitely the partner of our when it comes to providing the databases and the server power that we need to operate on. >> From a standpoint of engineering and architecting a database, what were some of the challenges that you faced when you had to create that hybrid architecture? What did you face and how did you overcome that? >> Well, you know, some of the... There were some things we faced in terms of, one, it made it easy that Vertica and AWS have their own... They play well together, we'll say that. And so, Vertica was designed to work on AWS. So that part of it took care of it's self. Now our own private cloud and being able to connect that to our public cloud has been a part of our own engineering abilities. And again, I don't want to make little, make light of it, it certainly not impossible. And so we... Some of the challenges that pertain to the database really were in the early days, that you mentioned, when we talked a little bit earlier about Vertica's most recent eon mode. And I'm sure you'll get to that. But when I think of early challenges, some of the early challenges were the architecture of enterprise mode. When I talk about all of these, this idea that we can have unique databases or database clusters of different sizes, or this elasticity, because really, if you know the enterprise architecture, that's not necessarily the enterprise architecture. So we had to do some unique things, I think, to overcome that, right, early. To get around the rigidness of enterprise. >> Yeah, I mean, I hear you. Right? Enterprise is complex and you like when things are hardened and fossilized but, in your ad hoc environment, that's not what you needed. So talk more about eon mode. What is eon mode for you and how do you apply it? What are some of the challenges and opportunities there, that you've found? >> So, the opportunities were certainly in this elastic architecture and the ability to separate in the storage, immediately meant that for some of the unique data paths that we wanted to take, right? We could do that fairly quickly. Certainly we could expand databases, right, quickly. More importantly, now you can reduce. Because previously, in the past, right, when I mentioned the enterprise architecture, the idea of growing a database in itself has it's pain. As far as the time it takes to (mumbles) the data, and that. 
Then think about taking that database back down and (telephone interference). All of a sudden, with eon, right, we had this elasticity, where you could, kind of, start to think about auto scaling, where you can go up and down and maybe you could save some money or maybe you could improve performance or maybe you could meet demand, At a time where customers need it most, in a real way, right? So it's definitely a game changer in that regard. >> I always love to talk to the customers because I get to, you know, I hear from the vendor, what they say, and then I like to, sort of, validate it. So, you know, Vertica talks a lot about separating compute and storage, and they're not the only one, from an architectural standpoint who do that. But Vertica stresses it. They're the only one that does that with a hybrid architecture. They can do it on-prem, they can do it in the cloud. From your experience, well first of all, is that true? You may or may not know, but is that advantageous to you, and if so, why? >> Well, first of all, it's certainly true. Earlier in some of the original beta testing for the on-prem eon modes that we... I was able to participate in it and be aware of it. So it certainly a realty, they, it's actually supported on Pure storage with FlashBlade and it's quite impressive. You know, for who, who will that be for, tough one. It's probably Vertica's question that they're probably still answering, but I think, obviously, some enterprise users that probably have some hybrid cloud, right? They have some architecture, they have some hardware, that they themselves, want to make use of. We certainly would probably fit into one of their, you know, their market segments. That they would say that we might be the ones to look at on-prem eon mode. Again, the beauty of it is, the elasticity, right? The idea that you could have this... So a lot of times... So I want to go back real quick to separating compute. >> Sure. Great. >> You know, we start by separating it. And I like to think of it, maybe more of, like, the up link. Because in a true way, it's not necessarily separated because ultimately, you're bringing the compute and the storage back together. But to be able to decouple it quickly, replace nodes, bring in nodes, that certainly fits, I think, what we were trying to do in building this kind of ecosystem that could respond to unknown of a customer query or of a customer demand. >> I see, thank you for that clarification because you're right, it's really not separating, it's decoupling. And that's important because you can scale them independently, but you still need compute and you still need storage to run your work load. But from a cost standpoint, you don't have to buy it in chunks. You can buy in granular segments for whatever your workload requires. Is that, is that the correct understanding? >> Yeah, and to, the ability to able to reuse compute. So in the scenario of AWS or even in the scenario of your on-prem solution, you've got this data that's safe and secure in (mumbles) computer storage, but the compute that you have, you can reuse that, right? You could have a scenario that you have some query that needs more analytic, more-more fire power, more memory, more what have you that you have. And so you can kind of move between, and that's important, right? That's maybe more important than can I grow them separately. Can I, can I borrow it. Can I borrow that compute you're using for my (cuts out) and give it back? 
And you can do that, when you're so easily able to decouple the compute and put it where you want, right? And likewise, if you have a down period where customers aren't using it, you'd like to be able to not use that, if you no longer require it, you're not going to get it back. 'Cause it-it opened the door to a lot of those things that allowed performance and process department to meet up. >> I wonder if I can ask you a question, you mentioned Pure a couple of times, are you using Pure FlashBlade on-prem, is that correct? >> That is the solution that is supported, that is supported by Vertica for the on-prem. (cuts out) So at this point, we have been discussing with them about some our own POCs for that. Before, again, we're back to the idea of how do we see ourselves using it? And so we certainly discuss the feasibility of bringing it in and giving it the (mumbles). But that's not something we're... Heavily on right now. >> And what is Domo for Domo? Tell us about that. >> Well it really started as this idea, even in the company, where we say, we should be using Domo in our everyday business. From the sales folk to the marketing folk, right. Everybody is going to use Domo, it's a business platform. For us in engineering team, it was kind of like, well if we use Domo, say for instance, to be better at the database engineers, now we've pointed Domo at itself, right? Vertica's running Domo in the background to some degree and then we turn around and say, "Hey Domo, how can we better at running you?" So it became this kind of cool thing we'd play with. We're now able to put some, some methods together where we can actually do that, right. Where we can monitor using our platform, that's really good at processing large amounts of data and spitting out useful analytics, right. We take those analytics down, make recommendation changes at the-- For now, you've got Domo for Domo happening and it allows us to sit at home and work. Now, even when we have to, even before we had to. >> Well, you know, look. Look at us here. Right? We couldn't meet in Boston physically, we're now meeting remote. You're on a hot spot because you've got some weather in your satellite internet in Atlanta and we're having a great conversation. So-so, we're here with Ben White, who's a senior database engineer at Domo. I want to ask you about some of the envelope pushing that you've done around autonomous. You hear that word thrown around a lot. Means a lot of things to a lot of different people. How do you look at autonomous? And how does it fit with eon and some of the other things you're doing? >> You know, I... Autonomous and the idea idea of autonomy is something that I don't even know if that I have already, ready to define. And so, even in my discussion, I often mention it as a road to it. Because exactly where it is, it's hard to pin down, because there's always this idea of how much trust do you give, right, to the system or how much, how much is truly autonomous? How much already is being intervened by us, the engineers. So I do hedge on using that. But on this road towards autonomy, when we look at, what we're, how we're using Domo. And even what that really means for Vertica, because in a lot of my examples and a lot of the things that we've engineered at Domo, were designed to maybe overcome something that I thought was a limitation thing. And so many times as we've done that, Vertica has kind of met us. 
Like right after we've kind of engineered our architecture stuff, that we thought could help on our side, Vertica has a release that kind of addresses it. So, the autonomy idea, and the idea that we could analyze metadata, make recommendations, and then execute those recommendations without intervention, is that road to autonomy. Once the database is properly able to do that, you could see in our ad hoc environment how that would be pretty useful, where with literally millions of queries every hour, it's trying to figure out what's the best, you know, profile. >> You know for- >> (overlapping) probably do a better job in that, than we could. >> For years I felt like IT folks sometimes really did not want that automation, they wanted the knobs to turn. But I wonder if you can comment. I feel as though the level of complexity now, with cloud, with on-prem, with, you know, hybrid, multicloud, the scale, the speed, the real time, it just gets, the pace is just too much for humans. And so, it's almost like the industry is going to have to capitulate to the machine, and then really trust the machine. But I'm still sensing, from you, a little bit of hesitation there, but light at the end of the tunnel. I wonder if you can comment? >> Sure. I think the light at the end of the tunnel is that, even in the recent months, we've really begun to incorporate more machine learning and artificial intelligence into the model, right. And back to what we were saying, I do feel that we're getting closer to finding conditions that we don't know about. Because right now our system is kind of a rules-based system, where we've said, "Well these are the things we should be looking for, these are the things that we think are a problem." To mature to the point where the database is recognizing anomalies and acting on patterns, these are problems you didn't know happened. And that's kind of the next step, right. Identifying the things you didn't know. And that's the path we're on now. And it's probably more exciting even than, kind of, nailing down all the things you think you know. We figure out what we don't know yet. >> So I want to close with, I know you're a prominent member of the, a respected member of the Vertica Customer Advisory Board, and you know, without divulging anything confidential, what are the kinds of things that you want Vertica to do going forward? >> Oh, I think, some of the in-database autonomy. The ability to take some of the recommendations that we know can be derived from the metadata that already exists in the platform and start to execute some of those recommendations. And another thing we've talked about, and I've been pretty open about talking about it, is a new version of the database designer, which I think is something that I'm sure they're working on. Lightweight, something that can give us that database design without the overhead. Those are two things, I think; as they nail, basically, the database designer, as they perfect that, they'll really have all the components in play to do in-database autonomy. And I think that's, to some degree, where they're heading. >> Nice. Well Ben, listen, I really appreciate you coming on. You're a thought leader, you're very open, open minded, Vertica is, you know, a really open community. I mean, they've always been quite transparent in terms of where they're going. It's just awesome to have guys like you on theCUBE to share with our community.
So thank you so much and hopefully we can meet face-to-face shortly. >> Absolutely. Well you stay safe in Boston, one of my favorite towns and so no doubt, when the doors get back open, I'll be coming down. Or coming up as it were. >> Take care. All right, and thank you for watching everybody. Dave Volante with theCUBE, we're here covering the Virtual Vertica Big Data Conference. (electronic music)
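The "Domo for Domo" idea Ben describes, pointing the platform's analytics back at the Vertica that runs it and emitting recommendations from its own metadata, is easy to picture with a small sketch. What follows is a hypothetical, rules-based illustration of that first stage on the road to autonomy, not Domo's actual implementation: the v_monitor table and column names are recalled from Vertica's system catalog and may differ by version, and the host, database, and thresholds are made up.

```python
# Hypothetical sketch: scan Vertica's own query metadata and emit simple
# rule-based recommendations, the precursor to the anomaly detection Ben
# describes. Table/column names (v_monitor.query_requests) are from memory
# and may vary by Vertica version; thresholds are arbitrary.
import vertica_python

conn_info = {
    "host": "vertica.example.internal",  # hypothetical cluster
    "port": 5433,
    "user": "dbadmin",
    "password": "example",
    "database": "analytics",             # hypothetical database name
}

RULES = [
    # (label, SQL predicate over v_monitor.query_requests)
    ("queries running longer than 10 minutes", "request_duration_ms > 600000"),
    ("queries acquiring more than 50 GB of memory", "memory_acquired_mb > 51200"),
]

def recommendations():
    conn = vertica_python.connect(**conn_info)
    try:
        cur = conn.cursor()
        recs = []
        for label, predicate in RULES:
            cur.execute(
                "SELECT user_name, COUNT(*) AS n "
                "FROM v_monitor.query_requests "
                f"WHERE {predicate} "
                "GROUP BY user_name ORDER BY n DESC LIMIT 5"
            )
            for user, n in cur.fetchall():
                recs.append(
                    f"{label}: user {user} hit this {n} times; "
                    "consider a dedicated subcluster or resource pool"
                )
        return recs
    finally:
        conn.close()

if __name__ == "__main__":
    for rec in recommendations():
        print(rec)
```

The interesting step, as Ben says, is replacing the hand-written RULES list with models that flag conditions nobody thought to write a rule for.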
Joy King, Vertica | Virtual Vertica BDC 2020
>>Yeah, it's theCUBE covering the Virtual Vertica Big Data Conference 2020, brought to you by Vertica. >>Welcome back, everybody. My name is Dave Vellante, and you're watching theCUBE's coverage of the Vertica Virtual Big Data Conference. theCUBE has been at every BDC, and it's our pleasure in these difficult times to be covering BDC as a virtual event. For this digital program I'm really excited to have Joy King joining us. Joy is the vice president of product and go-to-market strategy at Vertica, and if that weren't enough, she also runs marketing and the education curriculum. So, Joy, you're a multi-tool player. You've got the technical side and the marketing gene, so welcome to theCUBE. You're always a great guest. Love to have you on. >>Thank you so much, David. A pleasure, it really is. >>So I want to get in, you know, we'll have some time. We've been talking about the conference and the virtual event, but I really want to dig into the product stuff. It's a big day for you guys. You announced 10.0. But before we get into the announcements, step back a little bit. You know, you guys are riding the waves. I've said to a number of our guests that Vertica has always been good at riding the waves, not only the initial MPP wave, but you embraced HDFS, you embraced data science and analytics, and then the cloud. So what are the trends that you see, the big waves that you're riding? >>Well, you're absolutely right, Dave. I mean, what I think is most interesting and important is that Vertica is, at its core, a true engineering culture, founded by, well, a pretty famous guy, right, Dr. Stonebraker, who embedded that very technical Vertica engineering culture. It means that we don't pretend to know everything that's coming, but we are committed to embracing the technology trends, the innovations, things like that. We don't pretend to know it all. We just do it all. So right now, I think I see three big imminent trends that we are addressing, and as a matter of fact we have been for a while, but they are particularly relevant right now. The first is a combination of, I guess, a disappointment in what Hadoop was able to deliver. I always feel a little guilty, because she's a very reasonably capable elephant. She was designed to be HDFS, a highly distributed file store, but she can't be an entire zoo. So there's a lot of disappointment in the market, but a lot of data in HDFS. You combine that with the explosion of cloud object storage, and you're talking about even more data, but even more data silos. So data growth and data silos is trend one. Then what I would say trend two is the cloud reality. Cloud brings so many benefits. There are so many opportunities that public cloud computing delivers. But I think we've learned enough now to know that there's also some reality. The cloud providers themselves, Dave, don't talk about it much, but is it more agile? Can you do things without having to manage your own data center? Of course you can. But the reality is it's a little more pricey than we expected. There are some security and privacy concerns. There are some workloads that can't go to the cloud. So hybrid, and also multi-cloud deployments, are the next trend, and they are mandatory. And then maybe the one that is the most exciting in terms of changing the world, and we could use a little change right now, is operationalizing machine learning.

There's so much potential in the technology, but it somehow has been stuck, for the most part, in science projects and data science labs, and the time is now to operationalize it. Those are the three big trends that Vertica is focusing on right now. >>That's great. I wonder if I could ask you a couple questions about that. I mean, like you, I have a soft spot in my heart for Hadoop, and the thing about Hadoop that was, I think, profound was it got people thinking about, you know, bringing compute to the data and leaving data in place, and it really got people thinking about data-driven cultures. It didn't solve all the problems, but it collected a lot of data that we can now take, your third trend, and apply machine intelligence on top of that data. And then the cloud is really the ability to scale, and it gives you that agility, and it's not just the cloud itself, it's bringing the cloud experience to wherever the data lives. And I think that's what I'm hearing from you. Those are the three big superpowers of innovation today. >>That's exactly right. So, you know, I have to say, I think we all know that data analytics, machine learning, none of that delivers real value unless the volume of data is there to be able to truly predict and influence the future. So the last seven to 10 years have been, correctly, about collecting the data, getting the data into a common location, and HDFS was well designed for that. But we live in a capitalist world, and some companies stepped in and tried to make HDFS and the broader Hadoop ecosystem be the single solution to big data. It's not true. So now the key is, how do we take advantage of all of that data? And that's exactly what Vertica is focusing on. So as you know, we began our journey with Vertica back in the day, in 2007, with our first release, and we saw the growth of Hadoop. So we announced, many years ago, Vertica SQL on Hadoop, the idea being to deploy Vertica on Hadoop nodes and query the data in Hadoop. We wanted to help. Now, with Vertica 10, we are also introducing Vertica in Eon Mode, and we can talk more about that, but Vertica in Eon Mode for HDFS is a way to apply an ANSI SQL database management platform to HDFS infrastructure and data in HDFS file storage. And that is a great way to leverage the investment that so many companies have made in HDFS. And I think it's fair to the elephant to treat her well. >>Okay, let's get into the hard news on 10.0. I mean, you've got a mature stack, but what are some of the highlights of 10.0? And then we can drill into some of the technologies. >>Absolutely. So, well, in 2018 Vertica announced Vertica in Eon Mode, which is the separation of compute from storage. Now, this is a great example of Vertica embracing innovation. Vertica was designed for on-premises data centers and bare metal servers with tightly coupled storage, DL380s from Hewlett Packard Enterprise, Dell, etcetera. But we saw that cloud computing was fundamentally changing data center architectures, and it made sense to separate compute from storage. So you add compute when you need compute, you add storage when you need storage. That's exactly what the cloud introduced, but it was only available in the cloud. So the first thing we did was architect Vertica in Eon Mode, which is not a new product, and this is really important, it's a deployment option.

And in 2018 our customers had the opportunity to deploy their Vertica licenses in Eon Mode on AWS. In September of 2019 we then broke an important record: we brought cloud architecture down to earth, and we announced Vertica in Eon Mode, so Vertica with communal or shared storage, leveraging Pure Storage FlashBlade. That gave us all the advantages of separating compute from storage, all of the workload isolation, the scale up, scale down, the ability to manage clusters, and we did that with an on-premise data center. And now, with Vertica 10, we are announcing Vertica in Eon Mode on HDFS and Vertica in Eon Mode on Google Cloud. So what we've got here, in summary, is Vertica in Eon Mode, multi-cloud and multiple on-premise data center storage options, and that gives us the opportunity to help our customers both with the hybrid and multi-cloud strategies they have and with unifying their data silos. But Vertica 10 goes farther. >>Well, let me stop you there, because I just want to mention, so we talked to Joe Gonzalez at Mass Mutual, who essentially was brought in, and one of his tasks was to lead the move into Eon Mode. Why? Because, I'm asking, they still had three separate data silos and they wanted to bring those together. They're investing heavily in technology, Joe is an expert, and they really put data at their core, and Eon Mode was a key part of that, because they're using S3 and so on. So that was a very important step for those guys. Carry on. What else do we need to know about? >>So one of the reasons, for example, that Mass Mutual is so excited about Eon Mode is because of the operational advantages. Think about exactly what Joe told you about multiple clusters serving multiple use cases and maybe multiple divisions. And look, let's be clear, marketing doesn't always get along with finance, and finance doesn't necessarily get along with ops, and IT is often caught in the middle. Vertica in Eon Mode allows workload isolation, meaning allocating the compute resources that different use cases need without allowing them to interfere with other use cases, while allowing everybody to access the data. So it's a great way to bring the corporate world together but still protect them from each other. And that's one of the things that Mass Mutual is going to benefit from, as well as so many of our other customers. >>I also want to mention, so when I saw you last year at the Pure Storage Accelerate conference, you said, today we are the only company that separates compute from storage that runs on-prem and in the cloud. And I was like, I had to think about it, I researched it, and I still can't find anybody else who does that. I want to mention that you actually beat a number of the cloud players to that capability. So good job, and I think it is a differentiator, assuming that you're giving me that cloud experience and the licensing and the pricing capability. So I want to talk about that a little bit. >>Well, you're absolutely right. So let's be clear. There is no question that the public clouds introduced the separation of compute from storage, and those are advantages that... they do not have the ability, or the interest, to replicate on premise. For Vertica, we were born to be software only. We make no money on underlying infrastructure. We don't charge as a package for the hardware underneath, so we are totally motivated to be independent of that and also to continuously optimize the software to be as efficient as possible.

And we do the exact same thing with licensing and pricing, to your question. Cloud providers charge for node instances. That's how they charge for their underlying infrastructure. Well, in some cases, if you're talking about a use case where you have a whole lot of data but you don't necessarily have a lot of compute for that workload, it may make sense to pay per node, because then it's unlimited data. But what if you have a huge compute need on a relatively small data set? That's not so good. Vertica offers per node and per terabyte for our customers, depending on their use case. We also offer perpetual licenses for customers who want CapEx, but we also offer subscription for companies that say, nope, I have to have OpEx. And while this can certainly cause some complexity for our field organization, we know that it's all about choice, that everybody in today's world wants it personalized, just for me. And that's exactly what we're doing with our pricing and licensing. >>So just to clarify, you're saying I can pay by the drink if I want to, you're not going to force me necessarily into a term, or I can choose to have, you know, more predictable pricing. Is that correct? >>Well, it's partially correct. First, Vertica subscription licensing is a fixed amount for the period of the subscription. We do that because so many of our customers cannot, and I'm one of them, by the way, cannot tell finance what the budget forecast is going to be for the quarter after I've spent it; you have to say what it's going to be beforehand. So our subscription pricing is a fixed amount for a period of time. However, we do respect the fact that some companies do want usage-based pricing. So on AWS, you can use Vertica by the hour and you pay by the hour, and we are about to launch the very same thing on Google Cloud. So for us, it's about, what do you need? And we make it happen natively, directly with us or through AWS and Google Cloud. >>So I want to make sure I understand, so the fixed is, in a sense, a floor, and then if you want to surge above that, you can allow usage pricing, if you're on the cloud, correct? >>Well, you actually license your Vertica cluster by the hour on AWS and you run your cluster there, or you can buy a license from Vertica for a fixed capacity or a fixed number of nodes and deploy it on the cloud. And then, if you want to add more nodes or add more capacity, you can. It's not usage based for the license that you bring to the cloud, but if you purchase through the cloud provider, it is usage based. >>Yeah, okay. And you guys are in the marketplace, is that right? So, again, if I want OpEx, I can do that, I can choose to do that. >>That's right, usage through the AWS marketplace or, yeah, directly from Vertica. >>Because every small business who then goes to a salesforce management system knows this. Okay, great, I can pay by the month. Well, yeah, well, not really, here's our three-year term, right? And it's very frustrating. >>Well, and even in the public cloud you can pay by the hour, by the minute, or whatever, but it becomes pretty obvious that you're better off if you have reserved instance types or committed amounts. That's why Vertica offers a subscription that says, hey, you want to have 100 terabytes for the next year? Here's what it will cost you. We do interval billing; you want to do monthly, quarterly, bi-annual, we'll do that. But we won't charge you for usage that you didn't even know you were using until after you get the bill. And frankly, that's something my finance team does not like.

>>Yeah, I think, you know, I know this is kind of a wonky discussion, but so many people gloss over the licensing and the pricing, and I think my takeaway here is optionality, you know, pricing your way. That's great, thank you for that clarification. Okay, so you've got Google Cloud, and I want to talk about storage optionality. If I line them up, I've got HDFS, I've got, I'm presuming, Google now, and you've got Pure... >>which is an S3-compatible storage, yes. So the story is >>Google object store? >>Google object store, Amazon S3 object store, HDFS, Pure Storage FlashBlade, which is an object store on-prem. And we are continuing on this path, because ultimately we know that our customers need the option of having a next generation data center architecture, which is sort of shared or communal storage, so all the data is in one place and workloads can be managed independently on that data, and that's exactly what we're doing. We already have two public clouds and two on-premise deployment options today. And as you said, I did challenge you back when we saw each other at the conference: today, Vertica is the only analytic data warehouse platform that offers that option on premise and in multiple public clouds. >>Okay, let's go back through the innovation cocktail, I'll call it. So it's the data, applying machine intelligence to that data, and we've talked about scaling in the cloud and some of the other advantages. Let's talk about the machine intelligence, the machine learning piece of it. What's your story there? Give us any updates on your embracing of tooling and the like. >>Well, quite a few years ago we began building some native in-database machine learning algorithms into Vertica, and the reason we did that was we knew that the architecture of MPP columnar execution would dramatically improve performance. We also knew that a lot of people speak SQL, but at the time not so many people spoke R or even Python. And so, what if we could give access to machine learning in the database via SQL and deliver that kind of performance? So that's the journey we started out on. And then we realized that, actually, machine learning is a lot more, as everybody knows, than just algorithms. So we then built in the full end-to-end machine learning functions, from data preparation to model training, model scoring and evaluation, all the way through to deployment, and all of this, again, SQL accessible. You speak SQL, you speak to the data. And the other advantage of this approach was, we realized that accuracy was compromised if you down sample. If you moved a portion of the data from the database to a specialty machine learning platform, you were challenged by accuracy and also by what the industry is calling replicability. And that means, if a model makes a decision, like, let's say, credit scoring, and that decision is in any way challenged, well, you have to be able to replicate it to prove that you made the decision correctly. And there was a bit of, ah, you know, a blow-up in the media not too long ago about a credit scoring decision that appeared to be gender biased. But unfortunately, because the model could not be replicated, there was no way to disprove that, and that was not a good thing. So all of this is built into Vertica, and with Vertica 10 we've taken the next step. Just like with Hadoop, we know that innovation happens within Vertica but also outside of Vertica. We saw that data scientists really love their preferred languages, like Python, and they love their tools and platforms, like TensorFlow. With Vertica 10 we now integrate even more with Python, which we have for a while, but we also have TensorFlow integration and PMML. What does that mean? It means that if you build and train a model external to Vertica, using the machine learning platform that you like, you can import that model into Vertica and run it on the full end-to-end process, but run it on all the data. No more accuracy challenges. MPP columnar execution, so it's blazing fast. And if somebody wants to know why a model made a decision, you can replicate that model and you can explain why. Those are very powerful. And it's also another cultural unification, Dave. It unifies the business analyst community, who speak SQL, with the data scientist community, who love their tools like TensorFlow and Python. >>Well, I think, Joy, that's important, because so much of machine intelligence and AI has a black box problem. If you can't replicate the model, then you do run into a potential gender bias. In the example that you're talking about, and there are many, you know, let's say an individual is very wealthy, he goes for a mortgage and his wife goes for some credit, she gets rejected, he gets accepted. It's the same household, but the bias in the model, that may be gender bias, it could be race bias. And so being able to replicate that and open it up and make the machine intelligence transparent is very, very important. >>It really is. And that replicability, as well as accuracy, is critical, because if you're down sampling and you're running models on different sets of data, things can get confusing. And yet you don't really have a choice, because if you're talking about petabytes of data and you need to export that data to a machine learning platform and then try to put it back and get the next model out the next day, you're looking at way too much time. Doing it in the database, or training the model and then importing it into the database for production, that's what Vertica allows, and our customers are so ready for it. Of course, you know, they are the ones that are sort of the trailblazers, they've always been, and, ah, this is the next step in blazing the ML trail. >>Joy, customers want analytics, they want full-function analytics. What are they pushing you for now? What are you delivering? What's your thought on that? >>Well, I would say the number one thing that our customers are demanding right now is deployment flexibility. What the CEO or the CFO mandated six months ago, now, well, whatever that "thou shalt" was, it is different today. And what I tell them is, it is impossible to know what you're going to be commanded to do, or what options you might have, in the future. The key is not having to choose, and they are very, very committed to that. We have a large telco customer whose commitment is multi-cloud. Why multi-cloud? Well, because they see innovation available in different public clouds and they want to take advantage of all of them. They also, admittedly, see that there's the risk of lock-in, right? Like with any vendor, they don't want that either, so they want multi-cloud. We have other customers who say, we have some workloads that make sense for the cloud and some that we absolutely cannot put in the cloud, but we want a unified analytics strategy, so they are adamant in focusing on deployment flexibility. That's what I'd say is first. Second, I would say the interest in operationalizing machine learning, but not necessarily forcing the analytics team to hammer the data science team about which tools are the best tools. That's probably number two. And then I'd say number three, and it's because when you look at companies like Uber or The Trade Desk or AT&T or Cerner, performance at scale. When they say milliseconds, they think that's slow. When they say petabytes, they're like, yeah, that was yesterday. So performance at scale: good enough, for Vertica, is never good enough, and it's why we're constantly building, at the core, the next generation execution engine, database designer, optimization engine, all that stuff. >>I want to also ask you, when I first started following Vertica, we covered, theCUBE covered, the BDC, and one of the things I noticed in talking to customers and people in the community is that you have a community edition, a free edition, and it's not neutered. Have you maintained that ethos, you know, through the transitions into Micro Focus? And can you talk about that a little bit? >>Absolutely. Vertica Community Edition is Vertica. It's all of the Vertica functionality: geospatial, time series, pattern matching, machine learning, all of it, Vertica in Eon Mode, Vertica in Enterprise Mode. All of Vertica is in the Community Edition. The only limitation is one terabyte of data and three nodes, and it's free. Now, if you want commercial support, where you can file a support ticket and things like that, you do have to buy a license. But it's free, and people say, well, free for how long? Like, our field has asked that, and I say, forever, and they say, what do you mean, forever? Because we want people to use Vertica for use cases that are small, where they want to learn, where they want to try, and we see no reason to limit that. And what we look for is, when they're ready to grow, when they need the next set of data that goes beyond a terabyte, or they need more compute than three nodes, then we're here for them. And it also brings up an important thing that I should remind you of, or tell you about if you haven't heard it, Dave, and that's the Vertica Academy, academy.vertica.com. Well, what is that? That is self-paced, on-demand training, as well as Vertica Essentials certification, and certification means you have seven days with your hands on a Vertica cluster hosted in the cloud to go through all the certification. And guess what? All of that is free. Why would you give it away for free? Because for us, empowering the market, giving the market the expertise and the learning they need to take advantage of Vertica, just like with Community Edition, is fundamental to our mission, because we see the advantage that Vertica can bring, and we want to make it possible for every company all around the world to take advantage of it. >>I love that ethos of Vertica. I mean, obviously great product, but it's not just the product, it's the business practices, really progressive pricing, and the embracing of all these trends, not running away from the waves but really leaning in. Joy, thanks so much. Great interview, I really appreciate it. And, ah, I wish we could have been face to face in Boston, but I think it's the prudent thing to do. >>I promise you, Dave, we will, because the Vertica BDC in 2021 is already booked, so I will see you there. >>Alright, Joy King, thanks so much for coming on theCUBE. And thank you for watching. Remember, theCUBE is running this program in conjunction with the Virtual Vertica BDC. Go to vertica.com/bdc2020 for all the coverage, and keep it right there. This is Dave Vellante with theCUBE. We'll be right back. >>Yeah, >>yeah, yeah.
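For readers who want to see what Joy's "you speak SQL, you speak to the data" point looks like in practice, here is a small sketch that drives Vertica's documented in-database ML functions from Python. The function names (LINEAR_REG, PREDICT_LINEAR_REG, IMPORT_MODELS) come from Vertica's machine learning documentation, but the connection details, table names, and the PMML file path are illustrative assumptions, so treat this as a sketch rather than a drop-in script.

```python
# Sketch of SQL-accessible, in-database machine learning in Vertica.
# Assumes hypothetical tables customer_history / new_customers exist;
# verify function availability against your Vertica version (PMML import
# is described as a Vertica 10 feature in the interview).
import vertica_python

conn_info = {"host": "localhost", "port": 5433, "user": "dbadmin",
             "password": "example", "database": "vmart"}

conn = vertica_python.connect(**conn_info)
cur = conn.cursor()

# Train where the data lives: no down-sampling, no export to a separate
# ML platform, so results stay replicable.
cur.execute("""
    SELECT LINEAR_REG('spend_model', 'customer_history',
                      'monthly_spend', 'tenure_months, support_tickets')
""")

# Score with the same engine, using MPP columnar execution.
cur.execute("""
    SELECT customer_id,
           PREDICT_LINEAR_REG(tenure_months, support_tickets
                              USING PARAMETERS model_name='spend_model') AS predicted_spend
    FROM new_customers
    LIMIT 5
""")
print(cur.fetchall())

# A model trained outside Vertica (for example, exported as PMML) can be
# imported and run in-database on all the data. Path is hypothetical.
cur.execute("""
    SELECT IMPORT_MODELS('/models/external_churn.pmml'
                         USING PARAMETERS category='PMML')
""")

conn.close()
```

The unification point Joy makes is visible here: the whole workflow is plain SQL, so the same statements are available to a business analyst in a SQL client and to a data scientist calling them from Python.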
Larry Lancaster, Zebrium | Virtual Vertica BDC 2020
>> Announcer: It's theCUBE! Covering the Virtual Vertica Big Data Conference 2020 brought to you by Vertica. >> Hi, everybody. Welcome back. You're watching theCUBE's coverage of the Vertica Virtual Big Data Conference. It was, of course, going to be in Boston at the Encore Hotel. Win big with big data with the new casino, but obviously Coronavirus has changed all that. Our hearts go out and we have empathy for those people who are struggling. We are going to continue our wall-to-wall coverage of this conference and we're here with Larry Lancaster who's the founder and CTO of Zebrium. Larry, welcome to theCUBE. Thanks for coming on. >> Hi, thanks for having me. >> You're welcome. So first question, why did you start Zebrium? >> You know, I've been dealing with machine data a long time. So for those of you who don't know what that is, if you can imagine servers or whatever goes on in a data center or in a SaaS shop, there's data coming out of those servers, out of those applications, and basically you can build a lot of cool stuff on that. So there's a lot of metrics that come out and there's a lot of log files that come out. And so, I've built this... Basically I've spent my career building that sort of thing, so tools on top of that or products on top of that. The problem is that, since at least the log files are completely unstructured, you're always doing the same thing over and over again, which is going in and understanding the data and extracting the data and all that stuff. It's very time consuming. If you've done it like five times you don't want to do it again. So really, my idea was, at this point, with machine learning where it's at, there's got to be a better way. So Zebrium was founded on the notion that we can just do all that automatically. We can take a pile of machine data, we can turn it into a database, and we can build stuff on top of that. And so the company is really all about bringing that value to the market. >> That's cool. I want to get into that, just better understand who you're disrupting and understand that opportunity better. But before I do, tell us a little bit about your background. You got kind of an interesting background. Lot of tech jobs. Give us some color there. >> Yeah, so I started in the Valley I guess 20 years ago, and when my son was born I left grad school. I was in grad school over at Berkeley, Biophysics. And I realized I needed to go get a job, so I ended up starting in software and I've been there ever since. I mean, I spent a lot of time at, I guess I cut my teeth at, NetApp, which was a storage company. And then I co-founded a business called Glassbeam, which was kind of an ETL database company. And then after that I ended up at Nimble Storage (another company ended up buying us, so I went over there), and Nimble is where I built the InfoSight platform. That's where I kind of, after that, I was able to step back and take a year and a half and just go into my basement, actually, this is my kind of workspace here, and come up with the technology and actually build it so that I could go raise money and get a team together to build Zebrium. So that's really my career in a nutshell. >> And you've got Hello Kitty over your right shoulder, which is kind of cool. >> That's right. >> And then up to the left you got your monitor, right? >> Well, I had it. It's over here, yeah. >> But it was great! Pull it out, pull it out, let me see it. So, okay, so you got that. So what do you do? You just sit there and code all night or what?
Yeah, that's right. So Hello Kitty's over here. I have a daughter and she set up my workspace here on this side with Hello Kitty and so on. And over on this side, I've got my recliner where I basically lay it all the way back and then I pivot this thing down over my face and put my keyboard on my lap, and I can just sit there for like 20 hours. It's great. Completely comfortable. >> That's cool. All right, better put that monitor back or our guys will yell at me. But so, obviously, we're talking to somebody with serious coding chops, and I'll also add that the Nimble InfoSight, I think it was one of the best pickups that HP, HPE, has had in a while. And the thing that interested me about that, Larry, is that the company was able to take that InfoSight and port it very quickly across its product lines. So that says to me it was a modern architecture, I'm sure APIs, microservices, and all those cool buzzwords, but the proof is in their ability to bring that IP to other parts of the portfolio. So, well done. >> Yeah, well thanks. I appreciate that. I mean, they've got a fantastic team there. And the other thing that helps is when you have the notion that you don't just build on top of the data, you extract the data, you structure it, you put that in a database, we used Vertica there for that, and then you build on top of that. Taking the time to build that layer is what lets you build a scalable platform. >> Yeah, so, why Vertica? I mean, Vertica's been around for a while. You remember, you had the old RDBMSs, Oracle, Db2, SQL Server, and then the database was kind of a boring market. And then, all of a sudden, you had all of these MPP companies come out, a spate of them. They all got acquired, including Vertica, and they've all sort of disappeared and morphed into different brands, and Micro Focus has preserved the Vertica brand. But it seems like Vertica has been able to survive the transitions. Why Vertica? What was it about that platform that was unique and interested you? >> Well, I mean, so they were the first ones to build what I would call a real column store that's kind of market capable, right? So there was the C-Store project at Berkeley, which Stonebraker was involved in, and then that became sort of the seed from which Vertica was spawned. So you had this idea of, let's lay things out in a columnar way. And when I say columnar, I don't just mean that the data for every column is in a different set of files. What I mean by that is it takes full advantage of things like run-length encoding and other columnar encodings, and block compression, and so you end up with these massive orders of magnitude savings in terms of the data that's being pulled off of storage, as well as as it's moving through the pipeline internally in Vertica's query processing. So why am I saying all this? Because it's fundamentally, it was a fundamentally disruptive technology. I think column stores are ubiquitous now in analytics. And I think you could name maybe a couple of projects, which are mostly open source, who do something like Vertica does, but name me another one that's actually capable of serving an enterprise as a relational database. I still think Vertica is unique in being that one. >> Well, it's interesting because you're a startup, and so a lot of startups would say, okay, we're going with a born-in-the-cloud database. Now Vertica touts that, well look, we've embraced cloud. You know, we run in the cloud, we run on-prem, all different optionality.
And you hear a lot of vendors say that, but a lot of times they're just taking their stack and stuffing it into the cloud. But, so why didn't you go with a cloud-native database and is Vertica able to, I mean, obviously, that's why you chose it, but I'm interested from a technologist standpoint as to why you, again, made that choice given all these other choices around there. >> Right, I mean, again, I'm not, so... As I explained a column store, which I think is the appropriate definition, I'm not aware of another cloud-native-- >> Hm, okay. >> I'm aware of other cloud-native transactional databases, I'm not aware of one that has the analytics form it and I've tried some of them. So it was not like I didn't look. What I was actually impressed with and I think what let me move forward using Vertica in our stack is the fact that Eon really is built from the ground up to be cloud-native. And so we've been using Eon almost ever since we started the work that we're doing. So I've been really happy with the performance and with reliability of Eon. >> It's interesting. I've been saying for years that Vertica's a diamond in the rough and it's previous owner didn't know what to do with it because it got distracted and now Micro Focus seems to really see the value and is obviously putting some investments in there. >> Yeah >> Tell me more about your business. Who are you disrupting? Are you kind of disrupting the do-it-yourself? Or is there sort of a big whale out there that you're going to go after? Add some color to that. >> Yeah, so our broader market is monitoring software, that's kind of the high-level category. So you have a lot of people in that market right now. Some of them are entrenched in large players, like Datadog would be a great example. Some of them are smaller upstarts. It's a pretty, it's a pretty saturated market. But what's happened over the last, I'd say two years, is that there's been sort of a push towards what's called observability in terms of at least how some of the products are architected, like Honeycomb, and how some of them are messaged. Most of them are messaged these days. And what that really means is there's been sort of an understanding that's developed that that MTTR is really what people need to focus on to keep their customers happy. If you're a SAS company, MTTR is going to be your bread and butter. And it's still measured in hours and days. And the biggest reason for that is because of what's called unknown unknowns. Because of complexity. Now a days, things are, applications are ten times as complex as they used to be. And what you end up with is a situation where if something is new, if it's a known issue with a known symptom and a known root cause, then you can setup a automation for it. But the ones that really cost a lot of time in terms of service disruption are unknown unknowns. And now you got to go dig into this massive mass of data. So observability is about making tools to help you do that, but it's still going to take you hours. And so our contention is, you need to automate the eyeball. The bottleneck is now the eyeball. And so you have to get away from this notion of a person's going to be able to do it infinitely more efficient and recognize that you need automated help. When you get an alert agent, it shouldn't be that, "Hey, something weird's happening. Now go dig in." It should be, "Here's a root cause and a symptom." And that should be proposed to you by a system that actually does the observing. That actually does the watching. 
And that's what Zebrium does. >> Yeah, that's awesome. I mean, you're right. The last thing you want is just another alert that says, "Go figure something out because there's a problem." So how does it work, Larry? In terms of what you built there. Can you take us inside the covers? >> Yeah, sure. So there's really, right now there's two kinds of data that we're ingesting. There's metrics and there's log files. Metrics, there's actually sort of a framework that's really popular in DevOps circles especially but it's becoming popular everywhere, which is called Prometheus. And it's a way of exporting metrics so that scrapers can collect them. And so if you go look at a typical stack, you'll find that most of the open source components and many of the closed source components are going to have exporters that export all their stats to Prometheus. So by supporting that stack we can bring in all of those metrics. And then there's also the log files. And so you've got host log files in a containerized environment, you've got container logs, and you've got application-specific logs, perhaps living on a host mount. And you want to pull all those back and you want to be able to say this log that I've collected here is associated with the same container on the same host that this metric is associated with. But now what? So once you've got that, you've got a pile of unstructured logs. So what we do is we take a look at those logs and we say, let's structure those into tables, right? So where I used to have a log message, if I look in my log file and I see it says something like, X happened five times, right? Well, that event type's going to occur again and it'll say, X happened six times or X happened three times. So if I see that as a human being, I can say, "Oh clearly, that's the same thing." And what's interesting here is the times that X happened, and what that number read... I may want to know when those numbers happened as a time series, the values of that column. And so you can imagine it as a table. So now I have a table for that event type and every time it happens, I get a row. And then I have a column with that number in it. And so now I can do any kind of analytics I want almost instantly across my... If I have all my event types structured that way, everything changes. You can do real anomaly detection and incident detection on top of that data. So that's really how we go about doing it. How we go about being able to do autonomous monitoring in a way that's effective. >> How do you handle doing that for, like, a bespoke app? Do you have to, does somebody have to build a connector to those apps? How do you handle that? >> Yeah, that's a really good question. So you're right. So if I go and install a typical log manager, there'll be connectors for different apps and usually what that means is pulling in the stuff on the left, if you were to be looking at that log line, and it will be things like a timestamp, or a severity, or a function name, or various other things. And so the connector will know how to pull those apart and then the stuff to the right will be considered the message and that'll get indexed for search. And so our approach is we actually go in with machine learning and we structure that whole thing. So there's a table. And it's going to have a column called severity, and timestamp, and function name. And then it's going to have columns that correspond to the parameters that are in that event. And it'll have a name associated with the constant parts of that event.
And so you end up with a situation where you've structured all of it automatically so we don't need collectors. It'll work just as well on your home-grown app that has no collectors or no parsers to find or anything. It'll work immediately just as well as it would work on anything else. And that's important, because you can't be asking people for connectors to their own applications. It just, it becomes now they've go to stop what they're doing and go write code for you, for your platform and they have to maintain it. It's just untenable. So you can be up and running with our service in three minutes. It'll just be monitoring those for you. >> That's awesome! I mean, that is really a breakthrough innovation. So, nice. Love to see that hittin' the market. Who do you sell to? Both types of companies and what role within the company? >> Well, definitely there's two main sort of pushes that we've seen, or I should say pulls. One is from DevOps folks, SRE folks. So these are people who are tasked with monitoring an environment, basically. And then you've got people who are in engineering and they have a staging environment. And what they actually find valuable is... Because when we find an incident in a staging environment, yeah, half the time it's because they're tearing everything up and it's not release ready, whatever's in stage. That's fine, they know that. But the other half the time it's new bugs, it's issues and they're finding issues. So it's kind of diverged. You have engineering users and they don't have titles like QA, they're Dev engineers or Dev managers that are really interested. And then you've got DevOps and SRE people there (mumbles). >> And how do I consume your product? Is the SAS... I sign up and you say within three minutes I'm up and running. I'm paying by the drink. >> Well, (laughs) right. So there's a couple ways. So, right. So the easiest way is if you use Kubernetes. So Kubernetes is what's called a container orchestrator. So these days, you know Docker and containers and all that, so now there's container orchestrators have become, I wouldn't say ubiquitous but they're very popular now. So it's kind of on that inflection curve. I'm not exactly sure the penetration but I'm going to say 30-40% probably of shops that were interested are using container orchestrators. So if you're using Kubernetes, basically you can install our Kubernetes chart, which basically means copying and pasting a URL and so on into your little admin panel there. And then it'll just start collecting all the logs and metrics and then you just login on the website. And the way you do that is just go to our website and it'll show you how to sign up for the service and you'll get your little API key and link to the chart and you're off and running. You don't have to do anything else. You can add rules, you can add stuff, but you don't have to. You shouldn't have to, right? You should never have to do any more work. >> That's great. So it's a SAS capability and I just pay for... How do you price it? >> Oh, right. So it's priced on volume, data volume. I don't want to go too much into it because I'm not the pricing guy. But what I'll say is that it's, as far as I know it's as cheap or cheaper than any other log manager or metrics product. It's in that same neighborhood as the very low priced ones. Because right now, we're not trying to optimize for take. We're trying to make a healthy margin and get the value of autonomous monitoring out there. Right now, that's our priority. 
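To make the event-typing idea Larry describes concrete, here is a minimal editorial sketch of the general technique: group log lines by their constant text and treat the variable parts as columns, so each event type becomes a small table of occurrences over time. This is not Zebrium's actual code; their system uses machine learning rather than a regex, and every name below is an assumption made up for the example.

```python
import re
from collections import defaultdict

# Made-up sample log lines: (timestamp, message)
LOG_LINES = [
    ("2020-03-31T00:00:01", "cache flush happened 5 times"),
    ("2020-03-31T00:05:02", "cache flush happened 6 times"),
    ("2020-03-31T00:10:03", "cache flush happened 3 times"),
    ("2020-03-31T00:10:04", "connection to 10.0.0.7 dropped"),
]

NUMBER = re.compile(r"\d+(?:\.\d+)+|\d+")

def template_of(message: str) -> str:
    """Replace every numeric token with a placeholder; the result is the event type."""
    return NUMBER.sub("<*>", message)

def params_of(message: str) -> list:
    """The variable (numeric) parts become the row's column values."""
    return NUMBER.findall(message)

# One "table" per event type: event template -> list of (timestamp, parameter values)
tables = defaultdict(list)
for ts, msg in LOG_LINES:
    tables[template_of(msg)].append((ts, params_of(msg)))

for template, rows in tables.items():
    print(template)
    for ts, params in rows:
        print("   ", ts, params)
```

Once each event type's occurrences sit in their own table, the per-event numbers can be read as time series, which is what makes the anomaly and incident detection Larry mentions possible.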
>> And it's running in the cloud, is that right? AWS West-- >> Yeah, that's right. Oh, I should've also pointed out that you can have a free account; if it's less than some number of gigabytes a day we're not going to charge. Yeah, so we run in AWS. We have a multi-tenant instance in AWS. And we have a Vertica Eon cluster behind that. And it's been working out really well. >> And on your freemium, have you used the Vertica Community Edition? Because they don't charge you for that, right? So is that how you do it or... >> No, no. We're, no, no. So, I don't want to go into that because I'm not the bizdev guy. But what I'll say is that if you're doing something that winds up being OEM-ish, you can work out the particulars with Vertica. It's not like you're going to just go pay retail and they won't let you distinguish between test, and prod, and paid, and all that. They'll work with you. Just call 'em up. >> Yeah, and that's why I brought it up because Vertica, they have a community edition, which is not neutered. It runs Eon, it's just there's limits on clusters and storage. >> There's limits. >> But it's still fully functional though. >> So to your point, we want it multi-tenant. So it's big just because it's multi-tenant. We have hundreds of users on that (audio cuts out). >> And then, what's your partnership with Vertica like? Can we close on that and just describe that a little bit? >> What's it like? I mean, it's pleasant. >> Yeah, I mean (mumbles). >> You know what, so the important thing... Here's what's important. What's important is that I don't have to worry about that layer of our stack. When it comes to being able to get the performance I need, being able to get the economy of scale that I need, being able to get the absolute scale that I need, I've not been disappointed ever with Vertica. And frankly, being able to have ACID guarantees and everything else, like a normal mature database that can join lots of tables and still be fast, that's also necessary at scale. And so I feel like it was definitely the right choice to start with. >> Yeah, it's interesting. I remember in the early days of big data a lot of people said, "Who's going to need these ACID properties and all this complexity of databases?" And of course, ACID properties and SQL became the killer features and functions of these databases. >> Who didn't see that one coming, right? >> Yeah, right. And then, so you guys have done a big seed round. You've raised a little over $6 million and you got the product market fit down. You're ready to rock, right? >> Yeah, that's right. So we're doing a launch probably, well, when this airs it'll probably be the day before this airs. Basically, yeah. We've got people... Like literally in the last, I'd say, six to eight weeks, it's just been this sort of peak of interest. All of a sudden, everyone kind of gets what we're doing, realizes they need it, and we've got a solution that seems to meet expectations. So it's like... It's been an amazing... Let me just say this, it's been an amazing start to the year. I mean, at the same time, it's been really difficult for us but more difficult for some other people that haven't been able to go to work over the last couple of weeks and so on. But it's been a good start to the year, at least for our business. So...
>> All right, and thank you everybody for watching. This is Dave Vellante for theCUBE. Keep it right there. We're covering wall-to-wall Virtual Vertica BDC. You're watching theCUBE. (upbeat music)
Ron Cormier, The Trade Desk | Virtual Vertica BDC 2020
>> David: It's theCUBE covering the Virtual Vertica Big Data Conference 2020 brought to you by Vertica. Hello, everybody, welcome to this special digital presentation of theCUBE. We're tracking the Vertica Virtual Big Data Conference, theCUBE's, I think, fifth year doing the BDC. We've been to every big data conference that they've held and really excited to be helping with the digital component here in these interesting times. Ron Cormier is here, Principal database engineer at the Trade Desk. Ron, great to see you. Thanks for coming on. >> Hi, David, my pleasure, good to see you as well. >> So we were talking a little bit about your background; you're basically a Vertica and database guru, but tell us about your role at Trade Desk and then I want to get into a little bit about what Trade Desk does. >> Sure, so I'm a principal database engineer at the Trade Desk. The Trade Desk was one of my customers when I was working at HP, as a member of the Vertica team, and I joined the Trade Desk in early 2016. And since then, I've been working on building out their Vertica capabilities and expanding the data warehouse footprint in an ever-growing database technology, data volume environment. >> And the Trade Desk is an ad tech firm and you are specializing in real time ad serving and pricing. And I guess real time, you know, people talk about real time a lot; we define real time as before you lose the customer. Maybe you can talk a little bit about, you know, the Trade Desk and the business and maybe how you define real time. >> Totally, so to give everybody kind of a frame of reference: anytime you pull up your phone or your laptop and you go to a website or you use some app and you see an ad, what's happening behind the scenes is an auction is taking place. And people are bidding on the privilege to show you an ad. And across the open Internet, this happens seven to 13 million times per second. And so the ads, the whole auction dynamic and the display of the ad needs to happen really fast. So that's about as real time as it gets outside of high frequency trading, as far as I'm aware. So the Trade Desk participates in those auctions, we bid on behalf of our customers, which are ad agencies, and the agencies represent brands, so the agencies are the "Mad Men" companies of the world and they have brands under their guidance, and so they give us budget to spend, to place the ads and to display them. And once the ads get displayed... So, we bid on hundreds of thousands of auctions per second. Once we make those bids, anytime we do make a bid some data flows into our data platform, which is powered by Vertica. And, so we're getting hundreds of thousands of events per second. We have other events that flow into Vertica as well. And we clean them up, we aggregate them, and then we run reports on the data. And we run about 40,000 reports per day on behalf of our customers. The reports aren't as real time as I was talking about earlier, they're more batch oriented. Our customers like to see big chunks of time, like a whole day or a whole week or a whole month on a single report. So we wait for that time period to complete and then we run the reports on the results. >> So you have one of the largest commercial infrastructures in the Big Data sphere. Paint a picture for us. I understand you got a couple of, like, 320-node clusters; we're talking about petabytes of data. But describe what your environment looks like. >> Sure, so like I said, we've been very good customers for a while.
And we started out with a bunch of enterprise clusters. So the Enterprise Mode is the traditional Vertica deployment where the compute and the storage are tightly coupled, all RAID arrays on the servers. And we had four of those and we were doing okay, but our volumes are ever increasing, we wanted to store more data, and we wanted to run more reports in a shorter period of time, so we wanted to keep pushing. And so we had these four clusters and then we started talking with Vertica about Eon mode, and that's Vertica's separation of compute and storage, where the compute and the storage can be scaled independently; we can add storage without adding compute or vice versa, or we can add both. So that was something that we were very interested in for a couple reasons. One, our enterprise clusters, we were running out of disk, and adding disk is expensive. In Enterprise Mode, it's kind of a pain, you've got to add compute at the same time, so you kind of end up in an unbalanced place. So in Eon mode that problem gets a lot better. We can add disk, infinite disk, because it's backed by S3. And we can add compute really easily to scale the number of things that we run in parallel, the concurrency; just add a sub cluster. So there are two, US East and US West of Amazon, so reasonably diverse. And the real benefit is that we can stop nodes when we don't need them. Our workload is fairly lumpy, I call it. Like, after the day completes, we do the ingest, we do the aggregation; we're ingesting and aggregating all day, but the final hour or so of it needs to be completed. And then once that's done, then the number of reports that we need to run spikes up, it goes really high. And we run those reports, we spin up a bunch of extra compute on the fly, run those reports and then spin them down. And we don't have to pay for that for the rest of the day. So Eon has been a nice boon for us for both those reasons. >> I'd love to explore Eon a little bit more. I mean, it's relatively new, I think 2018 Vertica announced Eon mode, so it's only been out there a couple years. So I'm curious, for the folks that haven't moved to Eon mode, which presumably they want to, for the same reasons that you mentioned (why buy the storage in chunks when you don't have to), what were some of the challenges that you faced in going to Eon mode? What kind of things did you have to prepare for? Were there any out-of-scope expectations? Can you share that experience with us? >> Sure, so we were an early adopter. We participated in the beta program. I mean, I think it's fair to say we actually drove the requirements in a lot of ways because we approached Vertica early on. So the challenges were what you'd expect any early adopter to be going through, the sort of getting things working as expected. I mean, there's a number of cases which I could touch upon. Like, we found an inefficiency in the way that it accesses the data on S3; it was accessing the data too frequently, which ended up just being expensive. So our S3 bill went up pretty significantly for a couple of months. So that was a challenge, but we worked through that. Another, where we recently made huge strides with Vertica, was the ability to stop and start nodes, have them start very quickly, and, when they start, not interfere with any running queries. So when we want to spin up a bunch of compute, there was a point in time when it would break certain queries that were already running.
So that was a challenge. But again, the Vertica team has been quite responsive to solving these issues and now that's behind us. In terms of those who need to get started, or are looking to get started, there's a number of things to think about. Off the top of my head there's sort of new configuration items that you'll want to think about, like the instance type. So certainly Amazon has a variety of instances, and it's important to consider them. One of Vertica's architectural advantages in this area is that Vertica has this caching layer on the instances themselves. And what that does is, if we can keep the data in cache, what we've found is that the performance is basically the same as the performance of Enterprise Mode. So having a good-sized cache when you need it matters. So we went with the i3 instance types, which have a lot of local NVMe storage, so we can cache data and get good performance. That's one thing to think about. The number of nodes, the instance type, certainly the number of shards is a sort of technical item that needs to be considered. It's how the data gets distributed. It's sort of a layer on top of the segmentation that some Vertica engineers will be familiar with. And probably, I mean, one of the big things that one needs to consider is how to get data into the database. So if you have an existing database, there's no sort of nice tool yet to suck all the data into an Eon database, and I think they're working on that. But by the time we got there, we had to export all our data out of the enterprise cluster, dump it out to S3, and then have the Eon cluster suck that data in. >> So awesome advice. Thank you for sharing that with the community. So but at the end of the day, it sounds like you had some learning to do, some tweaking to do, and obviously how to get the data in. At the end of the day, was it worth it? What was the business impact? >> Yeah, it definitely was worth it for us. I mean, so right now, we have four times the data in our Eon cluster that we have in our enterprise clusters. We still run some enterprise clusters. We started with four at the peak. Now we're down to two, and we have the two Eon clusters. So it's been, I think our business would say it's been a huge win, like we're doing things that we really never could have done before; like, accessing that data on enterprise would have been really difficult. It would have required non-trivial engineering to do things like daisy chaining clusters together, and then how to aggregate data across clusters, which would, again, be non-trivial. So we have all the data we want, we can continue to grow the data, we're running reports on seasonality. So our customers can compare their campaigns last year versus this year, which is something we just haven't been able to do in the past. We've expanded that. So we grew the data vertically, we've expanded the data horizontally as well. So we were adding columns to our aggregates. We are enriching the data much more than we have in the past. So while we still have enterprise kicking around, I'd say our Eon clusters are doing the majority of the heavy lifting. >> And the cloud was part of the enablement here, particularly with scale, is that right? And are you running certain... >> Definitely. >> And you are running on-prem as well, or are you in a hybrid mode? Or is it all AWS? >> Great question, so yeah. When I've been speaking about enterprise, I've been referring to on-prem.
So we have physical machines in data centers. So yeah, we are running a hybrid now, and so it's really hard to get like an apples to apples direct comparison of enterprise on-prem versus Eon in the cloud. One thing that I touched upon in my presentation is, if I try to get apples to apples, and I think about how I would run the entire workload on enterprise or on Eon, if I had to run the entire thing on one or the other, I tried to think about how many CPU cores we would need to do that. And basically, it would be about the same number of cores, I think, for enterprise on-prem versus Eon in the cloud. However, on Eon, half the cores only need to be running about six hours out of the day. So the other 18 hours I can shut them down and not be paying for them, mostly. >> Interesting, okay, and so, I got to ask you, I mean, notwithstanding the fact that you've got a lot invested in Vertica, and you've got a lot of experience there, there are a lot of, you know, emerging cloud databases. Did you look around? I mean, you know a lot about databases, not just Vertica; you're a database guru in many areas, you know, traditional RDBMS, as well as MPP and new cloud databases. What is it about Vertica that works for you in this specific sweet spot that you've chosen? What's really the difference there? >> Yeah, so I think the key difference is the maturity. There are a number... I am familiar with a number of other database platforms in the cloud and otherwise, column stores specifically, that don't have the maturity that we're used to and we need at our scale. So being able to specify alternate projections, so different sort orders on my data, is huge. And there's other platforms where we don't have that capability. And so Vertica is, of course, the original column store and they've had time to build up a lead in terms of their maturity and features, and I think that other column stores, cloud or otherwise, are playing a little bit of catch-up in that regard. Of course, Vertica is playing catch-up on the cloud side. But if I had to pick whether I wanted to write a column store from scratch, or a distributed file system, like a cloud file system, from scratch, I'd probably think it would be easier to write the cloud file system. The column store is where the real smarts are. >> Interesting, let's talk a little bit about some of the challenges you have in reporting. You have a very dynamic nature of reporting; like I said, your clients want a time series, they just don't want a snapshot of a slice. But at the same time, your reporting is probably pretty lumpy, a very dynamic, you know, demand curve. So first of all, is that accurate? Can you describe that sort of dynamism and how are you handling that? >> Yep, that's exactly right. It is lumpy. And that's the exact word that I use. So like, at the end of the UTC day, when UTC midnight rolls around, that's when we do the final ingest, the final aggregate, and then the queue for the number of reports that need to run spikes. So the majority of those 40,000 reports that we run per day are run in the four to six hours after that, when it spikes up. And so that's when we need to have all the compute come online. And that's what helps us answer all those queries as fast as possible. And that's a big reason why Eon is an advantage for us, because the rest of the day we kind of don't necessarily need all that compute and we can shut it down and not pay for it.
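Ron's cores comparison is easy to make concrete. The numbers below are purely illustrative (the core count and the hourly price are made-up assumptions, not Trade Desk figures), but they show why needing roughly the same number of cores, yet running part of them for only about six hours a day, changes the bill.

```python
# Illustrative arithmetic only; the core count and $/core-hour are invented.
cores = 320                 # hypothetical total CPU cores needed either way
core_hour_price = 0.05      # hypothetical cloud price per core-hour

always_on_hours = 24        # enterprise-style: everything runs all day
burst_hours = 6             # Eon-style: half the cores run ~6 hours/day

enterprise_like = cores * always_on_hours * core_hour_price
eon_like = (cores / 2) * always_on_hours * core_hour_price \
         + (cores / 2) * burst_hours * core_hour_price

print(f"always-on cost/day: ${enterprise_like:,.2f}")
print(f"eon-style cost/day: ${eon_like:,.2f}")
print(f"savings:            {1 - eon_like / enterprise_like:.0%}")
```

With these made-up numbers the Eon-style bill comes out about 38% lower than the always-on one; the real savings depend entirely on the actual workload shape and instance pricing.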
>> So Ron, I wonder if you could share with us just sort of the wrap here, where you want to take this. You're obviously very close to Vertica. Are you driving them hard on Eon mode? You mentioned before that the ability to load data into Eon mode would have been nice for you, but I guess you're kind of over that hump. But what are the kinds of things, if Colin Mahony is here in the room, what are you telling him that you want the team, the engineering team at Vertica, to work on that would make your life better? >> I think the things that need the most attention sort of near term are just smoothing out some of the edges, in terms of making it a little bit more seamless in terms of the cloud aspects to it. So our goal is to be able to start instances and have them join the cluster in less than five minutes. We're not quite there yet. If you look at some of the other cloud database platforms, they're beating that handily, so I know the team is working on that. Some of the other things are the control. Like I mentioned, while we like control in the column store, we also want control on the cloud side of things, in terms of being able to dedicate some clusters to specific workloads. We can pin workloads against a specific sub cluster and take advantage of the cache that's over there. We can say, okay, this resource pool. I mean, the sub cluster is a new concept, relatively new concept for Vertica. So being able to have control of many things at the sub cluster level: resource pools, configuration parameters, and so on. >> Yeah, so I mean, I personally have always been impressed with Vertica. And their ability to sort of ride the wave and adopt new trends. I mean, they do have a robust stack. It's been around, you know, 10-plus years. They certainly embraced Hadoop, they're embracing machine learning, we've been talking about the cloud. So I actually have a lot of confidence in them, especially when you compare it to other sort of mid-last-decade MPP column stores that came out; you know, Vertica is one of the few remaining, certainly as an independent brand. So I think that speaks to the team there and the engineering culture. But I'll give you the final word. Just final thoughts on your role, the company, Vertica, wherever you want to take it. >> Yeah, no, I mean, we're really appreciative and we value the partners that we have, and so I think it's been a win-win, like our volumes are... like, I know that we have some data that got pulled into their test suite. So I think it's been a win-win for both sides and it'll be a win for other Vertica customers and prospects, knowing that they're working with some of the highest volume, velocity, variety data that (mumbles) >> Well, Ron, thanks for coming on. I wish we could have met face to face at the Encore in Boston. I think next year we'll be able to do that. But I appreciate that technology allows us to have these remote conversations. Stay safe, all the best to you and your family. And thanks again. >> My pleasure, David, good speaking with you. >> And thank you for watching everybody, this is theCUBE's coverage of the Vertica Virtual Big Data Conference. I'm Dave Vellante. We'll be right back right after this short break. (soft music)
Joe Gonzalez, MassMutual | Virtual Vertica BDC 2020
(bright music) >> Announcer: It's theCUBE. Covering the Virtual Vertica Big Data Conference 2020, brought to you by Vertica. Hello everybody, welcome back to theCUBE's coverage of the Vertica Big Data Conference, the Virtual BDC. My name is Dave Volante, and you're watching theCUBE. And we're here with Joe Gonzalez, who is a Vertica DBA, at MassMutual Financial. Joe, thanks so much for coming on theCUBE I'm sorry that we can't be face to face in Boston, but at least we're being responsible. So thank you for coming on. >> (laughs) Thank you for having me. It's nice to be here. >> Yeah, so let's set it up. We'll talk about, you know, a little bit about MassMutual. Everybody knows it's a big financial firm, but what's your role there and kind of your mission? >> So my role is Vertica DBA. I was hired January of last year to come on and manage their Vertica cluster. They've been on Vertica for probably about a year and a half before that started out on on-prem cluster and then move to AWS Enterprise in the cloud, and brought me on just as they were considering transitioning over to Vertica's EON mode. And they didn't really have anybody dedicated to Vertica, nobody who really knew and understood the product. And I've been working with Vertica for about probably six, seven years, at that point. I was looking for something new and landed a really good opportunity here with a great company. >> Yeah, you have a lot of experience in Vertica. You had a role as a market research, so you're a data guy, right? I mean that's really what you've been doing your entire career. >> I am, I've worked with Pitney Bowes, in the postage industry, I worked with healthcare auditing, after seven years in market research. And then I've been with MassMutual for a little over a year now, yeah, quite a lot. >> So tell us a little bit about kind of what your objectives are at MassMutual, what you're kind of doing with the platform, what application just supporting, paint a picture for us if you would. >> Certainly, so my role is, MassMutual just decided to make Vertica its enterprise data warehouse. So they've really bought into Vertica. And we're moving all of our data there probably about to good 80, 90% of MassMutual's data is going to be on the Vertica platform, in EON mode. So, and we have a wide usage of that data across corporation. Right now we're about 50 terabytes and growing quickly. And a wide variety of users. So there's a lot of ETLs coming in overnight, loading a lot of data, transforming a lot of data. And a lot of reporting tools are using it. So currently, Tableau MicroStrategy. We have Alteryx using it, and we also have API's running against it throughout the day, 24/7 with people coming in, especially now these days with the, you know, some financial uncertainty going on. A lot of people coming and checking their 401k's, checking their insurance and status and what not. So we have to handle a lot of concurrent traffic on top of the normal big query. So it's a quite diverse cluster. And I'm glad they're really investing in using Vertica as their overall solution for this. >> Yeah, I mean, these days your 401k like this, right? (laughing) Afraid to look. So I wonder, Joe if you could share with our audience. 
I mean, for those who might not be as familiar with the history of just Vertica, and specifically, about MPP, you've had historically you have, you know, traditional RDBMS, whether it's Db2 or Oracle, and then you had a spate of companies that came out with this notion of MPP Vertica is the one that, I think it's probably one of the few if only brands that they've survived, but what did that bring to the industry and why is that important for people to understand, just in terms of whatever it is, scale, performance, cost. Can you explain that? >> To me, it actually brought scale at good cost. And that's why I've been a big proponent of Vertica ever since I started using it. There's a number, like you said of different platforms where you can load big data and store and house big data. But the purpose of having that big data is not just for it to sit there, but to be used, and used in a variety of ways. And that's from, you know, something small, like the first installation I was on was about 10 terabytes. And, you know, I work with the data warehouses up to 100 terabytes, and, you know, there's Vertica installations with, you know, hundreds of petabytes on them. You want to be able to use that data, so you need a platform that's going to be able to access that data and get it to the clients, get it to the customers as quickly as possible, and not paying an arm and a leg for the privilege to do so. And Vertica allows companies to do that, not only get their data to clients and you know, in company users quickly, but save money while doing so. >> So, but so, why couldn't I just use a traditional RDBMS? Why not just throw it all into Oracle? >> One, cost, Oracle is very expensive while Vertica's a lot more affordable than that. But the column-score structure of Vertica allows for a lot more optimized queries. Some of the queries that you can run in Vertica in 2, 3, 4 seconds, will take minutes and sometimes hours in an RDBMS, like Oracle, like SQL Server. They have the capability to store that amount of data, no question, but the usability really lacks when you start querying tables that are 180 billion column, 180 billion rows rather of tables in Vertica that are over 1000 columns. Those will take hours to run on a traditional RDBMS and then running them in Vertica, I get my queries back in a sec. >> You know what's interesting to me, Joe and I wonder if you could comment, it seems that Vertica has done a good job of embracing, you know, riding the waves, whether it was HDFS and the big data in our early part of the big data era, the machine learning, machine intelligence. Whether it's, you know, TensorFlow and other data science tools, it seems like Vertica somehow in the cloud is the other one, right? A lot of times cloud is super disruptive, particularly to companies that started on-prem, it seems like Vertica somehow has been able to adopt and embrace some of these trends. Why, from your standpoint, first of all, from your standpoint, as a customer, is that true? And why do you think that is? Is it architectural? Is it true mindset engineering? I wonder if you could comment on that. >> It's absolutely true, I've started out again, on an on-prem Vertica data warehouse, and we kind of, you know, rolled kind of along with them, you know, more and more people have been using data, they want to make it accessible to people on the web now. 
And you know, having that, the option to provide that data from an on-prem solution, from AWS is key, and now Vertica is offering even a hybrid solution, if you want to keep some of your data behind a firewall, on-prem, and put some in the cloud as well. So data at Vertica has absolutely evolved along with the industry in ways that no other company really has that I've seen. And I think the reason for it and the reason I've stayed with Vertica, and specifically have remained at Vertica DBA for the last seven years, is because of the way Vertica stays in touch with it's persons. I've been working with the same people for the seven, eight years, I've been using Vertica, they're family. I'm part of their family, and you know, I'm good friends with some of these people. And they really are in tune not only with the customer but what they're doing. They really sit down with you and have those conversations about, you know, what are your needs? How can we make Vertica better? And they listen to their clients. You know, just having access to the data engineers who develop Vertica to be arranged on a phone call or whatnot, I've never had that with any other company. Vertica makes that available to their customers when they need it. So the personal touch is a huge for them. >> That's good, it's always good to get the confirmation from the practitioners, just not hear from the vendor. I want to ask you about the EON transition. You mentioned that MassMutual brought you in to help with that. What were some of the challenges that you faced? And how did you get over them? And what did, what is, why EON? You know, what was the goal, the outcome and some of the challenges maybe that you had to overcome? >> Right. So MassMutual had an interesting setup when I first came in. They had three different Vertica clusters to accommodate three different portions of their business. The data scientists who use the data quite extensively in very large queries, very intense queries, their work with their predictive analytics and whatnot. It was a separate one for the API's, which needed, you know, sub-second query response times. And the enterprise solution, they weren't always able to get the performance they needed, because the fast queries were being overrun by the larger queries that needed more resources. And then they had a third for starting to develop this enterprise data platform and started, you know, looking into their future. The first challenge was, first of all, bringing all those three together, and back into a single cluster, and allowing our users to have both of the heavy queries and the API queries running at the same time, on the same platform without having to completely separate them out onto different clusters. EON really helps with that because it allows to store that data in the S3 communal storage, have the main cluster set up to run the heavy queries. And then you can set up sub clusters that still point to that S3 data, but separates out the compute so that the API's really have their own resources to run and not be interfered with by the other process. >> Okay, so that, I'm hearing a couple of things. One is you're sort of busting down data silos. So you're able to have a much more coherent view of your data, which I would imagine is critical, certainly. Companies like MassMutual, have been around for 100 years, and so you've got all kinds of data dispersed. 
So to the extent that you can break down those silos, that's important, but also being able to, I guess, have granular increments of compute and storage is what I'm hearing. What does that do for you? Does it make things more efficient? Are there other business benefits? Maybe you could elucidate. >> Well, one, cost is again a huge benefit. The cost of running three different clusters, even in AWS, in the enterprise solution was a little costly, you know, you had to have your dedicated servers here and there. So you're paying for like, you know, 12, 15 different servers, for example. Whereas when we bring them all back into EON, I can run everything on a six-node production cluster. And you know, when things are busy, I can spin up the three-node sub cluster for the APIs, only pay for it when I need it, and then bring them back into the main cluster when things have slowed down a bit, and they can get that performance that they need. So that saves a ton on resource costs; you know, you're not paying for the storage, you're paying for one S3 bucket, you're only paying for the nodes, the EC2 instances, that are up and running when you need them, and that is huge. And again, like you said, it gives us the ability to silo our data without having to completely separate our data into different storage areas. Which is a big benefit, it gives us the ability to query everything from one single cluster without having to synchronize it to, you know, three different ones. So this one's going to have theirs, this one's going to have theirs, but everyone's still looking at the same data. And we replicate that in QA and Dev so that people can do it outside of production and do some testing as well. >> So EON, obviously a very important innovation. And of course, Vertica touts the separation of compute and storage, and you know, they're not the only one that does that, but they are really, I think, the only one that does it for on-prem and virtually across clouds. So my question is, and I think you're doing a breakout session at the Virtual BDC; we were going to be in Boston, now we're doing it online. If I'm in the audience, I'm imagining I'm a junior DBA at an organization that maybe doesn't have a Joe. I haven't been an expert for seven years. How hard is it for me, what do I need to do to get up to speed on EON? It sounds great, I want it. I'm going to save my company money, but I'm nervous 'cause I've only been a Vertica DBA for, you know, a year, and I'm sort of, you know, not as experienced as you. What are the things that I should be thinking about? Do I need to bring in, do I need to hire somebody? Do I need to bring in a consultant? Can I learn it myself? What would you advise? >> It's definitely easy enough that if you have at least a little bit of Vertica experience, you can learn it yourself, okay? 'Cause the concepts are still there. There's some, you know, little bits of nuance where you do need to be aware of certain changes between the Enterprise and EON editions. But I would also say consult with your Vertica Account Manager, consult with your, you know... let them bring in the right people from Vertica to help you get up to speed, and if you need to, there are also resources available as far as consultants go that will help you get up to speed very quickly. And we did work together with Vertica and with one of their partners, Clarity, in helping us to understand EON better, set it up the right way; you know, how do we pick the number of shards for our data warehouse?
You know, they helped us evaluate all that and pick the right number of shards, the right number of nodes to get set up and going. And, you know, they helped us figure out the best ways to get our data over from the Enterprise Edition into EON very quickly and very efficiently. So don't try to do it all by yourself. >> I wanted to ask you about organizational, you know, issues because, you know, the practitioners like you always tell me, "Look, the tech, technology comes and goes, that's kind of the easy part, we're good at that. It's the people, it's the processes, the skill sets." What does your, you know, team regime look like? And do you have any sort of ideal team makeup or, you know, ideal advice, is it two-pizza teams? What kind of skills? What kind of interaction and communications with senior leadership? I wonder if you could just give us some color on that. >> One of the things that makes me extremely proud to be working for MassMutual right now is that they do what a lot of companies have not been doing, and that is investing in IT. They have put a lot of thought, a lot of money, and a lot of support into setting up their enterprise data platform and putting Vertica at the center. And not only did they put the money into getting the software that they needed, like Vertica, you know, MicroStrategy, and all the other tools that we use with it, they put the money into the people. Our managers are extremely supportive of us. We hired about 40 to 45 different people within a four-month time frame: data engineers, data analysts, data modelers, a nice mix of people across the board who can help shape your data and bring the data in and help the users use the data properly, and allow me as the database administrator to make sure that they're doing what they're doing most efficiently and focus on my job. So you have to have that diversity among the different data skills in order to make your team successful. >> That's awesome. Kind of a side question, and it's really not Vertica's wheelhouse, but I'm curious, you know, in the early days of the big data, you know, movement, a lot of the data scientists would complain, and they still do, that "80% of my time is spent wrangling data." The tools for the data engineer, the data scientists, the database, you know, experts, they're all different. And is that changing? And to what degree is that changing? Kind of what inning are we in, just in terms of a more facile environment for all those roles? >> Again, I think it depends, company to company, you know, what resources they make available to the data scientists. And the data scientists, we have a lot of them at MassMutual. And they're very much into doing a lot of machine learning, model training, predictive analytics. And they are, you know, used to doing it outside of Vertica too, you know, pulling that data out into Python and Scala, Spark, and tools like that. And they're also now just getting into using Vertica's in-database analytics and machine learning, which is a skill that, you know, definitely nobody else out there has. So being able to have somebody who understands Vertica, like myself, and being able to train other people to use Vertica the way that is most efficient for them is key. But also just having people who understand not only the tools that you're using, but how to model data, how to architect your tables, your schemas, the interaction between your tables and schemas and whatnot; you need to have that diversity in order to make this work.
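Joe describes the data scientists moving from pulling data out into external tools toward Vertica's in-database machine learning. As a rough, editorial illustration of what that shift can look like from Python, here is a hedged sketch that trains and scores a regression model without the data leaving the database. The connection details, table, and column names are made up, and the ML function names are as documented in recent Vertica releases; verify them against the version you actually run.

```python
import vertica_python

# Hypothetical connection details; adjust for your environment.
conn_info = {
    "host": "vertica.example.internal",
    "port": 5433,
    "user": "dbadmin",
    "password": "********",
    "database": "edw",
}

with vertica_python.connect(**conn_info) as conn:
    cur = conn.cursor()

    # Train a linear regression inside Vertica; 'policy_facts' and its
    # columns are invented names standing in for a real training table.
    cur.execute("""
        SELECT LINEAR_REG('premium_model', 'policy_facts',
                          'annual_premium', 'age, coverage_amount')
    """)

    # Score new rows in place: the data never leaves the database.
    cur.execute("""
        SELECT policy_id,
               PREDICT_LINEAR_REG(age, coverage_amount
                                  USING PARAMETERS model_name='premium_model')
        FROM new_policies
        LIMIT 10
    """)
    for row in cur.fetchall():
        print(row)
```

The design point is the same one Joe makes: the heavy lifting stays next to the data, and the Python side only orchestrates the SQL and reads back results.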
And our data scientists have benefited immensely from the structure that MassMutual put in place with our data management delivery team. >> That's great. I think I saw, somewhere in your background, that you've trained about 100 people in Vertica. Did I get that right? >> Yes, since I started here, I've gone to our Boston location, our Springfield location, and our New York City location and trained, probably at this point, about 120, 140 of our Vertica users. And I'm trying to do, you know, a couple of follow-up sessions per year. >> So adoption, obviously, is a big goal of yours. Getting people to adopt the platform, but then more importantly, I guess, deliver business value and outcomes. >> Absolutely. >> Yeah, I wanted to ask you about encryption. You know, in the perfect world, everything would be encrypted, but there are trade-offs. Are you using encryption? What are you doing in that regard? >> We are actually just getting into that now due to the New York and the CCPA regulations that are now in place. We do have a lot of Personally Identifiable Information in our data store that does require encryption. So we are going through a months-long process that started in December, I think, actually a bit earlier than that, to start identifying all the columns, not only in our Vertica database, but in, you know, the other databases that we do use; you know, we have a Postgres database, SQL Server, Teradata for the time being, until that moves into Vertica. And identify where that data sits, what downstream applications pull that data from the data sources and store it locally as well, and start encrypting that data. And because of the tight relationship between Voltage and Vertica, we settled on Voltage as the major platform to start doing that encryption. So we're going to be implementing that in Vertica probably within the next month or two, and roll it out to all the teams that have data that requires encryption. We're going to start rolling it out to the downstream application owners to make sure that they are encrypting the data as they get it pulled over. And we're also using another product for several other applications that don't mesh as well with both. >> Voltage being Micro Focus's encryption solution, correct? >> Right, yes. >> Yes, of course. Micro Focus, for the audience, owns Vertica, and Vertica is a separate brand. So I want to ask you, kind of close on what success looks like. You've been at this for a number of years, coming into MassMutual, which was great to hear. I've had some past experience with MassMutual, it's an awesome company, I've been to the Springfield facility and in Boston as well, and I have great respect for them, and they've really always been a leader. So it's great to hear that they're investing in technology as a differentiator. What does success look like for you? Let's say you're at MassMutual for a few years, you're looking back, what does success look like? Go. >> A good question. It's changing every day just, you know, with more and more, you know, applications coming onboard, more and more data being pulled in, more uses being found for the data that we have. I think success for me is making sure that Vertica, first of all, is always up and is always running at its most optimal to keep our users happy.
I think when I started, you know, we had a lot of processes that were running, you know, six, seven hours, some of them were taking, you know, almost a day long, because they were so complicated. We've got those running in under an hour now, some of them running in a matter of minutes. I want to keep that optimization going for all of our processes. Like I said, there's a lot of users using this data, and it's been hard over the first year of me being here to get to all of them. And thankfully, you know, I'm getting a bit of help now, I have a couple of system DBAs that I'm training up to help out with these optimizations, you know, fixing queries, fixing projections to make sure that queries do run as quickly as possible. So getting that to its optimal stage is one. Two, getting our data encrypted and protected so that even if, for whatever reason, somehow somebody breaks into our data, they're not going to be able to get anything at all, because our data is 100% protected. And I think more companies need to be focusing on that as well. And third, I want to see our data science teams using more and more of Vertica's in-database predictive analytics and in-database machine learning products, and really helping make their jobs more efficient by doing so. >> Joe, you're an awesome guest. I mean, we always, like I said, love having the practitioners on and getting the straight skinny from the pros. You're welcome back anytime, and as I say, I wish we could have met in Boston, maybe next year at the BDC. But it's great to have you online, and thanks for coming on theCUBE. >> And thank you for having me, and hopefully we'll meet next year. >> Yeah, I hope so. And thank you everybody for watching. Remember theCUBE is running concurrently with the Vertica Virtual BDC, it's vertica.com/bdc2020 if you want to check out all the keynotes and all the breakout sessions. I'm Dave Volante for theCUBE. We'll be right back with more interviews, so keep it right there. Thanks for watching. (bright music)
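As background to the column-identification effort Joe describes in this segment, finding every column that holds personally identifiable information before rolling out Voltage encryption, a first pass in Vertica can come straight from the system catalog. The sketch below is illustrative only; the name patterns are placeholders for a real data-dictionary review, and nothing here reflects MassMutual's actual process.

```sql
-- Rough sketch: surface columns whose names suggest PII so they can be
-- reviewed for encryption. A real inventory would also cover the Postgres,
-- SQL Server, and Teradata sources, and would not rely on names alone.
SELECT table_schema,
       table_name,
       column_name,
       data_type
FROM v_catalog.columns
WHERE REGEXP_LIKE(LOWER(column_name), 'ssn|birth|addr|phone|email|name')
ORDER BY table_schema, table_name, column_name;
```

The output becomes the worklist that the encryption rollout, in Vertica via Voltage and in the downstream applications, is tracked against.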
Keynote Analysis | Virtual Vertica BDC 2020
(upbeat music) >> Narrator: It's theCUBE, covering the Virtual Vertica Big Data Conference 2020. Brought to you by Vertica. >> Dave Vellante: Hello everyone, and welcome to theCUBE's exclusive coverage of the Vertica Virtual Big Data Conference. You're watching theCUBE, the leader in digital event tech coverage. And we're broadcasting remotely from our studios in Palo Alto and Boston. And, we're pleased to be covering wall-to-wall this digital event. Now, as you know, originally BDC was scheduled this week at the new Encore Hotel and Casino in Boston. Their theme was "Win big with big data". Oh sorry, "Win big with data". That's right, got it. And, I know the community was really looking forward to that, you know, meet up. But look, we're making the best of it, given these uncertain times. We wish you and your families good health and safety. And this is the way that we're going to broadcast for the next several months. Now, we want to unpack Colin Mahony's keynote, but, before we do that, I want to give a little context on the market. First, theCUBE has covered every BDC since its inception, since the BDC's inception that is. It's a very intimate event, with a heavy emphasis on user content. Now, historically, the data engineers and DBAs in the Vertica community, they comprised the majority of the content at this event. And, that's going to be the same for this virtual, or digital, production. Now, theCUBE is going to be broadcasting for two days. What we're doing, is we're going to be concurrent with the Virtual BDC. We've got practitioners that are coming on the show, DBAs, data engineers, database gurus, we've got security experts coming on, and really a great line up. And, of course, we'll also be hearing from Vertica execs, Colin Mahony himself right off the keynote, folks from product marketing, partners, and a number of experts, including some from Micro Focus, which is, of course, the owner of Vertica. But I want to take a moment to share a little bit about the history of Vertica. The company, as you know, was founded by Michael Stonebraker. And, Vertica started, really they started out as a SQL platform for analytics. It was the first, or at least one of the first, to really nail the MPP column store trend. Not only did Vertica have an early mover advantage in MPP, but the efficiency and scale of its software, relative to traditional DBMS, and also other MPP players, is underscored by the fact that Vertica, and the Vertica brand, really thrives to this day. But, I have to tell you, it wasn't without some pain. And, I'll talk a little bit about that, and really talk about how we got here today. So first, you know, you think about traditional transaction databases, like Oracle or IBM DB2, or even enterprise data warehouse platforms like Teradata. They were simply not purpose-built for big data. Vertica was. Along with a whole bunch of other players, like Netezza, which was bought by IBM, Aster Data, which is now Teradata, Actian, ParAccel, which was the basis for Redshift, Amazon's Redshift, Greenplum was bought, in the early days, by EMC. And, these companies were really designed to run as massively parallel systems that smoked traditional RDBMS and EDW for particular analytic applications. You know, back in the big data days, I often joked that, like an NFL draft, there was a run on MPP players, like when you see a run on pulling guards. You know, once one goes, they all start to fall. And that's what you saw with the MPP columnar stores, IBM, EMC, and then HP getting into the game.
So, it was like 2011, and Leo Apotheker, he was the new CEO of HP. Frankly, he had no clue, in my opinion, what to do with Vertica, and totally missed one of the biggest trends of the last decade, the data trend, the big data trend. HP picked up Vertica for a song, it wasn't disclosed, but my guess is that it was around 200 million. So, rather than build a bunch of smart tokens around Vertica, which I always call the diamond in the rough, Apotheker basically permanently altered HP for years. He kind of ruined HP, in my view, with a 12 billion dollar purchase of Autonomy, which turned out to be one of the biggest disasters in recent M&A history. HP was forced to spin-merge, and ended up selling most of its software to Microsoft... sorry, Micro Focus. (laughs) Luckily, during its time at HP, CEO Meg Whitman was largely distracted with what to do with the mess that she inherited from Apotheker. So, Vertica was left alone. Now, the upshot is Colin Mahony, who was then the GM of Vertica, and still is. By the way, he's really the CEO, and he just doesn't have the title, I actually think they should give that to him. But anyway, he's been at the helm the whole time. And Colin, as you'll see in our interview, is a rockstar, he's got technical and business chops, people love him in the community. Vertica's culture is really engineering driven and they're all about data. Despite the fact that Vertica is a 15-year-old company, they've really kept pace, and not been polluted by legacy baggage. Vertica, early on, embraced Hadoop and the whole open-source movement. And that helped give it tailwinds. It leaned heavily into cloud, as we're going to talk about further this week. And they've got a good story around machine intelligence and AI. So, whereas many traditional database players are really getting hurt, and some are getting killed, by cloud database providers, Vertica's actually doing a pretty good job of servicing its install base, and is in a reasonable position to compete for new workloads. On its last earnings call, the Micro Focus CEO, Stephen Murdoch, said they're investing 70 to 80 million dollars in two key growth areas, security and Vertica. Now, Micro Focus is running its SUSE play on these two parts of its business. What I mean by that is they're investing and allowing them to be semi-autonomous, spending on R&D and go to market. And, they have no hardware agenda, unlike when Vertica was part of HP, or HPE, I guess HP, before the spin out. Now, let me come back to the big trend in the market today. And there's something going on around analytic databases in the cloud. You've got companies like Snowflake and AWS with Redshift, as we've reported numerous times, and they're doing quite well, they're gaining share, especially of new workloads that are emerging, particularly in the cloud native space. They combine scalable compute, storage, and machine learning, and, importantly, they're allowing customers to scale compute and storage independent of each other. Why is that important? Because you don't have to buy storage every time you buy compute, or vice versa, in chunks. So, if you can scale them independently, you've got granularity. Vertica is keeping pace. In talking to customers, Vertica is leaning heavily into the cloud, supporting all the major cloud platforms, as we heard from Colin earlier today, adding Google.
And, while my research shows that Vertica has some work to do in cloud and cloud native, to simplify the experience, it has a more robust and mature stack, which supports many different environments, you know, deep SQL, ACID properties, and the DNA that allows Vertica to compete with these cloud-native database suppliers. Now, Vertica might lose out in some of those native workloads. But, I have to say, in my experience in talking with customers, if you're looking for a great MPP column store that scales and runs in the cloud, or on-prem, Vertica is in a very strong position. Vertica claims to be the only MPP columnar store to allow customers to scale compute and storage independently, both in the cloud and in hybrid environments, on-prem, et cetera, cross clouds as well. So, while Vertica may be at a disadvantage in a pure cloud native bake-off, its more robust and mature stack, combined with its multi-cloud strategy, gives Vertica a compelling set of advantages. So, we heard a lot of this from Colin Mahony, who announced Vertica 10.0 in his keynote. He really emphasized Vertica's multi-cloud affinity, its Eon Mode, which really allows that separation, or scaling, of compute independent of storage, both in the cloud and on-prem. Vertica 10, according to Mahony, is making big bets on in-database machine learning, he talked about that, AI, along with some advanced regression techniques. He talked about PMML models, Python integration, which was actually something that they talked about doing with Uber and some other customers. Now, Mahony also stressed the trend toward object stores. And, Vertica now supports, let's see, S3 with Eon in Google Cloud, in addition to AWS, and then Pure and HDFS as well; they all support Eon Mode. Mahony also stressed, as I mentioned earlier, a big commitment to on-prem and the whole cloud optionality thing. So 10.0, according to Colin Mahony, is all about really doubling down on these industry waves. As they say, enabling native PMML models, running them in Vertica, and really doing all the work that's required around ML and AI, they also announced support for TensorFlow. So, object store optionality is important, is what he talked about in Eon Mode, with the news of support for Google Cloud, as well as HDFS. And finally, a big focus on deployment flexibility. Migration tools are a critical focus, really on improving ease of use, and you hear this from a lot of customers. So, these are the critical aspects of Vertica 10.0, an announcement that we're going to be unpacking all week, with some of the experts that I talked about. So, I'm going to close with this. My long-time co-host, John Furrier, and I have talked for some time about this new cocktail of innovation. No longer is Moore's law the, really, mainspring of innovation. It's now about taking all these data troves, bringing machine learning and AI into that data to extract insights, and then operationalizing those insights at scale, leveraging cloud. And, one of the things I always look for from cloud is, if you've got a cloud play, you can attract innovation in the form of startups. It's part of the success equation, certainly for AWS, and I think it's one of the challenges for a lot of the legacy on-prem players. Vertica, I think, has done a pretty good job in this regard. And, you know, we're going to look this week for evidence of that innovation. One of the interviews that I'm personally excited about this week, is a new-ish company, I would consider them a startup, called Zebrium.
What they're doing, is they're applying AI to do autonomous log monitoring for IT ops. And, I'm interviewing Larry Lancaster, who's their CEO, this week, and I'm going to press him on why he chose to run on Vertica and not a cloud database. This guy is a hardcore tech guru and I want to hear his opinion. Okay, so keep it right there, stay with us. We're all over the Vertica Virtual Big Data Conference, covering in-depth interviews and following all the news. So, theCUBE is going to be interviewing these folks, two days, wall-to-wall coverage, so keep it right there. We're going to be right back with our next guest, right after this short break. This is Dave Vellante and you're watching theCUBE. (upbeat music)
Dan Woicke, Cerner Corporation | Virtual Vertica BDC 2020
(gentle electronic music) >> Hello, everybody, welcome back to the Virtual Vertica Big Data Conference. My name is Dave Vellante and you're watching theCUBE, the leader in digital coverage. This is the Virtual BDC, as I said, theCUBE has covered every Big Data Conference from the inception, and we're pleased to be a part of this, even though it's challenging times. I'm here with Dan Woicke, the senior director of CernerWorks Engineering. Dan, good to see ya, how are things where you are in the middle of the country? >> Good morning, challenging times, as usual. We're trying to adapt to having the kids at home, out of school, trying to figure out how they're supposed to get on their laptop and do virtual learning. We all have to adapt to it and figure out how to get by. >> Well, it sure would've been my pleasure to meet you face to face in Boston at the Encore Casino, hopefully next year we'll be able to make that happen. But let's talk about Cerner and CernerWorks Engineering, what is that all about? >> So, CernerWorks Engineering, we used to be part of what's called IP, or Intellectual Property, which is basically the organization at Cerner that does all of our software development. But what we did was we made a decision about five years ago to organize my team with CernerWorks which is the hosting side of Cerner. So, about 80% of our clients choose to have their domains hosted within one of the two Kansas City data centers. We have one in Lee's Summit, in south Kansas City, and then we have one on our main campus that's a brand new one in downtown, north Kansas City. About 80, so we have about 27,000 environments that we manage in the Kansas City data centers. So, what my team does is we develop software in order to make it easier for us to monitor, manage, and keep those clients healthy within our data centers. >> Got it. I mean, I think of Cerner as a real advanced health tech company. It's the combination of healthcare and technology, the collision of those two. But maybe describe a little bit more about Cerner's business. >> So we have, like I said, 27,000 facilities across the world. Growing each day, thank goodness. And, our goal is to ensure that we reduce errors and we digitize the entire medical records for all of our clients. And we do that by having a consulting practice, we do that by having engineering, and then we do that with my team, which manages those particular clients. And that's how we got introduced to the Vertica side as well, when we introduced them about seven years ago. We were actually able to take a tremendous leap forward in how we manage our clients. And I'd be more than happy to talk deeper about how we do that. >> Yeah, and as we get into it, I want to understand, healthcare is all about outcomes, about patient outcomes and you work back from there. IT, for years, has obviously been a contributor but removed, and somewhat indirect from those outcomes. But, in this day and age, especially in an organization like yours, it really starts with the outcomes. I wonder if you could ratify that and talk about what that means for Cerner. >> Sorry, are you talking about medical outcomes? >> Yeah, outcomes of your business. >> So, there's two different sides to Cerner, right? There's the medical side, the clinical side, which is obviously our main practice, and then there's the side that I manage, which is more of the operational side. Both are very important, but they go hand in hand together. 
On the operational side, the goal is to ensure that our clinicians are on the system, and they don't know they're on the system, right? Things are progressing, doctors don't want to be on the system, trust me. My job is to ensure they're having the most seamless experience possible while they're on the EMR, and have it just be one of their side jobs as opposed to taking their attention away from the patients. Does that make sense? >> Yeah it does, I mean, EMR and meaningful use, around the Affordable Care Act, really dramatically changed the industry. I mean, people had to demonstrate in order to get paid, and so that became sort of an unfunded mandate for folks, and you really had to respond to that, didn't you? >> We did, we did that about three to four years ago. And we had to help our clients get through what's called meaningful use, there were different stages of meaningful use. And what we did is we have a website called the Lights On Network, which is free to all of our clients. Once you get onto the Lights On Network website, you can actually see how you're measured and whether or not you're actually completing the different necessary tasks in order to get those payments for meaningful use. And it also allows you to see what your performance is on your domain, how the clinicians are doing on the system, how many hours they're spending on the system, how many orders they're executing. All of that is completely free and visible to our clients on the Lights On Network. And that's actually backed by some of the Vertica software that we've invested in. >> Yeah, so before we get into that, it sounds like your mission, really, is just great user experiences for the people that are on the network. Full stop. >> We do. So, one of the things that we invented about 10 years ago is called RTMS timers, the Response Time Measurement System. And it started off as a way of us proving that clients are actually using the system, and now it's turned into more of a user outcomes measure. What we do is we collect 2.5 billion timers per day across all of our clients across the world. And every single one of those records goes to the Vertica platform. And then we've also developed a system on that which allows us, in real time, to go and see whether or not they're deviating from their normal. So we do baselines for every hour of the week, and then if they're deviating from those baselines, we can immediately call a service center and have them engage the client before they call in. >> So, Dan, I wonder if you could paint a picture. By the way, that's awesome. I wonder if you could paint a picture of your analytics environment. What does it look like? Maybe give us a sense of the scale. >> Okay. So, I've been describing how we operate our remote hosted clients in the two Kansas City data centers, but all the software that we write, we also help our client-hosted agents as well. Not only do we take care of what's going on at the Kansas City data centers, but we do write software to ensure that all of our clients are treated the same, and we provide the same level of care and performance management across all those clients. So what we do is we have 90,000 agents that we have split across all these clients across the world. And every single hour, we're committing a billion rows to Vertica of operational data. So I talked a little bit about the RTMS timers, but we do things just like everyone else does for CPU, memory, Java heap stack.
We can tell you how many concurrent users are on the system, I can tell you if there's an application that goes down unexpectedly, like a crash. I can tell you the response time from the network, as most of us use Citrix at Cerner. And so what we do is we measure the amount of time it takes from the client-side PCs, sitting in the virtual data centers, sorry, in the hospitals, and then round trip to the Citrix servers that are sitting in the Kansas City data center. That's called the RTT, our round trip transactions. And what we've done, over the last couple of years, is we've switched from just summarizing CPU and memory and all that high-level stuff, in order to go down to a user level. So, what are you doing, Dr. Smith, today? How many hours are you using the EMR? Have you experienced any slowness? Have you experienced any hourglassing within your application? Have you experienced, unfortunately, maybe a crash? Have you experienced any slowness compared to your normal use case? And that's the step we've taken over the last few years, to go from summarization of high-level CPU and memory, over to outcome metrics, which are what is really happening with a particular user. >> So, really granular views of how the system is being used and deep analytics on that. I wonder, go ahead, please. >> And, we weren't able to do that by summarizing things in traditional databases. You have to actually have the individual rows and you can't summarize information, you have to have individual metrics that point to exactly what's going on with a particular clinician. >> So, okay, the MPP architecture, the columnar store, the scalability of Vertica, that's what's key. That was my next question, let me take us back to the days of traditional RDBMS and then you brought in Vertica. Maybe you could give us a sense as to why, what that did for you, the before and after. >> Right. So, I'd been painting a picture going forward here about how traditionally, eight years ago, all we could do was summarize information. If CPU was going to go and jump up 8%, I could alarm the data center and say, hey, listen, CPU looks like it's higher, maybe an application's hanging more than it has been in the past. Things are a little slower, but I wouldn't be able to tell you who's affected. And that's where the whole thing has changed, when we brought Vertica in six years ago, is that we're able to take those 90,000 agents and commit a billion rows per hour of operational data, and I can tell you exactly what's going on with each of our clinicians. Because, you know, it's important for an entire domain to be healthy. But what about the 10 doctors that are experiencing frustration right now? If you're going to summarize that information and roll it up, you'll never know what those 10 doctors are experiencing, and then guess what happens? They call the data center and complain, right? The squeaky wheels? We don't want that, we want to be able to show exactly who's experiencing bad performance right now and be able to reach out to them before they call the help desk. >> So you're able to be proactive there, so you've gone from, Houston, we have a problem, we really can't tell you what it is, go figure it out, to, we see that there's an issue with these docs, or these users, and go figure that out and focus narrowly on where the problem is as opposed to trying to whack-a-mole. >> Exactly. And the other big thing that we've been able to do is correlation. So, we operate two gigantic data centers.
And there are things that are shared, switches, network, shared storage, those things are shared. So if there is an issue that goes on with one of those pieces of equipment, it could affect multiple clients. Now that we have every row in Vertica, we have a new program in place called performance abnormality flags. And what we're able to do is provide a website, in real time, that goes through the entire stack, from Citrix to network to database to back-end tier, all the way to the end-user desktop. And so if something is going to be correlated, because we have a network switch going out in the data center or something's backing up slow, you can actually see which clients are on that switch. And what we did five years ago, before this, is we would deploy out five different teams to troubleshoot, right? Because five clients would call in, and they would all have the same problem. So, here you are having five separate teams trying to investigate why the same problem is happening. And now that we have all of the data within Vertica, we're able to show that in a real-time fashion, through a very transparent dashboard. >> And so operational metrics throughout the stack, right? A game changer. >> It's very compact, right? I just named five different things, the stack from your end-user device all the way through the back end to your database and all the way back. All that has to work properly, right? Including the network. >> How big is this, what are we talking about? However you measure it, terabytes, clusters. What can you share there? >> Sorry, you mean the amount of data that we process within our data centers? >> Give us a fun fact. >> Absolutely petabytes, yeah, for sure. And in Vertica right now we have two petabytes of data, and I purge it out every year, one year's worth of data, within two different clusters. So in the two different data centers I've been describing, what we've done is we've set Vertica up to be in both data centers, to be highly redundant, and then one of those is configured to do real-time analysis and correlation research, and the other one is to provide service towards what I described earlier as our Lights On Network, so it's a very dedicated, hardened cluster in one of our data centers to allow the Lights On Network to provide the transparency directly to our clients. So we want that one to be pristine, fast, and nobody touches it. As opposed to the other one, where people are doing real-time, ad hoc queries, which sometimes aren't the best thing in the world. No matter what kind of database or how fast it is, people do bad things in databases and we just don't want that to affect what we show our clients in a transparent fashion. >> Yeah, I mean, for our audience, Vertica has always been aimed at these big, hairy, analytic problems, it's not for a tiny little data mart in a department, it's really the big scale problems. I wonder if I could ask you, so you guys, obviously, healthcare, with HIPAA and privacy, are you doing anything in the cloud, or is it all on-prem today? >> So, in the operational space that I manage, it's all on-premises, and that is changing. As I was describing earlier, we have an initiative to go to AWS and provide levels of service to countries like Sweden, which does not want any operational data to leave that country's walls, whether it be operational data or whether it be PHI. And so, we have to be able to adopt Vertica Eon Mode in order to provide the same services within Sweden.
So obviously, Cerner's not going to go out and build a data center in every single country that requires us to, so we're going to leverage our partnership with AWS to make this happen. >> Okay, so, I was going to ask you, so you're not running Eon Mode today, it's something that you're obviously interested in. AWS will allow you to keep the data locally in that region. In talking to a lot of practitioners, they're intrigued by this notion of being able to scale independently, storage from compute. They've said they wish they had that, it's a much more efficient way: I don't have to buy in chunks, if I'm out of storage, I don't have to buy compute, and vice versa. So, maybe you could share with us what you're thinking, I know it's early days, but what's the logic behind the business case there? >> I think you're 100% correct in your assessment of taking compute away from storage. And, we do exactly what you say, we buy a server, and it has so much compute on it, and so much storage. And obviously, it's not scaled properly, right? Either storage runs out first or compute runs out first, but you're still paying big bucks for the entire server itself. So that's exactly why we're doing the POC right now for Eon Mode. And I sit on Vertica's TAB, the technical advisory board, and they've been doing a really good job of taking our requirements and listening to us as to what we need. And that was probably number one or two on everybody's list, to separate storage from compute. And that's exactly what we're trying to do right now. >> Yeah, it's interesting, I've talked to some other customers that are on the customer advisory board. And Vertica is one of these companies that are pretty transparent about what goes on there. And I think that for the early adopters of Eon Mode there were some challenges with getting data into the new system, I know Vertica has been working on that very hard, but you guys push Vertica pretty hard and, from what I can tell, they listen. Your thoughts. >> They do listen, they do a great job. And even though the Big Data Conference is canceled, they're committed to having us go virtually to the CAB meeting on Monday, so I'm looking forward to that. They do listen to our requirements and they've been very, very responsive. >> Nice. So, I wonder if you could give us some final thoughts as to where you want to take this thing. If you look down the road a year or two, what does success look like, Dan? >> That's a good question. Success means that we're a little bit more nimble as far as the different regions across the world that we can provide our services to. I want to do more correlation. I want to gather more information about what users are actually experiencing. I want to be able to have our phone never ring in our data center, I know that's a grand thought there. But I want to be able to look forward to measuring the data internally and reaching out to our clients when they have issues, and then doing the proper correlation so that I can understand how things are intertwining if multiple clients are having an issue. That's the goal going forward.
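To make the baselining Dan describes earlier in the conversation a little more concrete: comparing each client's current response times against an hour-of-week baseline, and flagging deviations, is the kind of work that maps naturally onto Vertica SQL. The sketch below is purely hypothetical; it is not Cerner's schema, thresholds, or logic, just the general shape of such a query.

```sql
-- Hypothetical table: rtms_timers(client_id, recorded_at, response_ms)
-- Baseline: average and spread per client, per hour of the week, over a trailing 28 days.
WITH baseline AS (
    SELECT client_id,
           DAYOFWEEK(recorded_at) AS dow,
           HOUR(recorded_at)      AS hr,
           AVG(response_ms)       AS avg_ms,
           STDDEV(response_ms)    AS sd_ms
    FROM rtms_timers
    WHERE recorded_at >= CURRENT_TIMESTAMP - INTERVAL '28 days'
    GROUP BY 1, 2, 3
)
-- Flag measurements from the last hour that sit well above their baseline.
SELECT t.client_id, t.recorded_at, t.response_ms, b.avg_ms
FROM rtms_timers t
JOIN baseline b
  ON b.client_id = t.client_id
 AND b.dow = DAYOFWEEK(t.recorded_at)
 AND b.hr  = HOUR(t.recorded_at)
WHERE t.recorded_at >= CURRENT_TIMESTAMP - INTERVAL '1 hour'
  AND t.response_ms > b.avg_ms + 3 * b.sd_ms;
```

At a billion rows an hour, a real system would presumably pre-aggregate the baselines rather than recompute them per query, but the comparison itself looks much like this.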
>> You're very welcome, and thank you, everybody, for watching, keep it right there, we'll be back with our next guest. This is Dave Vellante for theCUBE. Covering Virtual Vertica Big Data Conference. We'll be right back. (upbeat electronic music)
Gabriel Chapman, Pure Storage | Virtual Vertica BDC 2020
>> It's theCUBE, covering the Virtual Vertica Big Data Conference 2020. Brought to you by Vertica. >> Hi, everybody, and welcome to this CUBE special presentation of the Vertica Virtual Big Data Conference. theCUBE is running in parallel with day one and day two of the Vertica Big Data event. By the way, theCUBE has been at every single Big Data event, and it's our pleasure to be here in the virtual slash digital event as well. Gabriel Chapman is here. He's the director of FlashBlade Products Solutions Marketing at Pure Storage. Great to see you, thanks for coming on. >> Great to see you too. How's it going? >> It's going very well. I mean, I wish we were meeting in Boston at the Encore Hotel, but, you know, hopefully we'll be able to meet at Accelerate at some point in the future, or one of the regional shows that you guys are doing, because we've been covering that show as well. But I really want to get into it. At the last Accelerate, in September 2019, Pure and Vertica announced a partnership. I remember Joy ran up to me and said, hey, you've got to check this out: the separation of compute and storage with Eon Mode, now available on FlashBlade. And, I believe, still the only company that can support that separation and independent scaling both on-prem and in the cloud. So I want to ask, what were the trends in analytical databases and cloud that led to this partnership?
We fall that that falls into our umbrella of what we consider the modern day takes variance. And it's something that we've built into the entire pure portfolio. >>Okay, so I want to get into the architecture a little bit of flash blade and then understand the fit for, uh, analytic databases generally, but specifically for vertical. So it is a blade, so you got compute and network included. It's a key value store based system. So you're talking about scale out. Unlike, unlike, uh, pure is sort of, you know, initial products which were scale up, Um, and so I want on It is a fabric based system. I want to understand what that all means to take us through the architecture. You know, some of the quote unquote firsts that you guys talk about. So let's start with sort of the blade >>aspect. Yeah, the blade aspect of what we call the flash blade. Because if you look at the actual platform, you have, ah, primarily a chassis with built in networking components, right? So there's ah, fabric interconnect with inside the platform that connects to each one of the individual blades. Individual blades have their own compute that drives basically a pure storage flash components inside. It's not like we're just taking SSD is and plugging them into a system and like you would with the traditional commodity off the shelf hardware design. This is very much an engineered solution that is built towards the characteristics that we believe were important with fast filing past object scalability, massive parallel ization. When it comes to performance and the ability to really kind of grow and scale from essentially seven blades right now to 150 that's that's the kind of scale that customers are looking for, especially as we start to address these larger analytics pools. They are multi petabytes data sets, you know that single addressable object space and, you know, file performance that is beyond what most of your traditional scale up storage platforms are able to deliver. >>Yes, I interviewed cause last September and accelerate, and Christie Pure has been attacked by some of the competitors. There's not having scale out. I asked him his thoughts on that, he said Well, first of all, our flash blade is scale out. He said, Look, anything that adds complexity, you know we avoid. But for the workloads that are associated with flash blade scale out is the right sort of approach. Maybe you could talk about why that is. Well, >>realistically, I think you know that that approach is better when we're starting to work with large, unstructured data sets. I mean, flash blade is unique. The architected to allow customers to achieve superior resource utilization for compute and storage, while at the same time, you know, reducing significantly the complexity that has arisen around this kind of bespoke or siloed nature of big data and analytics solutions. I mean, we're really kind of look at this from a standpoint of you have built and delivered are created applications in the public cloud space of dress, you know, object storage and an unstructured data. And for some organizations, the importance is bringing that on Prem. I mean, we do see about repatriation coming on a lot of organizations as these data egress, charges continue to expand and grow, um, and then organizations that want even higher performance and what we're able to get into the public cloud space. They are bringing that data back on Prem They are looking at from a stamp. We still want to be able to scale the way we scale in the cloud. 
We still want to operate the same way we operate in the cloud, but we want to do it within control of our own, our own borders. And so that's, you know, that's one of the bigger pieces to that. And we start to look at how do we address cloud characteristics and dynamics and consumption metrics or models? A zealous the benefits and efficiencies of scale that they're able to afford but allowing customers to do that with inside their own data center. >>So you're talking about the trends earlier. You have these cloud native databases that allowed of the scaling of compute and storage independently. Vertical comes in with eon of a lot of times we talk about these these partnerships as Barney deals of you know I love you, You love me. Here's a press release and then we go on or they're just straight, you know, go to market. Are there other aspects of this partnership that they're non Barney deal like, in other words, any specific engineering. Um, you know other go to market programs? Could you talk about that a little bit? Yeah, >>it's it's It's more than just that what we consider a channel meet in the middle or, you know, that Barney type of deal. It's realistically, you know, we've done some first with Veronica that I think, really Courtney, if they think you look at the architecture and how we did, we've brought to market together. Ah, we have solutions. Teams in the back end who are, you know, subject matter experts. In this space, if you talk to joy and the people from vertical, they're very high on our very excited about the partnership because it often it opens up a new set of opportunities for their customers to leverage on mode and get into some of the the nuance task specs of how they leverage the depot depot with inside each individual. Compute node in adjustments with inside their reach. Additional performance gains for customers on Prem and at the same time, for them, that's still tough. The ability to go into that cloud model if they wish to. And so I think a lot of it is around. How do we partner is to companies? How do we do a joint selling motions? How do we show up in and do white papers and all of the traditional marketing aspects that we bring to the market? And then, you know, joint selling opportunities exist where they are, and so that's realistically. I think, like any other organization that's going to market with a partner on MSP that they have, ah, strong partnership with. You'll continue to see us, you know, talking about are those mutually beneficial relationships and the solutions that we're bringing to the market. >>Okay, you know, of course, he used to be a Gartner analyst, and you go to the vendor side now, but it's but it's, but it's a Gartner analyst. You're obviously objective. You see it on, you know well, there's a lot of ways to skin the cat There, there their strengths, weaknesses, opportunities, threats, etcetera for every vendor. So you have you have vertical who's got a very mature stack and talking to a number of the customers out there who are using EON mode. You know there's certain workloads where these cloud native databases makes sense. It's not just the economics of scaling and storage independently. I want to talk more about that. There's flexibility aspect as well. But Vertical really has to play its its trump card, which is Look, we've got a big on premise state, and we're gonna bring that eon capability both on Prem and we're embracing the cloud now. 
There obviously have been there to play catch up in the cloud, but at the same time, they've got a much more mature stack than a lot of these other cloud native databases that might have just started a couple of years ago. So you know, so there's trade offs that customers have to make. How do you sort through that? Where do you see the interest in this? And and what's the sweet spot for this partnership? You know, we've >>been really excited to build the partnership with vertical A and provide, you know, we're really proud to provide pretty much the only on Prem storage platform that's validated with the yang mode to deliver a modern data experience for our customers together. You know, it's ah, it's that partnership that allows us to go into customers that on Prem space, where I think that there's still not to say that not everybody wants to go there, but I think there's aspects and solutions that worked very well there. But for the vast majority, I still think that there's, you know, the your data center is not going away. And you do want to have control over some of the many of the assets with inside of the operational confines. So therefore, we start to look at how do we can do the best of what cloud offers but on prim. And that's realistically, where we start to see the stronger push for those customers. You still want to manage their data locally. A swell as maybe even worked around some of the restrictions that they might have around cost and complexity hiring. You know, the different types of skills skill sets that are required to bring applications purely cloud native. It's still that larger part of that digital transformation that many organizations are going for going forward with. And realistically, I think they're taking a look at the pros and cons, and we've been doing cloud long enough where people recognize that you know it's not perfect for everything and that there's certain things that we still want to keep inside our own data center. So I mean, realistically, as we move forward, that's, Ah, that better option when it comes to a modern architecture that can do, you know, we can deliver an address, a diverse set of performance requirements and allow the organization to continue to grow the model to the data, you know, based on the data that they're actually trying to leverage. And that's really what Flash was built for. It was built for a platform that could address small files or large files or high throughput, high throughput, low latency scale of petabytes in a single name. Space in a single rack is we like to put it in there. I mean, we see customers that have put 150 flash blades into production as a single name space. It's significant for organizations that are making that drive towards modern data experience with modern analytics platforms. Pure and Veronica have delivered an experience that can address that to a wide range of customers that are implementing uh, you know, particularly on technology. >>I'm interested in exploring the use case. A little bit further. You just sort of gave some parameters and some examples and some of the flexibility that you have, um, and take us through kind of what the customer discussions are like. Obviously you've got a big customer base, you and vertical that that's on Prem. That's the the unique advantage of this. But there are others. It's not just the economics of the granular scaling of compute and storage independently. There are other aspects of take us through that sort of a primary use case or use cases. 
Yeah, you >>know, I mean, I could give you a couple customer examples, and we have a large SAS analyst company which uses vertical on last way to authenticate the quality of digital media in real time, You know, then for them it makes a big difference is they're doing their streaming and whatnot that they can. They can fine tune the grand we control that. So that's one aspect that that we address. We have a multinational car car company, which uses vertical on flash blade to make thousands of decisions per second for autonomous vehicle decision making trees. You know, that's what really these new modern analytics platforms were built for, um, there's another healthcare organization that uses vertical on flash blade to enable healthcare providers to make decisions in real time. The impact lives, especially when we start to look at and, you know, the current state of affairs with code in the Corona virus. You know, those types of technologies, we're really going to help us kind of get of and help lower invent, bend that curve downward. So, you know, there's all these different areas where we can address that the goals and the achievements that we're trying to look bored with with real time analytics decision making tools like and you know, realistically is we have these conversations with customers they're looking to get beyond the ability of just, you know, a data scientist or a data architect looking to just kind of driving information >>that we're talking about Hadoop earlier. We're kind of going well beyond that now. And I guess what I'm saying is that in the first phase of cloud, it was all about infrastructure. It was about, you know, uh, spin it up. You know, compute and storage is a little bit of networking in there. >>It >>seems like the next new workload that's clearly emerging is you've got. And it started with the cloud native databases. But then bringing in, you know, AI and machine learning tooling on top of that Ah, and then being able to really drive these new types of insights and it's really about taking data these bog this bog of data that we've collected over the last 10 years. A lot of that is driven by a dupe bringing machine intelligence into the equation, scaling it with either cloud public cloud or bringing that cloud experience on Prem scale. You know, across organizations and across your partner network, that really is a new emerging workloads. You see that? And maybe talk a little bit about what you're seeing with customers. >>Yeah. I mean, it really is. We see several trends. You know, one of those is the ability to take a take this approach to move it out of the lab, but into production. Um, you know, especially when it comes to data science projects, machine learning projects that traditionally start out as kind of small proofs of concept, easy to spin up in the cloud. But when a customer wants to scale and move towards a riel you know, derived a significant value from that. They do want to be able to control more characteristic site, and we know machine learning, you know, needs toe needs to learn from a massive amounts of data to provide accuracy. There's just too much data retrieving the cloud for every training job. Same time Predictive analytics without accuracy is not going to deliver the business advantage of what everyone is seeking. You know, we see this. 
Ah, the visualization of Data Analytics is Tricia deployed is being on a continuum with, you know, the things that we've been doing in the long in the past with data warehousing, data Lakes, ai on the other end. But this way, we're starting to manifest it and organizations that are looking towards getting more utility and better elasticity out of the data that they are working for. So they're not looking to just build apps, silos of bespoke ai environments. They're looking to leverage. Ah, you know, ah, platform that can allow them to, you know, do ai, for one thing, machine learning for another leverage multiple protocols to access that data because the tools are so much Jeff um, you know, it is a growing diversity of of use cases that you can put on a single platform I think organizations are looking for as they try to scale these environment. >>I think it's gonna be a big growth area in the coming years. Gable. I wish we were in Boston together. You would have painted your little corner of Boston orange. I know that you guys have but really appreciate you coming on the cube wall to wall coverage. Two days of the vertical vertical virtual big data conference. Keep it right there. Right back. Right after this short break, Yeah.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Gabriel Chapman | PERSON | 0.99+ |
September 2019 | DATE | 0.99+ |
Boston | LOCATION | 0.99+ |
Barney | ORGANIZATION | 0.99+ |
Gartner | ORGANIZATION | 0.99+ |
Two days | QUANTITY | 0.99+ |
Veronica | PERSON | 0.99+ |
Jeff | PERSON | 0.99+ |
last September | DATE | 0.99+ |
thousands | QUANTITY | 0.98+ |
150 | QUANTITY | 0.98+ |
Courtney | PERSON | 0.98+ |
one | QUANTITY | 0.98+ |
one aspect | QUANTITY | 0.98+ |
Day One | QUANTITY | 0.97+ |
day two | QUANTITY | 0.97+ |
seven blades | QUANTITY | 0.97+ |
both | QUANTITY | 0.96+ |
Virtual Vertica | ORGANIZATION | 0.96+ |
over three years | QUANTITY | 0.96+ |
150 flash blades | QUANTITY | 0.95+ |
first | QUANTITY | 0.95+ |
single rack | QUANTITY | 0.94+ |
Corona virus | OTHER | 0.94+ |
single name | QUANTITY | 0.94+ |
first phase | QUANTITY | 0.94+ |
Pure Storage | ORGANIZATION | 0.93+ |
Prem | ORGANIZATION | 0.92+ |
Christie Pure | ORGANIZATION | 0.91+ |
single platform | QUANTITY | 0.91+ |
each individual | QUANTITY | 0.91+ |
this year | DATE | 0.91+ |
firsts | QUANTITY | 0.9+ |
Big Data Conference 2020 | EVENT | 0.9+ |
America | LOCATION | 0.89+ |
Flash Blade Products Solutions | ORGANIZATION | 0.89+ |
couple of years ago | DATE | 0.88+ |
single name | QUANTITY | 0.84+ |
each one | QUANTITY | 0.84+ |
one thing | QUANTITY | 0.83+ |
Tricia | PERSON | 0.82+ |
Pure | ORGANIZATION | 0.81+ |
last 10 years | DATE | 0.8+ |
Hadoop | TITLE | 0.75+ |
single addressable | QUANTITY | 0.74+ |
second | QUANTITY | 0.72+ |
Veronica | ORGANIZATION | 0.7+ |
Encore Hotel | LOCATION | 0.68+ |
Big Data | EVENT | 0.67+ |
Cube | COMMERCIAL_ITEM | 0.66+ |
SAS | ORGANIZATION | 0.65+ |
Flash Blade | TITLE | 0.62+ |
petabytes | QUANTITY | 0.62+ |
eon | ORGANIZATION | 0.59+ |
couple customer | QUANTITY | 0.55+ |
EON | ORGANIZATION | 0.53+ |
single big | QUANTITY | 0.5+ |
Big | EVENT | 0.49+ |
years | DATE | 0.48+ |
sub | QUANTITY | 0.46+ |
2020 | DATE | 0.33+ |
UNLIST TILL 4/2 - Extending Vertica with the Latest Vertica Ecosystem and Open Source Initiatives
>> Sue: Hello everybody. Thank you for joining us today for the Virtual Vertica BDC 2020. Today's breakout session is entitled Extending Vertica with the Latest Vertica Ecosystem and Open Source Initiatives. My name is Sue LeClaire, Director of Marketing at Vertica, and I'll be your host for this webinar. Joining me is Tom Wall, a member of the Vertica engineering team. But before we begin, I encourage you to submit questions or comments during the virtual session. You don't have to wait. Just type your question or comment in the question box below the slides and click submit. There will be a Q and A session at the end of the presentation. We'll answer as many questions as we're able to during that time. Any questions that we don't get to, we'll do our best to answer them offline. Alternatively, you can visit the Vertica forums to post your questions after the session. Our engineering team is planning to join the forums to keep the conversation going. Also a reminder that you can maximize your screen by clicking the double arrow button in the lower right corner of the slides. And yes, this virtual session is being recorded and will be available to view on demand later this week. We'll send you a notification as soon as it's ready. So let's get started. Tom, over to you.
>> Tom: Hello everyone and thanks for joining us today for this talk. My name is Tom Wall and I am the leader of Vertica's ecosystem engineering team. We are the team that focuses on building out all the developer tools and third party integrations that enable the software ecosystem that surrounds Vertica to thrive. So today, we'll be talking about some of our new open source initiatives and how those can be really effective for you and make things easier for you to build and integrate Vertica with the rest of your technology stack. We've got several new libraries, integration projects and examples, all open source, to share, all being built out in the open on our GitHub page. Whether you use these open source projects or not, this is a very exciting new effort that will really help to grow the developer community and enable lots of exciting new use cases. So, every developer out there has probably had to deal with a problem like this. You have some business requirements, to maybe build some new Vertica-powered application. Maybe you have to build some new system to visualize some data that's managed by Vertica. In various circumstances, lots of choices might be made for you that constrain your approach to solving a particular problem. These requirements can come from all different places. Maybe your solution has to work with a specific visualization tool, or web framework, because the business has already invested in the licensing and the tooling to use it. Maybe it has to be implemented in a specific programming language, since that's what all the developers on the team know how to write code with. While Vertica has many different integrations with lots of different programming languages and systems, there's a lot of them out there, and we don't have integrations for all of them. So how do you make ends meet when you don't have all the tools you need? Well, you have to get creative, using tools like PyODBC, for example, to bridge between programming languages and frameworks to solve the problems you need to solve. Most languages do have an ODBC-based database interface. ODBC is a C library and most programming languages know how to call C code, somehow.
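To make that bridging approach concrete, here is a minimal sketch of what the PyODBC glue code tends to look like. This is an illustration rather than anything from the talk; it assumes an ODBC DSN named VerticaDSN has already been configured to point at the Vertica ODBC driver, and the credentials are placeholders.

```python
# Hypothetical example: querying Vertica through a preconfigured ODBC DSN.
# The DSN name, user, and password below are placeholders.
import pyodbc

conn = pyodbc.connect("DSN=VerticaDSN;UID=dbadmin;PWD=example_password")
cur = conn.cursor()
cur.execute("SELECT version()")
print(cur.fetchone()[0])
conn.close()
```

Getting to the point where those few lines actually run, with the driver installed, the odbc.ini entries right, and Unicode behaving, is exactly where the friction described next comes from.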
So that's doable, but it often requires lots of configuration and troubleshooting to make all those moving parts work well together. So that's enough to get the job done, but native integrations are usually a lot smoother and easier. So rather than, for example, in Python trying to fight with PyODBC, to configure things and get Unicode working, and to compile all the different pieces the right way to make it all work smoothly, it would be much better if you could just pip install a library and get to work. And with Vertica-Python, a new Python client library, you can actually do that. So that story, I assume, probably sounds pretty familiar to you. It probably sounds familiar to a lot of the audience here because we're all using Vertica. And our challenge, as Big Data practitioners, is to make sense of all this stuff, despite those technical and non-technical hurdles. Vertica powers lots of different businesses and use cases across all kinds of different industries and verticals. While there's a lot different about us, we're all here together right now for this talk because we do have some things in common. We're all using Vertica, and we're probably also using Vertica with other systems and tools too, because it's important to use the right tool for the right job. That's a founding principle of Vertica and it's true today too. In this constantly changing technology landscape, we need lots of good tools and well established patterns, approaches, and advice on how to combine them so that we can be successful doing our jobs. Luckily for us, Vertica has been designed to be easy to build with and extended in this fashion. Databases as a whole have had this goal from the very beginning. They solve the hard problems of managing data so that you don't have to worry about it. Instead of worrying about those hard problems, you can focus on what matters most to you and your domain. So implementing that business logic, solving that problem, without having to worry about all of these intense details about what it takes to manage a database at scale. With the declarative syntax of SQL, you tell Vertica what the answer is that you want. You don't tell Vertica how to get it. Vertica will figure out the right way to do it for you so that you don't have to worry about it. So this SQL abstraction is very nice because it's a well defined boundary where lots of developers know SQL, and it allows you to express what you need without having to worry about those details. So we can be the experts in data management while you worry about your problems. This goes beyond, though, what's accessible through SQL to Vertica. We've got well defined extension and integration points across the product that allow you to customize this experience even further. So if you want to do things like write your own SQL functions, or extend the database software with UDXs, you can do so. If you have a custom data format that might be a proprietary format, or some source system that Vertica doesn't natively support, we have extension points that allow you to use those. They make it very easy to do parallel, massive data movement, loading into Vertica but also exporting from Vertica to send data to other systems. And with these newer features, in time, we can also do the same kinds of things with Machine Learning models, importing and exporting to tools like TensorFlow.
And it's these integration points that have enabled Vertica to build out this open architecture and a rich ecosystem of tools, both open source and closed source, of different varieties that solve all the different problems that are common in this big data processing world. Whether it's open source streaming systems like Kafka or Spark, or more traditional ETL tools on the loading side, but also BI tools and visualizers and things like that to view and use the data that you keep in your database on the right side. And then of course, Vertica needs to be flexible enough to be able to run anywhere. So you can really take Vertica and use it the way you want it to solve the problems that you need to solve. So Vertica has always employed open standards, and integrated with all kinds of different open source systems. What we're really excited to talk about now is that we are taking our new integration projects and making those open source too. In particular, we've got two new open source client libraries that allow you to build Vertica applications for Python and Go. These libraries act as a foundation for all kinds of interesting applications and tools. Upon those libraries, we've also built some integrations ourselves. And we're using these new libraries to power some new integrations with some third party products. Finally, we've got lots of new examples and reference implementations out on our GitHub page that can show you how to combine all these moving parts in exciting ways to solve new problems. And the code for all these things is available now on our GitHub page. And so you can use it however you like, and even help us make it better too. So the first such project that we have is called Vertica-Python. Vertica-Python began at our customer, Uber. And then in late 2018, we collaborated with them and we took it over and made Vertica-Python the first official open source client for Vertica. You can use this to build your own Python applications, or you can use it via tools that were written in Python. Python has grown a lot in recent years and it's a very common language to solve lots of different problems and use cases in the Big Data space, from things like DevOps administration and Data Science or Machine Learning, or just homegrown applications. We use Python a lot internally for our own QA testing and automation needs. And with the Python 2 End Of Life that happened at the end of 2019, it was important that we had a robust Python solution to help migrate our internal stuff off of Python 2. And also to provide a nice migration path for all of you, our users, that might be worried about the same problems with their own Python code. So Vertica-Python is used already for lots of different tools, including Vertica's admintools, now starting with 9.3.1. It was also used by DataDog to build a Vertica-DataDog integration that allows you to monitor your Vertica infrastructure within DataDog. So here's a little example of how you might use the Python Client to do some work. So here we open a connection, we run a query to find out what node we've connected to, and then we do a little data load by running a COPY statement. And this is designed to have a familiar look and feel if you've ever used a Python Database Client before. So we implement the DB API 2.0 standard and it feels like a Python package. So that includes things like, it's part of the centralized package manager, so you can just pip install this right now and go start using it. We also have our client for Go.
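Before turning to the Go client, here is a rough sketch of the Python flow just described: open a connection, check which node you landed on, then load a little data with COPY. The host, credentials, and the sample_table name are placeholders, not part of the talk.

```python
# Rough sketch of the vertica-python flow described above ("pip install vertica-python").
# Host, credentials, and table name are placeholders.
import vertica_python

conn_info = {
    'host': 'vertica.example.com',
    'port': 5433,
    'user': 'dbadmin',
    'password': 'example_password',
    'database': 'analytics',
}

with vertica_python.connect(**conn_info) as connection:
    cur = connection.cursor()

    # Which node did we land on?
    cur.execute("SELECT node_name FROM current_session")
    print(cur.fetchone()[0])

    # A small data load via COPY from an in-memory string.
    cur.copy("COPY sample_table (id, label) FROM STDIN DELIMITER ','",
             "1,foo\n2,bar\n")
```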
So this is called vertica-sql-go. And this is a very similar story, just in a different context and a different programming language. So vertica-sql-go began as a collaboration with the Micro Focus SecOps group, who build Micro Focus's security products, some of which use Vertica internally to provide some of those analytics. So you can use this to build your own apps in the Go programming language, but you can also use it via tools that are written in Go. So most notably, we have our Grafana integration, which we'll talk a little bit more about later, that leverages this new client to provide Grafana visualizations for Vertica data. And Go is another programming language rising in popularity, 'cause it offers an interesting balance of different programming design trade-offs. So it's got good performance, good concurrency and memory safety. And we liked all those things and we're using it to power some internal monitoring stuff of our own. And here's an example of the code you can write with this client. So this is Go code that does a similar thing. It opens a connection, it runs a little test query, and then it iterates over those rows, processing them using Go data types. You get that native look and feel just like you do in Python, except this time in the Go language. And you can go get it the way you usually package things with Go, by running that command there to acquire this package. And it's important to note here, for these projects, we're really doing open source development. We're not just putting code out on our GitHub page. So if you go out there and look, you can see that you can ask questions, you can report bugs, you can submit pull requests yourselves, and you can collaborate directly with our engineering team and the other Vertica users out on our GitHub page. Because it's out on our GitHub page, it allows us to be a little bit faster with the way we ship and deliver functionality compared to the core Vertica release cycle. So in 2019, for example, as we were building features to prepare for the Python 3 migration, we shipped 11 different releases with 40 customer reported issues filed on GitHub. That was done over 78 different pull requests, and with lots of community engagement as we did so. So lots of people are using this already, as our GitHub badges show, with about 5,000 downloads a day by people using it in their software. And again, we want to make this easy, not just to use but also to contribute and understand and collaborate with us. So all these projects are built using the Apache 2.0 license. The master branch is always available and stable with the latest functionality. And you can always build it and test it the way we do, so that it's easy for you to understand how it works and to submit contributions or bug fixes or even features. It uses automated testing, both locally and with pull requests. And for vertica-python, it's fully automated with Travis CI. So we're really excited about doing this and we're really excited about where it can go in the future. 'Cause this offers some exciting opportunities for us to collaborate with you more directly than we have ever before. You can contribute improvements and help us guide the direction of these projects, but you can also work with each other to share knowledge and implementation details and various best practices. And so maybe you think, "Well, I don't use Python, I don't use Go, so maybe it doesn't matter to me." But I would argue it really does matter.
Because even if you don't use these tools and languages, there's lots of amazing vertica developers out there who do. And these clients do act as low level building blocks for all kinds of different interesting tools, both in these Python and Go worlds, but also well beyond that. Because these implementations and examples really generalize to lots of different use cases. And we're going to do a deeper dive now into some of these to understand exactly how that's the case and what you can do with these things. So let's take a deeper look at some of the details of what it takes to build one of these open source client libraries. So these database client interfaces, what are they exactly? Well, we all know SQL, but if you look at what SQL specifies, it really only talks about how to manipulate the data within the database. So once you're connected and in, you can run commands with SQL. But these database client interfaces address the rest of those needs. So what does the programmer need to do to actually process those SQL queries? So these interfaces are specific to a particular language or a technology stack. But the use cases and the architectures and design patterns are largely the same between different languages. They all have a need to do some networking and connect and authenticate and create a session. They all need to be able to run queries and load some data and deal with problems and errors. And then they also have a lot of metadata and Type Mapping because you want to use these clients the way you use those programming languages. Which might be different than the way that vertica's data types and vertica's semantics work. So some of this client interfaces are truly standards. And they are robust enough in terms of what they design and call for to support a truly pluggable driver model. Where you might write an application that codes directly against the standard interface, and you can then plug in a different database driver, like a JDBC driver, to have that application work with any database that has a JDBC driver. So most of these interfaces aren't as robust as a JDBC or ODBC but that's okay. 'Cause it's good as a standard is, every database is unique for a reason. And so you can't really expose all of those unique properties of a database through these standard interfaces. So vertica's unique in that it can scale to the petabytes and beyond. And you can run it anywhere in any environment, whether it's on-prem or on clouds. So surely there's something about vertica that's unique, and we want to be able to take advantage of that fact in our solutions. So even though these standards might not cover everything, there's often a need and common patterns that arise to solve these problems in similar ways. When there isn't enough of a standard to define those comments, semantics that different databases might have in common, what you often see is tools will invent plug in layers or glue code to compensate by defining application wide standard to cover some of these same semantics. Later on, we'll get into some of those details and show off what exactly that means. So if you connect to a vertica database, what's actually happening under the covers? You have an application, you have a need to run some queries, so what does that actually look like? Well, probably as you would imagine, your application is going to invoke some API calls and some client library or tool. 
This library takes those API calls and implements them, usually by issuing some networking protocol operations, communicating over the network to ask vertica to do the heavy lifting required for that particular API call. And so these API's usually do the same kinds of things although some of the details might differ between these different interfaces. But you do things like establish a connection, run a query, iterate over your rows, manage your transactions, that sort of thing. Here's an example from vertica-python, which just goes into some of the details of what actually happens during the Connect API call. And you can see all these details in our GitHub implementation of this. There's actually a lot of moving parts in what happens during a connection. So let's walk through some of that and see what actually goes on. I might have my API call like this where I say Connect and I give it a DNS name, which is my entire cluster. And I give you my connection details, my username and password. And I tell the Python Client to get me a session, give me a connection so I can start doing some work. Well, in order to implement this, what needs to happen? First, we need to do some TCP networking to establish our connection. So we need to understand what the request is, where you're going to connect to and why, by pressing the connection string. and vertica being a distributed system, we want to provide high availability, so we might need to do some DNS look-ups to resolve that DNS name which might be an entire cluster and not just a single machine. So that you don't have to change your connection string every time you add or remove nodes to the database. So we do some high availability and DNS lookup stuff. And then once we connect, we might do Load Balancing too, to balance the connections across the different initiator nodes in the cluster, or in a sub cluster, as needed. Once we land on the node we want to be at, we might do some TLS to secure our connections. And vertica supports the industry standard TLS protocols, so this looks pretty familiar for everyone who've used TLS anywhere before. So you're going to do a certificate exchange and the client might send the server certificate too, and then you going to verify that the server is who it says it is, so that you can know that you trust it. Once you've established that connection, and secured it, then you can start actually beginning to request a session within vertica. So you going to send over your user information like, "Here's my username, "here's the database I want to connect to." You might send some information about your application like a session label, so that you can differentiate on the database with monitoring queries, what the different connections are and what their purpose is. And then you might also send over some session settings to do things like auto commit, to change the state of your session for the duration of this connection. So that you don't have to remember to do that with every query that you have. Once you've asked vertica for a session, before vertica will give you one, it has to authenticate you. and vertica has lots of different authentication mechanisms. So there's a negotiation that happens there to decide how to authenticate you. Vertica decides based on who you are, where you're coming from on the network. And then you'll do an auth-specific exchange depending on what the auth mechanism calls for until you are authenticated. 
Finally, Vertica trusts you and lets you in, so you're going to establish a session in Vertica, and you might do some note-keeping on the client side just to know what happened. So you might log some information, you might record what the version of the database is, you might do some protocol feature negotiation. So if you connect to a version of the database that doesn't support all these protocols, you might decide to turn some functionality off and that sort of thing. But finally, after all that, you can return from this API call and then your connection is good to go. So that connection is just one example of many different APIs. And we're excited here because with vertica-python we're really opening up the Vertica client wire protocol for the first time. And so if you're a low-level Vertica developer and you might have used Postgres before, you might know that some of Vertica's client protocol is derived from Postgres. But they do differ in many significant ways. And this is the first time we've ever revealed those details about how it works and why. So not all Postgres protocol features work with Vertica, because Vertica doesn't support all the features that Postgres does. Postgres, for example, has a large object interface that allows you to stream very wide data values over. Whereas Vertica doesn't really have very wide data values; you have VARCHARs, you have LONG VARCHARs, but that's about as wide as you can get. Similarly, the Vertica protocol supports lots of features not present in Postgres. So Load Balancing, for example, which we just went through an example of. Postgres is a single node system; it doesn't really make sense for Postgres to have Load Balancing. But Load Balancing is really important for Vertica because it is a distributed system. Vertica-python serves as an open reference implementation of this protocol, with all kinds of new details and extension points that we haven't revealed before. So if you look at these boxes below, all these different things are new protocol features that we've implemented since August 2019, out in the open on our GitHub page for Python. Now, the vertica-sql-go implementation of these things is still in progress, but the core protocols are there for basic query operations. There's more to do there but we'll get there soon. So this is really cool, 'cause not only do you now have a Python client implementation, and a Go client implementation of this, but you can use this protocol reference to do lots of other things, too. The obvious thing you could do is build more clients for other languages. So if you have a need for a client in some other language that Vertica doesn't support yet, now you have everything available to solve that problem and to go about doing so if you need to. But beyond clients, it's also used for other things. So you might use it for mocking and testing things. So rather than connecting to a real Vertica database, you can simulate some of that. You can also use it to do things like query routing and proxies. So Uber, for example, this blog here in this link tells a great story of how they route different queries to different Vertica clusters by intercepting these protocol messages, parsing the queries in them and deciding which clusters to send them to. So a lot of these things are just ideas today, but now that you have the source code, there's no limit in sight to what you can do with this thing.
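Circling back to the connection walkthrough, here is a hedged sketch of how several of those protocol features, load balancing, backup nodes for high availability, TLS, and session labels, surface as connection options in vertica-python. The option names follow the project's documented connection settings; the host names, credentials, and CA file path are placeholders.

```python
# Illustrative only: connection options exercising the features discussed above.
# All host names, credentials, and file paths are placeholders.
import ssl
import vertica_python

ssl_context = ssl.create_default_context(cafile='/path/to/server_ca.pem')

conn_info = {
    'host': 'vertica.example.com',                       # initial node to contact
    'backup_server_node': ['node2.example.com', 'node3.example.com'],
    'port': 5433,
    'user': 'dbadmin',
    'password': 'example_password',
    'database': 'analytics',
    'connection_load_balance': True,        # let the server redirect to another initiator node
    'ssl': ssl_context,                     # verify the server certificate over TLS
    'session_label': 'nightly_reporting',   # visible in monitoring queries on the server
}

with vertica_python.connect(**conn_info) as connection:
    cur = connection.cursor()
    cur.execute("SELECT version()")
    print(cur.fetchone()[0])
```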
And so we're very interested in hearing your ideas and requests and we're happy to offer advice and collaborate on building some of these things together. So let's take a look now at some of the things we've already built that do these things. So here's a picture of vertica's Grafana connector with some data powered from an example that we have in this blog link here. So this has an internet of things use case to it, where we have lots of different sensors recording flight data, feeding into Kafka which then gets loaded into vertica. And then finally, it gets visualized nicely here with Grafana. And Grafana's visualizations make it really easy to analyze the data with your eyes and see when something something happens. So in these highlighted sections here, you notice a drop in some of the activity, that's probably a problem worth looking into. It might be a lot harder to see that just by staring at a large table yourself. So how does a picture like that get generated with a tool like Grafana? Well, Grafana specializes in visualizing time series data. And time can be really tricky for computers to do correctly. You got time zones, daylight savings, leap seconds, negative infinity timestamps, please don't ever use those. In every system, if it wasn't hard enough, just with those problems, what makes it harder is that every system does it slightly differently. So if you're querying some time data, how do we deal with these semantic differences as we cross these domain boundaries from Vertica to Grafana's back end architecture, which is implemented in Go on it's front end, which is implemented with JavaScript? Well, you read this from bottom up in terms of the processing. First, you select the timestamp and Vertica is timestamp has to be converted to a Go time object. And we have to reconcile the differences that there might be as we translate it. So Go time has a different time zone specifier format, and it also supports nanosecond precision, while Vertica only supports microsecond precision. So that's not too big of a deal when you're querying data because you just see some extra zeros, not fractional seconds. But on the way in, if we're loading data, we have to find a way to resolve those things. Once it's into the Go process, it has to be converted further to render in the JavaScript UI. So that there, the Go time object has to be converted to a JavaScript Angular JS Date object. And there too, we have to reconcile those differences. So a lot of these differences might just be presentation, and not so much the actual data changing, but you might want to choose to render the date into a more human readable format, like we've done in this example here. Here's another picture. This is another picture of some time series data, and this one shows you can actually write your own queries with Grafana to provide answers. So if you look closely here you can see there's actually some functions that might not look too familiar with you if you know vertica's functions. Vertica doesn't have a dollar underscore underscore time function or a time filter function. So what's actually happening there? How does this actually provide an answer if it's not really real vertica syntax? Well, it's not sufficient to just know how to manipulate data, it's also really important that you know how to operate with metadata. So information about how the data works in the data source, Vertica in this case. 
So Grafana needs to know how time works in detail for each data source beyond doing that basic I/O that we just saw in the previous example. So it needs to know, how do you connect to the data source to get some time data? How do you know what time data types and functions there are and how they behave? How do you generate a query that references a time literal? And finally, once you've figured out how to do all that, how do you find the time in the database? How do you do know which tables have time columns and then they might be worth rendering in this kind of UI. So Go's database standard doesn't actually really offer many metadata interfaces. Nevertheless, Grafana needs to know those answers. And so it has its own plugin layer that provides a standardizing layer whereby every data source can implement hints and metadata customization needed to have an extensible data source back end. So we have another open source project, the Vertica-Grafana data source, which is a plugin that uses Grafana's extension points with JavaScript and the front end plugins and also with Go in the back end plugins to provide vertica connectivity inside Grafana. So the way this works, is that the plugin frameworks defines those standardizing functions like time and time filter, and it's our plugin that's going to rewrite them in terms of vertica syntax. So in this example, time gets rewritten to a vertica cast. And time filter becomes a BETWEEN predicate. So that's one example of how you can use Grafana, but also how you might build any arbitrary visualization tool that works with data in Vertica. So let's now look at some other examples and reference architectures that we have out in our GitHub page. For some advanced integrations, there's clearly a need to go beyond these standards. So SQL and these surrounding standards, like JDBC, and ODBC, were really critical in the early days of Vertica, because they really enabled a lot of generic database tools. And those will always continue to play a really important role, but the Big Data technology space moves a lot faster than these old database data can keep up with. So there's all kinds of new advanced analytics and query pushdown logic that were never possible 10 or 20 years ago, that Vertica can do natively. There's also all kinds of data-oriented application workflows doing things like streaming data, or Parallel Loading or Machine Learning. And all of these things, we need to build software with, but we don't really have standards to go by. So what do we do there? Well, open source implementations make for easier integrations, and applications all over the place. So even if you're not using Grafana for example, other tools have similar challenges that you need to overcome. And it helps to have an example there to show you how to do it. Take Machine Learning, for example. There's been many excellent Machine Learning tools that have arisen over the years to make data science and the task of Machine Learning lot easier. And a lot of those have basic database connectivity, but they generally only treat the database as a source of data. So they do lots of data I/O to extract data from a database like Vertica for processing in some other engine. We all know that's not the most efficient way to do it. It's much better if you can leverage Vertica scale and bring the processing to the data. So a lot of these tools don't take full advantage of Vertica because there's not really a uniform way to go do so with these standards. 
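To make the "bring the processing to the data" point concrete, here is a hedged sketch contrasting the extract-then-compute pattern with pushing the aggregation down into Vertica. The trips table and fare_amount column are made up for illustration, and the connection details are placeholders.

```python
# Illustrative contrast: client-side aggregation vs. push-down into Vertica.
# Table, column, and connection details are placeholders.
import vertica_python

conn_info = {'host': 'vertica.example.com', 'port': 5433, 'user': 'dbadmin',
             'password': 'example_password', 'database': 'analytics'}

with vertica_python.connect(**conn_info) as connection:
    cur = connection.cursor()

    # Anti-pattern: drag every row out of the database just to average one column.
    cur.execute("SELECT fare_amount FROM trips")
    fares = [row[0] for row in cur.fetchall()]
    avg_fare_client_side = sum(fares) / len(fares)

    # Better: let Vertica scan and aggregate, and move a single number over the wire.
    cur.execute("SELECT AVG(fare_amount) FROM trips")
    avg_fare_in_database = cur.fetchone()[0]
```

vertica-ml-python, described next, generalizes this push-down idea to full machine learning workflows.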
So instead, we have a project called vertica-ml-python. And this serves as a reference architecture of how you can do scalable machine learning with Vertica. So this project establishes a familiar machine learning workflow that scales with vertica. So it feels similar to like a scickit-learn project except all the processing and aggregation and heavy lifting and data processing happens in vertica. So this makes for a much more lightweight, scalable approach than you might otherwise be used to. So with vertica-ml-python, you can probably use this yourself. But you could also see how it works. So if it doesn't meet all your needs, you could still see the code and customize it to build your own approach. We've also got lots of examples of our UDX framework. And so this is an older GitHub project. We've actually had this for a couple of years, but it is really useful and important so I wanted to plug it here. With our User Defined eXtensions framework or UDXs, this allows you to extend the operators that vertica executes when it does a database load or a database query. So with UDXs, you can write your own domain logic in a C++, Java or Python or R. And you can call them within the context of a SQL query. And vertica brings your logic to that data, and makes it fast and scalable and fault tolerant and correct for you. So you don't have to worry about all those hard problems. So our UDX examples, demonstrate how you can use our SDK to solve interesting problems. And some of these examples might be complete, total usable packages or libraries. So for example, we have a curl source that allows you to extract data from any curlable endpoint and load into vertica. We've got things like an ODBC connector that allows you to access data in an external database via an ODBC driver within the context of a vertica query, all kinds of parsers and string processors and things like that. We also have more exciting and interesting things where you might not really think of vertica being able to do that, like a heat map generator, which takes some XY coordinates and renders it on top of an image to show you the hotspots in it. So the image on the right was actually generated from one of our intern gaming sessions a few years back. So all these things are great examples that show you not just how you can solve problems, but also how you can use this SDK to solve neat things that maybe no one else has to solve, or maybe that are unique to your business and your needs. Another exciting benefit is with testing. So the test automation strategy that we have in vertica-python these clients, really generalizes well beyond the needs of a database client. Anyone that's ever built a vertica integration or an application, probably has a need to write some integration tests. And that could be hard to do with all the moving parts, in the big data solution. But with our code being open source, you can see in vertica-python, in particular, how we've structured our tests to facilitate smooth testing that's fast, deterministic and easy to use. So we've automated the download process, the installation deployment process, of a Vertica Community Edition. And with a single click, you can run through the tests locally and part of the PR workflow via Travis CI. We also do this for multiple different python environments. So for all python versions from 2.7 up to 3.8 for different Python interpreters, and for different Linux distros, we're running through all of them very quickly with ease, thanks to all this automation. 
So today, you can see how we do it in vertica-python; in the future, we might want to spin that out into its own stand-alone testbed starter project, so that if you're starting any new Vertica integration, this might be a good starting point for you to get going quickly. So that brings us to some of the future work we want to do here in the open source space. Well, there's a lot of it. So in terms of the client stuff, for Python, we are marching towards our 1.0 release, which is when we aim to be protocol complete, to support all of Vertica's unique protocols, including COPY LOCAL and some new protocols invented to support complex types, which is our new feature in Vertica 10. We have some cursor enhancements to do things like better streaming and improved performance. Beyond that, we want to take it where you want to bring it, so send us your requests. On the Go client front, it's just about a year behind Python in terms of its protocol implementation, but the basic operations are there. We still have more work to do to implement things like load balancing, some of the advanced auth methods, and other things. But there too, we want to work with you and we want to focus on what's important to you, so that we can continue to grow and be more useful and more powerful over time. Finally, there's this question of, "Well, what about beyond database clients? What else might we want to do with open source?" If you're building a very deep or a robust Vertica integration, you probably need to do a lot more exciting things than just run SQL queries and process the answers. Especially if you're an OEM or you're a vendor that resells Vertica packaged as a black box piece of a larger solution, you might have to manage the whole operational lifecycle of Vertica. There are even fewer standards for doing all these different things compared to the SQL clients. So we started with the SQL clients 'cause that's a well established pattern, and there's lots of downstream work that that can enable. But there's also clearly a need for lots of other open source protocols, architectures and examples to show you how to do these things where we don't yet have real standards. So we talked a little bit about how you could do UDXs or testing or Machine Learning, but there's all sorts of other use cases too. That's why we're excited to announce here our awesome vertica list, which is a new collection of open source resources available on our GitHub page. So if you haven't heard of this awesome manifesto before, I highly recommend you check out this GitHub page on the right. We're not unique here; there's lots of awesome projects for all kinds of different tools and systems out there. And it's a great way to establish a community and share different resources, whether they're open source projects, blogs, examples, references, community resources, and all that. And this tool is an open source project. So it's an open source wiki. And you can contribute to it by submitting a PR yourself. So we've seeded it with some of our favorite tools and projects out there, but there's plenty more out there and we hope to see more grow over time. So definitely check this out and help us make it better. So with that, I'm going to wrap up. I wanted to thank you all. Special thanks to Siting Ren and Roger Huebner, who are the project leads for the Python and Go clients respectively. And also, thanks to all the customers out there who've already been contributing stuff.
This has already been going on for a long time and we hope to keep it going and keep it growing with your help. So if you want to talk to us, you can find us at this email address here. But of course, you can also find us on the Vertica forums, or you could talk to us on GitHub too. And there you can find links to all the different projects I talked about today. And so with that, I think we're going to wrap up and now we're going to hand it off for some Q&A.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Tom Wall | PERSON | 0.99+ |
Sue LeClaire | PERSON | 0.99+ |
Uber | ORGANIZATION | 0.99+ |
Roger Huebner | PERSON | 0.99+ |
Vertica | ORGANIZATION | 0.99+ |
Tom | PERSON | 0.99+ |
Python 2 | TITLE | 0.99+ |
August 2019 | DATE | 0.99+ |
2019 | DATE | 0.99+ |
Python 3 | TITLE | 0.99+ |
two | QUANTITY | 0.99+ |
Sue | PERSON | 0.99+ |
Python | TITLE | 0.99+ |
python | TITLE | 0.99+ |
SQL | TITLE | 0.99+ |
late 2018 | DATE | 0.99+ |
First | QUANTITY | 0.99+ |
end of 2019 | DATE | 0.99+ |
Vertica | TITLE | 0.99+ |
today | DATE | 0.99+ |
Java | TITLE | 0.99+ |
Spark | TITLE | 0.99+ |
C++ | TITLE | 0.99+ |
JavaScript | TITLE | 0.99+ |
vertica-python | TITLE | 0.99+ |
Today | DATE | 0.99+ |
first time | QUANTITY | 0.99+ |
11 different releases | QUANTITY | 0.99+ |
UDXs | TITLE | 0.99+ |
Kafka | TITLE | 0.99+ |
Extending Vertica with the Latest Vertica Ecosystem and Open Source Initiatives | TITLE | 0.98+ |
Grafana | ORGANIZATION | 0.98+ |
PyODBC | TITLE | 0.98+ |
first | QUANTITY | 0.98+ |
UDX | TITLE | 0.98+ |
vertica 10 | TITLE | 0.98+ |
ODBC | TITLE | 0.98+ |
10 | DATE | 0.98+ |
Postgres | TITLE | 0.98+ |
DataDog | ORGANIZATION | 0.98+ |
40 customer reported issues | QUANTITY | 0.97+ |
both | QUANTITY | 0.97+ |
UNLIST TILL 4/2 - Vertica @ Uber Scale
>> Sue: Hi, everybody. Thank you for joining us today for the Virtual Vertica BDC 2020. This breakout session is entitled "Vertica @ Uber Scale." My name is Sue LeClaire, Director of Marketing at Vertica, and I'll be your host for this webinar. Joining me is Girish Baliga, Director, I'm sorry, Engineering Manager of Big Data at Uber. Before we begin, I encourage you to submit questions or comments during the virtual session. You don't have to wait, just type your question or comment in the question box below the slides and click Submit. There will be a Q and A session at the end of the presentation. We'll answer as many questions as we're able to during that time. Any questions that we don't address, we'll do our best to answer offline. Alternately, you can also visit the Vertica forums to post your questions there after the session. Our engineering team is planning to join the forums to keep the conversation going. And as a reminder, you can maximize your screen by clicking the double arrow button in the lower right corner of the slides. And yes, this virtual session is being recorded, and you'll be able to view it on demand this week. We'll send you a notification as soon as it's ready. So let's get started. Girish, over to you.
>> Girish: Thanks a lot, Sue. Good afternoon, everyone. Thanks a lot for joining this session. My name is Girish Baliga. And as Sue mentioned, I manage the interactive and real time analytics teams at Uber. Vertica is one of the main platforms that we support, and Vertica powers a lot of core business use cases. In today's talk, I wanted to cover two main things. First, how Vertica is powering critical business use cases across a variety of orgs in the company. And second, how we are able to do this at scale and with reliability, using some of the additional functionalities and systems that we have built into the Vertica ecosystem at Uber. And towards the end, I also have a little extra bonus for all of you. I will be sharing an easy way for you to take advantage of many of the ideas and solutions that I'm going to present today, that you can apply to your own Vertica deployments in your companies. So stick around and put on your seat belts, and let's go start on the ride. At Uber, our mission is to ignite opportunity by setting the world in motion. So we are focused on solving mobility problems, and enabling people all over the world to solve their local problems, their local needs, their local issues, in a manner that's efficient, fast and reliable. As our CEO Dara has said, we want to become the mobile operating system of local cities and communities throughout the world. As of today, Uber is operational in over 10,000 cities around the world. So, across our various business lines, we have over 110 million monthly users, who use our rides services, our Eats services, and a whole bunch of other services that we provide at Uber. And just to give you a scale of our daily operations, we in the rides business have over 20 million trips per day. And the Eats business is also catching up, particularly during the recent times that we've been having. And so, I hope these numbers give you a sense of the scale of the amount of data that we process each and every day to support our users in their analytical and business reporting needs. So who are these users at Uber? Let's take a quick look. So, Uber, to describe it very briefly, is a lot like Amazon. We are largely an operations and logistics company, and our employee base reflects that.
So over 70% of our employees work in teams which come under the umbrella of Community Operations and Centers of Excellence. So these are all folks working in the various cities and towns that we operate in around the world, and they run the Uber businesses as somewhat local businesses responding to local needs, local market conditions, local regulation and so forth. And Vertica is one of the most important tools that these folks use in their day to day business activities. So they use Vertica to get insights into how their businesses are going, to dig deeply into any issues that they want to triage, to generate reports, to plan for the future, a whole lot of use cases. The second big class of users are in our marketplace team. So marketplace is the engineering team that backs our ride share business. And as part of running this business, a key problem that they have to solve is how to determine what prices to set for particular rides, so that we have a good match between supply and demand. So obviously the real time pricing decisions are made by serving systems, with very detailed and well crafted machine learning models. However, the training data that goes into these models, the historical trends, the insights that go into building these models, a lot of these things are powered by the data that we store and serve out of Vertica. Similarly, in the Eats business, we have use cases spanning all the way from engineering and back-end systems, to support operations, incentives, growth, and a whole bunch of other domains. So the big class of applications that we support across a lot of these business lines is dashboards and reporting. So we have a lot of dashboards, which are built by core data analyst teams and shared with a whole bunch of our operations and other teams. So these are dashboards and reports that run periodically, say once a week or even once a day, depending on the frequency of data that they need. And many of these are powered by the data and the analytics support that we provide on our Vertica platform. Another big category of use cases is growth marketing. So this is to understand historical trends, figure out what various business lines, various customer segments, and various geographical areas are doing in terms of growth, and where it is necessary for us to reinvest or provide some additional incentives, or marketing support, and so forth. So the analysis that backs a lot of these decisions is powered by queries running on Vertica. And finally, the heart and soul of Uber is data science. So data science is how we provide best in class algorithms, pricing, and matching. And a lot of the analysis that goes into figuring out how to build these systems, how to build the models, how to build the various coefficients and parameters that go into making real time decisions, is based on analysis that data scientists run on Vertica systems. So as you can see, Vertica usage spans a whole bunch of organizations and users, all across the different Uber teams and ecosystems. Just to give you some quick numbers, we have over 5,000 weekly active users, people who run queries at least once a week to solve some critical business problem that they have in their day to day operations. So next, let's see how Vertica fits into the Uber data ecosystem. So when users open up their apps and request a ride, or order food delivery on the Eats platform, the apps are talking to our serving systems.
And the serving systems use online storage systems to store the data as the trips and Eats orders are getting processed in real time. So for this, we primarily use an in-house built key-value storage system called Schemaless, and an open source system called Cassandra. We also have other systems like MySQL and Redis, which we use for storing various bits of data to support the serving systems. So all of these operations generate a lot of data that we then want to process and analyze, and use for our operational improvements. So, we have ingestion systems that periodically pull in data from our serving systems and land it in our data lake. So at Uber the data lake is powered by Hadoop, with files stored on HDFS clusters. So once the raw data lands in the data lake, we then have ETL jobs that process these raw datasets and generate modeled and customized datasets, which we then use for further analysis. So once these modeled datasets are available, we load them into our data warehouse, which is entirely powered by Vertica. So then we have a business intelligence layer, with internal tools like QueryBuilder, which is a UI interface to write queries and look at results, and other front-end sites, and Dashbuilder, which is a dashboard building and report management tool. So these are all various tools that we have built within Uber. And these can talk to Vertica and run SQL queries to power whatever dashboards and reports they are supporting. So this is what the data ecosystem looks like at Uber. So why Vertica, and what does it really do for us? So it powers insights that we show on the dashboards that folks use, and it also powers reports that we run periodically. But more importantly, we have some core properties and core feature sets that Vertica provides, which allow us to support many of these use cases very well and at scale. So let me take a brief tour of what these are. So as I mentioned, Vertica powers Uber's data warehouse. So what this means is that we load our core fact and dimension tables onto Vertica. The core fact tables are all the trips, all the Eats orders and all the other line items for various businesses from Uber, stored as partitioned tables. So think of having one partition per day, as well as dimension tables like cities, users, riders, courier partners and so forth. So we have both these two kinds of datasets, which we load into Vertica. And we have full historical data, all the way since we launched these businesses to today. So that folks can do deeper longitudinal analysis, so they can look at patterns, like how the business has grown from month to month, year to year, the same month over a year, over multiple years, and so forth. And the really powerful thing about Vertica is that most of these queries, even the deep longitudinal queries, run very, very fast. And that's really why we love Vertica. Because we see query latency P90s, that is, the 90th percentile of all queries that we run on our platform, typically finish in under a minute. So that's very important for us because Vertica is used primarily for interactive analytics use cases. And providing SQL query execution times under a minute is critical for our users and business owners to get the most out of analytics and Big Data platforms. Vertica also provides a few advanced features that we use very heavily. So as you might imagine, at Uber, one of the most important sets of use cases we have is around geospatial analytics.
In particular, we have some critical internal dashboards, that rely very heavily on being able to restrict datasets by geographic areas, cities, source destination pairs, heat maps, and so forth. And Vertica has a rich array of functions that we use very heavily. We also have, support for custom projections in Vertica. And this really helps us, have very good performance for critical datasets. So for instance, in some of our core fact tables, we have done a lot of query and analysis to figure out, how users run their queries, what kind of columns they use, what combination of columns they use, and what joints they do for typical queries. And then we have laid out our custom projections to maximize performance on these particular dimensions. And the ability to do that through Vertica, is very valuable for us. So we've also had some very successful collaborations, with the Vertica engineering team. About a year and a half back, we had open-sourced a Python Client, that we had built in house to talk to Vertica. We were using this Python Client in our business intelligence layer that I'd shown on the previous slide. And we had open-sourced it after working closely with Eng team. And now Vertica formally supports the Python Client as an open-source project, which you can download to and integrate into your systems. Another more recent example of collaboration is the Vertica Eon mode on GCP. So as most of or at least some of you know, Vertica Eon mode is formally supported on AWS. And at Uber, we were also looking to see if we could run our data infrastructure on GCP. So Vertica team hustled on this, and provided us early preview version, which we've been testing out to see how performance, is impacted by running on the Cloud, and on GCP. And so far, I think things are going pretty well, but we should have some numbers about this very soon. So here I have a visualization of an internal dashboard, that is powered solely by data and queries running on Vertica. So this GIF has sequence have different visualizations supported by this tool. So for instance, here you see a heat map, downgrading heat map of source of traffic demand for ride shares. And then you will see a bunch of arrows here about source destination pairs and the trip lines. And then you can see how demand moves around. So, as the cycles through the various animations, you can basically see all the different kinds of insights, and query shapes that we send to Vertica, which powers this critical business dashboard for our operations teams. All right, so now how do we do all of this at scale? So, we started off with a single Vertica cluster, a few years back. So we had our data lake, the data would land into Vertica. So these are the core fact and dimension tables that I just spoke about. And then Vertica powers queries at our business intelligence layer, right? So this is a very simple, and effective architecture for most use cases. But at Uber scale, we ran into a few problems. So the first issue that we have is that, Uber is a pretty big company at this point, with a lot of users sending almost millions of queries every week. And at that scale, what we began to see was that a single cluster was not able to handle all the query traffic. So for those of you who have done an introductory course, on queueing theory, you will realize that basically, even though you could have all the query is processed through a single serving system. You will tend to see larger and larger queue wait times, as the number of queries pile up. 
And what this means in practice for end users is that they are basically just seeing longer and longer query latencies. But even though the actual query execution time on Vertica itself is probably less than a minute, their query sits in the queue for a bunch of minutes, and that's the end-user-perceived latency. So this was a huge problem for us. The second problem we had was that the cluster becomes a single point of failure. Now Vertica can handle single node failures very gracefully, and it can probably also handle two or three node failures depending on your cluster size and your application. But very soon you will see that when you have beyond a certain number of failures or nodes in maintenance, your cluster will probably need to be restarted, or you will start seeing some downtime due to other issues. Another example of why you would have to have downtime is when you're upgrading software in your clusters. So, essentially, we're a global company, and we have users all around the world; we really cannot afford to have downtime, even for a one-hour slot. So that turned out to be a big problem for us. And as I mentioned, we could have hardware issues. So we might need to upgrade our machines, or we might need to replace storage or memory due to issues with the hardware in there, due to normal wear and tear, or due to abnormal issues. And so because of all of these things, having a single point of failure, having a single cluster, was not really practical for us. So the next thing we did was set up multiple clusters, right? So we had a bunch of identical clusters, all of which have the same datasets. So then we would basically load data using ingestion pipelines from our data lake onto each of these clusters. And then the business intelligence layer would be able to query any of these clusters. So this actually solved most of the issues that I pointed out in the previous slide. So we no longer had a single point of failure. Anytime we had to do version upgrades, we would just take one cluster offline and upgrade the software on it. If we had node failures, we would probably just take out one cluster, if we had to, or we would just have some spare nodes which would rotate into our production clusters, and so forth. However, having multiple clusters led to a new set of issues. So the first problem was that since we have multiple clusters, you would end up with inconsistent schemas. So one of the things to understand about our platform is that we are an infrastructure team. So we don't actually own or manage any of the data that is served on Vertica clusters. So we have dataset owners and publishers who manage their own datasets. Now, exposing multiple clusters to these dataset owners turns out to be not a great idea, right? Because they are not really aware of the importance of having consistency of schemas and datasets across different clusters. So over time, what we saw was that the schema for the same tables would basically get out of sync, because the updates were not consistently applied on all clusters. Or maybe they were just experimenting with some new columns or some new tables in one cluster, but they forgot to delete them, whatever the case might be. We basically ended up in a situation where we saw a lot of inconsistent schemas, even across some of our core tables in our different clusters.
A second issue was that since we had ingestion pipelines that were ingesting data independently into all these clusters, these pipelines could fail independently as well. So what this meant is that if, for instance, the ingestion pipeline into cluster B failed, then the data there would be older than on clusters A and C. So, when a query comes in from the BI layer, and if it happens to hit B, you would probably see different results than you would if you went to A or C. And this was obviously not an ideal situation for our end users, because they would end up seeing slightly inconsistent, slightly different counts. But then that would lead to a bad situation for them where they would not be able to fully trust the data, and the results and insights, that were being returned by the SQL queries and the Vertica systems. And then the third problem was, we had a lot of extra replication. So the 20/80 rule, or maybe even the 90/10 rule, applies to datasets on our clusters as well. So less than 10% of our datasets, for instance, serve 90% of the queries, right? And so it doesn't really make sense for us to replicate all of our data on all the clusters. And so having this setup, where we had to do that, was obviously very suboptimal for us. So then what we did was we basically built some additional systems to solve these problems. So this brings us to our Vertica ecosystem that we have in production today. So on the ingestion side, we built a system called Vertica Data Manager, which basically manages all the ingestion into various clusters. So at this point, people who are managing datasets, or dataset owners and publishers, no longer have to be aware of individual clusters. They just set up their ingestion pipelines with an endpoint in Vertica Data Manager. And the Vertica Data Manager ensures that all the schemas and data are consistent across all our clusters. And on the query side, we built a proxy layer. So what this ensures is that when queries come in from the BI layer, the query is forwarded smartly, with knowledge and data about which clusters are up, which clusters are down, which clusters are available, which clusters are loaded, and so forth. So with these two layers of abstraction between our ingestion and our query, we were able to have a very consistent, almost single-system view of our entire Vertica deployment. And the third bit we had put in place was the data manifest, which was the communication mechanism between ingestion and proxy. So the data manifest basically is a listing of which tables are available on which clusters, which clusters are up to date, and so forth. So with this ecosystem in place, we were also able to solve the extra replication problem. So now we basically have some big clusters, where all the core tables, and all the tables, in fact, are served. So any query that hits the 90% of less frequently queried tables goes to the big clusters. And most of the queries, which hit the 10% of heavily queried, important tables, can also be served by many other small clusters, so it's a much more efficient use of resources. So this basically is the view that we have today of Vertica within Uber. So external to our team, folks just have an endpoint where they basically set up their ingestion jobs, and another endpoint where they can forward their Vertica SQL queries. And that second endpoint is, essentially, the proxy layer. So let's get a little more into the details of each of these layers. So, on the data management side, as I mentioned, we have two kinds of tables. So we have dimension tables.
So these tables are updated every cycle, so the list of cities, the list of drivers, the list of users, and so forth. So these change not so frequently, maybe once a day or so. And since these datasets are not very big, we basically swap them out on every single cycle. Whereas the fact tables, these are tables which have information about our trips or Eats orders and so forth. So these are partitioned. So we have one partition roughly per day, for the last couple of years, and then we have more of a hierarchical partition setup for older data. So what we do is we load the partitions for the last three days on every cycle. The reason we do that is because not all our data comes in at the same time. So we have updates for trips going over the past two or three days, for instance, where people add ratings to their trips, or provide feedback for drivers, and so forth. So we want to capture them all in the row corresponding to that particular trip. And so we update partitions for the last few days to make sure we capture all those updates. And we also update older partitions if, for instance, records were deleted for retention purposes, or GDPR purposes, or other regulatory reasons. So we do this less frequently, but these are also updated if necessary. So there are endpoints which allow dataset owners to specify what partitions they want to update. And as I mentioned, data is typically managed using a hierarchical partitioning scheme. So in this way, we are able to make sure that we take advantage of the data being clustered by day, so that we don't have to update all the data at once. So when we are recovering from a cluster event, like a version upgrade or software upgrade, or hardware fix or failure handling, or even when we are adding a new cluster to the system, the data manager takes care of updating the tables and copying all the new partitions, making sure the schemas are all right. And then we verify the data and schema consistency and make sure everything is up to date before we add this cluster to our serving pool and the proxy starts sending traffic to it. The second thing that the data manager provides is consistency. So the main thing we do here is atomic updates of our tables and partitions for fact tables, using a two-phase commit scheme. So what we do is we load all the new data into temp tables, in all the clusters, in phase one. And then when all the clusters give us ack signals, we basically promote them to primary and set them as the main serving tables for incoming queries. We also optimize the load using Vertica Data Copy. So what this means is, earlier, in a parallel pipelines scheme, we had to ingest data individually from HDFS clusters into each of the Vertica clusters. That took a lot of HDFS bandwidth. But using this nice feature that Vertica provides called Vertica Data Copy, we just load the data into one cluster and then much more efficiently copy it to the other clusters. So this has significantly reduced our ingestion overheads and sped up our load process. And as I mentioned, as the second phase of the commit, all data is promoted at the same time. Finally, we make sure that all the data is up to date by doing some checks around the number of rows and various other key signals for freshness and correctness, which we compare with the data in the data lake. So in terms of schema changes, VDM automatically applies these consistently across all the clusters.
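Before the schema-change staging is described next, here is a minimal hypothetical sketch of the staged-load-and-promote idea, using Vertica's partition-swap function; the table names, the file path, and the exact function signature should be treated as assumptions for illustration rather than the actual pipeline.

    -- Phase one (hypothetical): load the new day's data into a staging table
    -- on every cluster; nothing is visible to serving queries yet.
    CREATE TABLE trips_staging LIKE trips INCLUDING PROJECTIONS;

    COPY trips_staging FROM '/data/trips/2020-03-31.csv' DELIMITER ',' DIRECT;

    -- Phase two: once every cluster has acknowledged its load, atomically
    -- promote the staged partition into the serving table on each cluster.
    SELECT SWAP_PARTITIONS_BETWEEN_TABLES(
        'trips_staging',   -- staging table
        '2020-03-31',      -- min partition key
        '2020-03-31',      -- max partition key
        'trips'            -- serving table
    );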
So first, what we do is we stage these changes to make sure that they are correct. So this catches errors where someone is trying to do an incompatible update, like changing a column type or something like that. So we make sure that schema changes are validated. And then we apply them to all clusters atomically, again for consistency, and provide an overall consistent view of our data to all our users. So on the proxy side, we have transparent support for replicated clusters for all our users. So the way we handle that is, as I mentioned, the cluster-to-table mapping is maintained in the manifest database. And when we have an incoming query, the proxy is able to see which cluster has all the tables in that query, and route the query to the appropriate cluster based on the manifest information. Also, the proxy is aware of the health of individual clusters. So if for some reason a cluster is down for maintenance or upgrades, the proxy is aware of this information. And it does the monitoring based on query response and execution times as well. And it uses this information to route queries to healthy clusters and do some load balancing to ensure that we avoid hotspots on various clusters. So the key takeaways that I have from this talk are primarily these. So we started off with single cluster mode on Vertica, and we ran into a bunch of issues around scaling and availability due to cluster downtime. We had then set up a bunch of replicated clusters to handle the scaling and availability issues. Then we ran into issues around schema consistency, data staleness, and data replication. So we built an entire ecosystem around Vertica, with abstraction layers around data management and ingestion, and a proxy. And with this setup, we were able to enforce consistency and improve storage utilization. So, hopefully this gives you all a brief idea of how we have been able to scale Vertica usage at Uber, and power some of our most business critical and important use cases. So as I mentioned at the beginning, I have an interesting and simple extra update for you. So an easy way in which you all can take advantage of many of the features that we have built into our ecosystem is to use the Vertica Eon mode. So the Vertica Eon mode allows you to set up multiple clusters with consistent data updates, and set them up at various different sizes to handle different query loads. And it automatically handles many of these issues that I mentioned in our ecosystem. So do check it out. We've also been trying it out on GCP, and initial results look very, very promising. So thank you all for joining me on this talk today. I hope you guys learned something new. And hopefully you took away something that you can also apply to your systems. We have a little more time for some questions. So I'll pause for now and take any questions.
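As a closing illustration of the manifest-driven routing described above, here is a small hypothetical sketch; the manifest tables, their columns, and the two-table example query are assumptions made for illustration, not the actual Uber design.

    -- Hypothetical manifest tables maintained by the ingestion side.
    CREATE TABLE manifest_tables (
        cluster_name VARCHAR(64),
        table_name   VARCHAR(128),
        loaded_up_to TIMESTAMP     -- freshness marker for this table on this cluster
    );

    CREATE TABLE manifest_clusters (
        cluster_name VARCHAR(64),
        is_healthy   BOOLEAN,
        load_score   FLOAT         -- lower means less loaded
    );

    -- Proxy-style routing: pick the healthiest, least loaded cluster that has
    -- every table referenced by an incoming query (here, 'trips' and 'dim_cities').
    SELECT c.cluster_name
    FROM manifest_clusters c
    JOIN manifest_tables t ON t.cluster_name = c.cluster_name
    WHERE c.is_healthy
      AND t.table_name IN ('trips', 'dim_cities')
    GROUP BY c.cluster_name, c.load_score
    HAVING COUNT(DISTINCT t.table_name) = 2
    ORDER BY c.load_score
    LIMIT 1;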
UNLIST TILL 4/2 - A Technical Overview of Vertica Architecture
>> Paige: Hello, everybody and thank you for joining us today on the Virtual Vertica BDC 2020. Today's breakout session is entitled A Technical Overview of the Vertica Architecture. I'm Paige Roberts, Open Source Relations Manager at Vertica and I'll be your host for this webinar. Now joining me is Ryan Role-kuh? Did I say that right? (laughs) He's a Vertica Senior Software Engineer. >> Ryan: So it's Roelke. (laughs) >> Paige: Roelke, okay, I got it, all right. Ryan Roelke. And before we begin, I want to be sure and encourage you guys to submit your questions or your comments during the virtual session while Ryan is talking as you think of them as you go along. You don't have to wait to the end, just type in your question or your comment in the question box below the slides and click submit. There'll be a Q and A at the end of the presentation and we'll answer as many questions as we're able to during that time. Any questions that we don't address, we'll do our best to get back to you offline. Now, alternatively, you can visit the Vertica forums to post your question there after the session as well. Our engineering team is planning to join the forums to keep the conversation going, so you can have a chat afterwards with the engineer, just like any other conference. Now also, you can maximize your screen by clicking the double arrow button in the lower right corner of the slides and before you ask, yes, this virtual session is being recorded and it will be available to view on demand this week. We'll send you a notification as soon as it's ready. Now, let's get started. Over to you, Ryan. >> Ryan: Thanks, Paige. Good afternoon, everybody. My name is Ryan and I'm a Senior Software Engineer on Vertica's Development Team. I primarily work on improving Vertica's query execution engine, so usually in the space of making things faster. Today, I'm here to talk about something that's more general than that, so we're going to go through a technical overview of the Vertica architecture. So the intent of this talk, essentially, is to just explain some of the basic aspects of how Vertica works and what makes it such a great database software and to explain what makes a query execute so fast in Vertica, we'll provide some background to explain why other databases don't keep up. And we'll use that as a starting point to discuss an academic database that paved the way for Vertica. And then we'll explain how Vertica design builds upon that academic database to be the great software that it is today. I want to start by sharing somebody's approximation of an internet minute at some point in 2019. All of the data on this slide is generated by thousands or even millions of users and that's a huge amount of activity. Most of the applications depicted here are backed by one or more databases. Most of this activity will eventually result in changes to those databases. For the most part, we can categorize the way these databases are used into one of two paradigms. First up, we have online transaction processing or OLTP. OLTP workloads usually operate on single entries in a database, so an update to a retail inventory or a change in a bank account balance are both great examples of OLTP operations. Updates to these data sets must be visible immediately and there could be many transactions occurring concurrently from many different users. OLTP queries are usually key value queries. The key uniquely identifies the single entry in a database for reading or writing. 
Early databases and applications were probably designed for OLTP workloads. This example on the slide is typical of an OLTP workload. We have a table, accounts, such as for a bank, which tracks information for each of the bank's clients. An update query, like the one depicted here, might be run whenever a user deposits $10 into their bank account. Our second category is online analytical processing or OLAP which is more about using your data for decision making. If you have a hardware device which periodically records how it's doing, you could analyze trends of all your devices over time to observe what data patterns are likely to lead to failure or if you're Google, you might log user search activity to identify which links helped your users find the answer. Analytical processing has always been around but with the advent of the internet, it happened at scales that were unimaginable, even just 20 years ago. This SQL example is something you might see in an OLAP workload. We have a table, searches, logging user activity. We will eventually see one row in this table for each query submitted by users. If we want to find out what time of day our users are most active, then we could write a query like this one on the slide which counts the number of unique users running searches for each hour of the day. So now let's rewind to 2005. We don't have a picture of an internet minute in 2005, we don't have the data for that. We also don't have the data for a lot of other things. The term Big Data is not quite yet on anyone's radar and The Cloud is also not quite there or it's just starting to be. So if you have a database serving your application, it's probably optimized for OLTP workloads. OLAP workloads just aren't mainstream yet and database engineers probably don't have them in mind. So let's innovate. It's still 2005 and we want to try something new with our database. Let's take a look at what happens when we do run an analytic workload in 2005. Let's use as a motivating example a table of stock prices over time. In our table, the symbol column identifies the stock that was traded, the price column identifies the new price and the timestamp column indicates when the price changed. We have several other columns which, we should know that they're there, but we're not going to use them in any example queries. This table is designed for analytic queries. We're probably not going to make any updates or look at individual rows since we're logging historical data and want to analyze changes in stock price over time. Our database system is built to serve OLTP use cases, so it's probably going to store the table on disk in a single file like this one. Notice that each row contains all of the columns of our data in row major order. There's probably an index somewhere in the memory of the system which will help us to point lookups. Maybe our system expects that we will use the stock symbol and the trade time as lookup keys. So an index will provide quick lookups for those columns to the position of the whole row in the file. If we did have an update to a single row, then this representation would work great. We would seek to the row that we're interested in, finding it would probably be very fast using the in-memory index. And then we would update the file in place with our new value. On the other hand, if we ran an analytic query like we want to, the data access pattern is very different. The index is not helpful because we're looking up a whole range of rows, not just a single row. 
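For reference, here are hedged reconstructions of the two slide queries mentioned above, the bank-account update and the hourly unique-user count; the column names are assumptions, since the slides are not reproduced in this transcript. The discussion of the stocks table's scan pattern continues right after.

    -- OLTP style: a single-row, key-value update when a user deposits $10.
    UPDATE accounts
    SET balance = balance + 10
    WHERE account_id = 42;

    -- OLAP style: unique users running searches, bucketed by hour of day.
    SELECT EXTRACT(hour FROM search_ts) AS hour_of_day,
           COUNT(DISTINCT user_id)      AS unique_users
    FROM searches
    GROUP BY EXTRACT(hour FROM search_ts)
    ORDER BY hour_of_day;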
As a result, the only way to find the rows that we actually need for this query is to scan the entire file. We're going to end up scanning a lot of data that we don't need and that won't just be the rows that we don't need, there's many other columns in this table. Many information about who made the transaction, and we'll also be scanning through those columns for every single row in this table. That could be a very serious problem once we consider the scale of this file. Stocks change a lot, we probably have thousands or millions or maybe even billions of rows that are going to be stored in this file and we're going to scan all of these extra columns for every single row. If we tried out our stocks use case behind the desk for the Fortune 500 company, then we're probably going to be pretty disappointed. Our queries will eventually finish, but it might take so long that we don't even care about the answer anymore by the time that they do. Our database is not built for the task we want to use it for. Around the same time, a team of researchers in the North East have become aware of this problem and they decided to dedicate their time and research to it. These researchers weren't just anybody. The fruits of their labor, which we now like to call the C-Store Paper, was published by eventual Turing Award winner, Mike Stonebraker, along with several other researchers from elite universities. This paper presents the design of a read-optimized relational DBMS that contrasts sharply with most current systems, which are write-optimized. That sounds exactly like what we want for our stocks use case. Reasoning about what makes our queries executions so slow brought our researchers to the Memory Hierarchy, which essentially is a visualization of the relative speeds of different parts of a computer. At the top of the hierarchy, we have the fastest data units, which are, of course, also the most expensive to produce. As we move down the hierarchy, components get slower but also much cheaper and thus you can have more of them. Our OLTP databases data is stored in a file on the hard disk. We scanned the entirety of this file, even though we didn't need most of the data and now it turns out, that is just about the slowest thing that our query could possibly be doing by over two orders of magnitude. It should be clear, based on that, that the best thing we can do to optimize our query's execution is to avoid reading unnecessary data from the disk and that's what the C-Store researchers decided to look at. The key innovation of the C-Store paper does exactly that. Instead of storing data in a row major order, in a large file on disk, they transposed the data and stored each column in its own file. Now, if we run the same select query, we read only the relevant columns. The unnamed columns don't factor into the table scan at all since we don't even open the files. Zooming out to an internet scale sized data set, we can appreciate the savings here a lot more. But we still have to read a lot of data that we don't need to answer this particular query. Remember, we had two predicates, one on the symbol column and one on the timestamp column. Our query is only interested in AAPL stock, but we're still reading rows for all of the other stocks. So what can we do to optimize our disk read even more? Let's first partition our data set into different files based on the timestamp date. This means that we will keep separate files for each date. 
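For concreteness, here is a hedged reconstruction of the analytic query being optimized, with its two predicates on the symbol and timestamp columns; the exact date range and the aggregate are assumptions, and "timestamp" is quoted only to avoid the keyword.

    -- Average AAPL price per day over one month (range and names assumed).
    SELECT "timestamp"::DATE AS trade_date,
           AVG(price)        AS avg_price
    FROM stocks
    WHERE symbol = 'AAPL'
      AND "timestamp" >= '2005-01-01'
      AND "timestamp" <  '2005-02-01'
    GROUP BY "timestamp"::DATE
    ORDER BY trade_date;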
When we query the stocks table, the database knows all of the files we have to open. If we have a simple predicate on the timestamp column, as our sample query does, then the database can use it to figure out which files we don't have to look at at all. So now all of the disk reads that we have to do to answer our query will produce rows that pass the timestamp predicate. This eliminates a lot of wasteful disk reads. But not all of them. We do have another predicate on the symbol column where symbol equals AAPL. We'd like to avoid disk reads of rows that don't satisfy that predicate either. And we can avoid those disk reads by clustering all the rows that match the symbol predicate together. If all of the AAPL rows are adjacent, then as soon as we see something different, we can stop reading the file. We won't see any more rows that can pass the predicate. Then we can use the positions of the rows we did find to identify which pieces of the other columns we need to read. One technique that we can use to cluster the rows is sorting. So we'll use the symbol column as a sort key for all of the columns. And that way we can reconstruct a whole row by seeking to the same row position in each file. It turns out, having sorted all of the rows, we can do a bit more. We don't have any more wasted disk reads but we can still be more efficient with how we're using the disk. We've clustered all of the rows with the same symbol together so we don't really need to bother repeating the symbol so many times in the same file. Let's just write the value once and say how many rows we have. This run length encoding technique can compress large numbers of rows into a small amount of space. In this example, we de-duplicate just a few rows but you can imagine de-duplicating many thousands of rows instead. This encoding is great for reducing the amount of disk we need to read at query time, but it also has the additional benefit of reducing the total size of our stored data. Now our query requires substantially fewer disk reads than it did when we started. Let's recap what the C-Store paper did to achieve that. First, we transposed our data to store each column in its own file. Now, queries only have to read the columns used in the query. Second, we partitioned the data into multiple file sets so that all rows in a file have the same value for the partition column. Now, a predicate on the partition column can skip non-matching file sets entirely. Third, we selected a column of our data to use as a sort key. Now rows with the same value for that column are clustered together, which allows our query to stop reading data once it finds non-matching rows. Finally, sorting the data this way enables high compression ratios, using run length encoding, which minimizes the size of the data stored on the disk. The C-Store system combined each of these innovative ideas to produce an academically significant result. And if you used it behind the desk of a Fortune 500 company in 2005, you probably would've been pretty pleased. But it's not 2005 anymore and the requirements of a modern database system are much stricter. So let's take a look at how C-Store fares in 2020. First of all, we have designed the storage layer of our database to optimize a single query in a single application. Our design optimizes the heck out of that query and probably some similar ones, but if we want to do anything else with our data, we might be in a bit of trouble. What if we just decide we want to ask a different question?
For example, in our stock example, what if we want to plot all the trades made by a single user over a large window of time? How do our optimizations for the previous query measure up here? Well, our data's partitioned on the trade date; that could still be useful, depending on our new query. If we want to look at a trader's activity over a long period of time, we would have to open a lot of files. But if we're still interested in just a day's worth of data, then this optimization is still an optimization. Within each file, our data is ordered on the stock symbol. That's probably not too useful anymore; the rows for a single trader aren't going to be clustered together, so we will have to scan all of the rows in order to figure out which ones match. You could imagine a worse design, but as it becomes crucial to optimize this new type of query, we might have to go as far as reconfiguring the whole database. The next problem is one of scale. One server is probably not good enough to serve a database in 2020. C-Store, as described, runs on a single server and stores lots of files. What if the data overwhelms this small system? We could imagine exhausting the file system's inodes limit with lots of small files due to our partitioning scheme. Or we could imagine something simpler, just filling up the disk with huge volumes of data. But there's an even simpler problem than that. What if something goes wrong and C-Store crashes? Then our data is no longer available to us until the single server is brought back up. A third concern, another one of scalability, is that one deployment does not really suit all possible things and use cases we could imagine. We haven't really said anything about being flexible. A contemporary database system has to integrate with many other applications, which might themselves have pretty restricted deployment options. Or the demands imposed by our workloads have changed and the setup you had before doesn't suit what you need now. C-Store doesn't do anything to address these concerns. What the C-Store paper did do was lead very quickly to the founding of Vertica. Vertica's architecture and design are essentially all about bringing the C-Store designs into an enterprise software system. The C-Store paper was just an academic exercise, so it didn't really need to address any of the hard problems that we just talked about. But Vertica, the first commercial database built upon the ideas of the C-Store paper, would definitely have to. This brings us back to the present to look at how an analytic query runs in 2020 on the Vertica Analytic Database. Vertica takes the key idea from the paper, that we can significantly improve query performance by changing the way our data is stored, and gives its users the tools to customize their storage layer in order to heavily optimize really important or commonly run queries. On top of that, Vertica is a distributed system, which allows it to scale up to internet-sized data sets, as well as have better reliability and uptime. We'll now take a brief look at what Vertica does to address the three inadequacies of the C-Store system that we mentioned. To avoid locking into a single database design, Vertica provides tools for the database user to customize the way their data is stored. To address the shortcomings of a single node system, Vertica coordinates processing among multiple nodes.
To acknowledge the large variety of desirable deployments, Vertica does not require any specialized hardware and has many features which smoothly integrate it with a Cloud computing environment. First, we'll look at the database design problem. We're a SQL database, so our users are writing SQL and describing their data in the SQL way, with the Create Table statement. Create Table is a logical description of what your data looks like, but it doesn't specify the way that it has to be stored. For a single Create Table, we could imagine a lot of different storage layouts. Vertica adds some extensions to SQL so that users can go even further than Create Table and describe the way that they want the data to be stored. Using terminology from the C-Store paper, we provide the Create Projection statement. Create Projection specifies how table data should be laid out, including column encoding and sort order. A table can have multiple projections, each of which could be ordered on different columns. When you query a table, Vertica will answer the query using the projection which it determines to be the best match. Referring back to our stock example, here's a sample Create Table and Create Projection statement. Let's focus on our heavily optimized example query, which had predicates on the stock symbol and date. We specify that the table data is to be partitioned by date. The Create Projection statement here is excellent for this query. We specify, using the order by clause, that the data should be ordered according to our predicates. We'll use the timestamp as a secondary sort key. Each projection stores a copy of the table data. If you don't expect to need a particular column in a projection, then you can leave it out. Our average price query didn't care about who did the trading, so maybe our projection design for this query can leave the trader column out entirely. If the question we want to ask ever does change, maybe we already have a suitable projection, but if we don't, then we can create another one. This example shows another projection which would be much better at identifying trends of traders, rather than identifying trends for a particular stock. Next, let's take a look at our second problem, that one, or excuse me, how should you decide what design is best for your queries? Well, you could spend a lot of time figuring it out on your own, or you could use Vertica's Database Designer tool, which will help you by automatically analyzing your queries and spitting out a design which it thinks is going to work really well. If you want to learn more about the Database Designer tool, then you should attend the session Vertica Database Designer: Today and Tomorrow, which will tell you a lot about what the Database Designer does and some recent improvements that we have made. Okay, now we'll move to our next problem. (laughs) The challenge that one server does not fit all. In 2020, we have several orders of magnitude more data than we had in 2005. And you need a lot more hardware to crunch it. It's not tractable to keep multiple petabytes of data in a system with a single server. So Vertica doesn't try. Vertica is a distributed system, so we'll deploy multiple servers which work together to maintain such a high data volume. In a traditional Vertica deployment, each node keeps some of the data in its own locally-attached storage. Data is replicated so that there is a redundant copy somewhere else in the system. If any one node goes down, then the data that it served is still available on a different node.
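Since the slide DDL is not reproduced in this transcript, here is a hedged reconstruction of what the Create Table and Create Projection statements for this example might look like; the column types are assumptions, "timestamp" is quoted only to avoid the keyword, and the segmentation clause at the end anticipates the discussion in the next paragraph.

    -- Hedged reconstruction; types are assumptions, other columns omitted.
    CREATE TABLE stocks (
        symbol      VARCHAR(10),
        price       NUMERIC(12,4),
        "timestamp" TIMESTAMP,
        trader      VARCHAR(64)
    )
    PARTITION BY "timestamp"::DATE;

    -- Projection for the average-price query: order by the predicate columns
    -- (symbol first, timestamp second) and leave the trader column out.
    CREATE PROJECTION stocks_by_symbol
    (
        symbol ENCODING RLE,
        "timestamp",
        price
    )
    AS
    SELECT symbol, "timestamp", price
    FROM stocks
    ORDER BY symbol, "timestamp"
    SEGMENTED BY HASH(symbol) ALL NODES KSAFE 1;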
Coming back to the cluster design, we'll also have it so that in the system, there's no special node with extra duties. All nodes are created equal. This ensures that there is no single point of failure. Rather than replicate all of your data, Vertica divvies it up amongst all of the nodes in your system. We call this segmentation. The way data is segmented is another parameter of storage customization, and it can definitely have an impact upon query performance. A common way to segment data is by using a hash expression, which essentially randomizes the node that a row of data belongs to, but with a guarantee that the same data will always end up in the same place. Describing the way data is segmented is another part of the Create Projection statement, as seen in this example. Here we segment on the hash of the symbol column, so all rows with the same symbol will end up on the same node. For each row that we load into the system, we'll apply our segmentation expression. The result determines which segment the row belongs to, and then we'll send the row to each node which holds a copy of that segment. In this example, our projection is marked KSAFE 1, so we will keep one redundant copy of each segment. When we load a row, we might find that its segment has copies on Node One and Node Three, so we'll send a copy of the row to each of those nodes. If Node One is temporarily disconnected from the network, then Node Three can serve the other copy of the segment so that the whole system remains available. The last challenge we brought up from the C-Store design was that one deployment does not fit all. Vertica's cluster design neatly addresses many of our concerns here. Our use of segmentation to distribute data means that a Vertica system can scale to any size of deployment. And since we lack any special hardware or nodes with special purposes, Vertica servers can run anywhere, on premise or in the Cloud. But let's suppose you need to scale out your cluster to rise to the demands of a higher workload. Suppose you want to add another node. This changes the division of the segmentation space. We'll have to re-segment every row in the database to find its new home, and then we'll have to move around any data that belongs to a different segment. This is a very expensive operation, not something you want to be doing all that often. Traditional Vertica doesn't solve that problem especially well, but Vertica Eon Mode definitely does. Vertica's Eon Mode is a large set of features which are designed with a Cloud computing environment in mind. One feature of this design is elastic throughput scaling, which is the idea that you can smoothly change your cluster size without having to pay the expense of shuffling your entire database. Vertica Eon Mode had an entire session dedicated to it this morning. I won't say any more about it here, but maybe you already attended that session, or if you haven't, then I definitely encourage you to listen to the recording. If you'd like to learn more about the Vertica architecture, then you'll find on this slide links to several of the academic conference publications: these four papers here, as well as the Vertica Seven Years Later paper, which describes some of the Vertica designs seven years after the founding, and also a paper about the innovations of Eon Mode. And of course, the Vertica documentation is an excellent resource for learning more about what's going on in a Vertica system. I hope you enjoyed learning about the Vertica architecture. I would be very happy to take all of your questions now.
Thank you for attending this session.
UNLIST TILL 4/2 The Data-Driven Prognosis
>> Narrator: Hi, everyone, thanks for joining us today for the Virtual Vertica BDC 2020. Today's breakout session is entitled Toward Zero Unplanned Downtime of Medical Imaging Systems Using Big Data. My name is Sue LeClaire, Director of Marketing at Vertica, and I'll be your host for this webinar. Joining me is Mauro Barbieri, lead architect of analytics at Philips. Before we begin, I want to encourage you to submit questions or comments during the virtual session. You don't have to wait. Just type your question or comment in the question box below the slides and click Submit. There will be a Q&A session at the end of the presentation. And we'll answer as many questions as we're able to during that time. Any questions that we don't get to, we'll do our best to answer offline. Alternatively, you can also visit the Vertica forums to post your question there after the session. Our engineering team is planning to join the forums to keep the conversation going. Also a reminder that you can maximize your screen by clicking the double arrow button in the lower right corner of the slide. And yes, this virtual session is being recorded, and it will be available to view on demand this week. We'll send you a notification as soon as it's ready. So let's get started. Mauro, over to you. >> Thank you, good day everyone. So medical imaging systems such as MRI scanners, interventional guided therapy machines, CT scanners, and X-ray systems need to provide hospitals optimal clinical performance but also a predictable cost of ownership. So clinicians understand the need for maintenance of these devices, but they just want it to be non-intrusive and scheduled. And whenever there is a problem with the system, the hospital expects Philips services to resolve it fast, and at the first interaction with them. In this presentation you will see how we are using big data to increase the uptime of our medical imaging systems. I'm sure you have heard of the company Philips. Philips is a company that was founded 129 years ago, in 1891, in Eindhoven in the Netherlands, and they started by manufacturing light bulbs and other electrical products. The two brothers Gerard and Anton took an investment from their father Frederik, and they set out to manufacture and sell light bulbs. And as you may know, a key technology for making light bulbs was glass and vacuum. So when you're good at making glass products and vacuum and light bulbs, then it is an easy step to start making radios, like they did, but also X-ray tubes. So Philips actually entered very early into the market of medical imaging and healthcare technology. And this is our core as a company, and it's also our future. So, healthcare, I mean, we are in a situation now in which everybody recognizes the importance of it. And we see incredible trends in a transition from what we call volume-based healthcare to value-based, where the clinical outcomes are driving improvements in the healthcare domain. Where it's not enough to respond to healthcare challenges, but we need to be involved in prevention and in maintaining the population's wellness; and from a situation in which we are episodically in touch with healthcare, we need to continuously monitor and continuously take care of populations. And from healthcare facilities and technology available to a few select and rich countries, we want to make healthcare accessible to everybody throughout the world. And this, of course, poses incredible challenges.
And this is why we are transforming Philips to become a healthcare technology leader. So Philips has been a company active in many sectors, realizing many kinds of technologies, and we've been focusing on healthcare. And we have been transitioning from creating and selling products to making solutions that address these clinical challenges, and from selling boxes to creating long-term relationships with our customers. And so, if you have known the Philips brand from shavers, from televisions to light bulbs, you probably now also recognize the involvement of Philips in the healthcare domain, in diagnostic imaging, in ultrasound, in image guided therapy systems, in digital pathology, non-invasive ventilation, as well as patient monitoring, intensive care, telemedicine, but also radiology, cardiology, and oncology informatics. Philips has become a powerhouse of healthcare technology. To give you an idea of this, these are the numbers from 2019: almost 20 billion in sales, 4% comparable sales growth with respect to the previous year, and about 10% of the sales are reinvested in R&D. This is also shown in the number of patent rights; last year we filed more than 1,000 patents in the healthcare domain. And the company has about 80,000 employees, active globally in over 100 countries. So, let me focus now on the type of products that are in the scope of this presentation. This is a Philips Magnetic Resonance Imaging scanner, also called the Ingenia 3.0 Tesla, and it is an incredible machine. Apart from being very beautiful, as you can see, it's a very powerful technology. It can make high resolution images of the human body without harmful radiation. And it's a complex machine. First of all, it's massive: it weighs 4.6 thousand kilograms. And it has superconducting magnets cooled with liquid helium at -269 degrees Celsius. And it's actually full of software, millions and millions of lines of code. And it occupies three rooms. What you see in this picture is the examination room, but there is also a technical room which is full of equipment, custom hardware, and machinery that is needed to operate this complex device. This is another system, an interventional guided therapy system, where the X-ray is used during interventions with the patient on the table. You see on the left what we call the C-arm, a robotic arm that moves and can take images of the patient while they are being operated on; it's used for cardiology interventions, neurological interventions, cardiovascular interventions. There's a table that moves in very complex ways, and again it occupies two rooms: this room that we see here, but also a room full of cabinets and hardware and computers. Another characteristic of this machine is that, as it is used during medical interventions, it has to interact with all kinds of other equipment. This is another system, a Computed Tomography scanner, the Icon, which is unique due to its special detection technology. It has an image resolution of up to 0.5 millimeters, making thousand-by-thousand-pixel images. And it is also a complex machine. This is a picture of the inside of a comparable device, not really an Icon, but it has, again, a rotating gantry, which weighs two and a half tons. So, it's a combination of an X-ray tube on top, high voltage generators to power the X-ray tube, and an array of detectors to create the images.
And this rotates at 220 revolutions per minute, making 50 frames per second to make 3D reconstructions of the body. So a lot of technology, complex technology, and this technology is made for this situation. We make it for clinicians, who are busy saving people's lives. And of course, they want optimal clinical performance. They want the best technology to treat the patients. But they also want a predictable cost of ownership. They want predictable system operations. They want their clinical schedules not interrupted. So, they understand these machines are complex, full of technology. And these machines may require maintenance, may require software updates, sometimes may even require some parts, hardware parts, to be replaced, but they don't want to have it unplanned. They don't want to have unplanned downtime. They would hate having to send patients home and to have to reschedule visits. So they understand maintenance. They just want to have it scheduled, predictable, and non-intrusive. So already a number of years ago, we started a transition from what we call reactive maintenance services of these devices to proactive. So, let me show you what we mean with this. Normally, if a system in the field has an issue, the traditional reactive workflow would be that the customer calls a call center and reports the problem. The company servicing the device would dispatch a field service engineer; the field service engineer would go on site, do troubleshooting, literally smell, listen for noise, watch for lights, for blinking LEDs or other unusual issues, and would troubleshoot the issue, find the root cause, and perhaps decide that a spare part needs to be replaced. He would order a spare part. The part would have to be delivered to the site. Either immediately, or the engineer would need to come back another day when the part is available, and perform the repair. That means replacing the parts, doing all the needed tests and validations, and finally releasing the system for clinical use. So as you can see, there are a lot of steps, and also handovers of information between different people, between different organizations even. Would it be better to actually keep monitoring the installed base, keep observing the machine, and actually, based on the information collected, detect or even predict when an issue is going to happen? And then, instead of reacting to a customer calling, proactively approach the customer, scheduling preventive service, and therefore avoid the problem. So this is actually what we call proactive service. And this is what we've been transitioning to using Big Data, and Big Data is just one ingredient. In fact, there are more things that are needed. The devices themselves need to be designed for reliability and predictability. If the device is a black box that does not communicate its status to the outside world, if it does not transmit data, then of course it is not possible to observe and therefore predict issues. This of course requires a remote service infrastructure, or an IoT infrastructure as it is called nowadays: the capability to connect the medical device with a data center in an enterprise infrastructure, collect the data, and perform the remote troubleshooting and the predictions.
Also, the right processes and the right organization need to be in place, because an organization that is, you know, waiting for the customer to call, and then has a number of field service engineers available and a certain amount of spare parts in stock, is a different organization from an organization that actually is continuously observing the installed base and is scheduling actions to prevent issues. Another pillar is knowledge management. So in order to realize predictive models and to have predictive service actions, it's important to manage knowledge about failure modes and about maintenance procedures very well, to have it standardized and digitalized and available. And last but not least, of course, the predictive models themselves. So we talked about transmitting data from the installed base, from the medical device, to an enterprise infrastructure that would analyze the data and generate predictions; the predictive models are exactly the last ingredient that is needed. So this is not something that I'm telling you for the first time; it is actually a strategic intent of Philips, where we aim for zero unplanned downtime. And we market it that way. It is also not a secret that we do it by using big data. And, of course, there could be other methods to achieve the same goal. But we started using big data quite many years ago. And one of the reasons is that our medical devices are already wired to collect lots of data about their functioning. So they collect events, error logs, and sensor data. And to give you an idea, just as an order of magnitude of the size of the data, one MRI scanner can log more than 1 million events per day, hundreds of thousands of sensor readings, and tens of thousands of many other data elements. And so this is truly big data. On the other hand, this data was actually not designed for predictive maintenance. You have to think that a medical device of this type stays in the field for about 10 years. Some a little bit longer, some a bit shorter. So these devices were designed 10 years ago, and not all components were designed with predictive maintenance in mind, with IoT, and with the latest technology; at that time, you know, people were not so forward looking. So the key challenge is taking the data which is already available, which is already logged by the medical devices, integrating it, and creating predictive models. And if we dive a little bit more into the research challenges, this is one of the challenges: how to integrate diverse data sources, especially how to automate the costly process of data provisioning and cleaning? But also, once you have the data, let's say, how to create these models that can predict failures and the degradation of performance of a single medical device? Once you have these models and alerts, another challenge is how to automatically recommend service actions based on the probabilistic information on these possible failures? And once you have the insights, even if you can recommend an action, recommending an action should still be done with the goal of planning maintenance for generating value. That means balancing costs and benefits, preventing unplanned downtime without, of course, scheduling unnecessary interventions, because every intervention, of course, is a disruption to the clinical schedule.
And there are many more applications that can be built on this, such as the optimal management of spare parts supplies. So how do you approach this problem? Our approach was to collect into one database, Vertica, a large amount of historical data: first of all, historical data coming from the medical devices, so event logs, parameter values, system configurations, sensor readings, all the data that we have at our disposal, in the same database together with records of failures, maintenance records, service work orders, part replacements, contracts, so basically the evidence of failures. And once you have data from the medical devices and data from the failures in the same database, it becomes possible to correlate event logs, errors, signals, and sensor readings with records of failures and records of part replacements and maintenance operations. And we did that also with a specific approach. So we created integrated teams, and every integrated team had three figures, not necessarily three people; they were actually multiple people. But there was at least one business owner from the service organization. And this business owner is the person who knows what is relevant, which use cases are relevant to solve for a particular type of product or a particular market, what basically is generating value or is worthwhile tackling as an organization. And we have data scientists; data scientists are the ones who actually can manipulate data. They can write the queries, they can write the models and robust statistics. They can create visualizations, and they are the ones who really manipulate the data. Last but not least, very important, is subject matter experts. Subject matter experts are the people who know the failure modes, who know about the functioning of the medical devices; perhaps they even designed them, they come from the design side, or they come from the service innovation side, or even from the field: people who have been servicing the machines in real life for many, many years. So, they are familiar with the failure modes, but also familiar with the type of data that is logged and the processes and how the systems actually behave, if you allow me, in the wild, in the field. So the combination of these three figures was key. Because data scientists alone, just statisticians basically, people who can do machine learning, are not very effective, because the data is too complicated, too complex, so they will spend a huge amount of time just trying to figure out the data. Or perhaps they will spend the time tackling things that are useless, because a subject matter expert knows much quicker which data points are useful, which phenomena can be found in the data or probably not found. So the combination of subject matter experts and data scientists is very powerful, and together, guided by a business owner, we could tackle the most useful use cases first. So, these teams set out to work and they developed three things, mainly. First of all, they developed insights on the failure modes. So, by looking at the data and analyzing information about what happened in the field, they find out exactly how things fail, in a very pragmatic and quantitative way. Also, they of course set out to develop the predictive models with associated alerts and service actions. And a predictive model is not just an alert, it is not just a flag that turns on like a traffic light, you know; there's much more to it than that.
These alerts have to be interpreted and used by a highly skilled and trained engineer, for example in a call center, who needs to evaluate the alert and plan a service action. A service action may involve ordering a replacement for an expensive part; it may involve calling up the customer hospital and scheduling a period of downtime to replace the part. So it has, or could have, an impact on the clinical practice. It is therefore important that the alert is coupled with sufficient evidence and information for such a highly skilled, trained engineer to plan the service action efficiently. That is a lot of work in terms of preparing data, preparing visualizations, and making sure that all the information is represented correctly and in a compact form. Additionally, these teams gain insight into the failure modes, and so they can provide input to the R&D organization to improve the products. To summarize this graphically: we took a lot of historical data coming from the medical devices, but also data from relational databases with the service work orders, the part replacements, the contract information; we integrated it and set up the data analytics. From there we do not have value yet; value only starts appearing when we use the insights of the data analytics, the model, on live data. When we process live data with the model, we can generate alerts, and the alerts can be used to plan maintenance; planned maintenance replacing unplanned downtime is what creates value. To give an idea of the type of models — I cannot show you the details of these predictive models — this is just a picture of some of the components of our medical devices for which we have models, for which we cover the failure modes: hard disks, clinical-grade monitors, X-ray tubes, and so forth. For MRI machines, a lot of custom hardware and other types of amplifiers and electronics. The alerts are then displayed in a dashboard, what we call a remote monitoring dashboard. We have a team of remote monitoring engineers that basically surveys the installed base, looks at this dashboard, and picks up these alerts. And an alert, as I said before, is not just a flag; it contains a lot of information about the failure and about the medical device. The remote monitoring engineers will pick up these alerts, review them, and create cases for the market organizations to handle. They see an alert coming in and they create a case, so that the call center in a particular country can call the customer and make an appointment to schedule a service action, or add a preventive action to the schedule of the field service engineer who is already supposed to visit that customer, for example. This is a high-level picture of the overall data platform architecture. At the bottom we have the installed base; the installed base is formed by all our medical devices that are connected to our Philips remote service network. Data is transmitted in a secure way to our enterprise infrastructure, where we have a so-called data lake, which is basically an archive where we store the data as it comes from the customers; it is scrubbed and protected.
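As an illustration of the point that an alert is much more than a flag, the payload of such an alert might look roughly like this. The field names and values are invented for the example; they do not come from Philips.

```python
# Invented example of an alert that carries evidence, not just an on/off flag.
alert = {
    "device_id": "MRI-0042",             # which system raised the alert
    "model": "x_ray_tube_degradation",   # which predictive model fired
    "model_version": "3.1.0",
    "probability_of_failure_30d": 0.78,  # probabilistic output, not a yes/no
    "evidence": {
        "arc_events_last_7d": 14,        # supporting signals from the event log
        "filament_current_trend": "rising",
        "last_tube_replacement": "2018-06-02",
    },
    "recommended_action": "order replacement tube; schedule downtime window",
}
```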
From the data lake, we have ETL processes, Extract, Transform and Load, that in parallel analyze this information, parse all these files and all this data, and extract the relevant parameters. The reason is that the data coming from the medical devices is very verbose and in legacy formats, sometimes in binary formats, in strange legacy structures. We therefore parse it, we structure it, and we make it usable by the data science teams. The results are stored in a Vertica cluster, in a data warehouse. In the same data warehouse we also store information from other enterprise systems, from all kinds of databases: Microsoft SQL Server, Teradata, SAP, Salesforce applications. So the enterprise IT systems are also connected to Vertica, and their data is inserted into Vertica. From Vertica, the data is pulled by our predictive models, which are Python and R scripts that run on our proprietary environment, HealthSuite Insights. From this environment we generate the alerts, which are then used by the remote monitoring application. It is not the only application; this is the case of remote monitoring, but we also have applications for remote service in particular. Whenever we cannot predict an issue from happening, or we cannot prevent it, and we need to react to a customer call, we can still use the data to very quickly troubleshoot the system, find the root cause, and advise on the best service action. Additionally, there are reliability dashboards, because all this data can also be used to perform reliability studies and improve the design of the medical devices, and it is used by R&D. Access is possible with all kinds of tools: Vertica gives the flexibility to connect with JDBC, to create dashboards using Power BI or QlikView, or to simply use R and Python directly to perform analytics. A little summary of the size of the data: at the moment we have integrated about 500 terabytes worth of data tables, about 30 trillion data points, from more than eighty different data sources, for our complete connected installed base, including our customer relationship management system and SAP. We have also integrated data from the factory and from repair shops; this is very useful because having information from the factory allows us to characterize components and devices when they are new, when they have not yet been used, so we can model degradation and, excuse me, predict failures much better. We also have many years of historical data and, of course, 24/7 live feeds. To get all this going, we chose very simple designs from the very beginning; the first system was developed back in 2015. At that time we went from scratch to production in eight months, and it is also a very stable system. To achieve that, we apply what we call exhaustive error handling. Most of the people attending this conference probably know that when you are dealing with big data, you face all kinds of corner cases you think will never happen; just because of the sheer volume of the data, you find all kinds of strange things. And that is what you need to take care of if you want to have a stable platform, a stable data pipeline.
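What "exhaustive error handling" can look like in a log parser is sketched below. The log format, field names, and handling choices are invented for illustration; the real Philips ETL and device log formats are proprietary.

```python
# Sketch of defensive parsing for a hypothetical 'timestamp|device|code|value'
# log format: malformed input is logged and skipped rather than crashing the run.
import logging

log = logging.getLogger("etl")

def parse_event_line(line: str):
    """Return a parsed record, or None if the line cannot be salvaged."""
    try:
        ts, device_id, code, raw_value = line.rstrip("\n").split("|", 3)
    except ValueError:
        log.warning("malformed line, wrong field count: %r", line[:80])
        return None
    if not device_id:
        log.warning("missing device id: %r", line[:80])
        return None
    try:
        value = float(raw_value)
    except ValueError:
        # A corner case that only shows up at volume: non-numeric readings.
        # Keep the record but flag the value instead of dropping the row.
        log.warning("non-numeric value for device %s: %r", device_id, raw_value)
        value = None
    return {"event_ts": ts, "device_id": device_id,
            "event_code": code, "value": value}
```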
Another characteristic is that we need to handle live data, but we also need to be able to reprocess large historical datasets, because insights into the data are generated over time by the teams using the data. Very often they find not only defects, but they also have change requests: new data to be extracted, data to be extracted in a different way, or aggregated in a different way. So basically the platform is continuously crunching data. Also, the components have built-in monitoring capabilities. Transparency builds trust: by showing how the platform behaves, people actually trust that they have all the data that is available, or, if they do not see the data or if something is not functioning, they can see why and where the processing has stopped. A very important point is the documentation of data sources: every data point has so-called data provenance fields. That is not only the medical device it comes from, with all its identifiers, but also from which file, from which moment in time, from which row, and from which byte offset that data point comes. And not only that, but also when the data point was created and by whom, where "by whom" means which version of the platform and of the ETL created the data point. This allows us to identify issues, and when an issue is identified and fixed, it is possible to fix only the subset of the data that is impacted by that issue. Again, this creates trust in the data, which is essential for this type of application. We actually have different environments in our analytics solution. One, which we call the data science environment, is more or less what I have shown so far; it is deployed in our Philips private cloud, but it can also be deployed in a public cloud such as Amazon. It contains the years of historical data, and it allows interactive data exploration and human queries, so it is a highly variable load. It is used for the training of machine learning algorithms, and it has been designed to allow rapid prototyping on large data volumes. Another environment is the so-called production environment, where we actually score the models with live data for the generation of the alerts. This environment does not require years of data, just months, because a model does not necessarily need years of data to make a prediction; some models may need only a couple of weeks or a few months, three months, six months, depending on the type of data and on the failure being predicted. It has highly optimized queries, because the applications are stable; they only change when we deploy new models or new versions of the models. And it is designed and optimized for low latency, high throughput, and reliability: no human intervention, no human queries. And of course, there are development and staging environments. Another characteristic of all this work is what we call data-driven service innovation: in all this work, we use data in every step of the process. The first is business case creation. Some people ask how we managed to unlock the investment to create such a platform and to work on it for years, you know, how did we start? Basically, we started with a business case, and for that, again, we used data.
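A rough sketch of the data provenance idea described above: every parsed data point carries fields that say exactly where it came from and which version of the pipeline produced it. The field names and version string are hypothetical.

```python
# Illustrative only: attach provenance fields to a parsed data point so any
# value in the warehouse can be traced back to its origin and to the ETL
# version that produced it.
ETL_VERSION = "etl-2.4.1"  # hypothetical version identifier

def with_provenance(record: dict, device_id: str, source_file: str,
                    row_number: int, byte_offset: int) -> dict:
    record.update({
        "prov_device_id": device_id,
        "prov_source_file": source_file,
        "prov_row_number": row_number,
        "prov_byte_offset": byte_offset,
        "prov_etl_version": ETL_VERSION,
    })
    return record
```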
Coming back to the business case: of course, you need to start somewhere, you need to have some data, but basically you can use data to make a quantitative analysis of the current situation and also to make an estimate, as accurate as possible and quantitative, of the value creation. If you have that, you can justify the investments and you can start building. Next to that, data is used to decide where to focus your efforts. In this case, we decided to focus on the use cases that had the maximum estimated business impact, with business impact meaning here customer value as well as value for the company. We want to reduce unplanned downtime, we want to give value to our customers, but it would not be sustainable if, in order to create value, we started replacing parts without any consideration for the cost of it. So it needs to be sustainable. Then we use data to analyze the failure modes, to actually dig into the data and understand how things fail, for visualization, and to do reliability analysis. And of course, data is key to doing the feature engineering for the development of the predictive models, for training the models, and for validating them with historical data. So data is all over the place. And last but not least, this architecture generates new data about the alerts: how good the alerts are, how well they can predict failures, how much downtime is being saved, how many issues have been prevented. This is also data that needs to be analyzed; it provides insights into the performance of these models and can be used to improve the models further. And once you have the performance of the models, you can use data to quantify, as much as possible, the value that is created. And this is where you go back to the first step: you created the first business case with estimates; can you actually show that you are creating value? The more you can close this feedback loop and quantify it, the better it is for having more and more impact. Among the key elements that are needed to realize this, I want to mention one about data documentation. This is a practice that we started already six years ago, and it has proven to be very valuable. We always document how data is extracted and how it is stored, in data model documents. A data model document specifies how data goes from one place to the other, in this case from device logs, for example, to a table in Vertica. It includes things such as the definition of duplicates and queries to check for duplicates, and of course the logical design of the tables, the physical design of the tables, and the rationale. Next to it, there is a data dictionary that explains, for each column in the data model, what it means from a subject matter expert perspective, such as its definition and meaning; if it is a measurement, the unit of measure and the range; if it is some sort of label, the possible values; or whether the value is raw or calculated. This is essential for maximizing the value of data and for allowing people to use the data. Last but not least, there is an ETL design document; it explains how the transformation happens from the source to the destination, including, very importantly, the failure handling strategy.
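The "queries to check for duplicates" that such a data model document prescribes might look roughly like this; the table name and key columns are hypothetical.

```python
# Sketch of a duplicate check a data model document might specify. A non-zero
# result should fail the pipeline run or raise an alert, per the documented
# failure strategy. Table and key columns are invented for the example.
DUPLICATE_CHECK = """
    SELECT device_id, event_ts, event_code, COUNT(*) AS copies
    FROM   device_event_log
    GROUP  BY device_id, event_ts, event_code
    HAVING COUNT(*) > 1;
"""

def count_duplicate_keys(cursor) -> int:
    cursor.execute(DUPLICATE_CHECK)
    return len(cursor.fetchall())
```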
For example, the failure strategy specifies: when you cannot parse part of a file, should you load only what you can parse, or drop the entire file completely? Do you import on a best-effort basis or all-or-nothing? How do you populate records for which there is no value, what are the default values, how is the data normalized or transformed, and how do you avoid duplicates? Again, this is very important in order to give the users of the data a full picture of the data itself. And this is not informal; there is a formal process: the documents are reviewed and approved by all the stakeholders, including the subject matter experts and the data scientists, and by a function that we have created called the data architect. Oh, yes, and of course the documents are available to the end users of the data, and we even have links to the documents from inside the data warehouse. So if you get access to the database and you are doing your research and you see a table or a view and you think, well, that could be interesting, it looks like something I could use for my research — the data itself has a link to the documentation. From the database, while you are exploring the data, you can retrieve a link to the place where the document is available. This is just a quick summary of some of the results that I am allowed to share at this moment. This is about image-guided therapy: using our remote service infrastructure, for remotely connected systems with the right contracts, we have reduced downtime by 14%; more than one out of three cases are resolved remotely, without an engineer having to go on site; and 82% is the first-time-right fix rate, which means that the issue is fixed either remotely or, if a visit to the site is needed, only one visit is needed, because at that moment the engineer has already identified the right part and fixes it straight away. And this results, on average, in 135 hours more operational availability per year, and therefore the ability to treat more patients for the same cost. I would like to conclude by citing some nice testimonials from some of our customers, showing that the value we have created is really high impact, and this concludes my presentation. Thanks for your attention so far. >> Thank you, Mauro, very interesting. And we've got a number of questions that have come in, so let's get to them. The first one: how many devices has Philips connected worldwide? And how do you determine which related sensor data workloads get analyzed with Vertica? >> Okay, so these are really two questions. The first question: how many devices are connected worldwide? Well, I am not allowed to tell you the precise number of connected devices worldwide, but what I can tell you is that we are in the order of tens of thousands of devices, and of all types, actually. And then, how do we determine which related sensor data gets analyzed with Vertica? A little bit as I said in the presentation, it is a combination of two approaches: a data-driven approach and a knowledge-driven approach. A knowledge-driven approach, because we make maximum use of our knowledge of the failure modes and of the behavior of the medical devices and their components to select what we think are promising data points and promising features. However, from that moment on, data science kicks in, and data science is used to look at the actual data and come up with quantitative information about what is really happening. So it could be that an expert is convinced that a particular range of values of a sensor is indicative of a particular failure.
And it may turn out that the expert was too optimistic, or, the other way around, that in practice there are many other situations he was not aware of. That can happen. So thanks to the data we get a better understanding of the phenomenon and we get better models. I believe that answers it — any other questions? >> Yeah, we have another question. Do you have plans to perform any analytics at the edge? >> Now, that is a good question. I cannot disclose our plans on this right now, but edge devices are certainly one of the options we look at to help our customers towards zero unplanned downtime. Not only that, but also to facilitate the integration of our solution with existing and future hospital IT infrastructure. I mean, we are talking about advanced security and privacy, guaranteeing that the data always remains safe, that patient data and clinical data do not go outside the perimeter of the hospital, of course, while we enhance our functionality and provide more value with our services. So yes, edge is definitely a very interesting area of innovation. >> Another question: what are the most helpful Vertica features that you rely on? >> I would say the first that comes to mind at this moment is ease of integration. Basically, with Vertica we are able to load any data source in a very easy way, and it can also be interfaced very easily with all kinds of applications. This, of course, is not unique to Vertica; nevertheless, the added value here is that it is coupled with incredible speed, incredible speed for loading and for querying. So it is basically a very versatile tool that lets us innovate fast in data science. Another thing is multiple projections, with advanced encoding and compression. This allows us to perform optimizations only when we need them, and without having to touch applications or queries. So if we want to achieve high performance, we basically spend a little effort on improving the projections, and we can very often achieve dramatic increases in performance. Another feature is Eon Mode; this is great for cloud deployment. >> Okay, another question. What is the number one lesson learned that you can share? >> I think my advice would be: document and control your entire data pipeline, end to end, and create positive feedback loops. What I hear often is that enterprises that are not digitally native — and Philips is one of them; Philips is 129 years old as a company, so you can imagine the legacy that we have; we were not born with the web, like web companies, with everything online and everything digital — enterprises that are not digitally native sometimes struggle to innovate in big data or to do data-driven innovation, because the data is not available or it is in silos, data is controlled by different parts of the organization with different processes, and there is not a super strong enterprise IT system providing all the data for everybody with APIs. So my advice is to create, from the very beginning, as soon as possible, an end-to-end solution, from data creation to consumption, that creates value for all the stakeholders of the data pipeline. It is important that everyone in the data pipeline, from the producers of the data to the consumers, everybody along the pipeline, gets a piece of the value, a piece of the cake. When the value is proven to all stakeholders, everyone will naturally contribute to keeping the data pipeline running and to keeping the quality of the data high. That is the lesson there.
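Going back to the earlier point about multiple projections, encoding, and compression: a rough sketch of that kind of tuning step is shown below. The table, columns, and encodings are hypothetical, and the exact DDL options should be checked against the Vertica documentation for the version in use.

```python
# Illustration only: add a query-specific projection with column encodings,
# without touching the application or its queries. The schema is invented.
CREATE_PROJECTION = """
    CREATE PROJECTION device_event_log_by_device (
        device_id   ENCODING RLE,
        event_code  ENCODING RLE,
        event_ts,
        value
    ) AS
    SELECT device_id, event_code, event_ts, value
    FROM   device_event_log
    ORDER  BY device_id, event_ts
    SEGMENTED BY HASH(device_id) ALL NODES;
"""

def add_projection(cursor):
    cursor.execute(CREATE_PROJECTION)
    # Populate the new projection so the optimizer can start using it.
    cursor.execute("SELECT REFRESH('device_event_log');")
```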
>> Yeah, thank you. And in the area of machine learning, what types of innovations do you plan to adopt to help with your data pipeline? >> So, in the area of machine learning, we are looking at things like automatically detecting the deterioration of models to trigger improvement actions, as well as active learning, again focused on improving the accuracy of our predictive models. Active learning is when additional human intervention, the labeling of difficult cases, is triggered. The machine learning classifier may not be able to classify correctly all the time, and instead of just randomly picking some cases for a human to review, you want the costly humans to review only the most valuable cases from a machine learning point of view, the ones that would contribute the most to improving the classifier. Another area is deep learning, and also applications of more generic anomaly detection algorithms. The challenge with anomaly detection is that we are not only interested in finding anomalies, but also in recommending the proper service actions, because without a proper service action, an alert generated because of an anomaly loses most of its value. So this is where I think we, you know... >> Go ahead. >> No, that's it, thanks. >> Okay, all right. So that's all the time that we have today for questions. I want to thank the audience for attending Mauro's presentation, and also for your questions. If we weren't able to answer your question today, we'll respond via email. And again, our engineers will be on the Vertica forums awaiting your other questions. It would help us greatly if you could give us some feedback and rate the session before you sign off. Your rating will help guide us when we're looking at content to provide for the next Vertica BDC. Also, note that a replay of today's event and a PDF copy of the slides will be available on demand; we'll let you know when that will be, by email, hopefully later this week. And of course, we invite you to share the content with your colleagues. Again, thank you for your participation today. This concludes this breakout session, and I hope you have a wonderful day. Thank you. >> Thank you
Joy King, Vertica | CUBEConversations, March 2020
>> Announcer: From theCUBE studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE Conversation. >> Hi, everybody, welcome back to theCUBE's coverage of the Virtual Vertica BDC, Big Data Conference. It was, of course, going to be in Boston, but now we're covering it online. It's really our pleasure to invite back Joy King, she's the vice president of product and go-to-market strategy at Vertica. She also manages marketing and education programs. Joy, great to see you. >> It's great to be back, as always, Dave, thank you. >> Let's talk about BDC, Virtual BDC. We took a break. theCUBE has been at every Big Data Conference. I love that show, great customers, awesome buzz, great outside speakers. I actually had the pleasure of being up on stage with some database experts, which I'm not, but I'm (laughs) an inch deep and a mile wide. >> I remember that! (laughs) >> And it was a lot of fun going head to head with some of the folks, and there was just a great vibe at that conference. But now you had to make the decision, because of the coronavirus, to go digital. You didn't delay, and I love the fact that you guys leaned right in, you've got all this content. So talk about what we can expect at BDC. >> Well, you know, Dave, the BDC is really special, and I have to give Colin Mahoney, our GM, the credit for the idea. Sometimes his ideas are really good, and the execution can be, well, challenging. But when we started the BDC, he had an idea. He said, "You know, we have such a passionate community, we need to get them together. We need, like, a user group." Well, that user group, for the first BDC, was the first and only event I have ever been responsible for where, yes, it's true, we exceeded the fire code of the venue, and we had more people register than we were allowed to accept. That's never happened before. It's because the passion was so real. We made a commitment. We said the only people that could speak at the BDC were engineers who architected and wrote the code, and customers who had used the code. We were determined to keep the technical credibility, the value of best practices, the sharing among the community. Marketing was responsible for appropriate amounts of coffee and alcohol at the appropriate times, (Dave laughs) but today, that is still why the BDC is so special. Now, I have to tell you, we have been somewhat limited in our ability to confirm coffee, alcohol, et cetera for the Virtual BDC, but we are still true to our mission. The people that will be speaking during the sessions that we have, and in all of the recordings that we will do in addition after we complete the live BDC, are engineers and architects who design and write the code, hands on the keyboard, and customers who use Vertica to power their businesses every day. That's the rule. Some people don't like it, but that's how we play. >> Well, and to your point, we've interviewed a number of your customers, and I can second that. The database engineers are proud to put Vertica in their title. >> Yes. >> They embrace it, they love to train people and get adoption going, so that's awesome. Let's talk about some of the logistics of the BDC, the Virtual BDC. Tuesday, March 31st, and then the next day, April 1st, you've got keynotes, you've got breakouts, and of course, we've got theCUBE. After the keynotes, we'll be doing CUBE coverage for two days, wall-to-wall coverage of Virtual BDC.
And to your point, I think this is a nuance that people are going to learn with digital: there's a post-event that really is going to continue that engagement with your community. >> That's right. As much as everybody knows there's nothing that replaces face-to-face interaction, there are advantages to the virtual world. First of all, people are getting pretty creative, I've got to say, and second, it gives global reach to people who would have loved to come to the BDC but couldn't. They couldn't travel, there were restrictions, they were busy with other things. So, yes, all day Tuesday and all day Wednesday. After the keynote on Tuesday there will be two parallel tracks, and this is U.S. East Coast time, on Tuesday afternoon, and then two parallel tracks all day Wednesday. And then on Thursday, in addition to all of those webinars, all of those sessions being available on demand, we are also, right now, recording additional sessions, because we just didn't have enough slots but we had more speakers, both customers and engineers, who wanted to present, and all of that will be available on the BDC website on Thursday and beyond. And we're going to continue with two webinar series that we're very proud of. One is called "Under the Hood," which is technical webinars, and the other is called "Data Disruptors," and those are the customers that love to tell their stories. And that, in parallel with ongoing CUBE interviews, will keep the energy all the way up until late March of 2021, when we have already confirmed the next live BDC. >> Awesome, so go to vertica.com/bdc2020 and register; you've got to register to see the keynotes. It's lightweight registration, it's not a hundred fields, we want you to come in. And then, of course, theCUBE.net is going to be covering it with theCUBE interviews, and SiliconANGLE.com will have editorial coverage. Joy, looking forward to it. Thanks so much for giving us the update, and we'll see you online. >> It will be a pleasure, see ya, bye. >> And we'll see you. Thank you, everybody, and go, like I said, go register, again, it's vertica.com/bdc2020. This is Dave Vellante from theCUBE, and we'll see you at the Virtual Vertica Big Data Conference. (upbeat music)