Dan Woicke, Cerner Corporation | Virtual Vertica BDC 2020
(gentle electronic music) >> Hello, everybody, welcome back to the Virtual Vertica Big Data Conference. My name is Dave Vellante and you're watching theCUBE, the leader in digital coverage. This is the Virtual BDC, as I said, theCUBE has covered every Big Data Conference from the inception, and we're pleased to be a part of this, even though it's challenging times. I'm here with Dan Woicke, the senior director of CernerWorks Engineering. Dan, good to see ya, how are things where you are in the middle of the country? >> Good morning, challenging times, as usual. We're trying to adapt to having the kids at home, out of school, trying to figure out how they're supposed to get on their laptop and do virtual learning. We all have to adapt to it and figure out how to get by. >> Well, it sure would've been my pleasure to meet you face to face in Boston at the Encore Casino, hopefully next year we'll be able to make that happen. But let's talk about Cerner and CernerWorks Engineering, what is that all about? >> So, CernerWorks Engineering, we used to be part of what's called IP, or Intellectual Property, which is basically the organization at Cerner that does all of our software development. But what we did was we made a decision about five years ago to organize my team with CernerWorks which is the hosting side of Cerner. So, about 80% of our clients choose to have their domains hosted within one of the two Kansas City data centers. We have one in Lee's Summit, in south Kansas City, and then we have one on our main campus that's a brand new one in downtown, north Kansas City. About 80, so we have about 27,000 environments that we manage in the Kansas City data centers. So, what my team does is we develop software in order to make it easier for us to monitor, manage, and keep those clients healthy within our data centers. >> Got it. I mean, I think of Cerner as a real advanced health tech company. It's the combination of healthcare and technology, the collision of those two. But maybe describe a little bit more about Cerner's business. >> So we have, like I said, 27,000 facilities across the world. Growing each day, thank goodness. And, our goal is to ensure that we reduce errors and we digitize the entire medical records for all of our clients. And we do that by having a consulting practice, we do that by having engineering, and then we do that with my team, which manages those particular clients. And that's how we got introduced to the Vertica side as well, when we introduced them about seven years ago. We were actually able to take a tremendous leap forward in how we manage our clients. And I'd be more than happy to talk deeper about how we do that. >> Yeah, and as we get into it, I want to understand, healthcare is all about outcomes, about patient outcomes and you work back from there. IT, for years, has obviously been a contributor but removed, and somewhat indirect from those outcomes. But, in this day and age, especially in an organization like yours, it really starts with the outcomes. I wonder if you could ratify that and talk about what that means for Cerner. >> Sorry, are you talking about medical outcomes? >> Yeah, outcomes of your business. >> So, there's two different sides to Cerner, right? There's the medical side, the clinical side, which is obviously our main practice, and then there's the side that I manage, which is more of the operational side. Both are very important, but they go hand in hand together. On the operational side, the goal is to ensure that our clinicians are on the system, and they don't know they're on the system, right? Things are progressing, doctors don't want to be on the system, trust me. My job is to ensure they're having the most seamless experience possible while they're on the EMR and have it just be one of their side jobs as opposed to taking their attention away from the patients. That make sense? >> Yeah it does, I mean, EMR and meaningful use, around the Affordable Care Act, really dramatically changed the unit. I mean, people had to demonstrate in order to get paid, and so that became sort of an unfunded mandate for folks and you really had to respond to that, didn't you? >> We did, we did that about three to four years ago. And we had to help our clients get through what's called meaningful use, there was different stages of meaningful use. And what we did, is we have the website called the Lights On Network which is free to all of our clients. Once you get onto the website the Lights On Network, you can actually show how you're measured and whether or not you're actually completing the different necessary tasks in order to get those payments for meaningful use. And it also allows you to see what your performance is on your domain, how the clinicians are doing on the system, how many hours they're spending on the system, how many orders they're executing. All of that is completely free and visible to our clients on the Lights On Network. And that's actually backed by some of the Vertica software that we've invested in. >> Yeah, so before we get into that, it sounds like your mission, really, is just great user experiences for the people that are on the network. Full stop. >> We do. So, one of the things that we invented about 10 years ago is called RTMS Timers. They're called Response Time Measurement System. And it started off as a way of us proving that clients are actually using the system, and now it's turned into more of a user outcomes. What we do is we collect 2.5 billion timers per day across all of our clients across the world. And every single one of those records goes to the Vertica platform. And then we've also developed a system on that which allows us in real time to go and see whether or not they're deviating from their normal. So we do baselines every hour of the week and then if they're deviating from those baselines, we can immediately call a service center and have them engage the client before they call in. >> So, Dan, I wonder if you could paint a picture. By the way, that's awesome. I wonder if you could paint a picture of your analytics environment. What does it look like? Maybe give us a sense of the scale. >> Okay. So, I've been describing how we operate, our remote hosted clients in the two Kansas City data centers, but all the software that we write, we also help our client hosted agents as well. Not only do we take care of what's going on at the Kansas City data center, but we do write software to ensure that all of clients are treated the same and we provide the same level of care and performance management across all those clients. So what we do is we have 90,000 agents that we have split across all these clients across the world. And every single hour, we're committing a billion rows to Vertica of operational data. So I talked a little bit about the RTMS timers, but we do things just like everyone else does for CPU, memory, Java Heap Stack. We can tell you how many concurrent users are on the system, I can tell you if there's an application that goes down unexpected, like a crash. I can tell you the response time from the network as most of us use Citrix at Cerner. And so what we do is we measure the amount of time it takes from the client side to PCs, it's sitting in the virtual data centers, sorry, in the hospitals, and then round trip to the Citrix servers that are sitting in the Kansas City data center. That's called the RTT, our round trip transactions. And what we've done is, over the last couple of years, what we've done is we've switched from just summarizing CPU and memory and all that high-level stuff, in order to go down to a user level. So, what are you doing, Dr. Smith, today? How many hours are you using the EMR? Have you experienced any slowness? Have you experienced any hourglass holding within your application? Have you experienced, unfortunately, maybe a crash? Have you experienced any slowness compared to your normal use case? And that's the step we've taken over the last few years, to go from summarization of high-level CPU memory, over to outcome metrics, which are what is really happening with a particular user. >> So, really granular views of how the system is being used and deep analytics on that. I wonder, go ahead, please. >> And, we weren't able to do that by summarizing things in traditional databases. You have to actually have the individual rows and you can't summarize information, you have to have individual metrics that point to exactly what's going on with a particular clinician. >> So, okay, the MPP architecture, the columnar store, the scalability of Vertica, that's what's key. That was my next question, let me take us back to the days of traditional RDBMS and then you brought in Vertica. Maybe you could give us a sense as to why, what that did for you, the before and after. >> Right. So, I'd been painting a picture going forward here about how traditionally, eight years ago, all we could do was summarize information. If CPU was going to go and jump up 8%, I could alarm the data center and say, hey, listen, CPU looks like it's higher, maybe an application's hanging more than it has been in the past. Things are a little slower, but I wouldn't be able to tell you who's affected. And that's where the whole thing has changed, when we brought Vertica in six years ago is that, we're able to take those 90,000 agents and commit a billion rows per hour operational data, and I can tell you exactly what's going on with each of our clinicians. Because you know, it's important for an entire domain to be healthy. But what about the 10 doctors that are experiencing frustration right now? If you're going to summarize that information and roll it up, you'll never know what those 10 doctors are experiencing and then guess what happens? They call the data center and complain, right? The squeaky wheels? We don't want that, we want to be able to show exactly who's experiencing a bad performance right now and be able to reach out to them before they call the help desk. >> So you're able to be proactive there, so you've gone from, Houston, we have a problem, we really can't tell you what it is, go figure it out, to, we see that there's an issue with these docs, or these users, and go figure that out and focus narrowly on where the problem is as opposed to trying to whack-a-mole. >> Exactly. And the other big thing that we've been able to do is corelation. So, we operate two gigantic data centers. And there's things that are shared, switches, network, shared storage, those things are shared. So if there is an issue that goes on with one of those pieces of equipment, it could affect multiple clients. Now that we have every row in Vertica, we have a new program in place called performance abnormality flags. And what we're able to do is provide a website in real time that goes through the entire stack from Citrix to network to database to back-end tier, all the way to the end-user desktop. And so if something was going to be related because we have a network switch going out of the data center or something's backing up slow, you can actually see which clients are on that switch, and, what we did five years ago before this, is we would deploy out five different teams to troubleshoot, right? Because five clients would call in, and they would all have the same problem. So, here you are having to spare teams trying to investigate why the same problem is happening. And now that we have all of the data within Vertica, we're able to show that in a real time fashion, through a very transparent dashboard. >> And so operational metrics throughout the stack, right? A game changer. >> It's very compact, right? I just label five different things, the stack from your end-user device all the way through the back-end to your database and all the way back. All that has to work properly, right? Including the network. >> How big is this, what are we talking about? However you measure it, terabytes, clusters. What can you share there? >> Sorry, you mean, the amount of data that we process within our data centers? >> Give us a fun fact. >> Absolute petabytes, yeah, for sure. And in Vertica right now we have two petabytes of data, and I purge it out every year, one year's worth of data within two different clusters. So we have to two different data centers I've been describing, what we've done is we've set Vertica up to be in both data centers, to be highly redundant, and then one of those is configured to do real-time analysis and corelation research, and then the other one is to provide service towards what I described earlier as our Lights On Network, so it's a very dedicated hardened cluster in one of our data centers to allow the Lights On Network to provide the transparency directly to our clients. So we want that one to be pristine, fast, and nobody touch it. As opposed to the other one, where, people are doing real-time, ad hoc queries, which sometimes aren't the best thing in the world. No matter what kind of database or how fast it is, people do bad things in databases and we just don't want that to affect what we show our clients in a transparent fashion. >> Yeah, I mean, for our audience, Vertica has always been aimed at these big, hairy, analytic problems, it's not for a tiny little data mart in a department, it's really the big scale problems. I wonder if I could ask you, so you guys, obviously, healthcare, with HIPAA and privacy, are you doing anything in the cloud, or is it all on-prem today? >> So, in the operational space that I manage, it's all on-premises, and that is changing. As I was describing earlier, we have an initiative to go to AWS and provide levels of service to countries like Sweden which does not want any operational data to leave that country's walls, whether it be operational data or whether it be PHI. And so, we have to be able to adapt into Vertia Eon Mode in order to provide the same services within Sweden. So obviously, Cerner's not going to go up and build a data center in every single country that requires us, so we're going to leverage our partnership with AWS to make this happen. >> Okay, so, I was going to ask you, so you're not running Eon Mode today, it's something that you're obviously interested in. AWS will allow you to keep the data locally in that region. In talking to a lot of practitioners, they're intrigued by this notion of being able to scale independently, storage from compute. They've said they wished that's a much more efficient way, I don't have to buy in chunks, if I'm out of storage, I don't have to buy compute, and vice-versa. So, maybe you could share with us what you're thinking, I know it's early days, but what's the logic behind the business case there? >> I think you're 100% correct in your assessment of taking compute away from storage. And, we do exactly what you say, we buy a server. And it has so much compute on it, and so much storage. And obviously, it's not scaled properly, right? Either storage runs out first or compute runs out first, but you're still paying big bucks for the entire server itself. So that's exactly why we're doing the POC right now for Eon Mode. And I sit on Vertica's TAB, the advisory board, and they've been doing a really good job of taking our requirements and listening to us, as to what we need. And that was probably number one or two on everybody's lists, was to separate storage from compute. And that's exactly what we're trying to do right now. >> Yeah, it's interesting, I've talked to some other customers that are on the customer advisory board. And Vertica is one of these companies that're pretty transparent about what goes on there. And I think that for the early adopters of Eon Mode there were some challenges with getting data into the new system, I know Vertica has been working on that very hard but you guys push Vertica pretty hard and from what I can tell, they listen. Your thoughts. >> They do listen, they do a great job. And even though the Big Data Conference is canceled, they're committed to having us go virtually to the CAD meeting on Monday, so I'm looking forward to that. They do listen to our requirements and they've been very very responsive. >> Nice. So, I wonder if you could give us some final thoughts as to where you want to take this thing. If you look down the road a year or two, what does success look like, Dan? >> That's a good question. Success means that we're a little bit more nimble as far as the different regions across the world that we can provide our services to. I want to do more corelation. I want to gather more information about what users are actually experiencing. I want to be able to have our phone never ring in our data center, I know that's a grand thought there. But I want to be able to look forward to measuring the data internally and reaching out to our clients when they have issues and then doing the proper corelation so that I can understand how things are intertwining if multiple clients are having an issue. That's the goal going forward. >> Well, in these trying times, during this crisis, it's critical that your operations are running smoothly. The last thing that organizations need right now, especially in healthcare, is disruption. So thank you for all the hard work that you and your teams are doing. I wish you and your family all the best. Stay safe, stay healthy, and thanks so much for coming on theCUBE. >> I really appreciate it, thanks for the opportunity. >> You're very welcome, and thank you, everybody, for watching, keep it right there, we'll be back with our next guest. This is Dave Vellante for theCUBE. Covering Virtual Vertica Big Data Conference. We'll be right back. (upbeat electronic music)
SUMMARY :
in the middle of the country? and figure out how to get by. been my pleasure to meet you and then we have one on our main campus and technology, the and then we do that with my team, Yeah, and as we get into it, the goal is to ensure that our clinicians in order to get paid, and so that became in order to get those for the people that are on the network. So, one of the things that we invented I wonder if you could paint a picture from the client side to PCs, of how the system is being used that point to exactly what's going on and then you brought in Vertica. and be able to reach out to them we really can't tell you what it is, And now that we have all And so operational metrics and all the way back. are we talking about? And in Vertica right now we in the cloud, or is it all on-prem today? So, in the operational I don't have to buy in chunks, and listening to us, as to what we need. that are on the customer advisory board. so I'm looking forward to that. as to where you want to take this thing. and reaching out to our that you and your teams are doing. thanks for the opportunity. and thank you, everybody,
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dan Woicke | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Cerner | ORGANIZATION | 0.99+ |
Affordable Care Act | TITLE | 0.99+ |
Boston | LOCATION | 0.99+ |
100% | QUANTITY | 0.99+ |
Dan | PERSON | 0.99+ |
10 doctors | QUANTITY | 0.99+ |
Sweden | LOCATION | 0.99+ |
90,000 agents | QUANTITY | 0.99+ |
five clients | QUANTITY | 0.99+ |
CernerWorks | ORGANIZATION | 0.99+ |
8% | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
Kansas City | LOCATION | 0.99+ |
Smith | PERSON | 0.99+ |
Vertica | ORGANIZATION | 0.99+ |
Cerner Corporation | ORGANIZATION | 0.99+ |
next year | DATE | 0.99+ |
Monday | DATE | 0.99+ |
Both | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
one year | QUANTITY | 0.99+ |
a year | QUANTITY | 0.99+ |
27,000 facilities | QUANTITY | 0.99+ |
Houston | LOCATION | 0.99+ |
one | QUANTITY | 0.99+ |
two petabytes | QUANTITY | 0.99+ |
five years ago | DATE | 0.99+ |
CernerWorks Engineering | ORGANIZATION | 0.98+ |
south Kansas City | LOCATION | 0.98+ |
eight years ago | DATE | 0.98+ |
about 80% | QUANTITY | 0.98+ |
Virtual Vertica Big Data Conference | EVENT | 0.98+ |
Citrix | ORGANIZATION | 0.98+ |
two different data centers | QUANTITY | 0.97+ |
each day | QUANTITY | 0.97+ |
four years ago | DATE | 0.97+ |
two different clusters | QUANTITY | 0.97+ |
six years ago | DATE | 0.97+ |
each | QUANTITY | 0.97+ |
north Kansas City | LOCATION | 0.97+ |
HIPAA | TITLE | 0.97+ |
five different teams | QUANTITY | 0.97+ |
first | QUANTITY | 0.96+ |
five different things | QUANTITY | 0.95+ |
two different sides | QUANTITY | 0.95+ |
about 27,000 environments | QUANTITY | 0.95+ |
both data centers | QUANTITY | 0.95+ |
About 80 | QUANTITY | 0.95+ |
Response Time Measurement System | OTHER | 0.95+ |
two gigantic data centers | QUANTITY | 0.93+ |
Java Heap | TITLE | 0.92+ |
UNLISTED DO NOT PUBLISH Woicke Edit Suggestions
six five four three two one hi everybody and welcome to this cube special presentation of the verdict of virtual big data conference the cube is running in parallel with day 1 and day 2 of the verdict the big data event by the way the cube has been at every single big data event and it's our pleasure to be here in the virtual / digital event as well Gabriel Chapman is here is the director of flash blade product solutions marketing at pure storage Gabe great to see you thanks for coming on great to see you - how's it going it's going very well I mean I wish we were meeting in Boston at the Encore hotel but you know and and hopefully we'll be able to meet it accelerate at some point you cheer or one of the the sub shows that you guys are doing the regional shows but because we've been covering that show as well but I really want to get into it and the last accelerate September 2019 pure and Vertica announced a partnership I remember a joint being ran up to me and said hey you got to check this out the separation of Butte and storage by a Eon mode now available on flash played so and and I believe still the only company that can support that separation and independent scaling both on prime and in the cloud so gave I want to ask you what were the trends in analytical database and plowed that led to this partnership you know realistically I think what we're seeing is that there's been kind of a larger shift when it comes to modern analytics platforms towards moving away from the the traditional you know Hadoop type architecture where we were doing on and leveraging a lot of direct mass storage primarily because of the limitations of how that solution was architected when we start to look at the larger trends towards you know how organizations want to do this type of work on premises they're looking at solutions that allow them to scale the compute storage pieces independently and therefore you know the flash blade platform ended up being a great solution to support Vertica in their transition to Eon mode leveraging >> essentially as an s3 object store okay so let's let's circle back on that you guys in your in your announcement of a flash blade you make the claim that flash blade is the industry's most advanced file and object storage platform ever that's a bold statement I defend that it's supposed to yeah I I like to go beyond that and just say you know so we've really kind of looked at this from a standpoint of you know as as we've developed flash Wade as a platform and keep in mind it's been a product that's been around for over three years now and has you know it's been very successful for pure storage the reality is is that fast file and fast object as a combined storage platform is a direction that many organizations are looking to go and we believe that we're a leader in that fast object of best file storage place in realistically which we start to see more organizations start to look at building solutions that leverage cloud storage characteristics but doing so on prem for a multitude of different reasons we've built a platform that really addresses a lot of those needs around simplicity around you know making things assure that you know vast matters for us simple is smart we can provide you know cloud integrations across the spectrum and you know there's a subscription model that fits into that as well we fall that falls into our umbrella of what we consider the modern day day experience and it's something that we've built into the entire pure portfolio okay so I want to get into the architecture a little bit of Flash blade and then better understand the fit for analytic databases generally but specifically for Vertica so it is a blade so you got compute and a network included it's a key value store based system so you're talking about scale out unlike unlike viewers sort of you know initial products which were scale up and so I want to as a fabric base system I want to understand what that all mean so take us through the architecture you know some of the quote-unquote firsts that you guys talk about so let's start with sort of the blade aspect yeah the blade aspect mean we call it a flash blade because if you look at the actual platform you have a primarily a chassis with built in networking components right so there's a fabric interconnect with inside the platform that connects to each one of the individual blades the individual blades have their own compute that drives basically a pure storage flash components inside it's not like we're just taking SSDs and plugging them into a system and like you would with the traditional commodity off-the-shelf hardware design this is a very much an engineered solution that is built towards the characteristics that we believe were important with fast file and fast object scalability you know massive parallelization when it comes to performance and the ability to really kind of grow and scale from essentially seven blades right now to a hundred and fifty that's that's the kind of scale that customers are looking for especially as we start to address these larger analytics pools mayo multi petabyte datasets you know that single addressable object space and you know file performance that is beyond what most of your traditional scale-up storage platforms are able to deliver yeah I saw you interviewed cause last September and accelerate and and Christopher's been you know attacked by some of the competitors is not having a scale out I asked them his thoughts on that he said well first of all our flash plate is scale out he said look anything that that that adds the complexity you know we avoid but for the workloads that are associated with Flash blade scale out is the right sort of approach maybe you could talk about why that is well you know realistically I think you know that that approach is better when we're starting to learn to work with large unstructured data sets I mean flash plays uniquely architected to allow customers to achieve you know a superior resource utilization for compute and storage well at the same time you know reducing significantly the complexity that is arisen around these kind of bespoke or siloed nature of big data and analytic solutions I mean we really kind of look at this from a standpoint of you have built and delivered or created applications in the public cloud space that address you know object storage and and unstructured data and and for some organizations the importance is bringing that on Prem I mean we do seek repatriation that coming on for a lot of organizations as these data egress charges continue to expand and grow and then organizations that want even higher performance in the what we're able to get into the public cloud space they are bringing that data back on Prem they are looking at from a standpoint we still want to be able to scale the way we scale on the cloud we still want to operate the same way we operate in the cloud but we want to do it within control of our own you know our own borders and so that's you know that's one of the bigger pieces to that is we start to look at how do we address cloud characteristics and dynamics and consumption metrics or models as well as the benefits and efficiencies of scale that they're able to afford but allowing customers that do that with inside their own data center so you're talking about the trends earlier you had these cloud native databases that allowed the scaling of compute and storage independently Vertica comes in with Eon a lot of times we talk about these these partnerships as Barney deals of you know I love you you love me here's a press release and then we go on or they're just straight you know go to market are there other aspects of this partnership that are that are non Barney deal like in other words any specific you know engineering you know other go to market programs could you talk about that a little bit yeah it's it's it's more than just you know I then what we consider a channel meet in the middle or you know that Barney type of deal it's realistically you know we've done some first with Vertica that I think are really important if they think you look at the architecture and how we do how we've brought this to market together we have solutions teams in the back end who are you know subject matter experts in this space if you talk to joy and the people from vertigo they're very high on they're very excited about the partnership because it often it opens up a new set of opportunities for their customers to to leverage Eon mode and you know get into some of the the nuanced aspects of how they leverage the Depot or Depot with inside each individual compute node and adjustments with inside there I reach additional performance gains for customers on Prem and it's the same time for them there's still the ability to go into that cloud model if they wish to and so I think a lot of it is around how do we partner as two companies how do we do a joint selling motions you know how do we show up and and you know do white papers and all of the the traditional marketing aspects that we bring into the market and then you know joint selling opportunities exist where they are and so that's realistically I think like any other organization that's going to market with a partner or an ISP that they have a strong partnership with you'll continue to see us you know talking about our shows mutually beneficial relationships and the solutions that we're bringing it to the market okay you know of course he used to be a Gartner analyst and you go over to the vendor side now but as but as it but as a gardener analyst you're obviously objective you see it all and you know well there's a lot of ways to skin a cat there are there are there are strengths weaknesses opportunities threats etc for every vendor so you have you have Vertica who's got a very mature stack and and talking to a number of the customers out there who are using Eon mode you know there's certain workloads where these cloud native databases make sense it's not just the economics of scaling compute and storage independently I want to talk more about that there's flexibility aspects as well but Vertica really you know has to play its trump card which is look we've got a big on-premise state and we're gonna bring that you know Eon capability both on Prem and we're embracing the cloud now they're obviously having they had to play catch-up in the cloud but at the same time they've got a much more mature stack than a lot of these other you know cloud native databases that might have just started a couple years ago so you know so there's trade-offs that customers have to make how do you sort through that where do you see the interest in this and and and what's the sweet spot for this partnership you know we've been really excited to build the partnership with Vertica and we're providing you know we're really proud to provide pretty much the only on Prem storage platform that's validated with the vertical Aeon mode to deliver a modern data experience for our customers together you know it's it's that partnership that allows us to go into customers that on Prem space where I think that they're still you know not to say that not everybody wants to go the cloud I think there's aspects and then solutions that work very well there but for the vast majority I still think that there's you know the your data center is not going away and you do want to have control over some of the many of the different facets with inside the operational confines so therefore we start to look at how do we can do the best of what cloud offers but on Prem and that's realistically where we start to see the stronger push for those customers you still want to manage their data locally as well as maybe even work around some of the restrictions that they might have around cost and complexity hiring you know the different types of skills skill sets that are required to bring you know applications purely cloud native it's still that larger part of that digital transformation that many organizations are going for going forward with and realistically I think they're taking a look at the pros and cons and we've been doing cloud long enough where people recognize that you know it's not perfect for everything and that there's certain things that we still want to keep inside our own data center so I mean realistically as we move forward that's that that better option when it comes to a modern architecture they can do it you know we can deliver and address a diverse set of performance requirements and allowed the organization to continue to grow the model to the data you know based on the data that they're actually trying to leverage and that's really what flash Wood was built for it was built for a platform that can address small files or large files or high throughput high throughput low latency scale to petabytes in a single namespace in a single rack as we like to put it in there I mean we see customers that have put you know 150 flash blades into production as a single namespace it's significant for organizations that are making that drive towards modern data experience with modern analytics platforms pure and Vertica have delivered an experience that can address that to a wide range of customers that are implementing you know the verdict technology I'm interested in exploring the the use case a little bit further you just sort of gave some parameters and some examples and some of the flexibility that you have in but take us through kind of what to discuss the customer discussions are like obviously you've got a big customer base you and Vertica that that's on prem that's the the unique advantage of this but there are others it's not just the economics of the granular scaling of compute and storage independently there are other aspects so to take us through that sort of a primary use case or use cases yeah you know I mean I can give you a couple customer examples and we have a large SAS analyst company which uses verdict on flash play to authenticate the quality of digital media in real time and you know then for them it makes a big difference is they're doing they're streaming and whatnot that they can they can fine tune and grandly control that so that's one aspect that we need to address we have a multi national car company which uses verdict on flash blade to make thousands of decisions per second for autonomous vehicle decision making trees you know that's what really these new modern analytics platforms were built for there's another healthcare organization that uses Vertica on flash blade to enable healthcare providers to make decisions in real time the impact vibes especially when we start to look at and you know the current state of affairs little Co vid and the coronavirus you know those types of technologies are really going to help us kind of get love and and help lower and been you know bend that curve downward so you know there's all these different areas where we can address the goals and the achievements that we're trying to look bored with with real-time analytic decision making tools like birth and you know realistically as we have these conversations with customers they're looking to get beyond the ability of just you know you know a data scientist or a data architect looking to just kind of drive in information you know you know I'm gonna set this model up and we'll come back in a day now we need to make these and the performs characteristics the Aeon mode and vertical allows for can get us towards this almost near real-time analytics decision-making process and that the customers and that's the kind of conversations that we're having with customers who really need to be able to turn this around very quickly instead of waiting well I think you're hitting on something that is actually pretty relevant and that is that near real-time analytic you know database we were talking about Hadoop earlier we're kind of going well beyond that now and I guess what I'm saying is that in the first phase of cloud it was all about infrastructure it was about you know spinning up you know compute and storage a little bit of networking in there seems like the the a next a new workload that's clearly emerging is you've got and it started with the cloud native databases but then bringing in you know AI and machine learning tooling on top of that and then being able to really drive these new types of insights and it's really about taking data these bogs this bog of data that we've collected over the last 10 years a lot of that you know driven by Hadoop bringing machine intelligence into the equation scaling it with either cloud public cloud or bringing that cloud experience on-premise scale you know across your organizations and across your partner network that really is a new emerging work load do you see that and maybe talk a little bit about you know what you're seeing with customers yeah I mean it really is we see several trends you know one of those is the ability to take a take this approach to move it out of the lab but into production you know especially when it comes to you know data science projects machine learning projects that traditionally start out as kind of small proofs of concept easy to spin up in the cloud but when a customer wants to scale and move towards a real you know that derived a significant value from that they do want to be able to control more characteristics right and we know machine learning you know needs to needs to learn from a massive amounts of data to provide accuracy there's just too much data to retrieve in the cloud for every training job at the same time predictive analytics without accuracy is not going to deliver the business advantage of what everyone is seeking you know we see this the visualization of data analytics is traditionally deployed as being on a continuum with you know the things that we've been doing in the long you know in the past you know with data warehousing data lakes AI on the other end but but this way we're starting to manifest it in organizations that are looking towards you know getting more utility and better you know elasticity out of the data that they are working for so they're not looking to just build ups you know silos of bespoke AI environments they're looking to leverage you know a platform that can allow them to you know do a I for one thing machine learning for another leverage multiple protocols to access that data because the tools are so much different you know it is a growing diversity of of use cases that you can put on a single platform I think organizations are looking for as they try to scale these environments I think there's gonna be a big growth area in the coming years Gabe well I wish we were in Boston together you would have painted your little corner of Boston Orange I know that you guys are sure but I really appreciate you coming on the cube and thank you very much have a great day you too okay thank you everybody for watching this is the cubes coverage wall-to-wall coverage two days of the vertical Vertica virtual Big Data conference keep her at their very back right after this short break
**Summary and Sentiment Analysis are not been shown because of improper transcript**
ENTITIES
Entity | Category | Confidence |
---|---|---|
Boston | LOCATION | 0.99+ |
September 2019 | DATE | 0.99+ |
Gabriel Chapman | PERSON | 0.99+ |
Barney | ORGANIZATION | 0.99+ |
two companies | QUANTITY | 0.99+ |
Vertica | ORGANIZATION | 0.99+ |
two days | QUANTITY | 0.99+ |
Gabe | PERSON | 0.99+ |
Woicke | PERSON | 0.98+ |
Gartner | ORGANIZATION | 0.98+ |
last September | DATE | 0.97+ |
over three years | QUANTITY | 0.97+ |
one aspect | QUANTITY | 0.96+ |
first phase | QUANTITY | 0.96+ |
pure | ORGANIZATION | 0.96+ |
Christopher | PERSON | 0.95+ |
one | QUANTITY | 0.95+ |
single rack | QUANTITY | 0.95+ |
a hundred and fifty | QUANTITY | 0.95+ |
day 2 | QUANTITY | 0.95+ |
both | QUANTITY | 0.93+ |
seven blades | QUANTITY | 0.93+ |
Depot | ORGANIZATION | 0.93+ |
150 flash blades | QUANTITY | 0.92+ |
Hadoop | ORGANIZATION | 0.92+ |
single namespace | QUANTITY | 0.92+ |
single platform | QUANTITY | 0.92+ |
day 1 | QUANTITY | 0.92+ |
coronavirus | OTHER | 0.91+ |
firsts | QUANTITY | 0.91+ |
first | QUANTITY | 0.9+ |
flash Wade | TITLE | 0.89+ |
single | QUANTITY | 0.88+ |
each one | QUANTITY | 0.88+ |
a day | QUANTITY | 0.87+ |
a couple years ago | DATE | 0.85+ |
thousands of decisions per second | QUANTITY | 0.83+ |
Prem | ORGANIZATION | 0.77+ |
prime | COMMERCIAL_ITEM | 0.77+ |
Encore | LOCATION | 0.74+ |
single addressable | QUANTITY | 0.72+ |
Big Data | EVENT | 0.72+ |
each individual | QUANTITY | 0.71+ |
Aeon | ORGANIZATION | 0.68+ |
Boston Orange | LOCATION | 0.65+ |
Vertica | TITLE | 0.62+ |
egress | ORGANIZATION | 0.62+ |
every single | QUANTITY | 0.6+ |
last 10 years | DATE | 0.6+ |
a couple customer | QUANTITY | 0.59+ |
Eon | TITLE | 0.55+ |
pieces | QUANTITY | 0.54+ |
petabytes | QUANTITY | 0.53+ |
flash blade | ORGANIZATION | 0.52+ |
Eon | ORGANIZATION | 0.51+ |
sub shows | QUANTITY | 0.5+ |
Hadoop | TITLE | 0.49+ |
six | QUANTITY | 0.49+ |
petabyte | QUANTITY | 0.48+ |
lot | QUANTITY | 0.47+ |
big | EVENT | 0.43+ |
vertigo | PERSON | 0.34+ |