Communications: Accelerating Network Analytics


 

(upbeat music) >> Hi, today I'm going to talk about network analytics and what that means for telecommunications as we go forward, thinking about 5G, the impact it's likely to have on network analytics and the data requirement, not just to run the network and to understand the network a little bit better, but also to inform the rest of the operation of the telecommunications business. So as we think about where we are in terms of network analytics: over the last 20 years, the telecommunications industry has evolved its management infrastructure to abstract away from some of the specific technologies in the network. What do we mean by that? Well, when initial telecommunications networks were designed, there were management systems that were built in. Eventually fault management systems, assurance systems, provisioning systems, and so on were abstracted away. So it didn't matter what network technology you had, whether it was Nokia technology or Ericsson technology or Huawei technology or whoever it happened to be. You could just look at your fault management system and understand where faults were happening. As we got into the last 10 or 15 years, telecommunication service providers became more sophisticated in their approach to data analytics, and specifically network analytics, and started asking questions about why and what if in relation to their network performance and network behavior. And so network analytics as an independent function was born, and over time more and more data began to get loaded into the network analytics function. So today just about every carrier in the world has a network analytics function that deals with vast quantities of data in big data environments, and those environments are now being migrated to the cloud, as all telecommunications carriers are migrating as many IT workloads as possible to the cloud. So what are the things happening as we migrate to the cloud that drive enhancements in use cases and in scale in telecommunications network analytics? Well, 5G is the big thing, right? 5G is not just another G. In some senses it is: 5G means greater bandwidth and lower latency and all those good things, so we can watch YouTube videos with less interference and less sluggish bandwidth and so on. But 5G is really about the enterprise and enterprise services transformation. 5G is a more secure kind of network, but it's also a more pervasive network. 5G has a fundamentally different network topology than previous generations, so there are going to be more masts, and that means you can have more pervasive connectivity. So things like IoT and edge applications, autonomous cars, smart cities, these kinds of things are all much better served because you've got more masts. That, of course, means you're going to have a lot more data as well, and we'll get to that. The second piece is immersive digital services. With more masts, more connectivity, lower latency, and higher bandwidth, the potential for services innovation is immense. And we don't know what those services are going to be. We know that technologies like augmented reality and virtual reality have great potential, but we have yet to see where the commercial applications are going to be. The innovation potential for 5G is phenomenal.
It certainly means that we're going to have a lot more edge devices, and that again is going to lead to an increase in the amount of data that we have available. And then there's the idea of pervasive connectivity when it comes to smart cities, autonomous cars, integrated traffic management systems, all of this kind of stuff. Those kinds of smart environments thrive where you've got this pervasive connectivity, this persistent connection to the network. Again, that's going to drive more innovation. And again, because you've got these new connected devices, you're going to get even more data. So this exponential rise in data is really what's driving the change in network analytics. And there are four major vectors driving this increase in data, in terms of both volume and speed. The first is more physical elements. We said already that 5G networks are going to have a different topology; 5G networks will have more devices, more masts. And with more physical elements in the network, you're going to get more data coming off those physical networks. That needs to be aggregated and collected and managed and stored and analyzed and understood, so that we can have a better understanding as to why things happen the way they do, why the network behaves in the ways that it does, and why devices that are connected to the network, and ultimately of course consumers, whether they be enterprises or retail customers, behave the way they do in relation to their interaction with the network. The second is edge nodes and devices. We're going to have an explosion in the number of devices. We've already seen IoT devices, with different kinds of trackers and sensors hanging off the edge of the network, whether it's to make buildings smarter or cars smarter or people smarter in terms of having the measurements and the connectivity and all that sort of stuff. So the numbers of devices on the edge and beyond the edge are going to be phenomenal. One of the things that we've been trying to wrestle with as an industry over the last few years is: where does a telco network end, and where does the enterprise, or even the consumer, network begin? It used to be very clear that the telco network ended at the router, but it's not that clear anymore, because in the enterprise space, particularly with virtualized networking, which we're going to talk about in a second, you start to see end-to-end network services being deployed. Those services in some instances are being managed by the service provider themselves, and in some cases by the enterprise client. Again, the line between where the telco network ends and where the enterprise or the consumer network begins is not clear. So the proliferation of devices at the edge, in terms of what those devices are, what the data yield is, and what the policies are that need to govern those devices around security and privacy, that's all going to be really, really important. The third is virtualized services; we just touched on that briefly. One of the big trends happening right now is not just the shift of IT operations onto the cloud, but the shift of the network onto the cloud, the virtualization of network infrastructure. And that has two major impacts.
First of all, it means that you've got the agility and all of the scale benefits that you get from migrating workloads to the cloud, the elasticity and the growth and all that sort of stuff. But arguably more importantly for the telco, it means that with a virtualized network infrastructure, you can offer entire networks to enterprise clients. So, selling to a government department, for example, that's looking to stand up a system for export certification, something like that: you can not just sell them the connectivity, you can sell them the networking and the infrastructure to serve that entire end-to-end application. You could offer them, in theory, an entire end-to-end communications network. And with 5G network slicing, they can even have their own little piece of the 5G bandwidth that's been allocated to a carrier, and have a complete end-to-end environment. So the kinds of services that can be offered by telcos, given virtualized network infrastructure, are many and varied, and it's an outstanding opportunity. But what it also means is that the number of network elements, virtualized in this case, is also exploding. And that means the amount of data informing us as to how those network elements are behaving and performing is going to go up as well. And then finally, AI complexity. On the demand side, while historically network analytics and big data have been driven by returns in terms of data monetization, whether that's through cost avoidance, service assurance, or revenue generation, AI is now transforming telecommunications and every other industry. The potential for autonomous operations is extremely attractive. And so understanding how the end-to-end telecommunication service delivery infrastructure works is essential as a training ground for AI models that can help to automate a huge amount of telecommunications operating processes. So the AI demand for data is just going through the roof. All of these things combine to mean that big data is exploding; it is absolutely going through the roof. So that's a huge thing that's happening. As telecommunications companies around the world look at their network analytics infrastructure, which was initially designed primarily for service assurance, and at how they migrate it to the cloud, these things are impacting those decisions. Because you're not just migrating a workload that used to run in the data center so that it operates in the cloud; you're migrating a workload while also expanding the use cases in that workload. And bear in mind, many of those workloads are going to need to remain on-prem, so they'll need to be within a private cloud, or at best a hybrid cloud environment, in order to satisfy regulatory jurisdiction requirements. So let's talk about an example. LG Uplus is a fantastic service provider in Korea, with huge growth in that business over the last 10 or 15 years. Obviously most people would be familiar with LG, the electronics brand, maybe less so with LG Uplus, but they've been doing phenomenal work and were the first business in the world to launch commercial 5G, in 2019: a huge milestone. And at the same time they deployed the Network Real-time Analytics Platform, or NRAP, from a combination of Cloudera and our partner Caremark.
Now, there were a number of things driving the requirement for the analytics platform at the time. Clearly the 5G launch was the big thing they had in mind, but there were other things at play as well. So within the 5G launch, they were looking for visibility of services, service assurance, and service quality. You know: what services have been launched? How are they being taken up? What are the issues that are arising? Where are the faults happening? Where are the problems? Because clearly, when you launch a new service like that, you want to understand and be on top of the issues as they arise. So that was really, really important. A second piece, and this is not a new story to any telco in the world, right, is that there are silos in operation, so eliminating redundancies through the process of digital transformation was really important. In particular, the two silos between the wired and wireless sides of the business needed to come together, so that there would be an integrated network management system for LG Uplus as they rolled out 5G. So eliminating redundancy and driving cost savings through the integration of the silos was really, really important. And that's a process and people thing every bit as much as it is a systems and data thing, so that was another big driver. And the fourth one, you know, we've talked a little bit about some of these things, right? 5G brings huge opportunity for enterprise services innovation. So Industry 4.0 and digital experience, these kinds of use cases, were very important in the South Korean market and in the business of LG Uplus. And then there was looking at AI and how you can apply AI to network management. There are a number of really exciting use cases that have gone live in LG Uplus since we did this initial deployment, and they're making fantastic strides there. Big data analytics for users across LG Uplus was another: it's not just for the immediate application of 5G, or the support of the 5G network, but also for other data analysts and data scientists across the LG Uplus business. While network analytics' primary use case is around network management, it has applications across the entire business, right? For customer churn, for next best offer, for understanding customer experience and customer behavior, for digital advertising, for product innovation; all sorts of different use cases and departments within the business needed access to this information. So collaboration and sharing across the real-time network analytics platform was very important. And then finally, as I mentioned, LG Group is much bigger than just LG Uplus. It's got the electronics and other pieces, and they had launched a major group-wide digital transformation program in 2019, so being a part of that was important as well. Some of the themes they were looking to address: first of all, the integration of wired and wireless data sources, so getting your assurance data sources, your network data sources and so on integrated was really, really important. Scale was massive for them: they're talking about billions of transactions processed in under a minute, and hundreds of terabytes per day. So, you know, phenomenal scale that needed to be available out of the box, as it were. And real-time indicators and alarms.
There were lots of KPIs and thresholds set to meet certain criteria, certain standards. Customer-specific real-time analysis of 5G services, particularly for the launch; root cause analysis and AI-based prediction of service anomalies and service issues was a core use case. As I talked about already, the provision of data services across the organization. And then support for business service impact was extremely important. So it's not just understanding that you have an outage in a particular network element, but what is the impact on the business of LG Uplus, and also what is the impact on the business of the customer, from an outage or an anomaly or a problem on the network? Being able to answer those kinds of questions was really, really important too. And as I said, between Cloudera and Caremark and LG Uplus, who were themselves an intrinsic part of the solution, this is what we ended up building. So, a big, complicated architecture slide. I really don't want to go into too much detail here; you can see these things for yourself, but let me skip through it really quickly. First of all, the key data sources. You have all of your wireless network information, and then other data sources, and this is really important because it sometimes gets skipped over: there are other systems in place, like the enterprise data warehouse, that needed to be integrated as well. Southbound and northbound interfaces: we get our data, you know, from the network and from network management applications through both file interfaces and streaming, where Kafka and NiFi are important technologies, and also from the RDBMS systems, like the enterprise data warehouse, which we're able to feed into the system. And then northbound, we spoke already about making network analytics services available across the enterprise, so having both a file and an API interface available for other systems and other consumers across the enterprise is very important. Lots of stuff going on then in the platform itself. Two petabytes of persistent storage on Cloudera HDFS across 300 nodes for the raw data storage, and then Kudu for real-time storage, for real-time indicator analysis and generation and other real-time processes. At the core of the solution there were Spark processes for ETL, key quality indicators, and alarming, along with a bunch of work done around data preparation and data generation for transferral to third-party systems through the northbound interfaces. Impala API queries serve the real-time systems there on the right-hand side, and then there's a whole bunch of clustering, classification, and prediction jobs through the ML processes, the machine learning processes. Again, another key use case, and we've done a bunch of work on that; I encourage you to have a look at the Cloudera website for more detail on some of the work that we did here. Some pretty cool stuff. And then finally, the upstream services; there are lots more than simply these ones, but service assurance is really, really important, so SQM, CEM, and ACD, that is, service quality management, customer experience management, and autonomous control, are really important consumers of the real-time analytics platform, along with your conventional service assurance functions like fault and performance management. These things are as much consumers of the information in the network analytics platform as they are providers of data to it.
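To make the real-time indicator and alarm piece concrete, here is a minimal sketch of the kind of threshold query Impala could run against a Kudu-backed table in an architecture like this one. The table name, columns, time window, and the 50 ms latency threshold are illustrative assumptions, not details from the talk:

```sql
-- Hypothetical Kudu-backed table: kpi_events(cell_id, kpi_name, kpi_value, event_time)
-- Flag cells whose average latency over the last five minutes breaches an assumed threshold.
SELECT cell_id,
       AVG(kpi_value) AS avg_latency_ms
FROM kpi_events
WHERE kpi_name = 'latency_ms'
  AND event_time >= now() - INTERVAL 5 MINUTES
GROUP BY cell_id
HAVING AVG(kpi_value) > 50;
```

A scheduler or streaming job would run something like this continuously and raise an alarm for each row returned.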
So, some of the specific use cases that have been stood up and are delivering value to this day; there are lots more besides, but these are just three that we pulled out. First of all, service-specific monitoring and customer quality analysis, care, and response. Again, growing from the initial 5G launch and then broadening into wider services: understanding where there are issues, so that when people complain, when people have an issue, we can answer the concerns of the client in a substantive way. Second, AI functions around root cause analysis: understanding why things went wrong when they went wrong, and also making recommendations as to how to avoid those occurrences in the future; you know, what preventative measures can be taken. And then finally, the collaboration function across LG Uplus, extremely important and still important to this day, where data is shared throughout the enterprise through the API layer, through file interfaces, and through integrations with upstream systems. So that's the real quick run-through of LG Uplus. And the numbers are just staggering. We've seen upwards of a billion transactions processed in under 40 seconds in testing, and we've already gone beyond those thresholds. And this isn't just a theoretical benchmarking exercise; we expect to see these kinds of volumes of real data not too far down the track. The things I mentioned earlier, the proliferation of network infrastructure in the 5G context, virtualized elements, all of these other bits and pieces, are driving massive volumes of data towards the network analytics platform. So, phenomenal scale, and this is just one example. We work with service providers all over the world; over 80% of the top 100 telecommunication service providers run on Cloudera. They use Cloudera in the network, and we're seeing those customers all migrating legacy Cloudera platforms onto CDP, the Cloudera Data Platform. They're increasing the jobs that they do: it's not just warehousing, not just ingestion and ETL, they're moving into things like machine learning. And they're looking at new data sources, from places like NWDAF, the network data analytics function in 5G, or the management and orchestration layer in software-defined networking and network function virtualization. So new use cases coming in all the time, new data sources coming in all the time, and growth in the application scope, as we say, from edge to AI. It's really exciting to see how the footprint is growing and how the applications in telecommunications are really making a difference in facilitating network transformation. And that's me covered for today. I hope you found that helpful. By all means, please reach out. There are a couple of links here: you can follow me on Twitter, you can connect to the telecommunications page, or reach out to me directly at Cloudera. I'd love to answer your questions and talk to you about how big data is transforming networks and how network transformation is accelerating telcos throughout the world.

Published Date: Aug 5, 2021


Rich Gaston, Micro Focus | Virtual Vertica BDC 2020


 

(upbeat music) >> Announcer: It's theCUBE, covering the virtual Vertica Big Data Conference 2020, brought to you by Vertica. >> Welcome back to the Vertica Virtual Big Data Conference, BDC 2020. You know, it was supposed to be a physical event in Boston at the Encore. Vertica pivoted to a digital event, and we're pleased that theCUBE could participate, because we've participated in every BDC since the inception. Rich Gaston is this year's guest; he's the global solutions architect for security, risk, and governance at Micro Focus. Rich, thanks for coming on, good to see you. >> Hey, thank you very much for having me. >> So you've got a chewy title, man. You've got a lot of stuff, a lot of hairy things in there. But maybe you can talk about your role as an architect in those spaces. >> Sure, absolutely. We handle a lot of different requests from the Global 2000 type of organization that will try to move various business processes, various application systems, databases, into new realms. Whether they're looking at opening up new business opportunities, whether they're looking at sharing data with partners securely, they might be migrating to cloud applications and doing migration into a hybrid IT architecture. So we take those large organizations, with their existing installed base of technical platforms, data, and users, and try to chart a course to the future, using Micro Focus technologies but also partnering with other third parties out there in the ecosystem. We have large, solid relationships with the big cloud vendors, and also with a lot of the big database vendors. Vertica's our in-house solution for big data and analytics, and we are one of the first integrated data security solutions with Vertica. We've had great success out in the customer base with Vertica, as organizations have tried to add another layer of security around their data. So what we try to emphasize is an enterprise-wide data security approach, where you're taking a look at data as it flows throughout the enterprise: from its inception, where it's created, where it's ingested, all the way through the utilization of that data, and then on to the other uses, where we might be doing shared analytics with third parties. How do we do that in a secure way that maintains regulatory compliance, and that also keeps our company safe against data breach? >> A lot has changed since the early days of big data, certainly since the inception of Vertica. It used to be, with big data, everyone was rushing to figure it out. You had a lot of skunkworks going on, and it was just like, figure out data. And then as organizations began to figure it out, they realized, wow, who's governing this stuff? A lot of shadow IT was going on, and then the CIO was called in to sort of rein that back in. As well, you know, with all kinds of fake news, the hacking of elections, and so forth, the sense of heightened security has gone up dramatically. So I wonder if you can talk about the changes that have occurred in the last several years, and how you guys are responding. >> You know, it's a great question, and it's been an amazing journey, because I was walking down the street here in my hometown of San Francisco at Christmastime years ago, and I got a call from my bank, and they said: we want to inform you your card has been breached, a hack at Target Corporation, and they got your card, and they also got your PIN. So you're going to need to get a new card; we're going to cancel this one. Do you need some cash?
I said, yeah, it's Christmastime, so I need to do some shopping. And so they worked with me to make sure that I could get that cash, and then get the new card and the new PIN. And being a professional on the inside of the industry, I really questioned: how did they get the PIN? Tell me more about this. And they said, well, we don't know the details, but I'm sure you'll find out. And in fact, we did find out a lot about that breach and what it did to Target: a $250 million immediate impact, CIO gone, CEO gone. This was a big one in the industry, and it really woke a lot of people up to the different types of threats on the data that we're facing with our largest organizations. Not just financial data; medical data, personal data of all kinds. Flash forward to the Cambridge Analytica scandal, where Facebook is handing off data in a partnership agreement with someone they think they can trust, and then that data is misused. And who's going to end up paying the cost of that? Well, it's going to be Facebook, to the tune of about five billion on that, plus some other fines that'll come along, and other costs that they're facing. So what we've seen over the course of the past several years has been an evolution: from data breach making the headlines, to our customers coming to us and saying, help us neutralize the threat of this breach. Help us mitigate this risk, and manage this risk. What do we need to be doing? What are the best practices in the industry? Clearly what we're doing on perimeter security, application security, and platform security is not enough. We continue to have breaches, and we are the experts at that answer. The follow-on, fascinating piece has been the regulators jumping in now. First in Europe, but now we see California enacting a law just this year. It's very stringent, with a lot of deep protections that are really far-reaching around the personal data of consumers. Look at jurisdictions like Australia, where fiduciary responsibility now goes to the board of directors. That's getting attention. For a regulated entity in Australia, if you're on the board of directors, you better have a plan for data security. And if there is a breach, you need to follow protocols, or you personally will be liable. And that is a sea change that we're seeing out in the industry. So we're getting a lot of attention on both: how do we neutralize the risk of breach, but also how can we use software tools to maintain and support our regulatory compliance efforts as we work with, say, the largest money center bank out of New York. I've watched their audits year after year, and they've gotten more and more stringent, more and more specific: tell me more about this aspect of data security, tell me more about encryption, tell me more about key management. The auditors are getting better. And we're supporting our customers in that journey to provide better security for the data, to provide a better operational environment for them, and to be able to roll new services out with confidence that they're not going to get breached. With that confidence, they're not going to have a regulatory compliance fine or a nightmare in the press. And these are the major drivers that help us and Vertica sell together into large organizations, to say: let's add some defense in depth to your data. And that's really a key concept in the security field, this concept of defense in depth.
We apply that to the data itself by changing the actual data element. Take "Rich Gaston": I will change that name into ciphertext, and that then yields a whole bunch of benefits throughout the organization as we deal with the lifecycle of that data. >> Okay, so a couple things I want to mention there. So first of all, totally a board-level topic; every board of directors should really have cyber and security as part of its agenda, and it does, for the reasons that you mentioned. The other is, GDPR got it all started. I guess it was May 2018 that the penalties went into effect, and that just created a whole domino effect. You mentioned California enacting its own laws, which, you know, in some cases are even more stringent. And you're seeing this all over the world. So I think one of the questions I have is, how do you approach all this variability? It seems to me you can't just take a narrow approach. You have to have an end-to-end perspective on governance and risk and security, and the like. So are you able to do that? And if so, how so? >> Absolutely. I think one of the key concerns in big data in particular has been that we have a schema, we have database tables, we have columns, and we have data, but we're not exactly sure what's in there. We have application developers that have been given sandbox space in our clusters, and what are they putting in there? So can we discover that data? We have those tools within Micro Focus to discover sensitive data within your data stores, but we can also protect that data, and then we'll track it. And what we really find is that when you protect, let's say, five billion rows of a customer database, we can now know what is being done with that data on a very fine-grained and granular basis: to say that this business process has a justified need to see the data in the clear, we're going to give them that authorization, and they can decrypt the data. Secure Data, my product, knows about that and tracks it, and can report on it and say: at this date and time, Rich Gaston did the following thing to be able to pull data in the clear. And that can then be used to support the regulatory compliance responses, and the audit, to say who really has access to this, and what really is that data. Then in GDPR, we're getting down into much more fine-grained decisions around who can get access to the data, and who cannot. And organizations are scrambling. One of the funny conversations that I had a couple years ago, as GDPR came into place, was that a couple of customers were taking the sort of brute-force approach of: we're going to move our analytics and all of our data to European data centers, because we believe that if we do this in the U.S. we're going to violate their law, but if we do it all in Europe, we'll be okay. And that simply was a short-term way of thinking about it. You really can't be moving your data around the globe to try to satisfy a particular jurisdiction. You have to apply the controls and the policies, and put the software layers in place, to make sure that anywhere someone wants to get that data, we have the ability to look at that transaction and say it is or is not authorized, and that we have a rock-solid way of approaching that for audit, for compliance, and for risk management. And once you do that, then you really open up the organization to go back and use those tools the way they were meant to be used.
We can use Vertica for AI, we can use Vertica for machine learning, and for all kinds of really cool use cases being done with IoT, and with other kinds of cases that require data being managed at scale, but with security. And that's the challenge, I think, in the current era: how do we do this in an elegant way? How do we do it in a way that's future-proof when CCPA comes in? How can I lay this on as another layer of audit responsibility and control around my data, so that I can satisfy those regulators as well as the folks over in Europe and Singapore and China and Turkey and Australia? It goes on and on. Each jurisdiction out there is now requiring audit. And like I mentioned, the audits are getting tougher. And if you read the news, the GDPR example I think is classic. They told us in 2016, it's coming. They told us in 2018, it's here. They're telling us in 2020, we're serious about this, here are the fines, and you better be aware that we're coming to audit you. And when we audit you, we're going to be asking some tough questions. If you can't answer those in a timely manner, then you're going to be facing some serious consequences, and I think that's what's getting attention. >> Yeah, so the whole big data thing started with Hadoop, and Hadoop is open, it's distributed, and it just created a real governance challenge. I want to talk about your solutions in this space. Can you tell us more about Micro Focus Voltage? I want to understand what it is, and then get into sort of how it works, and then I really want to understand how it's applied to Vertica. >> Yeah, absolutely, that's a great question. First of all, we were the originators of format-preserving encryption. We developed some of the core basic research out of Stanford University that then became the company of Voltage, and that built a brand name that we apply even though we're part of Micro Focus. So the lineage still goes back to Dr. Boneh down at Stanford, one of my buddies there, and he's still at it, doing amazing work in cryptography and keeping the industry, and the science of cryptography, moving forward. It's a very deep science, and we all want to have it peer-reviewed, we all want it to be attacked, we all want it to be proved secure, because we're not going to sell something to a major money center bank that is potentially risky because it's obscure and proprietary. So we have an open standard. For six years, we worked with the Department of Commerce to get our standard approved by NIST, the National Institute of Standards and Technology. They initially said, well, AES-256 is going to be fine. And we said, well, it's fine for certain use cases, but for your database, you don't want to change your schema, and you don't want a big increase in storage costs. What we want is format-preserving encryption. And what that does is turn my name, Rich, into a four-letter ciphertext. It can be reversed. The mathematics of that are fascinating, really deep and amazing, but we make it very simple for the end customer, because we produce APIs. These application programming interfaces can be accessed by applications in C or Java, C#, or other languages, but they can also be accessed in a microservice manner via REST and web service APIs. And that's the core of our technical platform.
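To give a feel for what format preservation looks like through the Vertica UDx that comes up later in this conversation, here is a minimal hedged sketch. The function name follows the integration's VoltageSecureProtect style, but the format names are illustrative placeholders for formats defined on the Secure Data appliance, and the sample outputs are invented to show the shape-preserving behavior:

```sql
-- Format-preserving encryption keeps length and character classes intact.
SELECT VoltageSecureProtect('Rich'        USING PARAMETERS format='name_alpha') AS protected_name, -- e.g. 'Xqzt': still four letters
       VoltageSecureProtect('123-45-6789' USING PARAMETERS format='ssn')        AS protected_ssn;  -- e.g. '802-71-4355': digits for digits, dashes kept
```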
We have an appliance-based approach, so we take a Secure Data appliance and put it on-prem; we'll make 50 of them if you're a big company like Verizon and you need to have these co-located around the globe. No problem; we can scale to the largest enterprise needs. But our typical customer will install several appliances and get going with a couple of environments, like QA and prod, to start getting encryption going inside their organization. Once the appliances are set up and installed, it takes just a couple of days of work for a typical technical staff to get done. Then you're up and running and able to plug in the clients. Now, what are the clients? Vertica's a huge one. Vertica's one of our most powerful client endpoints, because you're able to take that API and put it inside Vertica, and it's all open on the internet. We can go and look at Vertica.com/secure data; you get all of our documentation on it, and you understand how to use it very quickly. The APIs are super simple; they require three parameter inputs. It's a really basic approach to being able to protect and access data. And then it gets very deep from there, because you have data like credit card numbers, very different from a street address, and we want to take a different approach to that. We have data like birthdate, and we want to be able to do analytics on dates. We have deep approaches to managing analytics on protected data, like dates, without having to put them in the clear. So we've maintained a lead in the industry in terms of being an innovator of the FF1 standard; what we call FF1 is format-preserving encryption. We license that to others in the industry, per our NIST agreement. So we're the owner and the operator of it, and others use our technology. And we're the original founders of that, so we continue to lead the industry by adding additional capabilities on top of FF1 that really differentiate us from our competitors. Then you look at our API presence. We can definitely run in Hadoop, but we also run in open systems. We run on mainframe, we run on mobile. So anywhere in the enterprise or in the cloud, anywhere you want to be able to put secure data and be able to access the protected data, we're going to be there and able to support you. >> Okay, so let's say I've talked to a lot of customers this week, and let's say I'm running in Eon Mode, and I've got some workload running in AWS and some on-prem. I'm going to take an appliance, or multiple appliances, and put them on-prem, but that will also secure my cloud workloads as part of a sort of shared responsibility model, for example? Or how does that work? >> No, that's absolutely correct. We're really flexible, in that we can run on-prem or in the cloud as far as our crypto engine goes. The key management is really hard stuff; cryptography is really hard stuff, and we take care of all that. We've baked it all in, and we can run it for you as a service, either in the cloud or on-prem on your small VMs. So it's a really lightweight footprint for running the infrastructure. When I look at an organization like you just described, it's a classic example of where we fit, because we will be able to protect that data. Let's say you're ingesting it from a third party, or from an operational system; you have a website that collects customer data. Someone has now registered as a new customer, and they're going to do e-commerce with you. We'll take that data, and we'll protect it right at the point of capture.
And we can now flow that through the organization and decrypt it at will, on any platform you have where you need us to operate. So let's say you wanted to take that customer data from the operational transaction system, throw it into Eon, throw it into the cloud, and do analytics there on that data, and we may need some decryption: we can place Secure Data wherever you want, to service that use case. In most cases, what you're doing is a simple, tiny little atomic fetch across a protected tunnel, your typical TLS tunnel. And once that key is cached within our client, we maintain all that technology for you. You don't have to know about key management or caching; we're good at that, that's our job. And then you'll be able to make those API calls to access or protect the data, and apply the authorization and authentication controls that you need to service your security requirements. So you might have third parties having access to your Vertica clusters. That is a special need, and we have the ability to say employees can get X, and the third party can get Y. And that's a really interesting use case we're seeing for shared analytics on the internet now. >> Yeah, for sure, so you can set the policy how you want. You know, I have to ask you: in a perfect world, I would encrypt everything, but part of the reason why people don't is performance concerns. Can you talk about that? You touched upon it, I think, with your sort of atomic access, and I know it's Vertica, it's a Ferrari, etc., but anything that slows it down is going to be a concern. Are customers concerned about that? What are the performance implications of running encryption on Vertica? >> Great question there as well, and what we see is that we want to be able to apply scale where it's needed. So if you look at ingest platforms, Vertica is commonly connected up to something like Kafka. Maybe StreamSets, maybe NiFi; there are a variety of different technologies that can route that data and pipe it into Vertica at scale. Secure Data is architected to go along with that architecture, at the node, at the executor, or at the lowest operator level. And what I mean by that is that we don't have a bottleneck where everything has to go through one process or one box or one channel to operate. We don't put an interceptor in between your data coming and going. That's not our approach, because those approaches are fragile and they're slow. So we typically focus on integrating our APIs natively within those pipeline processes that come into Vertica. Within the Vertica ingestion process itself, you can simply apply our protection when you do the copy command in Vertica. So it's a really basic, simple use case that everybody is typically familiar with in Vertica land: copy the data and put it into Vertica, and you simply say "protect" as part of the copy. So my first name is coming in as part of this ingestion; I'll simply put the protect keyword in the syntax, right in SQL. It's nothing other than an extension of SQL. Very, very simple for the developer: easy to read, easy to write. And then you're going to provide the parameters that you need to say, oh, the name is protected with this kind of a format, to differentiate between a credit card number and an alphanumeric string, for example. So once you do that, you then have the ability to decrypt.
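To make that concrete, here is a hedged sketch of the protect-on-copy pattern, together with the "access" call that Rich describes next. The VoltageSecureProtect and VoltageSecureAccess names follow the integration's documented style, but the format names, columns, and file path here are illustrative assumptions:

```sql
-- Protect at ingest: read raw values through FILLER columns and store only ciphertext.
COPY customers (
    first_name_raw FILLER VARCHAR(64),
    cc_raw         FILLER VARCHAR(19),
    first_name AS VoltageSecureProtect(first_name_raw USING PARAMETERS format='name_alpha'),
    cc_number  AS VoltageSecureProtect(cc_raw         USING PARAMETERS format='cc')
)
FROM '/data/customers.csv' DELIMITER ',';

-- Decrypt at query time, only where there's a justified business need.
SELECT VoltageSecureAccess(first_name USING PARAMETERS format='name_alpha') AS first_name
FROM customers;
```

Because both calls execute inside each node's own process, the work scales out with the cluster rather than funneling through a middle tier, which is the point about avoiding interceptors.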
Now, on decrypt, let's look at a couple of different use cases. First, within Vertica, we might be doing select statements, we might be doing all kinds of jobs within Vertica that just operate at the SQL layer. Again, just insert the word "access" into the Vertica select string, and provide us with the data that you want to access; that's our word for decryption, that's our lingo. And we will then, at the Vertica level, harness the power of its CPU, its RAM, its horsepower at the node, to operate on that decryption request. So that gives us the speed and the ability to scale out. If you start with two nodes of Vertica, we're going to operate at some number of hundreds of thousands of transactions a second, depending on what you're doing. Long strings are a little more intensive in terms of performance, but short strings, like social security numbers, are our sweet spot. So we operate at very high speed on those, and you won't notice the overhead with Vertica, per se, at the node level. When you scale Vertica up and you have 50 nodes, and you have large clusters of Vertica resources, then we scale with you, and we're not a bottleneck at any particular point. Everybody's operating independently, but they're all copies of each other, all doing the same operation: fetch a key, do the work, go to sleep. >> Yeah, you know, a lot of the customers have said to us this week that one of the reasons why they like Vertica is it's very mature, it's been around, it's got a lot of functionality, and of course, look, security, I understand, is kind of table stakes, but it can also be a differentiator. You know, the big enterprises that you sell to, they're asking for security assessments, SOC 2 reports, penetration testing, and I think I'm hearing, with the partnership here, you're sort of passing those with flying colors. Are you able to make security a differentiator, or is it just sort of, everybody's got to have good security? What are your thoughts on that? >> Well, there's good security, and then there's great security. And what I found with one of my money center bank customers, based here in San Francisco, was the concern around insider access when they had a large data store. The concern was that a DBA, a database administrator who has privilege to everything, could potentially exfiltrate data out of the organization and, in one fell swoop, create havoc for them, because of the amount of data that was present in that data store and the sensitivity of that data. So when you put Voltage encryption on top of Vertica, what you're doing is putting a layer in place that would prevent that kind of a breach. So you're looking at insider threats, you're looking at external threats, and you're also looking at being able to pass your audit with flying colors. The audits are getting tougher. And when they say, tell me about your encryption, tell me about your authentication scheme, show me the access control list that says that this person can or cannot get access to something, they're asking tougher questions. That's where Secure Data can come in and give you that quick answer: it's encrypted at rest, it's encrypted and protected while it's in use, and we can show you exactly who's had access to that data, because it's tracked via a different layer, a different appliance. And I would even draw an analogy: many of our customers use a device called a hardware security module, an HSM.
Now, these are fairly expensive devices that were invented for military applications and adopted by banks, and now they're really spreading out, and people say, do I need an HSM? Well, with Secure Data, we certainly protect your crypto very, very well. We have very solid engineering; I'll stand on that any day of the week. But your auditor is going to want to ask a checkbox question: do you have an HSM, yes or no? Because the auditor understands it's another layer of protection, and it provides another tamper-evident layer of protection around your key management and your crypto. And we, as professionals in the industry, nod and say, that is worth it. That's an expensive option that you're going to add on, but your auditor's going to want it. If you're in financial services and you're dealing with PCI data, you're going to enjoy the checkbox that says, yes, I have HSMs, rather than getting into some arcane conversation around, well, no, but it's good enough. That's the kind of conversation we get into when folks want to say: Vertica has great security, Vertica's fantastic on security, why would I want Secure Data as well? It's another layer of protection, and it's defense in depth for your data. When you believe in that, when you take security really seriously, and you're really paranoid, like a person like myself, then you're going to invest in the kinds of solutions that get you best-in-class results. >> So I'm hearing a data-centric approach to security. Security experts will tell you, you've got to layer it. I often say, we live in a new world. We used to just build a moat around the queen, but the queen, she's leaving her castle in this world of distributed data. Rich, an incredibly knowledgeable guest; we really appreciate you being on the front lines and sharing your knowledge about this important topic. So thanks for coming on theCUBE. >> Hey, thank you very much. >> You're welcome, and thanks for watching, everybody. This is Dave Vellante for theCUBE. We're covering wall-to-wall coverage of the Virtual Vertica BDC, the Big Data Conference, remotely and digitally. Thanks for watching. Keep it right there; we'll be right back after this short break. (intense music)

Published Date: Mar 31, 2020


Keep Data Private: Prepare and Analyze Without Unencrypting With Voltage SecureData for Vertica


 

>> Paige: Hello, everybody, and thank you for joining us today for the Virtual Vertica BDC 2020. Today's breakout session is entitled "Keep Data Private: Prepare and Analyze Without Unencrypting, With Voltage SecureData for Vertica." I'm Paige Roberts, Open Source Relations Manager at Vertica, and I'll be your host for this session. Joining me is Rich Gaston, Global Solutions Architect for Security, Risk, and Governance at Voltage. And before we begin, I encourage you to submit your questions or comments during the virtual session; you don't have to wait till the end. Just type your question or comment as it occurs to you in the question box below the slides, and then click Submit. There'll be a Q&A session at the end of the presentation, where we'll try to answer as many of your questions as we're able to get to during the time. Any questions that we don't address, we'll do our best to answer offline. Now, if you want, you can visit the Vertica Forum to post your questions there after the session. That's going to take the place of the Developer Lounge, and our engineering team is planning to join the Forum to keep the conversation going. As a reminder, you can also maximize your screen by clicking the double-arrow button in the lower-right corner of the slides; that'll allow you to see the slides better. And before you ask: yes, this virtual session is being recorded, and it will be available to view on demand this week. We'll send you a notification as soon as it's ready. All right, let's get started. Over to you, Rich. >> Rich: Hey, thank you very much, Paige, and I appreciate the opportunity to discuss this topic with the audience. My name is Rich Gaston, and I'm a Global Solutions Architect within the Micro Focus team. I work on global data privacy and protection efforts for many different organizations looking to take that journey toward breach defense and regulatory compliance, on platforms ranging from mobile to mainframe, everything in between, cloud, you name it; we're there in terms of our solution sets. Vertica is one of our major partners in this space, and I'm very excited to talk with you today about our solutions on the Vertica platform. First, let's talk a little bit about what you're not going to learn today, and that is, on screen you'll see just part of the mathematics that goes into the format-preserving encryption algorithm. We are the originators, authors, and patent holders of that algorithm. It came out of research from Stanford University back in the '90s, and we are very proud to have taken it out into the market through the NIST standard process, and to license it to others. So we are the originators and maintainers of the standard, and a thought leader in the industry. We try to make this easy, and you don't have to learn any of this tough math. Behind this there are also many other layers of technology that are part of the security platform, such as stateless key management. That's a really complex area, and we make it very simple for you. We have very mature and powerful products in that space that really make your job quite easy when you want to implement our technology within Vertica. So today, our goal is to make data protection easy for you: to help you understand the basics of Voltage SecureData, to show how the Vertica UDx can help you get started quickly, and to look at some examples of how Vertica plus Voltage SecureData are working together in our customer cases out in the field.
First, let's take you through a quick introduction to Voltage SecureData: the business drivers, and what this is all about. First of all, we started off with breach defense. We see that despite continued investments in perimeter and platform security, data breaches continue to occur. Voltage SecureData plus Vertica provides defense in depth for sensitive data, and that's a key concept we're going to keep referring to. In the security field, defense in depth is a standard approach to providing more layers of protection around sensitive assets, such as your data, and that's exactly what SecureData is designed to do. Now that we've come through many of these breach examples, with the big-ticket items getting in the news around breaches and their impact, the regulators have stepped up, and regulatory compliance is now a hot topic in data privacy. Regulations such as GDPR came online in 2018 for the EU. CCPA came online just this year, a couple of months ago, for California, and is the de facto standard for the United States now, as organizations look at the best practices for providing regulatory compliance around data privacy and protection. These give massive new rights to consumers, but also obligations to organizations to protect that personal data. SecureData plus Vertica provides fine-grained authorization around sensitive data, and we're going to show you exactly how that works within the Vertica platform. At the bottom, you'll see some snippets of the news articles that just keep racking up. Our goal is to keep you out of the news, to keep your company safe, so that you can have the assurance that even if there is an unintentional or intentional breach of data out of the corporation, if it is protected by Voltage SecureData, it will be of no value to those hackers, and then you have no impact in terms of risk to the organization. What do we mean by defense in depth? Let's take a look first at the encryption types and the benefits that they provide; we see our customers implementing all kinds of different protection mechanisms within the organization. You could be looking at disk-level protection, file system protection, or protection on the files themselves. You could protect the entire database, or you could protect your transmissions as they go from the client to the server via TLS or other protected tunnels. And then we look at field-level encryption, and that's what we're talking about today. That's all the above protections, at the perimeter level and the platform level, plus we're giving you granular access control to your sensitive data. Our main message is: keep the data protected from the earliest possible point, and only access it when you have a valid business need to do so. That's a really critical aspect, as we see Vertica customers loading terabytes, petabytes of data into clusters of Vertica, with the Vertica database giving access to that data out to a wide variety of end users. We started off with organizations having four people in an office doing data science, or analytics, or data warehousing, or whatever it's called within an organization, and that's now ballooned out to a new customer coming in and telling us: we're going to have 1,000 people accessing it, plus service accounts accessing Vertica. We need to be able to provide fine-grained access control, and to understand what folks are doing with that sensitive data, and how we can secure it with the best practices possible.
Stated very simply, Voltage protects data at rest and in motion. The encryption of data facilitates compliance, and it reduces your risk of breach. So if you take a look at what we mean by field level, we could take a name, and that name might not just be in US ASCII. Here we have a sort of Latin-1 extended example, Harold Potter, and we can take a look at the example protected data. Notice that we're taking a character-set approach to protecting it, meaning I've got an alphanumeric option here for the format that I'm applying to that name. That gives me a mix of alpha and numeric, plus some of that Latin-1 extended alphabet in there as well, and that's really controllable by the end customer. They can have it be just US ASCII, they can have it be numbers for numbers; you can have a wide variety of different protection mechanisms, including ignoring some characters in the alphabet in case you want to maintain formatting. We've got all the bells and whistles that you would ever want to put on top of format-preserving encryption, and we continue to add more to that platform as we go forward. Taking a look at tax ID, there's an example of numbers for numbers. Pretty basic, but it gives us the idea that we can very quickly and easily keep the data protected while maintaining the format. No schema changes are going to be required when you want to protect that data. If you look at credit card number, a really popular example, the same concept can be applied to tax ID: often the last four digits will be used in a tax ID to verify someone's identity. That could be on an automated telephone system, or a customer service representative just trying to validate the identity of the customer, and we can keep that data in the clear for that purpose while protecting the entire string from breach. Dates are another critical area of concern for a lot of medical use cases, and we're seeing date of birth included in a lot of data privacy conversations. We can protect dates with dates: they're going to be a valid date, and we have some really nifty tools to maintain offsets between dates. So again, we've got real depth of capability within our encryption; it's not a one-size-fits-all approach. GPS location, customer ID, IP address, all of those kinds of data strings can be protected by Voltage SecureData within Vertica. Let's take a look at the UDx basics. So what are we doing when we add Voltage to Vertica? Vertica stays as is in the center. In fact, if you get the Vertica distribution, you're getting the SecureData UDx onboard; you just need to enable it and have the SecureData virtual appliance, that's the box there on the middle right. That's what we come in and add to the mix as we start to add those capabilities to Vertica. On the left-hand side, you'll see that your users, your service accounts, and your analytics are still typically doing Select, Update, Insert, and Delete types of functionality within Vertica. They're going to come into Vertica's access control layer, they're going to access those services via SQL, and we simply extend SQL for Vertica. So when you add the UDx, you get additional syntax that we provide, and we're going to show you examples of that. You can also integrate that with concepts like Views within Vertica.
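To make the field-level idea concrete, here is a hedged sketch of what those format choices might look like through the Vertica UDx described in this session. The VoltageSecureProtect function name follows the integration being presented; the table, column, and format names ('AlphaNumeric', 'SSN', 'CC-LastFour', 'Date') are illustrative assumptions rather than documented values, so check the formats actually defined on your SecureData appliance.

    -- Hedged sketch: format-preserving protection of individual fields.
    -- Format names below are assumptions; real deployments define their
    -- own formats on the SecureData appliance.
    SELECT
        VoltageSecureProtect(first_name  USING PARAMETERS format='AlphaNumeric'),
        VoltageSecureProtect(tax_id      USING PARAMETERS format='SSN'),
        VoltageSecureProtect(card_number USING PARAMETERS format='CC-LastFour'),
        VoltageSecureProtect(birth_date  USING PARAMETERS format='Date')
    FROM customers;

Each call returns ciphertext in the same shape as the input, which is why no schema changes are needed.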
So we can say: let's give a view of the data that presents the data in the clear, using the UDx to decrypt it, and let's give everybody else access to the raw data, which is protected. Third parties could be brought in, folks like contractors, or folks that aren't vetted as closely as a security team might do for internal sensitive data access, and they could be given access to the Vertica cluster without risk of them breaching and going into some area they're not supposed to look at. Vertica has excellent access control, down even to the column level, which is phenomenal and really provides you with world-class security around the Vertica solution itself. SecureData adds another layer of protection, like we're mentioning, so that we can have data protected in use and data protected at rest, and then we have the ability to share that protected data throughout the organization. And that's really where SecureData shines: the ability to protect that data on mainframe, on mobile, on open systems, in the cloud, everywhere you want that data to move to and from Vertica, you can have SecureData integrated with those endpoints as well. That's an additional solution on top of the SecureData plus Vertica solution that is bundled together today for sales purposes, but we can also have a conversation with you about those wider SecureData use cases; we'd be happy to talk to you about that. The SecureData virtual appliance is a lightweight appliance. It sits on something like eight cores, 16 gigs of RAM, and 100 or 200 gigs of disk; really lightweight, and you can have one or many. Most customers have four in production, just for redundancy; they don't need them for scale. But we have some customers with 16 or more in production, because they're running such high volumes of transaction load: they're running a lot of web service transactions, and they're running Vertica as well. So we're going to have those virtual appliances co-located around the globe, hooked up to all kinds of systems, like Syslog, LDAP, and load balancers; we've got a lot of capability within the appliance to fit into your enterprise IT landscape. So let me get you directly into the meat of what the UDx does. If you're technical and you know SQL, this is probably going to be pretty straightforward for you. You'll see the COPY command, used widely in Vertica to get data into Vertica. So let's protect that data while we're ingesting it. Let's grab it from maybe a CSV file and put it straight into Vertica, but protected on the way, and that's what the UDx does. We have VoltageSecureProtect, an added syntax, like I mentioned, in the Vertica SQL. And that allows us to say: we're going to protect the customer first name using the parameter of a format such as hyper alphanumeric. That's our internal lingo for a format within SecureData. It's part of our API, and the API requires very few inputs. The format is the one that you as a developer will be supplying, and you'll have different ones for, say, SSN, and different formats for street address, but you can reuse a lot of your formats across a lot of your PII and PHI data types. Protecting after ingest is also common. So I've got some data that's already been put into a staging area, perhaps I've got a landing zone, a sandbox of some sort, and now I want to move that into a different zone in Vertica, a different area of the schema, and I want to have that data protected.
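Before moving on to the post-ingest case, here is a rough illustration of that protect-on-ingest pattern, using Vertica's COPY with a FILLER column to protect a field in flight. The VoltageSecureProtect syntax follows the talk's description; the file path, column names, and format name are assumptions for the example.

    -- Hedged sketch: protect a field during COPY, so only ciphertext
    -- ever lands in the target table. Format name is an assumption.
    COPY customers (
        customer_id,
        first_name_raw FILLER VARCHAR(100),
        first_name AS VoltageSecureProtect(first_name_raw
                          USING PARAMETERS format='AlphaNumeric')
    )
    FROM '/data/landing/customers.csv' DELIMITER ',';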
We can do that with the UPDATE command, and again, you'll simply notice VoltageSecureProtect; nothing too wild there, basically the same syntax. Next, querying protected data: how do we search once I've encrypted all my data? Well, actually, there's a pretty nifty trick for that. If you want to query protected data, and we have the search string, like a phone number there in this example, simply call VoltageSecureProtect on that string. Now you'll have the ciphertext, and you'll be able to search the stored ciphertext. Again, we're just format-preserving encrypting the data, and it's just a string, and we can always compare those strings using standard syntax in SQL. Using views to decrypt data is again a powerful concept in terms of how to make this work within the Vertica landscape when you have a lot of different groups of users. Views are very powerful for pointing a BI tool at. Business intelligence tools, Cognos, Tableau, etc., might be accessing data from Vertica with simple queries. Well, let's point them to a view that does the hard work and uses the Vertica nodes, with their horsepower of CPU and RAM, to actually run that UDx and do the decryption of the data in use, temporarily in memory, and then throw it away so that it can't be breached. That's a nice way to keep your users active, working, and moving forward with their data access and data analytics, while also keeping the data secure in the process. And then we might want to export some data and push it out to someone in cleartext. We've got a third party that needs to take the tax ID, along with some data, to do some processing. All we need to do is call VoltageSecureAccess; again, very similar to the protect call, you're passing the format parameter again, and boom, we have decrypted the data, using again the Vertica resources of RAM, CPU, and horsepower to do the work. All we're doing with the Voltage SecureData appliance is a real simple little key fetch across a protected tunnel; that's a tiny atomic transaction, it gets done very quickly, and you're good to go. This is it in terms of the UDx: you have a couple of calls and one parameter to pass, everything else is config driven, and really, you're up and running very quickly. We can even do demos and samples of this Vertica UDx using hosted appliances that we put up for pre-sales purposes. So if folks want to get a demo going, we can take that UDx, configure it to point to our appliance sitting on the internet, and within a couple of minutes we're up and running with some simple use cases. Of course, for on-prem deployment, or deployment in the cloud, you'll want your own appliance in your own crypto district, with your own security, but it just shows that we can easily connect to any appliance and get this working in a matter of minutes. Let's take a deeper look at the Voltage plus Vertica solution, and we'll describe some of the use cases and the path to success. First of all, your steps to implementing data-centric security in Vertica. I want to note, there on the left-hand side: identify sensitive data. How do we do this? I have one customer where they look at me and say, Rich, we know exactly what our sensitive data is, we developed the schema, it's our own app, we have a customer table, we don't need any help with this.
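To recap those UDx patterns in one place before looking at other customer situations, here is a hedged sketch of protect-after-ingest, searching protected data, and a decrypting view. The function names follow the talk; the table, column, and format names are illustrative assumptions.

    -- Hedged sketch: protect a staged column in place.
    UPDATE staging_customers
    SET phone = VoltageSecureProtect(phone USING PARAMETERS format='Phone');

    -- Search protected data: protect the search term, compare ciphertext.
    SELECT *
    FROM customers
    WHERE phone = VoltageSecureProtect('555-867-5309'
                      USING PARAMETERS format='Phone');

    -- A view that decrypts in use, on the Vertica nodes, for authorized users.
    CREATE VIEW customers_clear AS
    SELECT customer_id,
           VoltageSecureAccess(phone USING PARAMETERS format='Phone') AS phone
    FROM customers;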
We've got other customers that say, Rich, we have a very complex database environment, with multiple databases, multiple schemas, thousands of tables, and hundreds of thousands of columns; it's really, really complex, help, and we don't know exactly what people have been doing with some of that data; we've got various teams that share this resource. There, we do have additional tools, and I want to give a shout-out to another Micro Focus product, which is called Structured Data Manager. It's a great tool that helps you identify sensitive data, with some really amazing technology under the hood that can go into a Vertica repository, scan those tables, take a sample of rows or do a full table scan, and give you back some really good reports: we think this is sensitive, let's go confirm it and move forward with data protection. So if you need help on that, we've got the tools to do it. Once you identify that sensitive data, you're going to want to understand your data flows and your use cases. Take a look at what analytics you're doing today. What analytics do you want to do on sensitive data in the future? Let's start designing our analytics to work with sensitive data, and there are some tips and tricks we can provide to help you mitigate any concerns around performance or around rewriting your SQL. As you've noted, you can just insert our SQL additions into your code and you're off and running. You'll want to install and configure the UDx and the SecureData software appliance. Well, the UDx is pretty darn simple. The documentation on Vertica is publicly available; you can see how it works and what you need to configure it, one file here, and you're ready to go. So that's a pretty straightforward process. Then you grant some access to the UDx, and that's really up to the customer, because there are many different ways to handle access control in Vertica; we're going to be flexible to fit within your model of access control when adding the UDx to your mix. Each customer is a little different there, so you might want to talk with us a little bit about the best practices for your use cases. But in general, that's going to be up and running in just a minute. The SecureData software appliance is a hardened Linux appliance today that sits on-prem or in the cloud, and you can deploy that. I've seen it done in 15 minutes, but that's with the right technical access to generate certs and do all the setup: setting the firewall and all the DNS entries, basically the blocking and tackling of standing up a software appliance. Corporations can usually get that all done in just a couple of weeks, mostly because they're waiting on other teams, but the software appliances are really fast to get stood up, and they're very simple to administer with our web-based GUI. Then finally, you're going to implement your UDx use cases. Once the software appliance is up and running, we can set authentication methods, we can set up the formats that you're going to use in Vertica, and then those two start talking together. And it should be going in dev and test in about half a day, and then you're running toward production in just a matter of days, in most cases. We've got other customers that say, hey, this is going to be a bigger migration project for us; we might want to split this up into chunks.
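On the grant-access step, the exact model is up to each customer, as noted above, but a minimal sketch in standard Vertica grant syntax might look like the following; the function signatures and role names here are assumptions for illustration, not a documented recipe.

    -- Hedged sketch: one possible access-control arrangement.
    GRANT EXECUTE ON FUNCTION VoltageSecureProtect(VARCHAR) TO etl_role;
    GRANT EXECUTE ON FUNCTION VoltageSecureAccess(VARCHAR)  TO analyst_role;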
Let's do the really sensitive and scary data, like tax ID, first, as our sort of toe-in-the-water approach, and then we'll come back and protect other data elements. That's one way to slice and dice and implement your solution in a planned manner. Another way is schema based. Let's take a look at this section of the schema and implement protection on these data elements; now let's take a look at a different schema, and we'll repeat the process, so you can iteratively move forward with your deployment. So what's the added value when you add Vertica plus Voltage? I want to highlight this distinction because Vertica contains world-class security controls around its database. I'm an old-time DBA from a different product, competing against Vertica in the past, and I'm really aware of the granular access controls that are provided within various platforms. Vertica would rank at the very top of the list in terms of being able to give me very tight control, and a lot of different access methods, to protect the data in a lot of different use cases. So Vertica can handle a lot of your data protection needs right out of the box. Voltage SecureData, as we keep mentioning, adds that defense in depth, and it's going to enable those enterprise-wide use cases as well. So first off, I mentioned this: the standard of FF1, that is format-preserving encryption. We're the authors of it, we continue to maintain it, and we want to emphasize that customers really ought to be very, very careful in choosing a NIST standard when implementing any kind of encryption within the organization. So AES was one of the first, and a hallmark, benchmark encryption algorithm, and in 2016, we were added to that mix as FF1, which is built on AES. If you search NIST and Voltage Security, you'll see us right there as the author of the standard, along with all the processes that went into that approval. We have centralized policy for key management, authentication, audit, and compliance. We can see that Vertica selected or fetched the key to protect some data at this date and time; we can track that, and give you audit and compliance reporting against that data. You can move protected data into and out of Vertica. So you might ingest via NiFi and Kafka, or ingest with StreamSets. There are a variety of different ingestion methods and streaming methods that can get data into Vertica, and we can integrate SecureData with all of those components. We're very well suited to integrate with any Hadoop technology or any big data technology, as we have APIs in a variety of languages, bitnesses, and platforms. So we've got that all out of the box, ready to go for you, if you need it. When you're moving data out of Vertica, you might move it onto an open systems platform, or you might move it to the cloud; we can also operate and do the decryption there, and you're going to get the same plaintext back. And if you protect data over in the cloud and move it into Vertica, you're going to be able to decrypt it in Vertica. That's our cross-platform promise. We've been delivering on that for many, many years, and we now have many, many endpoints that do that in production for the world's largest organizations. We're going to preserve your data format and referential integrity.
So if I protect my social security number today, and I protect another batch of data tomorrow, that same ciphertext will be generated when I put it into Vertica. I can have absolute referential integrity on that data, which allows analytics to occur without even decrypting the data in many cases. And we have decrypt access for authorized users only, with the ability to add LDAP authentication and authorization for UDx users. So you can really have a number of different approaches and flavors of how you implement Voltage within Vertica, but what you're getting is the additional ability to have the confidence that the data is protected at rest, even if I have a DBA that's not vetted, or someone new, or someone from a third party I don't know, being provided DBA-level privileges. They could run select star all day long, and they're going to get ciphertext; they're going to have nothing of any value. And if they want to use the UDx to decrypt it, they're going to be tracked and traced as to their utilization of it. So it allows us to have that control and an additional layer of security on your sensitive data. This may be required by regulatory agencies, and we're seeing compliance audits get more and more strict every year. GDPR was kind of funny, because they said in 2016, hey, this is coming; they said in 2018, it's here; and now they're saying in 2020, hey, we're serious about this, and the fines are mounting. And let's give you some examples to help you understand that these regulations are real, the fines are real, and your reputational damage can be significant if you were to be in breach of regulatory compliance requirements. We're finding so many different use cases now popping up around regional protection of data. I need to protect this data so that it cannot go offshore. I need to protect this data so that people from another region cannot see it. That's all capability that we have within SecureData and can add to Vertica. We have broad platform support, and I mentioned NiFi and Kafka; those would be on the left-hand side, as we start to ingest data from applications into Vertica. We can have landing zone approaches, where we provide some automated scripting at an OS level to protect ETL batch transactions coming in. We can protect within the Vertica UDx, as I mentioned, with the COPY command, directly using Vertica. Everything inside that dot-dash line is the Vertica plus Voltage SecureData combo that's sold together as a single package. Additionally, we'd love to talk with you about the stuff that's outside the dash box, because we have dozens and dozens of endpoints that can protect and access data on many different platforms. And this is where you really start to leverage some of the extensive power of SecureData: to go cross-platform to handle your web-based apps, to handle apps in the cloud, and to handle all of this at scale, with hundreds of thousands of transactions per second of format-preserving encryption. That may not sound like much, but when you take a look at the algorithm and what we're doing on the mathematics side, when you look at everything that goes into that transaction, to me, that's an amazing accomplishment, that we're reaching those kinds of levels of scale. And with Vertica, it scales horizontally: the more nodes you add, the more power you get, and the more throughput you're going to get from Voltage SecureData.
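The referential-integrity point described above is worth a concrete example: because the encryption is deterministic for a given format and key, equal plaintexts produce equal ciphertexts, so ordinary joins and group-bys work on protected columns with no decryption at all. The table and column names below are illustrative assumptions.

    -- Hedged sketch: joining two tables on a protected SSN column.
    -- Both columns were protected with the same format, so the
    -- ciphertexts match wherever the plaintexts matched.
    SELECT a.account_id, c.claim_id
    FROM accounts a
    JOIN claims   c ON a.ssn_protected = c.ssn_protected;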
I want to highlight the next steps for how we can continue to move forward. Our SecureData team is available to you to talk about the landscape, your use cases, and your data. We really love the fact that we've got so many different organizations out there using SecureData in so many different and unique ways. We have vehicle manufacturers who are protecting not just the VIN, not just their customer data, but in fact sensor data from the vehicles, which is sent over the network down to the home base every 15 minutes for every vehicle that's on the road; every vehicle of this customer of ours since 2017 has included that capability. So now we're talking about millions and millions of additional units coming online as those cars are sold, distributed, and used by customers. That sensor data is critical to the customer, and they cannot let it be exfiltrated in the clear. So they protect that data with SecureData, and we have a great track record of being able to meet a variety of different unique requirements, whether it's IoT, web-based apps, e-commerce, or healthcare, all kinds of different industries. We would love to help move the conversations forward, and we do find that it's really a three-party discussion: the customer, SecureData experts in some cases, and the Vertica team. We have great enablement within the Vertica team to explain and present our SecureData solution to you, but we also have the ability to add other experts in, to take that conversation into a broader perspective of how can I protect my data across all my platforms, not just in Vertica. I want to give a shout-out to our friends at Vertica Academy. They're building out great demo and training facilities to help you learn more about these UDx's and how they're implemented. The Academy is a terrific reference and resource for your teams to learn more about the solution in a self-guided way, and then we'd love to have your feedback on that. How can we help you more? What are the topics you'd like to learn more about? How can we look to the future in protecting unstructured data? How can we look to the future of protecting data at scale? What are the requirements that we need to be meeting? Help us through the learning processes and through feedback to the team to get better, and then we'll help you deliver more solutions out to those endpoints and protect that data, so that we're not having data breaches and we're not having regulatory compliance concerns. And then lastly, learn more about the UDx. I mentioned that all of our content there is online and available to the public. So at vertica.com/secureData, you're going to be able to walk through the basics of the UDx: you'll see how simple it is to set up, what the UDx syntax looks like, and how to grant access to it, and then you'll start to be able to figure out, hey, how can I start to put this into a POC in my own environment? Like I mentioned before, we have a publicly available hosted appliance for demo purposes that we can make available to you if you want to POC this. Reach out to us. Let's get a conversation going, and we'll get you the address and some instructions; we can have a quick enablement session.
We really want to make this accessible to you and help demystify the concept of encryption, because when you see it as a developer, and you start to get your hands on it and put it to use, you can very quickly see: huh, I could use this in a variety of different cases, and I could use this to protect my data without impacting my analytics. Those are some of the really big concerns that folks have, and once we get through that learning process and play around with it in a POC way, we can start to really put it into practice in production, to say with confidence: we're going to move forward toward data encryption and have a very good result at the end of the day. This is one of the things I find with customers that's really interesting: their biggest stress is not around the timeframe or the resources, it's really around, this is my data; I have been working on collecting this data and making it available in a very high-quality way for many years. This is my job and I'm responsible for this data, and now you're telling me you're going to encrypt that data? It makes me nervous. And that's common; everybody feels that. So we want to have that conversation, and that sort of trial-and-error process, to say: hey, let's get your feet wet with it and see how you like it in a sandbox environment. Let's then take that into analytics and take a look at how we can make this go for a quick 1.0 release, and let's then take a look at future expansions to that, where we start adding Kafka on the ingest side, or we start sending data off into other machine learning and analytics platforms that we might want to utilize outside of Vertica for certain purposes in certain industries. Let's take a look at those use cases together, and through that journey we can really chart a path toward the future, where we can help you protect that data at rest and in use, and keep you safe from both the hackers and the regulators. And that, I think, at the end of the day is really what it's all about in terms of protecting our data within Vertica. We're going to have a couple of minutes for Q&A, and we would encourage you to ask any questions here, and we'd love to follow up with you more about any questions you might have about Vertica plus Voltage SecureData. Thank you very much for your time today.

Published Date : Mar 30 2020

Nathalie Gholmieh, UCSD | ESCAPE/19


 

[Announcer] - From New York, it's theCUBE! Covering ESCAPE/19. >> Hello, welcome back to theCUBE coverage here in New York City for the inaugural Multi-Cloud Conference called ESCAPE/2019. I'm John Furrier, host of theCUBE. We're here with Natalie Gholmieh, who is the Manager of Data and Integration Services at the University of California San Diego: a sprawling campus and data center, tons of IT, a lot of challenges. Welcome. >> Yeah, thank you for having me. >> So, thanks for taking the time out. You're a practitioner, you're here. Why are you at this conference? What are you hoping to gain from here? What interests you here at the Multi-Cloud Escape Conference? >> So, this conference is very much within the spirit of what we're trying to do. Our CIO has directives, which are to avoid lock-in, to do multi-vendor orchestration, to go with containers first, and open source wherever possible. So, this conference pretty much speaks to all of that. >> So, this is a really interesting data point, because it seems the common thread is data. >> Mhmm. >> The cloud is an integration of things, so people are trying to find that integration point so they can have multiple vendors, multiple clouds. It seems like the multi-vendor world back in the old days, where you had multiple vendors, a heterogeneous environment; data seems to be the linchpin in all this. >> Right, yes. >> That's what you do. >> Right. >> How do you think about this? Because it used to be that the big database ran the world; now you have lots of databases, you have applications. >> Right, yeah. >> Databases are everywhere now. >> Data is born in multiple systems, but the data is also an asset right now to all of the organizations, including the university. So, what we want to try to accomplish is to get all of this data possibly in one place, or in multiple places, and be able to do analytics on top of it, and this is the value-added processing over the data. >> What's exciting to you these days at the University? You guys are trying to change the business, could be technology. What are the cool things that you like, that you're working with right now, or that you envision emerging? >> Yeah. So, my team is currently building a platform to do all of the data integration, and we are planning to offer this platform as a service to developers to streamline and standardize application development, as well as integration development, within the central IT at the University. So this is pretty much the most exciting thing that we're doing: putting together this platform that is quite complex. It is a journey that we're taking together with the people who already operate the existing systems. So we're putting up this new thing that we're operating in parallel, and then we will be migrating to that new platform.
>> So classic DevOps. >> Yeah. >> Separate infrastructure as code, provide a codified layer. >> Yeah. >> So, let me ask you a question. How did you get into all this data business? I mean, what attracted you to the data field? What's your story? Tell us your story. >> So, the data, you know, I personally started, I mean, I had more of a networking background. Then I became Sys Admin, then I got into the business of logging and log aggregation for machine data, and then I was, you know, using that data to create dashboards of system health and, you know, data correlation, and this is what exposed me, personally, first to the data world, and then I saw the value in doing all of this with data, and the value is even more impactful to the business when you're working with actual business data. So I'm very excited about that. >> So you were swimming in the first data lake before data lakes were data lakes. >> Yes, yeah, right, for machine data. >> Once you're in there, you see value, the data exhaust comes in, as we used to say back in the day, data exhaust! >> Yeah. >> So, now that you're dealing with the business value, is the conversation the same? Or are they different conversations? Or is it still the same, kind of, data conversation? Or is the job the same? Because you still have machine data, applications are throwing off data, you have infrastructure data being thrown off, you have new software layers. >> Yes, yeah. >> Is it the same, or is it different? Describe your current situation. >> You know, maybe the concepts are the same, but I think the logging machine data has more value to IT to give insights on how to improve your SLAs, within the scope of IT, but the business data really will impact the business, the whole entire university for us. So, one of the things that we're doing on the business side with the business data is to provide some analytics on the student data in order to increase their chances for success. So, getting all of that data, doing some reports and pattern analytics, and then coaching the students. >> Not a bad place to live, in San Diego, is it? >> Oh, it's excellent. >> Weather's always perfect? >> Oh, yeah. >> Marine layer's not as bad as L.A., but, you know. >> Yeah. >> Or is it? >> No, we do have- The university is right on the coast, so yeah. Sometimes it's gloomy the whole entire day. >> I love it there. I wish I could've gone to school at the University of San Diego. >> It is great. It's a great place to be. >> Love to go down, see my friends in La Jolla, Del Mar, beautiful areas. Great country. >> Yeah. >> Well, thank you for coming on and sharing your insights into multi-cloud and some of the thinking. It seems to be very foundational right now in its whole thinking. >> Mhmm. >> There's no master plan yet. People are really having good conversations around how to set it all up. >> Yeah. >> The architecture. >> Right. >> The role. >> Yeah, yeah. >> Do you see the same thing? >> Yes, architecture is actually a very essential piece of it because you need to plan before you go. If you go without planning, I think your bill is going to be off the roof. >> Huge bill. >> Yeah. >> And you'll sink in the quicksand and the data lake and you can be sucked into the data swamp. >> Yeah. Right. Yeah. So, architecture is a big piece of it, design, then build, and then continuous improvement, that's a huge thing at UC San Diego. >> You know what I get excited about? 
I get excited about real time, and how real time, time series data is becoming a big part of the application development, and understanding the context between good data and bad data. >> Mhmm. >> It's always a hard problem. It's a hard tech problem. >> Yeah, that is true, yeah. There are a lot of processes that should be set around the data to make sure the data's clean and it's a good data set and all of that. >> If data's an asset, then has it got a value? Is it on the balance sheet? Shouldn't we value the data? Some data's more valuable than others? It's a good question, huh? >> It is a good question, but I don't know the answer to that. >> No one knows. We always ask the question. I think that's a future state where at some point, data can be recognized, but right now it's hard to tell what's valuable or not. >> I think the value is in the returned services and the value-added services that you, as an organization, can bring to your customer base. This is where the value is, and if you want to put a dollar amount on that, I don't know, it's not my job. >> Thank you so much for coming on, special time of conversation. >> Thank you. >> CUBE Conversation here, the CUBE Coverage of the first inaugural Multi-Cloud Conference called ESCAPE/19, where the industry best are coming together. Practitioners, entrepreneurs, founders, executives, and finally, just talking about what multi-cloud really can be, foundationally what needs to be in place. And this is what happens here at these conferences, tons of hallway conversations. Natalie, thank you for spending the time with us. CUBE Coverage, I'm John Furrier. Thanks for watching.

Published Date : Oct 19 2019

Rob Bearden, Hortonworks | DataWorks Summit 2018


 

>> Live from San Jose in the heart of Silicon Valley, it's theCUBE covering DataWorks Summit 2018, brought to you by Hortonworks. >> Welcome back to theCUBE's live coverage of DataWorks Summit here in San Jose, California. I'm your host, Rebecca Knight, along with my co-host, James Kobielus. We're joined by Rob Bearden. He is the CEO of Hortonworks. So thanks so much for coming on theCUBE again, Rob. >> Thank you for having us. >> So you just got off the keynote on the main stage. The big theme is really about modern data architecture. What is it all about? How do you think about it? What's your approach? And how do you walk customers through this process? >> Well, there are a lot of moving parts in enabling a modern data architecture. One of the first steps is to unlock the siloed transactional applications and get that data into a central architecture, so you can get real time insights across the inclusive dataset. But what we're really trying to accomplish within that modern data architecture is to bring all types of data together, whether it be real time streaming data, sensor data, IoT data, or data that's coming from a connected core across the network, and to bring all that data together in real time and give the enterprise the ability to take best-in-class action, so that you get a very prescriptive outcome of what you want. So we bring that data under management from the point of origination out on the edge, and then have the platforms that move it through its entire lifecycle, and that's our HDF platform. It gives the customer the ability, after they capture data at the edge, to move it, and then to process it as an event happens, a condition changes, or various conditions come together, and to take the exact action that you want performed against it, and then bring it to rest. And that's where our HDP platform comes into play, where all that data can be aggregated so you can have holistic insight and real time interactions on that data. But then it becomes about deploying those datasets and workloads on the tier that's most economically and architecturally pragmatic. So if that's on-prem, we make sure that we are architected for that on-prem deployment, or private cloud, or even across multiple public clouds simultaneously, and we give the enterprise the ability to support each of those native environments. And so we think hybrid cloud architecture is really where the vast majority of our customers, today and in the future, are going to want to run and deploy their applications and workloads. And that's where our DataPlane Service offering gives them the ability to have that hybrid architecture, with the architectural latitude to move workloads and datasets across each tier transparently to whatever storage file format they chose or wherever that application sits. We provide all the tooling to mask the complexity of doing that, and we ensure that it has one common security framework, one common governance through its entire lifecycle, and one management platform to handle that data's entire lifecycle.
And that's the modern data architecture: the ability to bring all data under management, all types of data, and manage it in real time through its lifecycle until it comes to rest, and to deploy it across whatever architectural tier is most appropriate financially and from a performance standpoint, on cloud or on-prem. >> Rob, this morning at the keynote here on day one at DataWorks San Jose, you presented this whole architecture in the context of what you call hybrid clouds to enable connected communities, and with HDP, Hortonworks Data Platform 3.0, one of the prime announcements, you brought containerization into the story. Could you connect those dots: containerization, connected communities, and HDP 3.0? >> Well, HDP 3.0 is really the foundation for enabling that hybrid architecture natively. What it's done is separate the storage from the compute, so now we have the ability to deploy those workloads via a container strategy across whichever tier makes the most sense, to move those applications and datasets around, and to leverage each tier in the deployment architectures that are most pragmatic. And what that lets us do is bring together all of the different data types, whether it be customer data, supply chain data, or product data. So imagine an industrial piece of equipment, say an airplane, flying from Atlanta, Georgia to London, and you want to make sure you really understand how well each component is performing, so that if that plane is going to need service when it gets there, it doesn't miss the turnaround and leave 300 passengers stranded or delayed, right? Now with our Connected platform, we have the ability to take every piece of data that's generated by every component, see it in real time, and let the airlines act on it in real time. >> Delineate essentially. >> And ensure that we know every person that touched that data and looked at it through its entire lifecycle, from the ground crew to the pilots to the operations team to the service folks on the ground to the reservation agents, and we can prove, if somehow that data has been breached, that we know exactly at what point it was breached and who did or didn't get to see it, and can prevent that because of the security models that we put in place. >> And that relates to compliance and mandates such as the General Data Protection Regulation, GDPR, in the EU. At DataWorks Berlin a few months ago, Hortonworks laid out, announced, a new product called the Data Steward Studio to enable GDPR compliance. Can you give our listeners who may not have been following the Berlin event a bit of an update on Data Steward Studio, how it relates to the whole data lineage set of requirements that you're describing, and then, going forward, what is Hortonworks's roadmap for supporting the full governance lifecycle for the connected community, from data lineage through model governance and so forth? Can you just connect a few dots that would be helpful? >> Absolutely. What's important, certainly driven by GDPR, is the requirement to be able to prove that you understand who has touched that data and who has not had access to it, and that you ensure that you're in compliance with the GDPR regulations, which are significant, but essentially what they say is that you have to protect the personal data and attributes of the individual.
And so what's very important is that you've got to have the systems that not just secure the data, but understand who has had access at any point in time over which you've maintained that individual's data. And so it's not just about when you've had a transaction with that individual; it's the rest of the history that you've kept, or the multiple datasets that you may try to correlate to expand the relationship with that customer, and you need to make sure not only that you've secured their data, but that you're protecting and governing who has access to it and when. And as importantly, you need to be able to prove, in the event of a breach, that you had control of that data and who did or did not access it, because if you can't prove that it was secure and that no one who wasn't supposed to have access breached it, you can be opened up for hundreds of thousands of dollars, or even multiple millions of dollars, in fines just because you can't prove that it was not accessed. And that's what the variety of our platforms, you mentioned Data Steward Studio, is part of. DataPlane is one of the capabilities that gives us that ability. The core engine that does it is Atlas, the open source governance platform that we developed through the community, which really drives all the capabilities for governance that move through each of our products, HDP and HDF; and then of course DataPlane and Data Steward Studio take advantage of that in how they move and replicate data and manage that process for us. >> One of the things that we were talking about before the cameras were rolling was this idea of data-driven business models, how they are disrupting current contenders, with new rivals coming on the scene all the time. Can you talk a little bit about what you're seeing, and what are some of the most exciting, and maybe also some of the most threatening, things that you're seeing? >> Sure. The traditional legacy enterprise is very procedurally driven. You think about a classic core ERP system: it has worked very hard to have a very rigid, very structured, procedural order-to-cash cycle that does not have a great deal of flexibility. You go through a design process, you build product, then you sell that product to a customer, then you service that customer, and then you learn from that transaction different ways to automate or improve efficiencies in the supply chain. But it's very procedural, very linear. In the new world of connected data models, you want to bring transparency, real time understanding, and connectivity between the enterprise, the customer, the product, and the supply chain, so that you can take real time, best-practice action. So, for example, you understand how well your product is performing. Is your customer using it correctly? Are they frustrated with it? Are they using it with the patterns and the frequency they should be if they're going to expand their use and buy more, and if they're not, how do we engage in that cycle? How do we understand if they're going through a re-evaluation and another buying cycle for something similar that may not be with you, for whatever reason? And when we have real time visibility into our customer's interactions and understand our product's performance through its entire lifecycle, then we can bring real time efficiency by linking those together with our supply chain and the various relationships we have with our customers.
To do that requires the modern data architecture: bringing data under management from the point it originates, whether it's from the product, or the customer interacting with the company, or the customer interacting potentially with our ecosystem partners, mutual partners, and then letting best-practice supply chain techniques make sure that we're bringing the highest level of service and support to that entire lifecycle. And when we bring data under management, manage it through its lifecycle, have the historical view at rest, and leverage that across every tier, that's when we get this high velocity, deep transparency, and connectivity between each of the constituents in the value chain, and that's what our platforms give them the ability to do. >> Not only your platform; you guys have been in business now for, I think, seven years or so, and you've shifted, in the minds of many and in your own strategy, from being the premier data-at-rest company, in terms of the Hadoop platform, to being one of the premier data-in-motion companies. Is that really where you're going? To be more of a completely streaming-focused solution provider in a multi-cloud environment? And I hear a lot of Kafka in your story now, so it's like, oh yeah, that's right, Hortonworks is big on Kafka. Can you give us just a quick sense of how you're making that shift towards low latency, real time streaming of big data, or small data for that matter, with embedded analytics and machine learning?
But what we do is make sure that they fit inside an overall data architecture that then embodies their access to a much broader central dataset that goes from point of origination to point of rest on a whole central architecture, and then benefit from our security, governance, and operations model, being able to manage those engines. So what we're trying to do is eliminate the silos for our customers, and having siloed datasets that just do particular functions. We give them the ability to have an enterprise modern data architecture, we manage the things that bring that forward for the enterprise to have the modern data driven business models by bringing the governance, the security, the operations management, ensure that those workflows go from beginning to end seamlessly. >> Do you, go ahead. >> So I was just going to ask about the customer concerns. So here you are, you've now given them this ability to make these real time changes, what's sort of next? What's on their mind now and what do you see as the future of what you want to deliver next? >> First and foremost we got to make sure we get this right, and we really bring this modern data architecture forward, and make sure that we truly have the governance correct, the security models correct. One pane of glass to manage this. And really enable that hybrid data architecture, and let them leverage the cloud tier where it's architecturally and financially pragmatic to do it, and give them the ability to leg into a cloud architecture without risk of either being locked in or misunderstanding where the lines of demarcation of workloads or datasets are, and not getting the economies or efficiencies they should. And we solved that with DataPlane. So we're working very hard with the community, with our ecosystem and strategic partners to make sure that we're enabling the ability to bring each type of data from any source and deploy it across any tier with a common security, governance, and management framework. So then what's next is now that we have this high velocity of data through its entire lifecycle on one common set of platforms, then we can start enabling the modern applications to function. And we can go look back into some of the legacy technologies that are very procedural based and are dependent on a transaction or an event happening before they can run their logic to get an outcome because that grinds the customer in post world activity. We want to make sure that we're bringing that kind of, for example, supply chain functionality, to the modern data architecture, so that we can put real time inventory allocation based on the patterns that our customers go in either how they're using the product, or frustrations they've had, or success they've had. And we know through artificial intelligence and machine learning that there's a high probability not only they will buy or use or expand their consumption of whatever that they have of our product or service, but it will probably to these other things as well if we do those things. >> Predict the logic as opposed to procedural, yes, AI. >> And very much so. And so it'll be bringing those what's next will be the modern applications on top of this that become very predictive and enabler versus very procedural post to that post transaction. We're little ways downstream. That's looking out. >> That's next year's conference. >> That's probably next year's conference. >> Well, Rob, thank you so much for coming on theCUBE, it's always a pleasure to have you. 
>> Thank you both for having us, and thank you for being here, and enjoy the summit. >> We're excited. >> Thank you. >> We'll do. >> I'm Rebecca Knight for Jim Kobielus. We will have more from DataWorks Summit just after this. (upbeat music)

Published Date : Jun 20 2018

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
James Kobielus | PERSON | 0.99+
Rebecca Knight | PERSON | 0.99+
Rob Bearden | PERSON | 0.99+
Jim Kobielus | PERSON | 0.99+
London | LOCATION | 0.99+
300 passengers | QUANTITY | 0.99+
San Jose | LOCATION | 0.99+
Rob | PERSON | 0.99+
Silicon Valley | LOCATION | 0.99+
Hortonworks | ORGANIZATION | 0.99+
seven years | QUANTITY | 0.99+
hundreds of thousands of dollars | QUANTITY | 0.99+
San Jose, California | LOCATION | 0.99+
each component | QUANTITY | 0.99+
GDPR | TITLE | 0.99+
DataWorks Summit | EVENT | 0.99+
one | QUANTITY | 0.99+
One | QUANTITY | 0.98+
millions of dollars | QUANTITY | 0.98+
Atlas | TITLE | 0.98+
first steps | QUANTITY | 0.98+
HDP 3.0 | TITLE | 0.97+
One pane | QUANTITY | 0.97+
both | QUANTITY | 0.97+
DataWorks Summit 2018 | EVENT | 0.97+
First | QUANTITY | 0.96+
next year | DATE | 0.96+
each | QUANTITY | 0.96+
DataPlane | TITLE | 0.96+
theCUBE | ORGANIZATION | 0.96+
Hadoop | TITLE | 0.96+
DataWorks | ORGANIZATION | 0.95+
Spark | TITLE | 0.95+
today | DATE | 0.94+
EU | LOCATION | 0.93+
this morning | DATE | 0.91+
Atlanta | LOCATION | 0.91+
Berlin | LOCATION | 0.9+
each type | QUANTITY | 0.88+
Global Data Protection Regulation GDPR | TITLE | 0.87+
one common | QUANTITY | 0.86+
few months ago | DATE | 0.85+
NiFi | ORGANIZATION | 0.85+
Data Platform 3.0 | TITLE | 0.84+
each tier | QUANTITY | 0.84+
Data Studio | ORGANIZATION | 0.84+
Data Studio | TITLE | 0.83+
day one | QUANTITY | 0.83+
one management platform | QUANTITY | 0.82+
MiNiFi | ORGANIZATION | 0.82+
San | LOCATION | 0.71+
DataPlane | ORGANIZATION | 0.69+
Kafka | TITLE | 0.67+
Encore ERP | TITLE | 0.66+
one common set | QUANTITY | 0.65+
Data Steward Studio | ORGANIZATION | 0.65+
HDF | ORGANIZATION | 0.59+
Georgia | LOCATION | 0.55+
announcements | QUANTITY | 0.51+
Jose | ORGANIZATION | 0.47+

John Kreisa, Hortonworks | DataWorks Summit 2018


 

>> Live from San José, in the heart of Silicon Valley, it's theCUBE! Covering DataWorks Summit 2018. Brought to you by Hortonworks. (electro music) >> Welcome back to theCUBE's live coverage of DataWorks here in sunny San José, California. I'm your host, Rebecca Knight, along with my co-host, James Kobielus. We're joined by John Kreisa. He is the VP of marketing here at Hortonworks. Thanks so much for coming on the show. >> Thank you for having me. >> We've enjoyed watching you on the main stage, it's been a lot of fun. >> Thank you, it's been great. It's been great general sessions, some great talks. Talking about the technology, we've heard from some customers, some third parties, and most recently from Kevin Slavin from The Shed, which is really amazing. >> So I really want to get into this event. You have 2,100 attendees from 23 different countries, 32 different industries. >> Yep. This started as a small, >> That's right. tiny little thing! >> Didn't Yahoo start it in 2008? >> It did, yeah. >> You changed names a few years ago, but it's still the same event, looming larger and larger. >> Yeah! >> It's been great, it's gone international as you've said. It's actually the 17th total event that we've done. >> Yeah. >> If you count the ones we've done in Europe and Asia. It's a global community around data, so it's no surprise. The growth has been phenomenal, the energy is great, the innovations that the community is talking about, the ecosystem is talking about, is really great. It just continues to evolve as an event, it continues to bring new ideas and share those ideas. >> What are you hearing from customers? What are they buzzing about? Every morning on the main stage, you do different polls that say, "how much are you using machine learning? What portion of your data are you moving to the cloud?" What are you learning? >> So it's interesting because we've done similar polls in our show in Berlin, and the results are very similar. We did the cloud poll, and there's a lot of buzz around cloud. What we're hearing is there's a lot of companies that are thinking about, or are somewhere along, their cloud journey. The question is exactly what their overall plans are, and there's a lot of news about maybe cloud will eat everything, but if you look at the poll results, something like 75% of the attendees said they have cloud in their plans. Only about 12% said they're going to move everything to the cloud, so a lot of hybrid with cloud. It's how to figure out which workloads to run where, how to think about that strategy in terms of where to deploy the data, where to deploy the workloads and what that should look like, and that's one of the main things that we're hearing and talking a lot about. >> We've been seeing at Wikibon, in our recent update to the market forecast, that public cloud will dominate increasingly in the coming decade, but hybrid cloud will be a long transition period for many or most enterprises who are still firmly rooted in on-premises deployment, so forth and so on. Clearly, the bulk of your customers' deployments are on premise. >> They are. >> So you're working from a good starting point, which means you've got what, 1,400 customers? >> That's right, thereabouts.
Predominantly on premises, but many of them here at this show want to sustain their investment in a vendor that provides them with that flexibility, so that as they decide they want to use Google or Microsoft or AWS or IBM for a particular workload, their existing investment in Hortonworks doesn't prevent them from facilitating that, from moving that data and those workloads. >> That's right. The fact is, we want to help them do that. A lot of our customers have, I'll call it, a multi-cloud strategy. They want to be able to work with an Amazon or a Google or any of the other vendors in the space equally well and have the ability to move workloads around, and that's one of the things that we can help them with. >> One of the things you also did yesterday on the main stage was you talked about this conference in the greater context of the world and what's going on right now. This is happening against the backdrop of the World Cup, and you said that this is really emblematic of data because this is a game, a tournament that generates tons of data. >> A tremendous amount of data. >> It's showing how data can launch new business models, disrupt old ones. Where do you think we're at right now? For someone who's been in this industry for a long time, just lay the scene. >> I think we're still very much at the beginning. Even though the conference has been around for a while, the technology has been too. It's emerging so fast and just evolving so fast that we're still at the beginning of all the transformations. I've been listening to the customer presentations here and all of them are at some point along the journey. Many are really still starting. Even some of the polls that we had today talked about the fact that they're very much at the beginning of their journey with things like streaming or some of the A.I. machine learning technologies. They're at various stages, so I believe we're really at the beginning of the transformation that we'll see. >> That reminds me of another detail of your product portfolio, or your architecture: streaming and edge deployments are also in the future for many of your customers who still primarily do analytics on data at rest. You made an investment in a number of technologies, NiFi for streaming. There's something called MiNiFi that has been discussed here at this show as an enabler for streaming all the way out to edge devices. What I'm getting at is, that's indicative of what Arun Murthy, one of your co-founders, has made- it was a very good discussion for us analysts and also here at the show. That is one of many investments you're making to prepare for a future in which those workloads will be more predominant in the coming decade. One of the new things I've heard this week that I'd not heard in terms of emphasis from you guys is more of an emphasis on data warehousing as an important use case for HDP in your portfolios, specifically with Hive: Hive 3.0, now in HDP 3.0. >> Yes. >> With the enhancements to Hive to support more real-time and low-latency workloads, but also the ACID capabilities there. I'm hearing something- what you guys are doing is consistent with one of your competitors, Cloudera. They're going deeper into data warehousing too because they recognize they've got to go there, like you do, to be able to absorb more of your customers' workloads. I think that's important, that you guys are making that investment. You're not just big data, you're all data and all data applications. Potentially, if your customers want to go there and engage you. >> Yes.
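To make the Hive 3 data warehousing point above concrete before the conversation continues, here is a minimal sketch of what an ACID table looks like from a client. It assumes a Hive 3 server at localhost:10000 and the PyHive Python package; the database, table, and column names are invented for illustration, not taken from the interview.

```python
# A hedged sketch of Hive 3's ACID support, not Hortonworks' own tooling:
# create a transactional ORC table, then update a row in place.
from pyhive import hive  # assumes the PyHive client is installed

conn = hive.connect(host="localhost", port=10000, database="default")
cur = conn.cursor()

# Managed ORC tables in Hive 3 can be transactional; the table property
# below makes that explicit.
cur.execute("""
    CREATE TABLE IF NOT EXISTS customers (
        id INT,
        email STRING
    )
    STORED AS ORC
    TBLPROPERTIES ('transactional'='true')
""")

# ACID tables accept row-level updates and deletes, which the older
# write-once Hive tables did not.
cur.execute("UPDATE customers SET email = 'new@example.com' WHERE id = 42")
```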
>> I think that was a significant, subtle emphasis that I as an analyst noticed. >> Thank you. There were so many enhancements in 3.0 that were brought from the community that it was hard to talk about everything in depth, but you're right. The enhancements to Hive in terms of performance have really enabled it to take on a greater set of workloads and interactivity that we know our customers want. The advantage being that you have a common data layer in the back end and you can run all this different work. It might be data warehousing, high-speed query workloads, but you can do it on that same data with Spark and data-science-related workloads. Again, it's that common pooled backend of the data lake and having that ability to do it with common security and governance. It's one of the benefits our customers are telling us they really appreciate. >> One of the things we've also heard this morning was talking about data analytics in terms of brand value and, importantly, brand protection. FedEx, exactly. Talking about, the speaker said, we've all seen these apology commercials. What do you think- is it damage control? What is the customer motivation here? >> Well, a company can have billions of dollars of market cap wiped out by breaches in security, and we've seen it. This is not theoretical, these are actual occurrences that we've seen. Really, they're trying to protect the brand and the business and continue to be viable. They can get knocked back so far that it can take years to recover from the impact. They're looking at the security aspects of it, the governance of their data, regulations like GDPR. These things you've mentioned have real financial impact on the businesses, and I think it's brand and the actual operations and finances of the businesses that can be impacted negatively. >> When you're thinking about Hortonworks's marketing messages going forward, how do you want to be described now, and then how do you want customers to think of you five or 10 years from now? >> I want them to think of us as a partner to help them with their data journey, on all aspects of their data journey, whether they're collecting data from the edge, you mentioned NiFi and things like that. Bringing that data back, processing it in motion, as well as processing it at rest, regardless of where that data lands. On premise, in the cloud, somewhere in between, the hybrid, multi-cloud strategy. We really want to be thought of as their partner in their data journey. That's really what we're doing. >> Even going forward, one of the things you were talking about earlier is the company's sort of saying, "we want to be boring. We want to help you do all the stuff-" >> There's a lot of money in boring. >> There's a lot of money, right! Exactly! As you said, a partner in their data journey. Is it "we'll do anything and everything"? Are you going to do niche stuff? >> That's a good question. Not everything. We are focused on the data layer. The movement of data, the processing and storage, and truly the analytic applications that can be built on top of the platform. Right now we've stuck to our strategy. It's been very consistent since the beginning of the company in terms of taking these open source technologies, making them enterprise viable, developing an ecosystem around it and fostering a community around it. That's been our strategy since before the company even started. We want to continue to do that and we will continue to do that.
There's so much innovation happening in the community that we quickly bring that into the products and make sure that's available in a trusted, enterprise-tested platform. That's really one of the things we see with our customers- over and over again they select us because we bring innovation to them quickly, in a safe and consumable way. >> Before we came on camera, I was telling Rebecca that Hortonworks has done a sensational job of continuing to align your product roadmaps with those of your leading partners. IBM, AWS, Microsoft. In many ways, your primary partners are not them, but the entire open source community: 26 open source projects incorporated in your product portfolio in which you are a primary player and committer. You're a primary ingester of innovation from all the communities in which you operate. >> We do. >> That is your core business model. >> That's right. We both foster the innovation and we help drive the innovation ourselves with our engineers and architects. You're absolutely right, Jim. It's the ability to get that innovation, which is happening so fast in the community, into the product, and companies need to innovate. Things are happening so fast. Moore's Law was mentioned multiple times on the main stage, you know, and how it's impacting different parts of the organization. It's not just the technology, but business models are evolving quickly. We heard a little bit about Trumble, and if you've seen Tim Leonard's talk that he gave around what they're doing in terms of logistics, the ability to go all the way out to the farmer and impact what's happening at the farm, tracking things down to the level of a tomato or an egg all the way back, and just understanding that. It's evolving business models. It's not just the tech but the evolution of business models. Rob talked about it yesterday. I think those are some of the things that are kind of key. >> Let me stay on that point really quick. The industrial internet, like precision agriculture and everything it relates to, is increasingly relying on visual analysis, of parts and eggs and whatever it might be. That is convolutional neural networks, that is A.I., it has to be trained, and it has to be trained increasingly in the cloud where the data lives. The data lives in HDP clusters and whatnot. In many ways, no matter where the world goes in terms of industrial IoT, there will be massive clusters of HDFS and object storage driving it and also embedded A.I. models that have to follow a specific DevOps life cycle. You guys have a strong orientation in your portfolio towards that degree of real-time streaming, as it were, of tasks that go through the entire life cycle. From preparing the data, to modeling, to training, to deploying it out, to Google or IBM or wherever else they want to go. So I'm thinking that you guys are in a good position for that as well. >> Yeah. >> I just wanted to ask you finally, what is the takeaway? We're talking about the attendees, talking about the community that you're cultivating here, theme, ideas, innovation, insight. What do you hope an attendee leaves with? >> I hope that the attendee leaves educated, understanding the technology and the impacts that it can have, so that they will go back and change their business and continue to drive their data projects. The whole intent is really that, and we even changed the format of the conference for more educational opportunities.
For me, I want attendees to- a satisfied attendee would be one that learned about the things they came to learn so that they could go back to achieve the goals that they have when they get back. Whether it's business transformation, technology transformation, some combination of the two. To me, that's what I hope that everyone is taking away and that they want to come back next year when we're in Washington, D.C. and- >> My stomping ground. >> His hometown. >> Easy trip for you. They'll probably send you out here- (laughs) >> Yeah, that's right. >> Well John, it's always fun talking to you. Thank you so much. >> Thank you very much. >> We will have more from theCUBE's live coverage of DataWorks right after this. I'm Rebecca Knight for James Kobielus. (upbeat electro music)

Published Date : Jun 20 2018

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
James Kobielus | PERSON | 0.99+
Rebecca Knight | PERSON | 0.99+
IBM | ORGANIZATION | 0.99+
Rebecca | PERSON | 0.99+
Microsoft | ORGANIZATION | 0.99+
Tim Leonard | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
Arun Murthy | PERSON | 0.99+
Jim | PERSON | 0.99+
Kevin Slavin | PERSON | 0.99+
Europe | LOCATION | 0.99+
John Kreisa | PERSON | 0.99+
Berlin | LOCATION | 0.99+
Amazon | ORGANIZATION | 0.99+
John | PERSON | 0.99+
Google | ORGANIZATION | 0.99+
2008 | DATE | 0.99+
Washington, D.C. | LOCATION | 0.99+
Asia | LOCATION | 0.99+
75% | QUANTITY | 0.99+
Rob | PERSON | 0.99+
five | QUANTITY | 0.99+
San José | LOCATION | 0.99+
next year | DATE | 0.99+
Yahoo | ORGANIZATION | 0.99+
Silicon Valley | LOCATION | 0.99+
32 different industries | QUANTITY | 0.99+
World Cup | EVENT | 0.99+
yesterday | DATE | 0.99+
23 different countries | QUANTITY | 0.99+
one | QUANTITY | 0.99+
1,400 customers | QUANTITY | 0.99+
today | DATE | 0.99+
two | QUANTITY | 0.99+
2,100 attendees | QUANTITY | 0.99+
Fedex | ORGANIZATION | 0.99+
10 years | QUANTITY | 0.99+
26 open source projects | QUANTITY | 0.99+
Hortonworks | ORGANIZATION | 0.98+
17th | QUANTITY | 0.98+
both | QUANTITY | 0.98+
One | QUANTITY | 0.98+
billions of dollars | QUANTITY | 0.98+
Cloudera | ORGANIZATION | 0.97+
about 12% | QUANTITY | 0.97+
theCUBE | ORGANIZATION | 0.97+
this week | DATE | 0.96+
DataWorks Summit 2018 | EVENT | 0.95+
NiFi | ORGANIZATION | 0.91+
this morning | DATE | 0.89+
HIVE 3.0 | OTHER | 0.86+
Spark | TITLE | 0.86+
few year ago | DATE | 0.85+
Wikiban | ORGANIZATION | 0.85+
The Shed | ORGANIZATION | 0.84+
San José, California | LOCATION | 0.84+
tons | QUANTITY | 0.82+
H.D.P | LOCATION | 0.82+
DataWorks | EVENT | 0.81+
things | QUANTITY | 0.78+
DataWorks | ORGANIZATION | 0.74+
MiNiFi | TITLE | 0.62+
data | QUANTITY | 0.61+
Moore | TITLE | 0.6+
years | QUANTITY | 0.59+
coming decade | DATE | 0.59+
Trumble | ORGANIZATION | 0.59+
GVPR | ORGANIZATION | 0.58+
3.0 | OTHER | 0.56+

Arun Murthy, Hortonworks | DataWorks Summit 2018


 

>> Live from San Jose in the heart of Silicon Valley, it's theCUBE, covering DataWorks Summit 2018, brought to you by Hortonworks. >> Welcome back to theCUBE's live coverage of DataWorks here in San Jose, California. I'm your host, Rebecca Knight, along with my cohost, Jim Kobielus. We're joined by Aaron Murphy, Arun Murthy, sorry. He is the co-founder and chief product officer of Hortonworks. Thank you so much for returning to theCUBE. It's great to have you on. >> Yeah, likewise. It's been a fun time getting back, yeah. >> So you were on the main stage this morning in the keynote, and you were describing the journey, the data journey that so many customers are on right now, and you were talking about the cloud, saying that the cloud is part of the strategy but it really needs to fit into the overall business strategy. Can you describe a little bit your approach to that? >> Absolutely, and the way we look at this is we help customers leverage data to actually deliver better capabilities, better services, better experiences, to their customers, and that's the business we are in. Now with that, obviously we look at cloud as a really key part of it, of the overall strategy in terms of how you want to manage data on-prem and on the cloud. We kind of joke that we ourselves live in a world of real-time data. We just live in it and data is everywhere. You might have trucks on the road, you might have drones, you might have sensors, and you have it all over the world. At that point, we've kind of got to a point where enterprises understand that they'll manage all the infrastructure, but in a lot of cases it will make a lot more sense to actually lease some of it, and that's the cloud. It's the same way, if you're delivering packages, you don't go buy planes and lay out roads, you go to FedEx and actually let them handle that for you. That's kind of what the cloud is. So that is why we really fundamentally believe that we have to help customers leverage infrastructure wherever it makes sense pragmatically, both from an architectural standpoint and from a financial standpoint, and that's kind of why we talked about how your cloud strategy is part of your data strategy, which is actually fundamentally part of your business strategy. >> So how are you helping customers to leverage this? What is on their minds and what's your response? >> Yeah, it's really interesting, like I said, cloud is cloud, and infrastructure management is certainly something that's at the foremost, at the top of the mind for every CIO today. And what we've consistently heard is they need a way to manage all this data and all this infrastructure in a hybrid, multi-tenant, multi-cloud fashion. Because in some geos you might not have your favorite cloud provider. You know, going to parts of Asia is a great example. You might have to use one of the Chinese clouds. You go to parts of Europe, especially with things like the GDPR, the data residency laws and so on, you have to be very, very cognizant of where your data gets stored and where your infrastructure is present. And that is why we fundamentally believe it's really important to have and give enterprises a fabric with which they can manage all of this. And hide the details of all of the underlying infrastructure from them as much as possible. >> And that's DataPlane Services. >> And that's DataPlane Services, exactly. >> The Hortonworks DataPlane Services we launched in October of last year. Actually I was on theCUBE talking about it back then too.
We see a lot of interest, a lot of excitement around it, because now they understand that, again, this doesn't mean that we drive it down to the least common denominator. It is about helping enterprises leverage the key differentiators of each of the cloud providers' products. For example, Google, with which we announced a partnership, they are really strong on AI and ML. So if you are running TensorFlow and you want to deal with things like Kubernetes, GKE is a great place to do it. And, for example, you can now go to Google Cloud and get TPUs, which work great for TensorFlow. Similarly, a lot of customers run on Amazon for a bunch of the operational stuff, Redshift as an example. So in the world we live in, we want to help the CIO leverage the best pieces of the cloud but then give them a consistent way to manage and govern that data. We were joking on stage that IT has just about learned how to deal with Kerberos and Hadoop, and now we're telling them, "Oh, go figure out IAM on Google," which is also IAM on Amazon, but they are completely different. The only thing that's consistent is the name. So I think we have a unique opportunity, especially with the open source technologies like Atlas, Ranger, Knox and so on, to be able to draw a consistent fabric over this and secure and govern it. And help the enterprise leverage the best parts of the cloud to put a best-fit architecture together, which also happens to be a best-of-breed architecture. >> So the fabric is everything you're describing, all the Apache open source projects in which Hortonworks is a primary committer and contributor, able to share schemas and policies and metadata and so forth across this distributed heterogeneous fabric of public and private cloud segments within a distributed environment. >> Exactly. >> That's increasingly being containerized, in terms of the applications, for deployment to edge nodes. Containerization is a big theme in HDP 3.0 which you announced at this show. >> Yeah. >> So, if you could give us a quick sense for how that containerization capability plays into more of an edge focus for what your customers are doing. >> Exactly, great point, and again, the fabric is obviously, the core parts of the fabric are the open source projects, but we've also done a lot of net new innovation with DataPlane which, by the way, is also open source. It's a new product and a new platform that you can actually leverage, to lay it out over the open source ones you're familiar with. And again, like you said, containerization is what is actually driving the fundamentals of this. The details matter at the scale at which we operate; we're talking about thousands of nodes, terabytes of data. The details really matter because a 5% improvement at that scale leads to millions of dollars in optimization for capex and opex. So that's why all of that, the details, are being fueled and driven by the community, which is kind of what we deliver with HDP3. And the key ones, like you said, are containerization, because now we can actually get complete agility in terms of how you deploy the applications. You get isolation not only at the resource management level with containers, but you also get it at the software level, which means, if two data scientists wanted to use a different version of Python or Scala or Spark or whatever it is, they get that consistently and holistically. So now they can actually go from the test-dev cycle into production in a completely consistent manner.
So that's why containers are so big, because now we can actually leverage it across the stack, and with things like MiNiFi showing up, we can actually-- >> Define MiNiFi before you go further. What is MiNiFi for our listeners? >> Great question. Yeah, so we've always had NiFi-- >> Real-time >> Real-time data flow management, and NiFi was still sort of within the data center. What MiNiFi does is actually now a really, really small layer, a small thin library if you will, that you can throw on a phone, a doorbell, a sensor, and that gives you all the capabilities of NiFi but at the edge. >> Mmm. >> Right. And it's actually not just data flow, but what is really cool about NiFi is it's actually command and control. So you can actually do bidirectional command and control, so you can actually change in real time the flows you want, the processing you do, and so on. So what we're trying to do with MiNiFi is actually not just collect data from the edge but also push the processing as much as possible to the edge, because we really do believe a lot more processing is going to happen at the edge, especially with the ASICs and so on coming out. There will be custom hardware that you can throw in and essentially leverage that hardware at the edge to actually do this processing. And we believe, you know, we want to do that even at the cost of the data not actually landing at rest, because at the end of the day we're in the insights business, not in the data storage business. >> Well, I want to get back to that. You were talking about innovation and how so much of it is driven by the open source community, and you're a veteran of the big data open source community. How do we maintain that? How does that continue to be the fuel? >> Yeah, and a lot of it starts with just being consistent. From day one, James was around back then, in 2011 when we started, we've always said, "We're going to be open source," because we fundamentally believed that the community is going to out-innovate any one vendor, regardless of how much money they have in the bank. So we really do believe that's the best way to innovate, mostly because there is a sense of shared ownership of that product. It's not just one vendor throwing some code out there, trying to shove it down the customers' throat. And we've seen this over and over again, right. Three years ago, a lot of the DataPlane stuff we talk about came from Atlas and Ranger and so on. None of these existed. These actually came from the fruits of the collaboration with the community, with actually some very large enterprises being a part of it. So it's a great example of how we continue to drive it, because we fundamentally believe that that's the best way to innovate, and continue to believe so. >> Right. And the community, the Apache community as a whole, so many different projects, for example, in streaming, there is Kafka, >> Okay. >> and there are others that address a core set of common requirements but in different ways, >> Exactly. >> supporting different approaches, for example, they are doing streaming with stateless transactions and so forth, or stateless semantics and so forth. Seems to me that Hortonworks is shifting towards being more of a streaming-oriented vendor away from data at rest. Though, I should say, HDP 3.0 has got great scalability and storage efficiency capabilities baked in.
I wonder if you could just break it down a little bit, what the innovations or enhancements are in HDP 3.0 for those of your core customers, which is most of them, who are managing massive multi-terabyte, multi-petabyte distributed, federated big data lakes. What's in HDP 3.0 for them? >> Oh, lots. Again, like I said, we obviously spend a lot of time on the streaming side, because that's where we see things going. We live in a real-time world. But again, we don't do it at the cost of our core business, which continues to be HDP. And as you can see, the community continues to drive it; we talked about containerization, a massive step up for the Hadoop community. We've also added support for GPUs. Again, if you think about at-scale machine learning. >> Graphics processing units, >> Graphical-- >> AI, deep learning >> Yeah, it's huge. Deep learning, TensorFlow and so on, really, really need a custom sort of GPU, if you will. So that's coming. That's in HDP3. We've added a whole bunch of scalability improvements with HDFS. We've added federation, because now you can go over a billion files, a billion objects, in HDFS. We also added capabilities for-- >> But you indicated yesterday when we were talking that very few of your customers need that capacity yet, but you think they will, so-- >> Oh, for sure. Again, part of this is, as we enable more sources of data in real-time, that's the fuel which drives it, and that was always the strategy behind the HDF product. It was about, can we leverage the synergies between the real-time world, feed that into what you do today in your classic enterprise with data at rest, and that is what is driving the necessity for scale. >> Yes. >> Right. We've done that. We've spent a lot of work, again, lowering the total cost of ownership, the TCO, so we added erasure coding. >> What is that exactly? >> Yeah, so erasure coding is a classic sort of storage concept. You know, HDFS has always had three replicas, for redundancy, fault tolerance, and recovery. Now, it sounds okay having three replicas because it's cheap disk, right. But when you start to think about our customers running 70, 80 hundred terabytes of data, those three replicas add up, because you've now gone from 80 terabytes of effective data to actually about a quarter of a petabyte in terms of raw storage. So now what we can do with erasure coding is, instead of storing the three blocks, we actually store parity. We store the encoding of it, which means we can actually go down from three to, like, two, one and a half, whatever we want to do. So, if we can get from three blocks to one and a half, especially for your core data, >> Yeah >> the ones you're not accessing every day, it results in massive savings in terms of your infrastructure costs. And that's kind of what we're in the business of doing: helping customers do better with the data they have, whether it's on-prem or on the cloud; we want to help customers be comfortable getting more data under management, along with security and a lower TCO. The other sort of big piece I'm really excited about in HDP3 is all the work that's happened in the Hive community for what we call the real-time database. >> Yes. >> As you guys know, you follow the whole SQL wars in the Hadoop space. >> And Hive has changed a lot in the last several years; this is very different from what it was five years ago.
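Before the conversation turns to Hive, the replication-versus-erasure-coding arithmetic above is easy to check. The sketch below uses the interview's own 80-terabyte example and an RS 6+3 layout (6 data blocks plus 3 parity blocks), one of the erasure coding policies Hadoop 3 ships and the source of the "three down to one and a half" figure.

```python
# Raw storage for 80 TB of effective data: 3-way replication versus
# Reed-Solomon 6+3 erasure coding.

def replicated_raw(effective_tb, replicas=3):
    # Every block is stored in full `replicas` times.
    return effective_tb * replicas

def erasure_coded_raw(effective_tb, data=6, parity=3):
    # Blocks are striped across `data` cells plus `parity` parity cells.
    return effective_tb * (data + parity) / data

effective = 80.0
print(replicated_raw(effective))     # 240.0 TB raw -- roughly a quarter petabyte
print(erasure_coded_raw(effective))  # 120.0 TB raw -- the 1.5x figure
```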
>> The only thing that's the same from five years ago is the name. (laughing) >> So again, the community has done a phenomenal job, really taking what we used to call a SQL engine on HDFS and driving it forward. Now, with Hive 3, which is part of HDP3, it's a full-fledged database. It's got full ACID support. In fact, the ACID support is so good that writing ACID tables is at least as fast as writing non-ACID tables now. And you can do that not only on-- >> Transactional database. >> Exactly. Now not only can you do it on prem, you can do it on S3. So you can actually drive the transactions through Hive on S3. We've done a lot of work, you were there yesterday when we were talking about some of the performance work we've done with LLAP and so on, to actually give consistent performance both on-prem and in the cloud, and this is a lot of effort simply because the performance characteristics you get from the storage layer with HDFS versus S3 are significantly different. So now we have been able to bridge those with things like LLAP. We've done a lot of work and sort of enhanced the security model around it, governance and security. So now you get things like column-level masking, row-level filtering, all the standard stuff that you would expect, and more, from an enterprise data warehouse. We talk to a lot of our customers, they're doing literally tens of thousands of views because they don't have the capabilities that exist in Hive now. >> Mmm-hmm. >> And I'm sitting here kind of being amazed that for an open source set of tools to have the best security and governance at this point is pretty amazing, coming from where we started off. >> And it's absolutely essential for GDPR compliance, and compliance with HIPAA and every other mandate and sensitivity that requires you to protect personally identifiable information, so very important. So in many ways Hortonworks has one of the premier big data catalogs for all manner of compliance requirements that your customers are chasing. >> Yeah, and James, you wrote about it in the context of Data Steward Studio, which we introduced >> Yes. >> You know, things like consent management, having--- >> A consent portal >> A consent portal >> In which the customer can indicate the degree to which >> Exactly. >> they require controls over their management of their PII, possibly to be forgotten, and so forth. >> Yeah, right to be forgotten, and consent even for analytics. Within the context of GDPR, you have to allow the customer to opt out of analytics, of them being part of an analytic itself, right. >> Yeah. >> So things like those are now something we enable through the enhanced security models that are done in Ranger. So now, the really cool part of what we've done with GDPR is that we can get all these capabilities on existing data and existing applications by just adding a security policy, not rewriting them. It's a massive, massive, massive deal, which I cannot tell you how much customers are excited about, because they now understand. They were sort of freaking out that, "I have to go to 30, 40, 50 thousand enterprise apps and change them to take advantage, to actually provide consent, and try to be forgotten." The fact that you can do that now by changing a security policy with Ranger is huge for them. >> Arun, thank you so much for coming on theCUBE. It's always so much fun talking to you. >> Likewise. Thank you so much. >> I learned something every time I listen to you. >> Indeed, indeed.
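To give a flavor of the "add a security policy, don't rewrite the app" idea Arun describes, here is a hedged sketch of posting a row-filter policy to Apache Ranger's public REST API from Python. The service, database, table, and user names, the filter expression, and the exact JSON field layout are all assumptions for illustration rather than a verbatim Ranger payload.

```python
# A sketch of a Ranger row-level filter policy: analysts only see rows
# where the data subject has consented. Field names are modeled on the
# Ranger v2 policy API but should be treated as assumptions.
import requests

policy = {
    "service": "hadoopdev_hive",           # hypothetical Hive service name
    "name": "gdpr-consent-filter",
    "policyType": 2,                        # row-filter policy type
    "resources": {
        "database": {"values": ["sales"]},
        "table": {"values": ["customers"]},
    },
    "rowFilterPolicyItems": [{
        "users": ["analyst"],
        "accesses": [{"type": "select", "isAllowed": True}],
        # Only rows where the customer opted in are visible.
        "rowFilterInfo": {"filterExpr": "consent_to_analytics = true"},
    }],
}

resp = requests.post(
    "http://ranger.example.com:6080/service/public/v2/api/policy",
    json=policy,
    auth=("admin", "admin"),   # placeholder credentials
)
resp.raise_for_status()
```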
I'm Rebecca Knight for James Kobielus; we will have more from theCUBE's live coverage of DataWorks just after this. (Techno music)

Published Date : Jun 19 2018

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Jim Kobielus | PERSON | 0.99+
Rebecca Knight | PERSON | 0.99+
James | PERSON | 0.99+
Aaron Murphy | PERSON | 0.99+
Arun Murphy | PERSON | 0.99+
Arun | PERSON | 0.99+
2011 | DATE | 0.99+
Google | ORGANIZATION | 0.99+
5% | QUANTITY | 0.99+
80 terabytes | QUANTITY | 0.99+
FedEx | ORGANIZATION | 0.99+
two | QUANTITY | 0.99+
Silicon Valley | LOCATION | 0.99+
Hortonworks | ORGANIZATION | 0.99+
San Jose | LOCATION | 0.99+
Amazon | ORGANIZATION | 0.99+
Arun Murthy | PERSON | 0.99+
HortonWorks | ORGANIZATION | 0.99+
yesterday | DATE | 0.99+
San Jose, California | LOCATION | 0.99+
three replicas | QUANTITY | 0.99+
James Kobeilus | PERSON | 0.99+
three blocks | QUANTITY | 0.99+
GDPR | TITLE | 0.99+
Python | TITLE | 0.99+
Europe | LOCATION | 0.99+
millions of dollars | QUANTITY | 0.99+
Scala | TITLE | 0.99+
Spark | TITLE | 0.99+
theCUBE | ORGANIZATION | 0.99+
five years ago | DATE | 0.99+
one and a half | QUANTITY | 0.98+
Enprise | ORGANIZATION | 0.98+
three | QUANTITY | 0.98+
Hive 3 | TITLE | 0.98+
Three years ago | DATE | 0.98+
both | QUANTITY | 0.98+
Asia | LOCATION | 0.97+
50 thousand | QUANTITY | 0.97+
TCO | ORGANIZATION | 0.97+
MiNiFi | TITLE | 0.97+
Apache | ORGANIZATION | 0.97+
40 | QUANTITY | 0.97+
Altas | ORGANIZATION | 0.97+
Hortonworks DataPlane Services | ORGANIZATION | 0.96+
DataWorks Summit 2018 | EVENT | 0.96+
30 | QUANTITY | 0.95+
thousands of nodes | QUANTITY | 0.95+
A6 | COMMERCIAL_ITEM | 0.95+
Kerberos | ORGANIZATION | 0.95+
today | DATE | 0.95+
Knox | ORGANIZATION | 0.94+
one | QUANTITY | 0.94+
hive | TITLE | 0.94+
two data scientists | QUANTITY | 0.94+
each | QUANTITY | 0.92+
Chinese | OTHER | 0.92+
TensorFlow | TITLE | 0.92+
S3 | TITLE | 0.91+
October of last year | DATE | 0.91+
Ranger | ORGANIZATION | 0.91+
Hadoob | ORGANIZATION | 0.91+
HIPA | TITLE | 0.9+
CUBE | ORGANIZATION | 0.9+
tens of thousands | QUANTITY | 0.9+
one vendor | QUANTITY | 0.89+
last several years | DATE | 0.88+
a billion objects | QUANTITY | 0.86+
70, 80 hundred terabytes of data | QUANTITY | 0.86+
HTP3.0 | TITLE | 0.86+
two 1/4 of an exobyte | QUANTITY | 0.86+
Atlas and | ORGANIZATION | 0.85+
DataPlane Services | ORGANIZATION | 0.84+
Google Cloud | TITLE | 0.82+

John Kreisa, Hortonworks | Dataworks Summit EU 2018


 

>> Narrator: From Berlin, Germany, it's theCUBE. Covering Dataworks Summit Europe 2018. Brought to you by Hortonworks. >> Hello, welcome to theCUBE. We're here at Dataworks Summit 2018 in Berlin, Germany. I'm James Kobielus. I'm the lead analyst for Big Data Analytics within the Wikibon team of SiliconAngle Media. Our guest is John Kreisa. He's the VP for Marketing at Hortonworks, of course, the host company of Dataworks Summit. John, it's great to have you. >> Thank you Jim, it's great to be here. >> We go way back, so you know it's always great to reconnect with you guys at Hortonworks. You guys are on a roll, it's been seven years I think since you guys were founded. I remember the founding of Hortonworks. I remember when it splashed in the Wall Street Journal. It was like oh wow, this big data thing, this Hadoop thing is actually, it's a market, it's a segment, and you guys have built it. You know, you and your competitors, your partners, your ecosystem continue to grow. You guys went IPO a few years ago. Your latest numbers are pretty good. You're continuing to grow in revenues, in customer acquisitions, your deal sizes are growing. So Hortonworks remains on a roll. So, I'd like you to talk right now, John, and give us a sense of where Hortonworks is at in terms of engaging with the marketplace, in terms of trends that you're seeing, in terms of how you're addressing them. But talk about first of all the Dataworks Summit. How many attendees do you have from how many countries? Just give us sort of the layout of this show. >> I don't have all of the final counts yet. >> This is year six of the show? >> This is year six in Europe, absolutely, thank you. So it's great, we've moved it around different locations. Great venue, great host city here in Berlin. Super excited about it, I know we have representatives from more than 51 countries. If you think about that, drawing from a really broad set of countries, well beyond, as you know, because you've interviewed some of the folks, beyond just Europe. We've had them from South America, U.S., Africa, and Asia as well, so really a broad swath of the open-source and big data community, which is great. The final attendance is going to be in the 1,250 to 1,300 range. We're waiting on the final numbers, but it's a great-sized conference. The energy level's been really great, the sessions have been, you know, oversubscribed, standing room only in many of the popular sessions. So the community's strong, I think that's the thing that we really see here and that we're really continuing to invest in. It's something that Hortonworks was founded around. You referenced the founding, and driving the community forward and investing is something that has been part of our mantra since we started and it remains that way today. >> Right. So first of all what is Hortonworks? Now how does Hortonworks position itself? Clearly Hadoop is your foundation, but you, just like Cloudera, MapR, you guys have all continued to evolve to address a broader range of use-cases with a deeper stack of technology with fairly extensive partner ecosystems. So what kind of a beast is Hortonworks? It's an elephant, but what kind of an elephant is it? >> We're an elephant, or riding on the elephant I'd say, so we're a global data management company. That's what we're helping organizations do. Really the end-to-end lifecycle of their data, helping them manage it regardless of where it is, whether it's on-premise or in the cloud, really through hybrid data architectures.
That's really how we've seen the market evolve. We started off, in terms of our strategy, with the platform based on Hadoop, as you said, to store, process, and analyze data at scale. The kind of fundamental use-case for Hadoop. Then as the company emerged, as the market kind of continued to evolve, we moved to, and saw the opportunity in, really capturing data from the edge. As IoT and kind of edge use-cases emerged, it made sense for us to add to the platform and create the Hortonworks DataFlow. >> James: Apache NiFi >> Apache NiFi, exactly, HDF underneath, with associated additional open-source projects in there. Kafka and some streaming and things like that. So that was: now move data, capture data in motion, move it back and put it into the platform for those large data applications that organizations are building on the core platform. It's been the next evolution; we're seeing great attach rates with that, and really strong interest in Apache NiFi, you know, the meetup here for NiFi was oversubscribed, so really, really strong interest in that. And then the market's continued to evolve with cloud and cloud architectures, customers wanting to deploy in the cloud. You know, you saw we had that poll yesterday in the general session about cloud, with really interesting results, but we saw that there were really companies wanting to deploy in a hybrid way. Some of them wanted to move specific workloads to the cloud. >> Multi-cloud, public, private. >> Exactly right, and multi-data center. >> The majority of your customer deployments are on prem. >> They are. >> Rob Bearden, your CEO, I think he said in a recent article on SiliconAngle that two-thirds of your deployments are on prem. Is that percentage going down over time? Are more of your customers shifting toward a public cloud orientation? Does Hortonworks worry about that? You've got partnerships, clearly, with the likes of IBM, AWS, and Microsoft Azure and so forth, so do you guys see that as an opportunity, or as a worrisome trend? >> No, we see it very much as an opportunity. And that's because we do have customers who are wanting to put more workloads and run things in the cloud; however, there's still almost always a component that's going to be on premise. And that creates a challenge for organizations. How do they manage the security and governance and really the overall operations of those deployments as they're in the cloud and on premise. And, to your point, multi-cloud. And so you get some complexity in there around that deployment, particularly with the regulations, we talked about GDPR earlier today. >> Oh, by the way, the Data Steward Studio demo today was really, really good. It showed that, first of all, you cover the entire range of core requirements for compliance. So that was actually the primary announcement at this show; Scott Gnau announced that. You demoed it today, I think you guys are off to a good start, yeah. >> We've gotten, and thank you for that, we've gotten really good feedback on our DataPlane Services strategy, right, it provides that single pane of glass. >> I should say to our viewers that Data Steward Studio is the second of the services under the DataPlane, the Hortonworks DataPlane Services Portfolio. >> That's right, that's exactly right. >> Go ahead, keep going. >> So, you know, we see that as an opportunity.
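Picking up the data-in-motion point above, the sketch below shows what consuming events off a Kafka topic, fed by, say, an HDF/NiFi flow, can look like from Python. It assumes the kafka-python client, a broker at localhost:9092, and an invented topic name and message shape.

```python
# A minimal consumer for the data-in-motion tier: read sensor events off
# a Kafka topic and surface the warnings. Broker address, topic name,
# and JSON shape are assumptions for illustration.
import json
from kafka import KafkaConsumer  # assumes the kafka-python package

consumer = KafkaConsumer(
    "sensor-events",                       # hypothetical topic fed by NiFi
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

for message in consumer:
    event = message.value
    if event.get("status") == "WARNING":
        # In a real pipeline this would be landed in the platform for
        # analysis; printing stands in for that here.
        print(event["sensor_id"], event["reading"])
```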
We think we're very strongly positioned in the market, being the first to bring that kind of solution to customers, and the large customers we've been talking about, who have been starting to use DataPlane, have been very, very positive. I mean, they see it as something that is going to help them really kind of maintain control over these deployments as they start to spread around, as they grow their use of it. >> And it's built to operate across the multi-cloud, I know this as well, in terms of executing the consent or withdrawal of consent that the data subject makes through what is essentially a consent portal. >> That's right, that's right. >> That was actually a very compelling demonstration in that regard. >> It was good, and they worked very hard on it. And I was speaking to an analyst yesterday, and they were saying that they're seeing an increasing number of the customers, enterprises, wanting to have a multi-cloud strategy. They don't want to get locked into any one public cloud vendor, so what they want is somebody who can help them maintain that common security and governance across their different deployments, and they see DataPlane Services as the way that's going to help them do that. >> So John, how is Hortonworks, what's your road map, how do you see the company and your go-to-market evolving over the coming years in terms of geographies, in terms of your focus? Focus in terms of the use-cases and workloads that the Hortonworks portfolio addresses. How is that shifting? You mentioned the edge. AI, machine learning, deep learning. You are a reseller of IBM Data Science Experience. >> DSX, that's right. >> So, let's just focus on that. Do you see more customers turning to Hortonworks and IBM for a complete end-to-end pipeline for the ingest, for the preparation, modeling, training and so forth? And deployment of operationalized AI? Is that something you see going forward as an evolution path for your capabilities? >> I'd say yes, long-term, or even in the short-term. So, they have to get their data house in order, if you will, before they get to some of those other things, so we're still, Hortonworks' strategy has always been focused on the platform aspect, right? The data-at-rest platform, data-in-motion platform, and now a platform for managing common security and governance across those different deployments. Building on that is the data science, machine learning, and AI opportunity, but our strategy there, as opposed to trying to do it ourselves, is to partner, so we've got the strong partnership with IBM, and resell their DSX product. And also other partnerships around to deliver those other capabilities, like machine learning and AI, from our partner ecosystem, which you referenced. We have over 2,300 partners, so a very, very strong ecosystem. And so, we're going to stick to our strategy of the platforms enabling that, which will subsequently enable data science, machine learning, and AI on top. And then, if you want me to talk about our strategy in terms of growth, we already operate globally. We've got offices in, I think, 19 different countries. So we're really covering the globe in terms of the demand for Hortonworks products and beginning implementations. >> Where's the fastest growing market in terms of regions for Hortonworks? >> Yeah, I mean, international generally is our fastest growing region, faster than the U.S.
But we're seeing very strong growth in APAC, actually, so India, Asian countries, Singapore, and then up and through to Japan. There's a lot of growth out in the Asian region. And, you know, they're sort of moving directly to digital transformation projects at really large scale. Big banks, telcos; from a workload standpoint I'd say the patterns are very similar to what we've seen. I've been at Hortonworks for six and a half years, as it turns out, and the patterns we saw initially in terms of adoption in the U.S. became the patterns we saw in terms of adoption in Europe, and now those patterns of adoption are the same in Asia. So, once a company realizes they need to either drive out operational costs or build new data applications, the patterns tend to be the same whether it's retail, financial services, telco, manufacturing. You can sort of replicate those as they move forward. >> So going forward, how is Hortonworks evolving as a company in terms of, for example with GDPR, Data Steward, data governance as a strong focus going forward, are you shifting your model in terms of your target customer away from the data engineers, the Hadoop cluster managers, who are still very much the center of it, towards more data governance, towards more of a business-analyst level of focus? Do you see Hortonworks shifting in that direction in terms of your focus, go-to-market, your message and everything? >> I would say it's not a shift as much as an expansion. So we definitely are continuing to invest in the core platform, in Hadoop, and you would have heard of some of the changes that are coming in the core Hadoop 3.0 and 3.1 platform here, and in Apache NiFi; Alan and others can talk about those details. But, to your point, as we bring and have brought Data Steward Studio and DataPlane Services online, that allows us to address a different user within the organization, so it's really an expansion. We're not de-investing in any other things. It's really, here's another way, in a natural evolution of the way, that we're helping organizations solve data problems. >> That's great, well thank you. This has been John Kreisa, he's the VP for marketing at Hortonworks. I'm James Kobielus of Wikibon SiliconAngle Media here at Dataworks Summit 2018 in Berlin. And it's been great, John, and thank you very much for coming on theCUBE. >> Great, thanks for your time. (techno music)

Published Date : Apr 19 2018

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Alan | PERSON | 0.99+
James Kobielus | PERSON | 0.99+
Jim | PERSON | 0.99+
Rob Bearden | PERSON | 0.99+
IBM | ORGANIZATION | 0.99+
John Kreisa | PERSON | 0.99+
Europe | LOCATION | 0.99+
John | PERSON | 0.99+
Asia | LOCATION | 0.99+
AWS | ORGANIZATION | 0.99+
Hortonworks | ORGANIZATION | 0.99+
Berlin | LOCATION | 0.99+
yesterday | DATE | 0.99+
Africa | LOCATION | 0.99+
South America | LOCATION | 0.99+
SiliconAngle Media | ORGANIZATION | 0.99+
U.S. | LOCATION | 0.99+
1,250 | QUANTITY | 0.99+
Scott Gnau | PERSON | 0.99+
1,300 | QUANTITY | 0.99+
Berlin, Germany | LOCATION | 0.99+
seven years | QUANTITY | 0.99+
six and a half years | QUANTITY | 0.99+
Japan | LOCATION | 0.99+
Hadoop | TITLE | 0.99+
Asian | LOCATION | 0.99+
second | QUANTITY | 0.98+
over 2,300 partners | QUANTITY | 0.98+
today | DATE | 0.98+
two-thirds | QUANTITY | 0.98+
19 different countries | QUANTITY | 0.98+
Dataworks Summit | EVENT | 0.98+
more than 51 countries | QUANTITY | 0.98+
Hadoop 3.0 | TITLE | 0.98+
first | QUANTITY | 0.98+
James | PERSON | 0.98+
Data Steward Studio | ORGANIZATION | 0.98+
Dataworks Summit EU 2018 | EVENT | 0.98+
Dataworks Summit 2018 | EVENT | 0.97+
Cloudera | ORGANIZATION | 0.97+
MapR | ORGANIZATION | 0.96+
GDPR | TITLE | 0.96+
DataPlane Services | ORGANIZATION | 0.96+
Singapore | LOCATION | 0.96+
year six | QUANTITY | 0.95+
2018 | EVENT | 0.95+
Wikibon SiliconAngle Media | ORGANIZATION | 0.94+
India | LOCATION | 0.94+
Hadoop | ORGANIZATION | 0.94+
APAC | ORGANIZATION | 0.93+
Big Data Analytics | ORGANIZATION | 0.93+
3.1 | TITLE | 0.93+
Wall Street Journal | TITLE | 0.93+
one | QUANTITY | 0.93+
Apache | ORGANIZATION | 0.92+
Wikibon | ORGANIZATION | 0.92+
NiFi | TITLE | 0.92+

Alan Gates, Hortonworks | Dataworks Summit 2018


 

(techno music) >> (announcer) From Berlin, Germany it's theCUBE, covering DataWorks Summit Europe 2018. Brought to you by Hortonworks. >> Well hello, welcome to theCUBE. We're here on day two of DataWorks Summit 2018 in Berlin, Germany. I'm James Kobielus. I'm lead analyst for Big Data Analytics in the Wikibon team of SiliconANGLE Media. And who we have here today, we have Alan Gates, who's one of the founders of Hortonworks, and Hortonworks of course is the host of DataWorks Summit, and he's going to be, well, hello Alan. Welcome to theCUBE. >> Hello, thank you. >> Yeah, so Alan, so you and I go way back. Essentially, what we'd like you to do first of all is just explain a little bit of the genesis of Hortonworks. Where it came from, your role as a founder from the beginning, how that's evolved over time, but really how the company has evolved specifically with the folks in the community, the Hadoop community, the open source community. You have a deepening open source stack that you build upon, with Atlas and Ranger and so forth. Give us a sense for all of that, Alan. >> Sure. So as I think it's well-known, we started as the team at Yahoo that really was driving a lot of the development of Hadoop. We were one of the major players in the Hadoop community. Worked on that for, I was in that team for four years. I think the team itself was going for about five. And it became clear that there was an opportunity to build a business around this. Some others had already started to do so. We wanted to participate in that. We worked with Yahoo to spin out Hortonworks, and actually they were a great partner in that. Helped us get that spun out. And the leadership team of the Hadoop team at Yahoo became the founders of Hortonworks, and brought along a bunch of the other engineers to help get started. And really at the beginning, it was Hadoop, Pig, Hive, HBase, you know, the beginning projects. So a pretty small toolkit. And our early customers were very engineering-heavy people, or companies who knew how to take those tools and build something directly on those tools, right? >> Well, you started off, the Hadoop community as a whole started off, with a focus on the data engineers of the world >> Yes. >> And I think it's shifted, and confirm for me, over time, that you focus increasingly with your solutions on the data scientists who are doing the development of the applications, and the data stewards, from what I can see at this show. >> I think it's really just a part of the adoption curve, right? When you're early on that curve, you have people who are very into the technology, understand how it works, and want to dive in there. So those tend to be, as you said, the data engineering types in this space. As that curve grows out, it comes wider and wider. There's still plenty of data engineers that are our customers, that are working with us, but as you said, the data analysts, the BI people, data scientists, data stewards, all those people are now starting to adopt it as well. And they need different tools than the data engineers do. They don't want to sit down and write Java code, or, you know, some of the data scientists might want to work in Python in a notebook like Zeppelin or Jupyter, but some may want to use SQL or even Tableau or something on top of SQL to do the presentation. Of course, data stewards want tools more like Atlas to help manage all their stuff.
So that does drive us to one, put more things into the toolkit so you see the addition of projects like Apache Atlas and Ranger for security and all that. Another area of growth, I would say, is also the kind of data that we're focused on. So early on, we were focused on data at rest. You know, we're going to store all this stuff in HDFS and as the kind of data scene has evolved, there's a lot more focus now on a couple things. One is data, what we call data-in-motion for our HDF product where you've got a stream manager like Kafka or something like that >> (James) Right >> So there's processing that kind of data. But now we also see a lot of data in various places. It's not just oh, okay I have a Hadoop cluster on premise at my company. I might have some here, some on premise somewhere else and I might have it in several clouds as well. >> Okay, your focus has shifted like the industry in general towards streaming data in multi-clouds where your, it's more stateful interactions and so forth? I think you've made investments in Apache NiFi so >> (Alan) yes. >> Give us a sense for your NiFi versus Kafka and so forth inside of your product strategy or your >> Sure. So NiFi is really focused on that data at the edge, right? So you're bringing data in from sensors, connected cars, airplane engines, all those sorts of things that are out there generating data and you need, you need to figure out what parts of the data to move upstream, what parts not to. What processing can I do here so that I don't have to move upstream? When I have an error event or a warning event, can I turn up the amount of data I'm sending in, right? Say this airplane engine is suddenly heating up maybe a little more than it's supposed to. Maybe I should ship more of the logs upstream when the plane lands and connects than I would otherwise. That's the kind of thing that Apache NiFi focuses on. I'm not saying it runs in all those places but my point is, it's that kind of edge processing. Kafka is still going to be running in a data center somewhere. It's still a pretty heavyweight technology in terms of memory and disk space and all that so it's not going to be run on some sensor somewhere. But it is that data-in-motion, right? I've got millions of events streaming through a set of Kafka topics watching all that sensor data that's coming in from NiFi and reacting to it, maybe putting some of it in the data warehouse for later analysis, all those sorts of things. So that's kind of the differentiation there between Kafka and NiFi. >> Right, right, right. So, going forward, do you see more of your customers working on internet of things projects, is that, we don't often, at least in the industry of popular mind, associate Hortonworks with edge computing and so forth. Is that? >> I think that we will have more and more customers in that space. I mean, our goal is to help our customers with their data wherever it is. >> (James) Yeah. >> When it's on the edge, when it's in the data center, when it's moving in between, when it's in the cloud. All those places, that's where we want to help our customers store and process their data. Right? So, I wouldn't want to say that we're going to focus on just the edge or the internet of things but that certainly has to be part of our strategy 'cause it has to be part of what our customers are doing. >> When I think about the Hortonworks community, now we have to broaden our understanding because you have a tight partnership with IBM which obviously is well-established, huge and global.
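Alan's airplane-engine example above describes a common edge pattern: sample lightly in steady state, then turn up what you ship when a warning fires. Purely as a hedged sketch of that idea in Python -- the thresholds, field names, and the ship_upstream() transport are all invented for illustration, and in a real deployment this logic would be expressed as NiFi processors rather than hand-written code:

```python
import random

# Invented thresholds and field names, purely for illustration.
NORMAL_RATE = 0.01   # ship ~1% of readings in steady state (assumed)
ALERT_RATE = 1.0     # ship everything while a warning is active (assumed)
TEMP_WARN_C = 900    # hypothetical engine-temperature warning threshold
TEMP_CLEAR_C = 850   # cool-down point where sampling de-escalates

def ship_upstream(reading):
    # Stand-in for the real transport (e.g., NiFi site-to-site into Kafka).
    print("shipped:", reading)

class EdgeSampler:
    """Turns up the shipping rate when a warning event fires."""

    def __init__(self):
        self.rate = NORMAL_RATE

    def handle(self, reading):
        temp = reading["temp_c"]
        if temp >= TEMP_WARN_C:
            self.rate = ALERT_RATE    # escalate: ship more context upstream
        elif temp <= TEMP_CLEAR_C:
            self.rate = NORMAL_RATE   # de-escalate once the engine cools
        if random.random() < self.rate:
            ship_upstream(reading)

if __name__ == "__main__":
    sampler = EdgeSampler()
    for temp in (650, 700, 910, 905, 840, 640):
        sampler.handle({"engine": "e1", "temp_c": temp})
```

The sketch only captures the escalation logic; NiFi's value is wrapping logic like this in flow management, provenance tracking, and back-pressure.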
Give us a sense for, as you guys have teamed more closely with IBM, how your community has changed or broadened or shifted in its focus, or has it? >> I don't know that it's shifted the focus. I mean IBM was already part of the Hadoop community. They were already contributing. Obviously, they've contributed very heavily on projects like Spark and some of those. They continue some of that contribution. So I wouldn't say that it's shifted it, it's just we are working more closely together as we both contribute to those communities, working more closely together to present solutions to our mutual customer base. But I wouldn't say it's really shifted the focus for us. >> Right, right. Now at this show, we're in Europe right now, but it doesn't matter that we're in Europe. GDPR is coming down fast and furious now. Data Steward Studio, we had the demonstration today, it was announced yesterday. And it looks like a really good tool for the main requirements for compliance, which are to discover and inventory your data and to set up what I like to refer to as a consent portal. So the data subject can then go and make a request to have my data forgotten and so forth. Give us a sense going forward, for how or if Hortonworks, IBM, and others in your community are going to work towards greater standardization in the functional capabilities of the tools and platforms for enabling GDPR compliance. 'Cause it seems to me that you're going to need, the industry's going to need to have some reference architecture for these kinds of capabilities so that going forward, either your ecosystem of partners can build add-on tools on some common basis, like the framework that was laid out today looks like a good basis. Is there anything that you're doing in terms of pushing towards more Open Source standardization in that area? >> Yes, there is. So actually one of my responsibilities is the technical management of our relationship with ODPI which >> (James) yes. >> Mandy Chessell referenced yesterday in her keynote and that is where we're working with IBM, with ING, with other companies to build exactly those standards. Right? Because we do want to build it around Apache Atlas. We feel like that's a good tool for the basis of that but we know one, that some people are going to want to bring their own tools to it. They're not necessarily going to want to use that one platform so we want to do it in an open way that they can still plug in their metadata repositories and communicate with others and we want to build the standards on top of that of how do you properly implement these features that GDPR requires like right to be forgotten, like you know, what are the protocols around PII data? How do you prevent a breach? How do you respond to a breach? >> Will that all be under the umbrella of ODPI, that initiative of the partnership or will it be a separate group or? >> Well, so certainly Apache Atlas is part of Apache and remains so. What ODPI is really focused on is that next layer up of how do we engage, not the programmers 'cause programmers can engage really well at the Apache level but the next level up. We want to engage the data professionals, the people whose job it is, the compliance officers. The people who don't sit and write code and frankly if you connect them to the engineers, there's just going to be an impedance mismatch in that conversation. >> You got policy wonks and you got tech wonks so. They understand each other at the wonk level. >> That's a good way to put it.
And so that's where ODPI really comes in, with that group of compliance people that speak a completely different language. But we still need to get them all talking to each other, as you said, so that there are specifications around how do we do this and what is compliance. >> Well Alan, thank you very much. We're at the end of our time for this segment. This has been great. It's been great to catch up with you, and Hortonworks has been evolving very rapidly, and it seems to me that, going forward, I think you're well-positioned now for the new GDPR age to take your overall solution portfolio, your partnerships, and your capabilities to the next level, really in terms of an Open Source framework. In many ways though, you're not entirely, 100% purely Open Source; nobody is. You're still very much focused on open frameworks for building fairly scalable, very scalable solutions for enterprise deployment. Well, this has been Jim Kobielus with Alan Gates of Hortonworks, here on theCUBE at DataWorks Summit 2018 in Berlin. We'll be back fairly quickly with another guest and thank you very much for watching our segment. (techno music)
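One way to picture where the Apache Atlas-based standardization Alan describes could plug in: a right-to-be-forgotten request typically starts by finding every entity carrying a PII classification. The sketch below follows the Atlas v2 basic-search REST API as commonly documented, but the host, credentials, and the "PII" classification name are assumptions to verify against your own deployment:

```python
import requests  # pip install requests

ATLAS_URL = "http://atlas.example.com:21000"  # assumed host
AUTH = ("admin", "admin")                     # assumed credentials

def find_classified_entities(classification="PII", limit=100):
    """Return entities carrying the given classification.

    Uses the Atlas v2 basic-search endpoint; the 'PII' classification
    name is an assumption about your metadata model.
    """
    resp = requests.post(
        f"{ATLAS_URL}/api/atlas/v2/search/basic",
        json={"classification": classification, "limit": limit},
        auth=AUTH,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("entities", [])

if __name__ == "__main__":
    for entity in find_classified_entities():
        attrs = entity.get("attributes", {})
        print(entity.get("typeName"), attrs.get("qualifiedName"))
```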

Published Date : Apr 19 2018


Joe Morrissey, Hortonworks | Dataworks Summit 2018


 

>> Narrator: From Berlin, Germany, it's theCUBE! Covering Dataworks Summit Europe 2018. Brought to you by Hortonworks. >> Well, hello. Welcome to theCUBE. I'm James Kobielus. I'm lead analyst at Wikibon for big data analytics. Wikibon, of course, is the analyst team inside of SiliconANGLE Media. One of our core offerings is theCUBE and I'm here with Joe Morrissey. Joe is the VP for International at Hortonworks and Hortonworks is the host of Dataworks Summit. We happen to be at Dataworks Summit 2018 in Berlin! Berlin, Germany. And so, Joe, it's great to have you. >> Great to be here! >> We had a number of conversations today with Scott Gnau and others from Hortonworks and also from your customers and partners. Now, you're International, you're VP for International. We've had a partner of yours from South Africa on theCUBE today. We've had a customer of yours from Uruguay. So there's been a fair amount of international presence. We had Munich Re from Munich, Germany. Clearly Hortonworks is, you've been in business as a company for seven years now, I think it is, and you've established quite a presence worldwide. I'm looking at your financials in terms of your customer acquisition; it just keeps going up and up, so you're clearly doing a great job of bringing the business in throughout the world. Now, you've told me before the camera went live that you focus on both Europe and Asia Pacific, so I'd like to open it up to you, Joe. Tell us how Hortonworks is doing worldwide and the kinds of opportunities you're selling into. >> Absolutely. 2017 was a record year for us. We grew revenues by over 40% globally. I joined to lead the internationalization of the business and you know, not a lot of people know that Hortonworks is actually one of the fastest growing software companies in history. We were the fastest to get to $100 million. Also, now the fastest to get to $200 million but the majority of that revenue contribution was coming from the United States. When I joined, it was about 15% of international contribution. By the end of 2017, we'd grown that to 31%, so that's a significant improvement in contribution overall from our international customer base even though the company was growing globally at a very fast rate. >> And that's fast by any stretch of the imagination in terms of growth. Some have said, "Oh well, maybe Hortonworks, just like Cloudera, maybe they're going to plateau off because the bloom is off the rose of Hadoop." But really, Hadoop is just getting going as a market segment or as a platform, but you guys have diversified well beyond that. So give us a sense for going forward. What are your customers? What kind of projects are you positioning and selling Hortonworks solutions into now? Is it a different, well you've only been there 18 months, but is it shifting towards more things to do with streaming, NiFi and so forth? Does it shift into more data science related projects? 'Cause this is worldwide. >> Yeah. That's a great question. This company was founded on the premise that data volumes and diversity of data are continuing to explode and we believe that it was necessary for us to come and bring enterprise-grade security and management and governance to the core Hadoop platform to make it really ready for the enterprise, and that's what the first evolution of our journey was really all about.
A number of years ago, we acquired a company called Onyara, and the logic behind that acquisition was we believe companies now wanted to go out to the point of origin, of creation of data, and manage data throughout its entire life cycle and derive pre-event as well as post-event analytical insight into their data. So what we've seen is our customers moving beyond just unifying data in the data lake and deriving post-transaction insight from their data. They're now going all the way out to the edge. They're deriving insight from their data in real time all the way from the point of creation and getting pre-transaction insight into data as well so-- >> Pre-transaction data, can you define what you mean by pre-transaction data. >> Well, I think if you look at it, it's really the difference between data in motion and data at rest, right? >> Oh, yes. >> A specific example would be if a customer walks into the store and they've interacted with the store maybe on social before they come in or in some other fashion, before they've actually made the purchase. >> Engagement data, interaction data, yes. >> Engagement, exactly. Exactly. Right. So that's one example, but that also extends out to use cases in IoT as well, so data in motion and streaming data, as you mentioned earlier, has since become a very, very significant use case that we're seeing a lot of adoption for. Data science, I think companies are really coming to the realization that that's an essential role in the organization. If we really believe that data is the most important asset, that it's the crucial asset in the new economy, then data scientist becomes a really essential role for any company. >> How do your Asian customers' requirements differ, or do they differ, from your European customers'? Because European customers clearly already have their backs against the wall. We have five weeks until GDPR goes into effect. Do many of your Asian customers, I'm sure a fair number sell into Europe, are they putting a full court, I was going to say in the U.S., a full court press on complying with GDPR, or do they have equivalent privacy mandates in various countries in Asia, or a bit of both? >> I think that one of the primary drivers I see in Asia is that a lot of companies there don't have the years of legacy architecture that European companies need to contend with. In some cases, that means that they can move towards next generation data-orientated architectures much quicker than European companies have. They don't have layers of legacy tech that they need to sunset. A great example of that is Reliance. Reliance is the largest company in India; they've got a subsidiary called Jio, which is the fastest growing telco in the world. They've implemented our technology to build a next-generation OSS system to improve their service delivery on their network. >> Operational support system. >> Exactly. They were able to do that from the ground up because they formed their telco division around being a data-only company and giving away voice for free. So they can to some extent move quicker and innovate a little faster in that regard. I do see much more emphasis on regulatory compliance in Europe than I see in Asia. I do think that GDPR, amongst other regulations, is a big driver of that. The other factor though I think that's influencing that is Cloud and Cloud strategy in general. What we've found is that customers are drawn to the Cloud for a number of reasons.
The economics sometimes can be attractive, the ability to be able to leverage the Cloud vendors' skills in terms of implementing complex technology is attractive, but most importantly, the elasticity and scalability that the Cloud provides is hugely important. Now, the key concern for customers as they move to the Cloud though, is how do they leverage that as a platform in the context of an overall data strategy, right? And when you think about what a data strategy is all about, it all comes down to understanding what your data assets are and ensuring that you can leverage them for a competitive advantage but do so in a regulatory compliant manner, whether that's data in motion or data at rest, whether it's on-prem or in the Cloud or in data across multiple Clouds. That's very much a top of mind concern for European companies. >> For your customers around the globe, specifically of course, your area of Europe and Asia, what percentage of your customers are deploying Hortonworks into a purely public Cloud environment, like HDInsight on Microsoft Azure or HDP inside of AWS, versus a private on-premises deployment, versus a hybrid public-private multi Cloud? Is it mostly on-prem? >> Most of our business is still on-prem to be very candid. I think almost all of our customers are looking at migrating some workloads to the Cloud. Even those that had intended to have a Cloud-first strategy have now realized that not all workloads belong in the Cloud. Some are actually more economically viable to be on-prem, and some just won't ever be able to move to the Cloud because of regulation. In addition to that, most of our customers are telling us that they actually want Cloud optionality. They don't want to be locked into a single vendor, so we very much view the future as hybrid Cloud, as multi Cloud, and we hear our customers telling us that rather than just have a Cloud strategy, they need a data strategy. They need a strategy to be able to manage data no matter where it lives, on which tier, to ensure that they are regulatory compliant with that data. But then to be able to understand that they can secure, govern, and manage those data assets at any tier. >> What percentage of your deals involve a partner? Like IBM is a major partner. Do you do a fair amount of co-marketing and joint sales and joint deals with IBM and other partners or are they mostly Hortonworks-led? >> No, partners are absolutely critical to our success in the international sphere. Our partner revenue contribution across EMEA in the past year grew, every region grew by over 150% in terms of channel contribution. Our total channel business was 28% of our total, right? That's a very significant contribution. The growth rate is very high. IBM are a big part of that, as are many other partners. We've got a very significant reseller channel, we've got IHV and ISV partners that are critical to our success also. Where we're seeing the most impact with IBM is where we go to some of these markets where we haven't had a presence previously, and they've got deep and long-standing relationships and that helps us accelerate time to value with our customers. >> Yeah, it's been a very good and solid partnership going back several years. Well, Joe, this is great, we have to wrap it up, we're at the end of our time slot. This has been Joe Morrissey, who is the VP for International at Hortonworks.
We're on theCUBE here at Dataworks Summit 2018 in Berlin, and we want to thank you all for watching this segment. Tune in tomorrow; we'll have a full slate of further discussions with Hortonworks, with IBM, and others on theCUBE. Have a good one. (upbeat music)
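Joe's pre-transaction example earlier -- reacting to engagement before a purchase happens -- is typically implemented as stream processing over data in motion. Here is a minimal, hedged sketch using the kafka-python client; the topic name, broker address, event fields, and the toy scoring rule are all invented for illustration:

```python
import json
from kafka import KafkaConsumer  # pip install kafka-python

# Topic, broker, and event fields are assumptions for this sketch.
consumer = KafkaConsumer(
    "customer-engagement",
    bootstrap_servers=["broker1:9092"],
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # Toy pre-transaction rule: a customer who touched us on social
    # and then walked into the store gets flagged before any purchase.
    if event.get("channel") == "in_store" and event.get("social_touch"):
        print("real-time offer candidate:", event.get("customer_id"))
```

The point is only the shape of the pattern: consume events as they arrive and act before the transaction, rather than analyzing it afterwards at rest.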

Published Date : Apr 18 2018


Muggie van Staden, Obsidian | Dataworks Summit 2018


 

>> Voiceover: From Berlin, Germany, it's theCUBE, covering DataWorks Summit Europe 2018, brought to you by Hortonworks. >> Hi, hello, welcome to theCUBE, I'm James Kobielus. I'm the lead analyst for Big Data Analytics at Wikibon, which is the team inside of SiliconANGLE Media that focuses on emerging trends and technologies. We are here, on theCUBE at DataWorks Summit 2018 in Berlin, Germany. And I have a guest here. This is Muggie, and if I get it wrong, Muggie Van Staden >> That's good enough, yep. >> Who is with Obsidian, which is a South Africa-based partner of Hortonworks. And I'm not familiar with Obsidian, so I'm going to ask Muggie to tell us a little bit about your company, what you do, your focus on open source, and really the opportunities you see for big data, for Hadoop, in South Africa, really the African continent as a whole. So, Muggie? >> Yeah, James, great to be here. Yes, Obsidian, we started it 23 years ago, focusing mostly on open source technologies, and as you can imagine that has changed a lot over the last 23 years. When we started, the concept of selling Linux was basically a box with a hat and maybe a T-shirt in it. Today that's changed. >> James: Hopefully there's a stuffed penguin in there, too. (laughing) I could use that right now. >> Maybe a manual. So our business has evolved a lot over the last 23 years. And one of the technologies that has come around is Hadoop. And we actually started with some of the other Hadoop vendors out there as our first partnerships, and probably three or four years ago we decided to take on Hortonworks as one of our vendors. We found them an amazing company to work with. And together with them we've now worked in four of the big banks in South Africa. One of them is actually here at DataWorks Summit. They won an award last night. So it's fantastic to be part of all of that. And yes, South Africa being so far removed from the rest of the world, they have different challenges. Everybody's nervous of Cloud. We have the joys that we don't really have any Cloud players locally yet. The two big players, Microsoft and Amazon, are planning some data centers soon. So the guys have different challenges to Europe and to the States. But big data, the big banks are looking at it, starting to deploy nice Hadoop clusters, starting to ingest data, starting to get real business value out of it, and we're there to help, and hopefully the four is the start for us and we can help lots of customers on this journey. >> Are South African-based companies, because you are so distant in terms of miles on the planet from Europe, from the EU, is any company in South Africa, or many companies, concerned at all about the global, or say the General Data Protection Regulation, GDPR? US-based companies certainly are 'cause they operate in Europe. So is that a growing focus for them? And we have five weeks until GDPR kicks in. So tell me about it. >> Yeah, so from a South African point of view, some of the banks and some of the companies would have subsidiaries in Europe. So for them it's a very real thing. But we have our own Act called PoPI, the Protection of Personal Information Act, so very similar. So everybody's keeping an eye on it. Everybody's worried. I think everybody's waiting for the first company to be fined. And then they will all make sure that they get their things right. But I think not just because of legislation, I think it's something that everybody should worry about. How do we protect data?
How do we make sure the right people have access to the correct data when they should, and nobody violates that? Because I mean, in this day and age, you know, Google and Amazon and those guys probably know more about me than my family does. So it's a challenge for everybody. And I think it's just the right thing for companies to do, to make sure that they really do take good care of the data that they have. We trust them with our money and now we're trusting them with our data. So it's a real challenge for everybody. >> So how long has Obsidian been a partner of Hortonworks, and how has your role, or partnership I should say, evolved over that time, and how do you see it evolving going forward? >> We've been a partner for about three or four years now, and started off as a value-added reseller. We're also a training partner in South Africa for them. And as they as a company have evolved, we've had to evolve with them. You know, so they started with HDP as the Hadoop platform. Now they're doing NiFi and HDF, so we have to learn all of those technologies as well. But very, very excited where they're going with the DataPlane service, just managing a customer's data across multiple clusters, multiple clouds, because that's realistically where we see all the customers going: you know, on-premise clusters and typically multiple Clouds, and how do you manage that? And we are very excited to walk this road together with Hortonworks and all the South African customers that we have. >> So you say your customers are deploying multiple Clouds. Public Clouds or hybrid private-public Clouds? Give us a sense, for South Africa, whether public Cloud is a major deployment option or choice for financial services firms that you work with. >> Not necessarily financial services, so most of them are kicking tires at this stage, nobody's really put major workloads in there. As I mentioned, both Amazon and Microsoft are planning to put data centers down in South Africa very soon, and I think that will spur a big movement towards Cloud, but we do have some customers, unfortunately not Hortonworks customers, that are actually mostly in the Cloud. And they are now starting to look at a multi-Cloud strategy. So to ideally be in the three or four major Cloud providers and spinning up the right workloads in the right Cloud, and we're there to help. >> One of the most predominant workloads that your customers are running in the Cloud, is it backend in terms of data ingest and transformation? Is it a bit of maybe data warehousing with unstructured data? Is it a bit of things like queryable archiving? I want to get a sense for, what is predominant right now in workloads? >> Yeah I think most of them start with (mumble) environments. (mumbles) one customer that's heavily into Cloud from a data point of view. Literally it's their data warehouse. They put everything in there. I think from the banking customers, most of them are considering DR of their existing Hadoop clusters, maybe a subset of their data and not necessarily everything. And I think some of them are also considering putting their unstructured data outside on the Cloud because that's where most of it's coming from. I mean, if you have Twitter, Facebook, LinkedIn data, it's a bit silly to pull all of that into your environment. Why not just put it in the Cloud, that's where it's coming from, and analyze that and connect it back to your data where relevant.
So I think a lot of the customers would love to get there, and now Hortonworks makes it so much easier to do that. I think a lot of them will start moving in that direction. >> Now, excuse me, so are any or many of your customers doing development and training of machine learning algorithms and models in their Clouds? And to the extent that they are, are they using tools like the IBM Data Science Experience that Hortonworks resells for that? >> I think it's definitely on the radar for a lot of them. I'm not aware of anybody using it yet, but lots of people are looking at it and excited about the partnership between IBM and Hortonworks. And IBM has been a longstanding player in the South African market, and it's exciting for us as well to bring them into the whole Hortonworks ecosystem, and together solve real world problems. >> Give us a sense for how built out the big data infrastructure is in neighboring countries like Botswana or Angola or Mozambique and so forth. Is that an area that your company, are those regions that your company operates in? Sells into? >> We don't have offices, but we don't have a problem going in and helping customers there, so we've had projects in the past, not data related, where we've flown in and helped people. Most of the banks, from a South African point of view, have branches into Africa. So it's on the roadmap. Some are a little bit ahead of others, but it's definitely on the roadmap to actually put down Hadoop clusters in some of the major countries all throughout Africa. There's a big debate: do you put it down there, do you leave the data in South Africa? So they're all going through their own legislation, but it's definitely on the roadmap for all of them to actually take their data, knowledge in data science, up into Africa. >> Now you say that in South Africa proper, there are privacy regulations, you know, maybe not the same as GDPR, but equivalent. Throughout Africa, at least throughout Southern Africa, how is privacy regulation lacking or is it emerging? >> I think it's emerging. A lot of the countries do have the basic rule that their data shouldn't leave the country. So everybody wants that data sovereignty, and that's why a lot of them will not go to Cloud, and that's part of the challenges for the banks, if they have banks up in Botswana, etc. And Botswana's rules say the data has to stay in country. They have to figure out how they connect that data to get the value for all of their customers. So real world challenges for everybody. >> When you're going into and selling into an emerging or developing nation, do you need to provide upfront consulting to help the customer bootstrap their own understanding of the technology and making the business case and so forth? And how consultative is the selling process... >> Absolutely, and what we see with the banks, most of them even have a consultative approach within their own environment, so you would have the South African team maybe flying into the team at (mumbles) Botswana, and share some of the learnings that they've had. And then help those guys get up to speed. The reality is the skills are not necessarily in country. So there's a lot of training, a lot of help to go and say, we've done this, let us upskill you. And be a part of that process. So we sometimes send in teams to come and do two- or three-day training, basics, etc., so that ultimately the guys can operationalize in each country by themselves. >> So, that's very interesting, so what do you want to take away from this event?
What do you find most interesting in terms of the sessions you've been in around the community showcase that you can take back to Obsidian, back in your country, and apply? Like the announcement this morning of the Data Steward Studio. Do you see a possibility that your customers might be eager to use that for curation of their data in their clusters? >> Definitely, and one of the key messages for me was Scott, the CTO's, message about your data strategy, your Cloud strategy, and your business strategy. It is effectively the same thing. And I think that's the biggest message that I would like to take back to the South African customers, to go and say, you need to start thinking about this. You know, as Cloud becomes a bigger reality for us, we have to align, we have to go and say, how do we get your data where it belongs? So you know, we like to say to our customers, we help the teams get the right code to the right computer and the right data, and I think it's absolutely critical for all of the customers to go and say, well, where is that data going to sit? Where is the right compute for that piece of data? And can we get it then, can we manage it, etc.? And align to business strategy. Everybody's trying to do digital transformation, and those three things go very much hand-in-hand. >> Well, Muggie, thank you very much. We're at the end of our slot. This has been great. It's been excellent to learn more about Obsidian and the work you're doing in South Africa, providing big data solutions and working with customers to build the big data infrastructure in the financial industry down there. So this has been theCUBE. We've been speaking with Muggie Van Staden of Obsidian Systems here at DataWorks Summit 2018 in Berlin. Thank you very much.

Published Date : Apr 18 2018


Scott Gnau, Hortonworks | Dataworks Summit EU 2018


 

(upbeat music) >> Announcer: From Berlin, Germany, it's The Cube, covering DataWorks Summit Europe 2018. Brought to you by Hortonworks. >> Hi, welcome to The Cube, we're separating the signal from the noise and tuning into the trends in data and analytics, here at DataWorks Summit 2018 in Berlin, Germany. This is the sixth year, I believe, that DataWorks has been held in Europe. Last year I believe it was in Munich, now it's in Berlin. It's a great show. The host is Hortonworks and our first interviewee today is Scott Gnau, who is the chief technology officer of Hortonworks. Of course Hortonworks established themselves about seven years ago as one of the up-and-coming startups commercializing a then brand-new technology called Hadoop and MapReduce. They've moved well beyond that in terms of their go-to-market strategy, their product portfolio, their partnerships. So Scott, this morning, it's great to have ya'. How are you doing? >> Glad to be back and good to see you. It's been awhile. >> You know, yes, I mean, you're an industry veteran. We've both been around the block a few times but I remember you years ago. You were at Teradata and I was at another analyst firm. And now you're with Hortonworks. And Hortonworks is really on a roll. I know you're not Rob Bearden, so I'm not going to go into the financials, but your financials look pretty good, your latest. You're growing, your deal sizes are growing. Your customer base is continuing to deepen. So you guys are on a roll. So we're here in Europe, we're here in Berlin in particular. You did the keynote this morning; it's five weeks until GDPR. The sword of Damocles, the GDPR sword of Damocles. It's not just affecting European based companies, but it's affecting North American companies and others who do business in Europe. So your keynote this morning, your core theme was that, if you're an enterprise, your business strategy is equated with your cloud strategy now, is really equated with your data strategy. And you got to a lot of that. It was a really good discussion. And where GDPR comes into the picture is the fact that protecting data, personal data of your customers is absolutely important, in fact it's imperative and mandatory, and will be in five weeks or you'll face a significant penalty if you're not managing that data and providing customers with the right to have it erased, or the right to withdraw consent to have it profiled, and so forth. So enterprises all over the world, especially in Europe, are racing as fast as they can to get compliant with GDPR by the May 25th deadline. So, one of the things you discussed this morning, you had an announcement overnight that Hortonworks has released a new solution in technical preview called the Data Steward Studio. And I'm wondering if you can tie that announcement to GDPR? It seems like data stewardship would have a strong value for your customers. >> Yeah, there's definitely a big tie-in. GDPR is certainly creating a milestone, kind of a trigger, for people to really think about their data assets. But it's certainly even larger than that, because when you even think about driving digitization of a business, driving new business models and connecting data and finding new use cases, it's all about finding the data you have, understanding what it is, where it came from, what's the lineage of it, who had access to it, what did they do to it? These are all governance kinds of things, which are also now mandated by laws like GDPR.
And so it's all really coming together in the context of the new modern data architecture era that we live in, where a lot of data that we have access to, we didn't create. And so it was created outside the firewall by a device, by some application running with some customer, and so capturing and interpreting and governing that data is very different than taking derivative transactions from an ERP system, which are already adjudicated and understood, and governing that kind of a data structure. And so this is a need that's driven from many different perspectives. It's driven from the new architecture, the way IoT devices are connecting and just creating a data bomb, that's one thing. It's driven by business use cases, just saying what are the assets that I have access to, and how can I try to determine patterns between those assets where I didn't even create some of them, so how do I adjudicate that? >> Discovering and cataloging your data-- >> Discovering it, cataloging it, actually even... When I even think about data, just think of the files on my laptop that I created, and I don't remember what half of them are. So creating the metadata, creating that trail of bread crumbs that lets you piece together what's there, what's the relevance of it, and how, then, you might use it for some correlation. And then you get in, obviously, to the regulatory piece that says sure, if I'm a new customer and I ask to be forgotten, the only way that you can guarantee to forget me is to know where all of my data is. >> If you remember that they are your customer in the first place and you know where all that data is, if you're even aware that it exists, that's the first and foremost thing for an enterprise to be able to assess their degree of exposure to GDPR. >> So, right. It's like a whole new use case. It's a microcosm of all of these really big things that are going on. And so what we've been trying to do is really leverage our expertise in metadata management using the Apache Atlas project. >> Interviewer: You and IBM have done some major work-- >> We work with IBM and the community on Apache Atlas. You know, metadata tagging is not the most interesting topic for some people, but in the context that I just described, it's kind of important. And so I think one of the areas where we can really add value for the industry is leveraging our lowest common denominator, open source, open community kind of development to really create a standard infrastructure, a standard open infrastructure for metadata tagging, into which all of these use cases can now plug. Whether it's I want to discover data and create metadata about the data based on patterns that I see in the data, or I've inherited data and I want to ensure that the metadata stays with that data through its life cycle, so that I can guarantee the lineage of the data, and be compliant with GDPR-- >> And in fact, tomorrow we will have Mandy Chessell from IBM, a key Hortonworks partner, discussing the open metadata framework you're describing and what you're doing. >> And that was part of this morning's keynote close also. It all really flowed nicely together. Anyway, it is really a perfect storm. So what we've done is we've said, let's leverage this lowest common denominator, standard metadata tagging, Apache Atlas, and uplevel it, and not have it be part of a cluster, but actually have it be a cloud service that can be in force across multiple data stores, whether they're in the cloud or whether they're on prem.
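To make Scott's metadata-tagging point concrete, here is a hedged sketch of attaching a classification to an existing Apache Atlas entity so the tag can travel with the data through its life cycle. The path follows the Atlas v2 entity REST API as documented, but the host, credentials, GUID, and classification name are placeholders, not anything from this interview:

```python
import requests  # pip install requests

ATLAS_URL = "http://atlas.example.com:21000"  # assumed host
AUTH = ("admin", "admin")                     # assumed credentials

def tag_entity(guid, classification="PII"):
    """Attach a classification to an existing entity by GUID.

    Follows the Atlas v2 entity REST API; verify the path against
    your Atlas version. The classification type must already exist.
    """
    resp = requests.post(
        f"{ATLAS_URL}/api/atlas/v2/entity/guid/{guid}/classifications",
        json=[{"typeName": classification}],
        auth=AUTH,
        timeout=30,
    )
    resp.raise_for_status()

if __name__ == "__main__":
    # Placeholder GUID -- in practice you would look this up first.
    tag_entity("00000000-0000-0000-0000-000000000000")
```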
>> Interviewer: That's the Data Steward Studio? >> Well, DataPlane and Data Steward Studio really enable those things to come together. >> So the Data Steward Studio is the second service >> Like an app. >> under the Hortonworks DataPlane service. >> Yeah, so the whole idea is to be able to tie those things together, and when you think about it in today's hybrid world, and this is where I really started, where your data strategy is your cloud strategy, they can't be separate, because if they're separate, just think about what would happen. So I've copied a bunch of data out to the cloud. All memory of any lineage is gone. Or I've got to go set up manually another set of lineage that may not be the same as the lineage it came with. And so being able to provide that common service across footprint, whether it's multiple data centers, whether it's multiple clouds, or both, is a really huge value, because now you can sit back and through that single pane, see all of your data assets and understand how they interact. That obviously has the ability then to provide value like with Data Steward Studio, to discover assets, maybe to discover assets and discover duplicate assets, where, hey, I can save some money if I get rid of this cloud instance, 'cause it's over here already. Or to be compliant and say yeah, I've got these assets here, here, and here, I am now compelled to do whatever: delete, protect, encrypt. I can now go do that and keep a record through the metadata that I did it. >> Yes, in fact that is very much at the heart of compliance, you got to know what assets there are out there. And so it seems to me that Hortonworks is increasingly... the H-word rarely comes up these days. >> Scott: Not Hortonworks, you're talking about Hadoop. >> Hadoop rarely comes up these days. When the industry talks about you guys, it's known that's your core, that's your base, that's where HDP and so forth, great product, great distro. In fact, in your partnership with IBM, a year or more ago, I think it was, IBM standardized on HDP in lieu of their distro, 'cause it's so well-established, so mature. But going forward, you guys in many ways, Hortonworks, you have positioned yourselves now. Wikibon sees you as being the premier solution provider of big data governance solutions specifically focused on multi-cloud, on unstructured data, and so forth. So the announcement today of the Data Steward Studio very much builds on that capability you already have there. So going forward, can you give us a sense of your roadmap in terms of building out DataPlane's service? 'Cause this is the second of these services under the DataPlane umbrella. Give us a sense for how you'll continue to deepen your governance portfolio in DataPlane. >> Really the way to think about it, there are a couple of things that you touched on that I think are really critical, certainly for me, and for us at Hortonworks to continue to repeat, just to make sure the message got there. Number one, Hadoop is definitely at the core of what we've done, and was kind of the secret sauce. Some very different stuff in the technology, also the fact that it's open source and community, all those kinds of things. But that really created a foundation that allowed us to build the whole beginning of big data management. And we added and expanded to the traditional Hadoop stack by adding Data in Motion. And so what we've done is-- >> Interviewer: NiFi, I believe, you made a major investment.
Yeah, so we made a large investment in Apache NiFi, as well as Storm and Kafka as kind of a group of technologies. And the whole idea behind doing that was to expand our footprint so that we would enable our customers to manage their data through its entire lifecycle, from being created at the edge, all the way through streaming technologies, to landing, to analytics, and then even analytics being pushed back out to the edge. So it's really about having that common management infrastructure for the lifecycle of all the data, including Hadoop and many other things. And then in that, obviously as we discuss whether it be regulation, whether it be, frankly, future functionality, there's an opportunity to uplevel those services from an overall security and governance perspective. And just like Hadoop kind of upended traditional thinking... and what I mean by that was not the economics of it, specifically, but just the fact that you could land data without describing it. That seemed so unimportant at one time, and now it's like the key thing that drives the difference. Think about sensors that are sending in data that reconfigure firmware, and those streams change. Being able to acquire data and then assess the data is a big deal. So the same thing applies, then, to how we apply governance. I said this morning, traditional governance was hey, I, as this employee, have access to this file, this file, this file, and nothing else. I don't know what else is out there. I only have access to what my job title describes. And that's traditional data governance. In the new world, that doesn't work. Data scientists need access to all of the data. Now, that doesn't mean we need to give away PII. We can encrypt it, we can tokenize it, but we keep referential integrity. We keep the integrity of the original structures, and those who have a need to actually see the PII can get the token and see the PII. But it's governance thought of inversely from how it's been thought about for 30 years. >> It's so great you've worked governance into an increasingly streaming, real-time, data-in-motion environment. Scott, this has been great. It's been great to have you on The Cube. You're an alum of The Cube. I think we've had you at least two or three times over the last few years. >> It feels like 35. Nah, it's pretty fun. >> Yeah, you've been great. So we are here at Dataworks Summit in Berlin. (upbeat music)
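Scott's tokenize-but-keep-referential-integrity point is usually implemented with deterministic, keyed pseudonymization: the same value always maps to the same token, so tables still join on the tokenized column, while only a privileged service can resolve tokens back to PII. A minimal sketch; the key handling and in-memory vault are deliberately naive assumptions, not a production design:

```python
import hashlib
import hmac

# Deterministic pseudonymization: the same customer ID always yields
# the same token, so tables still join on the tokenized column, but
# the raw PII never appears. Key management here is deliberately
# naive -- a real system would use an HSM or a token vault service.
SECRET_KEY = b"replace-with-managed-key"  # assumption for the sketch

def tokenize(value: str) -> str:
    return hmac.new(SECRET_KEY, value.encode("utf-8"),
                    hashlib.sha256).hexdigest()[:16]

# A privileged service keeps the reverse mapping for users entitled
# to see the PII; everyone else only ever sees the token.
vault = {}

def tokenize_and_vault(value: str) -> str:
    token = tokenize(value)
    vault[token] = value
    return token

orders = [{"customer": tokenize_and_vault("alice@example.com"), "total": 42}]
profile = {"customer": tokenize("alice@example.com"), "segment": "gold"}
# Referential integrity preserved: both rows carry the same token.
assert orders[0]["customer"] == profile["customer"]
```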
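And Scott's earlier point about Data Steward Studio surfacing duplicate assets across clouds can be approximated with a generic technique: content fingerprinting. To be clear, this is not how the product itself works -- it is only a sketch of the underlying idea, with the two store paths standing in for an on-prem directory and a locally mounted cloud bucket:

```python
import hashlib
from collections import defaultdict
from pathlib import Path

# The two roots are stand-ins for an on-prem store and a mounted bucket.
STORES = [Path("/data/onprem"), Path("/mnt/cloud-bucket")]

def fingerprint(path, chunk=1 << 20):
    """SHA-256 of a file's contents, read in 1 MiB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            digest.update(block)
    return digest.hexdigest()

by_hash = defaultdict(list)
for store in STORES:
    if not store.exists():  # skip stores that aren't mounted
        continue
    for p in store.rglob("*"):
        if p.is_file():
            by_hash[fingerprint(p)].append(p)

for digest, paths in by_hash.items():
    if len(paths) > 1:
        print("duplicate candidates:", [str(p) for p in paths])
```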

Published Date : Apr 18 2018


Keynote Analysis | Dataworks Summit 2018


 

>> Narrator: From Berlin, Germany, it's theCUBE! Covering DataWorks Summit, Europe 2018. (upbeat music) Brought to you by Hortonworks. (upbeat music) >> Hello, and welcome to theCUBE. I'm James Kobielus. I'm the lead analyst for Big Data analytics in the Wikibon team of SiliconANGLE Media, and we're here at DataWorks Summit 2018 in Berlin, Germany. And it's an excellent event, and we are here for two days of hard-hitting interviews with industry experts focused on the hot issues facing customers, enterprises, in Europe and the world over, related to the management of data and analytics. And what's super hot this year, and it will remain hot as an issue, is data privacy and privacy protection. Five weeks from now, a new regulation of the European Union called the General Data Protection Regulation takes effect, and it's a mandate affecting not only businesses based in the EU but any business that does business in the EU. It's coming fairly quickly, and enterprises on both sides of the Atlantic and really throughout the world are focused on GDPR compliance. So that's a hot issue that was discussed this morning in the keynote, and so what we're going to be doing over the next two days, we're going to be having experts from Hortonworks, the show's host, as well as from IBM, one of Hortonworks' lead partners, and a customer, Munich Re; they will appear on theCUBE and I'll be interviewing them about not just GDPR but really the trends facing the Big Data industry. Hadoop, of course; Hortonworks got started about seven years ago as one of the solution providers that was focused on commercializing the open source Hadoop code base, and they've come quite a ways. Their recent financials were very good. They continue to rock 'n' roll on the growth side and customer acquisitions and deal sizes. So we'll be talking a little bit later to Scott Gnau, their chief technology officer, who did the core keynote this morning. He'll be talking not only about how the business is doing but about a new product announcement, the Data Steward Studio that Hortonworks announced overnight. This new solution is directly related to, and useful for, GDPR compliance, and we'll ask Scott to bring us more insight there. But what we'll be doing over the next two days is extracting signal from noise. The Big Data space continues to grow and develop. Hadoop has been around for a number of years now, but in many ways it's been superseded on the agenda of enterprises that are building applications from data by newer, primarily open source technologies such as Apache Spark and TensorFlow for building deep learning and so forth. We'll be discussing the trends towards the deepening of the open source data analytics stack with our guests. We'll be talking with a European based reinsurance company, Munich Re, about the data lake that they have built for their internal operations, and we'll be asking Andres Kohlmaier, their lead of data engineering, to discuss how they're using it, how they're managing their data lake, and possibly to give us some insight about how it will serve them in achieving GDPR compliance and sustaining it going forward. So what we will be doing is that we'll be looking at trends, not just in compliance, not just in the underlying technologies, but the applications that Hadoop and Spark and so forth, these technologies, are being used for; and the applications are really the same worldwide, the initiatives in Europe mirroring what enterprises everywhere are doing.
They're moving away from Big Data environments built primarily on data at rest, which has been Hadoop's sweet spot, towards more streaming architectures. And so Hortonworks, as I said, the show's host, has been going more deeply towards streaming architectures with its investments in NiFi and so forth. We'll be asking them to give us some insight about where they're going with that. We'll also be looking at the growth of multi-cloud Big Data environments. What we're seeing is a trend in the marketplace away from predominantly premises-based Big Data platforms towards public cloud-based Big Data platforms. Hortonworks is partners with a number of the public cloud providers, including the IBM that I mentioned; they've also got partnerships with Microsoft Azure, with Amazon Web Services, with Google, and so forth. We'll be asking our guests to give us some insight about where they're going in terms of their support for multi-clouds, support for edge computing, analytics, and the internet of things. Big Data increasingly is evolving towards more of a focus on serving applications at the edge, like mobile devices that have autonomous smarts, as in self-driving vehicles. Big Data is critically important for feeding, for modeling, and for building the AI needed to power the intelligence in endpoints. Not just self-driving cars but intelligent appliances and conversational user interfaces for mobile devices and consumer appliances; you know, Amazon's got their Alexa, Apple's got their Siri, and so forth. So we'll be looking at those trends as well: the push of more of that intelligence towards the edge, and the power and the role of Big Data and data-driven algorithms, like machine learning, in driving those kinds of applications. In the Wikibon team that I'm embedded within, we have just recently published our updated forecast for the Big Data analytics market, and we've identified key trends that are revolutionizing, disrupting, and changing the market for Big Data analytics. Among the core trends, I mentioned the move towards multi-clouds and towards more public cloud-based Big Data environments in the enterprise. I'll be asking Hortonworks, who of course built their business and their revenue stream primarily on on-premises deployments, to give us a sense for how they plan to evolve as a business as their customers move towards more public cloud-facing deployments. And IBM, of course, will be here in force. Tomorrow, which is a Thursday, we have several representatives from IBM to talk about their initiatives and partnerships with Hortonworks and others in the areas of metadata management, machine learning, and AI development tools and collaboration platforms. We'll also be discussing the push by IBM and Hortonworks to enable greater depths of governance applied to enterprise deployments of Big Data: both data governance, an area where Hortonworks and IBM as partners have achieved a lot of traction in terms of recognition among the pace setters in data governance in multi-cloud, unstructured, Big Data environments, and also model governance, the governing, the version control, and so forth, of machine learning and AI models. Model governance is a huge push by enterprises who increasingly are doing data science, which is what machine learning is all about. 
Taking that competency, that practice, and turning it into more of an industrialized pipeline that builds, trains, and deploys into an operational environment a steady stream of machine-learning models, into multiple applications: you know, edge applications, conversational UIs, search engines, eCommerce environments that are driven increasingly by machine learning that's able to process Big Data in real time and deliver next best actions and so forth, more intelligence, into all applications. So we'll be asking Hortonworks and IBM to net out where they're going with their partnership in terms of enabling a multi-layered governance environment to enable this pipeline, this machine-learning pipeline, this data science pipeline, to be deployed as an operational capability into more organizations. Also, one of the areas where I'll be probing our guests is automation in the machine learning pipeline. That's been a hot theme that Wikibon has seen in our research. A lot of vendors in the data science arena are adding automation capabilities to their machine-learning tools. Automation is critically important for productivity. Data scientists as a discipline are in limited supply; experienced, trained, seasoned data scientists fetch a high price, and there aren't that many of them, so more of the work they do needs to be automated. It can be automated by increasingly mature tools on the market from a growing range of vendors. I'll be asking IBM and Hortonworks to net out where they're going with automation inside of their Big Data and machine learning tools and partnerships going forward. So really what we're going to be doing over the next few days is looking at these trends, but it's going to come back down to GDPR as a core envelope that many companies attending this event, DataWorks Summit, Berlin, are facing. So I'm James Kobielus with theCUBE. Thank you very much for joining us, and we look forward to starting our interviews in just a little while. Our first up will be Scott Gnau from Hortonworks. Thank you very much. (upbeat music)

Published Date : Apr 18 2018


Sastry Malladi, FogHorn | Big Data SV 2018


 

>> Announcer: Live from San Jose, it's theCUBE, presenting Big Data Silicon Valley, brought to you by SiliconANGLE Media and its ecosystem partners. (upbeat electronic music) >> Welcome back to The Cube. I'm Lisa Martin with George Gilbert. We are live at our event, Big Data SV, in downtown San Jose, down the street from the Strata Data Conference. We're joined by a new guest to theCUBE, Sastry Malladi, the CTO of FogHorn. Sastry, welcome to theCUBE. >> Thank you, thank you, Lisa. >> So FogHorn, cool name, what do you guys do, who are you? Tell us all that good stuff. >> Sure. We are a startup based in Silicon Valley, right here in Mountain View. We started about three years ago, three plus years ago. We provide intelligence software for edge computing, or fog computing; that's where our company name, FogHorn, got started. Particularly, it's for the industrial IoT sector. All of the industrial guys, whether it's transportation, manufacturing, oil and gas, smart cities, smart buildings, any of those different sectors, they use our software to predict failure conditions in real time, or do condition monitoring, or predictive maintenance, any of those use cases, and successfully save a lot of money. Obviously in the process, you know, we get paid for what we do. >> So Sastry... GE popularized this concept of IIoT and the analytics and, sort of, the new business outcomes you could build on it, like Power by the Hour instead of selling a jet engine. >> Sastry: That's right. >> But actually, David Floyer did some pioneering research on how we're going to have to do a lot of analytics on the edge, for latency and bandwidth. What's the FogHorn secret sauce that others would have difficulty with on the edge analytics? >> Okay, that's a great question. Before I directly answer the question, if you don't mind, I'll actually even describe why it's important to do that, right? So a lot of these industrial customers, if you look at, because we work with a lot of them, the amount of data that's produced from all of these different machines is terabytes to petabytes of data, it's real. And it's not just the traditional digital sensors, but there are video, audio, acoustic sensors out there. The amount of data is humongous, right? It's not even practical to send all of that to a Cloud environment and do data processing, for many reasons. One is obviously the connectivity, bandwidth issues, and all of that. But the two most important things are cyber security, none of these customers actually want to connect these highly expensive machines to the internet, that's one, and the second is the lack of real-time decision making. When there is a problem, they want to know before it's too late. We want to notify them of a problem that is occurring so that they have a chance to go fix it and optimize the asset that is in question. Now, existing solutions do not work in this constrained environment. That's why FogHorn had to invent that solution. >> And tell us, actually, just to be specific, how constrained an environment you can operate in. >> We can run in less than about 100 to 150 megabytes of memory, on a single-core to dual-core CPU, whether it's an ARM processor, an x86 Intel-based processor, almost literally no storage, because we're a real-time processing engine. Optionally, you could have some storage if you wanted to store some of the results locally there, but that's the kind of environment we're talking about. 
Now, when I say 100 megabytes of memory, it's like a quarter of a Raspberry Pi, right? And even in that environment we have customers that run dozens of machine learning models, right? And we're not talking -- >> George: Like an ensemble. >> Like an anomaly detection, a regression, a random forest, or a clustering, a whole gamut of those. Now, if we get into more deep learning models, like image processing and neural nets and all of that, you obviously need a little bit more memory. But what we have shown, we could still run: one of our largest smart-city, smart-buildings customers, an elevator company, runs on a Raspberry Pi in millions of elevators, right? Dozens of machine learning algorithms on top of that, right? So that's the kind of size we're talking about. >> Let me just follow up with one question on the other thing you said: besides having to do the low-latency processing locally, you said a lot of customers don't want to connect these brownfield, I guess, operations technology machines to the internet, and physically, I mean, there was physical separation for security. So it's like security, Bill Joy used to say "security by obscurity." Here it's security by -- >> Physical separation, absolutely. Tell me about it. I was actually coming from, if you don't mind, last week I was in Saudi Arabia, one of the oil and gas plants where we deployed our software. You have to go through five levels of security even to get there; it's a multibillion-dollar plant, refining the gas and all of that. Completely offline, no connectivity to the internet, and we installed, in their existing small box, our software, connected to their live video cameras that are actually measuring the stuff, doing the processing and detecting the specific conditions that we're looking for. >> That's my question, which was, if they want to be monitoring, so there's one low level, really low hardware level, the sensor feeds, but you could actually have a richer feed, which is video and audio. But how much of that, then, are you doing the, sort of, inferencing on locally? Or even retraining? And I assume that, since it's not the OT device, and it's something that's looking at it, you might be more able to send it back up to the Cloud if you needed to do retraining? >> That's exactly right. So the way the model works is, particularly for image processing, because it's a more complex process to train and create a model, you could create a model offline, like in a GPU box, an FPGA box, and whatnot, import and bring the model back into this small little device that's running in the plant, and now the live video data is coming in and the model is inferencing the specific thing. Now there are two ways to update and revise the model: incremental revision of the model, you could do that if you want, or you can send the results to a central location. Not the internet; they do have local, in this example, a PI DB, an OSIsoft PI DB, or some other local service out there, where you have an opportunity to gather the results from each of these different locations and then consolidate and retrain the model, and put the model back again. >> Okay, the one part that I didn't follow completely is... If the model is running ultimately on the device, again and perhaps not even on a CPU, but a programmable logic controller. >> It could, even though a programmable controller also typically has some sort of CPU there as well. These days, most of the PLCs, programmable controllers, have either an ARM-based processor or an x86-based processor. 
We can run on either one of those, too. >> So, okay, assume you've got the model deployed down there for the, you know, local inferencing. Now, some retraining is going to go on in the Cloud, where you're pulling in the richer perspective from many different devices. How does that model get back out to the device if it doesn't have the connectivity between the device and the Cloud? >> Right, so if there's strictly no connectivity, what happens is, once the model is regenerated or retrained, they put the model on a USB stick; it's low-tech. USB stick, bring it to the PLC device, and upload the model. >> George: Oh, so this is sort of how we destroyed the Iranian centrifuges. >> That's exactly right, exactly right. But you know, in some other environments, even though there's no connectivity to the Cloud environment per se, the devices have the ability to connect to the Cloud. Optionally, they say, "Look, I'm the device that's coming up, do you have an upgraded model for me?" Then it can pull the model. So in some of the environments it's super strict, where there is absolutely no way to connect this device; you put it on a USB stick and bring the model back here. In other environments, the device can query the Cloud, but the Cloud cannot connect to the device. This is a very popular model these days because, in other words, imagine this: an elevator sitting in a building. Somebody from the Cloud cannot reach the elevator, but an elevator can reach the Cloud when it wants to. >> George: Sort of like a jet engine, you don't want the Cloud to reach the jet engine. >> That's exactly right. The jet engine can reach the Cloud if it wants to, when it wants to, but the Cloud cannot reach the jet engine. That's how we can pull the model.
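(To make that pull pattern concrete, here is a minimal device-side sketch in Python. It is illustrative only, not FogHorn product code; the endpoint URL, file paths, and polling interval are all hypothetical.)

    import time
    import urllib.request

    MODEL_URL = "https://models.example.com/elevator/latest"  # hypothetical endpoint
    LOCAL_MODEL = "/opt/edge/model.bin"                       # hypothetical path
    POLL_SECONDS = 3600

    def current_version():
        # Version of the locally deployed model, if any.
        try:
            with open(LOCAL_MODEL + ".version") as f:
                return f.read().strip()
        except FileNotFoundError:
            return None

    def check_and_update():
        # The device initiates the connection; the cloud never reaches in.
        with urllib.request.urlopen(MODEL_URL + "/version", timeout=10) as resp:
            remote = resp.read().decode().strip()
        if remote != current_version():
            with urllib.request.urlopen(MODEL_URL + "/blob", timeout=60) as resp, \
                 open(LOCAL_MODEL, "wb") as out:
                out.write(resp.read())
            with open(LOCAL_MODEL + ".version", "w") as f:
                f.write(remote)

    while True:
        try:
            check_and_update()
        except OSError:
            pass  # no connectivity right now; keep inferencing with the current model
        time.sleep(POLL_SECONDS)
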
>> So Sastry, as a CTO you meet with customers often. You mentioned you were in Saudi Arabia last week. I'd love to understand how you're leveraging and engaging with customers to really help drive the development of FogHorn, in terms of being differentiated in the market. What are those, kind of, bi-directional, symbiotic customer relationships like? And how are they helping FogHorn? >> Right, that's actually a great question. We learn a lot from customers, because we started a long time ago. We did an initial version of the product. As we began to talk to the customers, particularly, that's part of my job, where I go talk to many of these customers, they give us feedback. Well, my problem is really that I can't even give you connectivity to the Cloud to upgrade the model. I can't even give you sample data. How do you do that modeling, right? And sometimes they say, "You know what, we are not technical people; help us express the problem, the outcome. Give me tools that help me express that outcome." So we created a bunch of what we call OT tools, operational technology tools. How we distinguish ourselves in this process from the traditional Cloud-based vendors, the traditional data science and data analytics companies, is that they think in terms of computer scientists, computer programmers, and expressions. We think in terms of industrial operators: what can they express, what do they know? They don't really necessarily care when you tell them, "I've got an anomaly detection data science machine learning algorithm;" they're going to look at you like, "What are you talking about? I don't understand what you're talking about," right? You need to tell them, "Look, this machine is failing." What are the conditions in which the machine is failing? How do you express that? And then we translate that requirement into the underlying models, the underlying VEL expressions (VEL is our CEP expression language). So we learned a ton about user interface capabilities, latency issues, connectivity issues, different protocols, a number of things that we learned from customers. >> So I'm curious: more of the big data vendors are recognizing data in motion and data coming from devices, and some, like Hortonworks DataFlow, NiFi, have a MiNiFi component written in C++, with a really low resource footprint. But I assume that that's really just a transport. It's almost like a collector, and it doesn't have the analytics built in -- >> That's exactly right. NiFi has the transport, it has the real-time transport capability, for sure. What it does not have is this notion of that CEP concept. How do you combine all of the streams, everything is time series data for us, right, from the devices, whether it's coming from a device or whether it's coming from another static source out there? How do you express a pattern, a recognition pattern definition, across these streams? That's where our CEP comes into the picture. A lot of these seemingly similar software capabilities that people talk about don't quite exactly have either the streaming capability, or the CEP capability, or the real-time, or the low footprint. What we have is a combination of all of that. >> And you talked about how everything's time series to you. Is there a need to have, sort of, an equivalent time series database up in some central location? So that when you subset, when you determine what relevant subset of data to move up to the Cloud, or, you know, an on-prem central location, does it need to be the same database? >> No, it doesn't need to be the same database. It's optional. In fact, we do ship a local time series database at the edge itself. If you have a little bit of local storage, you can down-sample, take the results, and store them locally, and many customers actually do that. Some others, because they have their existing environment, they have some Cloud storage, whether it's Microsoft, it doesn't matter what they use, we have connectors from our software to send these results into their existing environments. >> So, you had also said something interesting about your, sort of, tool set as being optimized for operations technology. So this is really important, because back when we had the Net-Heads and the Bell-Heads, you know, it was a cultural clash and they had different technologies. >> Sastry: They sure did, yeah. >> Tell us more about how selling to operations, not just selling, but supporting operations technology, is different from IT technology, and where does that boundary live? >> Right, so in a typical IT environment, right, you start with the boss who is the decision maker, you work with them, they approve the project, and you go and execute that. In an industrial, in an OT environment, it doesn't quite work like that. Even if the boss says, "Go ahead and go do this project," if the operator on the floor doesn't understand what you're talking about, because that person is in charge of operating that machine, it doesn't quite work like that. So you need to work bottom-up as well, to convince them that you are indeed actually solving their pain point. So the way we start is, rather than trying to tell them what capabilities we have as a product, or what we're trying to do, the first thing we ask is: what is their pain point? 
What is the problem "you're trying to solve?" Some customers say, "Well I've got yield, a lot of scrap. "Help me reduce my scrap. "Help me to operate my equipment better. "Help me predict these failure conditions "before it's too late." That's how the problem starts. Then we start inquiring them, "Okay, what kind of data "do you have, what kind of sensors do you have? "Typically, do you have information about under what circumstances you have seen failures "versus not seeing failures out there?" So in the process of inauguration we begin to understand how they might actually use our software and then we tell them, "Well, here, use your software, "our software, to predict that." And, sorry, I want 30 more seconds on that. The other thing is that, typically in an IT environment, because I came from that too, I've been in this position for 30 plus years, IT, UT and all of that, where we don't right away talk about CEP, or expressions, or analytics, and we don't talk about that. We talk about, look, you have these bunch of sensors, we have OT tools here, drag and drop your sensors, express the outcome that you're trying to look for, what is the outcome you're trying to look for, and then we drive behind the scenes what it means. Is it analytics, is it machine learning, is it something else, and what is it? So that's kind of how we approach the problem. Of course, if, sometimes you do surprisingly occasionally run into very technical people. From those people we can right away talk about, "Hey, you need these analytics, you need to use machinery, "you need to use expressions" and all of that. That's kind of how we operate. >> One thing, you know, that's becoming clearer is I think this widespread recognition that's data intensive and low latency work to be done near the edge. But what goes on in the Cloud is actually closer to simulation and high-performance compute, if you want to optimize a model. So not just train it, but maybe have something that's prescriptive that says, you know, here's the actionable information. As more of your data is video and audio, how do you turn that into something where you can simulate a model, that tells you the optimal answer? >> Right, so this is actually a good question. From our experience, there are models that require a lot of data, for example, video and audio. There are some other models that do not require a lot of data for training. I'll give you an example of what customer use cases that we have. There's one customer in a manufacturing domain, where they've been seeing a lot of finished goods failures, there's a lot of scrap and the problem then was, "Hey, predict the failures, "reduce my scrap, save the money", right? Because they've been seeing a lot of failures every single day, we did not need a lot of data to train and create a model to that. So, in fact, we just needed one hour's worth of data. We created a model, put the thing, we have reduced, completely eliminated their scrap. There are other kinds of models, other kinds of models of video, where we can't do that in the edge, so we're required for example, some video files or simulated audio files, take it to an offline model, create the model, and see whether it's accurately predicting based on the real-time video coming in or not. So it's a mix of what we're seeing between those two. 
>> Well Sastry, thank you so much for stopping by theCUBE and sharing what it is that you guys at FogHorn are doing, what you're hearing from customers, how you're working together with them to solve some of these pretty significant challenges. >> Absolutely, it's been a pleasure. Hopefully this was helpful, and yeah. >> Definitely, very educational. We want to thank you for watching theCUBE, I'm Lisa Martin with George Gilbert. We are live at our event, Big Data SV in downtown San Jose. Come stop by Forager Tasting Room, hang out with us, learn as much as we are about all the layers of big data digital transformation and the opportunities. Stick around, we will be back after a short break. (upbeat electronic music)

Published Date : Mar 8 2018


Scott Gnau, Hortonworks | Big Data SV 2018


 

>> Narrator: Live from San Jose, it's the Cube. Presenting Big Data Silicon Valley. Brought to you by SiliconANGLE Media and its ecosystem partners. >> Welcome back to the Cube's continuing coverage of Big Data SV. >> This is out tenth Big Data event, our fifth year in San Jose. We are down the street from the Strata Data Conference. We invite you to come down and join us, come on down! We are at Forager Tasting Room & Eatery, super cool place. We've got a cocktail event tonight, and a endless briefing tomorrow morning. We are excited to welcome back to the Cube, Scott Gnau, the CTO of Hortonworks. Hey, Scott, welcome back. >> Thanks for having me, and I really love what you've done with the place. I think there's as much energy here as I've seen in the entire show. So, thanks for having me over. >> Yeah! >> We have done a pretty good thing to this place that we're renting for the day. So, thanks for stopping by and talking with George and I. So, February, Hortonworks announced some news about Hortonworks DataFlow. What was in that announcement? What does that do to help customers simplify data in motion? What industries is it going to be most impactful for? I'm thinking, you know, GDPR is a couple months away, kind of what's new there? >> Well, yeah, and there are a couple of topics in there, right? So, obviously, we're very committed to, which I think is one of our unique value propositions, is we're committed to really creating an easy to use data management platform, as it were, for the entire lifecycle of data, from one data created at the edge and as data are streaming from one place to another place, and, at rest, analytics get run, analytics get pushed back out to the edge. So, that entire lifecycle is really the footprint that we're looking at, and when you dig a level into that, obviously, the data in motion piece is usually important, and So I think one a the things that we've looked at is we don't want to be just a streaming engine or just a tool for creating pipes and data flows and so on. We really want to create that entire experience around what needs to happen for data that's moving, whether it be acquisition at the edge in a protected way with provenance and encryption, whether it be applying streaming analytics as the data are flowing and everywhere kind of in between, and so that's what HDF represents, and what we released in our latest release, which, to your point, was just a few weeks ago, is a way for our customers to go build their data in motion applications using a very simple drag and drop GUI interface. So, they don't have to understand all of the different animals in the zoo, and the different technologies that are in play. It's like, "I want to do this." Okay, here's a GUI tool, you can have all of the different operators that are represented by the different underlying technologies that we provide as Hortonworks DataFlow, and you can stream them together, and then, you can make those applications and test those applications. One of the biggest enhancements that we did, is we made it very easy then for once those things are built in a laptop environment or in a dev environment, to be published out to production or to be published out to other developers who might want to enhance them and so on. So, the idea is to make it consumable inside of an enterprise, and when you think about data in motion and IOT and all those use cases, it's not going to be one department, one organization, or one person that's doing it. 
It's going to be a team of people that are distributed just like the data and the sensors, and, so, being able to have that sharing capability is what we've enhanced in the experience. >> So, you were just saying, before we went live, that you're here having speed dates with customers. What are some of the things... >> It's a little bit more sincere than that, but yeah. >> (laughs) Isn't speed dating sincere? It's 2018, I'm not sure. (Scott laughs) What are some of the things that you're hearing from customers, and how is that helping to drive what's coming out from Hortonworks? >> So, the two things that I'm hearing right, number one, certainly, is that they really appreciate our approach to the entire lifecycle of data, because customers are really experiencing huge data volume increases and data just from everywhere, and it's no longer just from the ERP system inside the firewall. It's from third party, it's from Sensors, it's from mobile devices, and, so, they really do appreciate kind of the territory that we cover with the tools and technologies we bring to market, and, so, that's been very rewarding. Clearly, customers who are now well into this path, they're starting to think about, in this new world, data governance, and data governance, I just took all of the energy out of the room, governance, it sounds like, you know, hard. What I mean by data governance, really, is customers need to understand, with all of this diverse, connected data everywhere, in the cloud, on PRIM, then Sensors, third party, partners, is, frankly, they need a trail of breadcrumbs that say what is it, where'd it come from, who had access to it, and then, what did they do with it? If you start to piece that together, that's what they really need to understand, the data estate that belongs to them, so they can turn that into refined product, and, so, when you then segway in one of your earlier questions, that GDPR is, certainly, a triggering point where if it's like, okay, the penalties are huge, oh my God, it's a whole new set of regulations that I have to comply with, and when you think about that trail of breadcrumbs that I just described, that actually becomes a roadmap for compliance under regulations like GDPR, where if a European customer calls up and says, "Forget my data.", the only way that you can guarantee that you forgot that person's data, is to actually understand where it all is, and that requires proper governance, tools, and techniques, and, so, when I say governance, it's, really, not like, you know, the governor and the government, and all that. That's an aspect, but the real, important part is how do I keep all of that connectivity so that I can understand the landscape of data that I've got access to, and I'm hearing a lot of energy around that, and when you think about an IOT kind of world, distributed processing, multiple hybrid cloud footprints, data is just everywhere, and, so, the perimeter is no longer fixed, it's kind of variable, and being able to keep track of that is a very important thing for our customers. >> So, continuing on that theme, Scott. Data lakes seem to be the first major new repository we added after we had data warehouses and data marts, and it looked like the governance solutions were sort of around that perimeter of the data lake. Tell us, you were alluding to, sort of, how many more repositories, whether at rest or in motion, there are for data. Do we have to solve the governance problem end-to-end before we can build meaningful applications? 
>> So, I would argue personally that governance is one of the most strategic things for us as an industry, collectively, to go solve in a universal way. And what I mean by that is, throughout my career, which is probably longer than I'd like to admit, in an EDW-centric world, where things are somewhat easier in terms of the perimeter and where the data came from, data sources were much more controlled, typically ERP systems, owned wholly by a company. Even in that era, true data governance, metadata management, and that provenance was never really solved adequately. There were 300 different solutions, none of which really won. They were all different, non-compatible, and the problem was easier. In this new world, with connected data, the problem is infinitely more difficult to go solve, and, so, that same kind of approach of 300 different proprietary solutions I don't think is going to work. >> So, tell us, how does that approach have to change, and who can make that change? >> So, one of the things, obviously, that we're driving is we're leveraging our position in the open community to try to use the community to create that common infrastructure, that common set of APIs for metadata management, and, of course, we call that Apache Atlas. And we work with a lot of partners, some of whom are customers, some of whom are other vendors, even some of whom could be considered competitors, to try to drive an Apache open source kind of project to become that standard layer that's common, into which vendors can bring their applications. So, now, if I have a common API for tracking metadata, and that trail of breadcrumbs that's commonly understood, I can bring in an application that helps customers go develop the taxonomy of the rules that they want to implement, and, then, that helps visualize all of the other functionality, which is also extremely important. And that's where I think specialization comes into play, but having that common infrastructure, I think, is a really important thing, because that's going to enable data, data lakes, and IoT to be trusted, and if it's not trusted, it's not going to be successful. >> Okay, there's a chicken and an egg there, it sounds like, potentially. >> Am I the chicken or the egg? >> Well, you're the CTO. (Lisa laughs) >> Okay. >> The thing I was thinking of was, the broader the scope of trust that you're trying to achieve at first, the more difficult the problem. Do you see customers wanting to pick off one high-value application, not necessarily one that's about managing what's in Atlas, in the metadata, so much as they want to do an IoT app and they'll implement some amount of governance to solve that app? In other words, which comes first? Do they have to do the end-to-end metadata management and governance, or do they pick a problem off first? >> In this case, I think it's chicken or egg. I mean, you could start from either point. I see customers who are implementing applications in the IoT space, and they're saying, "Hey, this requires a new way to think of governance, so I'm going to go and build that out, but I'm going to think about it being pluggable into the next app." 
I also see a lot of customers, especially in highly regulated industries and highly regulated jurisdictions, who are stepping back and saying, "Forget the applications, this is a data opportunity, and, so, I want to go solve my data fabric. I want to have some consistency across that data fabric, into which I can publish data for specific applications and guarantee that, holistically, I am compliant and that I'm sitting inside of our corporate mission and all of those things." >> George: Okay. >> So, one of the things you mentioned, and we talk about this a lot, is the proliferation of data. There are so many different sources, and companies have an opportunity, you had mentioned the phrase data opportunity, there is massive opportunity there. But you said, you know, from even a GDPR perspective alone, I can't remove the data if I don't know where it is, to the breadcrumbs. As a marketer, we use terms like "get a 360-degree view of your customer." Is that actually something that customers can achieve leveraging data? Can they actually really get, say a retailer, a 360, a complete view of their customer? >> Alright, 358. >> That's pretty good! >> And we're getting there. (Lisa laughs) Yeah, I mean, obviously, the idea is to get a much broader view, and 360 is a marketing term. I'm not a marketing person, >> Yes. >> But it, certainly, creates a much broader view of highly personalized information that helps you interact with your customer better, and, yes, we're seeing customers do that today and have great success with it, and actually change and build new business models based on that capability, for sure. The folks who've done that have realized that in this new world, the way that that works is you have to have a lot of people have access to a lot of data, and that's scary, because that's not the way it used to be, right? >> Right. >> It used to be you go to the DBA and you ask for access, and then your boss has to sign off and say it's what you asked for. In this world, you need to have access to all of it. So, when you think about this new governance capability, where, as part of the governance integrated with security, personalized information can be encrypted, it can be blurred out, but you still have access to the data to look at the relationships to be found in the data to build out those sophisticated models. So, that's where not only is it a new opportunity for governance, just because of the sources, the variety, the different landscape, but it's, ultimately, very much required. Because if you're the CSO, you're not going to give the marketing team access to all of its customer data unless you understand that, right? But it has to be, "I'm just giving it to you, and I know that it's automatically protected," versus, "I'm going to let you ask for it," to be successful. >> Right. >> I guess, following up on that, it sounds like what we were talking about, chicken or egg. Are you seeing an accelerating shift from where data is sort of collected, centrally, from applications, or, as we hear from Amazon, is the amount coming off the edge accelerating? 
>> It is, and I think that that is a big driver to, frankly, faster cloud adoption. You know, the analytic space, particularly, has been a laggard in cloud adoption for many reasons, and we've talked about it previously, but one of the biggest reasons, obviously, is that data has gravity, data movement is expensive. And, so, now, when you think about where data is being created, where it lives, being further out on the edge, and it may live its entire lifecycle in the cloud, you're seeing a reversal of gravity more towards the cloud, and that, again, creates more opportunities in terms of driving a more varied perimeter and just keeping track of where all the assets are. Finally, I think it also leads to this notion of managing the entire lifecycle of data. One of the implications of that is, if data is not going to be centralized, it's going to live in different places, and applications have to be portable to move to where the data exists. So, when I think about that landscape of creating ubiquitous data management within Hortonworks' portfolio, that's one of the big values that we can create for our customers. Not only can we be an on-ramp to their hybrid architecture, but as we become that on-ramp, we can also guarantee the portability of the applications that they've built, out to those cloud footprints and, ultimately, even out to the edge. >> So, a quick question, then, to clarify on that, or drill down: would that mean you could see scenarios where Hortonworks is managing the distribution of models that do the inferencing on the edge, and you're collecting, bringing back the relevant data, however that's defined, to do the retraining of any models or the creation of new models? >> Absolutely, absolutely. That's one of the key things about the NiFi project in general, and Hortonworks DataFlow specifically: the ability to selectively move data, and the selectivity can be based on analytic models as well. So, the easiest case to think about is self-driving cars. We all understand how that works, right? A self-driving car has cameras, and it's looking at things going on. It's making decisions, locally, based on models that have been delivered, and they have to be made locally because of latency, right? But, selectively, hey, here's something that I saw as an image I didn't recognize. I need to send that up, so that it can be added to my lexicon of what images are and what action should be taken. So, of course, that's all very futuristic, but we understand how that works, and that has application in things that are very relevant today. Think about jet engines that have diagnostics running. Do I need to send that terabyte of data an hour over an expensive link? No, but I have a model that runs locally that says, "Wow, this thing looks interesting. Let me send a gigabyte now for immediate action." So, that decision-making capability is extremely important. >> Well, Scott, thanks so much for taking some time to come chat with us once again on the Cube. We appreciate your insights. >> Appreciate it, time flies. This is great. >> Doesn't it? When you're having fun! >> Yeah. >> Alright, we want to thank you for watching the Cube. I'm Lisa Martin with George Gilbert. We are live at Forager Tasting Room in downtown San Jose at our own event, Big Data SV. We'd love for you to come on down and join us today, tonight, and tomorrow. Stick around, we'll be right back with our next guest after a short break. (techno music)
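(The selective-movement idea Scott describes with the jet engine is easy to sketch. The following toy Python is not NiFi or HDF code; the window size, scoring function, and uplink stub are hypothetical stand-ins for a locally delivered model and transport.)

    from collections import deque

    WINDOW = 512              # readings of context kept around an event
    SCORE_THRESHOLD = 0.9     # hypothetical local-model confidence cutoff
    recent = deque(maxlen=WINDOW)

    def local_model_score(reading):
        # Stand-in for the locally delivered model; returns 0..1 "interestingness".
        return 1.0 if reading["vibration"] > 9.0 else 0.0

    def send_to_cloud(payload):
        # Stand-in for the uplink used when connectivity exists.
        print(f"uplinking {len(payload)} readings for immediate action")

    def on_reading(reading):
        # Score everything locally; discard the routine terabytes and forward
        # only the windows the local model flags, the gigabyte that matters.
        recent.append(reading)
        if local_model_score(reading) >= SCORE_THRESHOLD:
            send_to_cloud(list(recent))
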

Published Date : Mar 7 2018


Arun Murthy, Hortonworks | BigData NYC 2017


 

>> Host: Live from midtown Manhattan, it's theCUBE, covering BigData New York City 2017. Brought to you by SiliconANGLE Media and its ecosystem sponsors. (upbeat electronic music) >> Welcome back, everyone. We're here, live, on day two of our three days of coverage of BigData NYC. This is our event that we put on every year. It's our fifth year doing BigData NYC, in conjunction with Hadoop World, which evolved into Strata Conference, which evolved into Strata Hadoop, now called Strata Data. Probably next year it will be called Strata AI, but we're still theCUBE, we'll always be theCUBE, and this is our BigData NYC, our eighth year covering the BigData world since Hadoop World. And then as Hortonworks came on, we started covering Hortonworks' data summit. >> Arun: DataWorks Summit. >> DataWorks Summit. Arun Murthy, my next guest, Co-Founder and Chief Product Officer of Hortonworks. Great to see you, looking good. >> Likewise, thank you. Thanks for having me. >> Boy, what a journey. Hadoop, years ago, >> 12 years now. >> I still remember, you guys came out of Yahoo, you guys put Hortonworks together and then, since, gone public, first to go public, then Cloudera just went public. So, the Hadoop World is pretty much out there, everyone knows where it's at, it's got a nice use case, but the whole world's moved around it. You guys have been really the first of the Hadoop players, before even Cloudera, on this notion of data in flight, or, I call it, real-time data, but I think you guys call it data-in-motion. Batch, we all know what Batch does, a lot of things to do with Batch, you can optimize it, it's not going anywhere, it's going to grow. Real-time data-in-motion's a huge deal. Give us the update. >> Absolutely, you know, we've obviously been in this space, personally, I've been in this for about 12 years now. So, we've had a lot of time to think about it. >> Host: Since you were 12? >> Yeah. (laughs) Almost. Probably look like it. So, back in 2014 and '15, when we, sort of, went public and we started looking around, the thesis always was, yes, Hadoop is important, we're going to help you manage lots and lots of data, but a lot of the stuff we've done since the beginning, starting with YARN and so on, was really to enable the use cases beyond the whole traditional transactions and analytics. And Rob, our CEO, his vision's always been that we've got to get into a pre-transactional world, if you will, rather than the post-transactional analytics and BI and so on. So that's where it started. And increasingly, the obvious next step was to say, look, enterprises want to be able to get insights from data, but they also want, increasingly, to get those insights and deal with them in real-time. You know, while you're in your shopping cart. They want to make sure you don't abandon your shopping cart. If you were sitting at a retailer and you're in an aisle and you're about to walk away from a dress, you want to be able to do something about it. So, this notion of real-time is really important, because it helps the enterprise connect with the customer at the point of action, if you will, and provide value right away rather than having to try to do this post-transaction. So, it's been a really important journey. 
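(As a toy illustration of that pre-transactional, point-of-action idea, not Hortonworks code: a few lines of Python that watch a click-stream for carts that go quiet and act before the shopper is gone. The timeout and event names are made up.)

    import time

    ABANDON_AFTER = 15 * 60   # seconds of inactivity before a cart counts as abandoned
    last_cart_event = {}      # session id -> timestamp of last cart activity

    def on_event(session_id, event_type, now=None):
        # Feed this from the click-stream: 'cart_add' keeps a session live,
        # 'checkout' retires it.
        now = time.time() if now is None else now
        if event_type == "cart_add":
            last_cart_event[session_id] = now
        elif event_type == "checkout":
            last_cart_event.pop(session_id, None)

    def sweep(now=None):
        # Run periodically; act while the shopper is still reachable.
        now = time.time() if now is None else now
        for sid, ts in list(last_cart_event.items()):
            if now - ts > ABANDON_AFTER:
                del last_cart_event[sid]
                offer_incentive(sid)

    def offer_incentive(session_id):
        print(f"session {session_id}: push an offer before the cart is lost")
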
We went and bought this company called Onyara, which is a bunch of geeks like us who started off with the government and built this Apache NiFi thing, huge community. It's just, like, taking off at this point. It's been a fantastic thing to join hands and join the team and keep pushing in the whole streaming data space. >> There's a real, I don't mean to tangent, but I do since you brought up community, I wanted to bring this up. It's been the theme here this week. It's more and more obvious that the community role is becoming central, beyond open-source. We all know open-source, standing on the shoulders of those before us, you know. And the Linux Foundation showing code numbers hitting up from $64 million to billions in the next five, ten years, exponential growth of new code coming in. So open-source certainly blew me away. But now community is translating to things, you start to see blockchain, very community-based. That's a whole new currency market that's changing the financial landscape, ICOs and what-not, that's just one data point. Businesses, marketing communities, you're starting to see data as a fundamental thing around communities. And certainly it's going to change the vendor landscape. So you guys, compared to Cloudera and others, have always been community driven. >> Yeah, our philosophy has been simple. You know, more eyes and more hands are better than fewer. And it's been one of the cornerstones of our founding thesis, if you will. And you saw how that's gone on over the course of the six years we've been around. Super-excited to have someone like IBM join hands; it happened at DataWorks Summit in San Jose. That announcement, again, is a reflection of the fact that we've been very, very community driven and very, very ecosystem driven. >> Communities are fundamentally built on trust and partnering. >> Arun: Exactly. >> Coding is pretty obvious, you code with your friends. You code with people who are good, they become your friends. There's an honor system among you. You're starting to see that in the corporate deals. So explain the dynamic there and some of the successes that you guys have had on the product side where one plus one equals more than two. One plus one equals five or three. >> You know, IBM has been a great example. They've decided to focus on their strengths, which is around Watson and machine learning, and for us to focus on our strengths around data management, infrastructure, cloud, and so on. So this combination of DSX, which is their Data Science Experience, along with Hortonworks, is really powerful. We are seeing that over and over again. Just yesterday we announced the whole Dataplane thing; we were super excited about it. And now to get IBM to say, we'll bring in our technologies and our IP, big data, whether it's BigQuality or BigInsights or Big SQL, the work has been phenomenal. >> Well, the Dataplane announcement, finally, people who know me know that I hate the term data lake. I always said it's always been a data ocean. So I get redemption, because now with the data lakes, now it's admitting it's a horrible name, but just saying stitching together the data lakes, which is essentially a data ocean. Data lakes are out there and you can form these data lakes, or data sets, batch, whatever, but connecting them and integrating them is a huge issue, especially with security. >> And a lot of it is, it's also just pragmatism. We started off with this notion of data lake and said, hey, you got too many silos inside the enterprise in one data center, you want to put them together. 
But then increasingly, as Hadoop has become more and more mainstream, I can't remember the last time I had to explain what Hadoop is to somebody, a couple things have happened. One is, we talked about streaming data. We see it all the time, especially with HDF. We have customers streaming data from autonomous cars. You have customers streaming from security cameras. You can put a small MiNiFi agent in a security camera or smart phone, and it can stream it all the way back. Then you get into physics. You're up against the laws of physics. If you have a security camera in Japan, why would you want to move it all the way to California and process it? You'd rather do it right there, right? So this notion of a regional data center becomes really important. >> And that talks to the Edge as well. >> Exactly, right. So you want to have something in Japan that collects all of the security cameras in Tokyo, and you do analysis and push what you want back here, right. So that's physics. The other thing we are increasingly seeing is, with data sovereignty rules, especially things like GDPR, there are now regulatory reasons where data has to naturally stay in different regions. Customer data from Germany cannot move to France, or vice versa, right. >> Data governance is a huge issue, and this is the problem I have with data governance. I am really looking for a solution, so if you can illuminate this, it would be great. So there is going to be an Equifax out there again. >> Arun: Oh, for sure. >> And the problem is, is that going to force some regulation change? So what we see, certainly on the muni bond side, what I see personally is that you can almost see that something else will happen that'll force some policy regulation or governance. You don't want to screw up your data. You also don't want to rewrite your applications or rewrite your machine learning algorithms. So there's a lot of wasted potential by not structuring the data properly. Can you comment on what's the preferred path? >> Absolutely, and that's why we've been working on things like Dataplane for almost a couple of years now. Which is to say, you have to have data and policies which make sense given a context. And the context is going to change by application, by usage, by compliance, by law. So, now, to manage 20, 30, 50, a 100 data lakes, would it be better, not saying lakes, data ponds, >> Host: Any data. >> Any data. >> Any data pool, stream, river, ocean, whatever. (laughs) >> Jacuzzis. Data jacuzzis, right. So what you want is a holistic fabric; I like the term, you know, Forrester uses, they call it the fabric. >> Host: Data fabric. >> Data fabric, right? You want a fabric over these so you can actually control and maintain governance and security centrally, but apply it with context. Last but not least, you want to do this whether it's on-prem or on the cloud, or multi-cloud. So we've been working with a bank. They were primarily based in Germany, but for GDPR they had to stand up something in France now. They had French customers, but for a bunch of new reasons, regulation reasons, they had to stand up something in France. So they had their own data center, and then they had the cloud provider, right, who I won't name. And they were great, things were working well. Now they want to expand a similar offering to customers in Asia. It turns out their favorite cloud vendor was not available in Asia, or they were not available in a time frame which made sense for the offering. 
So they had to go with cloud vendor two. So now, although each of the vendors will do their job in terms of giving you all the security and governance and so on, the fact that you have to manage it three ways, one for on-prem, one each for cloud vendor A and B, was really hard, too hard for them. So this notion of a fabric across these things, which is Dataplane. And that, by the way, is based on all the open source technologies we love, like Atlas and Ranger. By the way, that is also what IBM is betting on and what the entire ecosystem is betting on, so it seems like a no-brainer at this point. That was the kind of reason why we foresaw the need for something like a Dataplane, and obviously we couldn't be more excited to have something like that in the market today as a net new service that people can use. >> You get the catalogs, security controls, data integration. >> Arun: Exactly. >> Then you get the cloud, whatever, pick your cloud scenario, you can do that. Killer architecture, I liked it a lot. I guess the question I have for you personally is what's driving the product decisions at Hortonworks? And the second part of that question is, how does that change your ecosystem engagement? Because you guys have been very friendly in a partnering sense and also very good with the ecosystem. How are you guys deciding the product strategies? Does it bubble up from the community? Is there an ivory tower, let's go take that hill? >> It's both, because what typically happens is, obviously, we've been in the community now for a long time. Working publicly now with well over 1,000 customers not only puts a lot of responsibility on our shoulders but is also very nice because it gives us a vantage point which is unique. That's number one. The second one is that being in the community, we also see the fact that people are starting to solve the problems. So that's another input for us. So you have one, the enterprise side, where we see what the enterprises are facing, which is kind of where Dataplane came in, but we also saw in the community where people are starting to ask us about, hey, can you do multi-cluster Atlas? Or multi-cluster Ranger? Put two and two together and say there is a real need. >> So you get some consensus. >> You get some consensus, and you also see that on the enterprise side. Last but not least is when we went to friends like IBM and said, hey, we're doing this. This is where we can position this, right. So we can actually bring in IGC, you can bring BigQuality and all these types, >> [Host] So things had clicked with IBM? >> Exactly. >> Rob Thomas was thinking the same thing. Bring in the Power systems and the horsepower. >> Exactly, yep. We announced something, for example; we have been working with the Power guys and NVIDIA, for deep learning, right. That sort of stuff is what clicks if you're in the community long enough, and if you have the vantage point of the enterprise long enough; it feels like the two of them click. And that's, frankly, my job. >> Great, and you've got obviously the landscape. The waves are coming in. So I've got to ask you, the big waves are coming in and you're seeing people starting to get hip with a couple of key things that they've got to get their hands on. They need to have the big surfboards, metaphorically speaking. They've got to have some good products, big emphasis on real value. Don't give me any hype, don't give me a head fake. You know, AI-washing, people can see right through that. Alright, that's clear. But AI's great.
We all cheer for AI, but the reality is, everyone knows that's pretty much b.s., except core machine learning, which is on the front edge of innovation. So that's cool, but value. [Laughs] Hey, I've got to integrate and operationalize my data, so that's the big wave that's coming. Comment on the community piece, because enterprises now are realizing, as open source becomes the dominant source of value for them, they are now really going to the next level. It used to be just the emerging enterprises that knew open source; their guys would volunteer and might not go deeper into the community. But now more people in the enterprises are in open source communities, they are recruiting from open source communities, and that's impacting their business. What's your advice for someone who's been in the community of open source? Lessons you've learned, what is the best practice, from your standpoint on philosophy: how to build into the community, how to build a community model. >> Yeah, I mean, at the end of the day, my best advice is to say, look, the community is defined by the people who contribute. So, you get a voice if you contribute; that's the fundamental truth. Which means you have to get your legal policies and so on to a point that you can actually start to let your employees contribute. That kicks off a flywheel, where you can actually then go recruit the best talent, because the best talent wants to stand out. GitHub is a resume now. It is not a Word doc. If you don't allow them to build that resume, they're not going to come by, and it's just a fundamental truth. >> It's self-governing, it's reality. >> It's reality, exactly. Right, and we see that over and over again. It's taken time, but as with these things, the flywheel has turned enough. >> A whole new generation's coming online. If you look at the young kids coming in now, it is an amazing environment. You've got TensorFlow, all this cool stuff happening. It's just amazing. >> You know, 20 years ago that wouldn't happen, because the Googles of the world wouldn't open source it. Now increasingly, >> The secret's out, open source works. >> Yeah, (laughs) shh. >> Tell everybody. You know they know already, but this is changing some of how HR works and how people collaborate, >> And the policies around it. The legal policies around contribution, so, >> Arun, great to see you. Congratulations. It's been fun to watch the Hortonworks journey. I want to appreciate you and Rob Bearden for supporting theCUBE here in BigData NYC. If it wasn't for Hortonworks and Rob Bearden and your support, theCUBE would not be part of the Strata Data event, which we are not allowed to broadcast into, for the record. O'Reilly Media does not allow theCUBE or our analysts inside their venue. They've excluded us, and that's a bummer for them. They're a closed organization. But I want to thank Hortonworks and you guys for supporting us. >> Arun: Likewise. >> We really appreciate it. >> Arun: Thanks for having me back. >> Thanks, and a shout out to Rob Bearden. Good luck as CPO, it's a fun job, you know, no pressure. I've got a lot of pressure. A whole lot. >> Arun: Alright, thanks. >> More Cube coverage after this short break. (upbeat electronic music)
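Arun's MiNiFi-at-the-edge pattern, filter locally and ship only what matters to a regional collector, is easy to make concrete. The sketch below is an illustration of that shape rather than Hortonworks' actual code: it assumes the kafka-python client, and the broker address, topic name, and motion-score threshold are all invented for the example.

import json
import random
import time

from kafka import KafkaProducer  # assumes the kafka-python package

REGIONAL_BROKER = "tokyo-collector:9092"  # hypothetical regional endpoint
TOPIC = "camera-events"                   # hypothetical topic name

producer = KafkaProducer(
    bootstrap_servers=REGIONAL_BROKER,
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def read_motion_score():
    # Stand-in for a real camera's motion detector.
    return random.random()

while True:
    score = read_motion_score()
    # Edge filtering: physics and bandwidth argue against shipping raw video
    # across the Pacific, so only compact events cross the wire.
    if score > 0.8:
        producer.send(TOPIC, {
            "camera_id": "cam-1138",
            "motion_score": score,
            "ts": time.time(),
        })
    time.sleep(1)

The regional site in Tokyo would then aggregate these events and push only summaries onward, which is the physics argument Arun makes above.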
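The fabric idea, define a policy once and apply it per region with context, can also be sketched. Dataplane itself is a product, so the snippet below only illustrates the underlying Ranger mechanics it is described as building on: pushing one access-policy definition to Ranger instances in two regions. The endpoint path follows Ranger's public v2 REST API as best I know it, and the hosts, service name, and credentials are placeholders.

import requests

RANGER_HOSTS = {
    "germany": "https://ranger.de.example.com:6182",  # hypothetical hosts
    "france": "https://ranger.fr.example.com:6182",
}

# One centrally authored policy: analysts may SELECT from the customers DB.
policy = {
    "service": "hadoop-sql",  # hypothetical Ranger service name
    "name": "analysts-read-customers",
    "resources": {
        "database": {"values": ["customers"]},
        "table": {"values": ["*"]},
        "column": {"values": ["*"]},
    },
    "policyItems": [{
        "groups": ["analysts"],
        "accesses": [{"type": "select", "isAllowed": True}],
    }],
}

for region, host in RANGER_HOSTS.items():
    # Same definition everywhere; the data itself never leaves its region.
    resp = requests.post(
        f"{host}/service/public/v2/api/policy",
        json=policy,
        auth=("admin", "changeme"),  # placeholder credentials
    )
    print(region, resp.status_code)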

Published Date : Sep 28 2017

George Chow, Simba Technologies - DataWorks Summit 2017


 

>> (Announcer) Live from San Jose, in the heart of Silicon Valley, it's theCUBE covering DataWorks Summit 2017, brought to you by Hortonworks. >> Hi everybody, this is George Gilbert, Big Data and Analytics Analyst with Wikibon. We are wrapping up our show on theCUBE today at DataWorks 2017 in San Jose. It has been a very interesting day, and we have a special guest to help us do a survey of the wrap-up, George Chow from Simba. We used to call him Chief Technology Officer, now he's Technology Fellow, but when he was explaining the difference in titles to me, I thought he said Technology Felon. (George Chow laughs) But he's since corrected me. >> Yes, very much so. >> So George and I have been, we've been looking at both Spark Summit last week and DataWorks this week. What are some of the big advances that really caught your attention? >> What's caught my attention actually is how much manufacturing has really, I think, caught on to streaming data. I think last week was very notable in that both Volkswagen and Audi actually had case studies for how they're using streaming data. And I think just before the break now, there was also a similar session from Ford, showcasing what they are doing around streaming data. >> And are they using the streaming analytics capabilities for autonomous driving, or is it other telemetry that they're analyzing? >> The, what is it, I think the Volkswagen study was in production, because I still have to review the notes, but the one for Audi was actually quite interesting because it was for managing paint defects. >> (George Gilbert) For paint-- >> Paint defects. >> (George Gilbert) Oh. >> So what they were doing, they were essentially recording the environmental conditions that they were painting the cars in, basically the entire pipeline-- >> To predict when there would be imperfections. >> (George Chow) Yes. >> Because paint is an extremely high-value sort of step in the assembly process. >> Yes, what they are trying to do is to essentially make a connection between downstream defects, like future defects, and somewhat trying to pinpoint the causes upstream. So the idea is that if they record all the environmental conditions early on, they could turn around and hopefully figure it out later on. >> Okay, this sounds really, really concrete. So what are some of the surprising environmental variables that they're tracking, and then what's the technology that they're using to build the model and then anticipate if there's a problem? >> I think the surprising findings they mentioned were actually, I think it was humidity or fan speed, if I recall, at the time when the paint was being applied, because essentially, paint has to be... Paint is very sensitive to the conditions under which it is applied to the body. So my recollection is that one of the findings was that there was a narrow window during which the conditions were, like, ideal, in terms of having the least amount of defects. >> So, had they built a digital twin style model, where it's like a digital replica of some aspects of the car, or was it more of a predictive model that had telemetry coming at it, and when it's outside certain bounds they know they're going to have defects downstream? >> I think they're still working on the predictive model, or actually the model is still being built, because they are essentially trying to build that model to figure out how they should be tuning the production pipeline. >> Got it, so this is sort of still in the development phase?
>> (George Chow) Yeah, yeah >> And can you tell us, did they talk about the technologies that they're using? >> I remember the... It's a little hazy now because after a couple weeks of conference, so I don't remember the specifics because I was counting on the recordings to come out in a couples weeks' time. So I'll definitely share that. It's a case study to keep an eye on. >> So tell us, were there other ones where this use of real-time or near real-time data had some applications that we couldn't do before because we now can do things with very low latency? >> I think that's the one that I was looking forward to with Ford. That was the session just earlier, I think about an hour ago. The session actually consisted of a demo that was being done live, you know. It was being streamed to us where they were showcasing the data that was coming off a car that's been rigged up. >> So what data were they tracking and what were they trying to anticipate here? >> They didn't give enough detail, but it was basically data coming off of the CAN bus of the car, so if anybody is familiar with the-- >> Oh that's right, you're a car guru, and you and I compare, well our latest favorite is the Porche Macan >> Yes, yes. >> SUV, okay. >> But yeah, they were looking at streaming the performance data of the car as well as the location data. >> Okay, and... Oh, this sounds more like a test case, like can we get telemetry data that might be good for insurance or for... >> Well they've built out the system enough using the Lambda Architecture with Kafka, so they were actually consuming the data in real-time, and the demo was actually exactly seeing the data being ingested and being acted on. So in the case they were doing a simplistic visualization of just placing the car on the Google Map so you can basically follow the car around. >> Okay so, what was the technical components in the car, and then, how much data were they sending to some, or where was the data being sent to, or how much of the data? >> The data was actually sent, streamed, all the way into Ford's own data centers. So they were using NiFi with all the right proxy-- >> (George Gilbert) NiFi being from Hortonworks there. >> Yeah, yeah >> The Hortonworks data flow, okay >> Yeah, with all the appropriate proxys and firewall to bring it all the way into a secure environment. >> Wow >> So it was quite impressive from the point of view of, it was life data coming off of the 4G modem, well actually being uploaded through the 4G modem in the car. >> Wow, okay, did they say how much compute and storage they needed in the device, in this case the car? >> I think they were using a very lightweight platform. They were streaming apparently from the Raspberry Pi. >> (George Gilbert) Oh, interesting. >> But they were very guarded about what was inside the data center because, you know, for competitive reasons, they couldn't share much about how big or how large a scale they could operate at. >> Okay, so Simba has been doing ODBC and JDBC drivers to standard APIs, to databases for a long time. That was all about, that was an era where either it was interactive or batch. So, how is streaming, sort of big picture, going to change the way applications are built? 
>> Well, one way to think about streaming is that if you look at many of these APIs into these systems, Spark is a good example, they're trying to harmonize streaming and batch, or rather, to take away the need to deal with it as a streaming system as opposed to a batch system, because it's obviously much easier to think about and reason about your system in the traditional batch model. So, the way that I see it happening is that streaming systems will, you could say, adapt, will actually become easier to build, and everyone is trying to make them easier to build, so that you don't have to think about and reason about them as streaming systems. >> Okay, so this is really important. But they have to make a trade-off if they do it that way. So there's the desire for leveraging skill sets, which were all batch-oriented, and then, presumably, SQL, which is a data manipulation language everyone's comfortable with, but then, if you're doing it batch-oriented, you have a portion of time where you're not sure you have the final answer. And I assume if you were in a streaming-first solution, you would explicitly know whether you have all the data or not, as opposed to late-arriving stuff that might come later. >> Yes, but what I'm referring to is actually the programming model. All I'm saying is that more and more people will want streaming applications, but more and more people need to develop them quickly, without having to build them in a very specialized fashion. So when you look at, let's say, the example of Spark, when they focus on structured streaming, the whole idea is to make it possible for you to develop the app without having to write it from scratch. And the comment about SQL is actually exactly on point, because the idea is that you want to work with the data without a lot of work to account for the fact that it is actually streaming data that could even arrive out of order, so the whole idea is that if you can build applications in a more consistent way, irrespective of whether it's batch or streaming, you're better off. >> So, last week, even though we didn't have a major release of Spark, we had like a point release, or a discussion about the 2.2 release, and that's of course very relevant for our big data ecosystem, since Spark has become the compute engine for it. Explain the significance of the reaction time, the latency for Spark, going down from several hundred milliseconds to one millisecond or below. What are the implications for the programming model and for the applications you can build with it? >> Actually, hitting that new threshold, the millisecond, is a very important milestone, because when you look at a typical scenario, let's say AdTech, where you're serving ads, you really only have maybe on the order of 100 or maybe 200 milliseconds max to actually turn around. >> And that max includes a bunch of things, not just the calculation. >> Yeah, and that, let's say, 100 milliseconds includes transfer time, which means that in your real budget, you only have allowances for maybe under 10 to 20 milliseconds to compute and do any work. So being able to actually have a system that delivers millisecond-level performance actually gives you the ability to use Spark right now in that scenario. >> Okay, so in other words, now they can claim, even if it's not per-event processing, they can claim that they can react so fast that it's as good as per-event processing, is that fair to say?
>> Yes, yes, that's very fair. >> Okay, that's significant. So, what type... How would you see applications changing? We've only got another minute or two, but how do you see applications changing now that Spark has been designed for people that have traditional, batch-oriented skills, but who can now learn how to do streaming, real-time applications without learning anything really new? How will that change what we see next year? >> Well, I think we should be careful to not pigeonhole Spark as something built for batch, because I think the idea is that, you could say, the originators of Spark know that it's all about the ease of development, and it's the ease of reasoning about your system. It's not that the technology is built for batch; the fact that you can use your knowledge and experience and an API that is familiar, and leverage it for something you build for streaming, that's the power, you could say. That's the strength of what the Spark project has taken on. >> Okay, we're going to have to end it on that note. There's so much more to go through. George, you will be back as a favorite guest on the show. There will be many more interviews to come. >> Thank you. >> With that, this is George Gilbert. We are at DataWorks 2017 in San Jose. We had a great day today. We learned a lot from Rob Bearden and Rob Thomas up front about the IBM deal. We had Scott Gnau, CTO of Hortonworks, on several times, and we've come away with an appreciation for a partnership now between IBM and Hortonworks that can take the two of them into a set of use cases that neither one on its own could really handle before. So today was a significant day. Tune in tomorrow, we have another great set of guests. Keynotes start at nine, and our guests will be on starting at 11. So with that, this is George Gilbert, signing out. Have a good night. (energetic, echoing chord and drum beat)
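The Ford demo George describes, CAN-bus readings and GPS streamed off a Raspberry Pi through NiFi into Ford's data centers, maps onto very little code on the ingest side. This is a sketch of the shape of such a producer, not Ford's implementation: it assumes the kafka-python client, and the broker address, topic, and field names are invented.

import json
import random
import time

from kafka import KafkaProducer  # assumes the kafka-python package

producer = KafkaProducer(
    bootstrap_servers="ingest.example.com:9092",  # hypothetical endpoint
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def sample_telemetry():
    # Stand-in for reading the vehicle's CAN bus and GPS module.
    return {
        "vin": "TEST-VIN-0001",
        "speed_kph": random.uniform(0, 120),
        "rpm": random.uniform(600, 4000),
        "lat": 37.33 + random.uniform(-0.01, 0.01),
        "lon": -121.89 + random.uniform(-0.01, 0.01),
        "ts": time.time(),
    }

for _ in range(100):  # stream 100 samples, one per second
    producer.send("car-telemetry", sample_telemetry())
    time.sleep(1)
producer.flush()

A real deployment would sit behind the proxies and firewalls mentioned in the interview; the point is only how small the edge-side producer can be.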
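George's point about Spark harmonizing batch and streaming shows up directly in the API: the same DataFrame transformation can run over files already in the lake and over the live Kafka topic. A minimal PySpark sketch follows, with made-up paths and topic names; the sub-second latency he discusses arrived later as Spark's experimental continuous trigger (around Spark 2.3), so the commented trigger line is an assumption about versions, and continuous mode supports only map-like queries, not this aggregation.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("telemetry").getOrCreate()

def avg_speed(df):
    # The same logic serves batch and streaming DataFrames.
    return df.groupBy("vin").agg(F.avg("speed_kph").alias("avg_speed"))

# Batch: historical telemetry already landed in the lake (path invented).
batch = spark.read.json("/data/telemetry/2017/")
avg_speed(batch).show()

# Streaming: the identical query over the live Kafka topic.
schema = StructType([
    StructField("vin", StringType()),
    StructField("speed_kph", DoubleType()),
])
stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "ingest.example.com:9092")
    .option("subscribe", "car-telemetry")
    .load()
    .select(F.from_json(F.col("value").cast("string"), schema).alias("t"))
    .select("t.*")
)
query = (
    avg_speed(stream).writeStream
    .outputMode("update")
    .format("console")
    # Later Spark versions add an experimental millisecond-scale trigger for
    # map-like (non-aggregating) queries: .trigger(continuous="1 second")
    .start()
)
query.awaitTermination()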

Published Date : Jun 13 2017

Scott Gnau, Hortonworks - DataWorks Summit 2017


 

>> Announcer: Live, from San Jose, in the heart of Silicon Valley, it's The Cube, covering DataWorks Summit 2017. Brought to you by Hortonworks. >> Welcome back to The Cube. We are live at DataWorks Summit 2017. I'm Lisa Martin with my cohost, George Gilbert. We've just come from this energetic, laser light show infused keynote, and we're very excited to be joined by one of the keynotes today, the CTO of Hortonworks, Scott Gnau. Scott, welcome back to The Cube. >> Great to be here, thanks for having me. >> Great to have you back here. One of the things that you talked about in your keynote today was collaboration. You talked about the modern data architecture, and one of the things that I thought was really interesting is that now, where Hortonworks is, you are empowering cross-functional teams, operations managers, business analysts, data scientists, really helping enterprises drive the next generation of value creation. Tell us a little bit about that. >> Right, great. Thanks for noticing, by the way. I think the next important thing, kind of as a natural evolution for us as a company and as a community, and I've seen this time and again in the tech industry, is that we've moved from really cool breakthrough tech more into a solutions phase. So I think this whole notion is really about how we're making that natural transition. And when you think about all the cool technology and all the breakthrough algorithms and all that, that's really great, but how do we then take that and turn it into value really quickly and in a repeatable fashion? So, the notion that I launched today is really about making these three personas really successful. You focus on combining all of the technology, usability, and even some services around it, to make each of those folks more successful in their jobs. So I've broken it down really into three categories. We know the traditional business analyst, right? They've got SQL and they've been doing predictive modeling of structured data for a very long time, and there's a lot of value generated from that. Making the business analyst successful in a Hadoop-inspired world is extremely valuable. And why is that? Well, it's because Hadoop actually now brings a lot more breadth of data and frankly a lot more depth of data than they've ever had access to before. But being able to communicate with that business analyst in a language they understand, SQL, being able to make all those tools work seamlessly, is the next extension of success for the business analyst. We spent a lot of time this morning talking about data scientists, the next great frontier, where you bring together lots and lots of data with intensive math and heavy compute, with the data scientists, and really enable them to go build out that next generation of high-definition kind of analytics, all right. And we're all, certainly I am, captured by the notion of self-driving cars, and you think about a self-driving car, and the success of that is purely based on successful data science: those cameras and those machines being able to infer images more accurately than a human being, and then make decisions about what those images mean. That's all data science, and it's all about raw processing power and lots and lots of data to train those models to be more accurate than what would otherwise happen. So enabling the data scientist to be successful, obviously, that's a use case.
You know, certainly voice-activated, voice-response kinds of systems, for better customer service; better fraud detection, you know, the cost of a false positive is a hundred times the cost of missing a fraudulent behavior, right? That's because you've irritated a really good customer. So being able to really train those models in high definition is extremely valuable. So it's bringing together the data, but also the tool set, so that data scientists can actually act as a team and collaborate and spend less of their time finding the data and more of their time providing the models. And I said this morning, last but not least, the operations manager. This is really, really, really important. And a lot of times, geeks like myself just go, ah, operations guys are a pain in the neck. But they're really, really important. We've got data that we've never thought of. Making sure that it's secured properly, making sure that we're managing within the regulations of privacy requirements, making sure that we're governing it and making sure how that data is used, alongside our corporate mission, is really important. So it's creating that tool set so that the operations manager can be confident in turning these massive files of data over to the business analyst and to the data scientist, and be confident that the company's mission and the regulations they're working within in those jurisdictions are all in compliance. And so that's what we're building on, and that stack, of course, is built on open source Apache Atlas and open source Apache Ranger, and it really makes for an enterprise-grade experience. >> And a couple things to follow on to that: we've heard of this notion for years, that there is a shortage of data scientists, and now it's such a core strategic enabler of business transformation. Is this collaboration, this team support that was talked about earlier, helping to spread data science across these personas, to enable more of them to be data scientists? >> Yeah, I think there are two aspects to it, right? One is certainly that really great data scientists are hard to find; they're scarce. They're unique creatures. And so, to the extent that we're able to combine the tool set to make the data scientists that we have more productive, I think the numbers are astronomical, right? You could argue that, with the wrong tool set, a data scientist might spend 80% or 90% of his or her time just finding the data and only 10% working on the problem. If we can flip that around and make it 10% finding the data and 90% working on the problem, that's, like, an order of magnitude more breadth of data science coverage that we get from the same pool of data scientists, so I think that from an efficiency perspective, that's really huge. The second thing, though, is that by looking at these personas and the tools that we're rolling out, can we start to package up things that the data scientists are learning and move those models onto the business analyst's desktop? So, now, not only is there more breadth and depth of data, but frankly, there's more depth and breadth of models that can be run and inferred alongside traditional business processes, which means turning that into better decision making, turning that into better value for the business, just kind of happens automatically. So, you're leveraging the value of data scientists. >> Let me follow that up, Scott. So, right now the biggest time sink for the data scientist or the data engineer is data cleansing and transformation.
Where do the cloud vendors fit in, in terms of having trained some very broad horizontal models for vision, natural language understanding, text to speech, where they have accumulated a lot of data assets and then created models that were trained and could be customized? Do you see a role not just for those models coming from the cloud vendors, but for other vendors who have data assets to provide more fully baked models, so that you don't have to start from scratch? >> Absolutely. So, one of the things that I talked about also this morning is this notion of open, where open community, open source, and open ecosystem, I think it's now open to the third power, right, and it's talking about open models and algorithms. And I think all of those same things are really creating a tremendous opportunity, the likes of which we've not seen before, and I think it's really driving the velocity in the market, right? Because we're collaborating in the open, things just get done faster and more efficiently, whether it be in the core open source stuff or whether it be in the open ecosystem, being able to pull tools in. Of course, the announcement earlier today, with IBM's Data Science Experience software as a framework for the data scientists to work as a team, that thing in and of itself is also very open. You can plug in Python, you can plug in open source models and libraries, some of which were developed in the cloud and published externally. So, it's all about continued availability of open collaboration; that is the hallmark of this wave of technology. >> Okay, so we have this issue of how much we can improve productivity with better tools or with some amount of data. But then, the part that everyone's also pointing out, besides the cloud experience, is the ability to operationalize the models and get them into production, either in bespoke apps or packaged apps. How's that going to sort of play out over time? >> Well, I think two things you'll see. One, certainly in the near term, again, with our collaboration with IBM and the Data Science Experience: one of the key things there is not just making the data scientists able to be more collaborative, but also the ease with which they can publish their models out into the wild. And so, kind of closing that loop to action is really important. I think, longer term, what you're going to see, and I gave a hint of this a little bit in my keynote this morning, is, I believe in five years, we'll be talking about scalability, but scalability won't be the way we think of it today, right? Oh, I have this many petabytes under management. That's upkeep. But truly, scalability is going to be how many connected devices you have interacting, and how many analytics you can actually push, from a model perspective, out to the center or out to the device to run locally. Why is that important? Think about it as a consumer with a mobile device. The time of interaction, your attention span: do you get an offer at the right time, and is that offer relevant? It can't be rules-based, it has to be model-based. There's no time for the electrons to move from your device across a power grid, run an analytic, and have it come back. It's going to happen locally. So scalability, I believe, is going to be determined in terms of the CPU cycles and the total interconnected IoT network that you're working in. What does that mean for your original question?
That means applications have to be portable, models have to be portable, so that they can execute out at the edge where it's required. And so that's, obviously, part of the key technology that we're working on in Hortonworks DataFlow, the combination of Apache NiFi and Apache Kafka and Storm, to really combine that: "How do I manage not only data in motion, but ultimately, how do I move applications and analytics to the data, and not be required to move the data to the analytics?" >> So, question for you. You talked about real-time offers, for example. We talk a lot about predictive analytics, advanced analytics, data wrangling. What are your thoughts on preemptive analytics? >> Well, I think that, while that sounds a little bit spooky, because we're kind of mind reading, I think those things can start to exist. Certainly because we now have access to all of the data and we have very sophisticated data science models that allow us to understand and predict behavior, yeah, the timing of real-time analytics or real-time offer delivery could actually, from our human perception, arrive before I thought about it. And isn't that really cool, in a way? I'm thinking about, I need to go do X, Y, Z. Here's a relevant offer, boom. So it's no longer, I clicked here, I clicked here, I clicked here, and in five seconds I get a relevant offer; before I even thought to click, I got a relevant offer. And again, to the extent that it's relevant, it's not spooky. >> Right. >> If it's irrelevant, then you deal with all of the other downstream impact. So that, again, points to more and more and more data, and more and more and more accurate and sophisticated models, to make sure that that relevance exists. >> Exactly. Well, Scott Gnau, CTO of Hortonworks, thank you so much for stopping by The Cube once again. We appreciate your conversation and insights. And for George Gilbert, I am Lisa Martin. You're watching The Cube live, from day one of the DataWorks Summit in the heart of Silicon Valley. Stick around, though, we'll be right back.
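Scott's picture of pushing the model to the device, rather than the data to the model, is simple to sketch. The snippet below is illustrative, not Hortonworks' shipping code: a model trained centrally is exported as plain coefficients, and the device evaluates it locally so the offer decision never makes a network hop. The weights, features, and threshold are invented.

import math

# Exported from a centrally trained logistic regression (hypothetical values).
WEIGHTS = {"dwell_seconds": 0.8, "near_store": 1.5, "recent_purchases": 0.6}
BIAS = -2.0

def offer_probability(features):
    # Standard logistic scoring, cheap enough for any handset CPU.
    z = BIAS + sum(WEIGHTS[name] * features.get(name, 0.0) for name in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

# Decision made locally, within the attention span Scott describes.
context = {"dwell_seconds": 2.5, "near_store": 1.0, "recent_purchases": 1.0}
if offer_probability(context) > 0.7:
    print("show offer")

The design point is that only the model moves, a few numbers refreshed occasionally, while the interaction data stays on the device.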

Published Date : Jun 13 2017

Sam Greenblatt, Nano Global - Open Networking Summit 2017 - #ONS2017 - #theCUBE


 

(lively synth music) >> Announcer: Live, from Santa Clara, California, it's The Cube, covering Open Networking Summit 2017. Brought to you by The Linux Foundation. >> Hey, welcome back everybody, Jeff Frick here with The Cube. We are at Open Networking Summit, joined here in this segment by Scott Raynovich, my guest host for the next couple days, great to see you again Scott. >> Good to see you. >> And real excited to have a long-time Cube alumni, a many-time Cube alumni, always up to some interesting and innovative things. (Scott laughs) Sam Greenblatt, he's now, amongst other things, the CTO of Nano Global, nano like very, very small. Sam, great to see ya. >> Great to see you too, Jeff. >> So you said before we went on, you thought you would retire, but there are just too many exciting things going on, and it dragged you back into this crazy tech world. >> Just when you think you're out, they pull you back in. (all laugh) >> All right, so what is Nano Global, for people that aren't familiar with the company? >> Nano Global makes Amosil-Q, which is a nano compound that basically kills viruses, pathogens, and fungi, and it does it by attaching itself at the nano level to this microbial life and imploding it; technically that term is called lysis. >> (Jeff) That sounds very scary. >> It's very scary, because we try to sell it as a hand sanitizer. >> You just told me it kills everything, I don't know if I want to put that on my hands, Sam. (all laugh) >> No, it's good. It does kill some of the good bacteria, but it basically protects you for 24 hours. You don't have to reapply it, and you can wash your hands. >> (Scott) It's like you become Superman or something. >> Absolutely. I literally use it to wash off the trays on the planes, and the armrests, while the guy next to me is sneezing like crazy, to try to kill any airborne pathogens. >> So what about the nanotechnology has got you traveling up to Santa Clara today? >> Well, one of the things we're working on, besides that, is genomics, and I've worked with some other companies on genomics besides Nano, and genomics has me totally fascinated. When I was at Dell, I went to ASU, and for the first time I saw pediatric genomics being processed quickly, and that was in a day. Today, a day is unheard of, it's terrible; you want to do it in less than an hour. And I was fascinated by how many people can be affected by the use of genomic medicine and genomic pharmacology. And you see some of the ads on TV, like Teva; that's genomic medicine that attacks a genomic irregularity in your DNA, so it's amazing. And the other thing I'm very interested in is eradicating in my lifetime, which I don't know if it's going to happen, cancer, and how you do that is very simple. They found that chemotherapy is interesting, but not fascinating; it doesn't always work. But what they're finding is if they can find enough biometric information from genomes, from your proteomics, from your RNA, they can literally customize, it's called precision medicine, a specific medicine track for you, to actually fight the cancer successfully. >> I can't wait for the day, and hopefully it will be in your lifetime, when they look back at today's cancer treatments and say, "Now what did you do again? (Sam laughs) You gave them as much poison as they could take, right up to the time they almost die, and hopefully the cancer dies first?"
>> I'll take the-- >> It's like bloodletting; it will not be that long from now that we look back at this time and say that was just archaic, which is good. >> It's called reactive medicine. It's funny, there's a story that the guy who actually did the sequencing of the original DNA strand tells, that when he was younger, he was able to see his chromosomes, and then he was able to get down to the DNA and to the proteins, and he could see that he had an irregularity that was a known marker for cancer. And he went to the doctor, and he said, "I think I have cancer of the pancreas." And the guy said, "Your blood tests don't show it." And by the way, you don't get that blood test until you're over 40 years old, the PSA test. And what happened was they actually found out that he had cancer of the pancreas, so... >> Yeah, it's predictive, isn't it? So basically what you're doing is you're data mining the human and the human genome, and trying to do some sort of-- >> We're not doing the 23andMe thing, which tells you you have a propensity to be fat. >> Right, right, but walk us through what you're doing. You're obviously, you're here at an IT cloud conference, so you're obviously using cloud technology to help accelerate the discovery of medicine, so walk us through how you're doing that. >> What happens is, when you get the swab, or the blood, and your DNA is then processed, it comes in and it gets cut into however many samples they need. 23andMe uses 30x, that's 30 pieces. That's 80 gigabytes of data, by the way. If you were to take a 50x, which is what you need for cancer, and which is probably low, that takes you up to 150 gigabytes per person. Now think about the fact that you've got to capture that, then you've got to capture the RNA of the person, you've got to capture his biometrics, and you've got to capture his electronic medical record and all the radiology that's done. And you've got to bring it together, look at it, and determine what they should do. And the problem is the oncology doctors today are scared to death of this, because they know: if you have this, I'm going to take you in and basically do some radiation, I'm going to do chemotherapy on you, and run the course. What's happening is, when you do all of this, you've got to correlate all this data; it's probably the world's largest big data outside of YouTube. It's number two in number of bytes, and we haven't sequenced everybody on the planet. Everybody should get sequenced and it should be stored; that's called a germline, taken while you're healthy. Then you take the cancer sample and you look at the germline and compare it, and then you're able to see what the difference is. Now, open source has great technology to deal with this flood of data. LinkedIn, as you know, open sourced Kafka, and one of the things that's great about that is it's a pull model, a producer, broker, subscriber model, and you can open up multiple channels. And by opening up multiple channels, the subscribers are doing the pull instead of you trying to send it all and overflowing the pipe, and we all know what it's like to overflow a pipe. It goes everywhere. So you do it through a Kafka model or a NiFi model, which was, by the way, donated by the NSA.
We're not going to unmask who donated it but, (laughs) no, I'm only kidding, but the NSA donated it, and data flows now become absolutely critical, because as you get these segments of DNA, you've got to send it all down, and then what you've got to do, and you're going to love this, is a hidden Markov model, and put it all back together so you can match the pattern. And then once you match the pattern, you've got to do quality control to see whether or not you screwed it up. And then, beyond that, you have to do something called Smith-Waterman, which is a QC step, and then you can give it to somebody to figure out where the variant is. The whole key is, all three of us share 99.9% of the same DNA. That remaining tenth of a percent is the variant. The variant is what causes all the diseases. We're all born with cancer. You have cancer in you, I have it, Jeff has it, and the only difference between a healthy person and a sick person is your killer cell went to sleep and doesn't attack the cancer. The only way to attack cancer is not chemotherapy, and I know every oncology person who sees this is going to have a heart attack; it's basically to let your immune system fight it. So what this tech does is it moves all that massive data into the variant. Once you get the variant, then you've got to look at the RNA and see if there's variance there. Then you've got to look at the radiology, the germline, and the biometric data, and once you get that, you can make a decision. The guy who's my hero in this is Dr. Soon-Shiong. He's the guy who came up with Abraxane. Abraxane is for pancreatic-- >> Jeff: Who is he with now? >> NantHealth. (both laugh) And by the way, he knew all about medicine, but he didn't know anything about technology. So then this becomes probably the best machine learning problem that you can have, because you have all this data, and you're going to learn how it works on patients. And you're going to get all the records back. So what I'm going to talk about, because they wanted to talk about using SDN, using NFV, is opening up hundreds of channels from provider to the subscriber, or consumer, as they call it, with the broker in the middle. And moving that data, then getting it over there, and doing the processing fast enough that it can be done while the patient still hasn't had any other problems. So I have great charts of what the genome looks like. I sent them to you. >> So it's clear these two fields are going to continue to merge, the bioinformatics and IT cloud. >> Sam: They're merging, as fast as possible. >> And we just plug our brains and our bodies into the health cloud, and it tells us what's up. >> Exactly. If Ginni were here, Ginni Rometty from IBM, she would tell you about quantum; she just announced the first commercially available quantum computer. Her first use for it is genomics, because genomics is a very repetitive process that is done in parallel. Remember, you just cut this thing into 50 pieces, you put it back together, and now you're looking to see what's hidden that doesn't look normal. If you looked at my genetics, one of the things you'd notice is that I'm not supposed to consume a lot of caffeine. And how they know that is because there's a marker in my 23 chromosomes that basically says I won't consume it. That turns out to be totally wrong, based on my behavior over the day.
(all laugh) But what's interesting at the Linux Foundation is that everybody here wants to talk about whether we're going to use this technology or that technology. What they want is an application using the technology, and NantHealth, that I talked about, can transport a terabyte of data virtually. In other words, it's not really doing it, but it's doing it through multiple sources and multiple consumers, and that's what people are fascinated by. >> All right, well, like I said, Sammy gets into the wild and woolly ways and exciting new things. (Sam laughs) So sounds great, and a very bright future on the health care side. Thanks for stopping by. >> Thank you very much. I hope I didn't bore you with... (Jeff and Sam laugh) >> No, no, no, we don't want more chemotherapy; it's definitely better to have less chemotherapy and more genetic fixing of sickness. So Sam, nice to see you again, thanks for stopping by. >> Thank you very much. >> Scott Raynovich, Jeff Frick, you're watching The Cube, from Open Networking Summit in Santa Clara; we'll be back after this short break. Thanks for watching. (synth music) >> Announcer: Robert Hershevech.
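Sam name-checks Smith-Waterman as the step after pattern matching and quality control. For readers who want to see what that stage actually computes, here is a self-contained sketch of Smith-Waterman local alignment using a simple +2 match, -1 mismatch, -2 gap scheme; production pipelines use tuned scoring matrices and heavily optimized implementations, so this is only the textbook recurrence.

def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    # Score matrix H, clamped at zero, which is what makes the alignment local.
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best, best_pos = 0, (0, 0)
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            H[i][j] = max(0, diag, H[i-1][j] + gap, H[i][j-1] + gap)
            if H[i][j] > best:
                best, best_pos = H[i][j], (i, j)
    # Traceback from the best cell to recover the aligned fragments.
    i, j = best_pos
    top, bottom = [], []
    while i > 0 and j > 0 and H[i][j] > 0:
        if H[i][j] == H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch):
            top.append(a[i-1]); bottom.append(b[j-1]); i -= 1; j -= 1
        elif H[i][j] == H[i-1][j] + gap:
            top.append(a[i-1]); bottom.append("-"); i -= 1
        else:
            top.append("-"); bottom.append(b[j-1]); j -= 1
    return best, "".join(reversed(top)), "".join(reversed(bottom))

score, top, bottom = smith_waterman("ACACACTA", "AGCACACA")
print(score, top, bottom)  # best local score and the aligned fragments

Against the whole-genome volumes quoted above, 80 to 150 gigabytes per person, this quadratic dynamic program is exactly why the compute bill is so large and why parallel hardware keeps coming up in the conversation.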

Published Date : Apr 5 2017

Stephanie McReynolds, Alation & Lee Paries, Think Big Analytics - #BigDataSV - #theCUBE


 

>> Voiceover: San Jose, California, it's theCUBE, covering Big Data Silicon Valley 2017. (techno music) >> Hey, welcome back everyone. Live in Silicon Valley for Big Data SV. This is theCUBE coverage in conjunction with Strata + Hadoop. I'm John Furrier with George Gilbert at Wikibon. Two great guests. We have Stephanie McReynolds, Vice President at the startup Alation, and Lee Paries, who is the VP of Think Big Analytics. Thanks for coming back. You've both been on theCUBE before, and Think Big has been on many times. Good to see you. What's new, what are you guys up to? >> Yeah, excited to be here and to be here with Lee. Lee and I have a personal relationship that goes back quite a ways in the industry. And what we're talking about today is the integration between Kylo, which was recently announced as an open source project from Think Big, and Alation's capability to sit on top of Kylo and together increase the velocity of data lake initiatives, kind of going from zero to 60 in a pretty short amount of time, to get both technical value from Kylo and business value from Alation. >> So talk about Alation's traction, because you guys have been an interesting startup, a lot of great press. George is a big fan. He's going to jump in with some questions, but you've got some good product fit with the market. What's the update? What's the status on the traction in terms of the company and customers and whatnot? >> Yeah, we've been growing pretty rapidly for a startup. We've doubled our production customer count since the last time we talked. Some great brand names. Munich Reinsurance this morning was talking about their implementation; they have 600 users of Alation in their organization. We've entered Europe, not only with Munich Reinsurance, but Tesco is a large account of ours in Europe now. And here in the States we've seen broad adoption across a wide range of industries, everyone from Pfizer in the healthcare space to eBay, who's been our longest-standing customer. They have about 1,000 weekly users on Alation. So not only a great increase in number of logos, but also organic growth internally at many of these companies, across data scientists, data analysts, business analysts, a wide range of users of the product as well. >> It's been interesting. What I like about your approach, and we talked with Think Big about this before: every guest that's come in so far who's been in the same area is talking about metadata layers. And so this is interesting; there's metadata addressability, if you will, for lack of a better description, but it has to be human-usable, integrated into human processes, whether it's virtualization or any kind of real-time app or anything. So you're seeing this convergence between I need to get the data into an app, whether it's IoT data or something else, really, really fast, so the discovery piece is now the interesting layer. How competitive is it, and what are the different solutions that you guys see in this market? >> Yeah, I think it's interesting, because metadata has kind of had a revival, right? Everyone is talking about the importance of metadata and open integration with metadata. I think our angle as Alation is that having open transfer of technical metadata is very important for the foundation of analytics, but what really brings that technical metadata to life is also understanding the business context of what's happening technically in the system. What's the business context of data?
What's the behavioral context of how that data has been used that might inform me as an analyst? >> And what's your unique approach to that? Because that's like the Holy Grail. It's like translating geek metadata, indexing stuff, into usable business outcomes. It's been a cliche for years, you know. >> The approach is really based on machine learning and AI technology to make recommendations to business users about what might be interesting to them. So we're at a state in the market where there is so much data that is available and that you can access, either in Hadoop as a data lake or in a data warehouse in a database like Teradata, that today what you need as state of the art is a system that starts to recommend to you what might be interesting data for you to use as a data scientist or an analyst, and not just what's the data you could use, but how accurate is that data, how trustworthy is it? I think there's a whole other theme of governance that's rising that's tied to that metadata discussion, which is that it's not enough to just shove bits and bytes between different systems anymore. You really need to understand how has this data been manipulated and used, and how does that influence my security considerations, my privacy considerations, the value I'm going to be able to get out of that data set? >> What's your take on this, 'cause you guys have a relationship. How is Think Big doing? Then talk about the partnership you guys have with Alation. >> Sure, so when you look at what we've done specifically with an open source project, it's the first one that Teradata has fully sponsored and released under Apache 2.0, called Kylo. It's really about the enablement of the full data lake platform and the full framework, everything from ingest, to securing it, to governing it, and part of that process is collecting the basic technical and business metadata so later you can hand it over to the user, so they can sample, they can profile the data, they can find it, they can search it in a Google-like manner, and then you can enable the organization with that data. So when you look at it from the standpoint of partnering together, it's really about collecting that data specifically within Hadoop to enable it, yet with the ability then to hand it off to a more enterprise-wide solution like Alation through API connections that connect to it, and then they enrich it in the way that they go about it, with the social collaboration and the business context, to extend it from there. >> So that's the accelerant then. So you're accelerating the open source project through this integration with Alation. So you're still going to rock and roll with the open source. >> Very much going to rock and roll with the open source. So it's really been based on five years of Think Big's work in the marketplace over about 150 data lakes. The IP we've built around that to do things repeatedly and consistently, and then releasing that in the last two years as dedicated development based on Apache Spark and NiFi to stand that up. >> Great work by the way. Open source continues to be more relevant. But I got to get your perspective on a meme that's been floating around day one here, and maybe it's because of the election, but someone said, "We got to drain the data swamp and make data great again."
And not a play on Trump, but the data lake is going through a transition, and people are saying, "Okay, we've got data lakes," but now this year there's been a focus on making them much more active and cleaner, and making sure a lake doesn't become a swamp, if you will. So there's been a focus on taking data lake content and getting it into real time, and IoT has I think been a forcing function. But do you guys have a perspective on where data lakes are going? Certainly it's been a trending conversation here at the show. >> Yeah, I think IoT has been part of drain that data swamp, but I think also now you have a mass of business analysts that are starting to get access to that data in the lake. These Hadoop implementations are maturing to the stage where you have-- >> John: To value coming out of it. >> Yeah, and people are trying to wring value out of that lake, and sometimes finding that it is harder than they expected because the data hasn't been pre-prepared for them. This old world of IT, where they would pre-prepare the data and then I got a single metric or a couple metrics to choose from, is now turned on its head. People are taking a more exploratory, discovery-oriented approach to navigating through their data, and finding that the nuances of data really matter when trying to develop an insight. So the literacy in these organizations and their awareness of some of the challenges of a lake are coming to the forefront, and I think that's a healthy conversation for us all to have. If you're going to have a data-driven organization, you have to really understand the nuances of your data to know where to apply it appropriately to decision making. >> So (mumbles), actually going back quite a few years to when he started at Microsoft, said Internet software changed the paradigm so much in that we have this new set of actions, where it was discover, learn, try, buy, recommend, and it sounds like as a consumer of data in a data lake we've added, or prepended, this discovery step. Whereas in a well-curated data warehouse it was learn: you had your X dimensions that were curated and refined, and you don't have that as much with the data lake. I guess I'm wondering, as we were talking with the last team from AtScale about moving OLAP to be something you consume on a data lake the way you consume it on a data warehouse, it's almost like Alation and a smart catalog are as much a requirement there as a visualization tool is by itself on a data warehouse? >> I think what we're seeing is this notion of data needing to be curated, and including many brains and many different perspectives in that curation process, is something that's defining the future of analytics and how people use technical metadata. And what does it mean for the devops organization to get involved in draining that swamp? That means not only looking at the elements of the data that are coming in from a technical perspective, but then collaborating with the business to curate the value on top of that data. >> So in other words it's not just to help the user, the business analyst, navigate, but it's also to help the operational folks do a better job of curating once they find out who's using the data and how. >> That's right. They kind of need to know how this data is going to be used in the organization. The volumes are so high that they couldn't possibly curate every bit and byte that is stored in the data lake.
So by looking at how different individuals and different groups in the organization are trying to access that data, that gives an early signal as to where we should be spending more time or less time in processing this data, and helping the organization really get to their end goals of usage. >> Lee, I want to ask you a question. On your blog post, as was pointed out to me earlier, you guys quote a Gartner stat, which is pretty doom and gloom, which said, "70% of Hadoop deployments in 2017 will either fail or not deliver their estimated cost savings or their predicted revenue." And then it says, "That's a dim view, but not shared by the Kylo community." How are you guys going to make the Kylo data lake software work well? What are your thoughts on that? Because I think that's the number one question, again, the one I highlighted earlier: okay, I don't want a swamp. So that's the fear, whether they get one or not, and they worry about data cleansing and all these things. So what's Kylo doing that's going to accelerate things, or lower that number of failures in the data lake world? >> Yeah sure, so again, a lot of it's through the experience of going out there and seeing what's done. A lot of people have been doing a lot of different things within their data lakes, but when you go in there, there are certain things they're not doing, and then when you are doing them, it's about doing them consistently and continually improving upon that, and that's what Kylo is. It's really a framework that we keep adding to, and as the community grows and other projects that can enhance it come in, we bring the value. But a lot of times when we go in, it's basically that end users can't get to the data, either because they're not allowed to, because maybe it's not secured and reliable enough to turn over to them and let them drive with it, or because they don't know the data is there, which goes back to collecting the basic metadata and data (mumbles) to know it's there to leverage it. So a lot of times it's going back and looking at and leveraging what we have to build that solid foundation, so IT and operations can feel like they can hand that over in a template format so business users can get to the data and start acting off of that. >> You just lost your mic there, but Stephanie, I got to ask you a question. Just on a point of clarification, are you supporting Kylo? Is that the relationship, or how does that work? >> So we're integrated with Kylo. So Kylo will ingest data into the lake, manage that data lake from a security perspective, giving folks permissions, and enable some wrangling on that data, and what Alation is receiving then from Kylo is the technical metadata that's being created along that entire path. >> So you're certified with Kylo? How does that all work from the customer standpoint? >> It's very much an integration partnership where we'd be working together. >> So from a customer standpoint it's clean, and you then provide the benefits on the other side? >> Correct. >> Yeah, absolutely. We've been working with data lake implementations for some time, since our founding really, and I think this is an extension of our philosophy that data lakes are going to play an important role, that they're going to complement databases and analytics tools, business intelligence tools, and the analytics environment, and that open source is part of the future of how folks are building these environments. So we're excited to support the Kylo initiative.
We've had a longstanding relationship with Teradata as a partner, so it's a great way to work together. >> Thanks for coming on theCUBE. Really appreciate it, and thanks. What do you think of the show so far? What's the current vibe of the show? >> Oh, it's been good so far. I mean, it's only one day in, but a very good vibe so far. Different topics and different things-- >> AI, machine learning. You couldn't be happier with that machine learning-- >> Great to see machine learning taking the forefront, people really digging into the details around what it means when you apply it. >> Stephanie, thanks for coming on theCUBE, really appreciate it. More CUBE coverage after the show break. Live from Silicon Valley, I'm John Furrier with George Gilbert. We'll be right back after this short break. (techno music)
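A note on the integration discussed above: the pattern Paries and McReynolds describe is an ingest framework (Kylo) capturing technical metadata, meaning schema, source location, and profiling statistics, as a feed lands in the lake, then handing that metadata to a catalog (Alation) over an API connection. The Python sketch below is only an illustration of that handoff; the endpoint, token, and payload fields are hypothetical and do not reflect the actual Kylo or Alation APIs.

    import json
    import urllib.request

    # Hypothetical catalog endpoint and credential; the real Alation REST API
    # differs. This only illustrates the shape of the metadata handoff.
    CATALOG_URL = "https://catalog.example.com/api/technical-metadata"
    API_TOKEN = "replace-me"

    # The kind of technical metadata a Kylo-style ingest feed captures:
    # where the data landed, its schema, and basic profiling statistics.
    feed_metadata = {
        "feed": "customer_clickstream",
        "location": "hdfs:///lake/raw/clickstream",
        "schema": [
            {"name": "user_id", "type": "string"},
            {"name": "event_ts", "type": "timestamp"},
            {"name": "page", "type": "string"},
        ],
        "profile": {"row_count": 1204331, "null_user_id_pct": 0.02},
    }

    request = urllib.request.Request(
        CATALOG_URL,
        data=json.dumps(feed_metadata).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "Authorization": "Token " + API_TOKEN},
    )
    urllib.request.urlopen(request)  # hand the feed's metadata to the catalog

From there, a catalog layer can enrich what the ingest layer collected with the business and behavioral context discussed in the interview.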

Published Date: Mar 15 2017


SENTIMENT ANALYSIS:

ENTITIES

Entity | Category | Confidence
Stephanie McReynolds | PERSON | 0.99+
George Gilbert | PERSON | 0.99+
Europe | LOCATION | 0.99+
Stephanie | PERSON | 0.99+
Lee | PERSON | 0.99+
Tesco | ORGANIZATION | 0.99+
Lee Paries | PERSON | 0.99+
George | PERSON | 0.99+
Trump | PERSON | 0.99+
2017 | DATE | 0.99+
John | PERSON | 0.99+
Pfizer | ORGANIZATION | 0.99+
five years | QUANTITY | 0.99+
Microsoft | ORGANIZATION | 0.99+
Think Big | ORGANIZATION | 0.99+
John Furrier | PERSON | 0.99+
70% | QUANTITY | 0.99+
San Jose, California | LOCATION | 0.99+
Alation | ORGANIZATION | 0.99+
Teradata | ORGANIZATION | 0.99+
Think Big Analytics | ORGANIZATION | 0.99+
Silicon Valley | LOCATION | 0.99+
Gartner | ORGANIZATION | 0.99+
zero | QUANTITY | 0.99+
Kylo | ORGANIZATION | 0.99+
60 | QUANTITY | 0.99+
600 users | QUANTITY | 0.98+
AtScale | ORGANIZATION | 0.98+
eBay | ORGANIZATION | 0.98+
Google | ORGANIZATION | 0.98+
today | DATE | 0.98+
first one | QUANTITY | 0.98+
Hadoop | TITLE | 0.98+
Both | QUANTITY | 0.98+
both | QUANTITY | 0.97+
Two great guests | QUANTITY | 0.97+
this year | DATE | 0.97+
about 1,000 weekly users | QUANTITY | 0.97+
one day | QUANTITY | 0.95+
single metric | QUANTITY | 0.95+
Apache Spark | ORGANIZATION | 0.94+
Kylo | TITLE | 0.93+
Wikibon | ORGANIZATION | 0.93+
NiFi | ORGANIZATION | 0.92+
about 150 data lakes | QUANTITY | 0.92+
Apache 2.0 | TITLE | 0.89+
this morning | DATE | 0.88+
couple | QUANTITY | 0.86+
Big Data Silicon Valley 2017 | EVENT | 0.84+
day one | QUANTITY | 0.83+
Vice President | PERSON | 0.81+
Strata | TITLE | 0.77+
Kylo | PERSON | 0.77+
#theCUBE | ORGANIZATION | 0.76+
Big Data | ORGANIZATION | 0.75+
last two years | DATE | 0.71+
one | QUANTITY | 0.7+
Munich Reinsurance | ORGANIZATION | 0.62+
CUBE | ORGANIZATION | 0.52+

Scott Gnau, Hortonworks Big Data SV 17 #BigDataSV #theCUBE


 

>> Narrator: Live from San Jose, California, it's theCUBE, covering Big Data Silicon Valley 2017. >> Welcome back everyone. We're here live in Silicon Valley. This is theCUBE's coverage of Big Data Silicon Valley, our event in conjunction with O'Reilly Strata Hadoop. Of course we have our Big Data NYC event, and we have our special popup event in New York and Silicon Valley. This is our Silicon Valley version. I'm John Furrier, with my co-host Jeff Frick, and our next guest is Scott Gnau, CTO of Hortonworks. Great to have you on, good to see you again. >> Scott: Thanks for having me. >> You guys have an event coming up in Munich, so I know that there's a slew of new announcements coming up with Hortonworks in April, next month in Munich for your EU event, and you're going to be holding a little bit of that back, but some interesting news this morning. We had Wei Wang yesterday from the Microsoft Azure HDInsight team. That's flowering nicely, a good bet there, but the question has always been, at least from people in the industry, and we've been questioning you guys on, hey, where's your cloud strategy? Because as a disruptor you guys have been very successful with your always-open approach. Microsoft's guy was basically saying, that's why we go with Hortonworks, because of pure open source, committed to that from day one, never wavered. The question is cloud first; AI, machine learning, this is a sweet spot for IoT. You're starting to see the collision between cloud and data, and in the intersection of that is deep learning, IoT, a lot of amazing new stuff going to be really popping out of this. Your thoughts, and your cloud strategy. >> Obviously we see cloud as an enabler for these use cases. In many instances the use cases can be ephemeral. They might not be tied immediately to an ROI, so you're going to go to the capital committee and all this kind of stuff, versus let me go prove some value very quickly. It's one of the key enablers and core ingredients, and when we say cloud first, we really mean it. It's something where the solutions work together. At the same time, cloud becomes important. Our cloud strategy, and I think we've talked about this in many different venues, is really twofold. One is we want to give a common experience to our customers across whatever footprint they choose, whether they roll their own, they do it on-prem, or they do it in public cloud, and they have a choice of different public cloud vendors. We want to give them a similar experience, a good experience that is enterprise-grade, a platform-level experience, so not a point solution, kind of one function and then get rid of it, but really being able to extend the platform. What I mean by that, of course, is being able to have common security, common governance, common operational management. Being able to have a blueprint of the footprint so that there's compatibility of applications that get written. And those applications can move as they decide to change their mind about where their platform is hosting the data, so our goal really is to give them a great and common experience across all of those footprints, number one. Then number two, to offer a lot of choices across all of those domains as well, whether it be, on one end of the spectrum, hey, I want to do infrastructure as a service and I know what I want, to, I'm not sure exactly what I want, but I want to spin up a data science cluster really quickly.
Boom, here's a platform as a service offer that runs and is available, very easy to consume, comes preconfigured, and kind of everywhere in between. >> By the way, yesterday Wei was pointing out 99.99% SLAs on some of the stuff coming out. >> Are amazing, and obviously in the platform as a service space you also get the benefit of other cloud services that can plug in that wouldn't necessarily be something you'd expect to be typical of a core Hadoop platform. Getting the SLAs, getting the disaster recovery, getting all of the things that cloud providers can provide behind the scenes is some additional upside obviously as well in those deployment options. Having that common look and feel, making it easy, making it frictionless, are all of the core components of our strategy, and we saw a lot of success with that coming out of year end last year. We see rapid customer adoption. We see rapid customer success, and frankly I would say that 99.9% of customers that I talk to are hybrid, where they have a foot on-prem and they have a foot in cloud, and they may have a foot in multiple clouds. I think that's indicative of what's going on in the world. Think about the gravity of data. Data movement is expensive. Analytics and multi-core chipsets give us the ability to process and crunch numbers at unprecedented rates, but movement of data is actually kind of hard. There's latency, it can be expensive. A lot of data in the future, IoT data, machine data, is going to be created and live its entire lifecycle in the cloud, so the notion of being able to support hybrid with a common look and feel I think very strategically positions us to help our customers be successful when they start actually dealing with data that lives its entire lifecycle outside the four walls of the data center. >> You guys really did a good job, I thought, on having that clean positioning of data at rest, but also you had the data in motion, which I think was ahead of its time. You guys really nailed that, and you also had the IoT edge in mind. We talked, I think two years ago, and this was really not on everyone's radar, but you guys saw that, so you've made some good bets on HDInsight, and we talked about that yesterday with Wei on here from Microsoft. So edge analytics and data in motion are very key right now, because that batch and streaming world's coming together, and IoT's flooding it with all this kind of data. We've seen the success in the clouds, where analytics have been super successful when powered by the clouds. I got to ask you, with Microsoft as your preferred cloud provider, what's the current status for customers who have data in motion, specifically IoT too? It's the common question we're getting, not necessarily the Microsoft question, but okay, I've got edge coming in strong-- >> Scott: Mm-hmm >> and I'm going to run certainly hybrid in a multi cloud world, but I want to put most of the analytics in the cloud, and how do I deal with the edge? >> Wow, there's a lot there (laughs) >> John: You got 10 seconds, go! (laughs) You have Microsoft as your premier cloud, and you have an Amazon relationship with a marketplace and whatnot. You've got a great relationship with Microsoft. >> Yeah. I think it boils down to a bigger macro thing, and hopefully I'll peel into some specifics. I think number one, we as an industry kind of shortchange ourselves talking about Hadoop, Hadoop, Hadoop, Hadoop, Hadoop.
I think it's bigger than Hadoop, not different than, but certainly more than, right, and this is where we started with the whole connected platforms positioning, because traditional Hadoop comes from traditional thinking about data at rest. So I've got some data, I've stored it, and I want to run some analytics, and I want to be able to scale it, and all those kinds of things. Really good stuff, but only part of the issue. The other part of the issue is data that's moving, data that's being created outside of the four walls of the data center. Data that's coming from devices. How do I manage and move and handle all of that? Of course there have been different hype cycles on streaming and streaming analytics and data flow and all those things. What we wanted to do is take a very protracted look at the problem set of the future. We said, look, it's really about the entire lifecycle of data, from inception to the demise of the data, or the data being deleted, which very infrequently happens these days. >> Or cold storage-- >> Cold storage, whatever. You know, it's created at the edge, it moves through, it moves in different places, it's landed, it's analyzed, there are models built. But as models get deployed back out to the edge, that entire problem set is a problem set that I think we, certainly we at Hortonworks, are looking to address with our solutions. That actually is accelerated by the notion of multiple cloud footprints, because when you think about a customer that may have multiple cloud footprints and trying to tie the data together, it creates a unique opportunity. I think there's a reversal in the way people need to think about the future of compute. Having been around for a little bit of time, it's always been let me bring all the data together to the applications and have the applications run, and then I'll send answers back. That is impossible in this new world order. Whether it be the cloud or the fog or any of the things in between, or the data center, data are going to be distributed and data movement will become the expensive thing, so it will be very important to be able to have applications that are deployable across a grid, and applications move to the data instead of data moving to the application. Or at least to have a choice and be able to be selective. So I believe that ultimately, scalability five years from now, ten years from now, is not going to be about how many exabytes I have in my cloud instance. That will be part of it, but it will be about how many edge devices I can have computing and analyzing simultaneously and coordinating this information with each other, to optimize customer experience, to optimize the way an autonomous car drives, or anywhere in between. >> It's totally radical, but it's also innovative. You mentioned the cost of moving data will be the issue. >> Scott: Yeah. >> So that's going to change the architecture of the edge. What are you seeing with customers? 'Cause we're seeing a lot of people taking a protracted view like you were talking about, and looking at the architectures, specifically around, okay, there's some pressure, but there's no real gun to the head yet, but there's certainly pressure to do architectural thinking around the edge and some of the things you mentioned. Patterns, things you can share, anecdotal stories, customer references. >> You know, the common thing is that customers go, "Yep, that's going to be interesting. It's not hitting me right now, but I know it's going to be important.
"How can I ease into it and kind of without the suspenders "how can I prove this is going to work and all that." We've seen a lot of certainly interest in that. What's interesting is we're able to apply some of that futuristic IoT technology in Hortonworks data flow that includes NiFi and MiNiFi out to the edge to traditional problems like, let me get the data from the branches into the central office and have that roundtrip communication to a banker who's talking to a customer and has the benefit of all the analytics at home, but I can guarantee that roundtrip of data and analytics. Things that we thought were solid before, can be solved very easily and efficiently with this technology, which is then also extensible even out further to the edge. In many instances, I've been surprised by customer adoption with them saying, "Yeah, I get that, but gee this helps me "solve a problem that I've had for the last 20 years "and it's very easy and it sets me up "on the right architectural course, "for when I start to add in those edge devices, "I know exactly how I'm going to go do it." It's been actually a really good conversation that's very pragmatic with immediate ROI, but again positioning people for the future that they know is coming. Doing that, by the way, we're also able to prove the security. Think about security is a big issue that everyone's talking about, cyber security and everything. That's typically security about my data center where I've got this huge fence around it and it's very controlled. Think about edge devices are now outside that fence, so security and privacy and provenance become really, really interesting in that world. It's been gratifying to be able to go prove that technology today and again put people on that architectural course that positions them to be able to go out further to the edge as their business demands it. >> That's such great validation when they come back to you with a different solution based on what you just proposed. >> Scott: Yep. >> That means they really start to understand, they really start to see-- >> Scott: Yep. >> How it can provide value to them. >> Absolutely, absolutely. That is all happening and again like I said this I think the notion of the bigger problem set, where it's not just storing data and analyzing data, but how do I have portable applications and portable applications that move further and further out to the edge is going to be the differentiation. The future successful deployments out there because those deployments and folks are able to adopt that kind of technology will have a time to market advantage, they'll have a latency advantage in terms of interaction with a customer, not waiting for that roundtrip of really being able to push out customized, tailored interactions, whether it be again if it's driving your car and stopping on time, which is kind of important, to getting a coupon when you're walking past a store and anywhere in between. >> It's good you guys have certainly been well positioned for being flexible, being an open source has been a great advantage. I got to ask you the final question for the folks watching, I'm sure you guys answer this either to investors or whatnot and customers. A lot's changed in the past five years and a lot's happening right now. You just illustrated it out, the scenario with the edge is very robust, dynamic, changing, but yet value opportunity for businesses. 
What's the biggest thing that's changing right now in the Hortonworks view of the world that's notable, that you think is worth highlighting to people watching, whether they're your customers, investors, or people in the industry? >> I think you brought up a good point: the whole notion of open, and the whole groundswell around open source, open community development as a new paradigm for delivering software. I talked a little bit about a new paradigm of the gravity of data and sensors, and this new problem set that we've got to go solve; that's kind of one piece of this storm. The other piece of the storm is the adoption and the wave of open, open community collaboration of developers, versus integrated silo stacks of software. That's manifesting itself in two places, and obviously I think we're an example of helping to create that. Open collaboration means quicker time to market and more innovation, accelerated innovation, in an increasingly complex world. That's one requirement slash advantage of being in the open world. I think the other thing that's happening is the generation of workforce. When I think about when I got my first job, I typed a resume with a typewriter. I'm dating myself. >> White out. >> Scott: Yeah, with white out. (laughter) >> I wasn't a good typist. >> Resumes today are basically name and GitHub address. Here's my body of work, and it's out there for everybody to see, and that's the mentality-- >> And they have their cute videos up there as well, of course. >> Scott: Well yeah, I'm sure. (laughter) >> So it's kind of like that shift to this is now the new paradigm for software delivery. >> This is important. You've got theCUBE interview, but I mean you're seeing it-- >> Is that the open source? >> In the entertainment. No, we're seeing people put huge interviews on their LinkedIn, so this notion of collaboration is in the software engineering mindset. You go back to when we grew up in software engineering; now it went to open source, and now GitHub is essentially a social network for your body of work. You're starting to see software development open source concepts apply to data engineering and data science, which is still in its early days. Media, media creation, whatnot. I think that's a really key point, and the data science tools are still in their infancy. >> I think open, and by the way I'm not here to suggest that everything will be open, but I think a majority and-- >> Collaborative. >> The majority of the problems that we're solving will be collaborative. They will be ecosystem driven, and where there's an extremely large market, open will be the most efficient way to address it. And certainly no one's arguing that data and big data is not a large market. >> Yep. You guys are all on the cloud now, you got the Microsoft; any other updates that you think are worth sharing with folks? >> You've got to come back and see us in Munich then. >> Alright. We'll be there; theCUBE will be there in Munich in April. We have the Hortonworks coverage going on at Data Works, the conference that's now called Data Works, in Munich. This is theCUBE here with Scott Gnau, the CTO of Hortonworks. Breaking it down, I'm John Furrier with Jeff Frick. More coverage from Big Data SV in conjunction with Strata Hadoop after the short break. (upbeat music)
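Gnau's argument that applications must move to the data, rather than data to the applications, is easiest to picture at the edge. Below is a minimal Python sketch of the MiNiFi-style pattern he describes: establish a local baseline, then forward only anomalous readings to the central collector instead of streaming everything back. The endpoint URL is a placeholder, and a real deployment would express this as a NiFi/MiNiFi flow rather than hand-rolled code.

    import json
    import statistics
    import urllib.request

    # Placeholder for the central ingest endpoint; a hypothetical URL.
    CENTRAL_INGEST_URL = "http://central-collector.example.com/ingest"

    def build_baseline(history):
        """Learn normal behavior from a window of past readings."""
        return statistics.mean(history), statistics.pstdev(history) or 1.0

    def is_anomalous(value, baseline, threshold=3.0):
        """Flag readings more than `threshold` std deviations from normal."""
        mean, stdev = baseline
        return abs(value - mean) / stdev > threshold

    def forward(payload):
        """Ship a compact summary upstream instead of the raw stream."""
        req = urllib.request.Request(
            CENTRAL_INGEST_URL,
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)

    baseline = build_baseline([20.1, 20.3, 19.9, 20.0, 20.2, 19.8])
    window = [20.0, 20.1, 55.7]  # latest readings; the last value spikes
    outliers = [r for r in window if is_anomalous(r, baseline)]
    if outliers:
        forward({"window_size": len(window), "outliers": outliers})

The design point is the one made in the interview: only a small summary crosses the expensive network path, while the bulk of the computation happens where the data is created.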

Published Date: Mar 15 2017


SENTIMENT ANALYSIS:

ENTITIES

Entity | Category | Confidence
Scott | PERSON | 0.99+
Jeff Frick | PERSON | 0.99+
John | PERSON | 0.99+
Microsoft | ORGANIZATION | 0.99+
Scott Gnau | PERSON | 0.99+
Amazon | ORGANIZATION | 0.99+
Scott Gnau | PERSON | 0.99+
New York | LOCATION | 0.99+
Munich | LOCATION | 0.99+
John Furrier | PERSON | 0.99+
Silicon Valley | LOCATION | 0.99+
April | DATE | 0.99+
yesterday | DATE | 0.99+
10 seconds | QUANTITY | 0.99+
Hortonworks | ORGANIZATION | 0.99+
San Jose, California | LOCATION | 0.99+
99.99 | QUANTITY | 0.99+
two places | QUANTITY | 0.99+
LinkedIn | ORGANIZATION | 0.99+
first job | QUANTITY | 0.99+
GitHub | ORGANIZATION | 0.99+
next month | DATE | 0.99+
two years ago | DATE | 0.98+
today | DATE | 0.98+
99.9% | QUANTITY | 0.98+
ten years | QUANTITY | 0.97+
Big Data | EVENT | 0.97+
five years | QUANTITY | 0.96+
Big Data Silicon Valley 2017 | EVENT | 0.96+
this morning | DATE | 0.95+
O'Reilly Strata Hadoop | ORGANIZATION | 0.95+
One | QUANTITY | 0.95+
Data Works | EVENT | 0.94+
year end last year | DATE | 0.94+
one | QUANTITY | 0.93+
Hadoop | TITLE | 0.93+
theCUBE | ORGANIZATION | 0.93+
one piece | QUANTITY | 0.93+
Wei Wang | PERSON | 0.91+
NYC | LOCATION | 0.9+
Wei | PERSON | 0.88+
past five years | DATE | 0.87+
first | QUANTITY | 0.86+
CTO | PERSON | 0.83+
four walls | QUANTITY | 0.83+
Big Data SV | ORGANIZATION | 0.83+
#BigDataSV | EVENT | 0.82+
one function | QUANTITY | 0.81+
Big Data SV 17 | EVENT | 0.78+
EU | LOCATION | 0.73+
HDInsight | ORGANIZATION | 0.69+
Strata Hadoop | PERSON | 0.69+
one requirement | QUANTITY | 0.68+
number two | QUANTITY | 0.65+

Rob Bearden, Hortonworks - Executive On-the-Ground #theCUBE


 

>> Voiceover: On the Ground, presented by The Cube. Here's your host, John Furrier. (techno music) >> Hello, everyone. Welcome to a special On the Ground executive interview with Rob Bearden, the CEO of Hortonworks. I'm John Furrier with The Cube. Rob, welcome to this On the Ground. >> Thank you. >> So I got to ask you, you're five years old this year, your company Hortonworks turns five in June, and you have Hadoop Summit coming up. What a magical run. You guys went public. Give us a quick update on Hortonworks and what's going on. The five-year birthday, any special plans? >> Well, we're going to actually host the 10-year birthday party of Hadoop, which, as you know, started at Yahoo! and in the open-source community. So everyone's invited. Hopefully you'll be able to make it as well. We've accomplished a lot in the last five years. We've grown to over 1000 employees, over 900 customers. This year is our first full year of being a public company, and the street has us at $265 million in billings. So tremendous progress has happened, and we've seen the entire data architecture begin to re-platform around Hadoop now. >> CEOs across the globe are facing profound challenges: data, cloud, mobile, obviously this digital transformation. What are you seeing out there as you talk to your customers? >> Well, they view the digital transformation as a massive opportunity for value creation for the enterprise. And they realize that they can really shift their business models from being very reactive post-transaction to actually being able to consolidate all of the new paradigm data with the existing transaction data and get to a very proactive model pre-transaction. And so they understand their customers' patterns. They understand the kinds of things that their customers want to buy before they ever engage in the procurement process. And they can make better and more compelling offers at better price points and be able to serve their customers better, and that's really the transformation that's happening, and they realize the value of that creation between them and their customer. >> And one of the exciting things about The Cube is we go to all these different industry events, and you were speaking last week at an event where data is at the center of the value proposition around digital transformation, and that's really been the key trend that we've been seeing consistently, that buzzword digital transformation. What does that mean to you? Because this is coming up over and over again around this digital platform, digital whatever, digital media, or digital engagement. It's all around data. What are your thoughts, and what is digital transformation from your perspective? >> Well, it's about being able to derive value from your data, and be able to take that value back to your customers and your supply chain, and to be able to create a completely new engagement in how you're managing your interaction with your customers and your supply chain, from the data that they're generating and the data that you have about them. >> When you talk to CEOs and people in the business out in the field, how much of this digital transformation do you see as real in terms of progress, real progress? In terms of total transitions, or is it just being talked about now? What's your progress bar meter? How would you peg this trend? >> I would say we're at four, and I believe we'll be at six by the end of 2016.
And it's one of the biggest movements I've seen since the '90s and ERP, because it's so transformational to the business model: being able to transform the data that we have about our collective entity and our collective customer and collective supply chain, and being able to apply predictive and real-time interactions against that data as events and occurrences are happening, and to be able to quickly offer products and services. The velocity that creates toward modernization, and the value creation it gives back, is at a pace that's never been possible. And they've really understood the importance of doing that, or being disintermediated in their existing spaces. >> You mention ERP; it kind of shows our age, but I'll ask the question. Back in the '90s, ERP, CRM, these were processes that were well known, that people automated with technology which was at that time unknown. You got the rise of client-server technology, local area networking; TCP/IP was emerging. So you got some unknown technology stuff happening, but known processes that were being automated, and hence we saw that boom. Now, you mention today, it's interesting, because Peter Burris at Wikibon has a thesis that says today the processes are unknown and the technology's known, so there's now a new dynamic. It's almost flipped upside-down, where this digital transformation is the exact opposite. IoT is a great use case where all these unknown things are coming into the enterprise that are value opportunities. The technology's known, so now the challenge is how to use technology, to deploy it, and be agile to capture and automate these future and/or real-time unknown processes. Your thoughts on that premise. >> The answers are buried in the data, which is the great news, and so the technology, as you said, is there, and you have these new, unknown processes through Internet of Things, the new paradigm data sets with sensors and clickstream and mobile data. And the good news is they generate the data, and we can apply technology to the data through AI and machine learning to really make sure that we understand how to get the value out of those data sets. >> So how does IT deal with this? 'Cause going back 30 years, IT had a clear line of sight, again, automating those known processes. Now you have unknown opportunities, but you have to be in a position for that. Call that cloud, call that DevOps, call that data driven, whatever the metaphor is. People have to be agile, be ready for it. How is that different now, and what is the future of data in that paradigm? And how does a customer come to grips with and rationalize this notion of I need a clear line of sight to the value, without knowing what the processes around the data are? What should they be doing? >> Well, we don't know the processes necessarily, per se, but we do know what the data is telling us, because we can bring all that data under management. We can apply the right kind of algorithms, the right kind of tools on it, to give us the outcomes that we want, and have the ability to monetize and unlock that value very quickly. >> Hortonworks' architecture kind of got laid out at the last Hadoop Summit in Dublin. We heard about the platform. Your architecture's going beyond Hadoop, even though it says Hadoop Summit, and Hadoop was the key to big data. Going beyond Hadoop means other things. What does that mean for the customer? Because now they're seeing these challenges. How does Hortonworks describe that, and what value do you bring to those customers?
>> Big data was about data at rest and being able to drive the transformation that it has: being able to consolidate all the transactional platforms into a central data architecture, being able to bring in all the new paradigm data sets, the mobile, the clickstream, the IoT data, and bring that together and be able to really transition from being reactive post-transaction to being predictive and interactive pre-transaction. And that's a very, very powerful value proposition, and you create a lot of value doing that. But what's really been learned through that process, in the digital transformation journey, is that the further upstream we can get to engaging with the data, even if we can get to it at the point of origination at the furthest edge, at the point of sensor, at the actual time of clickstream, and we can engage with that data as those events and occurrences are happening and process against those events as they're happening, it creates higher levels of value. So from the Hortonworks platform we have the ability to manage data at rest with Hadoop, as well as data in motion with the Hortonworks DataFlow platform. And our view is that we must be able to engage with all the data all the time. And so we bring the platforms to bring data under management from the point of origination, all the way through as it's in motion, to the point it comes at rest, and be able to aggregate those interactions through the entire process. >> It's interesting, you mention real-time, and one of the ideas of Hadoop was that it was always going to be a data warehouse killer, 'cause it makes a lot of sense. You can store the data. It's unstructured data, and you can blend in structured on top of that and build on top of that. Has that happened? And does real-time kind of change that equation? Because there's still a role for the data warehouse. If someone has an investment, is it being modernized? Clear that up for me, because I just can't quite rationalize that yet. Data warehouses, the older ones, are old, but they're not going away any time soon from what we're hearing. Your thoughts on Hadoop as the data warehouse killer. >> Yeah, well, our strategy from day one has never been to go in and disintermediate any of the existing platforms or any of the existing applications or services. In fact, to the contrary. What we wanted to do, and have done from day one, is be able to leverage Hadoop as an extension of those data platforms. The DW architecture has limitations in terms of how much data is pragmatically and economically viable to go into the data warehouse. And so our model says let's bring more data under management as an extension to the existing data warehouses, and give the existing data warehouses the ability to have a more holistic view of data. Now I think the next generation of evolution is happening right now, and the enterprise is saying, that's great. We're able to get more value, longer, from our existing data warehouse and tools investment by bringing more data under management, leveraging a combined architecture of Hadoop and data warehouse. But now they're trying to redefine what the data warehouse of the future really looks like, and it's really about how we make decisions, right? And at what point do we make decisions? Because the world of DW today assumes that data's aggregated post-transaction, right?
In the new world of data architecture that's across the IT landscape, it says we want to engage with data from the point it's originated, and we want to be able to process and make decisions as events and occurrences and opportunities arise, before that transaction potentially ever happens. And so the data warehouse of the future is much different in terms of how and when a decision's made and when that data's processed. And in many cases it's pre-transaction versus post-transaction. >> Well, also I would just add, and I want to get your thoughts on this, real-time, because now, in the moment of the transaction, we have cloud resources and potentially other resources that could become available. Why even go to the data warehouse? So how has real-time changed the game? 'Cause data in motion kind of implies real-time, whether it's IoT or some sort of bank transaction or something else. How has real-time changed the game? >> Well, it's at what point can we engage with the customer, but what it really has established is that the data has to be able to be processed whether it be on-prem, in the cloud, or in a hybrid architecture. And we can't be constrained by where the data's processed. We need to be able to take the processing to the data, versus having to wait for the data to come to the processing. And I think that's the very powerful part of cloud, on-prem, and software-defined networking, and when you bring all of those platforms together, you get the ability to have a very powerful and elastic processing capability at any point in the life cycle of the data. And we've never been able to put all those pieces together in an economically viable model. >> So I got to ask you, you guys are five years old in June, Hadoop's only 10 years old. Still young, still kind of in the early days, but yet you guys are a public company. How are you guys looking at the growth strategy? 'Cause the trend is for people to go private. You guys went public. You're out in the open. Certainly your competitor Cloudera is private, and people get that they're kind of behind the curtain, some say they'll go public with a $3 billion valuation, but for the most part you're public. So the question is, how are you guys going to sustain the growth? What is the growth strategy? What's your innovation strategy? >> Well, if you look at the companies that are going private, those are the companies with the older platforms, the older technologies, in a very mature market, that have not been able to innovate those core platforms and have sort of reached their maturity cycle, and I think going private gives them the ability to do that innovation, maybe change their licensing model to subscription, and make some of the transformations they need to make. I have no doubt they'll be very successful doing that. Our situation's much different. The modern IT landscape is re-architecting itself across almost every layer. Look at what's happening in the networking layer going to SDN. Certainly in our space with data, it's moving away from just transactional siloed environments to central data architectures and next generation data platforms, and being able to go all the way out to the edge and bring data under management through the entire movement cycle. We're in a market where we're able to innovate rapidly.
Not only in terms of the architecture of the data platform, being able to bring batch and real-time applications together simultaneously on a central data set and consolidate all of the data, but also then being able to move out and do the data in motion and control an entire life cycle. There's a tremendous amount of innovation that's going to happen there, and these are significant growth markets, both the data in motion and the data at rest market. The data at rest market is a $50 billion marketplace. The data in motion market is a $1 trillion TAM. So when you look at the massive opportunity to create value in these high growth markets, and the ability to innovate and create the next generation data platforms, there's a lot of room for growth and a lot of room for scale. And that's exactly why you should be public when you're going through these large growth markets in a space that's re-platforming, because the CIO wants to understand and have transparent visibility into their platform partners. They want to know how you're doing. Are you executing the plan? Or are you hiding behind a facade of one perception or another? >> Or pivoting or some sort of re-architecture. >> Right, so I think it's very appropriate, in a high growth, high innovation market where the IT platforms are going through a re-architecture, that you actually are public going through that growth phase. Now it forces discipline around how you operationalize the business and how you run the business, but I think that's very healthy for both the tech and the company. >> Michael Dell told me he wanted to go private mainly because he had to do some work essentially behind the curtain. Didn't want the 90-day shot clock, the demands of Wall Street. Other companies do it because they can't stand alone. They don't have a platform, and they're constantly pivoting internally to try and find that groove swing, if you will. You're saying that you guys have your groove swing, and as Dave Vellante always says, always get behind a growing total addressable market, or TAM. You're saying that. Okay, I buy that. So the TAM's growing. What are you guys doing on the platform side that's enabling your customers to re-platform and take advantage of their current data situation as well as the upcoming IoT boom that's being forecasted? >> Well, the first thing is the genesis that we started the company around, which is we transformed Hadoop from being a batch architecture, single data set, single application, to being able to actually manage a central data architecture where all data comes under management, and be able to drive and evolve from batch to interactive and real-time simultaneously over that central data set. And then making sure that it's a truly enterprise viable, enterprise ready platform to manage mission critical workloads at scale. And those are the areas where we're continuing to innovate: around security, around data governance, around life cycle management, the operations and the management consoles. But then we want to expand the markets that we operate in and be world class and best tech on planet Earth for that data at rest and our core Hadoop business. But then, as we see the opportunities to go out to the edge, we truly manage and bring that data under management from the point of origination through its entire life cycle, through the movement process, and create value.
And so we want to continue to extend the reach of where we have data under management and the value we bring to the data through its entire life cycle. And then what's next is, once you have that data through its life cycle, you then move into the modern data applications, and if you look at what we've done with cyber security and some of the offerings that we've engaged in the cyber security space, that was our first entry. And that's proven to be a significant game changer for us and our customers both. >> Cyber security is certainly a big data problem. Also a cloud opportunity, with the horsepower you can get with computing. Give us the update. What are you seeing there from a traction standpoint? What are some of the levels of engagement you're having with enterprises outside of the NSA and the big government stuff, which, I'm sure they're customers, you don't have to disclose that. But for the most part, normal enterprises are constantly planning as if they've already been attacked, and they have different schemes that they're deploying. How are they using your platform for that right now? >> Well, the nature of attacks has changed. And it's evolved from just trying to find the hole in the firewall, or where we get into the gateway, to how we find a way through a back door and just hang out in your network and watch for patterns, and watch for the ability to aggregate relationships, and then pose as a known entity that you can then cascade in. And in the world of cyber security you have to be able to understand those anomalies, and be able to detect those anomalies that sit there and watch for their patterns to change. And as you go through a whole life cycle of data management between a cloud, on-prem, and a hybrid architecture, it opens up many, many opportunities for the bad guys to get in and have very new schemes. And our cyber security models give the ability to really track how those anomalies are attacking, where the patterns are emerging, and to be able to detect that in real-time, and we're seeing the major enterprises shift to these new models, and it's become a very big part of our growth.
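The detection model Bearden sketches, baselining an entity's normal behavior and flagging deviations rather than only guarding the perimeter, reduces to a simple scoring idea. Here is a minimal Python illustration of that pattern; the event format and scoring rule are assumptions made for the sketch, not a description of any specific Hortonworks cyber security offering.

    from collections import Counter, defaultdict

    def build_baseline(events):
        """Count how often each (user, action) pair occurs in past logs."""
        baseline = defaultdict(Counter)
        for user, action in events:
            baseline[user][action] += 1
        return baseline

    def score(baseline, user, action):
        """Score 0.0 (routine) to 1.0 (never seen): rare behavior for this
        user, or an unknown user posing as a known one, scores high."""
        seen = baseline.get(user)
        if not seen:
            return 1.0
        total = sum(seen.values())
        return 1.0 - seen[action] / total

    history = [("alice", "login"), ("alice", "read"), ("alice", "read"),
               ("bob", "login"), ("bob", "export")]
    baseline = build_baseline(history)
    print(score(baseline, "alice", "export"))  # 1.0: pattern change, flag it
    print(score(baseline, "alice", "read"))    # ~0.33: normal pattern

In practice the baseline would be built continuously over the whole data life cycle (edge, in motion, at rest), which is exactly why the attack surface discussion and the platform discussion converge.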
And what it does and the benefit to the end customer is the best tech, the most innovation, and typically operating models that don't generate lock in for 'em, and it gives them optionality to use the tech in the most appropriate architecture in the best economic model without being locked in to a proprietary path that they end up with no optionality. >> So talk about the do-it-yourself mentality. In IT that's always been frowned upon because it's been expensive, time-consuming, yet now with organic open-source and now with cloud, you saw that first generation do-it-yourself, standing up stuff on Amazon, whatnot, is being very viable. It funded shadow IT and a variety of other great things around virtualization, visualization, and so on. Today we're seeing that same pattern swing back to do-it-yourself, is good for organic innovation but causes some complexities. So I want to get your thoughts on this because this seems to be a common thread on our Cube interviews and at Hadoop Summit and at Big Data SV as part of Big Data Week when we were in town. We heard from customers and we heard the following: It's still complex and the total cost of ownership's still too high. That seems to be the common theme for slowing down the rapid acceleration of Hadoop and its ecosystem in general. One, do you agree with that? And two, if so, or what would be than answer to make that go faster? >> Well, I think you're seeing it accelerate. I think you're seeing the complexities dwindle away through both innovation and the tech and the maturing of the tech, as well as just new tool sets and applications that are leveraging it, that take away any complexity that was there. But what I think has been acknowledged is, the value that it creates and that it's worth the do-it-yourself and bringing together the spare techs because the innovation that it brings, the new architectures and the value that it creates as these platforms move into the different use cases that they're enabling. >> So I got to ask you this question. I know you're not going to like it and all the people always say, well John, why does everyone always ask that same question? You guys have a radically different approach than Cloudera. It's the number one question. I get ask them about Cloudera. Cloudera, ask them about Hortonworks. You guys have been battling. They were first. You guys came right fast followers second. With the Yahoo! thing we've been following you guys since day one. Explain the difference between Cloudera, because now a couple things have changed over the past few years. One is, Hadoop wasn't the be all end all for big data. There's been a lot of other things certainly SPARK and some other stuff happening, but yet now enterprises are adopting and coexisting with other stuff. So we've seen Cloudera make some pivots. They certainly got some good technology, but they've had some good right answers and some wrong answers. How've you guys been managing it because you're now public, so we can see all the numbers. We know what the business is doing. But relative to the industry, how are you guys compared to Cloudera? What's the differences? And what are you guys doing differently that makes Hortonworks a better vendor than Cloudera? >> I can't speak to all the Cloudera models and strategies. What I'll tell you is the foundation of our model and strategy is based on. When we founded the company we were as you mentioned, three of four years post Cloudera's founding. 
We felt we needed to evolve Hadoop in terms of the architecture, and we didn't want to adopt the batch-oriented architecture. Instead we took the core Hadoop platform and, through YARN, enabled it to bring a central data architecture together, as well as to generate batch, interactive, and real-time applications, leveraging YARN as the data operating system for Hadoop. The real strategy behind that was to open up the data sets, open up the different types of use cases, and be able to do it on a central data architecture. Then as other processing engines emerged, whether it be Spark, as you brought up, or some of the other ones we see coming down the pipe, we can integrate those engines through YARN onto the central data platform. That opens up the number of opportunities, and that's the core basis. I think that's different from some of the other competitors' technology architectures. >> Looking back now five years, are there moves others have made that you look back on and say, I'm glad we didn't do that, given today's landscape? >> What I'm glad we did do is open up to as many use cases, workloads, and data sets as possible through YARN, and that's proven to be a fundamental differentiation of our model and strategy from anybody in the Hadoop space, certainly. I'm also very happy that we saw the opportunity, about a year ago, that it needed to be more than just about data at rest on Hadoop, and that to truly be the next-generation data architecture you've got to provide the platforms for data at rest and data in motion. Our acquisition of Onyara, to get the NiFi technology so that we're truly capturing the data from the point of origination all the way through the movement cycle until it comes to rest, has now given us the ability to do complete life cycle management for an entire data supply chain. Those decisions have proven to be a very big differentiation between us and any of our competitors, and they've opened up some very big markets. More importantly, they've accelerated the time to value our customers get in the use cases they're enabling through us. >> How would you respond to the scenario people describe, of Hadoop not being the be-all end-all of the industry? Because big data, as Arun Murthy said on theCUBE in Dublin, is bigger than Hadoop now, but Hadoop has become synonymous with big data generally. Where's the leadership coming from, in your mind? Because we're certainly not seeing it on the data warehouse side; those guys still have the old technology, trying to coexist and re-platform for the future. So the question is: does Hortonworks view Hadoop as still leading the big data industry generically, or has it become a sidebar of the big data industry? >> Of Hadoop? Hadoop is the platform, and we believe ground zero, for big data. But we believe it's bigger than that. It's about all data, and being able to manage the entire life cycle of all data, and that starts from the point of origination until it comes to rest, and continues to drive that entire life cycle. Hadoop certainly is the underpinning of the platform for big data, but it's really got to be about all data: data at rest, data in motion. And what you'll see as the next leg in this is the modern data applications that then emerge from that.
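(Aside: a minimal sketch of the YARN-as-data-operating-system idea described above: a Spark job running as just one more YARN application over shared data in HDFS. The HDFS path and app name are invented, and a configured Hadoop/Spark client, with HADOOP_CONF_DIR pointing at the cluster, is assumed.)

```python
# Sketch: one processing engine (Spark) scheduled by YARN alongside other
# batch, interactive, and streaming engines over the same central data set.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .master("yarn")                      # YARN allocates this app's containers
    .appName("interactive-over-shared-hdfs")
    .getOrCreate()
)

# The same HDFS data is visible to every engine YARN schedules, which is
# what makes the "central data architecture" described above possible.
events = spark.read.json("hdfs:///data/events/2016/06/")   # hypothetical path
events.groupBy("event_type").count().show()

spark.stop()
```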
>> How has the ecosystem in the Hadoop industry changed? I would agree, by the way, that the Hadoop players are leading big data in general in terms of innovation, and the ecosystem's been a big part of it. You guys have invested in it, certainly a lot of developers and open-source. How has the ecosystem changed given the current situation from where it was? And where do you see the ecosystem going? With the re-platforming, not everyone can have a platform. There's a ton of guys out there that have tools, that are looking for a home, trying to figure out the chessboard of what's going on with the ecosystem. What are your thoughts on the current situation and how it will evolve, in your view? >> Well, I think one of the strongest statements from day one is that whether it's EDW or BI or relational, none of the traditional platform players say the way you solve your big data problem is with my platform. They, to a company, have a Hadoop platform strategy of some form to bring all of that huge volume of big data under management, and it fits our model very well in that we're not trying to disintermediate those platforms but to extend them, by leveraging HDP as an extension of their platform. What that's done is create pull markets. It's brought Hadoop into the enterprise with a very specific value proposition and use case: bringing more data under management for that tool, that application, or that platform. And then the enterprises have realized there are other opportunities beyond that, new use cases and new data sets they can also gain more leverage from. And that's what's really accelerated-- >> So you see growth in the ecosystem? >> We're actually seeing exponential acceleration of the growth around the ecosystem. Not only in terms of the existing platforms, tools, and applications adopting Hadoop, but now new start-up companies building applications completely from scratch, just for the big data sets. >> Let's talk about start-ups. We were talking before we sat down about the challenges of being an entrepreneur. You mentioned the exponential acceleration of entrepreneurs coming into the ecosystem. That seems to be a safe harbor right now, across the board, and a lot of the big platforms have robust, growing ecosystems. What's the current landscape for start-ups? I know you're an active investor yourself, and you're involved in a lot of different start-up conversations and as an advisor. What's your view of the current landscape right now? Series A, B, C, growth. Stalling. What needs to be in place for these companies to be successful? What are some of the things you're seeing? >> You have to be surgically focused right now on a very particular problem set, maybe even by industry, and understand how to solve the problem, with an absolute correlation to a value proposition and a very well-defined, clear model of how you're going to solve that problem, monetize it, and scale. Or you have to have an incredibly well-financed, deep war chest to go after a platform play aimed at a very large TAM, enabling a re-platforming at one of the levels of the new IT landscape. >> So: laser focus on a stack or vertical, and/or huge funding from Benchmark or other tier-one VCs, to have a differentiator. They have to have some sort of enabler. >> To enable a next-generation platform, something very transformational as a platform, that really evolves the IT stack.
>> What strategies would you advise entrepreneurs, in terms of either white spaces to attack or their orientation to this new data layer? Because if this plays out as we were discussing, you're going to have a horizontal data layer where you need interoperability. You need data in motion, but data-aware, smart data you can integrate into disparate systems, breaking down the siloed concept. How should an entrepreneur develop for or look at that? Is there a certain model you've seen work successfully? Is there a certain open-source group they can jump into? What thoughts would you share? Because this seems to be the toughest nut to crack for entrepreneurs. >> Right now you're seeing a massive shift in the IT data architecture, as one example. You're seeing another massive shift in the network architecture, for example SDN. And you're seeing, I think, a big shift in the kinds of applications, getting away from application functionality to data-enabled applications. I think it's important for the entrepreneur to understand where in that landscape they really want to position, and where they bring intellectual capital that can be monetized. Some of the areas I think you'll see emerge very quickly in the next four, six, eight quarters are the new optimization engines, so things around AI and machine learning. Now that we have all of the data under management through its entire life cycle, how do I optimize where that data is processed, in the cloud or on-premises, or as it's in motion? And there's a massive opportunity, through software-defined networking, to come in and optimize, at the best price point and/or efficiency, where that data is managed and where that data is stored, and to keep reaping the benefits. Just as Amazon's done in retail: if you like this, you should look at that. Just as Yahoo! did, I'll point out, with Hadoop and its advertising models and strategies, being able to put specific content in front of you. Those kinds of opportunities are now available for the processing and storage of data through the entire life cycle, across any architectural strategy. >> Are you seeing data, from a developer's standpoint, being instrumental in their use cases? Meaning, as I'm developing on top of data platforms like Hortonworks' or others, where there's disparate data, what's their interaction? What's their relationship to the data? How are they using it? What do they need to know? Where's the line in terms of their involvement in the data? >> Well, what we're seeing is a very big movement in the developer community: they now want to just let the data tell them where the application service needs to be. Because in the new world of data, they understand what the entity relationships are with their customers and what patterns their customers are following. They can now highly optimize for when their customers are about to cross over from one event to another, what that typically means, and therefore what the next action should be to create the best experience for their customer, a higher level of service, or a better packaged price point at a better margin. They also have the ability to understand in real-time, based on how the data trend is flowing, how well their product is performing, and any obstacles or issues happening with their product. So they don't want application logic where they then run a report three days or three weeks after some event happened.
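(Aside: the "if you like this, you should look at that" retail pattern cited above reduces, in its simplest form, to item-to-item co-occurrence counting. A toy sketch with invented basket data; a production recommender would add normalization, weighting, and scale-out.)

```python
# Count how often pairs of items appear together, then recommend the items
# most frequently co-purchased with a given item.
from collections import defaultdict
from itertools import combinations

baskets = [                                  # invented purchase histories
    ["router", "switch", "cable"],
    ["router", "cable"],
    ["switch", "firewall"],
    ["router", "firewall", "cable"],
]

co_counts = defaultdict(lambda: defaultdict(int))
for basket in baskets:
    for a, b in combinations(set(basket), 2):
        co_counts[a][b] += 1
        co_counts[b][a] += 1

def recommend(item: str, k: int = 2):
    """Items most often bought alongside `item`."""
    ranked = sorted(co_counts[item].items(), key=lambda kv: -kv[1])
    return [other for other, _ in ranked[:k]]

print(recommend("router"))   # e.g., ['cable', 'switch']
```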
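(Aside: "let the data tell them where the application service needs to be" is, mechanically, a rule evaluated in-line with the event stream instead of in a report days later. A minimal sketch; the event feed, device names, and threshold are stand-ins for whatever a real data-in-motion source, such as a NiFi-delivered flow, would supply.)

```python
# Evaluate a condition against each event as it arrives and act immediately,
# rather than batching events for an after-the-fact report.
import time
from typing import Iterator

def event_stream() -> Iterator[dict]:
    """Stand-in for a real feed (e.g., sensor events delivered by NiFi)."""
    for temp in [61, 63, 88, 91, 64]:
        yield {"device": "pump-7", "temp_f": temp, "ts": time.time()}

def act(event: dict) -> None:
    # The prescriptive action fires in-line with the data.
    print(f"throttle {event['device']}: temp {event['temp_f']}F over limit")

OVERHEAT_F = 85
for event in event_stream():
    if event["temp_f"] > OVERHEAT_F:
        act(event)
```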
They're now taking the data, and as data and events are happening, the data is telling them what to do, and they're able to prescriptively act on whatever event or circumstance unfolds from it. >> So they want the data now. They want real-time data embedded in the apps, right at the front-line developer level. >> And they want to optimize what that data is doing as it unfolds through its natural life cycle. >> Let's talk about your customer base and what their expectations are. What questions should a customer, or potential customer, ask their big data vendor as they look at the future? What are the key questions they should ask? >> They should really be comparing: what is your architectural strategy, first and foremost, for managing data? What kinds of data can I manage? What are the limitations in your architecture? What workloads and data sets can't I manage? What latency issues would your architecture create for me? What's the business model associated with us engaging together? How much of my data's life cycle can you enable? How secure are you making my data? What kind of long tail of visibility and chain of custody can I have around governance? What governance standards are you applying to the data, and how much of my own governance standards can you help me automate? How easy is it to operate, and how intuitive is it? How big is your ecosystem? What's your road map and your strategy? What's next in your application stack? >> So enterprises are looking at simplicity; they're looking at total cost of ownership. How is big data innovation going to solve that problem? Because with IoT, again, a lot of new stuff is happening really fast. How do they get their arms around this simplicity question and the total cost of ownership? How should they be thinking about it? >> Well, what the Hadoop platforms, and the data-in-motion platforms, have to do is bring the data under management along with all of the enterprise services they have in their existing data platforms, in the areas of security, management, and data governance, so they can truly run mission-critical workloads at scale with all the same levels of predictability they have in isolation in their existing proprietary platforms. And they have to do it in a way that's very intuitive for their existing platforms to access, very intuitive for their operations teams to manage, and very clean and easy for their existing tools and platform investments to leverage. >> On the industry landscape right now, are you seeing a consolidation? Some are saying we're seeing some consolidation: a lot of companies going private, people buckling down. It's almost as if there's a line: if your company was born before a certain date, you might have the wrong architecture. Certainly enterprises re-platform, I would agree with that. But as a supplier to customers, you're one of the young guys. Hortonworks was born in the cloud, born in open-source. Not everyone else is like that, and certainly Oracle is one of the big guys that keeps doing well, IBM's been around, but they're all changing as well. And a lot of these pre-IPO growth companies are being sold off. What's your take on the current situation, with the bubble, the softening, whatever people are calling it? What are your thoughts?
>> I think you see some companies that got caught up, and if we unpack that to the ones going private now, those are companies that have operated in a very mature market space. They weren't able to innovate as much as they probably would have liked; they're likely locked into a proprietary technology with a non-subscription model of some sort, maybe a perpetual license model. Those are very different models from what the enterprise wants to adopt today, and their ability to innovate and grow, as the market shrank, forced them into very constrained environments. Ultimately they can be great companies with great value propositions, but they need to go through transformations that don't include a 90-day shot clock in the public market. Then there are the companies that were maybe in a B round or a C round, focused on providing a niche offering into one of those mature spaces that's being disintermediated or evolving quickly, because an open-source company has come into the space, or because that section of the IT stack has morphed into a more cloud-centric, SaaS-centric, or open-source-centric environment. They got cut short. Their market went away, or shrank, and they can't innovate their way out of it. They then ultimately have to find a different approach, and they may or may not be able to get the financing to do that. We're in a much different position. >> Certainly the down round. We're seeing down rounds from the high valuations. That's the first sign of trouble. >> That's the first sign. I've gotten three calls this week from companies that are liquidating and have two weeks to find a new home. >> Great, we'll look for some furniture for our new growing SiliconANGLE office. >> I think you'll find some good values. >> You personally, looking back over five years now on this journey: what an incredible run you guys have had, and it's been fun to watch. What's the biggest thing that surprised you, and what's the biggest thing that's happened? If you can talk about those two things, because again, a lot's happened. The market's changed significantly, you guys went public, you've got a big office here. What surprised you, and what do you think was the catalyst of the current trajectory? >> How quickly the market grew. We saw from day one, when we started the company, that this was a billion-dollar opportunity, and that was the bar for starting whatever we did; when we looked at new opportunities, we had to see a billion-dollar opportunity. How quickly we've seen the growth and the formation of the market in general, and then how quickly some of the new opportunities have opened up, in particular around streaming, the Internet of Things, and the new-paradigm data sets. And how quickly the enterprises have seen the ability to create a next-generation data architecture, and the aggressiveness with which they're moving to do that with Hadoop. And then how quickly, in the last year, it swung to also wanting to bring data in motion under management as well. >> If you could talk to a customer right here, right now, and they asked you the following question: Rob, look around the corner five years out. Tell me something that someone else can't see, that you see, that I should be aware of in my business. And why should I go with Hortonworks?
It's going to be a table-stakes requirement to understand, whether it's your customer or your supply chain, from the point they begin to engage and take the first step toward your product or your service, what they're trying to accomplish, and to be able to interact with them from that first inception point. It's also going to be table stakes to be able to monitor your product in real-time and understand how well it's performing, down to the component level, so that you can make corrections and improvements in real-time, on the fly. The other thing you're going to see is that it's going to be a table-stakes requirement to aggregate the data generated over that life cycle and give your customer the ability to monetize the data about them. You as the enterprise will be responsible for the anonymity, confidentiality, and security of that data, but you're going to have to be able to provide the data about your customers and give them the ability, if they choose to monetize the data about them, to do so. >> So if I get that correct, you're basically saying 100% digital. >> Oh, by far. Within the next five years, absolutely. If you do not have a full digital model, in most industries you'll be disintermediated. >> Final question. What's the big bet you're making right now at Hortonworks? Say, we're pinning the company on blank; fill in the blank. >> It's not about big data. It's about all data under management. >> Rob, thanks so much for spending the time here On the Ground. Rob Bearden, CEO of Hortonworks, here for an executive On the Ground. I'm John Furrier for theCUBE. Thanks for watching. (techno music)

Published Date : Jun 24 2016
