Randy Meyer & Alexander Zhuk | HPE Discover 2017 Madrid
>> Announcer: Live from Madrid, Spain, it's The Cube, covering HPE Discover Madrid 2017. Brought to you by Hewlett Packard Enterprise.
>> Good afternoon from Madrid, everybody. Good morning on the East Coast. Good really early morning on the West Coast. This is The Cube, the leader in live tech coverage. We're here, day one, at HPE Discover Madrid 2017. My name is Dave Vellante, and I'm here with my co-host Peter Burris. Randy Meyer is here, the Vice President and General Manager of the Mission Critical business unit at Hewlett Packard Enterprise. And he's joined by Alexander Zhuk, who is the SAP practice lead at Eldorado. Welcome to The Cube, thanks for coming on.
>> Thanks for having us.
>> Thank you.
>> Randy, we were just reminiscing about the number of times you've been on The Cube, consecutive years. It's like the Patriots winning the AFC East, it just keeps happening.
>> Or Cal Ripken would probably be you.
>> Me and Tom Brady.
>> You're the Cal Ripken of The Cube. So give us the update. What's happening in the Mission Critical business unit? What's going on here at Discover?
>> Well, actually, lots of exciting things are going on. In fact, we just finished the main general session keynote, and that was the coming out party for our new Superdome Flex product. We've been in the Mission Critical space for quite some time now, driving the HANA business; we've got 2,500 customers around the world, small and large. And with our acquisition last year of SGI, we got this fabulous technology that not only scales up to the biggest, baddest thing you can imagine, to the point where we're talking about Stephen Hawking using it to explore the universe, but also scales down: four sockets, one terabyte, for lots of customers doing various things.
So I look at that part of the Mission Critical business, and it's just so exciting to take technology and watch it scale in both directions: to the biggest problems out there, whether they're commercial and enterprise, and Alexander will talk about lots of things we're doing in that space, or even high performance computing, so we've expanded into that arena as well. So that's really the big news, Superdome Flex coming out, and really expanding that customer base.
>> Yeah, Superdome Flex, any memory in that baby? (laughing)
>> 32 sockets, 48 terabytes if you want to go that big, and it will get bigger and bigger over time as we get more density. And we really do have customers in the commercial space using that. I've got customers building massive ERP systems and massive data warehouses to address that kind of memory.
>> Alright, let's hear from the customer. Alexander, first of all, tell us about your role, and tell us about Eldorado.
>> I'm responsible for SAP Basis and infrastructure. I work at Eldorado, which is one of the largest consumer electronics retailers in Russia. We have more than 600 shops all over the country, in more than 200 cities and towns, and more than 16,000 employees. We have more than 50,000 stock keeping units, and we process over three and a half million orders.
>> SAP practice lead, so obviously this is a HANA story. Can you take us through your HANA journey? What led to the decision for HANA? Maybe give us the before, during and after. Leading up to the decision to move to HANA, what was life like, and why HANA?
>> We first moved our business warehouse system to HANA back in 2011. At that time we had strong business requirements for quick reporting. Retail is a business which needs very rapid decision making. So after we moved to HANA, our reports were generated 15 times faster.
We got stock replenishment reports nine times faster. We got 50-minute sales reports every hour, instead of every two hours. May I repeat this?
>> No, it makes sense. So, the move to HANA was really precipitated by a need to get more data faster, and in-memory allows you to do that. What about the infrastructure platform underneath? Was it always HP at the time? That was 2011. What's HPE's role in that HANA journey?
>> Initially our business systems ran in Germany, primarily on IBM solutions. But then, according to legal requirements, we had to move to Russia. And here we chose HP solutions as the main platform for our HANA database and traditional databases.
>> Okay, data residency forced you to move this whole solution back to Russia. If I may, Dave, one of the things that we're talking about, and I want to test this with you, Alexander, is that businesses not only have to be able to scale, but we talk about plastic infrastructure, where they have to be able to change their workloads. They have to be able to go up and down, but they also have to be able to add quickly. As you went through the migration process, how were you able to use the technology to introduce new capabilities into the systems to help your business grow even faster?
>> At that time, before migration, we had strong business requirements for our business growth, and we had some forecasts of how HANA would grow. So we presented our needs to our possible partners. For example, our main requirement was the possibility to scale our CRM system up to nine terabytes of memory. At that time, only HP could provide that kind of solution.
>> So, you migrated from a traditional RDBMS environment. Your data warehouse previously was a traditional database, is that right? And then you moved to HANA?
>> Not all systems, but the most critical, the most speed-critical systems: our business warehouse and our CRM system.
>> How hard was that?
So, the EDW and the CRM, how difficult was that migration? Did you have to freeze code? Was it a painful migration?
>> Yes, from the application point of view it was very painful, because we had to change everything. Some of our reports had to be completely changed and reviewed; we had to adapt some ABAP code for the new database. Also, we ran into some troubles at the HANA level, because it was still very early.
>> Early days of HANA. I think it was announced in 2011. Maybe 2012... (laughing)
>> That's one of the things for most customers that we talk to: it's a journey. You're moving from a tried and true environment that you've run for years, but you want the benefits of in-memory, of speed, of massive data that you can use to change your business. But you have to plan that. It was a great point. You have to plan that it's gonna scale up, some things might have to scale out, and at the same time you have to think about the application migration, the data migration, the data residency rules; different countries have different rules on what has to be where. And I think that's one of the things we try to take into account as HPE when we're designing systems. I want to let you partition them. I want to let you scale them up or down depending on the workload that's there. Because you don't just have one, you have BW and CRM, you have development environments, test environments, staging environments. The more we can make those look similar and give you flexibility, the easier that is for customers. And then I think it's incumbent on us also to make sure we support our customers with knowledge, service, and expertise, because it really is a journey. But you're right, 2011 was the Wild West.
>> So, give us the HPE HANA commercial. Everybody always tells us, we're great at HANA, we're best at HANA. What makes HPE best at HANA, different with HANA?
>> What makes us best at HANA? One, we're all in on this. We have a partnership with SAP, and we're designing for the large scale, as you said, that nobody else is building up into this space. Lots of people are building one-terabyte things, okay. But when you really want to get real, when you want to get to 12 terabytes, when you want to get to 24, to 48, we're not only building systems capable of that, we're doing co-engineering and co-innovation work with SAP to make that work and to test it. I put systems on site in Walldorf, Germany, to allow them to go do that. We'll diagnose software issues in the HANA code jointly, and say, here's where you're stressing that, and here's how we can go leverage that. You couple that with our services capability, and our move towards letting you consume HANA in a lot of different ways. There will be some of it that you want on premise, in house, and there will be some things where you say that part might want to be in the Cloud. My answer to all of those things is yes. How do I make it easy to fit your business model, your business requirements, and the way you want to consume things economically? How do I allow you to say yes to that? 2,500 customers, more than half of the installed base of all HANA systems worldwide, reside on Hewlett Packard Enterprise. I think we're doing a pretty good job of enabling customers to say, that's a real choice we can go forward with, not just today, but tomorrow.
>> Alexander, are you doing things in the Cloud? I'm sure you are. What are you doing in the Cloud? Are you doing HANA in the Cloud?
>> We don't have a traditional Cloud, so to say; we have a private Cloud. Due to certain circumstances, we took all the hardware into our own property, and now it's operated by our partner. Between the two companies, they are responsible for all those layers, from the hardware layer, service contracts, and hardware maintenance, to basic operating system support and SAP support.
>> So, if you had to do it all over again, what might you do differently? What advice would you give to other customers going down this journey?
>> My advice is, first, to choose the right team and the right service provider. Because when you go into a solution, some technical overview, some architectural overview, you should get confirmation from the vendor. First, it should be confirmed by HP. It should be confirmed by SAP. Also, there is the financial question of how to fund all of this. And we got all these things from HP and our service partner.
>> Randy, we'll give you the last word.
>> So, one, it's an exciting time. We're watching this explosion of data happening, and I believe we've only just scratched the surface. Today, we're looking at tens of thousands of SKUs for a customer, and looking at the velocity of that going through a retail chain. But every device that we have is gonna have a sensor in it, and it's gonna be connected all the time. It's gonna be generating data to the point where you say, I'm gonna keep it, and I'm gonna use it, because it's gonna let me take real-time action. Some day a retailer will know that the mobile phone of a customer they care about is in their store, and pop up an offer that's exactly meaningful to that customer. That confluence of sensor data, location data, all the things that we will generate over time, and the ability to take action on that in real time, whether it's fixing a part before it fails or creating a marketing offer for the person that's already in the store, that allows them to buy more. It also allows us to search the universe, in search of how we all got here. That's what's happening with data. It is exploding. We are at the very front edge of what I think is gonna be transformative for businesses and organizations everywhere. It is cool. I think the advent of in-memory, data analytics, real time, it's gonna change how we work, and it's gonna change how we play.
Frankly, it's gonna change humankind when we watch some of these researchers doing things on a massive level. It's pretty cool.
>> Yeah, and the key is being able to do that wherever the data lives.
>> Randy: Absolutely.
>> Gentlemen, thanks very much for coming on The Cube.
>> Thank you for having us.
>> You're welcome, great to see you guys again. Alright, keep it right there, everybody. Peter and I will be back with our next guest right after this short break. This is The Cube, live from HPE Discover Madrid 2017. We'll be right back. (upbeat music)
Donna Prlich, Hitachi Vantara | PentahoWorld 2017
>> Announcer: Live from Orlando, Florida, it's The Cube, covering PentahoWorld 2017. Brought to you by Hitachi Vantara.
>> Welcome back to Orlando, everybody. This is PentahoWorld, #pworld17, and this is The Cube, the leader in live tech coverage. My name is Dave Vellante, and I'm here with my co-host, Jim Kobielus. Donna Prlich is here; she's the Chief Product Officer of Pentaho and a many-time Cube guest. Great to see you again. Thanks for coming on.
>> No problem, happy to be here.
>> So, I'm thrilled that you guys decided to re-initiate this event. You took a year off, but we were here in 2015 and learned a lot about Pentaho, and especially about your customers and how they're applying this, sort of, end-to-end data pipeline platform that you guys have developed over a decade plus. But it was right after the acquisition by Hitachi. Let's start there. How has that gone? They brought you in, kind of left you alone for a while, but what's going on? Bring us up to date.
>> Yeah, so it's funny, because it was 2015, it was PentahoWorld, the second one, and we were like, wow, we're part of this new company, which is great. So for the first year we were really just driving against our core: the big data integration and analytics business, capturing a lot of that early big data market. Then, probably in the last six months, came the initiation of Hitachi Vantara, which really is less about Pentaho being merged into a company, and I think Brian covered it in the keynote: we're going to become a brand new entity. Hitachi Vantara is now a new company, focused around software. So, obviously, they acquired us for all that big data orchestration and analytics capability, and now, as part of that bigger organization, we're really at the center of that in terms of moving from edge to outcome, as Brian talked about: how we focus on data, digital transformation, and then achieving the outcome. So that's where we're at right now, which is exciting.
So now we're part of this bigger portfolio of products that we have access to in some ways.
>> Jim: And I should point out that Dave called you the CPO of Pentaho, but in fact you're the CPO of Hitachi Vantara, is that correct?
>> No, I am not. I am the CPO for the Pentaho product line. It's a good point, though, because the Pentaho product brand stays the same, because obviously we have 1,800 customers and a whole bunch of them are all around here. So I cover that product line for Hitachi Vantara.
>> David: And there's a diverse set of products in the portfolio.
>> Yes.
>> So I'm actually not sure it makes sense to have a single Chief Product Officer for Hitachi Vantara, right? Maybe for different divisions it makes sense. But I've got to ask you: before the acquisition, how much were you guys thinking about IoT and industrial IoT? It must have been on your mind; in 2015 it certainly was a discussion point, and GE was pushing all this stuff out there with the ads and things like that. But how much was Pentaho thinking about it, and how has that accelerated since the acquisition?
>> At that time, in my role, I had product marketing, and I think I had just taken on product management. What we were seeing was all of these customers starting to leverage machine-generated data, and we were thinking, well, this is IoT. And I remember going to a couple of our friendly analyst folks, and they were like, yeah, that's IoT. So it was interesting; it was right before we were acquired. We'd always focused on these blueprints, on finding the repeatable patterns, whether it's Customer 360 in big data, and we said, well, there is some kind of emerging pattern here of people leveraging sensor data to get a 360 view of something, whether it's a customer or a ship at sea.
So we started looking at that and going, we should start going after this opportunity. In fact, some of the customers we've had for a long time, like IMS, who spoke today all around connected cars, were among the early ones. And then in the last year we've probably seen more than 100% growth in customers, purely from a Pentaho perspective, leveraging machine-generated data with some other type of data for context to see the outcome. So we were seeing it then, and then when we were acquired it was kind of like, oh, this is cool, now we're part of this bigger company that's going after IoT. So, absolutely, we were looking at it and starting to see those early use cases.
>> Jim: A decade or more ago, Pentaho became very much a pioneer in open-source analytics; you incorporated Weka, the open-source machine learning and data mining workbench, into the core of your platform. Today, here at the conference, you've announced Pentaho 8.0, which from what I can see is an interesting release, because it brings stronger integration with the way the open-source analytics stack has evolved: there's some Spark Streaming integration, there's some Kafka, some Hadoop, and so forth. Can you give us a sense of the main points of 8.0, the differentiators for that release, and how it relates to where Pentaho has been and where you're going as a product group within Hitachi Vantara?
So, with 8.0, it's a perfect spot for us to be in because we look at IOT and the amount of data that's being generated and then need to address streaming data, data that's moving faster. This is a great way for us to pull in a lot of the capabilities needed to go after those types of opportunities and solve those types of challenges. The first one is really all about how can we connect better to streaming data. And as you mentioned, it's Spark Streaming, it's connecting to Kafka streams, it's connecting to the Knox gateway, all things that are about streaming data and then in the scale-up, scale-out kind of, how do we better maximize the processing resources, we announced in 7.1, I think we talked to you guys about it, the Adaptive Execution Layers, the idea that you could choose execution engine you want based on the processing you need. So you can choose the PDI engine, you can choose Spark. Hopefully over time we're going to see other engines emerge. So we made that easier, we added Horton Work Support to that and then this concept of, so that's to scale up, but then when you think about the scale-out, sometimes you want to be able to distribute the processing across your nodes and maybe you run out of capacity in a Pentaho server, you can add nodes now and then you can kind-of get rid of that capacity. So this concept of worker-nodes, and to your point earlier about the Hitachi Portfolio, we use some of the services in the foundry layer that Hitachi's been building as a platform. >> David: As a low balancer, right? >> As part of that, yes. So we could leverage what they had done which if you think about Hitachi, they're really good at storage, and a lot of things Pentaho doesn't have experience in, and infrastructure. So we said, well why are we trying to do this, why don't we see what these guys are doing and we leverage that as part of the Pentaho platform. 
So that's the first time we brought some of their technology into the mix with the Pentaho platform, and I think we're going to see more of that. And then, lastly, around visual data prep: how can we keep building on that experience to make data prep faster and easier?
>> So can I ask you a really Columbo question on those sort of load-balancing capabilities that you just described?
>> That's a nice looking trench coat you're wearing.
>> (laughter) Gimme a little cigar. So, is that the equivalent of a resource negotiator? Do I think of that as sort of your own YARN?
>> Donna: I knew you were going to ask me about that. (laughter)
>> Is that unfair to position it that way?
>> It's a little bit different, conceptually. Right, it's going to help you better manage resources, but if you think about Mesos and some of the capabilities out there that folks are using to do that, that's what we're leveraging. So it's really more about, sometimes I just need more capacity for the Pentaho server, but I don't need it all the time. Not every customer is going to get to the scale where they need that, so it's a really easy way to just keep bringing in as much capacity as you need and have it available.
>> David: I see, so really efficient, sort of low-level kind of stuff.
>> Yes.
>> So, when you talk about distributed load execution, you're pushing more and more of the processing to the edge, and, of course, Brian gave a great talk about edge to outcome. You and I were on a panel with Mark Hall and Ella Hilal about the so-called "power of three," and you did a really good blog post on that: the power of IoT, and big data, and the third is either predictive analytics or machine learning. Can you give us a quick sense for our viewers of what you mean by the power of three, how it relates to pushing more workloads to the edge, and where Hitachi Vantara is going in terms of your roadmap in that direction for customers?
>> Well, it's interesting, because one of the things we did, maybe we have a recording of it, was kind of shrink down that conversation, because it was a great conversation but we covered a lot of ground. Essentially, the power of three is this: we started with big data, so as we could capture more data, we could store it. That gave us the ability to train and tune models much more easily than we could before, because it was always a challenge of, how do I get enough data to make my model more accurate? Then, over time, everybody's become a data scientist with the emergence of R, and it's become a little bit easier for people to take advantage of those kinds of tools, so we saw more of that. And then you think about IoT: IoT is now generating even more data, and, as you said, you're not going to be able to process all of that, bring it all in and store it; it's not really efficient. So that's creating this need: we might need the machine learning there, at the edge. We definitely need it in the data store to keep training and tuning those models. And if you think about IMS, they've captured all that data, and they can use predictive algorithms to do some of the associations between customer information and the sensor data about driving habits, and bring that together. So it's sort of this perfect storm of the amount of data coming in from IoT and the availability of machine learning, and the data is really what's driving all of that. And I think Mark Hall, on our panel, who's a really well-known data mining expert, was like, yeah, it all started because we had enough data to be able to do it.
>> So I want to ask you, again, a product and maybe philosophy question. We've talked on The Cube a lot about the cornucopia of tooling that's out there and people who try to roll their own. The big internet companies and the big banks have the resources to do it, but most everyone else needs companies like you.
When we talk to your customers, they love the fact that there's an integrated data pipeline and that you've made their lives simple. I think in 8.0 I saw Spark probably replacing MapReduce and making life simpler, so you've curated a lot of these tools. But at the same time, you don't own your own cloud, your own database, et cetera. So what's the philosophy of how you future-proof your platform when you know that there are new projects in Apache and new tooling coming out there? What's the secret sauce behind that?
>> Well, the first one is the open-source core, because that just gave us the ability to have APIs, to extend, to build plugins, all of that in a community that does quite a bit of that. In fact, Kafka support started with a customer that built a step initially; we've now brought that into the product and made it part of the platform. Those are the things that, in an early market, a customer can do first. We can see what emerges around that and then go. We will offer it to our customers as a step, but we can also say, okay, now we're ready to productize this. So that's the first thing. And then I think the second one is really around when you see something like Spark emerge. We were all so focused on MapReduce, on how we were going to make it easier, and let's create tools to do that, and we did that. But then it was like, MapReduce is going to go away. Well, there's still a lot of MapReduce out there, we know that. So we can see that MapReduce is going to be here, and I think the numbers are around 50/50. You probably know better than I do where Spark is versus MapReduce. I might be off, but.
>> Jim: If we had George Gilbert, he'd know.
>> (laughs) Maybe ask George, yeah, it's about 50/50. So you can't just abandon that, 'cause there's MapReduce out there. So it was, what are we going to do?
Well, what we did in the Hadoop distro days is we created an adaptive big data layer that said, let's abstract a layer so that when we have to support a new distribution of Hadoop, we don't have to go back to the drawing board. It was the same thing with the execution engines: okay, let's build this adaptive execution layer so that we're prepared to deal with other types of engines. I can build the transformation once and execute it anywhere. So that kind of philosophy of stepping back: if you have that open platform, you can do those kinds of things. You can create those layers to remove all of that complexity, because if you try to one-off and take on each one of those technologies, whether it's Spark or Flink or whatever's coming, as a product, and a product management organization, and a company, that's really difficult. So the community helps a ton on that, too.
>> Donna, when you talk to customers... You gave a great talk on the roadmap today, a glimpse of where you guys are headed, your basic philosophy, your architecture. What are they pushing you for? Where are they trying to take you, or where are you trying to take them? (laughs)
>> (laughs) Hopefully a little bit of both, right? I think it's being able to take advantage of the kinds of technologies, like you mentioned, that are emerging, when they need them. But they also want us to make sure that all of that is really enterprise-ready, that we're making it solid. Because we know from history in big data that a lot of those technologies are early; somebody has to get their knees skinned with the first one. So they're really counting on us to make it solid and quality, and to take care of all of those intricacies of delivering it in a non-open-source way, where you're making it a real commercial product. So I think that's one thing.
Then the second piece that we're seeing a lot more of as part of Hitachi, as we've moved up into the enterprise, is that we also need to think a lot more about monitoring, administration, security, all of the things that go at the base of a pipeline. So that's an area where they want us to focus. The great thing is, as part of Hitachi Vantara now, those aren't areas where we always had a lot of expertise, but Hitachi does, because those are kind of infrastructure-type technologies. So I think the push to do that is really strong, and now we'll actually be able to do more of it, because we've got that access to the portfolio.
>> I don't know if this is a fair question for you, but I'm going to ask it anyway, because you just talked about some of the things Hitachi brings that you can leverage, and it's obvious there are a lot of things Pentaho brings to the Hitachi family. But one of the things that's not talked about a lot is go-to-market. Hitachi Data Systems traditionally doesn't have a lot of expertise at going to market with developers as the first step, which is where your world starts. Has Pentaho been able to bring that cultural aspect to the new entity?
That's where we've had a much better opportunity to get to bigger sales in the enterprise in those global accounts, so I think we'll see more of that. Also there's the whole transformation of Hitachi as well, so I think there'll be a need to have much more of that software experience. And Hitachi's hired two new executives, one on the sales side from SAP, and one who's now my boss, Brad Surak from GE Digital. So I think there's a lot of good, strong leadership around the software side and, obviously, all of the expertise that the folks at Pentaho have. >> That's interesting, that Chief Data Officer role is emerging as a target for you. We were at an event on Tuesday in Boston; there were about 200 Chief Data Officers there, and I think about 25% had a Robotic Process Automation initiative going on. They didn't ask about IOT, just this little piece of IOT. And then, Jim, Data Scientists and that whole world is now your world, okay, great. Donna Prlich, thanks very much for coming to the Cube. Always a pleasure to see you. >> Donna: Yeah, thank you. >> Okay, Dave Velonte for Jim Kobielus. Keep it right there everybody, this is the Cube. We're live from PentahoWorld 2017, hashtag P-World 17. Brought to you by Hitachi Vantara, we'll be right back. (upbeat techno)