Jeff Veis, Actian | BigData NYC 2017
>> Live from Midtown Manhattan, it's the Cube. Covering big data, New York City 2017. Brought to you by SiliconANGLE Media and its ecosystem sponsors. >> Okay welcome back everyone, live here in New York City it's the Cube special annual presentation of BIGDATA NYC. This is our annual event in New York City where we talk to all the fall leaders and experts, CEOs, entrepreneurs and anyone making shaping the agenda with the Cube. In conjunction with STRATA DATA which was formally called STRATA HEDUP. HEDUP world, the Cube's NYC event. BIGDATA I want to see you separate from that when we're here. Which of these, who's the chief marketing acting of Cube alumni. Formerly with HPE, been on many times. Good to see you. >> Good to see you. >> Well you're a marketing genius we've talked before at HPE. You got so much experience in data and analytics, you've seen the swath of spectrum across the board from classic. I call classic enterprise to cutting edge. To now full on cloud, AI, machine learning, IOT. Lot of stuff going on, on premise seems to be hot still. There's so much going on from the large enterprises dealing with how to better use your analytics. At Acting you're heading up to marketing, what's the positioning? What're you doing there? >> Well the shift that we see and what's unique about Acting. Which has just a very differentiated and robust portfolio is the shift to what we refer to as hybrid data. And it's a shift that people aren't talking about, most of the competition here. They have that next best mouse trap, that one thing. So it's either move your database to the cloud or buy this appliance or move to this piece of open source. And it's not that they don't have interesting technologies but I think they're missing the key point. Which is never before have we seen the creation side of data and the consumption of data becoming more diverse, more dynamic. >> And more in demand too, people want both sides. Before we go any deeper I just want you to take a minute to define what is hybrid data actually mean. What does that term mean for the people that want to understand this term deeper. >> Well it's understanding that it's not just the location of it. Of course there's hybrid computing which is premised in cloud. And that's an important part of it. But there's also about where and how is that data created. What time domain is that data going to be consumed and used and that's so important. A lot of analytics, a lot of the guys across the street are kind of thinking about reporting in analytics and that old world way of. We collect lots of data and then we deliver analytics. But increasingly analytics is being used almost in real time or near real time. Because people are doing things with the data in the moment. Then another dimension of it is AdHawk discovery. Where you can have not one or two or three data scientists but dozens if not hundreds of people. All with copies of Tableau and Click attacking and hitting that data. And of course it's not one data source but multiple as they find adjacencies with data. A lot of the data may be outside of the four walls. So when you look at consumption ad creation of data the net net is you need not one solution but a collection of best fits. >> So a hybrid between consumption and creation so that's the two hybrids. I mean hybrid implies, you know little bit of this little bit of that. >> That's the bridge that you need to be able to cross. Which is where do I get that data? And then where's that data going? >> Great so lets get into Acting. 
Give us the update, obviously Acting has got a huge portfolio. We've covered you guys know best. Been on the Cube many times. They've cobbled together all these solutions that can be very affective for customers. Take us through the value proposition that this hybrid data enables with Acting. >> Well if you decompose it from our view point there's three pillars. That you kind of needed since the test of time in one sense. They're critical, which is the ability to manage the data. The ability to connect the data. In the old days we said integrate but now I think basically all apps, all kind of data sources are connected in some sense. Sometimes very temporal. And then finally the analytics. So you need those three pillars and you need to be able to orchestrate across them. And what we have is a collection of solutions that span that. They can do transactional data, they can do graph data and object oriented data. Today we're announcing a new generation of our analytics, specifically on HEDUP. And that's Vector H. Love to be able to talk to that today with the native spark integration. >> Lets get into the news. Hard news here at BIGDATA NYC is you guys announced the latest support for Apachi Spark so with Vector H. So Acting Vector in HEDUP, hence the H. What is it? >> Is Spark glue for hybrid data environments or is it something you layer over different underlying databases? >> Well I think it's fair to say it is becoming the glue. In fact we had a previous technology that did a humans job at doing some of the work. Now that we spark and that community. The thing though is if you wanted to take advantage of spark it was kind of like the old days of HEDUP. Assembly was required and that is increasingly not what organizations are looking for. They want to adopt the technology but they want to use it and get on with their day job. What we have done... >> Machine learning, putting algorithms in place, managing software. >> It could be very exonerate things such as predictive machines learning. Next generation AI. But for everyone of those there's an easy a dozen if not a hundred uses of being able to reach and extract data in their native formats. Be able to grab a Parke file and without any transformation being analyze it. Or being able to talk to an application and being able to interface with that. With being able to do reads and writes with zero penalty. So the asset compliance component of databases is critical and a lot of the traditional HEDUP approaches, pretty much read only vehicles. And that meant they were limited on the use cases they could use it. >> Lets talk about the hard news. What specifically was announced? >> Well we have a technology called Vector. Vector does run, just to establish the baseline here. It runs single node, Windows, Linux, and there's a community edition. So your users can download and use that right now. We have Vector H which was designed for scale out for HEDUP and it takes advantage of Yarn. And allows you to scale out across your HEDUP cluster petabytes if you like. What we've added to that solution is now native spark integration and that native spark integration gives you three key things. Number one, zero penalty for real time updates. We're the only ones to the best of our knowledge that can do that. In other words you can update the data and you will not slow down your analytics performance. Every other HEDUP based analytic tool has to, if you will stop the clock. Fresh out the new data to be able to do updates. 
Because of our architecture and our deep knowledge with transactional processing you don't slow down. That means you can always be assured you'll have fresh data running. The second thing is spark powered direct query access. So we can get at not just Vector formats we have an optimized data format. Which it is the fastest as you'd find in analytic databases but what's so important is you can hit, ORC, Parke and other data file formats through spark and without any transformation. Be it to ingest and analyze an information. The third one and certainly not the least is something that I think you're going to be talking a lot more about. Which is native spark data frame support. Data frames. >> What's the impact of that? >> Well data frames will allow you to be able to talk to spark SQL, spark R based applications. So now that you're not just going to the data you're going to other applications. And that means that you're able to interface directly to the system of record applications that are running. Using this lingua franca of data frames that now has hit a maturity point where you're seeing pretty broad adoption. And by doing native integration with that we've just simplified the ability to connect directly to dozens of enterprise applications and get the information you need. >> Jeff would you be describing what you're offering now. As a form of data, sort of a data virtualization layer that sits in front of all these back end databases. But uses data frames from spark or am I misconstruing. >> Well it's a little less a virtualization layer as maybe a super highway. That we're able to say this analytics tool... You know in the old days it was one of two things. Either you had to do a formal traditional integration and transform that data right so? You had to go from French to German, once it was in German you could read it. Or what you had to do was you had to be able to query and bring in that information. But you had to be able to slow down your performance because that transformation had not occurred. Now what we're able to use is use this park native connector. So you can have the best of both worlds and if you will, it is creating an abstraction layer but it's really for connectivity as opposed to an overall one. What we're not doing is virtualizing the data. That's the key point, there are some people that are pushing data cataloging and cleansing products and abstracting the entire data from you. You're still aware of where the native format is, you're still able to write to it with zero penalty. And that's critical for performance. When you start to build lots of abstraction layers truly traditional ones. You simplify some things but usually you pay a performance penalty. And just to make a point, in the benchmarks we're running compared to Hive and Polor for example. We're used cases against Vector H may take nearly two hours we can do it in less than two minutes. And we've been able to uphold that for over a year. That is because Vector in its core technology has calmer capabilities and, this is a mouthful. But multi level in memory capability. And what does that mean? You ask. >> I was going to ask but keep going. >> I can imagine the performance latency is probably great. I mean you have in memory that everyone kind of wants. >> Well a lot of in memory where it is you used is just held at the RAM level. And it's the ability to breed data in RAM and take advantage of it. And we do that and of course that's a positive but we go down to the cash level. 
We get down much much lower because we would rather that data be in the CPU if at all possible. And with these high performance cores it's quite possible. So we have some tricks that are special and unique to Vector so that we actually optimize the in memory capability. The other last thing we do is you know HEDUP and HTFS is not particularly smart about where it places the data. And the last thing you want is your data rolling across lots of different data nodes. That just kills performance. What we're able to do is think about the core location of the data. Look at the jobs and look at the performance and we're able to squeeze optimization in there. And that's how we're able to get 50, 100 sometimes an excess of 500 times faster than some of the other well known SQL and HEDUP performances. So that combined now with this spark integration this native spark integration. Means people don't have to do the plumbing they can get out of the basement and up to the first floor. They can take care of, advantage of open source innovation yet get what we're claiming is the fastest HEDUP analytics database in HEDUP. >> So, I got to ask you. I mean you've been, and I mentioned on the intro, industry veteran. CMO, chief marketing officer. I mean challenging with Acting cause there's so many things to focus on. How are you attacking the marketing of Acting because you have a portfolio that hybrid data is a good position. I like that how you bring that to the forefront kind of give it a simple positioning. But as you look at Acting's value proposition and engage you customer base and potentially prospective customers. How are you iterating the marketing message the position and engaging with clients? >> Well it's a fair question and it is daunting when you have multiple products. And you got to have a simple compelling message, less is more to get signal above noise today. At least that's how I feel. So we're hanging our hats on hybrid data. And we're going to take it to the moon or go down with the ship on that. But we've been getting some pretty good feedback. >> What's been the hit one feedback on the hybrid data because, I'm a big fan of hybrid cloud but I've been saying it's a methodology it's not a product. On premise cloud is growing and so is public so hybrid hangs together in the cloud thing. So with data, you're bridging two worlds. Consumption and creation. >> Well what's interesting when you say hybrid data. People put their own definitions around it. In an unaided way and they say you know with all the technology and all the trends, that's actually at the end of the day nets out my situation. I do have data that's hybrid data and it's becoming increasingly more hybrid. And god knows the people that are demanding wanting to use it aren't using it or doing it. And the last thing I need, and I'm really convinced of this. Is a lot of people talk about platforms we love to use the P word. Nobody buys a platform because people are trying to address their use cases. But they don't wat to do it in this siloed kind of brick wall way where I address one use case but it won't function elsewhere. What are they looking for is a collection of best fits solutions that can cooperate together. The secret source for us is we have a cloud control plane. All our technologies, whether it's on premise or in the cloud touch that. And it allows us to orchestrate and do things together. Sometimes it's very intimate and sometimes it's broader. >> Or what exactly is the control plane? 
>> It does everything from administration, it can do down to billing and it can also be scheduling transactional performance. Now on one extreme we use it for a back up recovery for our transactional database. And we have a cloud based back up recovery service and it all gets administered through the control plane. So it knows exactly when it's appropriate to backup because it understands that database and it takes care of it. It was relatively simple for us to create. On the more intimate sense we were the first company and it was called Acting X which I know we were talking before. We named our product after X before our friends at Apple did. So I like to think we were pioneers. >> San Francisco had the iPhone don't get confused there remember. >> I got to give credit where credit's due. >> And give it up. >> But what Acting X is, and we announced it back in April. Is it takes the same vector technology I just talked about. So it's material and we combined it with our integrated transactional database. Which has over 10,000 users around the world. And what we did is we dropped in this high performance calmer database for free. I'm going to say that again, for free in our transactional part from system. So everyone one of our customers, soon as they upgraded to now Acting X. Got a rocket ship of a calmer high performance database inside their transactional database. The data is fresh, it moves over into the calmer format. And the reporting takes off. >> Jeff to end this statement I'll give you the last word. A lot of people look at Acting also a product I mentioned earlier. Is it product leadership that's winning, is it the values of the customer? Where is Acting and winning for the folks that aren't yet customers that you'd like to talk to. What is the Acting success formula? What's the differentiation, where is it, where does it jump off the page? Is it the product, is it the delivery? Where's the action. >> Is it innovation? >> Well let me tell you about, I would answer with two phrases. First is our tag line, our tag line is "activate your data". And that resonated with a lot of people. A lot of people have a lot of data and we've been in this big data era where people talked about the size of their data. Literally I have 5 petabytes you have 6 petabytes. I think people realized that kind of missed the entire picture. Sometimes smaller data, god forbid 1 terabyte can be amazingly powerful depending on the use case. So it's obviously more than size what it is about is activating it. Are you actually using that data so it's making a meaningful difference. And you're not putting it in a data pond, puddle or lake to be used someday like you're storing it in an attic. There's a lot of data getting dusty in attics today because it is not being activated. And that would bring me to the, not the tag line but what I think what's driving us and why customers are considering us. They see we are about the technology of the future but we're very much about innovation that actually works. Because of our heritage, because we have companies that understand for over 20 years how to run on data. We get what acid compliance is, we get what transactional systems are. We get that you need to be able to not just read but write data. And we bring the methodology to our innovation and so for people, companies, animals, any form of life. That is interested in. >> So it's the product platform that activates and then the result is how you guys roll with customers. 
>> In the real world today where you can have real concurrency, real enterprise, great performance. Along with the innovation. >> And the hybrid gives them some flexibility that's the new tag line, that's the kind of main. I understand you currently hybrid data means basically flexibility for the customer. >> Yeah it's use the data you need for what you use it for and have the systems work for you. Rather than you work for the systems. >> Okay check out Acting, Jeff Viece friend of the Cube, alumni now. The CMO at Acting, we following your progress so congratulations on the new opportunity. More Cube coverage after this strip break. I'm John Furrier, James Kobielus here inside the Cube in New York City for our BIGDATA NYC event all week. In conjunction with STRATA Data right next door we'll be right back. (tech music)
SUMMARY :
Brought to you by SiliconANGLE Media and anyone making shaping the agenda There's so much going on from the large enterprises is the shift to what we refer to as hybrid data. What does that term mean for the people that the net net is you need not one solution so that's the two hybrids. That's the bridge that you need to be able to cross. Been on the Cube many times. and you need to be able to orchestrate across them. So Acting Vector in HEDUP, hence the H. it is becoming the glue. and being able to interface with that. Lets talk about the hard news. and you will not slow down your analytics performance. and get the information you need. Jeff would you be describing and abstracting the entire data from you. I can imagine the performance latency And the last thing you want is your data rolling across I like that how you bring that to the forefront and it is daunting when you have multiple products. on the hybrid data because, and they say you know with all the technology So I like to think we were pioneers. San Francisco had the iPhone And the reporting takes off. is it the values of the customer? We get that you need to be able to not just read and then the result is how you guys roll with customers. where you can have real concurrency, And the hybrid gives them some flexibility and have the systems work for you. Jeff Viece friend of the Cube, alumni now.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
James Kobielus | PERSON | 0.99+ |
Jeff Viece | PERSON | 0.99+ |
Jeff Veis | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
April | DATE | 0.99+ |
Jeff | PERSON | 0.99+ |
New York City | LOCATION | 0.99+ |
6 petabytes | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
Apple | ORGANIZATION | 0.99+ |
one | QUANTITY | 0.99+ |
HPE | ORGANIZATION | 0.99+ |
5 petabytes | QUANTITY | 0.99+ |
dozens | QUANTITY | 0.99+ |
less than two minutes | QUANTITY | 0.99+ |
50 | QUANTITY | 0.99+ |
Midtown Manhattan | LOCATION | 0.99+ |
First | QUANTITY | 0.99+ |
STRATA Data | ORGANIZATION | 0.99+ |
SiliconANGLE Media | ORGANIZATION | 0.99+ |
1 terabyte | QUANTITY | 0.99+ |
two phrases | QUANTITY | 0.99+ |
first floor | QUANTITY | 0.99+ |
over 20 years | QUANTITY | 0.99+ |
iPhone | COMMERCIAL_ITEM | 0.99+ |
Vector | ORGANIZATION | 0.99+ |
Linux | TITLE | 0.99+ |
both sides | QUANTITY | 0.99+ |
one sense | QUANTITY | 0.99+ |
Acting X | TITLE | 0.99+ |
San Francisco | LOCATION | 0.98+ |
over a year | QUANTITY | 0.98+ |
Windows | TITLE | 0.98+ |
Cube | ORGANIZATION | 0.98+ |
third one | QUANTITY | 0.98+ |
Today | DATE | 0.98+ |
500 times | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
NYC | LOCATION | 0.98+ |
over 10,000 users | QUANTITY | 0.98+ |
three data scientists | QUANTITY | 0.98+ |
two worlds | QUANTITY | 0.98+ |
three pillars | QUANTITY | 0.98+ |
hundreds of people | QUANTITY | 0.98+ |
Tableau | TITLE | 0.97+ |
second thing | QUANTITY | 0.97+ |
STRATA HEDUP | EVENT | 0.97+ |
two hours | QUANTITY | 0.97+ |
both worlds | QUANTITY | 0.96+ |
HEDUP | ORGANIZATION | 0.96+ |
SQL | TITLE | 0.96+ |
two things | QUANTITY | 0.96+ |
a dozen | QUANTITY | 0.95+ |
one data source | QUANTITY | 0.95+ |
first company | QUANTITY | 0.95+ |
one solution | QUANTITY | 0.94+ |
100 | QUANTITY | 0.93+ |
BIGDATA | ORGANIZATION | 0.91+ |
two hybrids | QUANTITY | 0.9+ |
BIGDATA | EVENT | 0.9+ |
STRATA DATA | ORGANIZATION | 0.9+ |
2017 | DATE | 0.89+ |
Vector H | TITLE | 0.88+ |
spark | ORGANIZATION | 0.88+ |
HEDUP | TITLE | 0.88+ |
German | LOCATION | 0.87+ |
one extreme | QUANTITY | 0.86+ |
four walls | QUANTITY | 0.86+ |
dozens of enterprise applications | QUANTITY | 0.85+ |
single | QUANTITY | 0.84+ |
Acting X. | TITLE | 0.82+ |
three key things | QUANTITY | 0.8+ |
Day Two Kickoff | Big Data NYC
(quite music) >> I'll open that while he does that. >> Co-Host: Good, perfect. >> Man: All right, rock and roll. >> This is Robin Matlock, the CMO of VMware, and you're watching theCUBE. >> This is John Siegel of VPA Product Marketing at Dell EMC. You're watching theCUBE. >> This is Matthew Morgan, I'm the chief marketing officer at Druva and you are watching theCUBE. >> Announcer: Live from midtown Manhattan, it's theCUBE. Covering BigData New York City 2017. Brought to you by SiliconANGLE Media and its ecosystem sponsors. (rippling music) >> Hello, everyone, welcome to a special CUBE live presentation here in New York City for theCUBE's coverage of BigData NYC. This is where all the action's happening in the big data world, machine learning, AI, the cloud, all kind of coming together. This is our fifth year doing BigData NYC. We've been covering the Hadoop ecosystem, Hadoop World, since 2010, it's our eighth year really at ground zero for the Hadoop, now the BigData, now the Data Market. We're doing this also in conjunction with Strata Data, which was Strata Hadoop. That's a separate event with O'Reilly Media, we are not part of that, we do our own event, our fifth year doing our own event, we bring in all the thought leaders. We bring all the influencers, meaning the entrepreneurs, the CEOs to get the real story about what's happening in the ecosystem. And of course, we do it with our analyst at Wikibon.com. I'm John Furrier with my cohost, Jim Kobielus, who's the chief analyst for our data piece. Lead analyst Jim, you know the data world's changed. We had commenting yesterday all up on YouTube.com/SiliconAngle. Day one was really set the table. And we kind of get the whiff of what's happening, we can kind of feel the trend, we got a finger on the pulse. Two things going on, two big notable stories is the world's continuing to expand around community and hybrid data and all these cool new data architectures, and the second kind of substory is the O'Reilly show has become basically a marketing. They're making millions of dollars over there. A lot of people were, last night, kind of not happy about that, and what's giving back to the community. So, again, the community theme is still resonating strong. You're starting to see that move into the corporate enterprise, which you're covering. What are you finding out, what did you hear last night, what are you hearing in the hallways? What is kind of the tea leaves that you're reading? What are some of the things you're seeing here? >> Well, all things hybrid. I mean, first of all it's building hybrid applications for hybrid cloud environments and there's various layers to that. So yesterday on theCUBE we had, for example, one layer is hybrid semantic virtualization labels are critically important for bridging workloads and microservices and data across public and private clouds. We had, from AtScale, we had Bruno Aziza and one of his customers discussing what they're doing. I'm hearing a fair amount of this venerable topic of semantic data virtualization become even more important now in the era of hybrid clouds. That's a fair amount of the scuttlebutt in the hallway and atrium talks that I participated in. Also yesterday from BMC we had Basil Faruqi talking about basically talking about automating data pipelines. There are data pipelines in hybrid environments. Very, very important for DevOps, productionizing these hybrid applications for these new multi-cloud environments. That's quite important. Hybrid data platforms of all sorts. 
Yesterday we had from ActIn Jeff Veis discussing their portfolio for on-prem, public cloud, putting the data in various places, and speeding up the queries and so forth. So hybrid data platforms are going increasingly streaming in real time. What I'm getting is that what I'm hearing is more and more of a layering of these hybrid environments is a critical concern for enterprises trying to put all this stuff together, and future-proof it so they can add on all the new stuff. That's coming along like cirrus clouds, without breaking interoperability, and without having to change code. Just plug and play in a massively multi-cloud environment. >> You know, and also I'm critical of a lot of things that are going on. 'Cause to your point, the reason why I'm kind of critical on the O'Reilly show and particularly the hype factor going on in some areas is two kinds of trends I'm seeing with respect to the owners of some of the companies. You have one camp that are kind of groping for solutions, and you'll see that with they're whitewashing new announcements, this is going on here. It's really kind of-- >> Jim: I think it's AI now, by the way. >> And they're AI-washing it, but you can, the tell sign is they're always kind of doing a magic trick of some type of new announcement, something's happening, you got to look underneath that, and say where is the deal for the customers? And you brought this up yesterday with Peter Burris, which is the business side of it is really the conversation now. It's not about the speeds and feeds and the cluster management, it's certainly important, and those solutions are maturing. That came up yesterday. The other thing that you brought up yesterday I thought was notable was the real emphasis on the data science side of it. And it's that it's still not easy or data science to do their job. And this is where you're seeing productivity conversations come up with data science. So, really the emphasis at the end of the day boils down to this. If you don't have any meat on the bone, you don't have a solution that rubber hits the road where you can come in and provide a tangible benefit to a company, an enterprise, then it's probably not going to work out. And we kind of had that tool conversation, you know, as people start to grow. And so as buyers out there, they got to look, and kind of squint through it saying where's the real deal? So that kind of brings up what's next? Who's winning, how do you as an analyst look at the playing field and say, that's good, that's got traction, that's winning, mm not too sure? What's your analysis, how do you tell the winners from the losers, and what's your take on this from the data science lens? >> Well, first of all you can tell the winners when they have an ample number of referenced customers who are doing interesting things. Interesting enough to get a jaded analyst to pay attention. Doing something that changes the fabric of work or life, whatever, clearly. Solution providers who can provide that are, they have all the hallmarks of a winner meaning they're making money, and they're likely to grow and so forth. But also the hallmarks of a winner are those, in many ways, who have a vision and catalyze an ecosystem around that vision of something that could be made, possibly be done before but not quite as efficiently. So you know, for example, now the way what we're seeing now in the whole AI space, deep learning, is, you know, AI means many things. 
The core right now, in terms of the buzzy stuff is deep learning for being able to process real time streams of video, images and so forth. And so, what we're seeing now is that the vendors who appear to be on the verge of being winners are those who use deep learning inside some new innovation that has enough, that appeals to a potential mass market. It's something you put on your, like an app or something you put on your smart phone, or it's something you buy at Walmart, install in your house. You know, the whole notion of clearly Alexa, and all that stuff. Anything that takes chatbot technology, really deep learning powers chatbots, and is able to drive a conversational UI into things that you wouldn't normally expect to talk to you and does it well in a way that people have to have that. Those are the vendors that I'm looking for, in terms of those are the ones that are going to make a ton of money selling to a mass market, and possibly, and very much once they go there, they're building out a revenue stream and a business model that they can conceivably take into other markets, especially business markets. You know, like Amazon, 20-something years ago when they got started in the consumer space as the exemplar of web retailing, who expected them 20 years later to be a powerhouse provider of business cloud services? You know, so we're looking for the Amazons of the world that can take something as silly as a conversational UI inside of a, driven by DL, inside of a consumer appliance and 20 years from now, maybe even sooner, become a business powerhouse. So that's what's new. >> Yeah, the thing that comes up that I want to get your thoughts on is that we've seen data integration become a continuing theme. The other thing about the community play here is you start to see customers align with syndicates or partnerships, and I think it's always been great to have customer traction, but, as you pointed out, as a benchmark. But now you're starting to see the partner equation, because this isn't open, decentralized, distributed internet these days. And it is looking like it's going to form differently than they way it was, than the web days and with mobile and connected devices it IoT and AI. A whole new infrastructure's developing, so you're starting to see people align with partnerships. So I think that's something that's signaling to me that the partnership is amping up. I think the people are partnering more. We've had Hortonworks on with IBM, people are partner, some people take a Switzerland approach where they partner with everyone. You had, WANdisco partners with all the cloud guys, I mean, they have unique ITP. So you have this model where you got to go out, do something, but you can't do it alone. Open source is a key part of this, so obviously that's part of the collaboration. This is a key thing. And then they're going to check off the boxes. Data integration, deep learning is a new way to kind of dig deeper. So the question I have for you is, the impact on developers, 'cause if you can connect the dots between open source, 90% of the software written will be already open source, 10% differentiated, and then the role of how people going to market with the enterprise of a partnership, you can almost connect the dots and saying it's kind of a community approach. So that leaves the question, what is the impact to developers? 
>> Well the impact to developers, first of all, is when you go to a community approach, and like some big players are going more community and partnership-oriented in hot new areas like if you look at some of the recent announcements in chatbots and those technologies, we have sort of a rapprochement between Microsoft and Facebook and so forth, or Microsoft and AWS. The impact for developers is that there's convergence among the companies that might have competed to the death in particular hot new areas, like you know, like I said, chatbot-enabled apps for mobile scenarios. And so it cuts short the platform wars fairly quickly, harmonizes around a common set of APIs for accessing a variety of competing offerings that really overlap functionally in many ways. For developers, it's simplification around a broader ecosystem where it's not so much competition on the underlying open source technologies, it's now competition to see who penetrates the mass market with actually valuable solutions that leverage one or more of those erstwhile competitors into some broader synthesis. You know, for example, the whole ramp up to the future of self-driving vehicles, and it's not clear who's going to dominate there. Will it be the vehicle manufacturers that are equipping their cars with all manner of computerized everything to do whatnot? Or will it be the up-and-comers? Will it be the computer companies like Apple and Microsoft and others who get real deep and invest fairly heavily in self-driving vehicle technology, and become themselves the new generation of automakers in the future? So, what we're getting is that going forward, developers want to see these big industry segments converge fairly rapidly around broader ecosystems, where it's not clear who will be the dominate player in 10 years. The developers don't really care, as long as there is consolidation around a common framework to which they can develop fairly soon. >> And open source is obviously a key role in this, and how is deep learning impacting some of the contributions that are being made, because we're starting to see the competitive advantage in collaboration on the community side is with the contributions from companies. For example, you mentioned TensorFlow multiple times yesterday from Google. I mean, that's a great contribution. If you're a young kind coming into the developer community, I mean, this is not normal. It wasn't like this before. People just weren't donating massive libraries of great stuff already pre-packaged, So all new dynamics emerging. Is that putting pressure on Amazon, is that putting pressure on AWS and others? >> It is. First of all, there is a fair amount of, I wouldn't call it first-mover advantage for TensorFlow, there've been a number of DL toolkits on the market, open source, for the last several years. But they achieved the deepest and broadest adoption most rapidly, and now they are a, TensorFlow is essentially a defacto standard in the way, that we just go back, betraying my age, 30, 40 years ago where you had two companies called SAS and SPSS that quickly established themselves as the go-to statistical modeling tools. And then they got a generation, our generation, of developers, or at least of data scientists, what became known as data scientists, to standardize around you're either going to go with SAS or SPSS if you're going to do data mining. Cut ahead to the 2010s now. The new generation of statistical modelers, it's all things DL and machine learning. 
And so SAS versus SPSS is ages ago, those companies are, those products still exist. But now, what are you going to get hooked on in school? What are you going to get hooked on in high school, for that matter, when you're just hobby-shopping DL? You'll probably get hooked on TensorFlow, 'cause they have the deepest and the broadest open source community where you learn this stuff. You learn the tools of the trade, you adopt that tool, and everybody else in your environment is using that tool, and you got to get up to speed. So the fact is, that broad adoption early on in a hot new area like DL, means tons. It means that essentially TensorFlow is the new Spark, where Spark, you know, once again, Spark just in the past five years came out real fast. And it's been eclipsed, as it were, on the stack of cool by TensorFlow. But it's a deepening stack of open source offerings. So the new generation of developers with data science workbenches, they just assume that there's Spark, and they're going to increasingly assume that there's TensorFlow in there. They're going to increasingly assume that there are the libraries and algorithms and models and so forth that are floating around in the open source space that they can use to bootstrap themselves fairly quickly. >> This is a real issue in the open source community which we talked, when we were in LA for the Open Source Summit, was exactly that. Is that, there are some projects that become fashionable, so for example, a cloud-native foundation, very relevant but also hot, really hot right now. A lot of people are jumping on board the cloud natives bandwagon, and rightfully so. A lot of work to be done there, and a lot of things to harvest from that growth. However, the boring blocking and tackling projects don't get all the fanfare but are still super relevant, so there's a real challenge of how do you nurture these awesome projects that we don't want to become like a nightclub where nobody goes anymore because it's not fashionable. Some of these open source projects are super important and have massive traction, but they're not as sexy, or flair-ish as some of that. >> Dl is not as sexy, or machine learning, for that matter, not as sexy as you would think if you're actually doing it, because the grunt work, John, as we know for any statistical modeling exercise, is data ingestion and preparation and so forth. That's 75% of the challenge for deep learning as well. But also for deep learning and machine learning, training the models that you build is where the rubber meets the road. You can't have a really strongly predictive DL model in terms of face recognition unless you train it against a fair amount of actual face data, whatever it is. And it takes a long time to train these models. That's what you hear constantly. I heard this constantly in the atrium talking-- >> Well that's a data challenge, is you need models that are adapting and you need real time, and I think-- >> Oh, here-- >> This points to the real new way of doing things, it's not yesterday's model. It's constantly evolving. >> Yeah, and that relates to something I read this morning or maybe it was last night, that Microsoft has made a huge investment in AI and deep learning machinery. They're doing amazing things. 
And one of the strategic advantages they have as a large, established solution provider with a search engine, Bing, is that from what I've been, this is something I read, I haven't talked to Microsoft in the last few hours to confirm this, that Bing is a source of training data that they're using for machine learning and I guess deep learning modeling for their own solutions or within their ecosystem. That actually makes a lot of sense. I mean, Google uses YouTube videos heavily in its deep learning for training data. So there's the whole issue of if you're a pipsqueak developer, some, you know, I'm sorry, this sounds patronizing. Some pimply-faced kid in high school who wants to get real deep on TensorFlow and start building and tuning these awesome kickass models to do face recognition, or whatever it might be. Where are you going to get your training data from? Well, there's plenty of open source database, or training databases out there you can use, but it's what everybody's using. So, there's sourcing the training data, there's labeling the training data, that's human-intensive, you need human beings to label it. There was a funny recent episode, or maybe it was a last-season episode of Silicone Valley that was all about machine learning and building and training models. It was the hot dog, not hot dog episode, it was so funny. They bamboozle a class on the show, fictionally. They bamboozle a class of college students to provide training data and to label the training data for this AI algorithm, it was hilarious. But where are you going to get the data? Where are you going to label it? >> Lot more work to do, that's basically what you're getting at. >> Jim: It's DevOps, you know, but it's grunt work. >> Well, we're going to kick off day two here. This is the SiliconeANGLE Media theCUBE, our fifth year doing our own event separate from O'Reilly media but in conjunction with their event in New York City. It's gotten much bigger here in New York City. We call it BigData NYC, that's the hashtag. Follow us on Twitter, I'm John Furrier, Jim Kobielus, we're here all day, we've got Peter Burris joining us later, head of research for Wikibon, and we've got great guests coming up, stay with us, be back with more after this short break. (rippling music)
SUMMARY :
This is Robin Matlock, the CMO of VMware, This is John Siegel of VPA Product Marketing This is Matthew Morgan, I'm the chief marketing officer Brought to you by SiliconANGLE Media What is kind of the tea leaves that you're reading? That's a fair amount of the scuttlebutt I'm kind of critical on the O'Reilly show is really the conversation now. Doing something that changes the fabric So the question I have for you is, the impact on developers, among the companies that might have competed to the death and how is deep learning impacting some of the contributions You learn the tools of the trade, you adopt that tool, and a lot of things to harvest from that growth. That's 75% of the challenge for deep learning as well. This points to the in the last few hours to confirm this, that's basically what you're getting at. This is the SiliconeANGLE Media theCUBE,
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Jim Kobielus | PERSON | 0.99+ |
Robin Matlock | PERSON | 0.99+ |
Apple | ORGANIZATION | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Matthew Morgan | PERSON | 0.99+ |
Basil Faruqi | PERSON | 0.99+ |
Jim | PERSON | 0.99+ |
John Siegel | PERSON | 0.99+ |
O'Reilly Media | ORGANIZATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
John Furrier | PERSON | 0.99+ |
yesterday | DATE | 0.99+ |
90% | QUANTITY | 0.99+ |
Peter Burris | PERSON | 0.99+ |
two companies | QUANTITY | 0.99+ |
New York City | LOCATION | 0.99+ |
SPS | ORGANIZATION | 0.99+ |
SAS | ORGANIZATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
John | PERSON | 0.99+ |
75% | QUANTITY | 0.99+ |
LA | LOCATION | 0.99+ |
Silicone Valley | TITLE | 0.99+ |
ORGANIZATION | 0.99+ | |
10% | QUANTITY | 0.99+ |
Walmart | ORGANIZATION | 0.99+ |
2010s | DATE | 0.99+ |
YouTube | ORGANIZATION | 0.99+ |
SiliconANGLE Media | ORGANIZATION | 0.99+ |
AtScale | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
10 years | QUANTITY | 0.99+ |
WANdisco | ORGANIZATION | 0.99+ |
Jeff Veis | PERSON | 0.99+ |
fifth year | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
Yesterday | DATE | 0.99+ |
Dell EMC | ORGANIZATION | 0.99+ |
VMware | ORGANIZATION | 0.99+ |
eighth year | QUANTITY | 0.99+ |
BigData | ORGANIZATION | 0.99+ |
millions of dollars | QUANTITY | 0.99+ |
Bing | ORGANIZATION | 0.99+ |
BMC | ORGANIZATION | 0.98+ |
Amazons | ORGANIZATION | 0.98+ |
last night | DATE | 0.98+ |
two kinds | QUANTITY | 0.98+ |
Spark | TITLE | 0.98+ |
Hortonworks | ORGANIZATION | 0.98+ |
Day one | QUANTITY | 0.98+ |
20 years later | DATE | 0.98+ |
VPA | ORGANIZATION | 0.98+ |
2010 | DATE | 0.98+ |
ActIn | ORGANIZATION | 0.98+ |
Open Source Summit | EVENT | 0.98+ |
one layer | QUANTITY | 0.98+ |
Druva | ORGANIZATION | 0.97+ |
Alexa | TITLE | 0.97+ |
day two | QUANTITY | 0.97+ |
Bruno Aziza | PERSON | 0.97+ |
SPSS | TITLE | 0.97+ |
Switzerland | LOCATION | 0.97+ |
Two things | QUANTITY | 0.96+ |
NYC | LOCATION | 0.96+ |
Wikibon | ORGANIZATION | 0.96+ |
30 | DATE | 0.95+ |
Wikibon.com | ORGANIZATION | 0.95+ |
SiliconeANGLE Media | ORGANIZATION | 0.95+ |
O'Reilly | ORGANIZATION | 0.95+ |
Emma McGrattan, Actian | Big Data NYC 2017
>> Announcer: Live from midtown Manhattan it's theCUBE covering Big Data New York City 2017. Brought to you by Silicon Angle Media and it's ecosystem sponsors. (upbeat techno music) >> Hello, everyone. Welcome back to theCUBE's exclusive coverage of Big Data NYC for all the access. It's our fifth year doing our own event in New York City. The hashtag is BigDataNYC. Also, in conjunction with Strata Hadoop, used to be called Hadoop World, then Strata Hadoop. Now, it's called Strata Data as they try to grope to where the future's going to be. A lot of hype over there. A lot of action. But here as where we do the intimate interviews and the stories. I'm John Furrier, co-host of theCUBE with Emma McGrattan who is the Senior Vice President of Engineering at Actian. Great to have you on. >> Thanks for having me. >> We love having everyone from Ireland cause the accidents great traction. So, I appreciate you coming on. Have a beer later at the pub. New York's got to lot of great Irish pubs. In all seriousness, we've had Actian on before. Mike Hoskins has been on. We had Jeff Veis on yesterday giving us the marketing angle of hybrid data that you guys are doing. What's under the hood? Because Actian has a lot of technology in their portfolio through how you guys had your growth strategy. But now as the world wants to bring it together you're seeing some real critical trends. >> Emma: Right. >> A lot of application development where data's important. Huge amount of security challenges. People are trying to build out and bring security out of IT. And then you've got all this data covering stuff. That's just on the top line. Then you got IOT. So, people are busy. Their plates are full, and data's the center of it. So, what are you guys doing to bring all of Actian together? >> Emma: That's a great question, perfect question for Actian. So, we have in Actian a number of products in the portfolio. And we believe that best fit product. So, if you're doing something like graph database, it doesn't make sense to put a Vector in Hadoop solution against that. And we've got the right fit technology for what we're doing. And for IOT we've got an embedded database that's as small as 30 megs. So, I've got PowerPoint files that are bigger than this database. You put it in a device, set it, it can run for 20 years. You never have to touch it. But all that data that's being generated typically you're generating it because you want, at some point, to be able to analyze it. And we've gone in the portfolio and Vector in Hadoop has the ability to take that data from the IOT sources and perform very high-speed analytics on that. So, the products that we have within the portfolio are focused around data integration, so pulling data into an environment where you're going to perform analysis or otherwise operationalize that data, data management. A lot of our customers are just doing CRM, ERP applications on our product platforms. And then the analytics is where I get really excited cause there's so much happening in the analytics world in terms of new types of applications being built, in terms of real time requirements, in terms of security and governance that you're talking about in reference in your question. And we've got a unique solution that can address all of those areas in our Vector in Hadoop products. 
So, it's interesting that we see the name Hadoop coming out of the show this week because we see that the focus on Hadoop kind of moving to the background and where the real focus is around the data and not so much-- >> And the business value. >> I hate to sound cliché about outcomes but we were joking on theCUBE yesterday and kind of can't coin the term, "Outcomes as a service." Which is kind of a goof on the whole, "It's about the outcomes." Which is a cliché in tech. But that really is the truth. At the end of the day, you've got a business goal. But the role of data now in real time is key. You're seeing people want real time. Not real time response with old data, they want the real data. So, people are starting to look at data as a really instrumental part of the development process. Similar with DevOps did with infrastructure as code, people want data to be like code. >> Emma: Exactly. >> And that is a hard >> Architectural challenge. So, if you go into your customer base what do you guys tell them? And I was going to the hybrid cloud as the marketing message. But I have challenged, I'm the CXO. I'm the CDO. I'm the CIO. I'm the CFO, COO, whatever the person making these huge, sweeping operational cost decisions. What's the architecture? Cause that's what people are working on right now. And how do you present that? >> Right. So, we recognize the fact that everybody's got a very distributed environment. And part of the message around hybrid data is that data can be generated pretty much any place. You may be generating data in the cloud with your own custom applications. You may be using salesforce.com or NetSuite or whatever. And you've got your on-premise sources of data generation. And what we provide in Actian is the ability to access all of that data in real time, and make it part of the applications that you're deploying that is going to be able to react in real time to changes. You don't want to be acting on yesterday's data because things have happened, things have moved on. So, the importance of real time is not lost on Actian. And all of these solutions that we bring together enable that real time analysis of what's happening in every part of the environment. So, it's hybrid in terms of the type of data that you're working with. It's hybrid in terms of it could be generated in the cloud, in any cloud or on-premise, and being able to pull all of that together an perform real time analysis is incredibly important to generate value from the data. >> Emma, I want to get your thoughts on a comment that I heard last night and then multiple times but the same pattern, they don't get it. "They" could be the venture capitalists as part of the startup. Or the customer has, "Oh, this is the way we do it." There's definitely things that are out there Silo's Legacy things that are-- Still not going away, and we know that. But how do you go into a customer saying look, there's a whole new way of doing things right now. It's not necessarily radical lift and shift or rip and replace. Whatever word you want to use. There's always a word that, you don't like rip and replace, we'll say lift and shift. It's the same thing, right? >> Right. >> You don't want to do a lot of incremental operational wholesale changes. >> Right. >> But you want to do incremental value now. How do you go in and say, "Look, this is the way you want to think about real time in your architecture." 
Because I don't necessarily want to change my operational mindset for the sake of Salesforce and all these different data sources. How do you guys have that conversation? >> So, Actian is unique in that we have a consumer base that goes back 20, 30 years. I personally will be at Actian 25 years in December. So, we've got customers that are running our I'd like to call them Legacy products, but they're products that powering their business every day of the week. And we've also got incredibly innovative product that we're on the bleeding edge. And what we've done in our recent release of Actian X is do combined bleeding edge technology with this more mature and proven technology. So, at Actian X you've got the OLTP database that was Ingres and now got rebranded because it's got new capabilities. And then we've taken the engine from Actian Vector product, and brought that into Actian X so that you can do in real time analysis of your OLTP data. And we act in real time to changes in the data. And it's interesting that you talk about real time because it means different things to different people. So, if you're talking to somebody doing risk analysis, real time is milliseconds. If you're talking to some customers, real time is yesterday's data and that's fine. And what we've done with Actian X is to provide that ability to determine for yourself what real time means to you and to provide a solution that enables you to respond in real time. Now, bringing analytics into what is a more traditional OLTP database, and kind of demonstrating for them some of the new capabilities it enables and opens up other opportunities as far as we can have conversations about maybe backing up that dataset to the cloud. Somebody that may have been risk averse and not looking at cloud all of a sudden is looking at cloud, looking at analytics, and then kind of opening up new opportunities for us. And new opportunities for them cause the data, as they say, is the new oil. >> That's great, great. And you guys have a good customer base to draw from. So, you've got to bring in the shiny new toy but make it work with existing. So, it sounds like you been like an extraction layer that you're building on tech that was very useful and is useful, by decoupling it with new software that adds value. Is it an extraction layer of sorts? >> We don't think of it as an extraction layer but certainly one could think of it that way because it's ... Well, yeah it's-- >> John: It's a product. You basically take the old product and bring new stuff to it. >> Exactly. >> Okay, so I got to ask you about the trend around IOT. Because IOT is one of those things right now that's super hype. And I think it's going to be even more hype. But security has been a big problem and I hear a lot honestly, certainly IOTs on the agenda. Industrial IOT is kind of the low-hanging fruit. They go to that first. But no one wants to be the next Equifax. So, there's a lot of security stuff that causes, plus there's other things going on they got to take care of. How do you guys talk about the security equation where you can come in and put in a reliable workable solution and still make the customer's feel like they're moving the ball down the field. >> So, that's one of the benefits that we have of being in the industry for as long as we have. We have very deep understanding as to what security requirements are. In terms of providing capabilities within the product to do things like control who can access what data and to what degree. Can they update it? 
Can they only read it? Providing the ability to encrypt the data. So, for many usecases the data is so sensitive that you'd always want to encrypt it when it's stored. You'd want any traffic coming in and out of the environment to be encrypted. Being able to audit everything that's happening in the environment, who's issuing what queries and from where and to set alarms or something if somebody attempts to access data that they shouldn't be attempting to access. So, taking all of those capabilities together, we're then able to look at things like GDPR. What are the requirements for securing the data? And we've got all the capabilities within the product. And we've got the credibility cause we've been doing this for 30 years, that we can secure these environments. We can conform to the various standards and mandates that are put in place for data security. So, we have a very strong story to tell-- >> John: What is your position >> John: On GDPR? Obviously, you've got a super important, I call it the Y2K that actually is real cause you have there compliance issues. There's a lot of, obviously, political things going on but this is a real problem, about to move fast as a solution. What are you guys offer there? >> Equifax was a prime example of why GDPR is incredibly important. So, for Actian, and you know, I talked about the capabilities we provide with regard to securing data, and secure access to that data. And when it comes to GDPR, a lot of it is around process. So, what we're doing is guiding our customers and making sure that they have secure processes in place. Putting all of the smarts into the technology, and then having somebody doing an offline backup on a CD that they leave on a seat on the train which has, in the past, been a source of data breeches, is an issue with process and not with technology. So, we're helping with that. And helping in educating-- >> John: Equifax had some >> BPN issues but also, I mean, I haven't reported on this yet also have confirmed that there were state actors involved, foreign actors penetrating in through their franchise relationships. So, in partnering in an open internet these days you need to understand who the partners are even if they're in the network. >> Absolutely. And that's why this whole idea of providing all of the capabilities required for data security including auditing, who's coming in. So, failed attempts to get into the system should be reported as problems. And that's a capability that we have within the database. >> So, you've been at Actian for 25 years, I did not know. That's cool. Good folks over there. I've been to the office a few times. I'm sure you got a good healthy customer base but for the folks that don't know Actian. What's the pitch from your standpoint? Not the marketing pitch hybrid data, I get that. I mean, what should they know about you guys. What is the problem that you saw? What do you bring to the table? From an engineering perspective, how do you differentiate? >> So, my primary focus is around high-speed analytics. And so, Actian enables the fastest SQL access to data, on Hadoop and off of Hadoop, proven through benchmarks. So, high-speed analytics is incredibly important. But for Actian, we're unique in having this 30 year history where we understand what it is to run 24/7, mission critical operational databases. So, Actian's known for products like Ingres, like Psql, and being able to analyze data that's operationalized but then also bringing in new data sources. 
>> So, you've been at Actian for 25 years, I did not know. That's cool. Good folks over there. I've been to the office a few times. I'm sure you've got a good, healthy customer base, but for the folks that don't know Actian, what's the pitch from your standpoint? Not the marketing pitch, hybrid data, I get that. What should they know about you guys? What is the problem that you saw? What do you bring to the table? From an engineering perspective, how do you differentiate? >> So, my primary focus is around high-speed analytics. Actian enables the fastest SQL access to data, on Hadoop and off of Hadoop, proven through benchmarks. So, high-speed analytics is incredibly important. But Actian is also unique in having this 30 year history where we understand what it is to run 24/7, mission critical operational databases. Actian's known for products like Ingres, like Psql, and for being able to analyze data that's operationalized, but then also bringing in new data sources, because that's where things are really going. People want to choose the best application, whether it's in the cloud or on-premise, it doesn't matter; it's the best application for their need. And being able to pull all of that data together, for operational purposes and for analytics purposes, is incredibly important. And Actian enables all of that. >> And that's where the hybrid is really clever and smart, because you've got the consumption side and the creation side, and data integration isn't a project, it's real. It just happens. >> Emma: Right. >> So, you want to enable that. I can see that would be a key benefit. Certainly, as these decentralized apps get more traction, you're going to start to see more immutable transactions happening. Blockchain clearly points to that direction of the market, which is cool. Distributed computing has been around for a while, but now with decentralized we're learning how to behave there. So, we're seeing some apps that will probably be rewritten for that. But again, if architected properly that shouldn't be a problem. >> Right, exactly. And we don't want anybody to have to rewrite apps. What we want to be able to do is provide a platform where the data that you need is available. >> John: Yeah, they're called Dapps, for decentralized apps. It's a whole new wave coming; it's not being talked about here at the show. We are on those trends, obviously, at Silicon Angle and Wikibon, as we're riding the big wave. Okay, Em, I want to ask you a final question. Kind of take your Actian hat off, put your Irish techie hat on, and let's get down and dirty on what the main problem in the industry is right now. If you look back and kind of go to the balcony, if you will, and look at the stage of the industry, obviously Hadoop is now in the background; it's an element of the bigger picture. We were commenting yesterday that these customers have tool sheds full of all these tools they've bought. They bought a hammer that wants to be a lawnmower, right? They have tools, but platforms are being pitched at them. There's a lot of confusion. What's the main problem that the industry's trying to solve? If you look at it, if you can put the dots together, what is the big problem that needs to be solved, that the industry should be solving? >> So, I think data is every place, right? And there's not a whole lot of discipline around corralling that and putting security around it, being able to deploy security policies across data regardless of where it's deployed or sourced. So, I think that's probably the biggest challenge: bringing compute to the data and pulling all of that together. And that's the challenge that we're addressing. >> And so, the unification, if you will, people use that word, unifying all the data. What does that actually mean? You guys call it hybrid data, which means you have some flexibility if you need it. >> Emma: Right. >> All right, cool. Emma, thanks so much for coming on theCUBE. Really appreciate it. Congratulations on your success. And again, you guys got to a good spot. You've got a broad portfolio, and you're bringing it together with hybrid data. Best of luck. We'll keep in touch. Emma McGrattan here, the Senior Vice President of Engineering at Actian, here on theCUBE. More live coverage here in New York City from theCUBE's coverage of Big Data NYC after this short break. (upbeat techno music)