
Search Results for Jacque:

Jacque Istok, Pivotal | Big Data SV 2018


 

>> Announcer: Live from San Jose, it's theCUBE. Presenting Big Data Silicon Valley. Brought to you by SiliconANGLE Media and its ecosystem partners.

>> Welcome back to theCUBE, we are live in San Jose at Forager Eatery, a really cool place down the street from the Strata Data Conference. This is our 10th big data event, we call this Big Data SV, we've done five here and five in New York, and this is day one of our coverage. I'm Lisa Martin with my co-host George Gilbert, and we're joined by a CUBE alumnus, Jacque Istok, the head of data at Pivotal. Welcome back to theCUBE, Jacque.

>> Thank you, it's great to be here.

>> So, just recently you guys announced, Pivotal announced, the GA of your Kubernetes-based Pivotal Container Service, PKS, following the initial beta you released last year. Tell us about that: what's the main idea behind PKS?

>> So, as we were talking about earlier, we've had this opinionated platform as a service for the last couple of years. It's taken off, but it really requires a very specific methodology for deploying microservices and kind of next-gen applications, and what we've seen with the groundswell behind Kubernetes is a very seamless way where we can not just do our opinionated applications, we can do any applications leveraging Kubernetes. In addition, it actually allows us to, again, kind of have an opinionated way to work with stateful data, if you will. And so what you'll see is two of the main things we have going on: again, if you look at both of those products, they're all managed by a thing we call BOSH, and BOSH allows for not just the ease of installation, but also the actual operation of the entire platform. And so what we're seeing is the ability to do day-two operations, not just around the apps, not just the platform, but also the data products that run within it. And you'll see, later this year, as we continue to evolve, our data products running on top of either the PKS product or the PCF product.

>> Quick question before you jump in, George. So you talked about some of the technology benefits and the reasoning for that. From a customer perspective, what are some of the key benefits that you've designed this for, or challenges to solve?

>> I'd say the key benefits: one is convenience and ease of installation and operationalization. Kubernetes seems to have basically become the standard for being able to deploy containers, whether it's on-prem or off-prem, and having an enterprise solution to do that is something that customers are really looking toward. In fact, we had sold about a dozen of these products even before it was GA, there was so much excitement around it. But beyond that, I think we've been really focused on this idea of digital transformation. So Pivotal's whole talk track really is changing how companies build software. And I think the introduction of PKS really takes us to the next level, which is that there's no digital transformation without data, and basically Kubernetes and PKS allow us to implement that and perform for our customers.

>> This is really a facilitator of a company's digital transformation journey.

>> Correct. In a very easy and convenient way. And I think, you know, whether it's our generation or, you know, what's going on in technology generally, everybody is so focused on convenience: push-button, I just want it to work, I don't want to have to dig into the details.
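(As a hedged illustration of that push-button, day-two experience: once a service like PKS has provisioned a cluster, ordinary Kubernetes tooling can be pointed at it. The sketch below uses the official Kubernetes Python client to surface unhealthy pods; it assumes a kubeconfig already targeting the cluster, and it is generic Kubernetes code rather than any PKS-specific API.)

```python
# Minimal day-two check against a Kubernetes cluster (for example, one
# provisioned by PKS). Assumes `pip install kubernetes` and a kubeconfig
# at the default location already pointing at the cluster.
from kubernetes import client, config

def report_unhealthy_pods():
    config.load_kube_config()  # reads ~/.kube/config by default
    v1 = client.CoreV1Api()
    pods = v1.list_pod_for_all_namespaces(watch=False)
    for pod in pods.items:
        phase = pod.status.phase
        if phase not in ("Running", "Succeeded"):
            print(f"{pod.metadata.namespace}/{pod.metadata.name}: {phase}")

if __name__ == "__main__":
    report_unhealthy_pods()
```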
>> So this picks up on a theme we've been pounding on for a couple of years on our side, which is that the infrastructure was too hard to stand up and operate.

>> Male Speaker: Yeah.

>> But now that we're beginning to solve some of those problems, talk about some of the use cases. Let's pick GE, because that's a flagship customer. Start with some of the big business outcomes they're shooting for, and then how some of the Pivotal products map into that.

>> Sure, so there are a lot of use cases. Obviously, GE is both a large organization and an investor inside of Pivotal. Of the different things we can talk about, one that comes to mind out of the gate is that we've got a data suite we sell in addition to PKS and PCF, and within that data suite there are a couple of products, Greenplum being one of them. Greenplum is this open source MPP data platform. Probably one of the most successful implementations within GE is this ability to actually consolidate a bunch of different ERP data and have people be able to query it, again, cheaply, easily, effectively, and there are a lot of different ways you can implement a solution like that. I think what's attractive to these guys specifically around Greenplum is that it leverages, you know, standard ANSI SQL, it scales to petabytes of data, and we have this ability to go on-prem and off-prem. I was actually at the Gartner conference earlier this week, and walking around the show it was somewhat eye-opening to see that, if you look at just that one product, there really wasn't a competitive product being showcased that was open source, multi-cloud, analytical in nature, et cetera. And so, to get back to the GE scenario, what was attractive to them was that everything they're doing on-prem can move to the cloud; whether it's Google, Azure, or Amazon, they can literally run the exact same product and the exact same queries. If you extend it beyond that particular use case, there are other use cases that are more real time, and again, inside of the data suite we've got another product called GemFire, which is an in-memory data grid that allows for this rapid ingest. So you can imagine, whether it's jet engines or wind turbines, data is constantly being generated, and we have the ability to take that data in real time, ingest it, and actually perform analytics on it as it comes in. A loose example would be: if you know the heat tolerance of a wind turbine is between this temperature and this temperature, do something: send an alarm, shut it down, et cetera. If you can do that in real time, you can actually save millions of dollars by not letting that turbine fail.

>> Okay, it sounds here like the GemFire product and the Greenplum DBMS are very complementary. You know, one is speed, and one is sort of throughput. And we've seen, almost like with Hadoop, an overreaction in turning a coherent platform into a bunch of building blocks.

>> Male Speaker: Yes.

>> And with Greenplum you have everything packaged together. Would it be proper to think of Greenplum as combining the best of the data lake and the data warehouse, where you've got the data scientists and data engineers, who would have needed one product, and the business analysts and the BI crowd satisfied with the same product, where they would have needed another?

>> Male Speaker: So, I'd say you're spot on.
What is super interesting to me is, one, I've been doing data warehousing now for, I don't know, 20 years, and for the last five I've kind of felt like the data warehouse, just the term, was equivalent to the mainframe. So I had actually kind of relegated it to "I'm not going to use that term anymore," but with the advent of the cloud and with other products that are out there, we're seeing this resurgence where the data warehouse is cool again. I think part of it is because we had this shift where we had really expensive products doing the classic EDW, and it was too rigid and too expensive, and Hadoop came on and everyone said, hey, this is really easy, this is really cheap, we can store whatever we want, we can do any kind of analytics. And, as I was saying before, the love affair with piecing all of that together is kind of over. It's funny, it was really hard for organizations to successfully stand up a Hadoop platform, and I think the metric we hear is that fifty percent of them fail, right? Part of that, I believe, is because there just aren't enough people to do what needed to be done. So, interestingly enough, because of those failures, because the Hadoop ecosystem didn't quite integrate into the classic enterprise, products like Greenplum are suddenly very popular. I was just looking at our downloads for the open source part of Greenplum, and at this juncture we're literally seeing 1500 distinct customers leveraging the open source product. So I feel like we're on kind of an upswing of getting everybody to understand that you don't have to go to Hadoop to work with structured and unstructured data at scale. You can actually use some of these other products.

>> Female Speaker: Sorry George, quickly: being in the industry for 20 years, we talk about culture a lot, and we say "cultural shift." People started embracing Hadoop, we could dump everything in, and the data lake turned into a swamp. I'm curious, though: maybe it's not a cultural shift, maybe it's a cultural roller coaster; like, mainframes are cool again. Give us your perspective on how you've helped companies like GE, as technology waves come, really design and maybe drive a culture that embraces the velocity of this change.

>> Sure, so one of the things we do a lot is help our customers better leverage technology, and really kind of train on it. We have a couple of different aspects to Pivotal. One of them is our Labs aspect, and effectively that is our ability to teach people how to better build applications, how to better do data science, how to better do data engineering. Now, when we come in, we have an opinionated way to do all those things, and when a customer embraces it, it actually opens up a lot of doors. So we're somewhat technology agnostic, which aids in your question, right? We come in, and we're not trying to push a specific technology, we're trying to push a methodology and an end goal and solution.
And I think, you know, oftentimes that end goal and solution is of course best met by our products. But to your point about the roller coaster, it seems as though, as we have evolved, there is a notion that an organization's data will all come together in a common object store, and then the ability to quickly spin up an analytical or a programmatic interface on that data is super important. That's where we're leaning, and that's where I think this idea of convenience, being able to push-button instantiate a Greenplum cluster or push-button instantiate a GemFire grid so that you can do analytics or take actions on it, is so important.

>> Male Speaker: You said something that sounds really important, which is, it sounded like you were alluding to a single source of truth, and then you spin up whatever compute: you bring the compute to the data. But there's an emerging, still early school of thought which says maybe the single source of truth should be a hub centered around real-time streams.

>> Male Speaker: Sure. Yeah.

>> How does Pivotal play in that role?

>> So, there are a lot of products that can help facilitate that, including our own. I would say there is a broad ecosystem, and if I were going to start an organization today, there are a number of vertical products I would need in order to be successful with data. One of them would be just a standard relational database. And if I pause there for a second: there is definitely a move toward building microservices so that you can glue all those pieces together. Those microservices require smaller, simpler relational databases, or, you know, SQL-type databases on the front end, but they become simpler and simpler, where I think, if I were Oracle or some of the other stalwarts on the relational side, it's not about how many widgets you can put into the database, it's really about its simplicity and performance. From there, you want some kind of message queue or system to take the changes and updates of the data down the line so that, rather than IT providing data to an end user, it's more self-service: being able to subscribe to the data that I care about. And again, going back to simplicity, me as an end user being able to take control of my destiny and use whatever product or technology makes the most sense to me. And if I dovetail on the side of that, we've focused so much this year on convenience and flexibility that all of the innovations we're doing in the Amazon marketplace on Greenplum are actually leading us to the same types of innovations in data deployments on top of Kubernetes. Two of them come to mind. I was in front of a group last week presenting some of the things we had done, and one of them was self-healing of Greenplum. It's often been said that these big analytical solutions are really hard to operate; through our innovations, if a segment goes down, or a host goes down, or there are network problems, the system will actually heal itself, so all of a sudden the operational needs become quite a bit less.
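(To make that subscription idea concrete: what Istok describes is essentially publish/subscribe over data-change streams. Here is a minimal, hedged sketch using the pika client for RabbitMQ, the message broker in Pivotal's portfolio at the time; the broker host, exchange, and routing key are invented for illustration, and any comparable queue or streaming system would fit the same pattern.)

```python
# Sketch: an analyst subscribing to just the data changes they care about.
# Assumes `pip install pika` and a reachable RabbitMQ broker; the exchange,
# routing key, and host below are invented for illustration.
import json
import pika

connection = pika.BlockingConnection(
    pika.ConnectionParameters(host="rabbit.example.com"))
channel = connection.channel()

# Subscribe only to EMEA order changes via a topic routing key
channel.exchange_declare(exchange="data.changes", exchange_type="topic")
result = channel.queue_declare(queue="", exclusive=True)
channel.queue_bind(exchange="data.changes",
                   queue=result.method.queue,
                   routing_key="orders.emea.*")

def on_change(ch, method, properties, body):
    print("update:", json.loads(body))

channel.basic_consume(queue=result.method.queue,
                      on_message_callback=on_change,
                      auto_ack=True)
channel.start_consuming()
```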
In addition, we've also created this automatic snapshotting capability; in our last benchmark, I think we snapshotted about a petabyte of data in less than three minutes. So suddenly you've got this operational stalwart, almost a database as a service without really being a service, really just this living, breathing thing. And that dovetails back to how we're trying to make all of our products perform in a way that customers can just use them without worrying about the nuts and bolts.

>> Female Speaker: So, last question, we've got about 30 seconds left. You mentioned a lot of technologies, but you also mentioned methodology. Is that approach from Pivotal one of the defining competitive advantages that you deliver to the market?

>> Male Speaker: It is 100 percent one of our defining things. Our methodology is what is enabling our customers to be successful. And it actually allows me to say that we've partnered with PostgresConf, and Greenplum Summit is next month in April; the theme of it is hashtag data tells the story. From our standpoint, Greenplum is continuing to take off, GemFire is continuing to take off, Kubernetes is continuing to take off, PCF is continuing to take off, but we believe that digital transformation doesn't happen without data. We think data tells a story. I'm here to encourage everyone to come to Greenplum Summit, and I'm also here to encourage everyone to share their stories with us on Twitter, hashtag #DataTellsTheStory, so that we can continue to broaden this ecosystem.

>> Female Speaker: Hashtag data tells a story. Jacque, thanks so much for carving out some time this week to come back to theCUBE and share what's new and differentiating at Pivotal.

>> Thank you.

>> We want to thank you for watching theCUBE. I'm Lisa Martin with my co-host George Gilbert. We are live at Big Data SV, our tenth big data event. Come down here and see us, we're in San Jose at Forager Eatery. We've got a great party tonight, and tomorrow morning at eight a.m. we've got a breakfast briefing you won't want to miss. Stick around, we'll be back with our next guest after a short break.

Published Date: Mar 7, 2018


Jacque Istok, Pivotal | BigData NYC 2017


 

>> Announcer: Live from midtown Manhattan, it's theCUBE, covering Big Data New York City 2017. Brought to you by SiliconANGLE Media and its ecosystem sponsors.

>> Welcome back everyone, we're here live in New York City for the week, three days of wall-to-wall coverage of Big Data NYC. It's big data week here in conjunction with Strata Data, the former Strata + Hadoop World, an event running right around the corner. This is theCUBE, I'm John Furrier with my cohost, Peter Burris, and our next guest is Jacque Istok, the head of data at Pivotal. Welcome to theCUBE, good to see you again.

>> Likewise.

>> You guys had big news we covered at VMware; obviously the Kubernetes craze is fantastic, and you're starting to see cloud native platforms front and center even in some of these operational worlds, like in cloud and data. You guys have been here a while with Greenplum, and Pivotal's been adding more to the data suite, so you guys are a player in this ecosystem.

>> Correct.

>> As it grows to be much more developer-centric and enterprise-centric and AI-centric, what's the update?

>> I'd like to talk about a couple of things, just three quick things here, one focused primarily on simplicity. First and foremost, as you said, there are a lot of things going on on the Cloud Foundry side, a lot of things that we're doing with Kubernetes, et cetera, super exciting. I will say Tony Baer has written a nice piece about Greenplum on ZDNet, essentially calling Greenplum the best kept secret in the analytic database world. Why I think that's important is that what isn't really well known is that over the period of Pivotal's history, the last four and a half years, we focused really heavily on the Cloud Foundry side, on DevOps, on getting users to actually be able to publish code. What we haven't talked about as much is what we're doing on the data side, and I find it very interesting to repeatedly tell analysts and customers that the Greenplum business has been, and continues to be, a profitable business unit within Pivotal. So as we're growing on the Cloud Foundry side, we're continuing to grow a business that many of the organizations I see here at Strata are still looking to get to: that ever forgotten profitability zone.

>> There's a legacy around Greenplum. I'm not going to say they pivoted, pun intended, Pivotal. There's been added stuff around Greenplum; Greenplum might get lost in the messaging because it's now one of many ingredients, right?

>> It's true, and when we formed Pivotal, I think there were some 34 different SKUs that we have now focused in on over the last two years or so. What's super exciting is, again, over that time period, one of the things we took to heart within the Greenplum side is this idea of extreme agile. As you guys know, Pivotal Labs, being a core part of the Pivotal mission, helps our customers figure out how to actually build software. We are finally drinking our own champagne, and over the last year and a half of Greenplum R&D, we're shipping code, a complete data platform, on a cadence of about four to five weeks, which, again, is a little bit unheard of in the industry, being able to move at that pace. We worked through the backlog, and, what is also super exciting, and I'm glad that you guys are able to help me tell the world: we released version five last week.
Version five is actually the only parallel open source data platform that has native ANSI-compliant SQL, and I feel a little bit like I've rewound the clock 15 years in that I have to actually bring up ANSI compliance. But I think that in a lot of ways, there are SQL alternatives out there in the world that are very much not ANSI compliant, and that hurts.

>> It's a nuance, but it's table stakes in the enterprise. ANSI compliance is just,

>> There's a reason you want to be ANSI compliant, because there's a whole swath of analytic applications, mainly in the data warehouse world, that were built using ANSI-compliant SQL. So why do this with version five? I presume it's got to have something to do with wanting to start capturing some of those applications and helping customers modernize.

>> That is correct. I think the SQL piece is one part of the data platform, of really a modern data platform. The other parts are, again, becoming table stakes. Being able to do text analytics (we've baked Apache Solr into Greenplum), being able to do graph analytics or spatial analytics, anything from classifications to regressions: all of that actually becomes table stakes. And we feel that enterprises have suffered a little bit over the last five or six years. They've had this promise of a new platform that they can leverage for doing interesting new things, machine learning, AI, et cetera, but the existing stuff they were trying to do has been super, super hard. What we're trying to do is bridge those together and provide both in the same platform, out of the gate, so that customers can actually use it immediately. And I think one of the things we've seen is that there are about 1000-to-one SQL-experienced individuals within the enterprise versus, say, Hadoop-experienced individuals. The other thing that I think is actually super important, and almost bigger than everything else I've talked about, is that a lot of the old-school Postgres derivatives among MPP databases forked their databases at some point in Postgres's history, for a variety of reasons from licensing to when they started. Greenplum's no different: we forked right around 8.2. With this last release, version five, we've actually up-leveled the Postgres base within Greenplum to 8.3. Now, in and of itself, that doesn't sound,

>> What does that mean?

>> We are now making a 100% commitment both to open source and to the Postgres community. I think if you look at Postgres today, in its latest versions, it is a full-fledged, mission-critical database that can be used anywhere. What we feel is that if we can bring our core engineering developments around parallelism and analytics and combine that with Postgres itself, then we don't have to implement all of the low-level database things that a lot of our competitors have to do. What's unique about it is, one, Greenplum continues to be open source, which, again, most of our competitors are not. Two, if you look at what they're primarily doing, nobody's got that level of commitment to the Postgres community, which means all of their resources are going to be stuck building core database technology, even building that ANSI SQL compliance in, which we'll get "for free." That will let us focus on things like machine learning and artificial intelligence.

>> Just give us a quick second and tell us about the relevance of Postgres, because of its success. First of all, it's massive, it's everywhere, and it's not going anywhere.
Just quickly, for the audience watching, what's the relevance of it?

>> Sure, like you said, it is everywhere. It is the most full-featured actual database in the open source community. Arguably MySQL has "more" market share, but the MySQL projects that generally leverage it are not used for mission-critical enterprise applications. Being able to have parity allows us not only to have that database technology baked into Greenplum, but it also gives us all of the community stuff with it: everything from being able to leverage the most recent ODBC and JDBC libraries, to integrations into everything from the PostGIS extension for geospatial to being able to connect to other types of data sources, et cetera.

>> It's a big community, which shows that it's successful, but again,

>> And it doesn't come in a red box.

>> It does not come in a red box, that is correct.

>> Which is not a bad thing. Look, Postgres as a technology was developed a long time ago, largely in response to thinking about how analytics and transactional, or analytics and operating, applications might actually come together, and we're now living in a world where the hardware and a lot of practices, et cetera, are beginning to find ways where this may start to happen. With Greenplum being MPP-based on top of Postgres, by going to this you're able to stay more modern, more up to date on all the new technology that's coming together to support these richer, more complex classes of applications.

>> You're spot on. I suppose I would argue that Postgres, I feel, came up as a response to Oracle in the past, of "we need an open source alternative to Oracle," but other than that, 100% correct.

>> There was always a difference between Postgres and MySQL. MySQL always was, okay, that's that, let's do that open source. Postgres, coming out of Berkeley and coming out of some other places, always had a slightly different notion of the types of problems it was going to take on.

>> 100% correct, 100%. But to your question before, about what this all means to customers: I think the one thing that version five really gives us the confidence to say, and a lot of times I hate lobbing one in when the ball's out like this, is that we welcome and embrace with open arms any Teradata customers out there that are looking to save millions, if not tens of millions, of dollars on a modern platform that can run not only on premises, not only on bare metal, but virtually and off premises. We're truly the only open source MPP data platform that allows you to build analytics and move those analytics from Amazon to Azure and back on-prem.

>> Talk about this, the Teradata thing, for a second; I want to get down and double-click on that. Customers don't want to change code, so what specifically are you guys offering Teradata customers?

>> With the release of version five, with a lot of the development that we've done and some of the partnering that we've done, we are now able to take your Teradata applications without changing a line of code: you load the data into the Greenplum platform, point those applications directly at Greenplum, and run them unchanged. I think in the past, the reticence to move to any other platform was really the amount of time it would take to redevelop all of the stuff that you had. We offer an ability to go from an immediate ROI to a platform that, again, bridges that gap and allows you to really be modern.
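(A brief illustrative aside: the practical upshot of the Postgres parity described above is that standard Postgres drivers and tools can connect to Greenplum unchanged. Below is a minimal, hedged sketch using Python's psycopg2; the host, credentials, and table are placeholders rather than a real deployment.)

```python
# Minimal sketch: querying Greenplum with a standard Postgres driver.
# Assumes `pip install psycopg2-binary`; connection values are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="gpmaster.example.com",  # hypothetical Greenplum master host
    port=5432,
    dbname="analytics",
    user="gpadmin",
    password="secret",
)
with conn, conn.cursor() as cur:
    # Plain ANSI SQL, dispatched in parallel across Greenplum segments
    cur.execute("""
        SELECT region, COUNT(*) AS orders
        FROM sales
        GROUP BY region
        ORDER BY orders DESC
    """)
    for region, orders in cur.fetchall():
        print(region, orders)
conn.close()
```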
>> Peter, I want to talk to you about the importance of what we just heard, because you've been studying the true private cloud report: true private cloud being on-premises infrastructure run with a cloud operating model, automating away undifferentiated labor and shifting it to differentiated labor. But this brings up what customers want in hybrid cloud, ultimately having public cloud and private cloud with hybrid sitting between them. They don't want to change their code base; this is a huge deal.

>> Obviously, a couple of things to go along with what Jacque said. The first thing is that you're right: people want the data to run where the data naturally needs to run, or should run. That's the big argument about public versus hybrid versus what we call true private cloud: the idea that the workload needs to be located where the data naturally should be located, because of the physical, legal, regulatory, and intellectual property attributes of the data. Being able to do that is really, really important. The other thing Jacque said that goes right into this question, John, is that in too many domains in this analytics world, which is fundamentally predicated on the idea of breaking data out of applications so that you can use it in new and novel and more value-creating ways, the data gets locked up in a data warehouse. What's valuable in a data warehouse is not the hardware, it's the data. By providing the facility for pointing an application at a couple of different data sources, including one that's more modern, or that takes advantage of more modern technology and can be considerably cheaper, the shop can elevate the story about the asset, and the asset here is the data and the applications that run against it, not the hardware and the system where the data's stored and located. One of the biggest challenges, and we talked earlier with a couple of other guests about this, just to go on for a second, is the fact that the industry, your average person, still doesn't understand how to value data, how to establish a data asset, and one of the reasons is that it's so constantly co-mingled with the underlying hardware.

>> And actually I'd go even further. I think the advent of some of these cloud data warehouses forgets that notion of being able to run in different places, and provides one of the things that customers are really looking for, which is simplicity. The ability to spin up a quick MPP SQL system within, say, Amazon, for example: almost without a doubt, a lot of the business users I speak to are willing to sacrifice capabilities within the platform for the simplicity of getting up and going. One of the things we really focused on in V5 is being able to give that same turnkey feel, and so Greenplum exists within the Amazon marketplace, within the Azure marketplace, and on Google later this quarter. And then, in addition to the simplicity, it has all of the functionality that is missing in those platforms: again, all the analytics, all the ability to reach out and federate queries against different types of data. I think it's exciting as we continue to progress in our releases: Greenplum has, for a number of years, had this ability to seamlessly query HDFS, like a lot of the competitors, but HDFS isn't going away, and neither is a generic object store like S3.
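(An illustrative aside on that federation point: in Greenplum, querying HDFS or S3 in place is surfaced through external tables, whose definitions point at the remote data so the segments scan it directly. The sketch below issues that DDL from Python; the columns, bucket, and config path are invented, and the exact LOCATION syntax varies by Greenplum version and protocol, so treat it as the shape of the feature rather than copy-paste DDL.)

```python
# Sketch: defining and querying a Greenplum external table over S3.
# Column list, bucket, and config path are invented; the exact s3/PXF
# LOCATION syntax depends on the Greenplum version, so check the docs.
import psycopg2

DDL = """
CREATE READABLE EXTERNAL TABLE events_ext (
    event_id   bigint,
    event_ts   timestamp,
    payload    text
)
LOCATION ('s3://s3.amazonaws.com/acme-telemetry/events/ config=/home/gpadmin/s3.conf')
FORMAT 'CSV' (HEADER)
"""

conn = psycopg2.connect(host="gpmaster.example.com",
                        dbname="analytics", user="gpadmin")
with conn, conn.cursor() as cur:
    cur.execute(DDL)
    # The external data is scanned in place, in parallel, by the segments
    cur.execute("SELECT COUNT(*) FROM events_ext")
    print(cur.fetchone()[0])
conn.close()
```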
But we continue to extend that to things like Spark, for example. So now you have the ability to actually house your data within a data platform and seamlessly integrate with Spark back and forth: if you want to use Spark, use Spark, but somewhere that data needs to be materialized so that other applications can leverage it as well.

>> But even then, people have been saying, well, if you want to put it on this disk, then put it on this disk. The question of Spark versus another database manager is a higher-level conversation than many of the shops investing millions and millions and millions of dollars in their analytic application portfolios are having, and all you're trying to do, as I interpret it, is to say: look, the value in the portfolio is the applications and the data. It's not the underlying elements. There's a whole bunch of new elements we can use; you can put it in the cloud, you can put it on premises if that's where the data belongs. Use some of these new and evolving technologies, but stay focused on how the data and the applications continue to remain valuable to the business over time, not on the traditional hardware assets.

>> Correct, and I'll again leverage a notion that we get from Labs, which is this idea of user-centric design. Everything that we've been putting into the Greenplum database is built around, ideally, the four primary users of our system: not just the analysts and not just the data scientists, but also the operators and the IT folks. And that is where I'd say the last tenet of where we're going is really this idea of coopetition. As the Pivotal Greenplum guy that's been around for 10-plus years, I would tell you very straight up that we are, again, an open source MPP data platform that can rival any other platform out there; whether it's Teradata, whether it's Hadoop, we can beat that platform.

>> Why should customers call you up? Why should they call you? There's all this other stuff out there: you've got legacy, you've got Teradata, might have other things; people are knocking at my door, they're getting pounded with sales messages, "buy me, I'm better than the other guy." Why Pivotal data?

>> The first thing I would say is, the latest reviews from Gartner, for example; well, actually, let me rewind. I will easily argue that Teradata has been the data warehouse platform for the last 30 years that everyone has tried to emulate. I'd even argue that when Hadoop came on the scene eight years ago, what it did was change the dynamics, and what it's doing now is actually trying to emulate the Teradata success through things like SQL on top of Hadoop. What that has basically gotten us to is: we're looking for a Teradata replacement at Hadoop-like prices, and that's what Greenplum has to offer in spades. Now, if you extend that just a little bit, I still recognize that not everybody's going to call us; there are still 200 other vendors out there selling a similar product or similar kinds of stories. What I would tell you in response to those folks is that Greenplum has been around in production for the last 10-plus years; we're a proven technology for solving problems, and many of those are not. We work very well in this cooperative spirit: Greenplum can be the end-all-be-all, but I recognize it's not going to be the end-all-be-all, so this is why we have to work within the ecosystem.

>> You have to; open source is dominating.
At the Linux event, we just covered Open Source Summit: 90% of software written will be open source libraries; the 10% on top is where the value's being added.

>> For sure. If you were to start a new startup right now, would you go with a commercial product?

>> No, just a Postgres database is good. All right, final question to end the segment. This big data space that's now being called just "data": certainly Strata Hadoop is now Strata Data, just trying to keep that show going longer. But you've got Microsoft Azure making a lot of waves right now with Microsoft Ignite, so cloud is in play here, and data's changed. So the question is: how has this industry changed over the past eight years? You go back to 2010; I saw Greenplum coming up prior to even getting bought out, and they were kicking ass, and the same product has evolved. Where has the space gone? What's happened? How would you summarize it to someone who's walking in for the first year? Like, hey, back in the old days we used to walk to school in the snow with no shoes on, both ways; now it's like, get off my lawn, you young developers. Seriously, what is the evolution of that? How would you explain it?

>> Again, I would start with: Teradata started the industry, by far, and then folks like Netezza and Greenplum came around to really give a lower-cost alternative. Hadoop came on the scene some eight years ago; and, something I pride myself on from being at Greenplum this long, Greenplum implemented the MapReduce paradigm as Hadoop was starting to build. And as it continued to build, we focused on building our own distribution and SQL on Hadoop. I think what we're getting down to is the brass tacks of: the business is tired of technological science experiments, and they just want to get stuff done.

>> And a cost of ownership that's manageable.

>> And sustainable.

>> And sustainable, and not in a spot where they're going to be locked into a single vendor; hence the open source.

>> The ones that are winning today employed what strategy, and what strategy didn't end up working out? If you go back and say, the people who took this path failed, the people who took this approach won: what's the answer there?

>> Clearly, anybody who was an appliance has long since drifted. I'd also say Greenplum's in this unique position where,

>> An appliance too, though.

>> Well, pseudo-appliance, yes, I still have to respond to that: we were always software.

>> You pivoted, luckily.

>> But putting that aside, the hardware vendors have gone away, and all of the software competitors we had have either been sunset, sold off, or forgotten, and so here Greenplum sits, the sole stalwart that's been around for the long haul. We are now at a spot where we have no competition other than the forgotten, really legacy guys like Teradata. People are longing to get off of legacy and onto something modern; the trick will be whether that modern is some of these new and upcoming players and technologies, or whether it really focuses on solving problems.

>> What was the winning strategy?
Stick to your knitting, stick to what you know, or was it more of,

>> For us it was twofold. One was continuing to service our customers and make them successful; that was how we built a profitable data platform business. And the other was to double down on the strategies that seemed to be interesting to organizations, which were cloud, open source, and analytics. And like you said, I talked to one of the folks over at the Air Force, and he was mentioning how, to him, data is actually more important than fuel: being able to understand where the airplanes are, where the fuel is, where the people are, where the missiles are, et cetera, is actually more important than the fuel itself. Data is the thing that powers everything.

>> Data's the currency of everything now. Great. Jacque, thanks so much for coming on theCUBE. Pivotal Data Platform, the Data Suite, Greenplum now with all these other additions, that's great, congratulations. Stay on the path of helping customers; you can't lose.

>> Exactly.

>> theCUBE here, helping you figure out the big data noise. We're obviously at the Big Data NYC event, our annual CUBE Wikibon event, in conjunction with Strata Data across the street. More live coverage here for three days in New York City. I'm John Furrier with Peter Burris, we'll be back after this short break. (electronic music)

Published Date: Sep 27, 2017


Kirk Bresniker, HPE | SuperComputing 22


 

>> Welcome back, everyone, live here at Supercomputing 22 in Dallas, Texas. I'm John Furrier, host of theCUBE, here with Paul Gillin, editor of SiliconANGLE, getting all the stories, bringing it to you live; Supercomputing TV is theCUBE right now. And bringing all the action: Kirk Bresniker, chief architect of Hewlett Packard Labs and an HPE CUBE alumnus, here to talk about supercomputing and the road to quantum. Kirk, great to see you. Thanks for coming on.

>> Thanks for having me, guys. Great to be here.

>> So Paul and I were talking, and we've been covering, you know, computing as we get into the large-scale cloud; now on-premises compute has been one of those things that just never stops. I never heard someone say, I wanna run my application or workload on slower hardware or less horsepower. Computing continues to go, but we're at a step function. It feels like we're at a level where we're gonna unleash new creativity, new use cases. You've been working on this for many, many years at HP and Hewlett Packard Labs; I remember The Machine and all the predecessor R&D. Where are we right now, from your standpoint, the HPE standpoint? Where are you in computing? It's as a service, everything's changing. What's your view?

>> So I think, you know, you capture it so well. You think of the capabilities that you create: you create these systems and you engineer these amazing products, and then you think, whew, it doesn't get any better than that. And then you remind yourself, as an engineer: but wait, actually it has to, right? It has to, because we need to continuously provide that next generation of scientist and engineer and artist and leader with the tools that can do more, and frankly do more with less. Because while we don't want to run the programs slower, we sure do wanna run them for less energy. And figuring out how we accomplish all of those things, I think, is really where it's gonna be fascinating. And it's also, as we think about that exascale data center, a billion billion operations per second, the new science, arts, and engineering that we'll create; and yet it's also what's beyond that data center. How do we hook it up to those fantastic scientific instruments that are capable of generating so much information? We need to understand how we couple all of those things together. So I agree, we are at an amazing opportunity to raise the aspirations of the next generation. At the same time, we have to think about what's coming next in terms of the technology. Is silicon the only answer for us to continue to advance?

>> You know, one of the big conversations is refactoring and replatforming; we have a booth behind us that's doing energy, you can build it into data centers for compute, there's all kinds of new things. Is there anything in the paradigm of computing, and now on the road to quantum, which I know you're involved in (I saw on LinkedIn you have an open req for that): what paradigm elements are changing that weren't in play a few years ago, that you're looking at right now as you take the 20-mile stare into quantum?

>> So I think for us it's fascinating, because we've had a tailwind at our backs my whole career, 33 years at HP. What I could count on was transistors: at first they got cheaper, faster, and they used less energy. And then, you know, that slowed down a little bit. Now they're still cheaper and faster.
As we look at that, and as Moore's law continues to flatten out, there has to be something better to do than, you know, yet another copy of the prior design, opening up that diversity of approach. Whether that is the amazing wafer-scale accelerators, the application-specific silicon, and then, broadening out even farther, next to the silicon: here's the analog computational accelerator, and here now is the emergence of a potential quantum accelerator. So we're seeing that diversity of approaches, but what has to happen is we need to harness all of those efficiencies, and yet we still have to realize that there are human beings who need to create the applications. So how do we bridge, how do we accommodate the physics of new kinds of accelerators? How do we imagine the cyber-physical connection to the rest of the supercomputer? And then finally, how do we bridge that productivity gap? Especially not just for people like me who have been around for a long time; we wanna think about that next generation, cuz they're the ones that need to solve the problems and write the code that will do it.

>> You mentioned what exists beyond silicon. In fact, are you looking at different kinds of materials that computers in the future will be built upon?

>> Oh, absolutely. When we look at the quantum modalities, you know, whether it is a trapped ion, or a superconducting piece of silicon, or a neutral atom, there's about half a dozen of these novel systems, because really what we're doing when we're using a quantum mechanical computer is creating a tiny universe. We're putting a little bit of material in there and manipulating it at the subatomic level, harnessing the power of quantum physics. That's an incredible challenge, and it will take novel materials, novel capabilities that we aren't used to seeing. Not many people have a helium supplier in their data center today, but some of them might tomorrow. And understanding, again, how do we incorporate, industrialize, and then scale all of these technologies.

>> I wanna talk turkey about quantum, because we've been talking for five years, and we've heard a lot of hyperbole about quantum. We've seen some of your competitors announcing quantum computers in the cloud. I don't know who's using these computers or what kind of work they're being used for. How real is quantum today? How close are we to having workable, true quantum computers, and can you point to any examples of how that technology is being used in the field?

>> So, it remains nascent, we'll put it that way. I think part of the challenge is we see this low-level technology, and of course it was, you know, Professor Richard Feynman who first pointed us in this direction more than 30 years ago. And, you know, I trust his judgment. Yes, there's probably some "there" there, especially for what he was doing, which is understanding and engineering systems at the quantum mechanical level; well, he said a quantum mechanical system is probably the way to go. So understanding that; but still, part of the challenge we see is that people have been working on the low-level technology and reaching up, wondering: will I eventually have a problem that I can solve? And the challenge is, you can improve something every single day, and if you don't know where the bar is, then you don't ever know if you'll be good enough.
I think part of the approach we'd like to take is to understand: can we start with the problem, the thing that we actually want to solve, and then figure out the bespoke combination of classical supercomputing, advanced AI accelerators, and novel quantum capabilities? Can we simulate and design that? And we think there's probably nothing better to do that on than an exascale supercomputer. Can we simulate and design that bespoke environment, create a digital twin of it, and, once we've simulated it, designed it, and analyzed it, see whether it is actually advantageous? Cuz if it's not, then we probably should go back to the drawing board. And then finally, that becomes the way in which we actually run the quantum mechanical system in this hybrid environment.

>> So it's nascent, and you guys are feeling your way through; you get some moonshots, you work backwards from use cases, as more of a discovery, navigational kind of mission piece. I get that. And exascale has been a great milestone for you guys, congratulations. Have there been strides, though, in quantum this year? Can you point to what's moved? Has the needle moved a little bit, or a lot? I mean, it's moving, I guess; there's been some talk, but we haven't really been able to put our finger on what's moving; like, where's the needle moved, I guess, in quantum?
And so I will do things like I happen to be the, the systems and architectures chapter editor for the I eee International Roadmap for devices and systems, that community that wants to come together and provide that guidance. You know, so I'm all about telling the semiconductor and the post semiconductor community, okay, this is where we need to compute. I have a partner in the applications and benchmark that says, this is what we need to compute. And when you can predict in the future about where you need to compute, what you need to compute, you can have a much richer set of conversations because you described it so well. And I think our, our senior fellow Nick Dubey would, he's coined the term internet of workflows where, you know, you need to harness everything from the edge device all the way through the extra scale computer and beyond. And it's not just one sort of static thing. It is a very interesting fluid topology. I'll use this compute at the edge, I'll do this information in the cloud, I want to have this in my exoscale data center and I still need to provide the tool so that an individual who's making that decision can craft that work flow across all of those different resources. >>And those workflows, by the way, are complicated. Now you got services being turned on and off. Observability is a hot area. You got a lot more data in in cycle inflow. I mean a lot more action. >>And I think you just hit on another key point for us and part of our research at labs, I have, as part of my other assignments, I help draft our AI ethics global policies and principles and not only tell getting advice about, about how we should live our lives, it also became the basis for our AI research lab at Shewl Packard Labs because they saw, here's a challenge and here's something where I can't actually believe, maintain my ethical compliance. I need to have engineer new ways of, of achieving artificial intelligence. And so much of that comes back to governance over that data and how can we actually create those governance systems and and do that out in the open >>That's a can of worms. We're gonna do a whole segment on that one, >>On that >>Technology, on that one >>Piece I wanna ask you, I mean, where rubber meets the road is where you're putting your dollars. So you've talked a lot, a lot of, a lot of areas of, of progress right now, where are you putting your dollars right now at Hewlett Packard Labs? >>Yeah, so I think when I draw, when I draw my 2030 vision slide, you know, I, for me the first column is about heterogeneous, right? How do we bring all of these novel computational approaches to be able to demonstrate their effectiveness, their sustainability, and also the productivity that we can drive from, from, from them. So that's my first column. My section column is that edge to exoscale workflow that I need to be able to harness all of those computational and data resources. I need to be aware of the energy consequence of moving data, of doing computation and find all of that while still maintaining and solving for security and privacy. But the last thing, and, and that's one was a, one was a how one was aware. The last thing is a who, right? And is is how do we take that subject matter expert? I think of a, a young engineer starting their career at hpe. It'll be very different than my 33 years. And part of it, you know, they will be undaunted by any, any scale. They will be cloud natives, maybe they metaverse natives, they will demand to design an open cooperative environment. 
So for me it's thinking about that individual and how do I take those capabilities, heterogeneous edge to exito scale workflows and then make them productive. And for me, that's, that's where we were putting our emphasis on those three. When, where and >>Who. Yeah. And making it compatible for the next generation. We see the student cluster competition going on over there. This is the only show that we cover that we've been to that is from the dorm room to the boardroom and this cuz Supercomputing now is elevating up into that workflow, into integration, multiple environments, cloud, premise, edge, metaverse. This is like a whole nother world. >>And, and, but I think it's, it's the way that regardless of which human pursuit you're in, you know, everyone is going to be demand simulation and modeling ai, ML and massive data m l and massive data analytics that's gonna be at heart of, of everything. And that's what you see. That's what I love about coming here. This isn't just the way we're gonna do science. This is the way we're gonna do everything. >>We're gonna come by your booth, check it out. We've talked to some of the folks, hpe obviously HPE Discover this year, GreenLake with center stage, it's now consumption is a service for technology. Whole nother ballgame. Congratulations on, on all this. I would say the massive, I won't say pivot, but you know, a change >>It >>Is and how you guys >>Operate. And you know, it's funny sometimes you think about the, the pivot to as a services benefiting the customer, but as someone who has supported designs over decades, you know, that ability to to to operate and at peak efficiency, to always keep in perfect operating order and to continuously change while still meeting the customer expectations that actually allows us to deliver innovation to our customers faster than when we are delivering warranted individual packaged products. >>Kirk, thanks for coming on Paul. Great conversation here. You know, the road to Quantum's gonna be paved through computing supercomputing software integrated workflows from the dorm room to the boardroom to Cube, bringing all the action here at Supercomputing 22. I'm Jacque Forer with Paul Gillin. Thanks for watching. We'll be right back.

Published Date : Nov 16 2022


ENTITIES

Entity | Category | Confidence
Paul Gillin | PERSON | 0.99+
Nick Dubey | PERSON | 0.99+
Paul | PERSON | 0.99+
Bresniker | PERSON | 0.99+
Richard Fineman | PERSON | 0.99+
20 mile | QUANTITY | 0.99+
Hewlett Packard Labs | ORGANIZATION | 0.99+
Kirk | PERSON | 0.99+
Paulo | PERSON | 0.99+
tomorrow | DATE | 0.99+
33 years | QUANTITY | 0.99+
first column | QUANTITY | 0.99+
Jacque Forer | PERSON | 0.99+
Dallas, Texas | LOCATION | 0.99+
Shewl Packard Labs | ORGANIZATION | 0.99+
LinkedIn | ORGANIZATION | 0.99+
Kirk Bresniker | PERSON | 0.99+
John | PERSON | 0.99+
three | QUANTITY | 0.99+
today | DATE | 0.98+
hp | ORGANIZATION | 0.98+
Moore | PERSON | 0.98+
five years | QUANTITY | 0.98+
HPE | ORGANIZATION | 0.97+
first | QUANTITY | 0.97+
2030 | DATE | 0.97+
h Hewlett Packard Labs | ORGANIZATION | 0.97+
this year | DATE | 0.96+
one | QUANTITY | 0.96+
HP Cube | ORGANIZATION | 0.95+
GreenLake | ORGANIZATION | 0.93+
about half a dozen | QUANTITY | 0.91+
billion, | QUANTITY | 0.91+
World Economic Forum | ORGANIZATION | 0.9+
quantum Development Network | ORGANIZATION | 0.9+
few years ago | DATE | 0.88+
couple billion dollars | QUANTITY | 0.84+
more than 30 years ago | DATE | 0.84+
Gini | ORGANIZATION | 0.78+
Supercomputing Road to Quantum | TITLE | 0.68+
Supercomputing 22 | ORGANIZATION | 0.68+
Par | PERSON | 0.67+
billion operations per second | QUANTITY | 0.67+
Silicon Angle | ORGANIZATION | 0.66+
EEE | ORGANIZATION | 0.66+
single | QUANTITY | 0.66+
Turkey | ORGANIZATION | 0.56+
SuperComputing 22 | ORGANIZATION | 0.52+
Cube | ORGANIZATION | 0.48+
Exoscale | TITLE | 0.44+
International | TITLE | 0.4+

Jacques Nadeau, Dremio | Big Data SV 2018


 

>> Announcer: Live from San Jose, it's theCUBE, presenting Big Data Silicon Valley. Brought to you by SiliconANGLE Media and its ecosystem partners. >> Welcome back to Big Data SV in San Jose. This is theCUBE, the leader in live tech coverage. My name is Dave Vellante and this is day two of our wall-to-wall coverage. We've been here most of the week, had a great event last night, about 50 or 60 of our CUBE community members were here. We had a breakfast this morning where the Wikibon research team laid out its big data forecast, the eighth big data forecast and report that we've put out, so check that out online. Jacques Nadeau is here. He is the CTO and co-founder of Dremio. Jacques, welcome to theCUBE, thanks for coming on. >> Thanks for having me here. >> So we were talking a little bit about what you guys do. Three-year-old company. Well, let me start: why did you co-found Dremio? >> So, it was a very simple thing I saw. Over the last ten years or so, we saw a regression in the ability for people to get at data. You see all these really cool technologies that came out to store data, data lakes, you know, SQL systems, all these different things that make developers very agile with data. But what we were also seeing was a regression in the ability for analysts and data consumers to get at that data, because the systems weren't designed for analysts; they were designed for data producers and developers. And we said, you know what, there needs to be a way to solve this. We need to be able to empower people to be self-sufficient again at the data consumption layer. >> Okay, so you solve that problem how? You called it a self-service data platform. >> Yeah, yeah, so a self-service data platform, and the idea is pretty simple. It's that, no matter where the data is physically, people should be able to interact with a logical view of it. And so, we talk a little bit like it's Google Docs for your data. People can go into the system, see the different data sets that are available to them, collaborate around those, and create changes to those that they can then share with other people in the organization, always dealing with the logical layer; and then, behind the scenes, we have physical capabilities to interact with all the different systems we work with. But that's something that business users shouldn't have to think as much about, and so, if you think about how people interact with data today, it's very much about copies. Every time you want to do something, typically you're going to make a copy. I want to reshape the data, I make a copy. I want to make it go faster, I make a copy. And those copies are very, very difficult for people to manage, and they end up mixing the business meaning of data with the physical concerns, the "I'm making copies to make them faster" or whatever. And so our perspective is that, if you can separate the physical concerns from the logical, then business users have a much better likelihood of being able to do something self-service. >> So you're essentially virtualizing my corpus of data, independent of location, is that right, I mean-- >> It's part of what we do, yeah. So, the way we look at it is, there are several different components to making something self-service. It starts with, yeah, virtualizing or abstracting away the details of the physical, right?
But then, on top of that, you expose a very user-friendly interface that allows people to catalog and understand the different things, you know, search for the things they want to interact with, and then curate things, even if they're non-technical users, right? The goal is that, if you talk to even the large internet companies in the Valley, it's very hard to hire the amount of data engineering you need to satisfy all the requests of your end users of data. And so the goal of Dremio is basically to provide different tools that offer a non-technical experience for getting at the data. So that's the start of it, but then the second step is, once you've got access to this thing and people can collaborate and deal with the data, you've got these huge volumes of data, right? It's big data, so how do you make that go faster? And then we have some components that deal with speed and acceleration. >> So maybe talk about how people are leveraging this capability, this platform. What's the business impact? What have you seen there? >> So a lot of people have this problem, which is, they have data all over the place and they're trying to figure out, "How do I expose this to my end users?" And those end users might be analysts, they might be data scientists, they might be product managers that are trying to figure out how their product is working. And so, what they're doing today is they're typically trying to build systems internally to provide these capabilities. So, for example, we're working with a large auto manufacturer. They've got a big initiative where they're trying to make the data that they have, huge amounts of data across all sorts of different parts of the organization, available to different data consumers. Now, of course, there are a bunch of security concerns that you need to have around that, but they just want to make the data more accessible. And so, what they're doing is they're using Dremio to catalog all the data below, expose that to the different users, apply lots of different security rules around that, and then create a bunch of reflections, which make things go faster as people interact with them. >> Well, what about the governance factor? I mean, you heard this in the Hadoop world years ago: "Ah, we're going to harden Hadoop, we're going to," and really, there was no governance, and it became more and more important. How do you guys handle that? Do you partner with people? Is it up to the customer to figure that out? Do you provide that? >> It's several different things, right? It's a complex ecosystem, so it's a combination of things. You start with partnering with different systems to make sure that you integrate well with them. There are the different things that control credentials inside those systems, all the way down to, what are the file system permissions? What are the permissions inside of something like Hive and the metastore there? And then other systems on top of that, like Sentry or Ranger, are also exposing different credentialing, right? And so we work hard to integrate with those things. On top of that, Dremio also provides a full security model inside of the virtual space where we work.
And so people can control the permissions, the ability to access or edit any object inside of Dremio, based on user roles and LDAP and those kinds of things. So it's kind of multiple layers that have to be working together. >> And tell me more about the company. So, founded three years ago, I think a couple of raises? Who's backing you? >> Yeah, yeah, so we founded just under three years ago. We had great initial investors in Redpoint and Lightspeed, two great initial investors, and we raised about 15 million on that round. And then we actually just closed a B round in January of this year, and we added Norwest to the portfolio there. >> Awesome. So you're now in the mode of, I mean, they always say, you know, software is such a capital-efficient business, but you see software companies raising, you know, 900 million dollars, and so, presumably, that's to compete, to go to market and, you know, differentiate with your messaging and branding. Is that the phase that you're in now? You've developed a product, it's technically sound, it's proven in the marketplace, and now you're scaling the go-to-market, is that right?
But there's this light bulb that goes on for people that are so confused by all the things that are going on and if we can just sit down with them, show them the product for a few minutes, all of a sudden they're like "Wait a minute, "I can use this", right? So you're frequently talking to buyers that are not the most technical parts of the organization initially, and so most of the technologies they look at are technologies that are very difficult to understand and they have to look to others to try to even understand how it would fit into their architecture. With Dremio, we have customers that can, that have installed it and gotten up, and within an hour or two, started to see real value. And that sort of excitement happens even in the demo, with most people. >> So you kind of have this bifurcated market. Since the big data meme, everybody says they're data-driven and you've got a bifurcated market in that, you've got the companies that are data-driven and you've got companies who say they're data-driven but really aren't. Who are your customers? Are they in both? Are they predominantly in the data-driven side? Are they predominantly in the trying to be data-driven? >> Well, I would say that they all would say that they're data-driven. >> Yeah, everyone, who's going to say "Well, we're not data-driven." >> Yeah, yeah, yeah. So I would say >> We're dead. >> I would say that everybody has data and they've got some ways that they're using it well and other places where they feel like they're not using it as well as they should. And so, I mean, the reason that we exist is to make it so it's easier for people to get value out of data, and so, if they were getting all the value they think they could get out of data, then we probably wouldn't exist and they would be fully data-driven. So I think that everybody, it's a journey and people are responding well to us, in part, because we're helping them down that journey. >> Well, the reason I asked that question is that we go to a lot of shows and everybody likes to throw out the digital transformation buzzword and then use Uber and Airbnb as an example, but if you dig deeper, you see that data is at the core of those companies and they're now beginning to apply machine intelligence and they're leveraging all this data that they've built up, this data architecture that they built up over the last five or 10 years. And then you've got this set of companies where all the data lives in silos and I can see you guys being able to help them. At the same time, I can see you helping the disruptors, so how do you see that? I mean, in terms of your role, in terms of affecting either digital transformations or digital disruptions. >> Well, I'd say that in either case, so we believe in a very sort of simple thing, which is that, so going back to what I said at the beginning, which is just that I see this regression in terms of data access, right? And so what happens is that, if you have a tightly-coupled system between two layers, then it becomes very difficult for people to sort of accommodate two different sets of needs. And so, the change over the last 10 years was the rise of the developer as the primary person for controlling data and that brought a huge amount of great things to it but analysis was not one of them. And there's tools that try to make that better but that's really the problem. And so our belief is very simple, which is that a new tier needs to be introduced between the consumers and the, and the producers of data. 
And that, and so that tier may interact with different systems, it may be more complex or whatever, for certain organizations, but the tier is necessary in all organizations because the analysts shouldn't be shaken around every time the developers change how they're doing data. >> Great. John Furrier has a saying that "Data is the new development kit", you know. He said that, I don't know, eight years ago and it's really kind of turned out to be the case. Jacques Nadeau, thanks very much for coming on theCUBE. Really appreciate your time. >> Yeah. >> Great to meet you. Good luck and keep us informed, please. >> Yes, thanks so much for your time, I've enjoyed it. >> You're welcome. Alright, thanks for watching everybody. This is theCUBE. We're live from Big Data SV. We'll be right back. (bright music)

Published Date : Mar 9 2018


ENTITIES

Entity | Category | Confidence
Dave Vellante | PERSON | 0.99+
Jacques Nadeau | PERSON | 0.99+
Daimler | ORGANIZATION | 0.99+
John Furrier | PERSON | 0.99+
Norwest | ORGANIZATION | 0.99+
Intel | ORGANIZATION | 0.99+
Wikibon | ORGANIZATION | 0.99+
TransUnion | ORGANIZATION | 0.99+
Jacque | PERSON | 0.99+
San Jose | LOCATION | 0.99+
OVH | ORGANIZATION | 0.99+
Lightspeed | ORGANIZATION | 0.99+
second step | QUANTITY | 0.99+
Uber | ORGANIZATION | 0.99+
two layers | QUANTITY | 0.99+
Airbnb | ORGANIZATION | 0.99+
both | QUANTITY | 0.99+
SiliconANGLE Media | ORGANIZATION | 0.99+
Google Docs | TITLE | 0.99+
Red Point | ORGANIZATION | 0.99+
Strata | ORGANIZATION | 0.99+
60 | QUANTITY | 0.98+
900 million dollars | QUANTITY | 0.98+
three years ago | DATE | 0.98+
eight years ago | DATE | 0.98+
two | QUANTITY | 0.98+
Dremio | PERSON | 0.98+
first 10 minutes | QUANTITY | 0.98+
last night | DATE | 0.98+
about 15 million | QUANTITY | 0.97+
theCUBE | ORGANIZATION | 0.97+
first time | QUANTITY | 0.97+
Dremio | ORGANIZATION | 0.97+
Big Data SV | ORGANIZATION | 0.96+
an hour | QUANTITY | 0.96+
two great initial investors | QUANTITY | 0.95+
today | DATE | 0.93+
first meeting | QUANTITY | 0.93+
this morning | DATE | 0.92+
two different sets | QUANTITY | 0.9+
third | QUANTITY | 0.88+
Big Data | ORGANIZATION | 0.87+
SQL | TITLE | 0.87+
10 years | QUANTITY | 0.87+
CUBE | ORGANIZATION | 0.87+
years ago | DATE | 0.86+
Silicon Valley | LOCATION | 0.86+
January of this year | DATE | 0.84+
Dremio | TITLE | 0.84+
Three year old | QUANTITY | 0.81+
last 10 years | DATE | 0.8+
Sentry | ORGANIZATION | 0.77+
one of them | QUANTITY | 0.75+
about 50 | QUANTITY | 0.75+
day two | QUANTITY | 0.74+
Ranger | ORGANIZATION | 0.74+
SV | EVENT | 0.7+
last ten years | DATE | 0.68+
eighth big | QUANTITY | 0.68+
Data | ORGANIZATION | 0.66+
Big | EVENT | 0.65+
couple of minutes | QUANTITY | 0.61+
CTO | PERSON | 0.56+
one | QUANTITY | 0.55+
last | DATE | 0.52+
100 companies | QUANTITY | 0.52+
under | DATE | 0.51+
five | QUANTITY | 0.5+
2018 | DATE | 0.5+
Hive | TITLE | 0.42+