Matei Zaharia, Databricks - #SparkSummit - #theCUBE
>> Narrator: Live from San Francisco, it's theCUBE. Covering Spark Summit 2017, brought to you by Databricks. (upbeat music)
>> Welcome back to Spark Summit 2017, you're watching theCUBE, and we have an honored guest here today. His name is Matei Zaharia, and Matei is the creator of Spark, Chief Technologist, and Co-Founder of Databricks. Did I get all that right?
>> Yeah, thanks a lot for having me again. Excited to be here.
>> Yeah, Matei, we were watching your keynote this morning, and we're all excited to hear about better support for deep learning, and about some of the structured streaming apps now being in production. I want to ask you what happened after the keynote. What kind of feedback have you heard from people in the hallways?
>> Yeah, definitely, so the feedback has definitely been super positive. I think people really like the direction that we're moving in with Apache Spark and with these libraries, such as the deep learning pipelines one. So we've gotten a lot of questions about the deep learning library, when it will support more data types and so on. It's really good at supporting images right now. And also with streaming, I think people are just excited to try out the low latency streaming.
>> Any other priorities people asked you about that maybe you haven't focused on yet?
>> That I haven't focused on in the keynote? So I think that's a good question. I think overall, some of the things we keep seeing are that people just want to make it easier to operate Spark at large scale, and to simplify things like monitoring and debugging and so on, so that's a constant theme that we're seeing. And then another thing that's generally been going on, I didn't focus on it this time, is increasing usage by Python and R users. So there's a lot of work in the latest release to continue improving that, to make it easier to use in those languages.
>> Okay, we were watching the demos, the impressive demos, this morning. In fact, George was watching the keynote; he saw the one millisecond latency, he said wow. George, you want to ask a little more about that?
>> So yeah, let's talk about, 'cause there's this rise of continuous apps, which I think you guys named.
>> Matei: Yeah.
>> And it resonates with everyone, to go along with batch and request-response. And in the past, people were saying, well, Spark was doing many micro-batches, latency was a couple hundred milliseconds. So now that you're down at one millisecond, what does that change in terms of the class of apps that you're appropriate for? Or, you know, some people have talked about the criticality of event processing. Where is Spark on that now?
>> Yeah, definitely. So the goal of this is exactly to support the full range of latencies, possibly all the way down to sub-millisecond latency, and to give users the same programming model, so they don't have to use a different system or a lower level programming model to get that low latency. And so basically, since we began structured streaming, we tried to make sure the API is not tied to micro-batching in any way. And so this is the next step, to actually eliminate that from the engine and be able to execute these computations. And what are the new applications? So I think this really enables two types of things we've seen.
One is kind of automated decision-making systems, so this would be something, it could be even on, say, a website, or, you know, say when someone's applying for a loan or something like that, it could be making decisions. But it could even be at an even lower latency, like, say, stock market style places, or internet of things, or industrial monitoring, and making decisions there. That's one thing. And then the other thing we see people doing is a lot of kind of stream-to-stream ETL, which is a bit more boring in some ways, but as you set that up, it's nice to have these very low latency transformations that can produce new streams from an existing one, because then nothing downstream from them is affected in terms of latency.
>> So in this last example, it's sort of to help build microservice-type applications.
>> Yeah, exactly, yeah. Well, in general, there's basically this whole architecture of saying all my data will be streams, and then I'll have some applications that just produce a new stream. And then later that stuff can go into a data lake, or into a real time system, or whatever. So it's basically keeping it low latency while it remains in stream form.
>> So we were talking earlier, and we've been talking to the SnappyData folks and the Splice Machine folks. And they built Spark into a DBMS. So that, like, it's immutable. I'm sorry, mutable.
>> Matei: Mutable, yeah.
>> Like a data frame is updateable. So what does that make possible, even if you can do the same things with Spark without it? What does it make easier?
>> So that's also in the same spirit of continuous applications. It's saying you should have a single programming model and interface for doing both your transactional work and your analytics after, and then maybe serving the results of the analytics. So that makes a lot of sense, and an example of that would be, you know, I keep going back to, say, the financial or credit card type of use cases, but it would be something where users are conducting transactions and maybe you learn stuff about them from that. You say, okay, here's where they're located, now here's what they're purchasing, whatever. And then you also want to know, I'll have to make a decision. For example, do I allow them to go past the limit on their credit card or something like that? Or is this a normal use of it, or is this a fraudulent one? So that's where it helps to integrate these, and you can do these things. So there are products like SnappyData that integrate a specific database with Spark. And we're also trying to make sure, in Spark, the API is there so that people can integrate their own system, whatever database or key value store they want.
>> So would you have to jump through hoops if you didn't want to integrate any other store, other than talking to a file system, or?
>> Yeah, if you want to do these transactions on a file system, there will be basically some performance constraints to doing that. It depends on the rate; it's definitely the simplest thing, and if you have a low enough rate of updates it could actually be fine. But if you want more fine grained ones, then it becomes a problem.
>> It would seem like if you tack on a product for ingest, not that you really want to get into that, think Kafka, which could also stretch into the transforms and some basic analytics. And you mentioned, I think in the Spark Summit East keynote, Redis for serving. You've got, like, now a multi-vendor product stack. And so there's complexity to that.
>> Matei: Yeah, definitely, yeah.
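To make the stream-to-stream ETL pattern concrete, here is a minimal sketch against the Structured Streaming API discussed above, assuming a reachable Kafka broker; the broker address, topic names, and schema are invented for illustration. (The millisecond-scale continuous execution mode arrived in later Spark releases as a separate trigger option; the default shown here is still micro-batch.)

```python
# Minimal sketch of stream-to-stream ETL: read one stream, transform it,
# and publish a new stream. Broker, topics, and schema are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json, to_json, struct
from pyspark.sql.types import StructType, StringType, DoubleType

spark = SparkSession.builder.appName("stream-to-stream-etl").getOrCreate()

schema = StructType().add("device_id", StringType()).add("reading", DoubleType())

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")
       .option("subscribe", "raw-events")
       .load())

cleaned = (raw.select(from_json(col("value").cast("string"), schema).alias("e"))
           .select("e.*")
           .filter(col("reading").isNotNull()))   # drop malformed records

# Publish the cleansed stream; downstream consumers see no added batch latency.
query = (cleaned.select(col("device_id").alias("key"),
                        to_json(struct("device_id", "reading")).alias("value"))
         .writeStream
         .format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("topic", "clean-events")
         .option("checkpointLocation", "/tmp/etl-checkpoint")
         .start())
```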
>> Do you foresee a scenario where you could see that as a high volume solution, and it's something that you would take ownership of?
>> I see. So, well, do you mean from the Apache Spark side or from the Databricks side?
>> George: Actually, either.
>> Yeah, so I think from the Spark side, basically so far the project doesn't provide storage, it just provides computation, and it plugs into different storage engines. And so it would be kind of a big shift, it might be possible, but it would be kind of a big shift to say, okay, we'll also provide persistent storage. I think the more likely thing that will happen is better and better integrations with the most widely used open source storage systems. So Redis is one. Apache Kafka, there's a lot of work on integrating that better, and so on. From the Databricks side, that is different, because that is a fully managed cloud service, and it definitely makes sense there that you'd have a turnkey solution for that. Right now we actually build that for people who want it, sometimes with other vendors or with just services built into Amazon, but that makes a lot of sense.
>> And Matei, something I read a press release on, but I didn't hear it in the keynote this morning. I hate to steal thunder from tomorrow, but can you give us a sneak preview on serverless apps? What's that about?
>> Yeah, so this is actually, we put out a press release today, and we'll have a full keynote tomorrow morning and also a lot more details on our website. So this is Databricks Serverless. It's basically a serverless platform for running Apache Spark and data science. So not to steal away too much thunder, but you know, serverless computing is this idea that users can just submit a query or computation, they don't have to configure the hardware at all, and they just get high performance and they get results. And so far it's been very successful with stateless workloads, such as SQL, or Amazon Lambda, which is, you know, just functions serving a webpage or something like that. So this is going to be the first offering that actually extends that model to data science and, in general, to Spark workloads. So you can have machine learning users, you can have these streaming applications, all these things, on that kind of environment. So yeah, we'll have a lot more detail on that tomorrow; it's something that we're excited about.
>> I want to circle back to IoT apps. You know, there's sort of, beyond an emerging consensus, that we're going to do a lot of training in the cloud, 'cause we have access to big compute and lots of data. But then there's the issue on the edge. In the near to medium term, the footprint, like, a lot of people are telling us high volume devices will have 3 megs of memory, and a gateway server would have like two gigs and two cores. So can you carve Spark up into fitting on one of the...
>> That's a good question. I think for that, again, the most likely way that would happen is through data sources. For example, there are these projects like Apache NiFi, and other projects as well, that let you build up a data pipeline from IoT devices all the way to the cloud. And you can imagine pushing some computation through those. So I think, yeah, I don't have a very concrete answer, but it is something that's coming up a bunch, so we do want to support this type of, like, splitting the computation.
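A toy sketch of what that computation split could look like, anticipating where the conversation goes next: training stays central, and a gateway-class device runs only a tiny filter that forwards the readings that matter. Everything here, including the sensor_stream() and send_upstream() stand-ins, is hypothetical.

```python
# Hypothetical edge-side split: the cloud trains the heavy model; the
# gateway runs only a small filter and forwards "interesting" readings.
import math
import random

def sensor_stream(n=1000):
    # Stand-in for a real device feed; replace with actual sensor reads.
    for _ in range(n):
        yield random.gauss(20.0, 1.0)

def send_upstream(x):
    # Stand-in for the network hook that ships data toward the cloud.
    print("forwarding", x)

class RunningBand:
    """Welford's online mean/variance, small enough for a tiny footprint."""
    def __init__(self, k=2.0, warmup=30):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0
        self.k, self.warmup = k, warmup

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def is_interesting(self, x):
        if self.n < self.warmup:
            return False  # not enough history to judge yet
        std = math.sqrt(self.m2 / (self.n - 1))
        return abs(x - self.mean) > self.k * std

band = RunningBand()
for reading in sensor_stream():
    if band.is_interesting(reading):
        send_upstream(reading)   # only unusual readings leave the device
    band.update(reading)
```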
>> But in terms of splitting the computation, you could take a trained model, model training is heavy compute, and then the trained model--
>> You can definitely push the model and do inference.
>> Would that inference thing have to happen in a Spark runtime, or could it be somewhere else?
>> I think it could happen anywhere else also. And actually, like, we do see a lot of people wanting to export basically machine learning pipelines or models from Spark into another environment. So it can happen somewhere else too. Yeah, and then the other aspect of it is also data collection. So if you can push something that says here is when the data is exciting, like, when the data is interesting, you should remember these and send them on. That would also help, because otherwise, you know, say it's like a video camera or something, most of the time it's looking at nothing. I mean, you don't want to send all that back.
>> That's actually a key point, which is some folks, like especially in the IT ops area, where, you know, it's training wheels for IoT 'cause they're doing machine learning on infrastructure.
>> Matei: Yeah, which is there.
>> Yeah, they say, oh, anything outside two standard deviations of the band of expectations, but there's more of an answer to that, I gather, from what you're saying.
>> Yeah, I mean, I think you can create, for example, a small machine learning model that decides whether what it's seeing is unusual and sends it back, or you can even make it query specific, like, I want to find this type of object that's going by the camera, and try to find that. So I think there's a lot of room to improve that.
>> Okay, well, we have just a couple of minutes left here; I want to look into the future a little bit. There's been some great progress since the summit last year to this one. What would you say is the next boundary that needs to be pushed to get Spark to the next level, whatever that may be?
>> Yeah, definitely, yeah. Well, okay, so first of all, in terms of the project today, I think the big workloads that we're seeing come up all the time are deep learning and stream processing. These are the big emerging ones. I mean, there's still a lot of data warehousing, ETL and so on, that's still there. But these are the new ones, so that's what we're focusing on, on our team at least. And we'll continue building out the stuff that you saw announced today. I think beyond that, I do think that part of the problem, and this is more on the Databricks side, part of the problem is also just making it much easier for teams or businesses to begin using these technologies at all. And that's where we think cloud computing, or software as a service, is the way, because you just turn it on and you can immediately start doing things. But basically, the way that I view it is, right now the barrier to do any project with data science or machine learning, or even, like, simple kinds of analytics on unstructured data, the barrier is really high. So companies can only do it on a few projects. There might be, like, a hundred things they could be trying, but they can only afford to spin up two or three of them. So if you lower that barrier, there'll be a lot more of them, and everyone will be able to quickly try one of these applications and see whether it actually works.
>> And this ties into some of your graduate studies, like with model management and things like that?
>> Yeah, so on the research side: I'm also, you know, doing research at Stanford, and on that side we have this lab called DAWN, which is about usable machine learning. It's exactly these things: like, how do you enable an order of magnitude more people to try to do things with machine learning. So actually we're also doing the video push-down thing I mentioned; that's one thing we're looking at. A bunch of other stuff as well.
>> Matei, we could talk to you all day, but we don't have all day. We're up against the break here, but I wanted to thank you very much for coming and sharing a few moments here, and we look forward to seeing you in the hallways here at Spark Summit, right?
>> Yeah, thanks again for having me.
>> Thanks for joining us, and thank you all for watching. Here we are on theCUBE at Spark Summit 2017, thanks for watching. (upbeat music)
Dinesh Nirmal, IBM | CUBEConversation
(upbeat music)
>> Hi everyone. We have a special program today. We are joined by Dinesh Nirmal, who is VP of Analytics Development at IBM, and Dinesh has an extremely broad perspective on what's going on in this part of the industry, and IBM has a very broad portfolio. So, between the two of us, I think we can cover a lot of ground today. So, Dinesh, welcome.
>> Oh, thank you, George. Great to be here.
>> So just to frame the discussion, I wanted to hit on sort of four key highlights. One is balancing compatibility across cloud, on-prem, and edge versus leveraging specialized services that might be on any one of those platforms. Then, harmonizing and simplifying both the management and the development of services across these platforms; you have that trade-off between: do I do everything compatibly, or can I take advantage of platform-specific stuff? Then, we've heard a huge amount of noise on machine learning, and everyone says they're democratizing it; we want to hear your perspective on how you think that's most effectively done. And then, if we have time, how to manage machine learning feedback, data feedback loops, to improve the models. So, let's start with that.
>> So you talked about the private cloud and the public cloud, and then, how do you manage the data and the models, or the other analytical assets, across the hybrid nature of today? So, if you look at our enterprises, it's a hybrid format that most customers adopt. I mean, you have some data on the public side, but you have your mission critical data, that's very core to your transactions, existing in the private cloud. Now, how do you make sure that the data that you've pushed to the cloud, that you can go use it to build models? And then you can take that model and deploy it on-prem or on the public cloud.
>> Is that the emerging sort of mainstream design pattern, where mission critical systems are less likely to move, for latency, or for the fact that they're fused to their own hardware, but you take the data, and the research for the models happens up in the cloud, and then that gets pushed down close to where the transaction decisions are?
>> Right, so there's also the economics of data that comes into play. So if you are doing, you know, a large scale neural net, where you have GPUs and you want to do deep learning, obviously it might make more sense for you to push it into the cloud and be able to do that on one of the deep learning frameworks out there. But then you have your core transactional data, that includes your customer data, you know, or your customer medical data, which I think some customers might be reluctant to push onto a public cloud; but you still want to build models and predict and all those things. So I think it's a hybrid nature: depending on the sensitivities of the data, customers might decide to put it on public cloud versus private cloud, which is on their premises, right? So then how do you serve those customer needs, making sure that you can build a model on the cloud and that you can deploy that model on private cloud, or vice versa? I mean, you can build that model on private cloud and then deploy it on your public cloud. Now the challenge, one last statement, is that people think, well, once I build a model and I deploy it on public cloud, then it's easy, because it's just an API call at that time, just to call that model to execute the transactions. But that's not the case.
You take support vector machines, for example, right. Those still have vectors in there; that means your data is there, right? So even though you're saying you're deploying the model, you still have sensitive data there. So those are the kinds of things customers need to think about before they go deploy those models.
>> So I might, this is a topic for our Friday interview with a member of the Watson IT family, but it's not so black and white when you say we'll leave all your customer data with you and we'll work on the models, because it's sort of like teabags, you know: you can take the customer's teabag and squeeze some of the tea out, in your IBM or public cloud, and give them back the teabag, but you're getting some of the benefit of this data.
>> Right, so, like, it depends, depends on the algorithms you build. You could take a linear regression, and you don't have the challenges I mentioned with support vector machines, because none of the data is moving, it's just the model. So it depends, and I think that's where, you know, something like Watson has done will help tremendously, because the data is secure in that sense. But if you're building on your own, it's a different challenge; you've got to make sure you pick the right algorithms to do that (see the short sketch after this exchange).
>> Okay, so let's move on to the modern, sort of what we call operational analytic pipeline, where the key steps are ingest, process, analyze, predict, serve, and you can drill down on those more. Today those pipelines are pretty much built out of multi-vendor components. How do you see that evolving under the pressures of, or the tension between, simplicity, coming from one vendor with the pieces all designed together, and specialization, where you want to have, you know, a unique tool in one component?
>> Right, so you can take a two prong approach. One is, you can go to a cloud provider, get each of the services, and stitch them together. That's one approach, a challenging approach, but it has its benefits, right? I mean, you bring some core strengths from each vendor into it. The other one is the integrated approach, where you ingest the data, you shape or cleanse the data, you get it prepared for analytics, you build the model, you predict, you visualize. I mean, that all comes in one. The benefit there is you get the whole stack in one, you have a whole pipeline that you can execute, you have one service provider that's giving the services, it's managed. So all those benefits come with it, and that's probably the preferred way: get it all integrated together in one stack. I think that's the path most people go towards, because then you have the whole pipeline available to you, and also the services that come with it, so any updates that come with it. And if you take the first route, one challenge you have is: how do you make sure all these services are compatible with each other? How do you make sure they're compliant? So if you're an insurance company, you want it to be HIPAA compliant. Are you going to individually make sure that each of these services is HIPAA compliant? Or would you get it from one integrated provider, where you can make sure they are HIPAA compliant, tests are done? So all those benefits, to me, outweigh going and putting unmanaged services together and then creating a data lake to underlie all of it.
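That support vector machine point is easy to see in code: a fitted SVM stores literal rows of training data as its support vectors, while a linear regression keeps only coefficients. A minimal sketch with scikit-learn on synthetic data:

```python
# A fitted SVM carries verbatim training rows (its support vectors), so
# deploying the model can still expose data; a linear regression retains
# only coefficients. Synthetic data, for illustration only.
import numpy as np
from sklearn.svm import SVC
from sklearn.linear_model import LinearRegression

rng = np.random.RandomState(0)
X = rng.randn(200, 4)                      # stand-in for sensitive features
y = (X[:, 0] + X[:, 1] > 0).astype(int)

svm = SVC(kernel="rbf").fit(X, y)
print(svm.support_vectors_[:3])            # actual rows from the training set

lr = LinearRegression().fit(X, y)
print(lr.coef_, lr.intercept_)             # no raw data retained
```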
>> Would it be fair to say, to use an analogy, that Hadoop, originating in many different Apache products, is a quasi-multi-vendor kind of pipeline, and the state of the machine learning analytic pipeline is still kind of multi-vendor today? You see that moving toward a single vendor pipeline; who do you see as the sort of last man standing?
>> So, I mean, I can speak from an IBM perspective. I can say that the benefit that a vendor like IBM brings forward is, on the different clouds, public or private cloud or hybrid, you obviously have the choice of going to public cloud; you can get the same service on public cloud, so you get a hybrid experience. So that's one aspect of it. Then, if you get the integrated solution, all the way from ingest to visualization, you have one provider; it's tested, it's integrated, you know, it's combined, it works well together. So I would say, going forward, if you look at it purely from an enterprise perspective, integrated solutions is the way to go, because that's what will be the last man standing. I'll give you an example. I was with a major bank in Europe about a month ago, and I took them through our Data Science Experience, our machine learning project and all that, and, you know, the CTO's take was: Dinesh, I got it. Building the model itself only took us two days, but incorporating our model into our existing infrastructure, it has been 11 months, and we haven't been able to do it. So that's the challenge our enterprises face, and they want an integrated solution to bring that model into their existing infrastructure. So that's, you know, that's my thought.
>> Today though, let's talk about the IBM pipeline. Spark is core, ingest is, off the--
>> Dinesh: Right, so you can do Spark streaming, you can use Kafka, or you can use InfoSphere Streams, which is our proprietary tool.
>> Right, although you wouldn't really use structured streaming for ingest, 'cause of the back pressure?
>> Right, so they are--
>> The point that I'm trying to make is, it's still multi-vendor, and then on the serving side, I don't know, once the analysis is done and predictions are made, some sort of SQL database has to take over. So today, it's still pretty multi-vendor. How do you see any of those products broadening their footprints so that the number of pieces decreases?
>> So, good question. They are all going to get into the end-to-end pipeline, because that's where the value is. Unless you provide an integrated end-to-end solution for a customer, especially for enterprise customers, it's all about putting it all together, and putting these pieces together is not easy. Even when you ingest the data, IoT kinds of data, a lot of times, 99% of the time, data is not clean. Unless you're in a competition where you get cleansed data, in the real world that never happens. So I would say 80% of a data scientist's time is spent on cleaning the data, shaping the data, preparing the data to build that pipeline. So for most customers, it's critical that they get that end-to-end, well oiled, well connected, integrated solution, rather than take isolated solutions from each vendor. To answer your question: yes, every vendor is going to move into ingest, the data cleansing phase, transformation, building the pipeline, and then visualization; if you look at those five steps, that end to end has to be developed.
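Those five steps compress naturally into one Spark job. A hedged sketch follows; the paths, columns, and model choice are invented to show the shape of the pipeline, not any vendor's actual stack.

```python
# Hedged sketch of the five steps (ingest, cleanse, transform, model,
# serve) as one Spark job. Paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col
from pyspark.ml import Pipeline
from pyspark.ml.feature import StringIndexer, VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("e2e-pipeline").getOrCreate()

# 1. Ingest
df = spark.read.parquet("/data/transactions")

# 2. Cleanse: real-world data is rarely clean
df = df.dropna(subset=["amount", "merchant"]).filter(col("amount") > 0)

# 3 + 4. Transform and model as a single Spark ML pipeline
pipe = Pipeline(stages=[
    StringIndexer(inputCol="merchant", outputCol="merchant_idx"),
    VectorAssembler(inputCols=["amount", "merchant_idx"], outputCol="features"),
    LogisticRegression(labelCol="is_fraud"),
])
model = pipe.fit(df)

# 5. Score and hand results to whatever serves or visualizes them
model.transform(df).select("prediction").write.mode("overwrite").parquet("/data/scores")
```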
>> But just building the data cleansing and transformation, having it native to your own pipeline, that doesn't sound like it's going to solve the problem of messy data that needs, you know, human supervision to correct.
>> I mean, so there is some level of human supervision, to be sure. I'll give you an example, right: when data comes in from an insurance company, a lot of times the gender could be missing. How do you know if it's a male or female? Then you've got to build another model to say, you know, this patient has gone for a prostate exam, it's a male; gynecology, it's a female. So you have to do some inference work in there to make sure that the data is clean, and then there's some human supervision to make sure that this is good to build models, because when you're executing that pipeline in real time--
>> Yeah.
>> It's all based on the past data, so you want to make sure that the data is as clean as possible to train the model that you're going to execute on.
>> So, let me ask you, turning to a slide we've got about complexity, first for developers, and then second for admins. If we take the steps in the pipeline as ingest, process, analyze, predict, serve, and sort of products or product categories as Kafka, Spark streaming and SQL, a web service for predict, and MPP SQL or NoSQL for serve, even if they all came from IBM, would it be possible to unify the data model, the addressing and namespace, and, I'm just kicking off a few that I can think of, the programming model, persistence, transaction model, workflow, testing, integration? There's one thing to say it's all IBM, and there's another thing for the developer working with it to see it as one suite.
>> So it has to be validated, and that's the benefit that IBM brings already, because we obviously test each segment to make sure it works. But when you talk about complexity, building the model is one part, you know, development of the model; now the complexity also comes in the deployment of the model, and then we talk about the management of the model: how do you monitor it? When was the model deployed, was it deployed in test, was it deployed in production, who changed that model last, what was changed, and how is it scoring? Is it scoring high or low? You want to get a notification when the model starts scoring low. So complexity is all the way across, from getting the data in, cleaning the data, to developing the model; it never ends. And the other benefit that IBM has added is the feedback loop, which, when you talk about complexity, reduces the complexity. So today, if the model scores low, you have to take it offline, retrain the model based on the new data, and then redeploy it. Usually for enterprises, there are slots where you can take it offline, put it back online, all these things, so it's a process. What we have done is created a feedback loop where we are training the model in real time, using real time data, so the model is continuously--
>> Online learning.
>> Online learning.
>> And challenger/champion, or A/B testing, to see which one is more robust.
>> Right, so you can do that, I mean, you could have multiple models where you can do A/B testing, but in this case, you can continuously train the model to say, okay, this model scores the best. And then, another benefit is that, if you look at the whole machine learning process, there's the data, there's development, there's deployment.
On the development side, more and more it's getting commoditized, meaning picking the right algorithm: there are a lot of tools, including IBM's, that can suggest what's the right one to use for this, so that piece is getting a little less complex. I don't want to say easier, but less complex. But the data cleansing and the deployment, those are the two hard parts for enterprises: when you have thousands of models, how do you make sure that you deploy the right model?
>> So you might say that the pipeline for managing the model is sort of separate from the original data pipeline. Maybe it includes the same technology, or as much of the same technology, but once your data pipeline is in production, the model pipeline has to keep cycling through.
>> Exactly, so the data pipeline could be changing. So if you take a loan example, right, a lot of the data that goes in the model pipeline is static: I mean, my age, it's not going to change every day, I mean, it is, but you know, the age, my salary, my race, my gender, those are static data that you can take from the data and put in there. But then there's also real time data that's coming: my loan amount, my credit score, all those things. So how do you bring that data pipeline, between real time and static data, into the model pipeline, so the model can predict accurately? And based on the score dipping, you should be able to retrain the model using real time data.
>> I want to take you, Dinesh, to the issue of a multi-vendor stack again, and the administrative challenges. So here, we look at a slide that shows me just rattling off some of the admin challenges: governance, performance modeling, scheduling, orchestration, availability, recovery, authentication, authorization, resource isolation, elasticity, testing, integration; so that's the Y-axis. And then, for every different product in the pipeline as the X-axis, say Kafka, Spark structured streaming, MPP SQL, NoSQL, you've got a mess.
>> Right.
>> Most open source companies are trying to make life easier for companies by managing their software as a service for the customer, and that's typically how they monetize. But tell us what you see the problem is, or will be, with that approach.
>> So, great question. Let me take a very simple example. Probably most of our audience know about GDPR, which is the European law with the right to be forgotten. So if you're an enterprise, and I say, George, I want my data deleted, you have to delete all of my data within a period of time. Now, that's where one of the aspects you talked about, governance, comes in. How do you make sure you have governance across not just data but your individual assets? So if you're using a multi-vendor solution, in all of that, with that state of governance, how do I make sure that data gets deleted by all these services that are tied together?
>> Let me maybe make an analogy. On CSI, when they pick up something at the crime scene, they've got to make sure that it's bagged, and the chain of custody doesn't lose its integrity all the way back to the evidence room. I assume you're talking about something like that.
>> Yeah, something similar. Where the data, as it moves between private cloud, public cloud, and the analytical assets using that data, all those things need to work seamlessly for you to execute that particular transaction, to delete data from everywhere.
>> So it's not just administrative costs, but regulations that are pushing towards more homogenous platforms.
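Stepping back to the feedback loop Dinesh described: mechanically it reduces to a small control loop that watches the deployed model's score and retrains when it dips. A hedged skeleton follows, with every callable injected as a placeholder rather than any real product API.

```python
# Skeleton of the score-dip feedback loop described above. All callables
# (load/save, evaluate, train, notify) are hypothetical placeholders.
CHAMPION_PATH = "/models/champion"
SCORE_FLOOR = 0.80   # illustrative threshold

def monitor_and_retrain(load_model, save_model, evaluate, train,
                        load_recent_data, notify):
    champion = load_model(CHAMPION_PATH)
    score = evaluate(champion)                 # e.g. AUC on a recent window
    if score >= SCORE_FLOOR:
        return score                           # champion still healthy
    notify("model score dipped to %.3f, retraining" % score)
    challenger = train(load_recent_data())     # retrain on real time data
    if evaluate(challenger) > score:           # champion/challenger gate
        save_model(challenger, CHAMPION_PATH)  # promote without an offline slot
    return score
```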
>> Right, right. And even if you take some of the other things on the stack, monitoring, logging, metering, each provides some of those capabilities, but you have to make sure, when you put all these services together, how are they going to integrate? You have one monitoring stack, so if you're pulling, you know, your IoT kind of data into a data center, for your whole stack evaluation, how do you make sure you're getting the right monitoring data across the board? Those are the kinds of challenges that you will have.
>> It's funny you mention that, because we were talking to an old Lotus colleague of mine, who was CTO of Microsoft's IT organization, and we were talking about how the cloud vendors can put a machine learning management application across their properties, or their services. But he said one of the first problems you'll encounter is the telemetry: like, it's really easy on hardware, CPU utilization, memory utilization, I/O, but as you get higher up in the application services, it becomes much more difficult to harmonize, so that a program can figure out what's going wrong.
>> Right, and I mean, like anomaly detection, right?
>> Yes.
>> I mean, how do you make sure you're seeing patterns where you can predict something before it happens, right?
>> Is that on the road map for...?
>> Yeah, so we're already working with some big customers to say, if you have a data center, how do you look at outages to predict what can go wrong in the future? Root cause analysis, I mean, that is a huge problem to solve. So let's say a customer hit a problem, you took an outage: what caused it? Because today, you have specialists who will come and try to figure out what the problem is, but can we use machine learning or deep learning to figure out, is it a fix that was missing, or did an application get changed that caused a CPU spike, that caused the outage? So that whole root cause analysis is the one that's the hardest to solve, because you're talking about people's decades worth of knowledge, and now you're asking a machine to do that prediction.
>> And from my understanding, root cause analysis is most effective when you have a rich model of how, in this case, your data infrastructure and apps are working, and there might be many little models, but they're held together by some sort of knowledge graph that says here is where all the pieces fit, these are the pieces below these, sort of as peers to these other things. How does that knowledge graph get built, and is this the next generation of a configuration management database?
>> Right, so I call it the self-healing, self-managing, self-fixing data center. It's easy for you to turn up the heat or A/C and the temperature goes down, I mean, those are good, but the real value for a customer is exactly what you mentioned: building up that knowledge graph from different models that all come together. But the hardest part is, predicting an anomaly is one thing, but getting to the root cause is a different thing, because at that point, you're saying, I know exactly what caused this problem, and I can prevent it from happening again. That's not easy. We are working with our customers to figure out how we get to the root cause analysis, but it's all about building the knowledge graph with multiple models coming from different systems. Today, I mean, enterprises have different systems from multiple vendors.
We have to bring all that monitoring data into one source, and that's where that knowledge comes in, and then different models will feed that data, and then you need to mine that data, using deep learning algorithms, to say: what caused this?
>> Okay, so this actually sounds extremely relevant, although we're probably, in the interest of time, going to have to dig down on that one another time. But just at a high level, it sounds like the knowledge graph is sort of your web or directory into how local components or local models work, and then, knowing that, if it sees problems coming up here, it can understand how it affects something else tangentially.
>> So think of the knowledge graph as a neural net, because it's just building a new neural net based on the past data, and it has that built-in knowledge where it says, okay, these symptoms seem to be a problem that I have encountered in the past. Now I can predict the root cause, because I know this happened in the past. So it's kind of like you're putting that net together to build new problem determinations as it goes along. So it's a complex task. It's not easy to get to root cause analysis. But that's something we are aggressively working on developing.
>> Okay, so let me ask, let's talk about sort of democratizing machine learning and the different ways of doing that. You've actually talked about the big pain points, maybe not so sexy, but that are critical, which are operationalizing the models and preparing the data. Let me bounce off you some of the other approaches. One that we have heard from Amazon is that they're saying, well, data munging might be an issue, and operationalizing the models might be an issue, but the biggest issue in terms of making this developer-ready is: we're going to take the machine learning we use to run our business, whether it's merchandising fashion, running recommendation engines, managing fulfillment or logistics, and, just like they did with AWS, they're dog-fooding it internally and then they're going to put it out on AWS as a new layer of the platform. Where do you see that being effective, and where less effective?
>> Right, so let me answer the first part of your question, the democratization of machine learning. So that happens when, for example, a real estate agent who has no idea about machine learning is able to come and predict the house prices in this area. That, to me, is democratizing, because at that time you have made it available to everyone; everyone can use it. But that comes back to our first point, which is having that clean set of data. You can build all the pre-canned pipelines out there, but if you're not feeding a clean set of data in, none of this works, you know. Garbage in, garbage out, that's what you're going to get. So when we talk about democratization, it's not that easy and simple, because you can build all these pre-canned pipelines that you have used in-house for your own purposes, but every customer has many unique cases. So if I take you as a bank, your fraud detection methods are completely different than mine as a bank; my limit for fraud detection could be completely different. So there is always customization involved, the data that's coming in is different. So while it's a buzzword, I think there's knowledge that people need to feed it, there are models that need to be tuned and trained, and there's deployment that is completely different, so you know, there is work that has to be done.
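What "pre-canned pipeline, customized per customer" might look like in code, sketched with Spark ML. The template path, columns, and decision threshold are all assumptions for illustration, not a real product API.

```python
# Hedged sketch: load a shipped pipeline template, refit it on the
# customer's own data, and apply the customer's own decision limit.
# Paths and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline

spark = SparkSession.builder.getOrCreate()

precanned = Pipeline.load("/templates/fraud-detection")   # shipped template
bank_df = spark.read.parquet("/bank/transactions")        # this bank's data

model = precanned.fit(bank_df)                            # retrain on their data
scored = model.transform(bank_df)

# Every bank draws the fraud line differently.
flagged = scored.filter("prediction = 1.0")
flagged.select("transaction_id").show()
```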
>> So then what I'm taking away from what you're saying is, you don't have to start from ground zero with your data, but you might want to add some of your data, which is specialized, or slightly different from what the pre-trained model saw. You still have to worry about operationalizing it, so it's not a pure developer-ready API, but it uplevels the skills requirement so that it's not quite as demanding as working with TensorFlow or something like that.
>> Right, I mean, so you can always build pre-canned pipelines and make them available; we have already done that. For example, for fraud detection, we have pre-canned pipelines; for IT analytics, we have pre-canned pipelines. So it's nothing new: you can always take what you have done in house and make it available to the public or the customers, but then they have to take it and do the customization to meet their demands, bring their data to retrain the model; all those things have to be done. It's not just about providing the model, because every customer use case is completely different, whether you are looking at fraud detection from one bank's perspective or another's; not all banks are going to do the same thing. Same thing for predicting, for example, the loan; I mean, your loan approval process is going to be completely different than mine as a bank.
>> So let me ask you then, and we're getting low on time here, but if you had to characterize Microsoft Azure, Google, and Amazon as each bringing to bear certain advantages and disadvantages, and you're now the ambassador, so you're not a representative of IBM, help us understand the sweet spot for each of those. You're trying to fix the two sides of the pipeline, I guess, thinking of it like a barbell, you know; where are the others, based on their data assets and their tools, and where do they need to work?
>> So, there are two aspects of it. There's the enterprise aspect: as an enterprise, I would like to say, it's not just about the technology, but there's also the services aspect. If my model goes down in the middle of the night and my banking app is down, who do I call? If I'm using a service that is available on the cloud provider which is open source, do I have the right amount of coverage to call somebody and fix it? So there are the enterprise capabilities, availability, reliability; that is different. Then a developer comes in, has a CSV file that he or she wants to build a model on to predict something; that's different. These are two different aspects. So if you talk about all these vendors, if I'm carrying an enterprise card, some of the things I would look at are: can I get an integrated solution, end to end, on the machine learning platform?
>> And that means end to end in one location,
>> Right.
>> So you don't have network issues or latency and stuff like that.
>> Right, it's an integrated solution, where I can bring in the data, there are no challenges with latency, those kinds of things, and then, can I get the enterprise level service, SLAs, all those things, right? So, in there, the named vendors obviously have an upper hand, because they are preferred by enterprises over a brand new open source vendor that comes along. But then, within enterprises, there are lines of business building models using some of the open source vendors, which is okay, but eventually those have to get deployed, and then how do you make sure you have the enterprise capabilities there? So if you ask me, I think each vendor brings some capabilities.
I think the benefit IBM brings is, one, you have the choice or the freedom to bring in cloud or on-prem or hybrid; you have all the choices of languages, like we support R, Python, Spark, I mean, SPSS. So the choice, the freedom, the reliability, the availability, the enterprise nature, that's where IBM comes in and differentiates, and that's, for our customers, a huge plus.
>> One last question, and we're really out of time. In terms of thinking about a unified pipeline: when we were at Spark Summit, sitting down with Matei Zaharia and Reynold Xin, the question came up that Databricks has an incomplete pipeline, no persistence, no ingest, not really much in the way of serving, but boy are they good at, you know, data transformation and munging and machine learning; but they said they consider it part of their ultimate responsibility to take control. And on the ingest side it's Kafka; the serving side might be Redis or something else, or the Spark databases like SnappyData and Splice Machine. Spark is so central to IBM's efforts. What might a unified Spark pipeline look like? Have you guys thought about that?
>> It's not there; obviously they could be working on it. But for our purposes, Spark is critical for us, and the reason we invested in Spark so much is because of the execution engine, where you can take a tremendous amount of data and, you know, crunch through it in a very short amount of time. That's the reason we also invested in Spark SQL, because we have a good chunk of customers who still use SQL heavily. We put a lot of work into Spark ML, so we are continuing to invest, and probably they will get to an integrated solution, but it's not there yet. As it comes along, we'll adapt. If it meets our needs and demands, and an enterprise can use it, then definitely; I mean, you know, we saw that Spark's core engine has the ability to crunch a tremendous amount of data, so we are using it. I mean, 45 of our internal products use Spark as their core engine. Our DSX, Data Science Experience, has Spark as its core engine. So, yeah, today it's not there, but I know they're probably working on it, and if there are elements of this whole pipeline that come together, that are convenient for us to use at an enterprise level, we will definitely consider using them.
>> Okay, on that note, Dinesh, thanks for joining us, and taking time out of your busy schedule. My name is George Gilbert, I'm with Dinesh Nirmal from IBM, VP of Analytics Development, and we are at the Cube studio in Palo Alto, and we will be back in the not too distant future, with more interesting interviews with some of the gurus at IBM. (peppy music)
Mark Grover & Jennifer Wu | Spark Summit 2017
>> Announcer: Live from San Francisco, it's theCUBE. Covering Spark Summit 2017, brought to you by Databricks.
>> Hi, we're back here where theCUBE is live, and I didn't even know it. Welcome, we're at Spark Summit 2017. Having so much fun talking to our guests, I didn't know the camera was on. We are doing a talk with Cloudera, a couple of experts that we have here. First is Mark Grover, who's a software engineer and an author. He wrote the book "Hadoop Application Architectures." Mark, welcome to the show.
>> Mark: Thank you very much. Glad to be here.
>> And just to his left we also have Jennifer Wu, and Jennifer's director of product management at Cloudera. Did I get that right?
>> That's right. I'm happy to be here, too.
>> Alright, great to have you. Why don't we get started talking a little bit more about what Cloudera is maybe introducing new at the show? I saw a booth over here. Mark, do you want to get started?
>> Mark: Yeah, there are two exciting things that we've launched, at least recently. There's Cloudera Altus, which is for transient workloads, being able to do ETL-like workloads, and Jennifer will be happy to talk more about that. And then there's Cloudera Data Science Workbench, which is this tool that allows folks to use data science at scale: get away from doing data science in silos on your personal laptops, and do it in a secure environment on the cloud.
>> Alright, well, let's jump into Data Science Workbench first. Tell me a little bit more about that; you mentioned it's for exploratory data science. So give us a little more detail on what it does.
>> Yeah, absolutely. So, there was a private beta for Cloudera Data Science Workbench earlier in the year, and then it went GA a few months ago. And it's like you said, an exploratory data science tool that brings data science to the masses within an enterprise. Previously, there was this dichotomy, right? As a data scientist, I want to have the latest and greatest tools. I want to use the latest version of Python, the latest notebook kernel, and I want to be able to use R and Python to crunch this data and run my models in machine learning. However, on the other side of this dichotomy is the IT organization, which wants to make sure that all tools are compliant, that your clusters are secure, and that your data is not going into places that are not secured by state of the art security solutions, like Kerberos, for example, right? And of course, if the data scientists are putting the data on their laptops and taking the laptops around wherever they go, that's not really a solution. So, that was one problem. And the other one was, if you were to bring them all together in the same solution, data scientists have different requirements. One may want to use Python 2.6. Another one may want to use 3.2, right? And so Cloudera Data Science Workbench is a new product that allows data scientists to visualize and do machine learning through this very nice notebook-like interface, share their work with the rest of their colleagues in the organization, but also allows you to keep your clusters secure. So it allows you to run against a Kerberized cluster, allows single sign-on to your web interface to Data Science Workbench, and provides a really nice developer experience, in the sense that my workflow and my tools and my version of Python do not conflict with Jennifer's version of Python.
We all have our own Docker and Kubernetes-based infrastructure that makes sure that we use the packages that we need, and they don't interfere with each other.
>> We're going to go to Jennifer on Altus in just a few minutes, but George, first let's give you a chance to maybe dig in on Data Science Workbench.
>> Yeah. >> Okay. >> Alright, well let's go to Jennifer now and talk about Altus a little bit. Now you've been on theCUBE show before, right? >> I have not. >> Okay, well, we're familiar with your work. Tell us again, you're the product manager for Altus. What does it do, and what was the motivation to build it? >> Yeah, we're really excited about Cloudera Altus. So, we released Cloudera Altus in its first GA form in April, and we launched Cloudera Altus in a public environment at Strata London about two weeks ago, so we're really excited about this, and we are very excited to now open this up to all of the customer base. And what it is is a platform-as-a-service offering designed to leverage, basically, the agility and the scale of cloud, and make a very easy-to-use experience that exposes Cloudera capacity, in particular for data engineering types of workloads. So the end user will be able to very easily, in a very agile manner, get data engineering capacity on Cloudera in the cloud, and they'll be able to do things like ETL, large-scale data processing, and productionized machine learning workflows in the cloud with this new data-engineering-as-a-service experience. And we wanted to abstract away the cloud and cluster operations, and make the end user experience very easy. So, jobs and workloads are first-class objects. You can do things like submit jobs, clone jobs, terminate jobs, troubleshoot jobs. We wanted to make this very, very easy for the data engineering end user. >> It does sound like you've sort of abstracted away a lot of the infrastructure that you would associate with on-prem, and sort of almost made it, like, programmable and invisible. But, um, one of my questions is, when you put it in a cloud environment, when you're on-prem you have a certain set of competitors, which is kind of restrictive, because you are the standalone platform. But when you go on the cloud, someone might say, "I want to use Redshift on Amazon," or Snowflake, you know, as the MPP SQL database at the end of a pipeline. And I'm using those only as examples. There's, you know, dozens, hundreds, thousands of other services to choose from. >> Yes. >> What happens to the integrity of that platform if someone carves off one piece? >> Right. So, interoperability and a unified data pipeline are very important to us, so we want to make sure that we can still service the entire data pipeline, all the way from ingest and data processing to analytics. So our team has 24 different open source components that we deliver in the CDH distribution, and we have committers across the entire stack. We know the application, and we want to make sure that everything's interoperable, no matter how you deploy the cluster. So if you deploy data engineering clusters through Cloudera Altus, but you deployed Impala clusters for data marts in the cloud through Cloudera Director or through any other format, we want all these clusters to be interoperable, and we've taken great pains in order to make everything work together well. >> George: Okay. So how do Altus and Data Science Workbench interoperate with Spark? Maybe start with... >> You want to go first with Altus?
>> Sure, so, in terms of interoperability we focus on things like making sure there are no data silos, so that the data in your entire data lake can be consumed by the different components in our system, the different compute engines and different tools, and so if you're processing data you can also look at this data and visualize it through Data Science Workbench. So after you do data ingestion and data processing, you can use any of the other analytic tools, and this includes Data Science Workbench. >> Right, and Data Science Workbench runs, for example, with the latest version of Spark you could pick; the currently latest released version is Spark 2.1, Spark 2.2 is being voted on of course, and that will be integrated soon after its release. For example, you could use Data Science Workbench with your flavor of Spark 2.x, and you can run PySpark or Scala jobs on this notebook-like interface and be able to share your work. And because you're using Spark, underneath the hood it uses YARN for resource management; the Data Science Workbench itself uses Docker for configuration management, and Kubernetes for resource-managing these Docker containers. >> What would be, if you had to describe sort of the edge conditions and the sweet spot of the application, I mean you talked about data engineering. One thing we were talking to Matei Zaharia and Reynold Xin about, and Ali Ghodsi as well, was if you put Spark on a database, or at least a, you know, sophisticated storage manager, like Kudu, all of a sudden there's a whole new class of jobs or applications that open up. Have you guys thought about what that might look like in the future, and what new applications you would tackle? >> I think a lot of that benefit, for example, could be coming from the underlying storage engine. So let's take Spark on Kudu, for example. The inherent characteristics of Kudu today allow you to do updates without having to either deal with the complexity of something like HBase, or the crappy performance of dealing with HDFS compactions, right? So the sweet spot comes from Kudu's capabilities. Of course it doesn't support transactions or anything like that today, but imagine putting something like Spark on it and being able to use the machine learning libraries. We have sometimes been limited in the machine learning algorithms that we have implemented in Spark by the storage system, and, for example, new machine learning algorithms, or the existing ones, could be rewritten to make use of the update features in Kudu, for example. >> And so, it sounds like the machine learning pipeline might get richer, but I'm not hearing that, and maybe this isn't sort of in the near-term roadmap, the idea that you would build sort of operational apps that have these sophisticated analytics built in, you know, where the analytics, you've done the training but at run time, you know, the inferencing influences a transaction, influences a decision. Is that something that you would foresee? >> I think that's totally possible. Again, at the core of it is the fact that now you have one storage system that can do scans really well, and it can also do random reads and writes any place, right?
So that allows applications which were previously siloed, one application that ran off of HDFS and another that ran out of HBase, which you then had to correlate, to become one single application that can train and then also use the trained model to make decisions on the new transactions that come in. >> So that's very much within the sort of scope of imagination. That's part of sort of the ultimate plan? >> Mark: I think it's definitely conceivable now, yeah. >> Okay. >> We're up against a hard break coming up in just a minute, so you each get a 30-second answer here, and it's the same question. You've been here for a day and a half now. What's the most surprising thing you've learned that you think should be shared more broadly with the Spark community? Let's start with you. >> I think one of the great things happening in Spark today is, people have been complaining about latency for a long time, and if you saw the keynote yesterday, you would see that Spark is making forays into reducing that latency. If you are interested in Spark, or using Spark, it's very exciting news. You should keep tabs on it. We hope to deliver lower latency as a community soon. >> How long is one millisecond? (Mark laughs) >> Yeah, I'm largely focused on cloud infrastructure, and I found here at the conference that many, many people are very much prepared to actually start taking on more POCs and more interest in cloud, and the response to all of this, and to Altus, has been very encouraging. >> Great. Well, Jennifer, Mark, thank you so much for spending some time here on theCUBE with us today. We're going to come by your booth and chat a little bit more later. It's some interesting stuff. And thank you all for watching theCUBE today here at Spark Summit 2017, and thanks to Cloudera for bringing us these two experts. And thank you for watching. We'll see you again in just a few minutes with our next interview.
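[Editor's note: a rough sketch of the Spark-on-Kudu pattern Mark describes, reading an updatable Kudu table from PySpark with the kudu-spark connector. The master address and table name are hypothetical, and this assumes the connector JAR is on the classpath and a SparkSession named spark:

    # Load a Kudu table as a DataFrame; because Kudu supports in-place
    # updates, the rows read here reflect upserts without HBase-style
    # complexity or HDFS compactions
    events = (spark.read
        .format("org.apache.kudu.spark.kudu")
        .option("kudu.master", "kudu-master:7051")
        .option("kudu.table", "impala::default.events")
        .load())

    events.createOrReplaceTempView("events")
    spark.sql("SELECT device, COUNT(*) AS n FROM events GROUP BY device").show()

The same connector can also write back, so in principle a machine learning job could score rows and update the very table it scans, which is the single-storage-system point Mark makes above.]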
Day One Wrap - #SparkSummit - #theCUBE
>> Announcer: Live from San Francisco, it's the CUBE covering Spark Summit 2017, brought to you by Databricks. (energetic music plays) >> And what an exciting day we've had here at the CUBE. We've been at Spark Summit 2017, talking to partners, to customers, to founders, technologists, data scientists. It's been a load of information, right? >> Yeah, an overload of information. >> Well, George, you've been here in the studio with me talking with a lot of the guests. I'm going to ask you to maybe recap some of the top things you've heard today for our guests. >> Okay so, well, Databricks laid down, sort of, three themes that they wanted folks to take away: deep learning, Structured Streaming, and serverless. Now, deep learning is not entirely new to Spark, but they've dramatically improved their support for it, I think, going beyond the frameworks that were written specifically for Spark, like Deeplearning4j and BigDL by Intel. And now TensorFlow, which is the opensource framework from Google, has gotten much better support. Structured Streaming, it was not clear how much more news we were going to get, because it's been talked about for 18 months. And they really, really surprised a lot of people, including me, where they took, essentially, the processing time for an event or a small batch of events down to 1 millisecond, whereas before, it was in the hundreds if not higher. And that changes the type of apps you can build. And also, the Databricks guys had coined the term continuous apps, which means they operate on a never-ending stream of data, which is different from what we've had in the past, where it's batch or, with a user interface, request-response. So they definitely turned up the volume on what they can do with continuous apps. And serverless, they'll talk about more tomorrow, and Jim, I think, is going to weigh in, but it basically greatly simplifies the ability to run this infrastructure, because you don't think of it as a cluster of resources. You just know that it's sort of out there, and you ask requests of it, and it figures out how to fulfill them. I will say, the other big surprise for me was when we had Matei, who's the creator of Spark and the chief technologist at Databricks, come on the show, and we asked him how Spark was going to deal with, essentially, more advanced storage of data, so that you could update things, so that you could get queries back, so that you could do analytics, and not just on stuff that's stored in Spark, but stuff that Spark stores essentially below it. And he said, you know, you can expect to see Databricks come out with, or partner with, a database to do these advanced scenarios. And I got the distinct impression, after listening to the tape again, that he was talking about Apache Spark, which is separate from Databricks, doing some sort of key-value store. So in other words, when you look at competitors or quasi-competitors like Confluent with Kafka or data Artisans with Flink, they're not perfect competitors; they overlap some. Now Spark is pushing its way more into overlapping with some of those solutions. >> Alright. Well, Jim Kobielus. And thank you for that, George. You've been mingling with the masses today. (laughs) And you've been here all day as well. >> Educated masses, yeah, (David laughs) who are really engaged in this stuff, yes. >> Well, great, maybe give us some of your top takeaways after all the conversations you've had today. >> They're not all that dissimilar from George's.
Databricks, of course, being the center, the developer, the primary committer in the Spark opensource community, has done a number of very important things in terms of the announcements today at this event that push Spark, the Spark ecosystem, where it needs to go to expand the range of capabilities and their deployability into production environments. I feel the deep-learning side, the announcement in terms of the deep-learning pipeline API, is very, very important. Now, as George indicated, Spark has been used in a fair number of deep-learning development environments, but not as a modeling tool so much as a training tool, a tool for in-memory distributed training of deep-learning models that were developed in TensorFlow, in Caffe, and other frameworks. Now this announcement is essentially bringing support for deep learning directly into the Spark modeling pipeline, the machine-learning modeling pipeline, being able to call out to deep learning, you know, TensorFlow and so forth, from within MLlib. That's very important. That means that Spark developers, of which there are many, far more than there are TensorFlow developers, will now have an easy path to bring more deep learning into their projects. That's critically important to democratize deep learning. I hope, and from what I've seen, what Databricks has indicated is that they have support currently in the API reaching out to both TensorFlow and Keras, and that they have plans to bring in API support for access to other leading DL toolkits, such as Caffe; Caffe 2, which is Facebook-developed; MXNet, which is Amazon-developed; and so forth. That's very encouraging. Structured Streaming is very important in terms of what they announced, which is an API to enable access to faster, or higher-throughput, Structured Streaming in their cloud environment. And they also announced that they have gone beyond, in terms of the code that they've built, the micro-batch architecture of Structured Streaming, to enable it to evolve into a more true streaming environment, to be able to contend credibly with the likes of Flink. 'Cause I think that the Spark community has, sort of, had their back against the wall with Structured Streaming, in that they couldn't fully provide a true sub-millisecond, end-to-end latency environment heretofore. But it sounds like with this R&D, Databricks is addressing that, and that's critically important for the Spark community to continue to evolve in terms of continuous computation. And then the serverless-apps announcement is also very important, 'cause I see it as really being a fully-managed, multi-tenant Spark-development environment, an enabler for continuous build, deploy, and testing DevOps within a Spark machine-learning and now deep-learning context. The Spark community, as it evolves and matures, needs robust DevOps tools to production-ize these machine-learning and deep-learning models. Because really, in many ways, many customers, many developers are now using, or developing, Spark applications that are real 24-by-7 enterprise application artifacts that need a robust DevOps environment. And I think that Databricks has indicated they know where this market needs to go, and they're pushing it with R&D. And I'm encouraged by all those signs. >> So, great. Well thank you, Jim. I hope both you gentlemen are looking forward to tomorrow. I certainly am. >> Oh yeah. >> And to you out there, tune in again around 10:00 a.m. Pacific Time. We're going to be broadcasting live here.
From Spark Summit 2017, I'm David Goad with Jim and George, saying goodbye for now. And we'll see you in the morning. (sparse percussion music playing) (wind humming and waves crashing).
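[Editor's note: for readers who want to see what the deep-learning pipeline API Jim describes looks like in code, a rough sketch in PySpark using Databricks' Deep Learning Pipelines library (sparkdl), following its early public examples; the image paths and labels are hypothetical assumptions:

    from pyspark.ml import Pipeline
    from pyspark.ml.classification import LogisticRegression
    from pyspark.sql.functions import lit
    from sparkdl import DeepImageFeaturizer, readImages

    # Two labeled folders of images become one training DataFrame
    bond = readImages("/data/cars/bond").withColumn("label", lit(1))
    other = readImages("/data/cars/other").withColumn("label", lit(0))
    train = bond.union(other)

    # Transfer learning: a pretrained InceptionV3 acts as a fixed featurizer,
    # and only a simple classifier is trained on top of its features
    featurizer = DeepImageFeaturizer(inputCol="image", outputCol="features",
                                     modelName="InceptionV3")
    lr = LogisticRegression(labelCol="label", featuresCol="features")
    model = Pipeline(stages=[featurizer, lr]).fit(train)

This is the pattern behind the image-classification demo mentioned elsewhere in this coverage: the deep network runs as just another stage in an MLlib pipeline.]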
Matthew Hunt | Spark Summit 2017
>> Announcer: Live from San Francisco, it's theCUBE covering Spark Summit 2017, brought to you by Databricks. >> Welcome back to theCUBE, we're talking about data science and engineering at scale, and we're having a great time, aren't we, George? >> We are! >> Well, we have another guest now we're going to talk to, I'm very pleased to introduce Matt Hunt, who's a technologist at Bloomberg, Matt, thanks for joining us! >> My pleasure. >> Alright, we're going to talk about a lot of exciting stuff here today, but I want to first start with, you're a long-time member of the Spark community, right? How many Spark Summits have you been to? >> Almost all of them, actually, it's quite amazing to see the 10th one, yes. >> And you're pretty actively involved with the user group on the east coast? >> Matt: Yeah, I run the New York users group. >> Alright, well, what's that all about? >> We have some 2,000 people in New York who are interested in finding out what goes on, and which technologies to use, and what are people working on.
>> I would definitely say that we have some things that are latency constrained, it tends to be not like high-frequency trading, where you care about microseconds, but milliseconds are important, how long does it take to get an answer, but I would say equally important with latency is efficiency, and those two often wind up being coupled together, though not always. >> And so when you say coupled, is it because it's a trade-off, or 'cause you need both? >> Right, so it's a little bit of both. For a number of things, there's an upper threshold for the latency that we can accept. Certain architectural changes imply higher latencies, but often, greater efficiencies. Micro-batching often means that you can simplify and get greater throughput, but at a cost of higher latency. On the other hand, if you have a really large volume of things coming in, and your method of processing them isn't efficient enough, it gets too slow simply from that, and that's why it's not just one or the other. >> So in getting down to one millisecond or below, can they expose knobs where you can choose the trade-offs between efficiency and latency, and is that relevant for the apps that you're building? >> I mean, clearly if you can choose between micro-batching and not micro-batching, that's a knob that you can have, so that's one explicit one, but part of what's useful is, often when you sit down to try and determine what is the main cause of latency, you have to look at the full profile of the stack it's going through, and then you discover other inefficiencies that can be ironed out, and so it just makes it faster overall. I would say a lot of what the Databricks guys in the Spark community have worked on over the years is connected to that, Project Tungsten and so on, all these things that make things much slower, much less efficient than they need to be, and we can close that gap a lot, I would say that from the very beginning. >> This brings up something that we were talking about earlier, which is, Matei has talked for a long time about wanting to take end-to-end control of continuous apps, for simplicity and performance, and so there's this, we'll write with transactional consistency, so we're assuring the customer of exactly-once semantics when we write to a file system or database or something like that. But Spark has never really done native storage, whereas Matei came here on the show earlier today and said, "Well, Databricks as a company is going to have to do something in that area," and he talked specifically about databases, and he implied that Apache Spark, separate from Databricks, would also have to do more in state management, I don't know if he was saying key-value store, but how would that open up a broader class of apps, how would it make your life simpler as a developer? >> Right. Interesting and great question, this is kind of a subject that's near and dear to my own heart, I would say. So part of that, when you take a step back, is about some of the potential promise of what Spark could be, or what they've always wanted it to be, which is a form of a universal computation engine. So there's a lot of value if you can learn one small skillset, but it can work in a wide variety of use cases, whether it's streaming or at rest or analytics, and plug other things in.
As always, there's a gap in any such system between theory and reality, and how much can you close that gap. But as for storage systems, this is something that you and I have talked about before, and I've written about it a fair amount too. Spark is historically an analytic system, so you have a bunch of data, and you can do analytics on it, but where's that data come from? Well, either it's streaming in, or you're reading from files, but most people need, essentially, an actual database. So what constitutes the universal system? You need a distributed file store, you need a database with generally transactional semantics, because the other forms are too hard for people to understand, you need analytics that are extensible, and you need a way to stream data in, and there's how close can you get to that, versus how much do you have to fit other parts together, a very interesting question. >> So, so far, they've sort of outsourced that to DIY, do-it-yourself, but if they can find a sufficiently scalable relational database, they can do the sort of analytical queries, and they can sort of maintain state with transactions for some amount of the data flowing through. My impression is that, like, Cassandra would be, sort of, the database that would handle all updates, and then some amount of those would be filtered through to a multi-model DBMS. When I say multi-model, I mean one that handles transactions and analytics. Knowing that you would have the option to drop that in, what applications would you undertake that you couldn't build right now, where the theme was, we're going to take big data apps into production, and the competition that they show for streaming is Kafka and Flink, so what does that do to that competitive balance? >> Right, so how many pieces do you need, and how well do they fit together, is maybe the essence of that question, and people ask that all the time, and one of the limits has been, how mature is each piece, how efficient is it, and do they work together? If you have to master 5,000 skills and 200 different products, that's a huge impediment to real-world usage. I think we're coalescing around a smaller set of options, so Kafka, for example, has a lot of usage, and the industry seems to be settling on that as what people use for inbound streaming data, for ingest; I see that everywhere I go. But what happens when you move from Kafka into Spark, or Spark has to read from a database? This is partly a question of maturity. Relational databases are very hard to get right. The ones that we have have been under development for decades, right? I mean, DB2 has been around for a really long time with very, very smart people working on it, or Oracle, or lots of other databases. So at Bloomberg, we actually developed our own relational database, designed for low latency and very high reliability, and we actually just opensourced it a few weeks ago; it's called Comdb2. The reason we had to do that was that the industry solutions at the time, when we started working on it, were inadequate to our needs, but we look at how long that took to develop for these other systems and think, that's really hard for someone else to get right, and so, if you need a database, which everyone does, how can you make that work better with Spark?
And I think there are a number of very interesting developments that can make that a lot better, short of Spark becoming, or integrating, a database directly, although there are interesting possibilities with that too. How do you make them work well together? We could talk about that for a while, 'cause that's a fascinating question. >> On that one topic, maybe the Databricks guys don't want to assume responsibility for the development, because then they're picking a winner, perhaps? Maybe, as Matei told us earlier, they can make the APIs easier to use for a database vendor to integrate, but, like we've seen Splice Machine and SnappyData do, take it upon themselves to take data frames, the core data structure in Spark, and give them transactional semantics. Does that sound promising? >> There're multiple avenues for potential success, and who can use which, in a way, depends on the audience. If you look at things like Cassandra and HBase, they're distributed key-value stores that additional things are being built on, so they started as distributed, and they're moving towards more encompassing systems, versus relational databases, which generally started as a single image on a single machine, and are moving towards federation and distribution, and there's been a lot of that with Postgres, for example. One of the questions would be, is it just knobs, or why don't they work well together? And there're a number of reasons. One is, what can be pushed down, and how much knowledge do you have to have to make that decision, and optimizing that, I think, is actually one of the really interesting things that could be done; just as we have database query optimizers, why not, can you determine the best way to execute down a chain? In order to do that well, there are two things that you need that haven't yet been widely adopted, but are coming. One is very efficient copying of data between systems, and Apache Arrow, for example, is very, very interesting, and it's nearing the time when I think it's just going to explode, because it lets you connect these systems radically more efficiently in a standardized way, and that's one of the things that was missing; as soon as you hop from one system to another, all of a sudden you have the serialization computational expense. That's a problem, and we can fix that. The other is, the next level of integration requires, basically, exposing more hooks. In order to know where a query should be executed and which operator should be pushed down, you need something that I think of as a meta-optimizer, and also knowledge about the shape of the data, or the underlying statistics, and ways to exchange that back and forth to be able to do it well.
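[Editor's note: Matt's point about Apache Arrow can be made concrete with the Arrow-backed transfer that later landed in Spark 2.3: one configuration flag switches the JVM-to-Python copy from per-row serialization to columnar batches. A sketch, assuming a SparkSession named spark and the pyarrow package installed:

    # Without Arrow: each row is serialized and shipped to Python one at a time
    spark.conf.set("spark.sql.execution.arrow.enabled", "false")
    pdf_slow = spark.range(1000000).toPandas()

    # With Arrow: the same data moves as columnar record batches, typically
    # an order of magnitude faster, since little per-row work remains
    spark.conf.set("spark.sql.execution.arrow.enabled", "true")
    pdf_fast = spark.range(1000000).toPandas()

The data is identical either way; only the cost of hopping between systems changes, which is exactly the overhead being described.]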
The hardest thing of all is to get, anywhere you in any organization's to get people working together, but the more people work together to enable these pieces, how do I efficiently work with databases, or have these better optimizations make streaming more mature, the more people can use it in practice, and that's why people develop software, is to actually tackle these real-world problems, so, I would love to see more of that. >> Can we all get along? (chuckling) Well, that's going to be the last word of this segue, Matt, thank you so much for coming on and spending some time with us here to share the story! >> My pleasure. >> Alright, thank you so much. Thank you George, and thank you all for watching this segment of theCUBE, please stay with us, as Spark Summit 2017 will be back in a few moments.
Kickoff - #SparkSummit - #theCUBE
>> Announcer: Live, from San Francisco, it's theCUBE! Covering Spark Summit 2017. Brought to you by Databricks. (energetic techno music) >> Welcome to theCUBE! I'm your host, David Goad, and we're here at Spark Summit 2017 in San Francisco, where it's all about data science and engineering at scale. Now, I know there's been a lot of great technology shows here at Moscone Center, but this is going to be one of the greatest, I think. We are joined here by George Gilbert, who is the lead analyst for big data and analytics at Wikibon. George, welcome to theCUBE. >> Good to be here, David. >> All right, so, I know this is kind of like reporting in real time, 'cause you're listening to the keynote right now, right? >> George: Yeah. >> Well, I know we wanted to get us started with some of the key themes that you've heard. You've done a lot of work recently with how applications are changing with machine learning, as well as the new distributed computing. So, as you listen to what Matei is talking about, and some of the other keynote presenters, what are some of the key themes you're hearing so far? >> There's two big things that they are emphasizing so far this year, or at this Spark Summit. One is structured streaming, which they've been talking about more and more over the last 18 months, but it officially goes production-ready in the 2.2 release of Spark, which is imminent. But they also showed something really, really interesting with structured streaming. There've always been other streaming products, and the relevance of streaming is that we're more and more building applications that process data continuously. Not in either big batches or just request-response with a user interface. Your streaming capabilities dictate the class of apps that you're appropriate for. The Spark structured streaming had a lot of overhead in it, 'cause it had to manage a cluster. It was working with a query optimizer, and so it would basically batch up events in groups that would go through, like, once every 200 milliseconds to a full second. Which is near real-time, but not considered real-time. And I know I'm driving into the details a bit, but it's very significant. They demoed on stage today-- >> David: I saw the demo. >> They showed structured streams running at one millisecond latency. That's a big breakthrough, because that means, essentially, you can do per-event processing, which is true streaming. >> And so this contributes to deep learning, right? Low-latency streaming. >> Well, it can complement it, because when you do machine learning, or deep learning, you basically have a model, and you want to predict something. The stream is flowing along, and so for every data element in the stream, you might want a prediction, or a classification, or something like that. Spark had okay support for deep learning before, but that's the other big emphasis now. Before, they could sort of serve models, like in production, but training models was somewhat more difficult for deep learning. That took parallelization they didn't have. >> I noticed there were three demos that kind of tied together in a little bit of a James Bond story. So, maybe the first one was talking about image classification, transfer learning, tell me a little bit more about what you heard from there. I know you need to mute your keynote. The first demo was from Tim Hunter.
>> The demo, like with James Bond, was, among my favorite movies, they show cars, they're learning to label cars, and then they're showing cars that appeared in James Bond movies, and so they're training the model to predict, was this car seen in a James Bond movie? And then they also have, they were joining it with data that showed where the car was last seen. So it's sort of like a James Bond sighting. And then they trained that model, and then they sort of ran it in production, essentially, at real-time speed. >> And the continuous processing demo showed how fast that could be run. >> Right, right. That was a cool demo. That was a nice visual. >> And then we had the gentleman from Stanford, Christopher Re, come up to talk more about the applications for machine learning. Is it really going to be radically easier to use? >> We didn't make it all the way through that keynote, but yes, there are things that can be used to make machine learning easier to use. For one thing, like, if you take the old statistical machine learning stuff, it's still very hard to identify the features, or the variables, that you're going to use in the model. And many people expect deep learning, over the next few years, to be able to help with that, so that the features are something that a data scientist would collaborate on with a domain expert. And deep learning, just the way it learns the features of a cat, like, here's the nose, here's the ears, here's the whiskers, there's the expectation that deep learning will help identify the features for models. So you turn machine learning on itself, and it helps things. Among other things that should get easier. >> We're going to get to talk to several of those keynoters a little bit later in the show, so we'll do a little deeper dive on that. Maybe talk to us just generally about who's here at this show, and what do you think they're looking for in the Spark community? >> Spark was always a bottom-up, adoption-first technology, because it fixed some really difficult problems with the predecessor technology, which was called MapReduce, which was the compute engine in Hadoop. That was not familiar to most programmers, whereas Spark, you know, there's an API for machine learning, there's an API for batch processing, for stream processing, graph processing, but you can use SQL over all of those, and that made it much more accessible. And the fact that, now machine learning's built in, streaming's built in. All those things, you basically, MapReduce, the old version, was the equivalent of assembly language. This is a SQL-level language. >> And so you were here at Spark Summit 2016, right? >> George: Yeah. >> We've seen some advances. Would you say it's incremental advances, or are we really making big leaps? >> Well, Spark 2.0 was a big leap, and we're just approaching 2.2. I would say that getting this structured streaming down to such low latency is a big, big deal, and adding good support for deep learning, which is now all the craze. Although most people are using it for, essentially, vision, listening, speaking, natural language processing, but it'll spread to other use cases. >> Yeah, we're going to hear about some more of those use cases throughout the show. We've got customers coming in, I won't name them all right now, but they'll be rolling in. What do you want to know most from those customers? >> The real thing is, Spark started out as, like, offline analytic preparation of data that was in data lakes.
And it's moving more into the mainstream of production apps. The big thing is, what's the sweet spot? What type of apps, where are the edge conditions? That's what I think we'll be looking for. >> And when Matei came out on stage, what did you hear from him? What was the first thing he was prioritizing? Feel free to check your notes that you were taking! >> He was talking about, he did the state of the union as he normally does. The astonishing figure that there's like 375,000, I think, Spark Meetup members-- >> David: Wow. >> Yeah. And that's grown over the last four years, basically, from almost zero. So his focus really was on deep learning and on streaming, and those are the things we want to drill down a little bit. In the context of, what can you build with both? >> Well, we're coming up on our first break here, George. I'm really looking forward to interviewing some more of the guests today. So, thanks very much, and I invite you to stay with us here on theCUBE. We'll see you soon. (energetic techno music)
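[Editor's note: the one-millisecond structured streaming discussed in this segment surfaced as the continuous processing trigger that shipped in Spark 2.3; a sketch of how the two modes differ at the API level, where the broker address, topics, and checkpoint paths are hypothetical and a SparkSession named spark is assumed:

    events = (spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")
        .option("subscribe", "events")
        .load())

    # Micro-batch mode (the default): events are grouped and processed in
    # small batches, favoring throughput at the cost of latency
    (events.writeStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")
        .option("topic", "out")
        .option("checkpointLocation", "/tmp/ckpt-batch")
        .trigger(processingTime="200 milliseconds")
        .start())

    # Continuous mode: each event is processed as it arrives, with end-to-end
    # latencies around a millisecond; the interval below is only how often
    # progress is checkpointed, not the processing latency
    (events.writeStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")
        .option("topic", "out")
        .option("checkpointLocation", "/tmp/ckpt-cont")
        .trigger(continuous="1 second")
        .start())

The programming model is the same either way, which is the point made above: the trigger, not the application code, decides where you land on the latency spectrum.]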
Wikibon Big Data Market Update Pt. 1 - Spark Summit East 2017 - #sparksummit - #theCUBE
>> [Announcer] Live from Boston, Massachusetts, this is theCUBE, covering Spark Summit East 2017, brought to you by Databricks. Now, here are your hosts, Dave Vellante and George Gilbert. >> We're back, welcome to Boston, everybody, this is a special presentation that George Gilbert and I are going to provide to you now. SiliconANGLE Media is the umbrella brand of our company, and we've got three sub-brands. One of them is Wikibon, it's the research organization that George works in, and then of course, we have theCUBE and then SiliconANGLE, which is the tech publication, and then we extensively, as you may know, use CrowdChat and other social data, but we want to drill down now on the Wikibon research side of things. Wikibon was the first research company ever to do a big data forecast. Many, many years ago, our friend Jeff Kelly produced that for several years, we opensourced it, and it really, I think, helped the industry a lot, sort of framing the big data opportunity, and then George last year did the first Spark forecast, really Spark adoption, so what we want to do now is talk about some of the trends in the marketplace. This is going to be done in two parts: today's part one, where we're really going to talk about the overall market trends and the market conditions, and then we're going to go to part two tomorrow, where you're going to release some of the numbers, right? And we'll share some of the numbers today. So, we're going to start on the first slide here, we're going to share with you some slides. The Wikibon forecast review, and George, I'm going to ask you to talk about where we are at with big data apps, everybody's saying it's peaked, big data's now going mainstream, so where are we at with big data apps? >> [George] Okay, so, I want to quote, just to provide context, the former CTO of VMware, Steve Herrod. He said, "In the end, it wasn't big data, it was big analytics." And what's interesting is that when we start thinking about it, there have traditionally been two classes of workloads: one batch, and in the context of analytics, that means running reports in the background, doing offline business intelligence, but then there was also the interactive-type work. What's emerging is something that's continuously happening, and it doesn't mean that all apps are going to be always on, it just means that all apps will have a batch component, an interactive component, like with the user, and then a streaming, or continuous, component. >> [Dave] So it's a new type of workload. >> Yes. >> Okay. Anything else you want to point out here? >> Yeah, what's worth mentioning is, it's not like it's going to burst fully-formed out of the clouds and become sort of a new standard; there are two things that have to happen: the technology has to mature, so right now you have some pretty tough trade-offs between integration, which provides simplicity, and choice and optimization, which gives you fragmentation, and then the skillset, and both of those need to develop. >> [Dave] Alright, we're going to talk about both of those a little bit later in this segment. Let's go to the next slide, which really talks to some of the high-level forecast that we released last year, so these are last year's numbers, correct? >> Yes, yes. >> [Dave] Okay, so, what's changed?
You've got the ogive curve, which is sort of the streaming penetration, Spark streaming, that was last year, and this is now reflective of continuous, you'll be updating that, so how is this changing, what do you want us to know here? >> [George] Okay, so the key takeaways here are, first, we took three application patterns, the first being the data lake, which is sort of the original canonical repository of all your data. That never goes away, but on top of it, you layer what we were calling last year systems of engagement, which is where you've got the interactive machine learning component helping to anticipate and influence a user's decision, and then on top of that, which was the aqua color, was the self-tuning systems, which is probably more IIoT stuff, where you've got a whole ecosystem of devices and intelligence in the cloud and at the edge, and you don't necessarily need a human in the loop. But these now, when you look at them, you can break them down as having three types of workloads: the batch, the interactive, and the continuous. >> Okay, and that is sort of a new workload here, and this is a real big theme of your research now. We all remember, no, we don't all remember, I remember punch cards, that's the ultimate batch, and then of course, the terminals were interactive, and you think of that as closer to real time, but now, this notion of continuous, if you go to the next slide, Patrick, we can take a look at how workloads are changing, so George, take us through that dynamic. >> [George] Okay so, to understand where we're going, sometimes it helps to look at where we've come from. The traditional workloads, if we talk about applications, were divided into, now, we talked about sort of batch versus interactive, but they were also divided into online transaction processing, operational applications, systems of record, and then there was the analytic side, which was reporting on it, but this was sort of backward-looking reporting, and we began to see some convergence between the two with web and mobile apps, where a user was interacting both with the analytics that informed an interaction that they might have. That's looking backwards, and we're going to take a quick look at some of the new technologies that augmented those older application patterns. Then we're going to go look at the emergent workloads and what they look like. >> Okay so, let's have a quick conversation about this before we go on to the next segment. Hadoop obviously was batch. It really was a way, as we've talked about today and on many other days on theCUBE, to reduce the expense of doing data warehousing and business intelligence. I remember we were interviewing Jeff Hammerbacher, and he said, "When I was at Facebook, my mission was to break the dependency on the container, the storage container." So he really wanted to, needed to, reduce costs, he saw that infrastructure needed to change, so if you look at the next slide, which is really sort of talking to Hadoop doing batch in traditional BI, take us through that, and then we'll sort of evolve to the future. >> Okay, so this is an example of traditional workloads, batch business intelligence, because Hadoop has not really gotten to the maturity point where you can really do interactive business intelligence. It's going to take a little more work.
But here, you've basically put in a repository more data than you could possibly ever fit in a data warehouse, and the key is, this environment was very fragmented, there were many different engines involved, and so there was high developer complexity and high operational complexity, and we're getting to the point where we can do somewhat better on the integration, and we're getting to the point where we might be able to do interactive business intelligence and start doing a little bit of advanced analytics like machine learning. >> Okay. Let's talk a little bit about why we're here, we're here 'cause it's Spark Summit, and Spark was designed to simplify big data, to simplify a lot of the complexity in Hadoop, so on the next slide, you've got this red line of Spark, so what is Spark's role, what does that red line represent? >> Okay, so the key takeaway from this slide is a couple of things. One, it's interesting, but when you listen to Matei Zaharia, who is the creator of Spark, he said, "I built this to be a better MapReduce than MapReduce," which was the old crufty heart of Hadoop. And of course, they've stretched it far beyond their original intentions, but it's not the panacea yet, and if you put it in the context of a data lake, it can help you with what a data engineer does with exploring and munging the data, and what a data scientist might do in terms of processing the data and getting it ready for more advanced analytics, but it doesn't give you an end-to-end solution, not even within the data lake. The point of explaining this is important, because we want to explain how, even in the newer workloads, Spark isn't yet mature enough to handle the end-to-end integration, and by making that point, we'll show where it still needs more work, and where you have to substitute other products. >> Okay, so let's have a quick discussion about those workloads. Workloads really kind of drive everything, a lot of decisions for organizations, where to put things, and how to protect data, where the value is, so in this next slide you're juxtaposing traditional workloads with emerging workloads, so let's talk about these new continuous apps. >> Okay, so, this tees it up well, 'cause we focused on the traditional workloads. The emerging ones are where data is always coming in. You could take a big flow of data and sort of end it and bucket it, and turn it into a batch process, but now we have the capability to keep processing it, and you want answers from it very near real time, so you don't want to stop it from flowing. The first one that took off like this was collecting telemetry about the operation and performance of your apps and your infrastructure, and Splunk sort of conquered that workload first. And then the second one, the one that everyone's talking about now, is sort of Internet of Things, but more accurately, the Industrial Internet of Things, and that stream of data is, again, something you'll want to analyze and act on with as little delay as possible. The third one is interesting, asynchronous microservices. This is difficult, because it doesn't necessarily require a lot of new technology so much as a new skillset for developers, and that's going to mean it takes off fairly slowly. Maybe new developers coming out of school will adopt it whole cloth, but this is where you don't rely on a big central database, this is where you break things into little pieces, and each piece manages itself.
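[Editor's note: a minimal sketch of the continuous-workload shape George describes, telemetry always flowing in and analyzed as it arrives, using PySpark Structured Streaming; the broker address, topic, and schema are hypothetical assumptions:

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import avg, col, from_json, window
    from pyspark.sql.types import DoubleType, StringType, StructType, TimestampType

    spark = SparkSession.builder.appName("telemetry").getOrCreate()
    schema = (StructType()
        .add("device", StringType())
        .add("ts", TimestampType())
        .add("temp", DoubleType()))

    # Ingest: an unbounded stream of device telemetry arriving via Kafka
    raw = (spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")
        .option("subscribe", "telemetry")
        .load())

    # Process: continuously maintained per-device averages over one-minute windows
    readings = raw.select(from_json(col("value").cast("string"), schema).alias("r")).select("r.*")
    rollup = (readings
        .groupBy(window(col("ts"), "1 minute"), col("device"))
        .agg(avg("temp").alias("avg_temp")))

    # Serve: in a real continuous app this would feed a fast store or alerting
    query = rollup.writeStream.outputMode("update").format("console").start()
    query.awaitTermination()

There is no natural end to this job; it runs for as long as the telemetry flows, which is what distinguishes the continuous pattern from batch and interactive.]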
>> So you say the components of these arrows that you're showing, ingest, explore, process, serve, these are all discrete elements of the data flow that you have to integrate yourself as a customer?

>> [George] Yes, frankly, these are all steps that could be one end-to-end integrated process, but the technology is not yet mature enough to really do it end-to-end. For example, we don't even have a data store that can go all the way from ingest to serve. By ingest, I mean taking the millions, potentially millions or more, events per second coming in from your Internet of Things devices. Explore would be in that same data store, letting you visualize what's there. Process is doing the analysis. And serve, then, is letting your industrial devices or your business intelligence workloads get real-time updates from that same data store. For this to work as one whole, we need a data store that can go from end to end, in addition to compute and analytic capabilities that go end to end. The point of this is, for continuous workloads, we do want to get to this integrated point somehow, sometime, but we're not there yet.

>> Okay, let's go deeper and take a look at the next slide. You've got this data feedback loop, and you've got this prediction on top of it. What does all that mean? Let's double-click on that.

>> Okay, so now we're unpacking the slide we just looked at into two different elements: one is what you're doing when you're running the system, and the next one will be what you're doing when you're designing it. For this one, what you're doing when you're running the system, I've grayed out where the data is coming from and where it's going to, just to focus on how we're operating on the data. And again, to repeat the green part, which is storage: we don't have an end-to-end integrated store that could cost-effectively and scalably handle this whole chain of steps. What we do have is that in the runtime, you're going to ingest the data, you're going to process it and make it ready for prediction, and then there's a step called devops for data science. We know devops for developers, but devops for data science, as we're going to see, actually unpacks a whole 'nother level of complexity. This devops for data science is where you get the prediction: okay, if this turbine is vibrating and has a heat spike, shut it down, because something's going to fail. That's the prediction component, and the serve part then takes that prediction and makes sure that that device gets it fast.

>> So you're putting that capability in the hands of the data science component so they can effect that outcome virtually instantaneously?

>> Yes, but in this case the data scientist will have done that at design time. We're still at run time, so once the data scientist has built that model, here it's the engineer who's keeping it running.
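As an illustration of that run-time loop, here is a minimal PySpark Structured Streaming sketch of ingest, predict, and serve for the turbine example. The Kafka topics, the schema, and the vibration and temperature thresholds are all hypothetical, and the threshold rule is just a stand-in for whatever model the data scientist built at design time.

```python
# A sketch of the run-time loop: ingest turbine telemetry, apply the
# prediction (a threshold rule standing in for a trained model), and
# serve the shutdown decision back toward the devices.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("turbine-monitor").getOrCreate()

schema = (StructType()
          .add("device_id", StringType())
          .add("ts", TimestampType())
          .add("vibration", DoubleType())
          .add("temperature", DoubleType()))

# Ingest: raw events from the devices.
events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "turbine-telemetry")
          .load()
          .select(from_json(col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

# Predict: vibration plus a heat spike means a likely failure.
alerts = events.where((col("vibration") > 0.8) & (col("temperature") > 90.0))

# Serve: publish shutdown commands where the devices can pick them up fast.
(alerts.selectExpr("device_id AS key", "to_json(struct(*)) AS value")
 .writeStream
 .format("kafka")
 .option("kafka.bootstrap.servers", "broker:9092")
 .option("topic", "turbine-shutdown")
 .option("checkpointLocation", "/tmp/turbine-ckpt")
 .start()
 .awaitTermination())
```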
>> Yeah, but it's designed into the process, that's the devops analogy. Okay great, well let's go to that next piece, which is design. How does this all affect design, what are the implications there?

>> So at run time we had ingest, process, then prediction with devops for data science, and then serving. Now, when you're at design time, you ingest the data, and then there's a whole unpacking of steps, which requires a handful, or two fistfuls, of tools right now to make work. This is to acquire the data, explore it, prepare it, model it, assess it, distribute it. All those things are today handled by a collection of tools that you have to stitch together. Then you have process, which could typically be done in Spark, where you do the analysis. And then there's serving it. Spark isn't ready to serve; that's typically a high-speed database, one that either has tons of data for history or gets very, very fast updates, like a Redis that's almost like a cache. So the point of this is, we can't yet take Spark as gospel from end to end.
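To make that serving hand-off concrete, here is a sketch of Spark pushing processed results into Redis as the high-speed serving layer. It assumes Spark's foreachBatch sink, which arrived in later Spark releases, plus the redis-py client; the rate source and the device:<id> key layout are purely illustrative.

```python
# A sketch of the serving hand-off: Spark computes scores, and each
# micro-batch is written into Redis, the fast store the serve layer reads.
# foreachBatch (later Spark releases) and the key layout are illustrative.
import redis
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("serve-to-redis").getOrCreate()

# The rate source stands in for a real processed stream of device scores.
scores = (spark.readStream.format("rate").option("rowsPerSecond", "10").load()
          .selectExpr("CAST(value % 100 AS STRING) AS device_id",
                      "rand() AS score"))

def publish(batch_df, batch_id):
    # Collecting to the driver is fine here because each batch is tiny;
    # a real job would write from the executors via a connection pool.
    r = redis.Redis(host="localhost", port=6379)
    for row in batch_df.collect():
        r.hset("device:" + row["device_id"], "score", row["score"])

(scores.writeStream
 .foreachBatch(publish)
 .option("checkpointLocation", "/tmp/redis-ckpt")
 .start()
 .awaitTermination())
```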
>> Okay so, there's a lot of complexity here.

>> [George] Right, that's the trade-off.

>> So let's take a look at the next slide, which talks to where that complexity comes from. Let's look at it first from the developer side, and then we'll look at the admin. So on the next slide we're looking at the complexity from the dev perspective. Explain the axes here.

>> Okay. So there's two axes. If you look at the x-axis at the bottom, there's ingest, explore, process, serve. Those were the steps, at a high level, that we said a developer has to master, and it's going to be in separate products, because we don't have the maturity today. Then on the y-axis we have some, but not all, this is not an exhaustive list, of the different things a developer has to deal with for each product. The complexity is multiplying all the steps on the y-axis, data model, addressing, programming model, persistence, all the stuff on the y-axis, by all the products needed on the x-axis. It's a mess, which is why it's very, very hard to build these types of systems today.

>> Well, and why everybody's pushing on this whole unified integration, that was a major theme we heard throughout the day today. What about from the admin's side? Let's take a look at the next slide, which is our last slide, in terms of the operational complexity. Take us through that.

>> [George] Okay, so the admin side is when the system's running, and reading out the complexity, or inferring it, follows the same process. On the y-axis there's a separate set of tasks, these admin-related ones: governance, scheduling and orchestration, high availability, all the different types of security, resource isolation. Each of these is done differently for each product, and the products are on the x-axis, ingest, explore, process, serve, so when you multiply those out, and again, this isn't exhaustive, you get, again, essentially a mess of complexity.

>> Okay, so we got the message: if you're a practitioner of these so-called big data technologies, you're going to be dealing with more complexity, despite the industry's pace of trying to address that. You're seeing new projects pop up, but nonetheless, it feels like the complexity curve is growing faster than customers' ability to absorb that complexity. Okay, well, is there hope?

>> Yes. But here's where we've had this conundrum. The Apache open-source community has been the most amazing source of innovation I think we've ever seen in the industry, but the problem is, going back to the amazing book, The Cathedral and the Bazaar, about open-source innovation versus top-down, the cathedral has this central architecture that makes everything fit together harmoniously and beautifully, with simplicity. But the bazaar is so much faster, 'cause it's sort of this free market of innovation. The Apache ecosystem is the bazaar, and the burden is on the developer and the administrator to make it work together, and it was most appropriate for the big internet companies that had the skills to do that. Now, the companies that are distributing these Apache open-source components are doing a Herculean job of putting them together, but they weren't designed to fit together. On the other hand, you've got the cloud service providers, who are building, to some extent, services that have standard APIs that might have been supported by some of the Apache products, but with proprietary implementations, so you have lock-in. But they have more of the cathedral-type architecture that--

>> And they're delivering them as services, even though actually many of those data services' APIs, as you point out, are proprietary. Okay, so, very useful, George, thank you. If you have questions on this presentation, you can hit Wikibon.com and fire off a question to us; we'll make sure it gets to George and gets answered. This is part one; in part two tomorrow we're going to dig into some of the numbers. So if you care about where the trends are, what the numbers look like, what the market size looks like, we'll be sharing that with you tomorrow. All of this, of course, will be available on demand, and we'll be doing CrowdChats on this. George, excellent job, thank you very much for taking us through this. Thanks for watching today, it is a wrap of day one of Spark Summit East. We'll be back live tomorrow from Boston. This is theCUBE, so check out siliconangle.com for a review of all the action today, all the news. Check out Wikibon.com for all the research. siliconangle.tv is where we house all these videos, check that out. We start again tomorrow at 11 o'clock East Coast time, right after the keynotes. This is theCUBE, we're at Spark Summit, #SparkSummit, we're out, see you tomorrow. (electronic music jingle)
Kickoff - Spark Summit East 2017 - #sparksummit - #theCUBE
>> Narrator: Live from Boston, Massachusetts, this is theCUBE covering Spark Summit East 2017. Brought to you by Databricks. Now, here are your hosts, Dave Vellante and George Gilbert.

>> Everybody, the euphoria is still palpable here. We're in downtown Boston at the Hynes Convention Center for Spark Summit East, #SparkSummit, and my co-host George Gilbert and I will be unpacking what's going on for the next two days. George, it's good to be working with you again.

>> Likewise.

>> I always like working with my man, George Gilbert. We go deep, George goes deeper. Fantastic action going on here in Boston, actually quite a good crowd here, it was packed this morning in the keynotes. The rave is streaming, everybody's talking about streaming. Let's go back a little bit though, George. When Spark first came onto the scene, you saw these projects coming out of Berkeley, it was the hope of bringing real-timeness to big data, dealing with some of the memory constraints that we found going from batch to real-time interactive and now streaming. You're going to talk about that a lot. Then you had IBM come in and put a lot of dough behind Spark, basically giving it a stamp, IBM's imprimatur--

>> George: Yeah.

>> Much in the same way it did with Linux--

>> George: Yeah.

>> Kind of elbowing its way into--

>> George: Yeah.

>> The marketplace and sort of gaining a foothold. Many people at the time thought that Hadoop needed Spark more than Spark needed Hadoop. A lot of people thought that Spark was going to replace Hadoop. Where are we today? What's the state of big data?

>> Okay, so to set some context: when Hadoop V1, classic Hadoop, came out, it was a file system, a commodity file system, keep everything really cheap, don't have to worry about shared storage, which is very expensive, and the processing model, the execution of munging through the data, was MapReduce. We're all familiar with those--

>> Dave: Complicated but dirt cheap.

>> Yes.

>> Dave: Relative to a traditional data warehouse.

>> Yes.

>> Don't buy a big Oracle Unix box or Linux box, buy this new file system and figure out how to make it work, and you'll save a ton of money.

>> Yeah, but unlike the traditional RDBMSes, it wasn't really that great for doing interactive business intelligence and things like that. It was really good for big batch jobs that would run overnight, or over periods of hours. The irony is, when Matei Zaharia, the creator of Spark and co-founder of Databricks, which is the steward of Spark, created the language and the execution environment, his objective was to do a better MapReduce than MapReduce: make it faster, take advantage of memory. But he did such a good job of it that he was able to extend it to be a uniform engine, not just for MapReduce-type batch stuff, but for streaming stuff.

>> Dave: So originally they started out thinking, if I get this right--

>> Yeah.

>> It was sort of a micro-batch, leveraging memory more effectively, and then it extended beyond--

>> The micro-batch is their current way to address the streaming stuff.

>> Dave: Okay.

>> It takes MapReduce, which would be big, long-running jobs, and they can slice them up, so each little slice turns into an element in the stream.

>> Dave: Okay, so the point is it was an improvement upon these big long batch jobs--

>> George: Yeah.

>> They're taking it from batch to interactive to real-time.
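A minimal sketch of what that slicing looks like in code: in Spark Structured Streaming, the trigger setting controls how the stream is cut into micro-batches, and the continuous trigger added in later releases drops the slicing altogether. The rate source here just generates test rows.

```python
# A sketch of the micro-batch model: the trigger setting slices the stream
# into small batch jobs. The rate source just generates test rows.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("trigger-demo").getOrCreate()

stream = spark.readStream.format("rate").option("rowsPerSecond", "5").load()

# Micro-batch: one small batch job per second over whatever has arrived.
query = (stream.writeStream
         .format("console")
         .trigger(processingTime="1 second")
         .start())

# Continuous mode in later Spark releases drops the slicing: long-running
# tasks handle records as they arrive, for millisecond-scale latency.
# query = (stream.writeStream
#          .format("console")
#          .trigger(continuous="1 second")
#          .option("checkpointLocation", "/tmp/cont-ckpt")
#          .start())

query.awaitTermination()
```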
>> So let's go back to big data for a moment here.

>> George: Yeah.

>> Big data was the hottest topic in the world three or four years ago, and now it's sort of waned as a buzzword, but big data is now becoming more mainstream. We've talked about that a lot. A lot of people think it's done. Is big data done?

>> George: No, it's more that it's sort of-- it's boring for us pundits to talk about, because it's becoming part of the fabric. The use cases are what's interesting. It started out as a way to collect all your data into this really cheap storage repository, data you couldn't afford to put into your Teradata data warehouse at $25,000 per terabyte, or with running costs a multiple of that. You put all your data in there, your data scientists and data engineers started munging the data, and you started taking workloads off your data warehouse, like ETL things that didn't belong there. Now people are beginning to experiment with business intelligence exploration and reporting on Hadoop, so taking more workloads off the data warehouse. There are limitations there that will get solved by putting MPP SQL back-ends on it, and we're working on that step. But the one that comes after that is making it easier for data scientists to use this data to create predictive models.
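As a sketch of that ETL-offload pattern, here is roughly what one of those warehouse-displacing jobs looks like in PySpark; the file paths and column names are made up for illustration.

```python
# A sketch of the ETL-offload pattern: raw files land in cheap lake
# storage, Spark does the transform, and curated Parquet replaces the
# staging tables in the warehouse. Paths and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, to_date

spark = SparkSession.builder.appName("etl-offload").getOrCreate()

raw = spark.read.json("hdfs:///landing/clickstream/2017-02-08/")

cleaned = (raw
           .dropDuplicates(["event_id"])
           .where(col("user_id").isNotNull())
           .withColumn("event_date", to_date(col("event_ts"))))

# Partitioned columnar output keeps downstream BI scans cheap.
(cleaned.write
 .mode("overwrite")
 .partitionBy("event_date")
 .parquet("hdfs:///curated/clickstream/"))
```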
>> [Dave] Okay, so I often joke that the ROI on big data was reduction on investment, lowering the denominator--

>> George: Yeah.

>> In the expense equation, and I think it's fair to say that big data and Hadoop succeeded in achieving that. But then the question becomes, what's the real business impact? Clearly big data has not, except in some edge cases, and there are a number of edge cases and examples, lived up to the promise of real-time, affecting outcomes in the moment, taking the human out of the decision, bringing transactions and analytics together. Now we're hearing a lot of that talk around AI and machine learning, and of course IoT is the next big thing, that's where streaming fits in. Is it the same wine in a new bottle? Or is it the evolution of the data meme?

>> George: It's an evolution, but it's not just a technology evolution to make it work. We've been talking about big data as efficiency: low cost, cost reduction for the existing type of infrastructure. But when it starts going into machine learning, you're doing applications that are more strategic and more top-line focused. That means your C-level execs actually have to get involved, because they have to talk about the strategic objectives, like growth versus profitability, or which markets you want to target first.

>> So has Spark been a headwind or tailwind to Hadoop?

>> I think it's very much been a tailwind, because it simplified a lot of things that took many, many engines in Hadoop. That's something that Matei, the creator of Spark, has been talking about for a while.

>> Dave: Okay, something I learned today, and actually I had heard this before, but the way I phrased it in my tweet, genomics is kicking Moore's Law's ass.

>> George: Yeah.

>> The price performance of sequencing a gene improves 3x every year, compared to what is essentially a doubling every 18 months for Moore's Law. The amount of data that's being created is just enormous, I think we heard from the Broad Institute that they create 17 terabytes a day--

>> George: Yeah.

>> As compared to YouTube, which is 24 terabytes a day.

>> And in a few years it will be--

>> It will be dwarfing YouTube.

>> Yeah.

>> Of course Twitter you couldn't even see--

>> Yeah.

>> So what do you make of that? Is that just a fun fact, is that a new use case, is that really where this whole market is headed?

>> It's not just a fun fact, because we've been hearing for years and years about this study about data doubling every 18 to 24 months. That's coming from the legacy storage guys, who can only double their capacity every 18 to 24 months. The reality is that when we take what was analog data and make it digitally accessible, the only thing preventing us from capturing all this data is the cost to acquire and manage it. The available data is growing much, much faster than 40% every 18 months.

>> Dave: So what you're saying is that-- I mean, this industry has marched to the cadence of Moore's Law for decades, and what you're saying is that curve is actually reshaping, it's becoming exponential.

>> George: For data--

>> Yes.

>> George: So the pressure is on for compute, which is now the bottleneck, to get cleverer and cleverer about how to process it--

>> So that says innovation has to come from elsewhere, not just Moore's Law. It's got to come from a combination of-- Thomas Friedman talks a lot about Moore's Law being one of the fundamentals, but there are others.

>> George: Right.

>> So from a data perspective, what are those combinatorial effects that are going to drive innovation forward?

>> George: There was a big meetup for Spark last night, and the focus was this new database called SnappyData that spun out of Pivotal, and it's being mentored by Paul Maritz, ex-head of development at Microsoft in the '90s and former head of VMware. The interesting thing about this database, and we'll start seeing it in others, is that you don't necessarily want to query and analyze petabytes at once; it will take too long, sort of like munging through data of that size on Hadoop took too long. You can do things that approximate the answer and get it much faster. We're going to see more tricks like that.
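Spark itself already ships a few of those approximation tricks. A quick sketch of the idea, with a generated DataFrame standing in for data far too big to scan interactively:

```python
# A sketch of approximate answers in Spark: sketch-based distinct counts
# and quantiles trade a small, bounded error for far less work.
from pyspark.sql import SparkSession
from pyspark.sql.functions import approx_count_distinct, rand

spark = SparkSession.builder.appName("approx-demo").getOrCreate()

df = spark.range(0, 10000000).withColumn("amount", rand() * 100)

# HyperLogLog++-based distinct count, here within 2% relative error.
df.select(approx_count_distinct("id", rsd=0.02)).show()

# Approximate median without a full sort (Greenwald-Khanna).
print(df.stat.approxQuantile("amount", [0.5], 0.01))

# Or answer on a 1% sample when a rough aggregate is good enough.
df.sample(False, 0.01).agg({"amount": "avg"}).show()
```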
>> Dave: It's interesting you mention Maritz. I heard a lot of messaging this morning that talked about essentially real-time analysis, being able to make decisions on data that you've never seen before and actually affect outcomes. This narrative I first heard from Maritz many, many years ago when they launched Pivotal. He launched Pivotal to be this platform for building big data apps, and now you're seeing Databricks and others sort of usurp that messaging and actually seem to be at the center of that trend. What's going on there?

>> I think there's two, what would you call it, two centers of gravity, and our CTO David Floyer talks about this. The edge is becoming more intelligent, because there's a huge bandwidth and latency gap between these smart devices at the edge and the cloud, whether the smart device is a car or a drone or just a bunch of sensors on a turbine. Those things need to analyze and respond in near real-time or hard real-time, like how to tune themselves, but they also have to send a lot of data back to the cloud to learn about how these things evolve. In other words, it would be like sending the data to the cloud to figure out how the weather patterns are changing.

>> Dave: Mm-hmm.

>> That's the analogy. You need them both.

>> Dave: Okay.

>> So Spark right now is really good in the cloud, but they're doing work so that they can take a lighter-weight version and put it at the edge. We've also seen Amazon put some stuff at the edge, and Azure as well.

>> Dave: I want you to comment. We're going to talk about this later, we have a-- George and I are going to do a two-part series at this event. We're going to talk about the state of the market, and then we're going to release a glimpse of our big data numbers, our Spark forecast, our streaming forecast-- I mention streaming because we talk about batch, we talk about interactive/real-time, you know, you're at a terminal-- anybody who's as old as I am remembers that. But now you're talking about streaming. Streaming is a new workload type, you call these things continuous apps, like streams of events coming into a call center, for example.

>> George: Yeah.

>> As one example that you used. Add some color to that. Talk about that new workload type and the role of streaming, and potentially how it fits into IoT.

>> Okay, so for the last 60 years, since the birth of digital computing, we've had one of two workloads. They were either batch, which is jobs that ran offline, you put your punch cards in and sometime later the answer comes out, or they were interactive, which originally was green screens and now is PCs and mobile devices. The third one coming up now is continuous, or streaming, data that you act on in near real-time. It's not that those apps will replace the previous ones, it's that you'll have apps that mix continuous processing, batch processing, and interactive. An example would be all the information about how your applications and data center infrastructure are operating. That's a lot of streams of data that Splunk first took aim at and did very well with, so that you're looking in real-time and able to figure out if something goes wrong. That type of stuff, all the telemetry from your data center, is a training wheel for the Internet of Things, where you've got lots of stuff out at the edge.

>> Dave: It's interesting you mention Splunk. Splunk doesn't actually use the big data term in its marketing, but they actually are big data, and they are streaming. They're not talking about it, they're just doing it, but anyway-- Alright George, thanks for that overview. We're going to break now, and bring back our first guest, Arun Murthy, co-founder at Hortonworks, so keep it right there everybody. This is theCUBE, we're live from Spark Summit East, #SparkSummit, we'll be right back. (upbeat music)