Namik Hrle, IBM | IBM Think 2018
>> Narrator: Live, from Las Vegas, it's theCUBE, covering IBM Think 2018, brought to you by IBM.
>> Welcome back to theCUBE. We are live on day one of the inaugural IBM Think 2018 event. I'm Lisa Martin with Dave Vellante, and we are in sunny Vegas at the Mandalay Bay, excited to welcome to theCUBE one of the IBM Fellows, Namik Hrle. Welcome to theCUBE.
>> Thank you so much.
>> So you are not only an IBM Fellow, you're also the IBM Analytics technical leadership team chair. Tell us about your role on that technical leadership team. What are some of the things that you're helping to drive? And maybe even give us some of the customer feedback that you're helping to feed into IBM's technical direction.
>> Okay, so basically, the technical leadership team is a group of top technical leaders in the IBM Analytics group, and we are chartered with evaluating new technologies, providing guidance to our business leaders on what to invest in and what to divest, listening to our customer requirements, listening to how customers are actually using the technology, and making sure that IBM is there in a timely way when it's needed. Another very important element of the technical leadership team is promoting innovation, particularly grassroots innovative activities: helping our technical leaders across analytics, encouraging them to come up with innovations, to present their ideas, to follow up on those, to potentially turn them into projects, and so on. So that's it.
>> And guide them, or just sort of send them off to discover?
>> As a matter of fact, we should probably mostly be a sounding board, so it's not necessarily coming from the top down. We try to encourage them, to make the innovative activity interesting, and at the same time to make sure they see that something comes out of it.
It's not just that they come up with ideas and then nothing happens; we also try to turn them into reality by working with our business developers, who, by the way, control the resources, right? So, in order to do something like that.
>> How much of it is guiding folks who want to go down a certain path that maybe you know has been attempted before in that particular way, so you know it would probably be better to go elsewhere? Or do you let them go and make the same mistake? Is there any of that? Like, don't go through that door.
>> Well, as you can imagine, it's a human tendency to say, "Well, you know, I've already tried that, it's already been done," but we are really trying not to do that.
>> Yeah.
>> We are trying not to do that, trying to have an open mind, because in this industry there's always a new set of opportunities and new conditions. Even with our current topic, fast data, I believe many of these things have been around already; we just didn't know how to actually support something like that. But now, with the new set of knowledge, we can actually do it.
>> So, let's get into fast data. It wasn't too long ago that we asked an earlier guest what inning we're in with IoT; he said the third inning. It wasn't long ago we were in the third inning of Hadoop, and everything was batch, and then all of a sudden big data changed, and everything became streaming, real time, fast data. What do you mean by fast data? What is it? What's the state of fast data inside IBM?
>> Well, thank you for that question, because when I was preparing for this interview I wanted to make sure we are all on the same page in terms of what fast data actually means, right? Our industry, of course, is full of hype and misunderstanding and everything else.
And like many other things and concepts, it's not a fundamentally new thing. It's just that the current state of technology, and enhancements in technology, allow us to do something we couldn't do before. The requirements behind the fast data value proposition were always there, but right now technology actually allows us to derive real-time insight out of the data, irrespective of the data volume, variety, and velocity. And when I say those three V's, it sounds like big data, right?
>> Dave: Yeah.
>> And, as a matter of fact, there is a pretty large intersection with big data, but there's a huge difference: big data is typically associated with data at rest, while fast data is associated with data in motion. Examples of that pattern are all over the place. You can think of click streams. You can think of financial ticker data, right? You can think of manufacturing IoT data: sensors, logs. And the spectrum of industries that take advantage of it is just as broad: financial, retail, manufacturing, utilities, all the way to advertising, agriculture, and everything else. Very often when I talk about fast data, people jump immediately to, let's say, YouTube streaming, or Facebook and Twitter postings. While that is true, and certainly there are business cases built on something like that, what interests me more are the huge use cases, like for example Airbus, right? With 10,000 sensors in each of the wings, producing 7 terabytes of information per day, which, by the way, cannot just be dumped somewhere like before for some batch processing later. You actually have to process that data right there, when it happens, that millisecond, because the ramifications are pretty, pretty serious, right?
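The data-at-rest versus data-in-motion distinction drawn here can be sketched in a few lines: instead of collecting readings and analyzing a batch later, each event is evaluated the moment it arrives against a rolling window. This is a minimal illustration of the idea only, not any IBM product's implementation; the window size and spike threshold are invented for the example.

```python
from collections import deque

def spike_detector(window_size=5, factor=2.0):
    """Return a callback that flags a reading the instant it arrives
    if it exceeds `factor` times the rolling mean of recent readings."""
    window = deque(maxlen=window_size)

    def on_event(value):
        # decide while the data is in motion, before it is ever stored
        is_spike = bool(window) and value > factor * (sum(window) / len(window))
        window.append(value)  # the window slides forward as data flows in
        return is_spike

    return on_event

detect = spike_detector()
readings = [10, 11, 10, 12, 11, 40, 11]   # 40 is an obvious anomaly
flags = [detect(r) for r in readings]
print(flags)  # -> [False, False, False, False, False, True, False]
```

The point of the sketch is the shape of the computation: the anomaly is caught on the sixth event as it happens, not minutes later in a batch job over data at rest.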
Or take, for example, the opportunity in the utility industry, in power and electricity, where distributors and producers really entice people to put smart metering in place so they can measure the consumption of electricity basically on an hourly basis. Instead of giving you one yearly bill, they know the consumption all the time; they can react to spikes, avoid blackouts, and come up with a totally new set of business models: offering special incentives for spending or not spending, adding additional producers, a fantastic set of use cases. I believe Gartner said that by 2020 something like 80% of businesses will have some sort of situational-awareness obligation, which is another way of describing this capability of event-driven messaging. And I agree with that 100%.
>> So fast data is data that is analyzed in real time.
>> Namik: Right.
>> Such that you can affect an outcome.
>> Namik: Right.
>> Before, what, before something bad happens? Before you lose the buyer? Before--
>> All over the place. Before fraud happens in financials, right? Before a manufacturing line breaks. Before something happens with an airplane. There are many, many examples of something like that, right? And when we talk about it, what we need to understand, again, is that even the technologies needed to deliver fast data value propositions are known technologies. What do you really need? You need a very scalable pub/sub messaging system, like Kafka, for example, in order to acquire the data. Then you need a streaming system, and you have tons of offerings in the open source space: Apache Spark Streaming, Storm, Apache Flink, as well as our IBM Streams.
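The acquisition stage described here, a scalable publish/subscribe system feeding a downstream stream processor, can be mimicked in miniature with an in-process queue standing in for a broker like Kafka. The class, topic name, and event shape below are all invented for illustration; a real deployment would use a broker's own client API.

```python
import queue

class ToyPubSub:
    """A single-process stand-in for a pub/sub broker: producers publish
    to named topics, subscribers drain them independently, so the two
    sides are decoupled and never have to run in lockstep."""
    def __init__(self):
        self.topics = {}

    def publish(self, topic, event):
        self.topics.setdefault(topic, queue.Queue()).put(event)

    def consume(self, topic):
        q = self.topics.get(topic, queue.Queue())
        while not q.empty():
            yield q.get()

broker = ToyPubSub()
# a sensor-style producer pushes events without knowing who will read them
for reading in (18.5, 19.1, 22.7):
    broker.publish("wing-sensors", {"celsius": reading})

# a downstream analytics consumer drains the topic at its own pace
consumed = [e["celsius"] for e in broker.consume("wing-sensors")]
print(consumed)  # -> [18.5, 19.1, 22.7]
```

The decoupling is what matters: the producer's only contract is the topic, which is why the same acquisition layer can feed Spark Streaming, Flink, or IBM Streams interchangeably.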
That is typically for the kind of enterprise quality-of-service delivery. And then, very importantly, and this is something I hope we will have time to talk about today, you also need to be able to absorb that data: not only do the analytics on the fly, but also store the data and combine the analytics with historical data. Typically, if you read what people suggest for that, there is also lots of open source technology that can do it, like Cassandra, like some HDFS-based systems, and so on. But what I'm saying is that all of them come with the complexity that, yes, you can land the data somewhere, but then you need to put it somewhere else in order to do the analytics. Basically, you are introducing latency between data production and data consumption. And this is why I believe that a technology like Db2 Event Store, which we announced just yesterday, will become a very interesting, very powerful part of the whole fast data story.
>> So, let's talk about that a little bit more. Fast data as a term, and thank you for clarifying what it means to IBM, isn't new, but to your point, as technology evolves it opens up new opportunities, much like the innovation lab you have within IBM: there might be, as Dave was asking, ideas people bring that aren't new, maybe they were tried before, but now there are new enabling technologies. Tell us how IBM is enabling organizations, whether they're fast-paced innovative startups or enterprise organizations, to avoid that sort of latency and actually achieve the business benefits that fast data can deliver, with the technologies you're announcing at the show.
>> Right, right. So again, let's go through the stages that every fast data technology and project and solution should probably have.
As I said, first of all you need to have some pub/sub messaging system, and I believe that systems like Kafka are absolutely sufficient for something like that.
>> Dave: Sure.
>> Then you need a system that's going to take this data off that fire hose coming from Kafka, which is the stream processing technology. As I said, there are lots of technologies in open source, but IBM Streams is a technology that also has hundreds of different models, whether predictive analytics, prescriptive analytics, machine learning, basically AI elements, text to speech, that you can apply to the data on the wire, at wire speed. So you need that kind of enterprise quality of service for applying analytics to the streaming data. And then we come to Db2 Event Store, basically a repository for that fire-hose data, where you can put the data in a format in which you can do analytics on it immediately, without any latency between data creation and data consumption. That's what we did with Db2 Event Store. Not only can we ingest millions of events per second, literally millions and millions of events per second, but we can also store the data in an open format, which is tremendous value. Remember, in the past any database system stored data in its own format, so you had to use the system that created the data in order to consume it.
>> Dave: Sure.
>> What Db2 Event Store does is ingest that data and put it into a format that any open source product can use: for example, Spark analytics to do analytics on the data, or Spark's machine learning libraries to do modeling as well as scoring on that data, immediately. So I believe that that particular element of Event Store, coupled with the tremendous capability to acquire data, is what really makes the differentiation.
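Applying a pre-trained model "on the wire, at wire speed," as described for the streaming stage, amounts to scoring each event inline before it is stored. A hedged fraud-style sketch: the logistic scorer, its weights, and the transaction fields below are all invented for the example and stand in for whatever model an offline training run would actually produce.

```python
import math

# Pretend these weights came from an earlier offline training run.
WEIGHTS = {"amount": 0.004, "foreign": 2.5}
BIAS = -3.0

def score(txn):
    """Logistic score in [0, 1]: the higher, the more fraud-like."""
    z = BIAS + WEIGHTS["amount"] * txn["amount"] + WEIGHTS["foreign"] * txn["foreign"]
    return 1.0 / (1.0 + math.exp(-z))

def on_the_wire(stream, threshold=0.5):
    """Score each transaction as it streams past and flag suspicious
    ones immediately, instead of waiting for a nightly batch job."""
    return [txn for txn in stream if score(txn) >= threshold]

stream = [
    {"id": 1, "amount": 40.0, "foreign": 0},
    {"id": 2, "amount": 900.0, "foreign": 1},   # large and foreign: suspicious
    {"id": 3, "amount": 25.0, "foreign": 1},
]
flagged = on_the_wire(stream)
print([t["id"] for t in flagged])  # -> [2]
```

Training stays offline; only the cheap scoring function runs in the stream, which is what makes per-event, wire-speed evaluation feasible.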
>> And it does that how? Through a set of APIs that allow it to be read?
>> So, basically, when the data is coming off the hose, off the streams or something like that, what Event Store actually does, it's basically an in-memory database, right? It puts the data in memory.
>> Dave: Something else that's been around forever.
>> Exactly, something else, yeah. We just have more of it, right? (laughing) And guess what? If it is in memory, it's going to be faster than if it is on disk. What a surprise.
>> Yeah. (chuckling)
>> So it puts the data into memory and immediately makes it available for querying, if you need the data that just came in. But then, asynchronously, it offloads the data into the Apache Parquet format, into a columnar store, basically allowing very powerful analytical capabilities immediately on the data. And again, if you like, you can go to Event Store to query that data, but you don't have to. You can use any kind of tool, like Spark, like Python or the Anaconda stack, to go after the data and do the analytics on it, to build models on it, and so on.
>> And that asynchronous transformation is fast?
>> The asynchronous transformation is such that it gives you this data, which we now call historical data, basically within a minute.
>> Dave: Okay.
>> So it's kind of like minutes.
>> So reasonably low latency.
>> But what's very important to understand is that the union of that data and the data that is in memory, which, by the way, we make transparent, can give you 100% of what we call almost transactional consistency for your queries against the data that is coming in. So it's really a hybrid store: the in-memory part, plus a very fast log, because we also log this data in order to have high availability, because this is a highly scalable, what we call web-scale, database.
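The hybrid layout outlined here, a hot in-memory tier with an asynchronous offload to a columnar historical tier and queries that transparently union the two, can be sketched as follows. This simulates the idea only; Db2 Event Store's actual internals (its Parquet layout, fast log, and flush schedule) are not modeled.

```python
class TwoTierStore:
    """Writes land in a hot in-memory tier; flush() mimics the
    asynchronous offload to a historical (e.g. columnar) tier.
    Queries union both tiers, so readers see one consistent view
    with no gap between fresh and offloaded data."""
    def __init__(self):
        self.memory = []       # hot tier: the newest events
        self.historical = []   # cold tier: asynchronously offloaded events

    def ingest(self, event):
        self.memory.append(event)

    def flush(self):
        # in the real system this runs in the background, within ~a minute
        self.historical.extend(self.memory)
        self.memory.clear()

    def query(self, predicate):
        # the transparent union of both tiers described in the interview
        return [e for e in self.historical + self.memory if predicate(e)]

store = TwoTierStore()
store.ingest({"meter": "A", "kwh": 1.2})
store.flush()                              # A moves to the historical tier
store.ingest({"meter": "B", "kwh": 3.4})   # B is still only in memory
rows = store.query(lambda e: e["kwh"] > 1.0)
print(sorted(r["meter"] for r in rows))    # -> ['A', 'B']
```

Because the query path unions both tiers, a reader never has to know, or care, whether an event has been offloaded yet; that is the property that removes the latency between data production and data consumption.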
And then the Parquet format for the open source storage of the data for historical analysis.
>> In our last 30 seconds or so, give us some examples. I know this was just announced, but maybe a genericized customer example in terms of the business benefits one of the beta customers is achieving by leveraging this technology.
>> So, in order for customers to really take advantage of all that, as I said, what I would suggest customers do first of all is understand where these applications actually make sense for them. Where is the data coming in through fire hoses, not through traditional transactional channels? Where does it come in? And then apply these technologies, as I just said: acquisition of the data, streaming analytics on the wire, and then Db2 Event Store as the store for the data. For all of that, just to tell you, you also need a kind of messaging runtime, typically products like, for example, Akka technology, and that's why we have also entered a partnership with Lightbend, in order to deliver the entire experience for customers that want to build applications that run on fast data.
>> So maybe enabling customers to become more proactive, maybe predictive, eventually?
>> To enable customers to take advantage of this tremendously business-relevant data, whether it's click-stream data, financial data, or IoT data, and to combine it with the assets they already have coming from transactions, well, that's a powerful combination. With that they can build totally new business models, as well as enhance existing ones, to improve productivity, for example, or improve customer satisfaction, or grow their customer segments, and so on and so forth.
>> Well, Namik, thank you so much for coming on theCUBE and sharing the insight on the announcements.
It's pretty cool, Dave, I'm sittin' between you and an IBM Fellow.
>> Yeah, that's uh--
>> It's pretty good for a Monday. It's Monday, isn't it?
>> Thank you so much.
>> Not easy becoming an IBM Fellow, so congratulations on that.
>> Thank you so much.
>> Lisa: And thanks, again.
>> Thank you for having me.
>> Lisa: Absolutely, our pleasure. For Dave Vellante, I'm Lisa Martin. We are live at Mandalay Bay in Las Vegas, on a nice, sunny day, on the first of three days of coverage at IBM Think 2018. Check out our CUBE conversations on thecube.net. Head over to siliconangle.com to find our articles on everything we've done so far at this event and other events, and what we'll be doing for the next few days. Stick around, Dave and I are going to be right back with our next guest after a short break. (upbeat music)

Published Date : Mar 19 2018

