Paul Young, Google Cloud Platform | SAP SAPPHIRE NOW 2018


 

>> Announcer: From Orlando, Florida, it's theCUBE, covering SAP SAPPHIRE NOW 2018. Brought to you by NetApp. >> Welcome to theCUBE. I'm Lisa Martin with Keith Townsend, and we are in Orlando, Florida at SAP SAPPHIRE NOW 2018, in the NetApp booth. Really cool. SAP SAPPHIRE is an enormous event — this is something like the 25th year they've been doing it — and it's been really interesting, Keith, to learn about SAP and how they have really transformed. One of the things that's critical is their partner ecosystem, so we're excited to welcome back to theCUBE a CUBE alumni, Paul Young, who is the director of SAP go-to-market for Google Cloud Platform. Paul, it's nice to see you. >> Thanks. >> So what is the current news with Google and SAP? >> So, you know, I think we're making a major push into this market. In yesterday's announcements: we already have 4-terabyte servers online, but we also brought capacity all the way up to 20 terabytes, so we really can handle pretty much the whole customer base at this point. On the one end that's good, but there's a lot of other stuff we're doing in the AI space, in the joint engineering space with SAP, and a lot of work to make it much easier for SAP customers to adopt the cloud. Beyond just what's happening in the market right now — which is that 80 percent of the customers who move SAP systems into the cloud just do a straight lift and shift, so there's no real momentum with that, it's just ticking the box: you're in the cloud — we're doing a ton of engineering on our own and with SAP right now to make that a much more valuable journey for customers. So, yeah, I don't wake up in the morning at Google and wonder what I'm going to do today. There's a lot of stuff going on. >> So Paul, let's not be shy: we've had you on theCUBE before, and you're an SAP alum. As you look out at the hyperscalers, the big cloud providers, SAP more or less has a reference architecture for how to do SAP in a hyperscale cloud, but it's not just about that base capability. When I talk to my phone, I love asking Google questions. When I look at capabilities like AI and TensorFlow and machine learning, that gets me excited just in general. As you looked out at the hyperscalers, what excited you about Google specifically, coming from SAP? >> So what's so exciting about Google? I joke internally — I was a customer of SAP's for seven years, I did 20 years at SAP, and then woke up one morning and decided to go to Google. I get this question a lot, and my answer always is: it wasn't based on the cafeteria food; there were other reasons to join. Seriously, in my last role at SAP I was working with all three of the hyperscalers, and one of the questions I always got from SAP people was, well, they're all just the same, right? When you actually work with them you discover they are different — no disrespect to anyone, but they approach the world differently, and they all have different business models. The Google thing that really got me is that the kind of engineering and the future focus was just tremendous; the stuff Google could do was immense. So let me jump forward to the future and then we'll come back. Just look at the investment Google is making in AI and machine learning — all the stuff we showed at Google I/O, with the custom-built Tensor processors that deliver amazing performance. But it's got to be applied, right? So here's something we partially built with Deloitte — Deloitte has a demonstration of it — just to give an example of where we think the future is. We built a model in AI where we basically took invoices and we taught the AI system to do data entry in SAP. That's not an interface. We didn't say, hey, here's an invoice and here are all the fields, and we map them all across, and here's ETL, and here's our interface mapping. We literally said: imagine you're an AP processor — how do you enter an invoice? You give it trial invoices, and it spends a lot of time doing really stupid things — trying to put addresses in the number fields — and then suddenly it works out how to enter an invoice. At that point it knows how to enter an invoice, and then you give it more and more invoices, with more and more different structures, and it learns what an invoice is and how to process it, and suddenly it can do complete data entry. So we built that as a model — this is the sort of thing Google does just to test the limits. Deloitte came along and said, well, that's really cool, could we actually take it and run it as a product? So Deloitte now has that, and there's further engineering where literally you can give it any invoice — it's not OCR — it will look at the invoice, work out that it is an invoice and where all the bits you need are on it, work out how you would do data entry on that into an SAP system, and enter the invoice. That's a future world — and I know SAP has already launched their own AI doing three-way match — we're talking about a future world where your entire accounts payable department is a Gmail inbox: people mail you invoices that you've never seen before, but we're able to understand what a vendor is, guarantee it is a vendor, guarantee it's not fraud, check it, and do the data entry completely automatically. That is a massive new world, and that's just a tiny little bit of what we can do at Google. We also have a demo running in the booth where we have TensorFlow looking at pharmaceuticals — a graphic of something we're actually running at customers — where a camera reads pharmaceutical boxes as they go past; or actually hair-curler boxes in this case. It doesn't just look at the box and say, I count one box. It reads the text on the box, it knows from SAP what was supposed to be manufactured, and it comes back and asks: am I putting double-strength pills in single-strength boxes? Is this lot legal? Have I sent the correct box? Is the packaging correct? It also knows what a good box looks like, and it learns what a damaged box and what nice packaging look like, and it knows how to reject them. Again, that level of technology — where we can monitor all of your production lines and guarantee quality in pharmaceuticals or anywhere else — tell me, six months ago did anyone even imagine that was possible? We're doing it right now, and it's all integrated with SAP. That ability to deliver that sort of capability at the speed we deliver it is world-changing. >> You know, one of the things I kept imagining as you gave the description of invoicing: I'm a small business owner, and these things are troublesome. You get an invoice — my wife handles the accounts payable and accounts receivable — and I'm thinking, there has to be a way to automate this. But then I thought about the challenges: one person sends an invoice where the invoice number is in the bottom right-hand corner, another puts the amount due there, etcetera, etcetera — really silly variations that AI and machine learning should be able to deal with. Bill McDermott yesterday on stage said that AI should augment human capability, and that's a great example of how AI augments it. >> And in the AP example it doesn't get it a hundred percent correct all the time. It knows when it's wrong: it comes up and says, the date's wrong here, I need you to fix it. So it's taking the menial work out of the process and letting people really add value. But it's also a great example of the cloud at work and what it's supposed to do. Again, if all you do is take your existing SAP and drop it in the cloud, you're just running in a different place. If you get to a world where — and with Google we don't expose your data to everybody else — we understand what the world's invoices look like, we have that knowledge, and we make the entire world more efficient by having the model know how to work, that's a radically better place. There's just never been that value prop before, and it's a great big exciting thing to wake up in the morning and think, that's what we do. >> So Lisa, in the industry we have this term that data has gravity. I think it's fairly safe this week to say that processing technology — compute — has gravity too. We had another guest on who said they'll use a process and a technology in a solution, and one customer works out fine while another customer doesn't get the same results. There's a complexity, this disparate nature of technology, that makes it just not easy to apply across companies. >> The other part, really quickly, that I want to talk about: this isn't just about AI, it's not just about the future. As I said, I'm a long-term SAP customer and I work with a lot of customers, and everybody wants to get to the cool bit. I always used to joke internally: everybody wants to eat the candy, but you've got to eat your vegetables first — or candy then vegetables, whichever way, you've got to eat both at some point. So just getting customers into the cloud becomes one of the challenges, and it's one of the other areas where we're really applying engineering. Three weeks ago we bought Velostrata, as an example. Velostrata is an amazing company. What it does, basically: it's a plug-in to VMware. You drop it into VMware and it watches your SAP systems running, profiles them, and works out what size capacity you're going to need in the cloud. At the point where it has enough information, it will basically ping you and say: do you want exactly the same performance at the lowest price in the cloud, or do you want better performance? Here are two configurations — pick the one you want, give it your Google user ID and password, and it will build the security, build the application servers, and begin a migration for you automatically. Depending on the timing, the demand, and the size of the box, between 30 minutes and two hours later you will have a running version of your SAP system in the cloud. That's never been done before, and that's bare-metal performance. The way it works is basically a little bit of magic, but it knows the minimum amount of data that needs to ship across for an SAP system: it knows where all the data sits on the disk that SAP needs to run, ships that first, and then fills in the gaps afterwards with a repair mechanism. So from there, on the one hand you could do lift and shift — and frankly our competitors have been using it to do lift and shift in the past — but it opens up a ton of potential. For a bunch of customers we can replicate their production boxes in real time and give them 30-second RPO/RTO in high availability. Beyond that, I can take that replicated image and run operations on it, run tests on it, run QA rebuilds. Because of the Google pricing model you don't pay in advance; you pay in arrears for only the compute time you use. So your QA system — you've got two days' worth of work to rebuild it? Don't keep your QA system running; pay for two days of rebuild and you're done. Or, we have integrated it directly into the SAP upgrade tools, so you can pipe your system across to us and we will immediately do a test upgrade for you into S/4HANA, or ECC on HANA, or BW on HANA, whatever you want. I have a customer in Canada who jumped from ECC 6 EHP5 to S/4HANA using an earlier version of the tools in 72 hours, with a lot of gaps to look at in between. We reckon we're going to crush that down to under 24 hours. So in under 24 hours you can literally click on an SAP server and we will not just bring you to the cloud, but upgrade you all the way to the latest version. We have all the components, we've done it, and we're pushing that through. So what we're doing now is taking the hard work and automating it, so customers can get to the really cool stuff on the AI side. Again, all of the hyperscalers host SAP systems; we want to do something better than that. We want to make it easy to get there, because we know that to justify the move you want to get all the way to S/4HANA. So we want to make that really easy, and make it incredibly easy to add in AI and all the other technologies along the way. That's a pace and a pricing model nobody else can beat, and that's a pretty cool place to be. >> I can tell by your energy. So, ease of use — everybody wants that. You talked about the example of invoices and how they can vary so dramatically, and whether you're a small business owner or a large enterprise there's so much complexity. In fact, that was one of the things talked about this morning: Hasso Plattner was even talking about naming conventions, and how customers were starting to get confused with all of the different acquisitions SAP has done. So what Google is doing with AI on SAP sounds like a huge differentiator. Tell us, as we wrap up here: in a nutshell, what makes Google different from the other hyperscalers that SAP partners with, and specifically, what excites you about going to market with SAP? >> At the base level, Google is just on a different scale from everybody. We effectively carry about 25 percent of the internet's traffic. If you look at our own assets, we own dark fiber equivalent to — sorry — four times the entire capacity of the internet. So my ability to deliver to those customers at scale and at performance levels is just unchallenged in this space. Google clearly has excelled in a lot of different areas, and it's been incredible starting to bring that to SAP and carry it through. But you're right that the value-add ultimately isn't just, hey, I can run you, and I can run you better. The value-add is — so in March we announced direct integration between HANA and Google BigQuery. When you're talking about BigQuery, you're talking about massive datasets that you can now bridge to HANA. If you're a retailer — this is one last example — I can now join in all the ad-tech data Google has. I can tell you all the ads currently running in Google and what's being viewed — anonymized and in clusters, so you can't trace the original consumers — and I can read that data directly into BigQuery and join it to SAP. So I can now say: your advertising in this area is being clicked on, but I know you don't have the inventory to actually support the advertising, so move the advertising somewhere else. I can do that manually right now; when I add AI to that, the potential is incredible. We've only just started. So next time I'm on theCUBE we'll see where we're at — it's a fun place to be. >> Speaking of next time, you have a conference coming up: Google Next is coming up at the end of July. >> Yeah, and we have a lot of announcements through probably the rest of the year. There's a lot going on as we come to massive scale in the SAP space. Anyone who's interested in this stuff, especially if you're just interested in the AI stuff, Google Next is the place to be. >> Sounds like it. I'm expecting some big things from that, based on what you've talked about and how enthusiastic you are about being at Google. Paul, thanks so much for joining Keith and me back on theCUBE, and we look forward to talking to you again. >> Thanks. >> Thank you for watching theCUBE. Lisa Martin with Keith Townsend at SAP SAPPHIRE NOW 2018. Thanks for watching.
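The invoice model Paul describes is a large-scale learned system, but the core idea — infer the field mapping from labeled examples instead of hand-coding an interface — can be caricatured in a few lines. This is an illustrative toy sketch with invented data and field names, not the Google/Deloitte model:

```python
# Toy sketch: "learn" where invoice fields live from labeled examples,
# then apply that learned layout to an unseen invoice. A real system
# would learn from pixels and token features, not line indices.
from collections import Counter

def learn_field_positions(labeled_invoices):
    """Learn, per field, which line index it most often appears on.

    labeled_invoices: list of (lines, labels), where lines is a list of
    strings and labels maps a field name to the line index holding it.
    """
    votes = {}
    for _, labels in labeled_invoices:
        for field, idx in labels.items():
            votes.setdefault(field, Counter())[idx] += 1
    # majority vote per field
    return {field: c.most_common(1)[0][0] for field, c in votes.items()}

def extract(lines, positions):
    """Pull each learned field off its predicted line of a new invoice."""
    out = {}
    for field, idx in positions.items():
        if idx < len(lines):
            # keep only the value after a "label: value" separator, if any
            out[field] = lines[idx].split(":")[-1].strip()
    return out

# Two "training" invoices that happen to agree on layout.
training = [
    (["ACME Corp", "Invoice No: 1001", "Amount Due: 250.00"],
     {"invoice_no": 1, "amount_due": 2}),
    (["Globex", "Invoice No: 77", "Amount Due: 13.37"],
     {"invoice_no": 1, "amount_due": 2}),
]
positions = learn_field_positions(training)
fields = extract(["Initech", "Invoice No: 42", "Amount Due: 99.95"], positions)
```

The training-then-extraction shape is the point: nobody tells the extractor where the invoice number lives; it generalizes from the examples it was shown.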

Published Date : Jun 9 2018


Yuanhao Sun, Transwarp | Big Data SV 2018


 

>> Announcer: Live from San Jose, it's theCUBE (light music) presenting Big Data Silicon Valley. Brought to you by SiliconANGLE Media and its ecosystem partners. >> Hi, I'm Peter Burris, and welcome back to Big Data SV, theCUBE's, again, annual broadcast of what's happening in the big data marketplace, here at, or adjacent to, Strata here in San Jose. We've been broadcasting all day, and we're going to be here tomorrow as well, over at the Forager eatery — a place to come meander. So come on over and spend some time with us. Now, we've had a number of great guests, many of the thought leaders visiting San Jose today for the big data marketplace, but I don't think any has traveled as far as our next guest. Yuanhao Sun is the CEO of Transwarp, come all the way from Shanghai. Yuanhao, it's once again great to see you on theCUBE. Thank you very much for being here. >> Good to see you again. >> So Yuanhao, Transwarp as a company has become extremely well known for great technology. There are a lot of reasons why that's the case, but you have some interesting updates on how the technology's being applied. Why don't you tell us what's going on? >> Okay, so recently we announced the first audited TPC-DS benchmark result. Our product, called Inceptor, is a SQL engine on top of Hadoop. We have added quite a lot of features, like distributed transactions and full SQL support, so that it can mimic Oracle and other traditional databases and pass the whole test. The engine is also scalable, because it's distributed, so a large benchmark like TPC-DS — which starts from 10 terabytes — the SQL engine can pass without much trouble. >> So I know that there have been other firms that have claimed to pass TPC-DS, but they haven't been audited. What does it mean to say you're audited?
I'd presume that as a result, you've gone through some extremely stringent and specific tests to demonstrate that you can actually pass the entire suite. >> Yes, actually, there is a third-party auditor. They audited our test process and results over the past six — uh, five — months, so it is fully audited. The reason we can pass the test comes down to two things. Traditional databases are not scalable enough to process such a large dataset, so they could not pass. For the Hadoop vendors, the SQL engine features are not rich enough to pass all the tests. You know, there are several steps in the benchmark, and among the SQL queries — there are 99 of them — the syntax is not yet supported by all Hadoop vendors. Also, the benchmark requires you to upload data after the queries and then run the queries for multiple concurrent users. That means you have to support distributed transactions and keep the uploaded data consistent. The Hadoop vendors' SQL engines haven't implemented those distributed transaction capabilities, and that's why they failed to pass the benchmark. >> So I had the honor of traveling to Shanghai last year and speaking at your user conference, and I was quite impressed with the energy in the room as you announced a large number of new products. You've been very focused on taking what open source has to offer but adding significant value to it. As you said, you've done a lot with the SQL interfaces and various capabilities of SQL on top of Hadoop. Where is Transwarp going with its products today? How is it expanding? How is it being organized? How is it being used? >> We group the products into three catalogs: big data, cloud, and AI and machine learning. So there are three categories. In big data, we upgraded the SQL engine and the stream engine, and we have a set of studio tools to help people streamline big data operations.
And the second product line is the data cloud. We call it Transwarp Data Cloud, and it is going to be released early in May this year. We build this product on top of Kubernetes, and we provide Hadoop as a service, data science as a service, and AI as a service to customers. It supports multiple tenants, and the tenants are isolated by network, storage, and CPU. They are free to create clusters, spin them up and turn them off, and it can scale to hundreds of hosts. I think this is the first implementation of this kind of network isolation and multi-tenancy in Kubernetes, so that it can support HDFS and all the Hadoop components. And because it is elastic — just like cloud computing, but we run on bare metal — people can consolidate the data and the applications in one place. Because all the applications and Hadoop components are containerized — that means they are Docker images — we can spin them up very quickly and scale to a larger cluster. This data cloud product is very interesting for large companies, because they usually have a small IT team, but they have to provide big data and machine learning capability to much larger groups — like a thousand people. So they need a convenient way to manage all these big clusters, they have to isolate the resources, and they even need a billing system. We already have a few big names in China, like China Post, Picture Channel, and Secret of Source Channel, applying this data cloud for their internal customers. >> And China has a few people, so I presume that, you know, China Post, for example, is probably a pretty big implementation. >> Yes, but the IT team is like less than 100 people, and they have to support thousands of users. That's why — in the past you would usually deploy one cluster for each application, right, but today large organizations have lots of applications.
They hope to leverage big data capability, but a very small IT team has to support so many applications. So they need a convenient way: just like when you put Hadoop on a public cloud, we provide a product that allows you to offer Hadoop as a service in a private cloud, on bare-metal machines. So that is the second product category. And the third is machine learning and artificial intelligence. We provide a data science platform, a machine learning tool — that is, interactive tools that allow people to create machine learning pipelines and models. We even implemented some automatic modeling capability that does feature engineering automatically or semi-automatically and selects the best models for you, so that machine learning becomes accessible to everyone. They can use our tool to quickly create models. We also have some pre-built models for different industries — financial services, like banks and securities companies, even IoT. So we have different pre-built machine learning models for them; you just modify the template, then apply the models to the applications very quickly. For example, one bank customer used it to deploy a model in one week, which is very quick for them. Otherwise, in the past, they would hire a company to build that application and develop the models, which usually takes several months. Today it is much faster. So today we have three categories, including cloud and machine learning. >> Peter Burris: Machine learning and AI. >> And so three products. >> And you've got some very, very big implementations. So you were talking about a couple of banks, but we were talking, before we came on, about some of the smart cities. >> Yuanhao Sun: Right. >> Kinds of things that you guys are doing at enormous scale. >> Yes, so we deploy our streaming product in more than 300 cities in China, and these clusters are connected together.
So we use the streaming capability to monitor the traffic and send the information from each city to the central government — a sort of central repository. Whenever illegal behavior on the road is detected, that information is sent to the policemen, or to the central repository, within two seconds. Whenever you are seen by a camera in any place in China, the alert is sent out within two seconds. >> So the bad behavior is detected, it's identified with the location, the system also knows where the nearest police person is, and it sends a message and says, this car has done something bad. >> Yeah, and you should stop that car at the next station or the next crossroads. Today there are tens of thousands of policemen who depend on this system for their daily work. >> Peter Burris: Interesting. >> So, just a question on — it sounds like one of your, sort of, nearest competitors, in terms of, let's take the open source community, at least the APIs, and in their case open source: Huawei. Have there been customers that tried to do a POC with you and with Huawei, and said, well, it took four months using the pure open source stuff, and it took, say, two weeks with your stack, it being much broader and deeper? Are there any examples like that? >> There are quite a lot. We have more market share — in financial services we have about 100 bank users. So if we take all the banks that already use Hadoop into account, our market share is above 60%. >> George Gilbert: 60. >> Yeah, in financial services. We usually do a POC and run benchmarks on their real workloads, and it usually takes us three days or one week. They find we can speed up their workloads very quickly. Bank of China, for instance, migrated their Oracle workload to our platform, and they tested our platform and the Huawei platform too. The first thing is, they could not move the whole Oracle workload to open source Hadoop, because of the missing features.
We were able to support all of these workloads with very minor modifications — the modifications took only several hours — and we can finish the whole workload within two hours, while Oracle originally took more than one day, more than ten hours, to finish it. >> George Gilbert: Wow. >> So it is very easy to see the benefits quickly. >> Now, you have a streaming product, also with that same SQL interface. Are you going to see a migration of applications that used to be batch to more near-real-time or continuous, or will you see a whole new set of applications that weren't done before because the latency wasn't appropriate? >> For streaming applications — real-time cases — they are mostly new applications. But if you are using the Storm API or the Spark Streaming API, it is not so easy to develop your applications. Another issue is that once you define one new rule, you have to add those rules dynamically to your running cluster. The programmers may not have much knowledge of writing Scala code; they only know how to configure, and probably they are familiar with SQL. They just need to add one SQL statement to add a new rule, so that they can — >> In your system. >> Yeah, in our system. So it is much easier for them to program streaming applications. And for those customers who don't have real-time requirements, they hope to do something like real-time data warehousing. They collect all this data from websites and from their sensors — like PetroChina, the large oil company: they feed all the sensor information directly into our streaming product. In the past, they just loaded it into Oracle and ran the dashboards, so it took hours to see the results. But today the application can be moved to our streaming product with only a few modifications, because they are all SQL statements, and the application becomes real time. They can see the real-time dashboard results in several seconds.
>> So Yuanhao, you're number one in China, and you're moving more aggressively to participate in the US market. Last question: what's the biggest difference between being number one in China — the way big data is being done in China — versus the way you're encountering big data being done here in the US, for example? Is there a difference? >> I think there are some differences. Customers here usually request a POC. In China, I think they focus more on the results — on what benefit they can gain from your product. So we have to prove it to them: we have to help them migrate an application so they can see the benefits. I think in the US they focus more on the technology than Chinese customers do. >> Interesting — so more on technology here in the US, more on the outcome in China. Once again, Yuanhao Sun, CEO of Transwarp, thank you very much for being on theCUBE. >> Thank you. >> And I'm Peter Burris, with George Gilbert, my co-host, and we'll be back with more from Big Data SV, in San Jose. Come on over to the Forager and spend some time with us. And we'll be back in a second. (light music)
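Yuanhao's point about adding a detection rule with one SQL statement, instead of writing Scala code against a streaming API, boils down to treating rules as declarative data that a running job evaluates. Here is a minimal, hypothetical sketch of that pattern — the rule syntax is invented for illustration and is not Transwarp's actual product syntax:

```python
# Minimal sketch: rules registered dynamically as declarative text,
# evaluated against each event in a (simulated) stream. Rule syntax is
# a made-up "field op value" condition, standing in for a SQL WHERE clause.
RULES = {}  # rule name -> (field, op, threshold)

def add_rule(name, condition):
    """Parse a tiny 'field > value' style condition and register it."""
    field, op, value = condition.split()
    RULES[name] = (field, op, float(value))

def process(event):
    """Return the names of all registered rules the event violates."""
    fired = []
    for name, (field, op, value) in RULES.items():
        x = event.get(field)
        if x is None:
            continue
        if (op == ">" and x > value) or (op == "<" and x < value):
            fired.append(name)
    return fired

add_rule("speeding", "speed > 120")
alerts = process({"plate": "A-123", "speed": 140})
# A new rule can be registered while the "job" keeps consuming events,
# with no redeploy -- the analogue of submitting one more SQL statement.
add_rule("too_slow", "speed < 30")
```

In a real streaming SQL engine the statement would be parsed and planned by the engine itself; the point is only that new rules arrive as declarations, not as recompiled code.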

Published Date : Mar 8 2018

SUMMARY :

Brought to you by Silicon Angle Media, over at the Forager eatery and place to come meander. So Yuanhao, the Transwarp as a company has become So that it can mimic, like oracle or the mutual, to demonstrate that you can actually pass the entire suite. And also, the benchmark required to upload the data, So I had the honor of traveling to Shanghai last year So this product is going to be raised you know, China Post for example, and to select the best items for you So you were talking about a couple of banks, Kinds of things that you guys are doing at enormous scale. from city to the central government. So the bad behavior is detected. or in the next crossroad. and it took, say, two weeks with your stack having, So if we take all banks into account, So the first thing is they cannot more than ten hours to finish the workload. Now the you have a streaming product also So to add to your printer, So it only takes hours to see the results. to participate in the US market. So we have to prove them. in the US, more in the outcome in China. Come on over to the Forager, and spend some time with us.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Peter Burris | PERSON | 0.99+
Shanghai | LOCATION | 0.99+
George Gilbert | PERSON | 0.99+
US | LOCATION | 0.99+
China | LOCATION | 0.99+
99 queries | QUANTITY | 0.99+
three days | QUANTITY | 0.99+
two weeks | QUANTITY | 0.99+
Silicon Angle Media | ORGANIZATION | 0.99+
five months | QUANTITY | 0.99+
San Jose | LOCATION | 0.99+
China Post | ORGANIZATION | 0.99+
Picture Channel | ORGANIZATION | 0.99+
one week | QUANTITY | 0.99+
six | QUANTITY | 0.99+
four months | QUANTITY | 0.99+
Los Angeles | LOCATION | 0.99+
10 terabytes | QUANTITY | 0.99+
last year | DATE | 0.99+
today | DATE | 0.99+
Today | DATE | 0.99+
tomorrow | DATE | 0.99+
more than one day | QUANTITY | 0.99+
more than 300 cities | QUANTITY | 0.99+
second part | QUANTITY | 0.99+
two hours | QUANTITY | 0.99+
less than 100 people | QUANTITY | 0.99+
more than ten hours | QUANTITY | 0.99+
Waway | ORGANIZATION | 0.99+
Bank of China | ORGANIZATION | 0.99+
third | QUANTITY | 0.99+
Hadoop | TITLE | 0.99+
Petrol Channel | ORGANIZATION | 0.99+
three products | QUANTITY | 0.98+
one new rule | QUANTITY | 0.98+
hundreds | QUANTITY | 0.98+
three categories | QUANTITY | 0.98+
SQL | TITLE | 0.98+
single | QUANTITY | 0.98+
Transwarp | ORGANIZATION | 0.98+
first | QUANTITY | 0.98+
tens of thousands policeman | QUANTITY | 0.98+
Yuanhao Sun | ORGANIZATION | 0.98+
each application | QUANTITY | 0.98+
two seconds | QUANTITY | 0.98+
100 cluster | QUANTITY | 0.97+
first thing | QUANTITY | 0.97+
about 100 bank users | QUANTITY | 0.97+
two second | QUANTITY | 0.97+
each day | QUANTITY | 0.97+
Big Data SV | ORGANIZATION | 0.97+
The Cube | ORGANIZATION | 0.96+
two major reasons | QUANTITY | 0.95+
one | QUANTITY | 0.95+
above 60% | QUANTITY | 0.95+
early in May this year | DATE | 0.94+
Source Channel | ORGANIZATION | 0.93+
Big Data | ORGANIZATION | 0.92+
Chinese | OTHER | 0.9+
Strada | LOCATION | 0.89+
second product category | QUANTITY | 0.88+

Rich Napolitano, Plexxi | Nutanix .NEXT 2017


 

>> Announcer: Live from Washington DC, it's theCUBE covering .NEXT conference. Brought to you by Nutanix. >> Welcome back to DC everybody. Welcome back to Nutanix NEXTConf. This is the leader in live tech coverage. My name is Dave Vellante. I'm here with Stu Miniman. Rich Napolitano is here as the CEO of Plexxi. Good friend, long time CUBE alum. Great to see you. >> Great to see you guys. Pleasure to be here with you again. >> Yeah, so you know, I love the fact that you're back in startup land. I mean, you did unbelievable things at EMC, but really this is your real love, alright, runnin' startups, you know, eating glass as we call it. So, when we first heard about Plexxi, I have to admit, Rich, we were down at Strata that time and it was kind of heavy and really geeky. I'm not a networky guy. You've really done a great job of sort of transforming the messaging and the company's vision. Share with us what's up with Plexxi. >> Yeah, you know, again thank you for inviting me here and it's a pleasure. It's an exciting time for the company. You know, we're actually breaking out, right, and so it's great to see the momentum. And the team has done a fabulous job. The challenge in the early days is, you know, you have the technology and you're trying to establish your product-market fit, and we've done that now. And so, it's exciting to be at this important inflection point, you know, tremendous revenue growth this year. You know, we could probably be profitable if we want to be this year, which there's not many startups that can say that. And what's happened is fundamentally we really connected now what we have built, our technology, to the pain points in the marketplace, and we have, you know, deep, deep clarity and understanding of that now. >> So, talk a little bit more about the sort of value proposition and kind of why you guys, why Dave started the company and why you joined, what you're all about. >> Yeah, so we're building the next generation networks.
We're not building additional networks and so we're very focused on the next era of computing, you know, third platform, you know, and scale-out infrastructure, cloud, where the requirements on the infrastructure are very different. You need to build a much more agile and flexible infrastructure. You know, the choice of the public cloud is there and it's going to be there forever, but how do you build an agile infrastructure for private and for hybrid infrastructure? And, what we've realized, and Dave realized this early on, is that the networking architectures haven't fundamentally changed in a very, very long time. And, you know, there's an emergence now, and this is what we've really learned in the last two years, there's an emergence of this other data center network. You know, Cisco has been dominant and done a phenomenal job in traditional data center networking, but there's this emergence of this other network and now we call it by a name, which is the Hyper-Converged Network, HCN. And so, in very simple terms, what is Plexxi? Plexxi builds the HCN for the HCI infrastructure. >> Okay, Rich, you're just going to have to unpack this a little bit, so, you know, people in the networking world will say, we understand that it was a lot of the east-west traffic, the traffic between those nodes, but you know, architecturally we kind of got rid of the SAN and now we've got this distributed software model with these nodes, so where was the gap that you were lookin' to fill, and you know, does Nutanix understand that this was a challenge? >> Those are all great questions and very relevant to the challenge. So, when you really look at the problems we solved, let's pull it up to the top for a second; we've learned a lot about this the last couple of years. What people want is simplicity. They don't want complexity. And, we built a lot of complexity into every layer of the infrastructure.
Everything from the applications to the operating environments, to compute, to the storage and to the network. And so, what we really bring to the marketplace is a much simpler approach to deploy infrastructure, and we do that by simplifying the network dramatically. And we do that by having a software-definable network that's built out of industry standard components. So, Plexxi really brings three things to the table. We figured out how to build this very elastic and agile fabric that allows you to connect compute and storage together. And, we do that on white box switches, and that's dramatically reduced our cost point and is tremendously simple to deploy, but on top of that, we built our software abstractions. And really the key for us is our software control and our integrations into operating environments. So, what we bring to market is an integrated solution with a set of switches that build this fabric, but our software controller allows you to provision this network seamlessly; in the same way that Nutanix talks about being the invisible infrastructure, we're the invisible network. >> So, when Nutanix first started they were like, we're going to kill that SAN 'cause you don't need some of that complexity, so when do I need this, you know, fabric as you call it, as that connective tissue, you know, what size customer, you know, what kind of challenges does that, you know, really knock down? >> And so, if you're living within a rack you don't have any of these problems really. Right, I mean our integration into Nutanix is so sophisticated now that even within the rack we dramatically simplify your network provisioning, so even within a rack our value proposition of simplicity and ease of use is compelling. We make the network invisible in that context. So, as you provision your VMs or your storage in a Nutanix environment, the network comes along.
The value proposition just is most compelling as you go to second, third or more racks. Some of our biggest customers deploy us in tremendous configurations, you know, 10 racks in 10 rows, thousands of servers. But, we can start as small as, you know, one or two switches. And so, the value proposition really is, how seamlessly can you build your infrastructure, in other words, can you make the network invisible in these infrastructures? And, that's exactly what we do. >> You have this picture in your booth, these things that you're handing out, and it's really simple. You got the old way which is storage, server and networking, all that complexity. Nutanix really kind of attacked the server and storage piece, brought those together, connect to the network. What you guys are doing is collapsing that complexity even further. Is that right, so what does that mean for a customer from a scaling standpoint? >> So, if you look at the three-tier architecture as you talked about, we're managing multiple networks. And, the first thing anyone does when they deploy converged infrastructure, hyperconvergence in particular, is they eliminate the SAN. So, that was another network, we just never really thought about it that way. And so, effectively what we do is we allow you to have the properties of a SAN on your network. So, for a storage guy, notions like Fibre Channel zoning are inherent now in our IP-oriented network. Our network is very low latency because of our architecture. So, as you scale, your latency is constant, and as you add things like NVMe, our latency is extremely low. It's not a multi-tiered network, so you don't have the complexity of building a multi-tiered network as you scale your converged infrastructure. The benefit of hyperconvergence is that you can deploy these racks of infrastructure and easily deploy them. The challenge is that if you don't attack the networking problem you still bump into that as you deploy this infrastructure.
>> That becomes your new bottleneck. >> It's your new bottleneck for performance, but it's really about administration. And so, our integration layer ties into Nutanix and makes us aware of the Nutanix operating environment, its file system, when nodes are being added or removed, when you're doing snaps or backups, et cetera, and the network is shaped in the context of that application called Nutanix. It'll do the same thing for VMware. >> And, when you say it's tied into Nutanix, is that, you know, the Nutanix software between nodes, also things like AHV? Do you have awareness of that? >> So, AHV or VMware, and PRISM, so you know, our management console can be launched from PRISM now so you can seamlessly have an experience. You can't tell when you're really in Nutanix or when you're in Plexxi's management domain. But, more importantly, we're aware of when nodes are added. We understand if you're rebuilding your underlying file systems, et cetera, as the requirements on the network shift, as you add more workloads, as workloads move, as applications move on the infrastructure and you need more compute over here or more storage there, our network adapts to that. >> So, explain how this is different than, just say, Nutanix bringing its platform and partnering up with UCS, for example. What's different about what you're doing? >> So, we're, for one thing, we're only the network, right. And so, the compute infrastructure, we don't do that. We don't do storage. We don't do compute. And so, we're just a network that is really, think about it as the fabric for compute and storage, as opposed to a data center network where you connect, you know, your printers and your desktops and your infrastructure for your, you know, multiple sites, et cetera. That's the kind of Cisco, if you will, network. We're this embedded network in these hyperconverged solutions.
Put one or two switches in your rack and as you pump out this converged infrastructure you just scale that fabric seamlessly. And, it's so well integrated inside of Nutanix you don't even realize it's another network. It's just embedded in the infrastructure. >> So, sorry Stu. From a buyer's standpoint, do I get to eliminate some other or limit my growth of my traditional network, or do I have to throw that out and bring this in? >> So, we're totally compatible with existing networks. So, what you do is you do two or three things. We can insert into existing networks without modifying them, but you don't need to keep adding top-of-rack switches and spines to your existing network because most of the traffic stays on this other network. The Nutanix sales teams are actually starting to call this the Nutanix Network or the Nutanix Fabric because it's embedded in their solution. So, most of the traffic between Nutanix nodes goes on that network, which minimizes your northbound traffic to your existing network, which, just frankly, removes a headache from traditional network admins to deal with this other stuff. In the same way the network admin in the past didn't worry about SAN traffic, you shouldn't have to worry about these other problems too. >> So, Rich, it's interesting, talking to Nutanix customers you're right, smaller customers don't have networking issues, some larger customers it depends on how good their network is. The thing coming on the horizon that's going to dramatically change this embedded network thing has got to be NVMe over Fabrics, so what does that mean for Nutanix standalone, and you know, I got to think that that's a huge tie-in to bring you into a lot of accounts. >> I mean, it is clear that the next tsunami, I mean you know, we were all involved in the early days of Flash and we saw that coming when we were at EMC, you know, I probably saw more Flash than anybody in the world actually, in terms of petabytes actually.
And, NVMe is that next wave, right. So, whether it's embedded in Nutanix or it's standalone bricks, you know, it's going to elevate this east-west need for this other network, and you know, to pitch Plexxi a little bit, there's no better network that's tuned for this. The nature of our network is it's flat, it's extremely low latency, so we're actually awaiting the day that, you know, NVMe hits the market in a big way because it will blow apart every other network; every hierarchical network will just be blown apart because the latency characteristics of a multi-tiered network are just, are just clear. You can measure it. And we're doing a lot of work on that. >> Are any of your solutions ready for this today? >> We're ready for it. >> And, when you simplify the network like that, the entire infrastructure, and you provide that infrastructure with virtually no latency impact, now you can start to see the way in which application development changes, and, you know, everybody's talking about digital disruption and how they're going to pay for it. They're going to pay for it by, I would think, shifting labor resource from non-differentiated infrastructure to some of these more exciting areas. We've just heard that from two CIOs. >> We see this a lot. Telecom Italia is here with us. Sparkle, one of our bigger customers, we have a session this afternoon at 3 o'clock and Sparkle's going to be in the session with us, and I just spent a good hour with them here. And, it's all about the operating expense. It's like, Nutanix plus Plexxi reduces my operating expense, and he's going to repeatedly say that. And, it's just clear that people cannot afford the complexity associated with traditional networks anymore. They can't hire programmers to build out, you know, not to pick on ACI, but complicated scripts for ACI; they can't afford to build those programs. Our integration layer makes that seamless, it takes it away. >> So, what's your relationship with Nutanix?
You're obviously doing some hardcore integration. How do you describe the partnership, and do you have other partnerships that you can talk about? >> So, right now we have a number of large-scale service provider customers; we sell through distribution and other partners. We're partnering a lot with Nutanix now, a little bit with SimpliVity, but we're going to go after all of the HCI vendors ultimately. But, pretty clearly Nutanix is the leader, and we've been developing a relationship at the top and in the field, and in parallel we've been recruiting Nutanix partners. Arrow's our master distributor, so we're recruiting Arrow partners that sell Nutanix and we're building a set of solutions. We announced our reference architecture this week with Nutanix, so we're very focused on Nutanix. They're clearly the leader in this space and they get our value proposition. Invisible infrastructure meets the invisible network. I mean, it's perfect. >> You mentioned before you could be profitable if you wanted to be. It's kind of, it's not in vogue to be profitable, Rich. People want growth, but you know, hey, this booming market's not going to last forever. >> Timing's different, timing is different. I think, actually I think it plays to our strength that, you know, I looked at our financials a couple of weeks ago and I realized that about 80% of all that we've spent has been in R and D, and that's not common. Most startups at this stage have invested a lot more in the go-to-market, and now's our time to go do that, but we have, now we have the advantage that we have such tremendous revenue growth that we can fund a bunch of it ourselves, and the capital markets are different than they were two or three years ago when Nutanix was growing. So, I think it's prudent for CEOs now to be just more, more capital efficient because the markets are different, and I think we're in a unique position now given all of our growth. >> Well, Rich, congratulations on the early success.
We know what you're capable of. We'll be watching. I really wish you the best. >> My pleasure, thank you. >> Alright, keep it right there everybody. We'll be back with our next guest. This is theCUBE. We're live from Nutanix, NEXTConf. Be right back.

Published Date : Jun 28 2017

SUMMARY :

Brought to you by Nutanix. Rich Napolitano, CEO of Plexxi, explains the Hyper-Converged Network (HCN): a flat, low-latency, software-controlled fabric built on white-box switches that integrates with Nutanix environments (PRISM, AHV) to make the network as invisible as the rest of the hyperconverged infrastructure. He discusses how the approach eliminates SAN-era complexity, minimizes northbound traffic to existing networks, positions Plexxi for the coming NVMe over Fabrics wave, and why the company's R&D-heavy, capital-efficient model could make it profitable this year.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Dave Vellante | PERSON | 0.99+
Dave | PERSON | 0.99+
Nutanix | ORGANIZATION | 0.99+
Rich Napolitano | PERSON | 0.99+
AERO | ORGANIZATION | 0.99+
Rich | PERSON | 0.99+
one | QUANTITY | 0.99+
Telecom Italia | ORGANIZATION | 0.99+
Sysco | ORGANIZATION | 0.99+
Stu Miniman | PERSON | 0.99+
two | QUANTITY | 0.99+
10 racks | QUANTITY | 0.99+
Washington DC | LOCATION | 0.99+
thousands | QUANTITY | 0.99+
UCS | ORGANIZATION | 0.99+
first | QUANTITY | 0.99+
10 rows | QUANTITY | 0.99+
third | QUANTITY | 0.99+
two | DATE | 0.99+
DC | LOCATION | 0.99+
two CIOs | QUANTITY | 0.99+
second | QUANTITY | 0.99+
three tier | QUANTITY | 0.99+
Hyper Converge Network | ORGANIZATION | 0.99+
Flash | TITLE | 0.99+
Nutanix | TITLE | 0.98+
EMC | ORGANIZATION | 0.98+
Plexxi | PERSON | 0.98+
this week | DATE | 0.98+
about 80% | QUANTITY | 0.97+
this year | DATE | 0.97+
third platform | QUANTITY | 0.97+
two switches | QUANTITY | 0.96+
three years ago | DATE | 0.96+
HCN | ORGANIZATION | 0.96+
NVME | ORGANIZATION | 0.96+
three things | QUANTITY | 0.96+
today | DATE | 0.95+
tsunami | EVENT | 0.94+
Plexxi | TITLE | 0.94+
Plexxi | ORGANIZATION | 0.93+
wave | EVENT | 0.92+
2017 | DATE | 0.92+
one thing | QUANTITY | 0.91+
Sparkle | ORGANIZATION | 0.9+
SimpliVity | ORGANIZATION | 0.89+
.NEXT | EVENT | 0.87+
last two years | DATE | 0.87+
couple of weeks ago | DATE | 0.87+
ACI | ORGANIZATION | 0.86+
Strada | ORGANIZATION | 0.84+

Wikibon Big Data Market Update pt. 2 - Spark Summit East 2017 - #SparkSummit - #theCUBE


 

(lively music) >> [Announcer] Live from Boston, Massachusetts, this is the Cube, covering Spark Summit East 2017. Brought to you by Databricks. Now, here are your hosts, Dave Vellante and George Gilbert. >> Welcome back to Spark Summit in Boston, everybody. This is the Cube, the worldwide leader in live tech coverage. We've been here two days, wall-to-wall coverage of Spark Summit. George Gilbert, my cohost this week, and I are going to review part two of the Wikibon Big Data Forecast. Now, it's very preliminary. We're only going to show you a small subset of what we're doing here. And so, well, let me just set it up. So, these are preliminary estimates, and we're going to look at different ways to triangulate the market. So, at Wikibon, what we try to do is focus on disruptive markets, and try to forecast those over the long term. What we try to do is identify where the traditional market research estimates really, we feel, might be missing some of the big trends. So, we're trying to figure out, what's the impact, for example, of real time. And, what's the impact of this new workload that we've been talking about around continuous streaming. So, we're beginning to put together ways to triangulate that, and we're going to show you, give you a glimpse today of what we're doing. So, if you bring up the first slide, we showed this yesterday in part one. This is our last year's big data forecast. And, what we're going to do today is we're going to focus in on that line, that S-curve. That really represents the real time component of the market. Spark would be in there. Streaming analytics would be in there. Add some color to that, George, if you would. >> [George] Okay, for 60 years, since the dawn of computing, we have had two ways of interacting with computers. You put your punch cards in, or whatever else, and you come back and you get your answer later. That's batch. Then, starting in the early 60's, we had interactive, where you're at a terminal.
And then, the big revolution in the 80's was you had a PC, but you still were either interactive with a terminal or batch, typically for reporting and things like that. What's happening is the rise of a new interaction mode, which is continuous processing. Streaming is one way of looking at it, but it might be more effective to call it continuous processing because you're not going to get rid of batch or interactive, but your apps are going to have a little of each. So, what we're trying to do, since this is early, early in its life cycle, is we're going to try and look at that streaming component from a couple of different angles. >> Okay, as I say, that's represented by this ogive curve, or the S-curve. On the next slide, we're at the beginning when you think about these continuous workloads. We're at the early part of that S-curve, and of course, most of you or many of you know how the S-curve works. It's slow, slow, slow. For a lot of effort, you don't get much in return. Then you hit the steep part of that S-curve. And that's really when things start to take off. So, the challenge is, things are complex right now. That's really what this slide shows. And Spark is designed, really, to reduce some of that complexity. We've heard a lot about that, but take us through this. Look at this data flow from ingest, to explore, to process, to serve. We talked a lot about that yesterday, but this underscores the complexity in the marketplace.
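The batch-versus-continuous distinction George draws can be sketched in a few lines of code. This is an illustrative toy, not anything from the forecast itself; the names (`batch_total`, `ContinuousTotal`) and the event values are invented for the sketch:

```python
# Batch: the data is complete before processing starts, and you get one
# answer at the end. Continuous: a running result is maintained, and each
# arriving event updates it immediately.

def batch_total(events):
    """Batch mode: process a finished dataset once."""
    return sum(events)

class ContinuousTotal:
    """Continuous mode: keep state and update it per event."""
    def __init__(self):
        self.total = 0

    def on_event(self, value):
        self.total += value
        return self.total  # an up-to-date answer after every event

events = [3, 1, 4, 1, 5]
stream = ContinuousTotal()
intermediate = [stream.on_event(e) for e in events]

print(batch_total(events))   # 14: one answer, at the end
print(intermediate)          # [3, 4, 8, 9, 14]: an answer after every event
```

The end states agree; the difference is that the continuous processor had a usable answer at every point along the way, which is exactly why apps end up with "a little of each."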
>> And that's when we think the market's going to explode. But now so, how do you bound this. Okay, when we do forecasts, we always try to bound things. Because if they're not bounded, then you get no foundation. So, if you look at the next slide, we're trying to get a sense of real-time analytics. How big can it actually get? That's what this slide is really trying to-- >> [George] So this one was one firm's take on real-time analytics, where by 2027, they see it peaking just under-- >> [Dave] When you say one firm, you mean somebody from the technology district? >> [George] Publicly available data. And we take it as as a, since they didn't have a lot of assumptions published, we took it as, okay one data point. And then, we're going to come at it with some bottoms-up end top-down data points, and compare. >> [Dave] Okay, so the next slide we want to drill into the DBMS market and when you think about DBMS, you think about the traditional RDBMS and what we know, or the Oracle, SQL Server, IBMDB2's, etc. And then, you have this emergent NewSQL, and noSQL entrance, which are, obviously, we talked today to a number of folks. The number of suppliers is exploding. The revenue's still relatively small. Certainly small relative to the RDBMS marketplace. But, take us through what your expectations is here, and what some of the assumptions are behind this. >> [George] Okay, so the first thing to understand is the DBMS market, overall, is about $40 billion of which 30 billion goes to online transaction processing supporting real operational apps. 10 billion goes to Orlap or business intelligence type stuff. The Orlap one is shrinking materially. The online transaction processing one, new sales is shrinking materially but there's a huge maintenance stream. >> [Dave] Yeah which companies like Oracle and IBM and Microsoft are living off of that trying to fund new development. 
>> We modeled that declining gently and beginning to accelerate more going out into the latter years of the tenure period. >> What's driving that decline? Obviously, you've got the big sucking sound of a dup in part, is driving that. But really, increasingly it's people shifting their resources to some of these new emergent applications and workloads and new types of databases to support them right? But these are still, those new databases, you can see here, the NewSQL and noSQL still, relatively, small. A lot of it's open source. But then it starts to take off. What's your assumption there? >> So here, what's going on is, if you look at dollars today, it's, actually, interesting. If you take the noSQL databases, you take DynamoDB, you take Cassandra, Hadoop, HBase, Couchbase, Mongo, Kudu and you add all those up, it's about, with DynamoDB, it's, probably, about 1.55 billion out of a $40 billion market today. >> [Dave] Okay but it's starting to get meaningful. We were approaching two billion. >> But where it's meaningful is the unit share. If that were translated into Oracle pricing. The market would be much, much bigger. So the point it. >> Ten X? >> At least, at least. >> Okay, so in terms of work being done. If there's a measure of work being done. >> [George] We're looking at dollars here. >> Operations per second or etcetera, it would be enormous. >> Yes, but that's reflective of the fact that the data volumes are exploding but the prices are dropping precipitously. >> So do you have a metric to demonstrate that. We're, obviously, not going to show it today but. >> [George] Yes. >> Okay great, so-- >> On the business intelligence side, without naming names, the data warehouse appliance vendors are charging anywhere from 25,000 per terabyte up to, when you include running costs, as high as 100,000 a terabyte. That their customers are estimating. That's not the selling cost but that's the cost of ownership per terabyte. 
Whereas, if you look at, let's say Hadoop, which is comparable for the off loading some of the data warehouse work loads. That's down to the 5K per terabyte range. >> Okay great, so you expect that these platforms will have a bigger and bigger impact? What's your pricing assumption? Is prices going to go up or is it just volume's going to go through the roof? >> I'm, actually, expecting pricing. It's difficult because we're going to add more and more functionality. Volumes go up and if you add sufficient functionality, you can maintain pricing. But as volumes go up, typically, prices go down. So it's a matter of how much do these noSQL and NewSQL databases add in terms of functionality and I distinguish between them because NewSQL databases are scaled out version of Oracle or Teradata but they are based on the more open source pricing model. >> Okay and NoSQL, don't forget, stands for not only SQL, not not SQL. >> If you look at the slides, big existing markets never fall off a cliff when they're in the climb. They just slowly fade. And, eventually, that accelerates. But what's interesting here is, the data volumes could explode but the revenue associated with the NoSQL which is the dark gray and the NewSQL which is the blue. Those don't explode. You could take, what's the DBMS cost of supporting YouTube? It would be in the many, many, many billions of dollars. It would support 1/2 of an Oracle itself probably. But it's all open source there so. >> Right, so that's minimizing the opportunity is what you're saying? >> Right. >> You can see the database market is flat, certainly flattish and even declining but you do expect some growth in the out years as part of that evasion, that volume, presumably-- >> And that's the next slide which is where we've seen that growth come from. >> Okay so let's talk about that. So the next slide, again, I should have set this up better. The X-axis year is worldwide dollars and the horizontal axis is time. 
And we're talking here about these continuous application work loads. This new work load that you talked about earlier. So take us through the three. >> [George] There's three types of workloads that, in large part, are going to be driving most of this revenue. Now, these aren't completely, they are completely comparable to the DBMS market because some of these don't use traditional databases. Or if they do, they're Torry databases and I'll explain that. >> [Dave] Sure but if I look at the IoT Edge, the Cloud and the micro services and streaming, that's a tail wind to the database forecast in the previous slide, is that right? >> [George] It's, actually, interesting but the application and infrastructure telemetry, this is what Splunk pioneered. Which is all the torrents of data coming out of your data center and your applications and you're trying to manage what's going on. That is a database application. And we know Splunk, for 2016, was 400 million. In software revenue Hadoop was 750 million. And the various other management vendors, New Relic, AppDynamics, start ups and 5% of Azure and AWS revenue. If you add all that up, it comes out to $1.7 billion for 2016. And so, we can put a growth rate on that. And we talked to several vendors to say, okay, how much will that work load be compared to IoT Edge Cloud. And the IoT Edge Cloud is the smart devices at the Edge and the analytics are in the fog but not counting the database revenue up in the Cloud. So it's everything surrounding the Cloud. And that, actually, if you look out five years, that's, maybe, 20% larger than the app and infrastructure telemetry but growing much, much faster. Then the third one where you were talking about was this a tail wind to the database. Micro server systems streaming are very different ways of building applications from what we do now. Now, people build their logic for the application and everyone then, stores their data in this centralized external database. 
In microservices, you build a little piece of the app, and whatever data you need, you store within that little piece of the app. So the database requirements are rather primitive, and that piece will not drive a lot of database revenue. >> So if you could go back to the previous slide, Patrick. What's driving database growth in the out years? Why wouldn't database revenue continue to get eaten away and decline? >> [George] In broad terms, the overall database market is staying flat, because prices collapse but the data volumes go up. >> [Dave] But there's an assumption in here that the NoSQL space actually grows in the out years. What's driving that growth? >> [George] Both the NoSQL and the NewSQL. The NoSQL is probably best served capturing the IoT data, because you don't need lots of fancy query capabilities or concurrency. >> [Dave] So it is a tailwind, in a sense, in that-- >> [George] IoT, but that's different. >> [Dave] Yeah, sure, but you've got the overall market growing, and that's because the new stuff, NewSQL and NoSQL, is growing faster than the decline of the old stuff. In the 2020 to 2022 time frame it's not enough to offset that decline, and then you have it start growing again. You're saying that's going to be driven by IoT and other Edge use cases? >> Yes, IoT Edge. And the NewSQL, actually, is where, when they mature, you start to substitute them for the traditional operational apps, for people who want to write database apps rather than microservice-based apps. >> Okay, alright, good. Thank you, George, for setting it up for us. Now, we're going to be at Big Data SV in mid-March, the middle of March, and George is going to be releasing the actual final forecast there. We do it every year: we use Spark Summit to look at our preliminary numbers, some of the Spark-related forecasts like continuous workloads, and then we harden those forecasts going into Big Data SV.
We publish our big data report like we've done for the past five, six, seven years. So check us out at Big Data SV; we do that in conjunction with the Strata events, so we'll be there again this year at the Fairmont Hotel. We've got a bunch of stuff going on all week there, some really good programs. So check out siliconangle.tv for all that action, and check out Wikibon.com and look for new research coming out. You're going to be publishing this quarter, correct? And of course, check out siliconangle.com for all the news. And, really, we appreciate everybody watching. George, it's been a pleasure co-hosting with you; as always, really enjoyable. >> Alright, thanks Dave. >> Alright, that's a wrap from Spark Summit. We're going to try to get out of here, hit the snowstorm, and work our way home. Thanks everybody for watching, and a great job by everyone here: Seth, Ava, Patrick, and Alex. And thanks to our audience. This is theCUBE. We're out; see you next time. (lively music)
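George's flat-market argument (data volumes explode while price per terabyte collapses, so revenue stays roughly level) can be sketched as a toy model. The numbers below are illustrative assumptions, not Wikibon's published forecast:

```python
# Toy model of a flat DBMS market: data volume grows ~40%/yr while
# price per terabyte falls ~30%/yr, so revenue stays roughly flat.
# All growth/decline rates here are illustrative assumptions.

def market_revenue(volume_tb, price_per_tb, volume_growth, price_decline, years):
    """Yield (year, revenue) for each year in the forecast window."""
    for year in range(years):
        yield year, volume_tb * price_per_tb
        volume_tb *= 1 + volume_growth      # volumes compound upward
        price_per_tb *= 1 - price_decline   # unit prices compound downward

for year, revenue in market_revenue(
        volume_tb=1_000_000, price_per_tb=25_000,  # ~$25K/TB, per the transcript
        volume_growth=0.40, price_decline=0.30, years=5):
    print(f"year {year}: ${revenue / 1e9:.1f}B")
```

With these assumptions the market shrinks only about 2% a year even though volumes nearly quadruple over the window, which is the "flattish" curve the slide describes.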

Published Date : Feb 9 2017



Stephanie McReynolds - HP Big Data 2015 - theCUBE


 

live from Boston, Massachusetts, extracting the signal from the noise, it's theCUBE, covering HP's big data conference 2015, brought to you by HP Software. Now your hosts, John Furrier and Dave Vellante. >> Okay, welcome back everyone. We are here live in Boston, Massachusetts for HP's big data conference. This is a special presentation of theCUBE, our flagship program, where we go out to the events and extract the signal from the noise. I'm John Furrier with Dave Vellante here from Wikibon research. Our next guest is Stephanie McReynolds, VP of marketing at Alation, a hot new startup that's been coming out of stealth with a lot of great big data stuff. Stephanie, welcome to theCUBE. >> Great to be here. >> Tell us about the startup first of all, because there's good buzz going on. It's kind of stealth buzz, but it's with the thought leaders, the people in the industry who know what they're talking about, and they like what you guys are doing. So introduce the company, tell us what you guys are doing, and the relationship with Vertica. Exciting stuff. >> Absolutely. Alation is an exciting company. We just started to come out of stealth in March of this year, and we came out of stealth with some great production customers. eBay is a customer; they have hundreds of analysts using our systems. We also have Square as a customer, a smaller analytics team, but the value that analytics teams are getting out of this product is really being able to access their data in human context. So we do some machine learning to look at how individuals are using data in an organization, and we take that machine learning and also gather some of the human insights about how that data is being used by experts, and surface all of that inline with the work. >> So what kind of data? Because Stonebraker was talking yesterday about the three V's, which we all know, but the one that's really coming mainstream in terms of a problem space is variety. You have the different variety of schema sources, and then you have a lot of
unstructured exhaust, or data flying around. Can you be specific about what you guys do? >> Yeah, it's interesting, because there are several definitions of data and big data going around, right? We connect to a lot of database systems and we also connect to a lot of Hadoop implementations, so we deal with both structured data as well as what I consider unstructured data. And I think the third part of what we do is bring in context from human-created data, or human information, which Robert yesterday was talking about a little bit. What happens in a lot of analytic organizations is that there's a very manual process of documenting some of the data that's being used in these projects, and that's done on wiki pages or spreadsheets floating around the organization, or Basecamp, all these collaboration platforms. And what you realize when you really get into the work of using that information to try to write your queries is that referencing a wiki page, then writing your SQL, then flipping back and forth between maybe ten different documents is not very productive for the analyst. So what our customers are seeing is that by consolidating all of that data and information in one place, where the tables are actually referenced side by side with the annotations, their analysts can get twenty to fifty percent savings in productivity, and new analysts, maybe more importantly, can get up to speed quite a bit quicker. At Square the other day I was talking to one of the data scientists, and he was describing his process for finding data in the organization. Prior to using Alation, it would take about 30 minutes, going to maybe three or four people, to find the data he needed for his analysis. With Alation, in five seconds he can run a query, search for the data he wants, get it back, and get all that expert annotation already
around that base data, so he's ready to roll and can start testing. >> So is it a platform? You said you work with a lot of databases, right? >> It's tightly integrated with the database in this use case. We see databases as a source of information, so we don't create copies of the data on our platform; we go out and point to the data where it lies and surface that data to the end user. Now, in the case of Vertica and our relationship with Vertica, we've also integrated Vertica into our stack to support what we call data forensics, which is built not for the analyst who's using the system day to day, but for an IT individual, to understand the behaviors around this data and the types of analysis being done. And Vertica is a great high-performance platform for dashboarding and business intelligence on the back end of that, providing quick access to aggregates. >> So where do you use Vertica? Just the engine? What specifically? >> So we use the Vertica engine underneath our forensics product, and that's one portion of our platform; the rest of our platform is built out on other technologies. >> So Vertica is part of your solution? >> It's part of our solution; it's one application that we deliver. >> We've been talking all week, and Colin Mahony in his talk yesterday gave a little history on ERP: how it was initially highly customized and became packaged apps, and he pointed to a similar track with analytics, although he said it's not going to be the same; it's going to be more composable applications. Historically, the analytics and the database have been closely aligned, I'll say maybe not integrated. Do you see that model continuing? Do you see more packaged apps, or, as Colin is calling them, composable apps?
What's the relationship between your platform and the application? >> Our platform is really more tooling for the individuals that are building or creating those applications. We're helping data scientists and analysts find what algorithms they want to use as a foundation for those applications. So it's a little more on the discovery side, where folks are doing a lot of experimentation; they may be having to prepare data in different ways to figure out what might work for those applications, and that's where we fit in as a vendor. >> And what's your license model? >> We're on a subscription model. We have customers with data teams in the hundreds at a place like eBay; smaller implementations could be teams of five or ten analysts, a fairly small seat-based subscription. We can run in the cloud, we can run on premise, and we do some interesting things around securing the data, where you can secure columns and mask data sets for financial services organizations and other customers that have security concerns, and most of those are on-premise implementations. >> So talk about the inspiration for the company. It's been three years since the founding. What are the founders like? What's the DNA of the company? What do you guys do differently, and what was the inspiration behind this? >> What's really interesting about the founding of the company is that the technical founders come from both Google and Apple, and both individuals had independently made the same observation: one a hardcore algorithmic guy, the other coming from the relevance side. Both of them observed how Google and Apple, two of the most data-driven companies on the planet, were struggling; their analytics teams were struggling with being able to share queries and share data sets, and there was
a lot of replication of work happening. Both of these folks, from different angles, came together at Alation and said, look, there are a lot of machine learning algorithms that could help with this process, and there are also a lot of good ways, with natural language processing, to let people interact with their data in more natural ways. The founder from Apple, Aaron Kalb, was on the Siri team, so he had a lot of experience designing products for navigability, ease of use, and natural language learning. Those two perspectives coming together created the technology fundamentals in our product. >> And it's experience with some scar tissue from large-scale implementations of data. >> Yes, very large-scale implementations of data, and also a really deep awareness of what the human equation brings to the table. Machine learning algorithms aren't enough in and of themselves, and I think Ken Rudin had some interesting comments this morning where he pushed it one step further and said it's not just about finding insight; data science is about having impact. And you can't have impact unless you create human context and you have communication and collaboration around the data. So we give analysts a query tool through which we surface the machine learning context we have about the data being used in the organization and what queries have been run on that data, but we surface it in a way where the human can get recommendations about how to improve their SQL and drive towards impact, and then share that understanding with other analysts in the organization. So you get an innovation community that's started. >> So who do you guys target? Let's step back to the go-to-market. You've launched and got some funding; can you share the amount, or is it private and confidential? How much did you raise? Who are you targeting? What's your go-to-market? What's the value proposition? Give us
the data. >> The initial value proposition is really about analyst productivity; that's where we're targeted. How can you take your teams of analysts (and everyone knows it's hard to hire these days, so you're not going to grow those teams overnight), the data scientists, the PhDs you have on staff, and make them much more productive? How do you take the eighty to ninety percent of their time spent finding and preparing data, get them out of that tedium of just trying to find data in the organization and prepare it, and let them really innovate and use that to drive value back to the organization? So we're often selling to individual analysts and analytics teams; the go-to-market starts there, and the value proposition extends much further into the organization. You find teams and organizations that have been trying to document their data through traditional data governance means or ETL tools for a very long time, and a lot of those projects have stalled out. The way we crawl systems and use machine learning and automation to automate some of that documentation really gives those projects new life. >> Enterprise data has always been elusive. You go back decades: structured data, all these pre-built databases. It's been hard, right? So if you can crack that nut, that's going to be a very lucrative opportunity. You've got Hadoop clusters now storing everything. Some clients we talk to here, the key customers of HP or IBM, big companies, are storing everything just because they don't know if they'll need it again. >> Yeah, the past has been hard in part because in some cases we over-managed the modeling of the data. What's exciting now about storing all your data in Hadoop, storing first and asking questions later, is that you're able to take a more discovery-oriented, hypothesis-testing, iterative approach. And
if you think about how true innovation works, you build insights on top of one another to get to the big breakthrough concepts, so I think we're at an interesting point in the market for a solution like this that can help with the increasing complexity of the data environment. >> So you just raised your Series A of nine million, and you maybe did a seed round before that, so it's pretty early days for you guys. You mentioned natural language processing before with one of your founders; are you using NLP in your solution in any way? >> We have a search interface that lets you look for that technical data, the metadata and the data objects, by entering simple natural language search terms. So we are using that as part of our interface and solution. >> And any early customer successes you can talk about, any examples? >> Yeah, there are some great examples, and jointly with Vertica. Square is a customer, and their analytics team is using us on a day-to-day basis, not only to find data sets in the organization but to document those data sets. And eBay has hundreds of analysts using Alation today in a day-to-day manner, and they've seen quite a bit of productivity from the new analysts coming onto the systems. It used to take analysts about 18 months to really get their feet under them in the eBay environment, because of the complexity of all the different systems at eBay and understanding where to go for that customer table they needed to use; now analysts are up and running in about six months. And their data governance team has found that Alation has really automated and prioritized the documentation process for them, so it's a great foundation for their data curators and data stewards to go in, enrich the data, and collaborate more with the analysts, the actual data users, to get to a point of cataloged data. >> So what's next? You
guys are going to be on the road? In New York, Strata + Hadoop World and Big Data NYC are coming up, big events in New York. >> Yes, we're getting the word out about Alation, and we have customers that are starting to speak about their use cases and the value they're seeing; customers, I believe, will be speaking on our behalf there to share their stories. And then we're also going to a couple of other conferences after that; the fall is an exciting time. >> Which ones are your big ones? >> So I will be at Strata in New York in late September, early October, and then in mid-October we're going to be at both Teradata Partners and Tableau's conference as well. So we connect not only to databases of all different sorts, but also to the tools the users are using. >> Awesome. Anything else you'd like to add or share about the company? We've heard some great things about you guys; I've been checking around since we found out about you, and a lot of people like the company, a lot of insiders. And you didn't raise too much cash; that's not a million-zillion-dollar round. Why did you guys take nine million? >> You know, I think we're building this company in a traditional, value-oriented way: staying lean, bringing in revenue, and trying to balance that out with the venture capital investment. It's not that we won't take money, but we want to build this company in a very durable way. >> So the vision is to build a durable company. >> Absolutely, and that may be different from some of our competitors out there these days, but that's what we believe. >> We haven't taken any financing at SiliconANGLE at all, so we believe in that; you might pass up some things, but you keep control. And you guys have some good partners, so congratulations. Final word: what's this conference like? You go to a lot of events. What's your
take on this event? >> I do end up going to a lot of events; that's part of the marketing role. I think what's interesting about this conference is that there are a lot of great conversations happening, not just from a technology perspective but also between business people, with deep thinking about how to innovate. And Vertica's customers, I think, are some of the most loyal customers I've seen in the market, so it's great. They're advanced, too; they're talking about some pretty big problems they're solving. It's not little point solutions; it's more re-architecting. >> There's a DevOps vibe. I got trashed on Twitter, in private messages, all last night for calling this a DevOps show. It's not really a DevOps cloud show, but there's a DevOps vibe here. The people working on the solutions have a real vibe: people are solving real problems, and they're talking about them and sharing their opinions. I think that's similar to what you see in DevOps: the DevOps folks are on the front line, the real engineers, and they have to engineer. No pretenders here, that's for sure. >> And it's not a big sales conference, right? It's a lot of customer content; they're engineering solutions. >> Nobody wants a pitch; they want the real thing. It's, I've got a lot on the table, I'm doing serious work and I want serious conversations. And that's refreshing for us, and we love it. >> Alright, Stephanie, thanks so much for coming on theCUBE and sharing your insight. Congratulations, and good luck with the new startup; hot startups here in Boston. Here at the Vertica HP Software show, we'll be right back with more on theCUBE after this short break.
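The query-log mining Stephanie describes can be illustrated with a tiny sketch: scan a SQL query log for the tables analysts actually use and rank them, so popular, expert-vetted tables surface first in search. This is a hypothetical illustration of the idea, not Alation's actual product logic; the helper names and the deliberately crude regex are my own assumptions:

```python
# Minimal sketch of query-log-driven table ranking, the kind of usage signal
# a data catalog can mine: tables that analysts actually query most surface
# first in search results. Illustrative only, not Alation's real algorithm.
import re
from collections import Counter

def tables_referenced(sql: str) -> list[str]:
    """Crude extraction of table names following FROM/JOIN keywords."""
    return re.findall(r"\b(?:from|join)\s+([a-z_][\w.]*)", sql, re.IGNORECASE)

def rank_tables(query_log: list[str]) -> list[tuple[str, int]]:
    """Rank tables by how often real queries touch them."""
    counts = Counter()
    for sql in query_log:
        counts.update(tables_referenced(sql))
    return counts.most_common()

log = [
    "SELECT * FROM sales.customer c JOIN sales.orders o ON c.id = o.cust_id",
    "SELECT count(*) FROM sales.orders",
    "SELECT * FROM staging.tmp_load",
]
print(rank_tables(log))  # sales.orders ranks above the one-off staging table
```

A production catalog would parse SQL properly and blend in human annotations, but even this toy version shows why a frequently joined table outranks a one-off staging table in search.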

Published Date : Aug 12 2015


Anjul Bhambri - IBM Information on Demand 2013 - theCUBE


 

Okay, welcome back to IBM's Information on Demand, live in Las Vegas. This is theCUBE, SiliconANGLE and Wikibon's flagship program. We go out to the events, extract the signal from the noise, talk to the thought leaders, get all the data, and share that with you. Go to SiliconANGLE.com or Wikibon.org to get all the footage, and if you want to participate with us, we're rolling out our new crowd-activated innovation application called CrowdChat. Go to crowdchat.net/IBMiod, log in with your Twitter handle or your LinkedIn, and participate and share your voice; it's going to be an on-the-record transcript of theCUBE conversations. I'm John Furrier with SiliconANGLE, with my co-host. >> Hi everybody, I'm Dave Vellante of Wikibon.org. Thanks for watching. >> Anjul Bhambri is here. She's the vice president of big data and analytics at IBM and a many-time CUBE guest. Welcome back, good to see you again. >> Thank you. >> So we were both down in New York City last week for Hadoop World; it was really amazing to see how that industry has evolved. I've said this a number of times today, and I've said this to you before: you superglued your analytics business to the big data meme and really created a new category. I don't know if that was by design or not, but it certainly happened. >> It was by design. >> Well, congratulations then, because I think that even a year, a year and a half ago, those two terms, big data and analytics, were sort of separate; now it's really considered as one, right? >> Yeah. I think initially, as businesses started getting really flooded with big data, dealing with the large volumes, dealing with structured, semi-structured, or unstructured data, they were looking at how to store and manage this data in a cost-effective manner. But if you're just storing this data, that's useless, and now obviously people realize that there are insights in this
data that have to be gleaned, and there's technology available to do that. So customers are moving very quickly to that; it's not just about cost savings in terms of handling this data, but getting insights from it. So big data and analytics are becoming synonymous. >> What's interesting to me, Anjul, just following this business, is that there are a zillion different nails out there, and everybody has a hammer and they're hitting the nail with their unique hammer. But IBM has a lot of different hammers, so let's talk about that a little bit. You've got a very diverse portfolio; you don't try to force one particular solution on the client. It's sort of an "it depends" answer. Can we talk about that? >> Sure. In the context of big data, let's start with transactional data. That continues to be the number one source of very valuable insights. The volumes are growing: we have retailers that are handling 2.5 million transactions per hour, and the telco industry handling 10 billion call detail records every day. When you look at that volume of transactions, you obviously need engines that can process, analyze, and gain insights from it, so that you can do ad hoc analytics, run queries, and get information out at the same speed at which this data is getting generated. So we announced BLU Acceleration, which is an in-memory column store, which gives you the power to handle these kinds of volumes and to really query and get value out of this data very quickly. But when you go beyond structured or transactional data, there is semi-structured and unstructured data, still data at rest, and that's where we have BigInsights, which leverages Apache Hadoop
open source, but we've built lots of capabilities on top of that, where we give customers the best of open source plus, at the same time, the ability to analyze this data. We have text analytics capabilities, we provide machine learning algorithms, and we've provided integration so that customers can do predictive modeling on this data using SPSS or open source languages like R, and in terms of visualization they can visualize this data using Cognos or MicroStrategy. So we're giving customers, like you said, not just one hammer that they have to use for every nail. >> The other aspect has been around real time, and we heard that a lot at Strata, right? I've been going to Strata since the beginning, and at that time, even though we were talking about real time, nobody else really was; back in the Hadoop World days it was one big batch job. Real time is now the hotbed of the conversation: you're seeing Storm, these new technologies coming out, what YARN has done. It's been interesting. You're seeing the same thing? >> Yes, and of course we have a very mature technology in that space: InfoSphere Streams for real-time analytics has been around for a long time. It was developed initially for the US government, so we've been in this space longer than anybody else, and we have deployments in the telco space where these tens of billions of call detail records are being processed and analyzed in real time. These telcos are using it to predict customer churn, to prevent customer churn, gaining all kinds of insights at extremely high throughput and very low latency. So it's good to see that other companies are recognizing the need for it and bringing other offerings out in this space. >> Yes, every time somebody says, oh, I want to go low latency and I want to use
Spark, you say, okay, no problem, we can do that. And Streams is interesting because, if I understand it, you're basically acting on the data, producing analytics prior to persisting the data; it's all in memory. But at the same time, my question is: is it evolving to where you can now blend that real-time activity with some batch data? Can you talk about how that's evolving? >> Yeah, absolutely. Streams is for scenarios where, as data comes in, it can be processed and filtered; patterns can be seen in streams of data by correlating and connecting different streams, and based on certain events occurring, actions can be taken. Now, it is possible that not all of this data needs to be persisted, but there may be some aspects or some attributes of it that do. You could persist this data in a database and use it as a way to populate your warehouse, or you could persist it in a Hadoop-based offering like BigInsights, where you can bring in other kinds of data and enrich it. It's like data finds data, and a different picture emerges; Jeff Jonas's puzzle, right? So that's very valid: when we look at real time, it is about taking action in real time, but there is data that can be persisted from that, in both the warehouse as well as something like BigInsights. >> I want to throw a term at you and see what it means to you. We're actually doing some CrowdChats with IBM on this topic: the data economy. What does the data economy mean to you, and what are customers doing with the data economy? >> Okay, so my take on this is that there are two aspects. One is that the cost of storing, analyzing, and processing the data has gone down substantially. But the value in this data, because you can now process and analyze petabytes of it, you can bring in not just
structured but semi-structured and unstructured data; you can glean information from different types of data, and a different picture emerges. So the value that is in this data has gone up substantially. Previously, a lot of this data was probably discarded without people knowing there was useful information in it. So to the business, the value in the data has gone up: what they can do with this data in terms of making business decisions, in terms of making their customers and consumers more satisfied, giving them the right products and services, and how they can monetize that data, has gone up, while the cost of storing, analyzing, and processing has gone down, which I think is fantastic. So it's a huge win-win for businesses, and a huge win-win for consumers, because they are now getting products and services from businesses that they weren't before. That, to me, is the economy of data. So this is why, John, I think IBM is really going to kill it in this business, because they've got such a huge portfolio. If you look at where IOD has evolved, data management, information management, data governance, all the stuff on privacy, these were all cost items before. People looked at them as, oh, I've got to deal with all this data, and now there's been a bit flip. IBM is in this wonderful position to take advantage of it. Of course, Ginni is trying to turn the battleship and get everybody aligned, but the moons and stars are aligning, and really there's a tailwind. Yeah, we have a question on Twitter from Jim Lundy, analyst, former Gartner analyst, owns his own firm now. Shout-out to Jim. Jim, thanks for watching, as always. I know you're a Cube alum, an avid watcher, and now a loyal member of the crowd chat community. The question is: Blu acceleration helps drive more data into actionable analytics and dashboards. Can IBM drive new
more new deals with it? So can you expound on that? The answer is yes, yes, yes. And can you elaborate for Jim? Yeah. You know, with Blu acceleration, we have had customers that have evaluated Blu against SAP HANA and have found that what Blu can provide is ahead of what SAP HANA can provide. So we have a number of accounts where people are going with it. The performance, the throughput that Blu provides is very unique, and it's ahead of what anybody else has in the market, including SAP. And, you know, it's ultimately about its value to the business, right? That's what we are trying to do: how do we get our customers the right technology so that they can deal with all of this data, get their arms around it, and get value from this data quickly. That's really the essence here. The wonderful part of Jim's question is, yes, it's driving new deals for sure; a new product, new deals. But maybe what he's asking is, does it drive new footprints? In other words, your traditional IBM accounts are doing deals, but are you able to drive new footprints? Yeah. You know, there are customers, and I'm not going to name names here, which have come to us that are new to IBM. So that's new to us, and that's happening. That's net new business, and it's happening for all our big data offerings, because of the richness that is there in the portfolio. It's not, like you were saying, Dave, that we have one hammer and we are going to use it for every nail that is out there. People are looking at Blu, BigInsights for Hadoop, Streams for real time, and with all this comes the whole lifecycle management and governance, right? Security, privacy, all those things don't go away. So all the stuff that was relevant for the relational data, now we are able to bring that to big data very quickly, which I think is of huge value to customers. And
as people are moving very quickly in this big data space, there's nobody else who can bring all of these assets together and provide an integrated platform. What use cases, to Jim's point... I know you don't want to name names, but can you name some use cases these customers are solving with Blu? So, you know, from a use case standpoint, people are seeing performance which is eight to 32 times faster than what they had seen when they were not using an in-memory columnstore. Eight to 25, 32 times performance gains is something that is huge and is getting more and more people attracted to this. So let's take an industry, financial services, for example. The big ones in financial services are risk: people want to know, you know, what's their credit risk. There's obviously marketing, serving up ads, and fraud detection, you would think, is another one, in more real time. Those will be the segments. And of course retail, where again, like I was saying, the number of transactions being handled is growing phenomenally. I gave one example which was around 2.5 million transactions per hour, which was unheard of before, and the information that has to be gleaned from it: to leverage this for demand forecasting, to leverage this for gaining insights in terms of giving the customers the right kind of coupons, to make sure that those coupons are actually being used. Before, the world used to be you get the coupons in your mail. Then the world changed to where you get coupons after you've done the transaction. Now, where we are seeing customers is that when a customer walks in the store, that's where they get the coupons, based on which aisle they're in. So it's a combination of the
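The eight-to-32x gains from an in-memory columnstore come largely from scanning only the columns a query touches. A toy illustration of the layout difference (plain Python, not BLU itself; the table and query are made up):

```python
# Row store: each record is a dict; an aggregate must walk every record,
# dragging along fields the query never uses (id, notes).
rows = [
    {"id": 1, "region": "east", "amount": 120.0, "notes": "..."},
    {"id": 2, "region": "west", "amount": 80.0,  "notes": "..."},
    {"id": 3, "region": "east", "amount": 50.0,  "notes": "..."},
]

# Column store: one array per attribute. SUM(amount) WHERE region='east'
# scans just two contiguous arrays and ignores id/notes entirely.
columns = {
    "region": ["east", "west", "east"],
    "amount": [120.0, 80.0, 50.0],
}

east_total = sum(amount
                 for region, amount in zip(columns["region"], columns["amount"])
                 if region == "east")
```

In a real engine the columnar arrays are also compressed and cache-friendly, which is where most of the speedup actually comes from; this sketch only shows the "touch fewer bytes" part of the argument.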
transactional data and the location data, right? And we are able to bring all of this together. So it's Blu combined with what things like Streams and BigInsights can do that makes the use cases even more powerful and unique. So I like this new format of the crowd chat. Normally it's a one-hour crowd chat where it's kind of like thought leaders just pounding away, but this is more like a Reddit AMA, only much better. Question coming in from Grant Case: one of the themes we've heard about in the keynote was the lack of analytical talent. What is going on to contribute more value for an organization, skilling up the workforce or implementing better software tools for knowledge workers? So, skills is definitely an issue that has been a challenge in the industry, and it got compounded with big data and the new technologies coming in. From the standpoint of what we are doing for data scientists, the people who are leveraging data to gain new insights, to explore and discover what other attributes they should be adding to their predictive models to improve the accuracy of those models, there is a very rich set of tools for exploration and discovery. Cognos has such capabilities, and we have such capabilities with our Data Explorer. Basically, tooling for the modeling, for the predictive and descriptive analytics, right? I mean, when you look at petabytes of data, before people even get to predictive, there's a lot of value to be gleaned from descriptive analytics, and being able to do that at scale, at petabytes of data, was difficult before. Now that's possible, with excellent visualization, right? So it's taking things to where the analytics is becoming interactive. It's not just
that, you know, you are able to do this in real time, ask the questions, get the right answers, because models running on petabytes of data, and the results coming from that, are now possible. So interactive analytics is where this is going. Another question from Jim: is IBM going around doing Blu accelerator upgrades with all its existing clients? Loan origination is a no-brainer upgrade. That was the kind of follow-up that I had asked: is that new accounts, a new footprint, or is it just expanding existing business? It's both, it's both. What are the characteristics of a company that is successfully leveraging data? Yeah, so companies are thinking now that their existing EDW, the enterprise data warehouse, needs to be expanded. Before, if they were only dealing with warehouses that handled just structured data, they are now augmenting that. This is from a technology standpoint, right? They're augmenting that and building their logical data warehouse, which takes care of not just the structured data but also semi-structured and unstructured data, augmenting the warehouses with Hadoop-based offerings like BigInsights and with real-time offerings like Streams, so that from an IT standpoint they are ready to deal with all kinds of data and able to analyze and gain information from all of it. Now, from the standpoint of how you start the big data journey, the platform that we provide is plug-and-play, so there are different starting points for businesses. They may have started with warehouses; they bring in a poly-structured store with BigInsights and Hadoop. They are building social profiles from social and public data, which was not being done before, matching that with the enterprise data, which may be in CRM systems and master data management systems inside the enterprise, and which creates
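The matching step described, enriching structured CRM records with social-profile data, can be sketched as a simple keyed join. This is an illustrative toy, not any product's schema; all field names (`email`, `followers`, `interests`, `lifetime_value`) are assumptions:

```python
def build_customer_view(crm_records, social_profiles):
    """Augment CRM records with social attributes, joined on email.

    A toy version of the logical-data-warehouse pattern: structured
    CRM data enriched with semi-structured public-profile data.
    Field names here are illustrative, not any product's schema.
    """
    by_email = {p["email"].lower(): p for p in social_profiles}
    enriched = []
    for rec in crm_records:
        profile = by_email.get(rec["email"].lower(), {})
        enriched.append({**rec,
                         "followers": profile.get("followers", 0),
                         "interests": profile.get("interests", [])})
    return enriched

crm = [{"name": "Ana", "email": "ana@example.com", "lifetime_value": 1200}]
social = [{"email": "ANA@example.com", "followers": 340, "interests": ["cycling"]}]
view = build_customer_view(crm, social)
```

Real master-data-management matching is far fuzzier than an exact email join (name variants, addresses, probabilistic scoring); the sketch only shows the shape of the enrichment.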
quadrants of comparisons, and they are gaining more insights about the customer based on master data management and based on the social profiles they are building. So that's one big trend we are seeing. You know, to take this journey, they have to take smaller bites, digest them, get value out of them, and eat it in chunks rather than try to eat the whole pie in one go. So a lot of companies are starting with exploration proofs of concept, implementing certain use cases in four to six weeks, getting value, and then continuing to add more data sources and more applications. So there are those who would say those existing EDWs... some people would say they should be retired. You would disagree with that? No, no. I think we very much need that experience and expertise; businesses need it, because it's not an either/or. It's not that that goes away and a different kind of warehouse comes along. It's an evolution, right? But there's a tension there, though, wouldn't you say? An organizational tension between the sort of newbies and the existing EDW crowd. I would say that maybe three years ago there was a little bit of that, but I talk to a lot of customers, and I don't see that anymore. People understand, they know what's happening, they are moving with the times, and they know that this evolution is where the market is going, where the business is going, and where the technology is going. They're going to be made obsolete if they don't embrace it, right? Yeah. So as we get on time, I want to ask you a personal question. What's going on with you these days within IBM? Obviously you're in a hot area; you were just in New York last week. Tell us what's going on in your life these days. Things going well? What are you looking at? What are you paying attention to? What's on your radar
when you wake up and get to work, before you get to work? What are you thinking about? What's the big picture? So obviously, big data has been really fascinating, right? Lots of different kinds of applications in different industries. Working with customers in telco, healthcare, banking, the financial sector has been very educational, so a lot of learning, and that's very exciting. And what's on my radar is this: we've done a lot of work helping customers develop their big data platforms on premise, and now we are seeing more and more a trend where people want to put this on the cloud. It's not that we haven't paid attention to the cloud, but in the coming months you are going to see more from us on how we help customers build both private and public cloud offerings, where they can provide analytics as a service to different lines of business by setting up those clouds. So cloud is certainly on my mind. The SoftLayer acquisition, that was a hole in the portfolio, and that filled it. You guys have got to drive that. So both SoftLayer and then of course OpenStack, right, from an infrastructure standpoint, for what's happening in open source. We are leveraging both of those, and like I said, you'll hear more about that. OpenStack is key, as I say, for you guys, because you have street cred when it comes to open source. I mean, what you did in Linux, and you made a great business out of it. Everybody will point at Oracle or IBM or HP and say, oh, they just want to sell us their stack. You've got to demonstrate that you're open, and OpenStack is a great way to do that, as are other initiatives. So like I say, we're excited about that. Yeah. Okay. Well, thanks very much for coming on The Cube. It's always a pleasure. Thank you. See you
yeah, same here. Great having you back. Thank you very much. Okay, we'll be right back, live here inside The Cube at IBM Information on Demand. Hashtag IBMiod. Go to crowdchat.net/IBMiod and join the conversation, where we're going to have an on-the-record crowd chat with the folks who aren't here on-site. We're here live in Las Vegas. I'm John Furrier with Dave Vellante. We'll be right back.

Published Date : Nov 5 2013
