Larry Lancaster, Zebrium | Virtual Vertica BDC 2020


 

>> Announcer: It's theCUBE! Covering the Virtual Vertica Big Data Conference 2020, brought to you by Vertica. >> Hi, everybody. Welcome back. You're watching theCUBE's coverage of the Vertica Virtual Big Data Conference. It was, of course, going to be in Boston at the Encore Hotel. Win big with big data with the new casino, but obviously Coronavirus has changed all that. Our hearts and our empathy go out to those people who are struggling. We are going to continue our wall-to-wall coverage of this conference, and we're here with Larry Lancaster, who's the founder and CTO of Zebrium. Larry, welcome to theCUBE. Thanks for coming on. >> Hi, thanks for having me. >> You're welcome. So first question, why did you start Zebrium? >> You know, I've been dealing with machine data a long time. So for those of you who don't know what that is, imagine servers, or whatever goes on in a data center or in a SaaS shop. There's data coming out of those servers, out of those applications, and basically, you can build a lot of cool stuff on that. So there's a lot of metrics that come out and there's a lot of log files that come out. And so, I've built this... Basically, I've spent my career building that sort of thing, tools on top of that, or products on top of that. The problem is that log files, at least, are completely unstructured, so you're always doing the same thing over and over again, which is going in and understanding the data and extracting the data and all that stuff. It's very time consuming. If you've done it like five times, you don't want to do it again. So really, my idea was, at this point with machine learning, where it's at, there's got to be a better way. So Zebrium was founded on the notion that we can just do all that automatically. We can take a pile of machine data, we can turn it into a database, and we can build stuff on top of that. And so the company is really all about bringing that value to the market. >> That's cool. 
I want to get into that, to better understand who you're disrupting and understand that opportunity better. But before I do, tell us a little bit about your background. You've got kind of an interesting background. Lot of tech jobs. Give us some color there. >> Yeah, so I started in the Valley, I guess, 20 years ago, and when my son was born I left grad school. I was in grad school over at Berkeley, Biophysics. And I realized I needed to go get a job, so I ended up starting in software and I've been there ever since. I mean, I spent a lot of time at... I guess I cut my teeth at NetApp, which was a storage company. And then I co-founded a business called Glassbeam, which was kind of an ETL database company. And then after that I ended up at Nimble Storage. Another company, EMC, ended up buying Glassbeam, so I went over there, and then after Nimble, which is where I built the InfoSight platform, after that I was able to step back and take a year and a half and just go into my basement, actually, this is my kind of workspace here, and come up with the technology and actually build it so that I could go raise money and get a team together to build Zebrium. So that's really my career in a nutshell. >> And you've got Hello Kitty over your right shoulder, which is kind of cool. >> That's right. >> And then up to the left you've got your monitor, right? >> Well, I had it. It's over here, yeah. >> But it was great! Pull it out, pull it out, let me see it. So, okay, so you got that. So what do you do? You just sit there and code all night or what? >> Yeah, that's right. So Hello Kitty's over here. I have a daughter and she set up my workspace here on this side with Hello Kitty and so on. And over on this side, I've got my recliner, where I basically lay it all the way back and then I pivot this thing down over my face and put my keyboard on my lap, and I can just sit there for like 20 hours. It's great. Completely comfortable. >> That's cool. 
All right, better put that monitor back or our guys will yell at me. But so, obviously, we're talking to somebody with serious coding chops, and I'll also add that Nimble InfoSight, I think, was one of the best pickups that HP, HPE, has had in a while. And the thing that interested me about that, Larry, is the company's ability to take that InfoSight and port it very quickly across its product lines. So that says to me it was a modern architecture, I'm sure APIs, microservices, and all those cool buzzwords, but the proof is in their ability to bring that IP to other parts of the portfolio. So, well done. >> Yeah, well thanks. Appreciate that. I mean, they've got a fantastic team there. And the other thing that helps is when you have the notion that you don't just build on top of the data; you extract the data, you structure it, you put that in a database, we used Vertica there for that, and then you build on top of that. Taking the time to build that layer is what lets you build a scalable platform. >> Yeah, so, why Vertica? I mean, Vertica's been around for a while. You remember you had the old RDBMSs, the Oracles, DB2s, SQL Servers, and then the database was kind of a boring market. And then, all of a sudden, all of these MPP companies came out, a spate of them. They all got acquired, including Vertica. And they've all sort of disappeared and morphed into different brands, and Micro Focus has preserved the Vertica brand. But it seems like Vertica has been able to survive the transitions. Why Vertica? What was it about that platform that was unique and interested you? >> Well, I mean, they were the first ones to build what I would call a real column store that's kind of market capable, right? So there was the C-Store project at Berkeley, which Stonebraker was involved in. And then that became sort of the seed from which Vertica was spawned. So you had this idea of, let's lay things out in a columnar way. 
And when I say columnar, I don't just mean that the data for every column is in a different set of files. What I mean is that it takes full advantage of things like run-length encoding, delta encoding, and block compression, and so you end up with these massive, orders-of-magnitude savings in terms of the data that's being pulled off of storage, as well as the data that's moving through the pipeline internally in Vertica's query processing. So why am I saying all this? Because it was a fundamentally disruptive technology. I think column stores are ubiquitous now in analytics. And I think you could name maybe a couple of projects, which are mostly open source, that do something like Vertica does, but name me another one that's actually capable of serving an enterprise as a relational database. I still think Vertica is unique in being that one. >> Well, it's interesting because you're a startup. And so a lot of startups would say, okay, we're going with a born-in-the-cloud database. Now Vertica touts that, well look, we've embraced cloud. You know, we run in the cloud, we run on-prem, all different optionality. And you hear a lot of vendors say that, but a lot of times they're just taking their stack and stuffing it into the cloud. But, so why didn't you go with a cloud-native database, and is Vertica able to... I mean, obviously, that's why you chose it, but I'm interested from a technologist's standpoint as to why you, again, made that choice given all these other choices out there. >> Right, I mean, again, I'm not, so... As I explained, a column store, which I think is the appropriate definition, I'm not aware of another cloud-native-- >> Hm, okay. >> I'm aware of other cloud-native transactional databases; I'm not aware of one that has the analytics performance, and I've tried some of them. So it was not like I didn't look. 
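The run-length-encoding savings Larry alludes to are easy to make concrete. The toy sketch below is illustrative Python only, not Vertica's actual implementation: a sorted, low-cardinality column collapses into a handful of (value, run length) pairs instead of one entry per row, which is where the orders-of-magnitude I/O savings come from.

```python
# Minimal run-length encoding (RLE) sketch, as used conceptually in
# column stores: consecutive repeats collapse into (value, count) pairs.
def rle_encode(column):
    """Collapse consecutive repeated values into (value, count) pairs."""
    runs = []
    for v in column:
        if runs and runs[-1][0] == v:
            runs[-1] = (v, runs[-1][1] + 1)
        else:
            runs.append((v, 1))
    return runs

def rle_decode(runs):
    """Expand (value, count) pairs back into the full column."""
    return [v for v, n in runs for _ in range(n)]

if __name__ == "__main__":
    # A sorted "state" column: a million rows, but only three runs to store.
    col = ["CA"] * 500_000 + ["NY"] * 300_000 + ["TX"] * 200_000
    runs = rle_encode(col)
    print(runs)  # [('CA', 500000), ('NY', 300000), ('TX', 200000)]
    assert rle_decode(runs) == col
```

A real column store layers dictionary, delta, and block compression on top of this, and pushes predicates down onto the encoded form, but the storage-savings intuition is the same.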
What I was actually impressed with, and I think what let me move forward using Vertica in our stack, is the fact that Eon really is built from the ground up to be cloud-native. And so we've been using Eon almost ever since we started the work that we're doing. So I've been really happy with the performance and with the reliability of Eon. >> It's interesting. I've been saying for years that Vertica's a diamond in the rough and its previous owner didn't know what to do with it because it got distracted, and now Micro Focus seems to really see the value and is obviously putting some investments in there. >> Yeah. >> Tell me more about your business. Who are you disrupting? Are you kind of disrupting the do-it-yourself? Or is there sort of a big whale out there that you're going to go after? Add some color to that. >> Yeah, so our broader market is monitoring software; that's kind of the high-level category. So you have a lot of people in that market right now. Some of them are entrenched, large players; Datadog would be a great example. Some of them are smaller upstarts. It's a pretty saturated market. But what's happened over the last, I'd say, two years is that there's been sort of a push towards what's called observability, in terms of at least how some of the products are architected, like Honeycomb, and how some of them are messaged. Most of them are messaged that way these days. And what that really means is there's been an understanding that's developed that MTTR is really what people need to focus on to keep their customers happy. If you're a SaaS company, MTTR is going to be your bread and butter. And it's still measured in hours and days. And the biggest reason for that is what's called unknown unknowns. Because of complexity. Nowadays, applications are ten times as complex as they used to be. 
And what you end up with is a situation where if something is not new, if it's a known issue with a known symptom and a known root cause, then you can set up an automation for it. But the ones that really cost a lot of time in terms of service disruption are unknown unknowns. And now you've got to go dig into this massive mass of data. So observability is about making tools to help you do that, but it's still going to take you hours. And so our contention is, you need to automate the eyeball. The bottleneck is now the eyeball. And so you have to get away from this notion that a person's going to be able to do it infinitely more efficiently, and recognize that you need automated help. When you get an alert, it shouldn't be, "Hey, something weird's happening. Now go dig in." It should be, "Here's a root cause and a symptom." And that should be proposed to you by a system that actually does the observing. That actually does the watching. And that's what Zebrium does. >> Yeah, that's awesome. I mean, you're right. The last thing you want is just another alert that says, "Go figure something out, because there's a problem." So how does it work, Larry? In terms of what you built there. Can you take us inside the covers? >> Yeah, sure. So right now there's two kinds of data that we're ingesting. There's metrics and there's log files. For metrics, there's actually a framework that's really popular in DevOps circles especially, but it's becoming popular everywhere, which is called Prometheus. And it's a way of exporting metrics so that scrapers can collect them. And so if you go look at a typical stack, you'll find that most of the open source components, and many of the closed source components, are going to have exporters that export all their stats to Prometheus. So by supporting that stack we can bring in all of those metrics. And then there's also the log files. 
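For readers unfamiliar with the Prometheus model Larry describes: exporters serve metrics over HTTP in a simple text format, and scrapers periodically collect them. The toy parser below handles only the basic sample lines of that exposition format (metric name, labels, value); a real scraper like Prometheus itself also handles escaping, timestamps, and HELP/TYPE metadata.

```python
# Toy parser for the Prometheus text exposition format that exporters
# serve and scrapers collect. Pulls out (name, labels, value) triples
# and skips comment/metadata lines.
import re

SAMPLE_RE = re.compile(r'^(\w+)(?:\{([^}]*)\})?\s+([0-9.eE+-]+)$')

def parse_metrics(text):
    samples = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('#'):   # skip HELP/TYPE comments
            continue
        m = SAMPLE_RE.match(line)
        if not m:
            continue
        name, labels, value = m.groups()
        label_dict = dict(
            kv.split('=', 1) for kv in labels.split(',')
        ) if labels else {}
        label_dict = {k: v.strip('"') for k, v in label_dict.items()}
        samples.append((name, label_dict, float(value)))
    return samples

scrape = """
# HELP http_requests_total Total HTTP requests.
# TYPE http_requests_total counter
http_requests_total{method="get",code="200"} 1027
http_requests_total{method="post",code="500"} 3
up 1
"""
for sample in parse_metrics(scrape):
    print(sample)
```

Because so many open source components ship an exporter in this format, a monitoring service that speaks it gets broad metric coverage essentially for free, which is the point Larry is making.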
And so you've got host log files; in a containerized environment, you've got container logs; and you've got application-specific logs, perhaps living on a host mount. And you want to pull all those back, and you want to be able to say that this log that I've collected here is associated with the same container, on the same host, that this metric is associated with. But now what? So once you've got that, you've got a pile of unstructured logs. So what we do is we take a look at those logs and we say, let's structure those into tables, right? So where I used to have a log message, if I look in my log file and I see it says something like, X happened five times, right? Well, that event type's going to occur again, and it'll say, X happened six times, or X happened three times. So if I see that as a human being, I can say, "Oh clearly, that's the same thing." And what's interesting here is the number of times that X happened. I may want to see the values in that column as a time series. And so you can imagine it as a table. So now I have a table for that event type, and every time it happens, I get a row. And then I have a column with that number in it. And so now I can do any kind of analytics I want almost instantly across my... If I have all my event types structured that way, everything changes. You can do real anomaly detection and incident detection on top of that data. So that's really how we go about doing it. How we go about being able to do autonomous monitoring in a way that's effective. >> How do you handle doing that for, like, a bespoke app? Do you have to... does somebody have to build a connector to those apps? How do you handle that? >> Yeah, that's a really good question. So you're right. 
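The event-type-table idea Larry just described can be sketched in a few lines. This is a drastic simplification for illustration only; Zebrium's real system uses machine learning, whereas this sketch just masks out numbers to recover the constant "template" of each message and keeps the extracted numbers as a per-event-type column:

```python
# Toy version of structuring unstructured logs into per-event-type
# tables: the constant part of a message becomes the event type, and
# each occurrence contributes a row with its variable values.
import re
from collections import defaultdict

def structure_logs(lines):
    tables = defaultdict(list)
    for ts, msg in lines:
        template = re.sub(r'\d+', '<NUM>', msg)             # constant part
        values = [int(n) for n in re.findall(r'\d+', msg)]  # variable part
        tables[template].append((ts, values))
    return dict(tables)

logs = [
    ("10:00", "X happened 5 times"),
    ("10:05", "X happened 6 times"),
    ("10:10", "X happened 3 times"),
    ("10:10", "connection to db lost"),
]
tables = structure_logs(logs)
# One row per occurrence of the same event type, numbers as a column:
print(tables["X happened <NUM> times"])
# [('10:00', [5]), ('10:05', [6]), ('10:10', [3])]
```

Once every event type is a table like this, the extracted column really is a time series, and standard anomaly detection (for example, flagging values far outside the historical distribution) can run over it directly, which is the "everything changes" point above.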
So if I go and install a typical log manager, there'll be connectors for different apps, and usually what that means is pulling in the stuff on the left, if you were to be looking at that log line, and it will be things like a timestamp, or a severity, or a function name, or various other things. And so the connector will know how to pull those apart, and then the stuff to the right will be considered the message, and that'll get indexed for search. And so our approach is, we actually go in with machine learning and we structure that whole thing. So there's a table. And it's going to have a column called severity, and timestamp, and function name. And then it's going to have columns that correspond to the parameters that are in that event. And it'll have a name associated with the constant parts of that event. And so you end up with a situation where you've structured all of it automatically, so we don't need collectors. It'll work just as well on your home-grown app that has no collectors or no parsers defined or anything. It'll work immediately, just as well as it would work on anything else. And that's important, because you can't be asking people for connectors to their own applications. It just becomes, now they've got to stop what they're doing and go write code for you, for your platform, and they have to maintain it. It's just untenable. So you can be up and running with our service in three minutes. It'll just be monitoring those for you. >> That's awesome! I mean, that is really a breakthrough innovation. So, nice. Love to see that hittin' the market. Who do you sell to? Both types of companies and what role within the company? >> Well, definitely there's two main sort of pushes that we've seen, or I should say pulls. One is from DevOps folks, SRE folks. So these are people who are tasked with monitoring an environment, basically. And then you've got people who are in engineering and they have a staging environment. And what they actually find valuable is... 
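The connector-free structuring Larry describes, separating the constant parts of an event from its parameter columns, can be illustrated with a toy template miner. This is not Zebrium's algorithm; it is a crude sketch in the spirit of log-template mining approaches such as Drain, grouping lines naively by token count and treating token positions that vary as parameter columns:

```python
# Toy "no connector needed" log structuring: token positions that are
# constant across a group of similar lines form the event name; the
# positions that vary become parameter columns. Grouping by token
# count alone is far too naive for real logs, but shows the idea.
from collections import defaultdict

def mine_templates(lines):
    groups = defaultdict(list)
    for line in lines:
        toks = line.split()
        groups[len(toks)].append(toks)
    templates = {}
    for rows in groups.values():
        cols = list(zip(*rows))
        template, param_idx = [], []
        for i, col in enumerate(cols):
            if len(set(col)) == 1:      # constant across all lines
                template.append(col[0])
            else:                       # varies -> parameter column
                template.append('<*>')
                param_idx.append(i)
        templates[' '.join(template)] = [
            [row[i] for i in param_idx] for row in rows
        ]
    return templates

logs = [
    "INFO backup finished in 12 ms",
    "INFO backup finished in 98 ms",
    "INFO backup finished in 40 ms",
]
print(mine_templates(logs))
# {'INFO backup finished in <*> ms': [['12'], ['98'], ['40']]}
```

The point of the sketch is the product claim above: nothing here is specific to any one application's log format, which is why no per-app parser has to be written or maintained.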
Because when we find an incident in a staging environment, yeah, half the time it's because they're tearing everything up and it's not release ready, whatever's in stage. That's fine, they know that. But the other half of the time it's new bugs, it's issues, and they're finding issues. So it's kind of diverged. You have engineering users, and they don't have titles like QA; they're Dev engineers or Dev managers that are really interested. And then you've got DevOps and SRE people there (mumbles). >> And how do I consume your product? Is it SaaS... I sign up and you say within three minutes I'm up and running. I'm paying by the drink. >> Well, (laughs) right. So there's a couple ways. So, right. So the easiest way is if you use Kubernetes. So Kubernetes is what's called a container orchestrator. So these days, you know Docker and containers and all that, so now container orchestrators have become, I wouldn't say ubiquitous, but they're very popular now. So it's kind of on that inflection curve. I'm not exactly sure of the penetration, but I'm going to say 30-40% probably of the shops that are interested are using container orchestrators. So if you're using Kubernetes, basically you can install our Kubernetes chart, which basically means copying and pasting a URL and so on into your little admin panel there. And then it'll just start collecting all the logs and metrics, and then you just log in on the website. And the way you do that is just go to our website, and it'll show you how to sign up for the service, and you'll get your little API key and link to the chart and you're off and running. You don't have to do anything else. You can add rules, you can add stuff, but you don't have to. You shouldn't have to, right? You should never have to do any more work. >> That's great. So it's a SaaS capability and I just pay for... How do you price it? >> Oh, right. So it's priced on volume, data volume. I don't want to go too much into it because I'm not the pricing guy. 
But what I'll say is that, as far as I know, it's as cheap or cheaper than any other log manager or metrics product. It's in that same neighborhood as the very low priced ones. Because right now, we're not trying to optimize for take. We're trying to make a healthy margin and get the value of autonomous monitoring out there. Right now, that's our priority. >> And it's running in the cloud, is that right? AWS West-- >> Yeah, that's right. Oh, I should've also pointed out that you can have a free account; if it's less than some number of gigabytes a day, we're not going to charge. Yeah, so we run in AWS. We have a multi-tenant instance in AWS. And we have a Vertica Eon cluster behind that. And it's been working out really well. >> And on your freemium, have you used the Vertica Community Edition? Because they don't charge you for that, right? So is that how you do it, or... >> No, no. We're, no, no. So, I don't want to go into that because I'm not the bizdev guy. But what I'll say is that if you're doing something that winds up being OEM-ish, you can work out the particulars with Vertica. It's not like you're going to just go pay retail and they won't let you distinguish between test, and prod, and paid, and all that. They'll work with you. Just call 'em up. >> Yeah, and that's why I brought it up, because Vertica, they have a community edition, which is not neutered. It runs Eon; it's just there's limits on clusters and storage. >> There's limits. >> But it's still fully functional though. >> So to your point, we want it multi-tenant. So it's big just because it's multi-tenant. We have hundreds of users on that (audio cuts out). >> And then, what's your partnership with Vertica like? Can we close on that and just describe that a little bit? >> What's it like? I mean, it's pleasant. >> Yeah, I mean (mumbles). >> You know what, so the important thing... Here's what's important. What's important is that I don't have to worry about that layer of our stack. 
When it comes to being able to get the performance I need, being able to get the economy of scale that I need, being able to get the absolute scale that I need, I've not been disappointed, ever, with Vertica. And frankly, being able to have ACID guarantees and everything else, like a normal mature database that can join lots of tables and still be fast, that's also necessary at scale. And so I feel like it was definitely the right choice to start with. >> Yeah, it's interesting. I remember in the early days of big data a lot of people said, "Who's going to need these ACID properties and all this complexity of databases?" And of course, ACID properties and SQL became the killer features and functions of these databases. >> Who didn't see that one coming, right? >> Yeah, right. And then, so you guys have done a big seed round. You've raised a little over $6 million, and you've got the product market fit down. You're ready to rock, right? >> Yeah, that's right. So we're doing a launch; well, when this airs, it'll probably be the day before this airs. Basically, yeah. We've got people... Literally in the last, I'd say, six to eight weeks, it's just been this sort of spike of interest. All of a sudden, everyone kind of gets what we're doing, realizes they need it, and we've got a solution that seems to meet expectations. So it's like... It's been an amazing... Let me just say this: it's been an amazing start to the year. I mean, at the same time, it's been really difficult for us, but more difficult for some other people that haven't been able to go to work over the last couple of weeks and so on. But it's been a good start to the year, at least for our business. So... >> Well, Larry, congratulations on getting the company off the ground, and thank you so much for coming on theCUBE and being part of the Virtual Vertica Big Data Conference. >> Thank you very much. >> All right, and thank you everybody for watching. This is Dave Vellante for theCUBE. 
Keep it right there. We're covering wall-to-wall Virtual Vertica BDC. You're watching theCUBE. (upbeat music)

Published Date : Mar 31 2020



Calline Sanchez | IBM Interconnect 2017


 

(upbeat techno music) >> Announcer: Live from Las Vegas, it's theCUBE, covering InterConnect 2017, brought to you by IBM. >> Okay, welcome back, everyone. We are here live in Las Vegas for IBM InterConnect 2017. This is theCUBE's coverage of IBM InterConnect. I'm John Furrier, with my cohost, Dave Vellante. We have Calline Sanchez, Vice President of IBM Enterprise Storage Development at IBM. We had an interview at VMworld last year about tape, making tape cool. Great to see you again. >> Thank you. Thank you for welcoming me back, so I guess I wasn't too bad last time. >> No, you're good. >> Calline: Right? >> We love tapes. The tape culture out there, there's a tape community. >> Calline: Yes! >> Tape has been dead forever. It's going to die this year is what everyone always predicted. They're going to die next year. It never dies. Tape is always around, and Dave and I, you know, we see this all the time. >> Calline: Yeah, it's back. >> It's cool. It's relevant, and it's the least expensive storage... >> That is correct. >> Out there. So what's the update? What's cool about tape this year? >> So, I think when I was speaking to you earlier, you talked about flape, what we're doing with Flash and actual tape. So in partnership with our micro-coders, our engineers and scientists we partner with in Tucson, Arizona, with a team in Zurich in research, to really figure out what we're doing with flape. And by the way, flape is a cool name, right? It's a very developer name. >> Well, you know, Wikibon coined that term. That was David Floyer. >> Oh, really. >> The Flash plus tape. Yes. But the premise was that there's not a lot of innovation going on in disc drive heads. >> Calline: Correct. >> And they're hermetically sealed, whereas in tape, you can do a lot more, more bandwidth, and you can do some cool stuff with search, right? >> Yes. >> And new tape formats. >> Calline: Right. >> Right, so that's all coming together, and are you... 
Is there software now associated with that index so you can more quickly search? >> So we have created a management layer that supports what we intend with flape, and also across the tape portfolio, to really consume applications at a higher level, to enable what we need to do with our consumability, not only from a tape perspective, but also with Flash. >> Right, so the economics really still favor tape. With flape, Flash supports the speed, so it starts to encroach on some of that long-term archiving. >> Which is important based on archive, 'cause, well, aren't we all data hoarders? We like to keep our data and archive it and stage it off, whatever it is. It could be based on what we're doing with tape and also, you know, hard disk drives. Some clients that I work with substantiate archive data to cheap drives as well. So hopefully, eventually, the transition will be to enable what we want to do with Flash, once Flash, of course, is at a cheaper, more competitive price. So, bridging from our last conversation, believe me, tape is sexy, so I'm telling the audience here, it's like, hello, if you're not talking about tape, well, where have you been? But at the same time, I want to talk about Flash, and what we do with DS8000. So we have an enterprise monolithic system that covers like six nines of availability, that substantiates what we do in the Flash market, and we just recently announced, from enterprise down to entry, so mid-range as well as entry devices, that are all Flash. So we care more, from an IBM perspective, about what we're doing associated with the Flash investment. Friends of mine like Erik Herzog, I'm sure, get on stage with you to talk about, like, IBM is focusing on Flash. It's relevant to us. It's relevant to our clients. >> And the software, too. Very software-driven. Flash is key, but you're bringing this capacity at this... What was it, six nines? >> Yep. >> I mean... 
>> I mean that's just like, just dump all the data. That's a perfect scenario. >> Yes, and it's a beautiful thing. In addition to what Flash does, from an engineering perspective... Forgive me, I'm going to be a geek for a moment, is that it allows us to in the lab to focus on other things, so basically, that latency or the chase for performance equals more, more meaning that we can focus more on what it means to develop optimizers, like, for instance, EasyTier, et cetera, to really enable a better benefit. Also some of those engineers and scientists allow us to focus more on flape as well. >> So explain that concept. Okay, latency equals more. What specifically do you mean, like the latency on the devices, \the data movement, just double down on that for a second. >> So, from a performance perspective, we have to work around bottlenecks. That was where our focus was, but now, with Flash, we worry less about those individual components from a reliability perspective as well as chasing latency or performance measurements based on IOPS, and in order to do that, we don't have to worry about it as much anymore from an engineering standpoint. So it allows us the time to really focus on what matters next, like the value that we could think of that we could benefit clients with regards to advanced technologies, technologies of value. >> Like what? What does that free you? It liberates both creativity... >> Calline: Data and analytics. >> Really, okay. >> Calline: So like, for instance, the expo floor associated with Interconnect, you meet all these people that talk about how data matters. Well, it's the intelligence around data, and so we want to figure out how to harness that data and drive out the intelligence so that the smarts associated with the data, and that's what it allows us to talk about. >> Yeah, so let's keep on that theme of business impact, we were talking about latency before. Everybody knows Flash is fast. 
You're implying, or I'm inferring from your statements, that it's still more expensive. >> Calline: Correct. >> However, you've got data reduction technologies and you have this data sharing notion. In other words, I can share much more data with the same copy out of Flash. That has an impact on developer productivity, which ripples through on innovation. >> Calline: Yes, correct. >> Are you seeing that have business impacts within your client base? >> Yes. So, for instance, at first, we started to talk about, from an engineering perspective, compression and deduplication, how to be much more efficient with that data storage. And then afterwards, we started to talk about, well, you know, we had to move quickly to serve our clients associated with those feature functions, and now we're talking about how we harness our archive, you know, how we enable big data, and also the IT aspect, the intelligence of the data, and how we can translate that to improving client value. For example, I just saw on the expo floor that we partnered with Glassbeam, and with the... I basically did a meeting with Glassbeam on the floor to talk about what we've done with them in partnership to harness the power of data. >> Dave: What is Glassbeam? >> So Glassbeam basically makes sense of the mess of data that we just have out there and makes it much more intelligent. So, their system, their algorithms, ingest data and better understand where we're at with that data, no different than what we do with Watson as well. So, from a Watson perspective, you ingest the data and you can provide additional smarts about that data, as well as the intelligence of it. >> And what kind of data are we talking about here? Structured data, unstructured data? >> Structured data, and specifically associated with Glassbeam, it was all about really bringing in the plumbing of data for our clients worldwide. 
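Deduplication, mentioned above as one of the data reduction technologies, can be sketched at block level in a few lines. This is a toy illustration, not IBM's implementation; real arrays add variable-size (content-defined) chunking, compression, and reference counting on top of the same core idea:

```python
# Minimal block-level deduplication sketch: split data into fixed-size
# chunks, store each unique chunk once keyed by its hash, and keep a
# per-object "recipe" of chunk hashes for reads.
import hashlib

class DedupStore:
    def __init__(self, chunk_size=4096):
        self.chunk_size = chunk_size
        self.chunks = {}   # hash -> bytes; each unique chunk stored once

    def write(self, data):
        recipe = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            h = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(h, chunk)   # skip if already stored
            recipe.append(h)
        return recipe

    def read(self, recipe):
        return b''.join(self.chunks[h] for h in recipe)

store = DedupStore(chunk_size=4)
r1 = store.write(b"AAAABBBBAAAA")   # "AAAA" appears twice, stored once
r2 = store.write(b"AAAACCCC")       # "AAAA" shared with the first write
print(len(store.chunks))            # 3 unique chunks: AAAA, BBBB, CCCC
assert store.read(r1) == b"AAAABBBBAAAA"
```

The economics follow directly: the more copies of the same data clients keep, the more the physical capacity behind them shrinks, which is part of how Flash closes the price gap with cheaper media.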
So, clients experience our systems worldwide associated with the DS8000, and we wanted to be in a better position to serve our clients adequately, and what I mean by that is they could have an error that occurred, or we want to be proactive with them based on Call Home and also some of the heartbeat information we get from the systems, and we want to share that with them adequately. So, to you as a client, I could send a really beautiful, simple email or communication, maybe it's a tweet, that basically says, hey, there's something we're worried about, and we've got to proactively address it, ASAP. >> Well, and there's all kinds of metrics buried in those files, right? There's utilization data, there's data on the effectiveness of thin provisioning, you mentioned compression, deduplication... >> Calline: Yes. >> I mean, I don't know what else is in there. It's probably a ton more stuff, obviously problems that occur. So have you been able to get to the point where you could be anticipatory and head off, you know, front-run some of those problems? >> So our end goal is to build an autonomic system, an autonomic system that has the brain to self-heal, and that's what we want to focus on in the future. Now, are we there yet? No, we're not, but what we're doing with Watson or Glassbeam or some of these optimizers, these tools, to build better systems, it's something that we're doing associated with building the future of an autonomic system.
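The Call Home pattern described here, systems phoning home heartbeat telemetry that gets screened for proactive alerts, can be sketched as a simple threshold check. The field names and thresholds below are invented for the sketch; real Call Home payloads are far richer and the screening is far more sophisticated.

```python
from dataclasses import dataclass

@dataclass
class Heartbeat:
    """One phoned-home telemetry sample (fields are illustrative)."""
    system_id: str
    temp_c: float
    media_errors: int

def proactive_alerts(beats, max_temp=45.0, max_errors=0):
    """Flag systems whose heartbeat telemetry looks worrying,
    so support can reach out before the client notices a failure."""
    alerts = []
    for hb in beats:
        if hb.temp_c > max_temp or hb.media_errors > max_errors:
            alerts.append(f"{hb.system_id}: proactive service recommended")
    return alerts

beats = [Heartbeat("DS8-001", 41.0, 0), Heartbeat("DS8-002", 48.5, 3)]
print(proactive_alerts(beats))  # ['DS8-002: proactive service recommended']
```

The "simple email or tweet" in the conversation corresponds to delivering each alert string through whatever notification channel the client prefers.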
>> We see, and I wonder if you could comment on this, a whole new set of KPIs emerging from the infrastructure standpoint of, you know, what percent of the problems were self-healed... >> Calline: Yes. >> How can we affect that and increase that, and what are we doing with that free time? Are you hearing from clients that they're changing or adding to the metrics, the KPIs that they're tracking? >> Yes, so first, am I hearing from clients on that? Yes. So it's always these questions of like, okay, so from a cognition perspective, a cognitive focus, what are you going to do to help us to self-heal, as well as how do you build in the intelligence, based on artificial intelligence, to really self-heal, and that's one of the focuses we're working on. >> So what's the coolest thing happening now, 'cause last time, I loved the conversation we had about capacity, and stuff that I learned was all the engineering, just to squeeze more out of... 'Cause the tape is a great thing, but reliability is killer. You got some great reliability, so it's a good solution, but there's always the engineering side of it that's science. What's going on that you guys are kind of digging away at, pounding away at, for tech that people might not know about for tape? >> So, using the cognitive systems or AI as the foundation, we're thinking about how to build in intelligence within our systems, and the way to do that is the reason why I keep focusing on this word, autonomic. How do we build a true autonomic system? It's almost like a system that has its own brain, right?
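The KPI raised in this exchange, "what percent of the problems were self-healed", is straightforward to compute once incidents are recorded with an outcome flag. The dictionary schema below is made up for the sketch, standing in for whatever an operations database actually records.

```python
def self_heal_rate(incidents):
    """Percent of incidents resolved without human intervention.

    `incidents` is a list of dicts with a boolean 'self_healed' key
    (an assumed schema for illustration).
    """
    if not incidents:
        return 0.0
    healed = sum(1 for i in incidents if i["self_healed"])
    return 100.0 * healed / len(incidents)

incidents = [
    {"self_healed": True},
    {"self_healed": True},
    {"self_healed": False},
    {"self_healed": True},
]
print(self_heal_rate(incidents))  # 75.0
```

Tracking this number over time is one way to measure progress toward the autonomic system Sanchez describes: the closer it gets to 100, the less operator time the infrastructure consumes.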
And that chipset that exists inside, associated with the DS8000, is like our Power devices, right, whether it's six-core, eight-core, whatever, how big of a brain do you want is kind of a discussion to have, but what's important about that is we really want to figure out how to be smart enough to self-heal, and we don't know how to do that just yet, and it's going to be, just like you had mentioned, all this information and pulling it in to really determine how we go about doing so. >> So that's kind of near-term, those are sort of... Maybe in the binoculars you can start to see how you can utilize analytics and cognitive to do some of that self-healing. I wanted to ask you a sort of telescope question. We heard Jenny talk today about quantum. What are your thoughts on that in terms of the implications for storage? >> My thoughts on quantum. So first of all, let's figure out how to harness the science of quantum computing, right? So that's the first fundamental step, like, I don't know, the first step of the twelve-step program, realizing you have a challenge, right? (Dave laughs) So, from that, it's really realizing that and recognizing that, and IBM is working on what we're doing with quantum computing. As far as how it relates specifically to storage, we think it could be a benefit with regards to DS8000 tape as well, because think about it. Tape, as far as the library side, what we did is we built out infrastructure that really harnessed this aspect of data and did it in the cheapest way possible, the most energy-efficient way possible, so I think quantum, from our perspective, is like a leapfrog into the future of what we enable with some of our thinking there. And Jenny and team, as well as her senior leadership, are influencing how we should think about quantum computing as it relates to storage. So, I say the next time that we meet, you should probably ask that question of me again, like how far along are you?
>> Step one and a half or two of the twelve-step program? >> I would say one and a half. >> Dave: Go ahead, sorry. >> No, go ahead. >> I wanted to ask you about when Ed Walsh took over. >> By the way, I like the two of you competing on questions. (all laugh) >> We both like to talk. >> We can't get enough tape. (all laugh) >> We have tape everywhere, look at it. Taping down the lights... >> So, here's my question, Calline. So when Ed Walsh took over as the GM of the Storage Division, I asked him this. IBM's always had a rich heritage of R&D and development. However, my comment was, sometimes it was sort of development for development's sake, and I feel like, and he sort of said this, one of my missions is to, you know, align engineering with, you know, go to market, get stuff out of the pipeline, into the market sooner. From an engineering perspective, have you guys begun to do that? What changes have you effected? Are you seeing the effects of that sort of initiative? >> So, we have an agility process within IBM Development that, basically, Ed Walsh was a huge advocate for and supported, and his intent is for us to push all of this wonderful IP that we build in-house to the marketplace as quickly as possible. So I'd say at this moment, we're there. It's just, right now, he's, in the nicest way possible, and the most charming way, telling me, it's like, you're not fast enough. (men laugh) Right? And that's a good thing. That means that there's more innovation, more intellectual property we can put into the marketplace, faster, quicker, whatever that means, in larger increments, versus it being me... Previously, I would tell you, it's like, so DS8000, I may deliver that to you, target-wise, 12 months from now. That's not good enough anymore. >> So Ed's coming on tomorrow, so we'll ask him how Calline's doing, maybe. (all laugh) We'll put him on the spot and you on the spot at the same time, if you don't mind. >> Oh yeah, no problem.
>> Calline, it's always great to chat with you, love these conversations, thanks for coming on theCUBE, sharing the insights on the tape, the DS8000. Appreciate it. >> Thank you very much. >> And it's theCUBE live here in Las Vegas for IBM InterConnect. I'm John Furrier with Dave Vellante. You're watching theCUBE. Stay with us, we've got more great interviews for the rest of the day and all day tomorrow. We'll be right back. (upbeat techno music)

Published Date : Mar 21 2017

