
Search Results for Bill Joy:

Frank Gens, IDC | Actifio Data Driven 2019


 

>> From Boston, Massachusetts, it's The Cube. Covering Actifio 2019: Data Driven, brought to you by Actifio. >> Welcome back to Boston, everybody. We're here at the Intercontinental Hotel at Actifio's Data Driven conference, day one. You're watching The Cube, the leader in on-the-ground tech coverage. My name is Dave Vellante, Stu Miniman is here, so is John Furrier, my friend Frank Gens is here, he's the Senior Vice President and Chief Analyst at IDC and Head Dot Connector. Frank, welcome to The Cube. >> Well thank you Dave. >> First time. >> First time. >> Newbie. >> Yep. >> You're going to crush it, I know. >> Be gentle. >> You know, you're awesome, I've watched you over the many years, of course, you know, you seem to get competitive, and it's like who gets the best rating? Frank always had the best ratings at the Directions conference. He's blushing but I could- >> I don't know if that's true but I'll accept it. >> I could never beat him, no matter how hard I tried. But you are a phenomenal speaker, you gave a great conversation this morning. I'm sure you drew a lot from your Directions talk, but every year you lay down this, you know, sort of, mini manifesto. You describe it as, you connect the dots, IDC, thousands of analysts. And it's your job to say okay, what does this all mean? Not in the micro, let's up-level a little bit. So, what's happening? You talked today, you know, you gave your version of the wave slides. So, where are we in the waves? We are exiting the experimentation phase, and coming into a new phase of multiplied innovation. I saw AI on there, blockchain, some other technologies. Where are we today? >> Yeah, well I think having mental models of the industry or any complex system is pretty important. I mean I've made a career dumbing down a complex industry into something simple enough that I can understand, so we've done it again now with what we call the third platform. So, ten years ago we saw the whole raft of new technologies that were coming in at the time that would become the foundation for the next thirty years of tech, so, that's an old story now. Cloud, mobile, social, big data, obviously IoT technologies coming in, blockchain, and so forth. So we call this general era the third platform, but we noticed a few years ago, well, we're at the threshold of kind of a major scale-up of innovation in this third platform that's very different from the last ten or twelve years, which we called the experimentation stage. Where people were using this stuff, using the cloud, using mobile, big data, to create cool things, but they were doing it in kind of an isolated way. Kind of the traditional, well I'm going to invent something and I may have a few friends help me, whereas the promise of the cloud has been, well, if you have a lot of developers out on the cloud that form a community, an ecosystem, think of GitHub, you know, any of the big code repositories, or the ability to have shared services on, say, Amazon's cloud, or IBM, or Google, or Microsoft, the promise is there to actually bring to life what Bill Joy said, you know, in the nineties. Which was no matter how smart you are, most of the smart people in the world work for someone else. So the question's always been, well, how do I tap into all those other smart people who don't work for me? 
So we can feel that where we are in the industry right now is the business model of multiplied innovation or, if you prefer, a network of collaborative innovation, being able to build something interesting quickly, using a lot of innovation from other people, and then adding your special sauce. But that's going to take the scale of innovation up a couple of orders of magnitude. And the pace, of course, that goes with that, is that people are innovating at a much more rapid clip now. So really, the full promise of a cloud-native innovation model, so we kind of feel like we're right here, which means there's lots of big changes around the technologies, around kind of the world of developers and apps, AI is changing, and of course, the industry structure itself. You know the power positions, you know, a lot of vendors have spent a lot of energy trying to protect the power positions of the last thirty years. >> Yeah so we're getting into some of that. So, but you know, everybody talks about digital transformation, and they kind of roll their eyes, like it's a big buzzword, but it's real. It's data, we're at a data-driven conference. And data, you know, being at the heart of businesses means that you're seeing businesses transition industries, or traverse industries, you know, Amazon getting into groceries, Apple getting into content, Amazon as well, etcetera, etcetera, etcetera, so, my question is, what's a tech company? I mean, you know, Benioff says that, you know, every company's a SaaS company, and you're certainly seeing that, and it's got to be great for your business. >> Yeah, yeah, absolutely. >> Quantifying all those markets, but I mean, the market that you quantify, it's just every company now. Banks, insurance companies, grocers, you know? Everybody is a tech company. >> I think, yeah, that's a hundred percent right. This is the biggest revolution in the economy, you know, for many many decades. Or you might say centuries even. Whoever put it, was it Marc Andreessen or whoever used to talk about software eating the world, we're in the middle of that. Only, software now is being delivered in the form of digital or cloud services so, you know, every company is a tech company. And of course it really raises the question, well what are tech companies? You know, they need to kind of think back about where does our value add? But it is great. When we look at the world of clouds, one of the first things we observed in 2007, 2008 was, well, cloud wasn't just about S3 storage clouds, or salesforce.com's software as a service. It's a model that can be applied to any industry, any company, any offering. And of course we've seen all these startups, whether it's Uber or Netflix or whoever it is, basically digital innovation in every single industry, transforming that industry. So, to me the exciting part is that model of transforming industries through the use of software, through digital technology. In that kind of experimentation stage it was mainly a startup story. All those unicorns. To me the multiplied innovation chapter, it's about- (audio cuts out) finally, you know, the cities, the Procter & Gambles, the Walmarts, the John Deeres, they're finally saying hey, this cloud platform and digital innovation, if we can do that in our industry. >> Yeah, so intrapreneurship is actually, you know, starting to- >> Yeah. 
>> So you and I have seen a lot of cycles, we watched, you know, the mainframe wave get crushed by the microprocessor-based revolution, IDC at the time spent a lot of time looking at that. >> Vacuum tubes. >> Water coolant is back. So but the industry has marched to the cadence of Moore's Law forever. Even Thomas Friedman when he talks about, you know, his stuff and he throws in Moore's Law. But Moore's Law is no longer the sort of engine of innovation. There are other factors. So what's the innovation cocktail looking forward over the next ten years? You've talked about cloud, you know, we've talked about AI, what's that, you know, sandwich, the innovation sandwich look like? >> Yeah so to me I think it is the harnessing of all this flood of technologies, again, that are mainly coming off the cloud, and that parade is not stopping. Quantum, you know, lots of other technologies are coming down the pipe. But to me, you know, it is the mixture of, number one, the cloud, public cloud stacks being able to travel anywhere in the world. So take the cloud on the road. So it's even, I would say, not even just scale, I think of, that's almost like an amount of compute power. Which could happen inside multiple hyperscale data centers. I'm also thinking about scale in terms of the horizontal. >> Bringing that model anywhere. >> Take me out to the edge. >> Wherever your data lives. >> Take me to a Carnival cruise ship, you know, take me to, you know, an Apple-powered autonomous car, or take me to a hospital or a retail store. So the public cloud stacks where all the innovation is basically happening in the industry. Jail-breaking that out so it can come, you know it's through Amazon, AWS Outposts, or Azure Stack, or Google Anthos, this movement of the cloud guys, to say we'll take public cloud innovation wherever you need it. That to me is a big part of the cocktail because that's, you know, basically the public clouds have been the epicenter of most tech innovation the last three or four years, so, that's very important. I think, you know just quickly, the other piece of the puzzle is the revolution that's happening in the modularity of apps. So the microservices revolution. So, the building of new apps and the refactoring of old apps using containers, using serverless technologies, you know, API lifecycle management technologies, and of course, agile development methods. Kind of getting to this kind of iterative, sped-up deployment model, where people might've deployed new code four times a year, they're now deploying it four times a minute. >> Yeah right. >> So to me that's- and kind of aligned with that is what I was mentioning before, that if you can apply that, kind of, rapid scale, massive volume innovation model and bring others into the party, so now you're part of a cloud-connected community of innovators. And again, that could be around a GitHub, or could be around a Google or Amazon, or it could be around, you know, Walmart. In a retail world. Or an Amazon in retail. Or it could be around a Procter & Gamble, or around a Disney, digital entertainment, you know, where they're creating ecosystems of innovators, and so to me, bringing people, you know, so it's not just these technologies that enable rapid, high-volume modular innovation, but it's saying okay now plugging lots of people's brains together is just going to, I think that, here's the- >> And all the data that throws off obviously. 
>> Throws a ton of data, but, to me the number we use as kind of the punchline for, well, where does multiplied innovation lead? A distributed cloud, this revolution in distributing modular, massive-scale development, is that we think in the next five years we'll see as many new apps developed and deployed as we saw developed and deployed in the last forty years. So five years, the next five years, versus the last forty years, and so to me that's, that is the revolution. Because, you know, when that happens that means we're going to start seeing that long tail of use cases that people could never get to, you know, all the highly verticalized use cases are going to be filled, you know, finally a lot of white space that has been white for decades is going to start getting a lot of cool colors and a lot of solutions delivered to it. >> Let's talk about some of the macro stuff, I don't know the exact numbers, but it's probably three trillion, maybe it's four trillion now, big market. You talked today about the market growing at two x GDP. >> Yeah. >> For the tech market, that is. Why is it that the tech market is able to grow at a rate faster than GDP? And is there a relationship between GDP and tech growth? >> Yeah, well, I think, we are still, while, you know, we've been in tech, talk about those apps developed the last forty years, we've both been there, so- >> And that includes the iPhone apps, too, so that's actually a pretty impressive number when you think about the last ten years being included in that number. >> Absolutely, but if you think about it, we are still kind of teenagers when you think about that Andreessen idea of software eating the world. You know, we're just kind of on the early appetizer, you know, the sorbet is coming to clear our palates before we go to the next course. But we're not even close to the main course. And so I think when you look at the kind of, the percentage of companies and industry process that is digital, that has been highly digitized. We're still early days, so to me, I think that's why. That the kind of steady state of how much of an industry's process and data flow is based on software. I'll just make up a number, you know, we may be a third of the way to whatever the steady state is. We've got two-thirds of the way to go. So to me, that supports growth of IT investment rising at double the rate of the overall economy. Because it's sucking in and absorbing and transforming big pieces of the existing economy. >> So given the size of the market, given that all companies are tech companies, what are your thoughts on the narrative right now? You're hearing a lot of pressure from, you know, public policy to break up big tech. And we saw, you know, you and I were there when Microsoft, and I would argue, they were, you know, breaking the law. Okay, the Department of Justice did the right thing, and they put handcuffs on them. >> Yeah. >> But they never really, you know, went after the whole breakup scenario, and you hear a lot of that, a lot of the vitriol. Do you think that makes sense? To break up big tech and what would the result be? >> You don't think I'm going to step on those land mines, do you? >> Okay well I've got an opinion. >> Alright I'll give you mine then. Alright, since- >> I mean, I'll lay it out there, I just think if you break up big tech the little techs are going to get bigger. It's going to be like AT&T all over again. 
The other thing I would add is if you want to go after China for, you know, IP theft, okay fine, but why would you attack the AI leaders? Now, if they're breaking the law, that should not be allowed. I'm not for, you know, monopolistic, you know, illegal behavior. What are your thoughts? >> Alright, you've convinced me to answer this question. >> We're having a conversation- >> Nothing like a little competitive juice going. You're totally wrong. >> Lay it out for me. >> No, I think, but this has been a recurring pattern, as you were saying, it even goes back further to, you know, AT&T and people wanting to connect other devices to the network, the Carterfone, and it goes to IBM mainframes opening up to peripherals. Right, it goes back to it. Exactly. It goes back to the wheel. But it's, yeah, to me it's a valid question to ask. And I think, you know, part of the story I was telling, that multiplied innovation story, and Bill Joy, Joy's Law, is really about platform. Right? And so when you get an aggregated portfolio of technical capabilities that allow innovation to happen. Right, so the great thing is, you know, you typically see concentration, consolidation around those platforms. But of course they give life to a lot of competition and growth on top of them. So that to me is the, that's the conundrum, because if you attack the platform, you may send us back into this kind of disaggregated, less creative- so that's the art, is to take the scalpel and figure out well, where are the appropriate boundaries for, you know, putting those walls, where if you're in this part of the industry, you can't be in this. So, to me I think one, at least reasonable, way to think about it is, so for example, if you are a major cloud platform player, right, you're providing all of the AI services, the cloud services, the compute services, the blockchain services, that a lot of the SaaS world is using. That, somebody could argue, well, if you get too strong in the SaaS world, you then could be in a position to give yourself a favorable position from the platform. Because everyone in the SaaS world is depending on the platform. So somebody might say you can't be in. You know, if you're in the SaaS business you'll have to separate that from the platform business. But I think to me, so that's a logical way to do it, but I think you also have to ask, well, are people actually abusing? Right, so I- >> I think it's a really good question. >> I don't think it's fair to just say well, theoretically it could be abused. If the abuse is not happening, I don't think it's appropriate to act prophylactically, it's like going after a crime before it's committed. So I think, the other thing that is happening is, often these monopolies or power positions have been about economic power, pricing power, I think there's another dynamic happening because consumer data, people's data, the Facebook phenomenon, the Twitter and the rest, there's a lot of stuff that's not necessarily about pricing, but that's about kind of social norms and privacy that I think are at work and that we haven't really seen as big a factor, I mean obviously we've had privacy regulation in Europe with GDPR and the rest, obviously in check, but part of that's because of the social platforms, so that's another vector that is coming in. >> Well, you would like to see the government actually say okay, this is the framework, or this is what we think the law should be. 
I mean, part of it is okay, Facebook, they have incentive to appropriate our data and, okay, maybe they're not taking enough responsibility for it. But I to date have not seen the evidence as we did with, you know, Microsoft wiping out, you know, Lotus, and Novell, and WordPerfect through bundling, and what it did to Netscape with bundling the browser, and the pricing practices that- I don't see that, today, maybe I'm just missing it, but- >> Yeah I think that's going to be all around, you know, online advertising, and all that, to me that's kind of the market- >> Yeah, so Google, some of the Google stuff, that's probably legit, and that's fine, they should stop that. >> But to me the bigger issue is more around privacy. You know, it's a social norm, it's societal, it's not an economic factor I think around Facebook and the social platforms, and I think, I don't know what the right answer is, but I think certainly for government it's legitimate for those questions to be asked. >> Well maybe GDPR becomes that framework, so, they're trying to give us the hook but, I'm having too much fun. So we're going to- I don't know how closely you follow Facebook, I mean they're obviously big tech, so Facebook has this whole crypto play, seems like they're using it for driving an ecosystem and making money. As opposed to dealing with the privacy issue. I'd like to see more on the latter than the former, perhaps, but, any thoughts on Facebook and what's going on there with their crypto play? >> Yeah I don't study them all that much so, I am fascinated when Mark Zuckerberg was saying well, our key business now is about privacy, which I find interesting. It doesn't feel that way necessarily, as a consumer and an observer, but- >> Well you're on Facebook, I'm on Facebook, >> Yeah yeah. >> Okay so how about big IPOs, we're in the tenth year now of this huge, you know, tailwind for tech. Obviously you have guys like Uber, Lyft going IPO, losing tons of money. Stocks actually haven't done that well which is kind of interesting. You saw Zoom, you know, go public, doing very well. Slack is about to go public. So there's really a rush to IPO. Your thoughts on that? Is this sustainable? Or are we kind of coming to the end here? >> Yeah so, I think in part, you know, predicting the stock market waves is a very tough thing to do, but I think one kind of secular trend that is going to be relevant for these tech IPOs is what I was mentioning earlier, is that we've now had a ten, twelve year run of basically startups coming in and reinventing industries while the incumbents in the industries are basically sitting on their hands, or sleeping. So to me the next ten years, those startups are going to, not that, I mean we've seen that large companies waking up doesn't necessarily always lead to success, but it feels to me like it's going to be a more competitive environment for all those startups. Because the incumbents, not all of them, and maybe not even most of them, but some decent portion of them are going to wind up becoming digital giants in their own industry. So to me I think that's a different world the next ten years than the last ten. I do think one important thing, and I think around acquisitions, M&A, and we saw it just in the last few weeks with Google and Looker and we saw Tableau with Salesforce, is that the mega-cloud world of Microsoft Azure, and Amazon, Google. That world is clearly consolidating. There's room for three or four global players and that game is almost over. 
But there's another power position on top of that, which is around where do all the app guys, the business app guys, all the suite guys, SAP, Oracle, Salesforce, Adobe, Microsoft, you name it, where do they go? And so we see, we think- >> ServiceNow, now kind of getting big. >> Absolutely, so we're entering an intensive period, and I think again, the Tableau and Looker deals are just an example where those companies are all stepping on the gas to become better platforms. So apps as platforms, or app portfolios as platforms, so, much more of a data play, analytics play, buying other pieces of the app portfolio that they may not have. And basically scaling up to become the business process platforms and ecosystems there. So I think we are just at the beginning of that, so look for a lot of SaaS companies. >> And I wonder if Amazon could become a platform for developers to actually disrupt those traditional SaaS guys. It's not obvious to me how those guys get disrupted, and I'm thinking, everybody says oh, is Amazon going to get into the app space? Maybe some day if they happen to do a TAM expansion. But it seems to me that they become a platform for new apps, you know, your apps explosion. At the edge, obviously, you know, local. >> Well there's no question. I think those app-centric platforms are what I'd call that competition up there, versus kind of the mega cloud. There's no question the mega cloud guys, they've already started launching, like, call center, contact center software, they're creeping up into that world of business apps, so I don't think they're going to stop, and so I think that that is a reasonable place to look, is will they just start trying to create, in effect, suites and platforms around SaaS of their own. >> Startups, ecosystems like you were saying. Alright, I got to give you some rapid fire questions here, so, when do you think, or do you think, no, I'm going to say when you think, that owning and driving your own car will become the exception, rather than the norm? Buy into the autonomous vehicles hype? Or- >> I think, to me, that's a ten-year type of horizon. >> Okay, ten plus, alright. When will machines be able to make better diagnoses than doctors? >> Well, you could argue that in some fields we're almost there, or we're there. So it's all about the scope of issue, right? So if it's reading a radiology, you know, film or image, to look for something right there, we're almost there. But for complex cancers or whatever that's going to take- >> One more dot connecting question. >> Yeah yeah. >> So do you think large retail stores will essentially disappear? >> Oh boy, that's a- they certainly won't disappear, but I think they can shrink, so witness Apple and Amazon even trying to come in, so it feels that the mix is certainly shifting, right? So it feels to me that the model of retail presence, I think that will still be important. Touch, feel, look, socialize. But it feels like the days of, you know, ten thousand or five thousand store chains, it feels like that's declining in a big way. >> How about big banks? You think they'll lose control of the payment systems? >> I think they're already starting to, yeah, so, I would say that is, and they're trying to get in to compete, so I think that is on its way, no question. I think that horse is out of the barn. >> So cloud, AI, new apps, new innovation cocktails, software eating the world, everybody is a tech company. Frank Gens, great to have you. >> Dave, always great to see you. >> Alright, keep it right there buddy. 
You're watching The Cube, from Actifio Data Driven 2019. We'll be right back after this short break. (bouncy electronic music)

Published Date : Jun 18 2019


Alfred Essa, McGraw-Hill Education | Corinium Chief Analytics Officer Spring 2018


 

>> Announcer: From the Corinium Chief Analytics Officer Conference, Spring, San Francisco, it's theCUBE. >> Hey, welcome back everybody. Jeff Frick here with theCUBE. We're at the Corinium Chief Analytics Officer event in San Francisco, Spring, 2018. About 100 people, predominantly practitioners, which is a pretty unique event. Not a lot of vendors, a couple of them around, but really a lot of people that are out in the wild doing this work. We're really excited to have a return guest. We last saw him at Spark Summit East 2017. Can you believe I keep all these shows straight? I do not. Alfred Essa, he is the VP, Analytics and R&D at McGraw-Hill Education. Alfred, great to see you again. >> Great being here, thank you. >> Absolutely, so last time we were talking it was Spark Summit, it was all about data in motion and data on the fly, and real-time analytics. You talked a lot about trying to apply these types of leading-edge technologies and cutting-edge things to actually education. What a concept, to use artificial intelligence, machine learning, for people learning. Give us a quick update on that journey, how's it been progressing? >> Yeah, the journey progresses. We recently had a new CEO come on board, started two weeks ago. Nana Banerjee, very interesting background. PhD in mathematics and his area of expertise is data analytics. It just confirms the direction of McGraw-Hill Education, that our future is deeply embedded in data and analytics. >> Right. It's funny, there's an often quoted kind of fact that if somebody came from a time machine from, let's just pick 1849, here in San Francisco, everything would look different except for Market Street and the schools. The way we get around is different. >> Right. >> The things we do to earn a living are different. The way we get around is different, but the schools are just slow to change. Education, ironically, has been slow to adopt new technology. You guys are trying to really change that paradigm and bring the best and latest in cutting edge to help people learn better. Why do you think it's taken education so long, and you must see nothing but opportunity ahead for you. >> Yeah, I think the... It was sort of a paradox in the 70s and 80s when it came to IT. I think we have something similar going on. Economists noticed that we were investing lots and lots of money, billions of dollars, in information technology, but there were no productivity gains. So this was somewhat of a paradox. When, and why, are we not seeing productivity gains based on those investments? It turned out that the productivity gains did appear, but they trailed, and it was because just investment in technology in itself is not sufficient. You have to also have business process transformation. >> Jeff Frick: Right. >> So I think what we're seeing is, we are at that cusp where people recognize that technology can make a difference, but it's not technology alone. Faculty have to teach differently, students have to understand what they need to do. It's a similar business transformation in education that I think we're starting to see now occur. >> Yeah it's great, 'cause I think the old way is clearly not the way forward. That's, I think, pretty clear. Let's dig into some of these topics, 'cause you're a super smart guy. One thing to talk about is this algorithmic transparency. 
A lot of stuff in the news going on, of course we have all the stuff with self-driving cars where there's these black box machine learning algorithms, and artificial intelligence, or augmented intelligence, bunch of stuff goes in and out pops either a chihuahua or a blueberry muffin. Sometimes it's hard to tell the difference. Really, it's important to open up the black box. To open it up so you can at least explain, at some level, what was the method that took these inputs and derived this output. People don't necessarily want to open up the black box, so kind of what is the state that you're seeing? >> Yeah, so I think this is an area where not only is it necessary that we have algorithmic transparency, but I think those companies and organizations that are transparent, I think that will become a competitive advantage. That's how we view algorithms. Specifically, I think in the world of machine learning and artificial intelligence, there's skepticism, and that skepticism is justified. What are these machines? They're making decisions, making judgments. Just because it's a machine, doesn't mean it can't be biased. We know it can be. >> Right, right. >> I think there are techniques. For example, in the case of machine learning, what the machine learns, it learns the algorithm, and those rules are embedded in parameters. I sort of think of it as gears in the black box, or in the box. >> Jeff Frick: Right. >> What we should be able to do is allow our customers, academic researchers, users, to understand at whatever level they need to understand and want to understand >> Right. >> What the gears do and how they work. >> Jeff Frick: Right. >> Fundamental, I think, for us is we believe that the smarter our customers are and the smarter our users are, and one of the ways in which they can become smarter is understanding how these algorithms work. >> Jeff Frick: Right. >> We think that that will allow us to gain a greater market share. So what we see is that our customers are becoming smarter. They're asking more questions and I think this is just the beginning. >> Jeff Frick: Right. >> We definitely see this as an area where we want to distinguish ourselves. >> So how do you draw lines, right? Because there's a lot of big science underneath those algorithms. To different degrees, some of it might be relatively easy to explain as a simple formula, other stuff maybe is going into some crazy statistical process that most laymen, or business stakeholders, may or may not understand. Is there a way you slice it? Are there kind of orders of magnitude in how much you expose, and the way you expose within that box? >> Yeah, I think there is a tension. The tension traditionally, I think organizations think of algorithms like they think of everything else, as intellectual property. We want to lock down our intellectual property, we don't want to expose that to our competitors. I think... I think that's... We do need to have intellectual property, however, I think many organizations get locked into a mental model, which I don't think is the right one. I think we can, and we want our customers to understand how our algorithms work. We also collaborate quite a bit with academic researchers. We want validation from the academic research community that yeah, the stuff that you're building is in fact based on learning science. That it has warrant. That when you make claims that it works, yes, we can validate that. Now, where I think... 
Based on the research that we do, things that we publish, our collaboration with researchers, we are exposing and letting the world know how we do things. At the same time, it's very, very difficult to build, engineer, and architect scalable solutions that implement those algorithms for millions of users. That's not trivial. >> Right, right, right. >> Even if we give away quite a bit of our secret sauce, it's not easy to implement that. >> Jeff Frick: Right. >> At the same time, I believe and we believe, that it's good to be chased by our competition. We're just going to go faster. Being more open also creates excitement and an ecosystem around our products and solutions, and it just makes us go faster. >> Right, which gets to another transition point, where you talk about kind of the old mental model of closed IP systems, and we're seeing that just get crushed with open source. Not only open source movements around specific applications, and like, we saw you at Spark Summit, which is an open source project. Even within what you would think for sure has got to be core IP, like Facebook opening up their hardware spec for their data centers, again. I think what's interesting, 'cause you said the mental model. I love that because the ethos of open source, by rule, is that all the smartest people are not inside your four walls. >> Exactly. >> There's more of them outside the four walls regardless of how big your four walls are, so it's more of a significant mental shift to embrace, adopt, and engage that community, with a much bigger cumulative brain power, than just trying to hire the smartest and keep it all inside. How is that impacting your world, how's that impacting education, how can you bring that power to bear within your products? >> Yeah, I think... You were in effect quoting, I think it was Bill Joy saying, one of the founders of Sun Microsystems, that however many smart people you have in your organization, there are always more smart people outside your organization, right? How can we entice, lure, and collaborate with the best and the brightest? One of the ways we're doing that is around analytics, and data, and learning science. We've put together an advisory board of learning science researchers. These are the best and brightest learning science researchers, data scientists, learning scientists, they're on our advisory board and they help and set, give us guidance on our research portfolio. That research portfolio is, it's not blue sky research, we're not Google and Facebook, but it's very much applied research. We try to take the knowns in learning science and we go through a very quick iterative, innovative pipeline where we do research, move a subset of those to product validation, and then another subset of that to product development. This is under the guidance of, advice from, and collaboration with the academic research community. >> Right, right. You guys are at an interesting spot, because people learn one way, and you've mentioned a couple times in this interview, using good learning science is the way that people learn. Machines learn a completely different way because of the way they're built and what they do well, and what they don't do so well. Again, I joked before about the chihuahua and the blueberry muffin, which is still one of my favorite pictures, if you haven't seen it, go find it on the internet. You'll laugh and smile I promise. You guys are really trying to bring together the latter to really help the former. 
Where do those things intersect, where do they clash, how do you meld those two methodologies together? >> Yeah, it's a very interesting question. I think where they do overlap quite a bit is... in many ways machines learn the way we learn. What do I mean by that? Machine learning and deep learning, the way machines learn is... By making errors. There's something, a technical concept in machine learning called a loss function, or a cost function. It's basically the difference between your predicted output and ground truth, and then there's some sort of optimizer that says "Okay, you didn't quite get it right. "Try again." Make this adjustment. >> Get a little closer. >> That's how machines learn, they're making lots and lots of errors, and there's something behind the scenes called the optimizer, which is giving the machine feedback. That's how humans learn. It's by making errors and getting lots and lots of feedback. That's one of the things that's been absent in traditional schooling. You have a lecture mode, and then a test. >> Jeff Frick: Right. >> So what we're trying to do is incorporate what's called formative assessment, this is just feedback. Make errors, practice. You're not going to learn something, especially something that's complicated, the first time. You need to practice, practice, practice. You need lots and lots of feedback. That's very much how we learn and how machines learn. Now, the differences are, technologically and in the state of knowledge, machines can now do many things really well, but there are still some things, many things, that humans are really good at. What we're trying to do is not have machines replace humans, but have augmented intelligence. Unify things that machines can do really well, bring that to bear in the case of learning, also insights that we provide instructors, advisors. I think this is the great promise now of combining the best of machine intelligence and human intelligence. >> Right, which is great. We had Garry Kasparov on and it comes up time and time again. The machine is not better than a person, but a machine and a person together are better than a person or a machine, to really add that context. >> Yeah, and the dynamics of, how do you set up the context so that both are working in tandem, in combination. >> Right, right. Alright Alfred, I think we'll leave it there 'cause I think there's not a better lesson that we could extract from our time together. I thank you for taking a few minutes out of your day, and great to catch up again. >> Thank you very much. >> Alright, he's Alfred, I'm Jeff. You're watching theCUBE from the Corinium Chief Analytics Officer event in downtown San Francisco. Thanks for watching. (energetic music)
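Essa's description of a loss function and an optimizer maps onto a very small piece of code. The sketch below is illustrative only, not McGraw-Hill's software: plain gradient descent on a one-parameter linear model, with made-up data and a made-up learning rate, showing the loop of predict, measure the error against ground truth, and let the optimizer nudge the parameter.

```python
# Minimal sketch of "learning by making errors" with a loss function and an optimizer.
# The data points and learning rate are invented for illustration; this is plain
# gradient descent on a one-parameter linear model y = w * x.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]  # (input, ground truth)

w = 0.0    # the parameter the "machine" is learning
lr = 0.01  # how big an adjustment the optimizer makes after each error

for step in range(200):
    # Loss function: average squared difference between prediction and ground truth.
    loss = sum((w * x - y) ** 2 for x, y in data) / len(data)

    # Gradient of the loss tells the optimizer which way to adjust w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)

    # Optimizer: "you didn't quite get it right -- try again, adjusted."
    w -= lr * grad

    if step % 50 == 0:
        print(f"step={step:3d}  loss={loss:.4f}  w={w:.3f}")

print(f"learned w ~ {w:.3f}")
```

Real systems use many parameters and far richer models, but the feedback loop, error, adjustment, try again, is the same one Essa compares to formative assessment.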

Published Date : May 18 2018


Sastry Malladi, FogHorn | Big Data SV 2018


 

>> Announcer: Live from San Jose, it's theCUBE, presenting Big Data Silicon Valley, brought to you by SiliconANGLE Media and its ecosystem partner. (upbeat electronic music) >> Welcome back to The Cube. I'm Lisa Martin with George Gilbert. We are live at our event, Big Data SV, in downtown San Jose down the street from the Strata Data Conference. We're joined by a new guest to theCUBE, Sastry Malladi, the CTO of FogHorn. Sastry, welcome to theCUBE. >> Thank you, thank you, Lisa. >> So FogHorn, cool name, what do you guys do, who are you? Tell us all that good stuff. >> Sure. We are a startup based in Silicon Valley right here in Mountain View. We started about three years ago, three plus years ago. We provide edge computing intelligence software for edge computing or fog computing. That's how our company name got started, FogHorn. Particularly for the industrial IoT sector. All of the industrial guys, whether it's transportation, manufacturing, oil and gas, smart cities, smart buildings, any of those different sectors, they use our software to predict failure conditions in real time, or do condition monitoring, or predictive maintenance, any of those use cases, and successfully save a lot of money. Obviously in the process, you know, we get paid for what we do. >> So Sastry... GE popularized this concept of IIoT and the analytics and, sort of, the new business outcomes you could build on it, like Power by the Hour instead of selling a jet engine. >> Sastry: That's right. But there's... Actually we keep on, and David Floyer did some pioneering research on how we're going to have to do a lot of analytics on the edge for latency and bandwidth. What's the FogHorn secret sauce that others would have difficulty with on the edge analytics? >> Okay, that's a great question. Before I directly answer the question, if you don't mind, I'll actually even describe why that's even important to do, right? So a lot of these industrial customers, if you look at, because we work with a lot of them, the amount of data that's produced from all of these different machines is terabytes to petabytes of data, it's real. And it's not just the traditional digital sensors but there are video, audio, acoustic sensors out there. The amount of data is humongous, right? It's not even practical to send all of that to a Cloud environment and do data processing, for many reasons. One is obviously the connectivity, bandwidth issues, and all of that. But the two most important things are cyber security. None of these customers actually want to connect these highly expensive machines to the internet. That's one. The second is the lack of real-time decision making. What they want to know, when there is a problem, they want to know before it's too late. We want to notify them there is a problem occurring so that they have a chance to go fix it and optimize their asset that is in question. Now, existing solutions do not work in this constrained environment. That's why FogHorn had to invent that solution. >> And tell us, actually, just to be specific, how constrained an environment you can operate in. >> We can run in about 100 to 150 megabytes of memory, single-core to dual-core of CPU, whether it's an ARM processor or an x86 Intel-based processor, almost literally no storage because we're a real-time processing engine. Optionally, you could have some storage if you wanted to store some of the results locally there, but that's the kind of environment we're talking about. 
Now, when I say 100 megabytes of memory, it's like a quarter of a Raspberry Pi, right? And even in that environment we have customers that run dozens of machine learning models, right? And we're not talking -- >> George: Like an ensemble. >> Like an anomaly detection, a regression, a random forest, or a clustering, or a gamut, some of those. Now, if we get into more deep learning models, like image processing and neural net and all of that, you obviously need a little bit more memory. But what we have shown, we could still run, one of our largest smart city buildings customers, an elevator company, runs in a Raspberry Pi on millions of elevators, right? Dozens of machine learning algorithms on top of that, right? So that's the kind of size we're talking about. >> Let me just follow up with one question on the other thing you said, with, besides we have to do the low latency locally. You said a lot of customers don't want to connect these brownfield, I guess, operations technology machines to the internet, and physically, I mean there was physical separation for security. So it's like security, Bill Joy used to say "Security by obscurity." Here it's security by -- >> Physical separation, absolutely. Tell me about it. I was actually coming from, if you don't mind, last week I was in Saudi Arabia. One of the oil and gas plants where we deployed our software, you have to go through five levels of security even to get there. It's a multibillion dollar plant, refining the gas and all of that. Completely offline, no connectivity to the internet, and we installed, in their existing small box, our software, connected to their live video cameras that are actually measuring the stuff, doing the processing and detecting the specific conditions that we're looking for. >> That's my question, which was if they want to be monitoring. So there's like one low level, really low hardware level, the sensor feeds. But you could actually have a richer feed, which is video and audio, but how much of that, then, are you doing the, sort of, inferencing locally? Or even retraining, and I assume that since it's not the OT device, and it's something that's looking at it, you might be more able to send it back up to the Cloud if you needed to do retraining? >> That's exactly right. So the way the model works is, particularly for image processing, because you need, it's a more complex process to train and create a model. You could create a model offline, like in a GPU box, an FPGA box and whatnot. Import and bring the model back into this small little device that's running in the plant, and now the live video data is coming in, the model is inferencing the specific thing. Now there are two ways to update and revise the model: incremental revision of the model, you could do that if you want, or you can send the results to a central location. Not internet, they do have local, in this example for example a PI DB, an OSIsoft PI DB, or some other local service out there, where you have an opportunity to gather the results from each of these different locations and then consolidate and retrain the model, put the model back again. >> Okay, the one part that I didn't follow completely is... If the model is running ultimately on the device, again and perhaps not even on a CPU, but a programmable logic controller. >> It could, even though a programmable controller also typically has some shape of CPU there as well. These days, most of the PLCs, programmable controllers, have either an ARM-based processor or an x86-based processor. 
We can run on either one of those too. >> So, okay, assume you've got the model deployed down there, for the, you know, local inferencing. Now, some retraining is going to go on in the Cloud, where you have, you're pulling in the richer perspective from many different devices. How does that model get back out to the device if it doesn't have the connectivity between the device and the Cloud? >> Right, so if there's strictly no connectivity, so what happens is once the model is regenerated or retrained, they put the model on a USB stick, it's low-tech. USB stick, bring it to the PLC device and upload the model. >> George: Oh, so this is sort of how we destroyed the Iranian centrifuges. >> That's exactly right, exactly right. But you know, in some other environments, even though there isn't connectivity to the Cloud environment, per se, the devices have the ability to connect to the Cloud. Optionally, they say, "Look, I'm the device "that's coming up, do you have an upgraded model for me?" Then it can pull the model. So in some of the environments it's super strict where there is absolutely no way to connect this device, you put it on a USB stick and bring the model back here. Other environments, the device can query the Cloud but the Cloud cannot connect to the device. This is a very popular model these days because, in other words imagine this, an elevator sitting in a building, somebody from the Cloud cannot reach the elevator, but an elevator can reach the Cloud when it wants to. >> George: Sort of like a jet engine, you don't want the Cloud to reach the jet engine. >> That's exactly right. The jet engine can reach the Cloud if it wants to, when it wants to, but the Cloud cannot reach the jet engine. That's how we can pull the model. >> So Sastry, as a CTO you meet with customers often. You mentioned you were in Saudi Arabia last week. I'd love to understand how you're leveraging and engaging with customers to really help drive the development of FogHorn, in terms of being differentiated in the market. What are those, kind of bi-directional, symbiotic customer relationships like? And how are they helping FogHorn? >> Right, that's actually a great question. We learn a lot from customers because we started a long time ago. We did an initial version of the product. As we began to talk to the customers, particularly, that's part of my job, where I go talk to many of these customers, they give us feedback. Well, my problem is really that I can't even do, I can't even give you connectivity to the Cloud, to upgrade the model. I can't even give you sample data. How do you do that modeling, right? And sometimes they say, "You know what, "We are not technical people, help us express the problem, "the outcome, give me tools "that help me express that outcome." So we created a bunch of what we call OT tools, operational technology tools. How we distinguish ourselves in this process, from the traditional Cloud-based vendors, the traditional data science and data analytics companies, is that they think in terms of computer scientists, computer programmers, and expressions. We think in terms of industrial operators, what can they express, what do they know? They don't really necessarily care about, when you tell them, "I've got an anomaly detection "data science machine learning algorithm", they're going to look at you like, "What are you talking about? "I don't understand what you're talking about", right? You need to tell them, "Look, this machine is failing." What are the conditions in which the machine is failing? 
How do you express that? And then we translate that requirement into the underlying models, underlying VEL expressions, VEL, our CEP expression language. So we learned a ton from user interface, capabilities, latency issues, connectivity issues, different protocols, a number of things that we learn from customers. >> So I'm curious with... More of the big data vendors are recognizing data in motion and data coming from devices. And some, like Hortonworks DataFlow, NiFi, has a MiNiFi component written in C++, really low resource footprint. But I assume that that's really just a transport. It's almost like a collector and that it doesn't have the analytics built in -- >> That's exactly right, NiFi has the transport, it has the real-time transport capability for sure. What it does not have is this notion of that CEP concept. How do you combine all of the streams, everything is time series data for us, right, from the devices. Whether it's coming from a device or whether it's coming from another static source out there. How do you express a pattern, a recognition pattern definition, across these streams? That's where our CEP comes into the picture. A lot of these seemingly similar software capabilities that people talk about don't quite exactly have either the streaming capability, or the CEP capability, or the real-time, or the low footprint. What we have is a combination of all of that. >> And you talked about how everything's time series to you. Is there a need to have, sort of, an equivalent time series database up in some central location? So that when you subset, when you determine what relevant subset of data to move up to the Cloud, or, you know, an on-prem central location, does it need to be the same database? >> No, it doesn't need to be the same database. It's optional. In fact, we do ship a local time series database at the edge itself. If you have a little bit of local storage, you can downsample, take the results, and store them locally, and many customers actually do that. Some others, because they have their existing environment, they have some Cloud storage, whether it's Microsoft, it doesn't matter what they use, we have connectors from our software to send these results into their existing environments. >> So, you had also said something interesting about your, sort of, tool set as being optimized for operations technology. So this is really important because back when we had the Net-Heads and the Bell-Heads, you know, it was a cultural clash and they had different technologies. >> Sastry: They sure did, yeah. >> Tell us more about how selling to operations, not just selling, but supporting operations technology is different from IT technology, and where does that boundary live? >> Right, so in a typical IT environment, right, you start with the boss who is the decision maker, you work with them and they approve the project and you go and execute that. In an industrial, in an OT environment, it doesn't quite work like that. Even if the boss says, "Go ahead and go do this project", if the operator on the floor doesn't understand what you're talking about, because that person is in charge of operating that machine, it doesn't quite work like that. So you need to work bottom up as well, to convince them that you are indeed actually solving their pain point. So the way we start is, rather than trying to tell them what capabilities we have as a product, or what we're trying to do, the first thing we ask is what is their pain point? "What's your problem? 
What is the problem you're trying to solve?" Some customers say, "Well, I've got yield issues, a lot of scrap. Help me reduce my scrap. Help me operate my equipment better. Help me predict these failure conditions before it's too late." That's how the problem starts. Then we start asking them, "Okay, what kind of data do you have, what kind of sensors do you have? Typically, do you have information about the circumstances under which you have seen failures versus not seen failures?" So in the process of that inquiry we begin to understand how they might actually use our software, and then we tell them, "Well, here, use our software to predict that." And, sorry, I want 30 more seconds on that. The other thing is that, typically, in an IT environment, because I came from that too, I've been in this position for 30 plus years, IT, OT and all of that, we don't right away talk about CEP, or expressions, or analytics. We talk about: look, you have this bunch of sensors, we have OT tools here, drag and drop your sensors, express the outcome that you're trying to look for, what is the outcome you're trying to look for, and then we derive behind the scenes what it means. Is it analytics, is it machine learning, is it something else, and what is it? So that's kind of how we approach the problem. Of course, sometimes you do occasionally run into very technical people. With those people we can right away talk about, "Hey, you need these analytics, you need to use machine learning, you need to use expressions" and all of that. That's kind of how we operate. >> One thing, you know, that's becoming clearer is, I think, this widespread recognition that there's data-intensive and low-latency work to be done near the edge. But what goes on in the Cloud is actually closer to simulation and high-performance compute, if you want to optimize a model. So not just train it, but maybe have something that's prescriptive, that says, you know, here's the actionable information. As more of your data is video and audio, how do you turn that into something where you can simulate a model that tells you the optimal answer? >> Right, so this is actually a good question. From our experience, there are models that require a lot of data, for example, video and audio. There are some other models that do not require a lot of data for training. I'll give you an example of the customer use cases that we have. There's one customer in a manufacturing domain, where they'd been seeing a lot of finished-goods failures, a lot of scrap, and the problem then was, "Hey, predict the failures, reduce my scrap, save the money", right? Because they'd been seeing a lot of failures every single day, we did not need a lot of data to train and create a model for that. In fact, we just needed one hour's worth of data. We created a model, deployed it, and completely eliminated their scrap. There are other kinds of models, video models for example, where we can't do that at the edge, so we're required to take, for example, some video files or simulated audio files to an offline environment, create the model, and see whether it's accurately predicting based on the real-time video coming in or not. So it's a mix of what we're seeing between those two.
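Sastry's point about expressing an outcome ("this machine is failing under these conditions") and letting the engine evaluate it continuously across interleaved time-series streams is the heart of the CEP idea he describes. The sketch below is not FogHorn's Vel expression language; it is a minimal, generic Python illustration of the pattern, with hypothetical sensor names, thresholds, and window sizes standing in for whatever an operator would actually specify through the OT tools:

from collections import deque
from dataclasses import dataclass
from statistics import mean

@dataclass
class Reading:
    ts: float      # timestamp in seconds
    sensor: str    # e.g. "vibration" or "temperature" (hypothetical names)
    value: float

class WindowedStream:
    """Keeps the last window_s seconds of readings for one sensor stream."""
    def __init__(self, window_s: float):
        self.window_s = window_s
        self.buf = deque()

    def push(self, r: Reading) -> None:
        self.buf.append(r)
        while self.buf and r.ts - self.buf[0].ts > self.window_s:
            self.buf.popleft()

    def avg(self) -> float:
        return mean(x.value for x in self.buf) if self.buf else 0.0

    def trend(self) -> float:
        # crude trend: last value minus first value inside the window
        return self.buf[-1].value - self.buf[0].value if len(self.buf) > 1 else 0.0

def failure_condition(vib: WindowedStream, temp: WindowedStream) -> bool:
    # The kind of rule an operator might state in OT terms:
    # "vibration averaging above 7.0 while temperature is climbing".
    return vib.avg() > 7.0 and temp.trend() > 2.0

# Interleaved time-series readings, roughly as they might arrive off the machine.
vib, temp = WindowedStream(30.0), WindowedStream(30.0)
for r in [Reading(0, "vibration", 6.5), Reading(1, "temperature", 70.0),
          Reading(10, "vibration", 7.4), Reading(11, "temperature", 71.5),
          Reading(20, "vibration", 7.9), Reading(21, "temperature", 73.0)]:
    (vib if r.sensor == "vibration" else temp).push(r)
    if failure_condition(vib, temp):
        print(f"t={r.ts}s: failure condition detected, alert the operator")

In a real deployment the rule would be authored through those OT tools and compiled down to the engine's expression language rather than hand-coded like this, and the matched results, not the raw streams, are what would be down-sampled and forwarded to a local time series store or the Cloud.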
>> Well Sastry, thank you so much for stopping by theCUBE and sharing what it is that you guys at FogHorn are doing, what you're hearing from customers, how you're working together with them to solve some of these pretty significant challenges. >> Absolutely, it's been a pleasure. Hopefully this was helpful, and yeah. >> Definitely, very educational. We want to thank you for watching theCUBE, I'm Lisa Martin with George Gilbert. We are live at our event, Big Data SV in downtown San Jose. Come stop by Forager Tasting Room, hang out with us, learn as much as we are about all the layers of big data digital transformation and the opportunities. Stick around, we will be back after a short break. (upbeat electronic music)
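Earlier in the conversation, Sastry contrasts the USB-stick transfer used at fully air-gapped sites with environments where the device may reach the Cloud but the Cloud can never reach the device (the elevator and jet-engine examples). Below is a minimal sketch of that device-initiated model pull, using only the Python standard library; the manifest URL, file paths, and manifest format are hypothetical, not FogHorn's actual API:

import json
import time
import urllib.request
from pathlib import Path

# Hypothetical endpoint and local paths; a real deployment's URLs,
# authentication, and model format would differ.
MODEL_DIR = Path("/var/lib/edge/models")
MANIFEST_URL = "https://models.example.com/turbine-42/manifest.json"

def current_version() -> str:
    vfile = MODEL_DIR / "VERSION"
    return vfile.read_text().strip() if vfile.exists() else "none"

def check_and_pull() -> None:
    """Device-initiated update: the device reaches out to the Cloud;
    the Cloud never opens a connection to the device."""
    with urllib.request.urlopen(MANIFEST_URL, timeout=10) as resp:
        manifest = json.load(resp)           # e.g. {"version": "7", "url": "..."}
    if manifest["version"] == current_version():
        return                               # already up to date
    with urllib.request.urlopen(manifest["url"], timeout=60) as resp:
        model_bytes = resp.read()
    MODEL_DIR.mkdir(parents=True, exist_ok=True)
    (MODEL_DIR / "model.bin").write_bytes(model_bytes)
    (MODEL_DIR / "VERSION").write_text(manifest["version"])

if __name__ == "__main__":
    while True:                              # poll whenever the device chooses to
        try:
            check_and_pull()
        except OSError:
            pass                             # no connectivity right now; try again later
        time.sleep(3600)

The same loop degrades gracefully to the strict air-gapped case: if the poll never succeeds, the device simply keeps running whatever model was last loaded, by USB stick or otherwise.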

Published Date : Mar 8 2018
