David C King, FogHorn Systems | CUBEConversation, November 2018
(uplifting orchestral music) >> Hey, welcome back, everybody. Jeff Frick here with theCUBE. We're at the Palo Alto studios, having theCUBE Conversation, a little break in the action of the conference season before things heat up, before we kind of come to the close of 2018. It's been quite a year. But it's nice to be back in the studio. Things are a little bit less crazy, and we're excited to talk about one of the really hot topics right now, which is edge computing, fog computing, cloud computing. What do all these things mean, how do they all intersect, and we've got with us today David King. He's the CEO of FogHorn Systems. David, first off, welcome. >> Thank you, Jeff. >> So, FogHorn Systems, I guess by the fog, you guys are all about the fog, and for those that don't know, fog is kind of this intersection between cloud, and on prem, and... So first off, give us a little bit of the background of the company and then let's jump into what this fog thing is all about. >> Sure, actually, it all dovetails together. So yeah, you're right, FogHorn, the name itself, came from Cisco's invented term, called fog computing, from almost a decade ago, and it connoted this idea of computing at the edge, but didn't really have a lot of definition early on. And so, FogHorn was started actually by a Palo Alto incubator, just nearby here, that had the idea that hey, we've got to put some real meaning and some real meat on the bones here, with fog computing. And what we think FogHorn has become over the last three and a half years, since we took it out of the incubator, since I joined, is something that puts real purpose, meaning, and value in that term. And so, it's more than just edge computing. Edge computing is a related term. In the industrial world, people would say, hey, I've had edge computing for 30, 40, 50 years with my production line control and also my distributed control systems. I've got hard-wired compute. I run, they call them, industrial PCs in the factory. That's edge compute. The IT folks came along and said, no, no, no, fog compute is a more advanced form of it. Well, the real purpose of fog computing and edge computing, in our view, in the modern world, is to apply what has traditionally been thought of as cloud computing functions, big, big data, but running in an industrial environment, or running on a machine. And so, we call it really big data operating in the world's smallest footprint, okay, and the real point of this for industrial customers, which is our primary focus, industrial IoT, is to deliver as much analytics, machine learning, deep learning, AI capability on live streaming sensor data, okay, and what that means is rather than persisting a lot of data either on prem, and then sending it to the cloud, or trying to stream all of this to the cloud to make sense of terabytes or petabytes a day, per machine sometimes, right, think about a jet engine, a petabyte every flight. You want to do the compute as close to the source as possible, and if possible, on the live streaming data, not after you've persisted it on a big storage system. So that's the idea. >> So you touch on all kinds of stuff there. So we'll break it down. >> Unpack it, yeah. >> Unpack it. So first off, just kind of the OT/IT thing, and I think that's really important, and we talked before turning the cameras on about Dr. Tom from HPE, he loves to make a big symbolic handshake of the operations technology, >> One of our partners.
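To make that "compute on the live stream, at the source" idea concrete, here is a minimal Python sketch of the pattern being described. This illustrates the general technique, not FogHorn's actual engine or API; the sensor feed and the threshold rule are invented for the example.

```python
# Minimal edge stream-processing sketch: analyze each sensor reading as
# it arrives, keep only a small rolling window in memory, and emit a
# derived insight (an anomaly event) instead of persisting raw data.
import random
import statistics
from collections import deque

def sensor_stream(n=5000):
    """Stand-in for a live machine sensor, e.g., bearing temperature."""
    for _ in range(n):
        yield 70.0 + random.gauss(0, 1.5)

window = deque(maxlen=100)  # rolling window; raw samples never hit disk

for reading in sensor_stream():
    window.append(reading)
    if len(window) == window.maxlen:
        mean = statistics.fmean(window)
        stdev = statistics.stdev(window)
        # Only this derived event would leave the edge, not the raw stream.
        if abs(reading - mean) > 3 * stdev:
            print(f"ALERT: {reading:.2f} deviates from rolling mean {mean:.2f}")
```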
>> Right, and IT, and the marriage of these two things, where before, as you said, the OT guys, the guys that have been running factories, you know, they've been doing this for a long time, and now suddenly, the IT folks are butting in and want to get access to that data to provide more control. So, you know, as you see the marriage of those two things coming together, what are the biggest points of friction, and really, what's the biggest opportunity? >> Great set of questions. So, quite right, the OT folks are inherently suspicious of IT, right? I mean, if you don't know the history, 40-plus years ago, there was a fork in the road, where in factory operations, were they going to embrace things like ethernet, the internet, connected systems? In fact, they purposely air gapped and islanded those systems, 'cause it was all about machine control, real-time, for safety, productivity, and uptime of the machine. You can't use standard ethernet, it has to be industrial ethernet, right? It has to be time-bound and deterministic. It can't be a retry kind of a system, right? So, a different MAC layer, for a reason, for example. What did the physical wiring look like? It's also different cabling, because you can't have cuts, jumps in the cable, right? So it's a different environment entirely that OT grew up in, and so, FogHorn is trying to really bring the value of what people are delivering for AI, essentially, into that environment in a way that's non-threatening to, supplemental to, and adds value in the OT world. So Dr. Tom is right, this idea of bringing IT and OT together is inherently challenging, because these were kind of fork-in-the-road, islanded networks, if you will, different systems, different nomenclature, different protocols, and so, there's a real education curve that IT companies are going through, and the idea of taking all this OT data that's already been produced in tremendous volumes, before you even add new kinds of sensing, and sending it across a LAN it's never talked to before, then across a WAN to go to a cloud, just to get some insight, doesn't make any sense, right? So you want to leverage the cloud, you want to leverage data centers, you want to leverage the LAN, you want to leverage 5G, you want to leverage all the new IT technologies, but you have to do it in a way that makes sense and adds value in the OT context. >> I'm just curious, you talked about the air gapping, the two systems, which means they are not connected, right? >> No, they're connected to each other, they're connected among themselves, in the industrial-- >> Right, right, but before, the OT system was air gapped from the IT system, so thinking about security and those types of threats, now, if those things are connected, that security measure has gone away, so what is the excitement, adoption scare when now, suddenly, these things that were separate, especially in the age of breaches that we know happen all the time, are brought together? >> Well, in fact, there have been cyber breaches in the OT context. Think about Stuxnet, think about things that have happened, think about the utilities back east that were found to have malware implanted in them. And so, this idea of industrial IoT is very exciting, the ability to get real-time, kind of game-changing insights about your production. A huge amount of economic activity in the world could be dramatically improved.
You can talk about trillions of dollars of value, which McKinsey, and BCG, and Bain talk about, right, by bringing kind of AI, ML into the plant environment. But the inherent problem is that by connecting the systems, you introduce security problems. You're talking about a huge amount of cost to move this data around, persist it, then add value, and it's not real-time, right? So, it's not that cloud is not relevant, it's not that it's not used, it's that you want to do the compute where it makes sense, and for industrial, the more industrialized the environment, the more high-frequency, high-volume data, the closer to the system that you can do the compute, the better, and again, it's multi-layer compute. You probably have something on the machine, something in the plant, and something in the cloud, right? But rather than send raw OT data to the cloud, you're going to send processed, intelligent metadata insights that have already been derived at the edge, and update what they call the fleet-wide digital twin, right? The digital twin for that whole fleet of assets should sit in the cloud, but the digital twin of the specific asset should probably be on the asset. >> So let's break that down a little bit. There's so much good stuff here. So, we talked about OT/IT and that marriage. Next, I just want to touch on cloud, 'cause a lot of people know cloud, it's very hot right now, and the ultimate promise of cloud, right, is you have infinite capacity, >> Right, infinite compute. >> Available on demand, and you have infinite compute, and hopefully you have some big fat pipes to get your stuff in and out. But the OT challenge is, and as you said, the device challenge is very, very different. They've got proprietary operating systems, and they've been running for a very, very long time. As you said, they put out boatloads, and boatloads, and boatloads of data that was never really designed to feed a machine learning algorithm, or an artificial intelligence algorithm, when these things were designed. It wasn't really part of the equation. And we talk all the time about, you know, do you move the compute to the data, or do you move the data to the compute, and really, what you're talking about in this fog computing world is kind of a hybrid, if you will, of trying to figure out which data you want to process locally, and then which data you have the time, relevance, and other factors to just go ahead and pump upstream. >> Right, that's a great way to describe it. Actually, we're trying to move as much of the compute as possible to the data. That's really the point. That's why we say fog computing is a nebulous term about edge compute. It doesn't have any value until you actually decide what you're trying to do with it, and what we're trying to do is to take as much of the harder compute challenges, like analytics, machine learning, deep learning, AI, and bring it down to the source, as close to the source as you can, because you can essentially streamline or make more efficient every layer of the stack. Your models will get much better, right? You might have built them in the cloud initially, think about a deep learning model, but it may only be 60, 70% accurate. How do you do the improvement of the model to get it closer to perfect? I can't go send all the data up to keep trying to improve it. Well, typically, what happens is I downsample the data, I average it, and I send it up, and I don't see any changes in the averaged data. Guess what?
What we should do is inferencing all the time, on all the data: run it in our stack, and then send the metadata up, and then have the cloud look across all the assets of a similar type, and say, oh, the global fleet-wide model needs to be updated, and then push it down. So, with Google, just about a month ago, in Barcelona, at the IoT show, what we demonstrated was the world's first instance of AI for industrial, which is closed-loop machine learning. We were taking a model, a TensorFlow model, trained in the cloud in the data center, bringing it into our stack, running 100% inferencing on all the live data, pushing the insights back up into Google Cloud, and then automatically updating the model without a human or data scientist having to look at it. Because essentially, it's ML on ML. And that to us, ML on ML, is the foundation of AI for industrial. >> I just love that. Something that comes up all the time, right? We used to make decisions based on a sampling of historical data, after the fact. >> That's right, that's how we've all been doing it. >> Now, right, right now, the promise of streaming is you can make it based on all the data, >> All the time. >> All the time, in real time. >> Permanently. >> This is a very different thing. So, but as you talked about, you know, running some complex models, and running ML, and retraining these things. You know, when you think of edge, you think of some little hockey puck that's out on the edge of a field, with limited power, limited connectivity, so you know, what's the reality of how much power you have at some of these more remote edges, or, we always talk about the field of turbines, oil platforms, and how much power do you need, and how much compute, before it actually starts to be meaningful in terms of the platform for the software? >> Right, there's definitely use cases, like you think about the smart meters, right, in the home. The older generation of those meters may have had very limited compute, right, like, you know, talking about a single megabyte of memory maybe, or less, right, kilobytes of memory. Very hard to run a stack on that kind of footprint. The latest generation of smart meters have about 250 megabytes of memory. A Raspberry Pi today is anywhere from half a gig to a gig of memory, and we're fundamentally memory-bound, and obviously CPU-bound if it's trying to do really fast compute, like vibration analysis, or acoustic, or video. But if you're just trying to take digital sensing data, like temperature, pressure, velocity, torque, we can take humidity, we can take all of that, believe it or not, run literally dozens and dozens of models, even train the models, in something as small as a Raspberry Pi, or a low-end x86. So our stack can run on any hardware, we're completely OS-independent. It's a full software layer. But the whole stack is about 100 megabytes of memory, with all the components, including Docker containerization, right, which compares to about 10 gigs for running a stream-processing stack like Spark in the cloud. So it's that order of magnitude of footprint reduction and speed-of-execution improvement. So as I said, world's smallest, fastest compute engine. You need to do that if you're going to talk about, like, a wind turbine, it's generating data, right, every millisecond, right. So you have high-frequency data, like turbine pitch, and you have other contextual data you're trying to bring in, like wind conditions, reference information about how the turbine is supposed to operate.
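As a rough sketch of what fusing that high-frequency signal with slower contextual data can look like, consider the following. The names, rates, and operating-envelope rule are invented for illustration; this is not FogHorn's API.

```python
# Fuse high-rate turbine pitch readings with slow-changing wind context
# and check each sample against a reference operating envelope on the fly.
import itertools
import random

def pitch_stream():
    """Stand-in for millisecond-rate turbine pitch readings (degrees)."""
    while True:
        yield random.uniform(-2.0, 2.0)

def read_wind_speed():
    """Stand-in for slower contextual data, e.g., wind speed in m/s."""
    return random.uniform(4.0, 12.0)

context = {"wind_speed": read_wind_speed()}

for i, pitch in enumerate(itertools.islice(pitch_stream(), 5000)):
    if i % 1000 == 0:
        context["wind_speed"] = read_wind_speed()  # refresh context slowly
    # Made-up reference rule standing in for the turbine's operating spec.
    envelope = 0.5 + 0.1 * context["wind_speed"]
    if abs(pitch) > envelope:
        print(f"sample {i}: pitch {pitch:.2f} outside envelope "
              f"at wind {context['wind_speed']:.1f} m/s")
```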
You're bringing in a torrential amount of data to do this computation on the fly. And so, the challenge for a lot of the companies that have really started to move into the space, the cloud companies, like our partners, Google, and Amazon, and Microsoft, is that they have great cloud capabilities for AI, ML. They're trying to move down to the edge by just transporting the whole stack there. So in a plant environment, okay, that might work if you have massive data centers that can run it. But I still have to stream all the data from all of my assets to that central point. What we're trying to do is come at it the opposite way, which is, by having the world's smallest, fastest engine, we can run in very limited compute on the asset, or near the asset, or you can run us in big compute and we can take on lots and lots of use cases and models simultaneously. >> I'm just curious, on the small compute case, and again, you want all the data-- >> You want to inference on all the data, right? >> Does it eventually go back, or is there a lot of cases where you can get the information you need off the stream and you don't necessarily have to save or send that upstream? >> So fundamentally today, in the OT world, the PLC, the programmable logic controller, has simple KPIs: if temperature goes to X or pressure goes to Y, do this. Beyond those simple KPIs, if nothing is executed, the data gets dumped into a local protocol server, and then about every 30, 60, 90 days, it gets written over. Nobody ever looks at it, right? That's why I say 99% of the brownfield data in OT has never really been-- >> Almost like a security-- >> Has never been mined for insight. Right, it just gets-- >> It runs, and runs, and runs, and every so often-- >> Exactly, and so, if you're doing inferencing, and doing real-time decision making, real-time actuation, with our stack, what you would then persist is metadata insights, right? Here is an event, or here is an outcome, and oh, by the way, if you're doing deep learning or machine learning, and you're seeing deviation or drift from the model's prediction, you probably want to keep that, and some of the raw data packets from that moment in time, and send that to the cloud or data center to say, oh, our fleet-wide model may not be accurate, or may be drifting, right? And so, what you want to do is, again, different horses for different courses. Use our stack to do the lion's share of the heavy-duty real-time compute, and produce metadata that you can send to either a data center or a cloud environment for further learning. >> Right, so your piece is really the gathering and the ML, and then if it needs to go back out for more heavy lifting, you'll send it back up, or do you have the cloud application as well that connects if you need? >> Yeah, so we've built connectors to, you know, Google Cloud Platform, Google IoT Core, to AWS S3, to Microsoft Azure, virtually anything: Kafka, Hadoop. We can send the data wherever you want, either on plant, right back into the existing control systems, or we can send it to OSIsoft PI, which is a great time-series database that a lot of process industries use. You could of course send it to any public cloud or a private-cloud Hadoop data lake. You can send the data wherever you want. Now, we also have, one of our components is a time-series database.
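As one hedged example of what that "send processed metadata, not raw data" pattern might look like on the wire, here is a sketch that publishes a derived insight to a Kafka topic, one of the sinks mentioned above. The broker address, topic, and payload fields are all hypothetical, and this uses the kafka-python package rather than any FogHorn-specific connector.

```python
# Publish a compact metadata insight upstream instead of raw sensor data.
import json
import time
from kafka import KafkaProducer  # pip install kafka-python

insight = {
    "asset_id": "turbine-42",     # hypothetical asset name
    "ts": time.time(),
    "event": "vibration_anomaly",
    "window_mean": 70.4,          # summary statistics, not raw samples
    "window_stdev": 1.6,
    "model_score": 0.93,
}

producer = KafkaProducer(
    bootstrap_servers="broker.example.com:9092",  # placeholder broker
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("plant7.insights", insight)
producer.flush()
```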
You can also persist it in memory in our stack, just for buffering, or if you have high-value data where you want to take a measurement, a value from a previous calculation, and bring it into another calculation later, right? So, it's a very flexible system. >> Yeah, we were at OSIsoft PI World earlier this year. Some fascinating stories that came out of-- >> 30-year company. >> The building maintenance, and all kinds of stuff. So I'm just curious, some of the easy-to-understand applications that you've seen in the field, and maybe some of the ones that were a surprise on the OT side. I mean, obviously, preventative maintenance is always towards the top of the list. >> Yeah, I call it the layer cake, right? Especially when you get to remote assets that are either not monitored or lightly monitored. They call it drive-by monitoring. Somebody shows up and listens or looks at a valve or gauge and leaves. So the first layer is condition-based monitoring, right? That is actually a big breakthrough for some, you know, think about fracking sites, or remote oil fields, or mining sites. The second layer is predictive maintenance, where the next generation is kind of predictive, prescriptive, even preventive maintenance, right? You're making predictions or you're helping to avoid downtime. The third layer, which is really where our stack is sort of unique today in delivering, is asset performance optimization. How do I increase throughput, how do I reduce scrap, how do I improve worker safety, how do I get better processing of the data that my PLC can't give me, so I can actually improve the performance of the machine? Now, ultimately, what we're finding is a couple of things. One is, you can look at individual asset optimization, process optimization, but there's another layer. So often, we're deployed at two layers on premise. There's also the plant-wide optimization. We talked about wind farms before, off camera. So you've got the wind turbine. You can do a lot of things about turbine health, the blade pitch and condition of the blade, you can do things on the battery, all the systems on the turbine, but you also need a stack running, like ours, at the concentration point where 200-plus turbines come together, 'cause for optimization of the whole farm, every turbine affects the other turbines, so a single turbine can't tell you the speed, rotation, things that need to change if you want to adjust the speed of one turbine versus the one next to it. So there's also kind of a plant-wide optimization. Talking about autonomous driving, there are going to be five layers of compute, right? You're going to have what I call the ECU level, the individual sub-system in the car, the engine, how it's performing. You're going to have the gateway in the car to talk about things that are happening across systems in the car. You're going to have the peer-to-peer connection over 5G to talk about optimization right between vehicles. You're going to have the base station algorithms looking at a microcell or macrocell within a geographic area, and of course, you'll have the ultimate cloud, 'cause you want to have the data on all the assets, right, but you don't want to send all that data to the cloud, you want to send the right metadata to the cloud. >> That's why there are big trunks full of compute now.
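A rough sketch of that layered pattern, with each asset keeping its own full-resolution state and only compact rollups moving upstream, might look like the following. The class names and fields are invented for illustration.

```python
# Layered compute sketch: a local asset twin ingests every sample, while
# only periodic summaries update the fleet-wide view held "in the cloud".
import random
import statistics

class AssetTwin:
    """Local digital twin: full-resolution recent state, kept on the asset."""
    def __init__(self, asset_id):
        self.asset_id = asset_id
        self.samples = []

    def ingest(self, value):
        self.samples.append(value)

    def summarize(self):
        """Compact rollup: the only thing sent upstream."""
        summary = {
            "asset_id": self.asset_id,
            "n": len(self.samples),
            "mean": round(statistics.fmean(self.samples), 2),
            "max": round(max(self.samples), 2),
        }
        self.samples.clear()
        return summary

fleet_view = {}  # stands in for the fleet-wide twin in the cloud

twin = AssetTwin("turbine-7")
for _ in range(1000):                         # high-rate local ingestion
    twin.ingest(random.gauss(70.0, 1.5))
fleet_view[twin.asset_id] = twin.summarize()  # low-rate upstream update
print(fleet_view)
```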
>> By the way, you mentioned one thing that I should really touch on, which is, we've talked a lot about what I call traditional brownfield automation and control type analytics and machine learning, and that's kind of where we started, in discrete manufacturing, a few years ago. What we found is that in that domain, and in oil and gas, and in mining, and in agriculture, transportation, in all those places, the most exciting new development this year is the movement towards video, 3D imaging, and audio sensing, 'cause those sensors are now becoming very economical, and people have never thought about, well, if I put a camera and apply it to a certain application, what can I learn, what can I do that I never did before? And often, they even have cameras today, and they haven't made use of any of the data. So there's a very large customer of ours who has video inspection data for literally every product they produce, every day, around the world, and this is in hundreds of plants. And that data never gets looked at, right, other than for training operators: hey, you missed the defects that day. The system, as you said, just writes over that data after 30 days. Well, guess what, you can apply deep learning TensorFlow algorithms to build a convolutional neural network model and essentially do the human vision task. Rather than an operator staring at a camera, or trying to look at training tapes 30 days later, I'm doing inferencing on the video image on the fly. >> So, do your systems close the loop back to the control systems now, or is it more of a tuning mechanism for someone to go back and do it later? >> Great question, I just got asked that this morning by a large oil and gas supermajor that Intel just introduced us to. The short answer is, our stack can absolutely go right back into the control loop. In fact, I should mention our investors and partners: our Series A was GE, Bosch, Yokogawa, Dell EMC, and our Series B, done a year ago, was Intel, Saudi Aramco, and Honeywell. So we have one foot in tech, one foot in industrial, and really, what we're trying to do, as you said, is bring IT and OT together. The short answer is, you can do that, but typically in the industrial environment, there's a conservatism about, hey, I don't want to touch, you know, affect the machine until I've proven it out. So initially, people tend to start with alerting, so we send an automatic alert back into the control system to say, hey, the machine needs to be re-tuned. Very quickly, though, certainly for things that are not so time-sensitive, they will just have us act. Now, Yokogawa, one of our investors, as I pointed out, is actually putting us in PLCs. So rather than sending the data off the PLC to another gateway running our stack, like an x86 or ARM gateway, those PLCs now have Raspberry Pi-plus capabilities. A lot of them are-- >> To do what types of mechanisms? >> Well, right now, they're doing the I/O and the control of the machine, but they have enough compute now that you can run us in a separate module, like a little brain sitting right next to the controller, and then do the AI on the fly, and there, you actually don't even need to send the data off the PLC. We just re-program the actuator. So that's where it's heading. Eventually, and it could take years before people get comfortable doing this automatically, what you'll see is that what AI represents in industrial is the self-healing machine, the self-improving process, and this is where it starts.
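The closed loop he describes, in its cautious alert-first form, reduces to something like the sketch below. The model, the drift threshold, and the write_to_control hook are all hypothetical stand-ins; a real deployment would go through the plant's actual control interfaces.

```python
# Alert-first closed loop: compare a model's prediction against the actual
# reading at the edge, and push an alert toward the control system on drift.
import random

def predict_temperature(load):
    """Stand-in for a trained model's prediction."""
    return 60.0 + 0.5 * load

def write_to_control(tag, value):
    """Hypothetical hook into the PLC/SCADA layer; alert-only at first."""
    print(f"CONTROL WRITE -> {tag} = {value}")

DRIFT_LIMIT = 3.0  # illustrative threshold, in degrees

for step in range(100):
    load = random.uniform(20, 80)
    # Simulated machine: drifts away from the model late in the run.
    actual = 60.0 + 0.5 * load + random.gauss(0, 1.0) + (5.0 if step > 80 else 0.0)
    predicted = predict_temperature(load)
    if abs(actual - predicted) > DRIFT_LIMIT:
        # Start with an alarm; a mature deployment might re-tune here.
        write_to_control("alarm/model_drift", round(actual - predicted, 2))
```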
>> Well, the other thing I think is so interesting is, what are you optimizing for? And there is no right answer, right? It could be you're optimizing for, like you said, a machine. You could be optimizing for the field. You could be optimizing for maintenance, but if there is a spike in pricing, you may say, eh, we're not optimizing for maintenance now, we're actually optimizing for output, because we have this temporary condition and it's worth the trade-off. So I mean, there's so many ways that you can skin the cat when you have a lot more information and a lot more data. >> No, that's right, and I think what we typically like to do is start out with, what's the business value, right? We don't want to go do a science project. Oh, I can make that machine work 50% better, but if it doesn't make any difference to your business operations, so what? So we always start the investigation with: what is a high-value business problem where you have sufficient data, and where applying this kind of AI-at-the-edge concept will actually make a difference? And that's the kind of proof of concept we like to start with. >> So again, just to come full circle, what's the craziest thing an OT guy said? Oh my goodness, you IT guys actually brought some value here that I didn't expect? >> Well, I touched on video, right, so without going into the whole details of the story, one of our big investors, a very large oil and gas company, we said, look, you guys have done some great work with what I call software-defined SCADA, which is a term, SCADA being the network environment for OT, right, and so the PLCs and DCSes connect over these SCADA networks. That's the control automation world. And this investor said, look, you can come in, you've already shown us, that's why they invested, that you've gone into brownfield SCADA environments, done deep mining of the existing data, and shown value by reducing scrap and improving output, improving worker safety, all the great business outcomes for industrial. If you come into our operation, our plant people are going to say, no, you're not touching my PLC. You're not touching my SCADA network. So come in and do something that's non-invasive to that world, and so that's where we actually got started with video, about 18 months ago. They said, hey, we've got all these video cameras, and we're not doing anything with them. We just have human operators writing down, oh, I had a bad event. It's a totally non-automated system. So we went in and did a video use case around what we call flare monitoring. You know, hundreds of flare stacks burning off oil and gas in a production plant. A 24/7 team of operators just staring at them, writing down, oh, I think I had a bad flare. I mean, it's a very interesting old-world process. So we automated that and gave them essentially an AI dashboard. Oh, I've got a permanent record of exactly how high the flare was, how smoky it was, what the angle was, and then you can fuse that data back with plant data, what caused that, and also OSIsoft data, what was the gas composition? Was it in fact a safety violation? Was it in fact an environmental violation? So, by starting with video, and doing that use case, we've now got dozens of use cases all around video. Oh, I could put a camera on this. I could put a camera on a rig. I could put a camera down the hole. I could put a camera on the pipeline, on a drone. There's just a million places that video can show up, or audio sensing, right, acoustic.
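For the flare-monitoring use case just described, the shape of the video loop could be sketched roughly as follows. The untrained toy CNN stands in for a real trained model, and the frame source and threshold are invented; this only illustrates frame-by-frame inferencing, not FogHorn's actual pipeline.

```python
# Frame-by-frame flare scoring sketch: run each camera frame through a CNN
# and log an event record instead of having operators watch the stack 24/7.
import numpy as np
import tensorflow as tf

# Untrained toy CNN standing in for a trained flare/smoke classifier.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

def frames(n=10):
    """Stand-in for a live camera feed."""
    for _ in range(n):
        yield np.random.rand(224, 224, 3).astype("float32")

for i, frame in enumerate(frames()):
    score = float(model.predict(frame[np.newaxis, ...], verbose=0)[0][0])
    if score > 0.8:  # illustrative "smoky flare" threshold
        print({"frame": i, "event": "abnormal_flare", "score": round(score, 3)})
```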
So, video is great if you can see the event, like I'm flying over the pipe, I can see corrosion, right, but sometimes, like you know, a burner or an oven, I can't look inside the oven with a camera. There's no camera that could survive 600 degrees. So what do you do? Well, you can do something like either vibration or acoustic. Like, inside the pipe, you've got to go with sound. Outside the pipe, you go video. But these are the kinds of things where, traditionally, how did people inspect pipe? Drive-by. >> Yes, fascinating story. And again, I think at the end of the day, it's that you can make real decisions based on all the data in real time, versus some of the data after the fact. All right, well, great conversation, and we look forward to watching the continued success of FogHorn. >> Thank you very much. >> All right. >> Appreciate it. >> He's David King, I'm Jeff Frick, you're watching theCUBE. We're having a CUBE Conversation at our Palo Alto studio. Thanks for watching, we'll see you next time. (uplifting symphonic music)