Joe Selle & Tom Ward, IBM | IBM CDO Fall Summit 2018
>> Live from Boston, it's theCUBE! Covering IBM Chief Data Officer Summit, brought to you by IBM.

>> Welcome back everyone to the IBM CDO Summit and theCUBE's live coverage, I'm your host Rebecca Knight, along with my co-host Paul Gillin. We have Joe Selle joining us. He is the Cognitive Solution Lead at IBM. And Thomas Ward, Supply Chain Cloud Strategist at IBM. Thank you so much for coming on the show!

>> Thank you!

>> Our pleasure.

>> Pleasure to be here.

>> So, Tom, I want to start with you. You are the author of Risk Insights. Tell our viewers a little bit about Risk Insights.

>> So Risk Insights is an AI application. We've been working on it for a couple of years. What's really neat about it, it's the coolest project I've ever worked on. It gets a massive amount of data from The Weather Company, so we're one of the biggest consumers of data from The Weather Company. We take that and we visualize who's at risk from things like hurricanes and earthquakes, whether that's IBM sites and locations or suppliers, and we notify them in advance when those events are going to impact them. It ties to both our data center operations activity and our supply chain operations.

>> So you reduce your supply chain risk by being able to proactively detect potential outages.

>> Yeah, exactly. So we know in some cases two or three days in advance who's in harm's way, and we're already looking at it and trying to mitigate those risks if we need to, if it's going to be a real serious event. So Hurricane Michael, Hurricane Florence, we were right on top of them and said, we've got to worry about these suppliers, these data center locations, and we were already working on that in advance.

>> That's very cool. So how are clients and customers reacting? As you said, it's the coolest project you've ever worked on.

>> Yeah. So right now we use it within IBM, right? And we use it to monitor some of IBM's client locations. And looking ahead, there was something called the Call for Code that happened recently within IBM, and this project was a semifinalist for that. So we're now working with some non-profit groups to see how they could also avail of it, looking at things like hospitals and airports and those types of things as well.

>> What other AI projects are you running?

>> Go ahead.

>> I can answer that one. I just wanted to say one thing about Risk Insights which didn't come out from Tom's description, which is that one of the other really neat things about it is that it provides smart alerts out to supply chain planners. An alert will go to a supply chain planner if there's an intersection of a supplier of IBM and the path of a hurricane. If the hurricane is vectored to go over that supplier, the supply chain planner that is responsible for those parts will get some forewarning to either start to look for another supplier or make some contingency plans. And the other nice thing about it is that it launches what we call a Resolution Room. The Resolution Room is a virtual meeting place where people all over the globe who are somehow impacted by this event can collaborate, share documents, and have a persistent place to resolve the issue. And then, after that's all done, we capture all the data from the issue and the resolution, we put that into a body of knowledge, and we mine that knowledge for a playbook the next time a similar event comes along. So it's a full--
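To make that alerting step concrete, here is a minimal sketch of the kind of intersection check described above: compare a storm's forecast track against supplier sites and notify the responsible planner. The data shapes, the 150 km radius, and all names are illustrative assumptions, not IBM's actual implementation, which would presumably use The Weather Company's forecast cones and severity data rather than a fixed radius.

```python
# Hypothetical sketch of the Risk Insights alert logic: flag suppliers
# whose sites fall near a storm's forecast track. Pure standard library.
import math
from dataclasses import dataclass

@dataclass
class Supplier:
    name: str
    lat: float
    lon: float
    planner: str  # supply chain planner responsible for these parts

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def suppliers_at_risk(forecast_track, suppliers, radius_km=150.0):
    """Yield (supplier, track_point) pairs where the forecast path
    passes within radius_km of a supplier site."""
    for s in suppliers:
        for point in forecast_track:
            if haversine_km(s.lat, s.lon, *point) <= radius_km:
                yield s, point
                break  # one alert per supplier is enough

# Example: a hurricane track vectored toward the Florida panhandle.
track = [(27.5, -84.0), (29.0, -85.5), (30.3, -85.7)]
suppliers = [Supplier("Acme Components", 30.2, -85.6, "planner@example.com")]
for supplier, point in suppliers_at_risk(track, suppliers):
    print(f"ALERT {supplier.planner}: {supplier.name} is in the forecast "
          f"path near {point}; open a Resolution Room.")
```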
>> It becomes machine learning.

>> It's a machine learning--

>> Sort of data source.

>> It's a full soup-to-nuts solution that gets smarter over time.

>> So you should be able to measure benefits, you should have measurable benefits by now, right? What are you seeing, fewer disruptions?

>> Yes. So in Risk Insights, we know that out of a thousand events that occurred, there were 25 in the last year that were really the ones we needed to identify and mitigate against. And out of those, we know there have been circumstances where, in the past, IBM has had millions of dollars of losses. By being more proactive, we're really minimizing that amount.

>> That's incredible. So you were going to talk about other kinds of AI that you run.

>> Right. So Tom gave an overview of Risk Insights, and we tied it to supply chain and to monitoring the uptime of our customer data centers and things like that. But our portfolio of AI is quite broad. It really covers most of the middle, back, and front office functions of IBM. So we have things in the sales domain, the finance domain, the HR domain, you name it. One of the ones that's particularly interesting to me of late is in the finance domain: monitoring accounts receivable and DSO, days sales outstanding. For a company like IBM, with multiple billions of dollars of revenue, a change of even one day of days sales outstanding provides a gigantic benefit to the bottom line. So we have been integrating disparate databases across the business units and geographies of IBM, pulling that customer and accounts receivable data into one place, where our CFO can take an integrated approach to our accounts receivable and we know where the problems are. And we're going to use AI and other advanced analytic techniques to determine the best treatment for those customers our predictive models flag as at risk of not making their payments on time, or of some other financial risk. So we can integrate a lot of external unstructured data with our own structured data around customers and accounts, and pull together a story around AR that we've never been able to pull before. That's very impactful.

>> So speaking of unstructured data, I understand that data lakes are part of your AI platform. How so?

>> For example, for Risk Insights, we're monitoring hundreds of trusted news sources at any given time. So we know not just where the event is and what locations are at risk, but also what's being reported about it. We monitor Twitter reports about it, and we monitor trusted news sources like CNN or MSNBC, on a global basis. So it gives our risk analysts not just a view of where the event is located, but also what's being said: how severe it is, how big those tidal waves are, how big the storm surge was, how many people were affected. By applying some of the machine learning insights to these, now we can say, well, if there are a couple hundred thousand people without power, then it's very likely there is going to be multimillions of dollars of impact as a result. So we're now able to correlate those news reports with the magnitude of impact, and the potential financial impact to the businesses that we're supporting.
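As a rough illustration of that correlation step, the sketch below fits a reported severity signal (customers without power) against realized losses from past events, then applies the fit to a new report. All figures are invented for the example; the real system would be trained on IBM's own loss history and many more signals.

```python
# Hedged sketch: estimate financial impact from a severity signal that
# news monitoring extracts, using a simple linear fit over past events.
from statistics import linear_regression  # Python 3.10+

# (customers_without_power, realized_impact_usd) -- hypothetical history
history = [(20_000, 400_000), (80_000, 1_600_000),
           (150_000, 3_100_000), (300_000, 6_500_000)]
slope, intercept = linear_regression([h[0] for h in history],
                                     [h[1] for h in history])

def estimated_impact(customers_without_power: int) -> float:
    """Predicted dollar impact for a newly reported outage count."""
    return slope * customers_without_power + intercept

# A trusted news source reports a couple hundred thousand people without power:
print(f"Estimated impact: ${estimated_impact(200_000):,.0f}")
```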
>> So the idea being that IBM is saying, look what we've done for our own business, (laughs) imagine what we could do for you. As Inderpal has said, it's really using IBM as its own test case, trying to figure this all out and learning as it goes. And he said, we're going to make some mistakes, we've already made some mistakes, but we're figuring it out so you don't have to make those mistakes.

>> Yeah, that's right. I mean, if you think about the long history of this, we've been investing in AI, really, depending on how you look at it, since the days of the '90s, when we were doing Deep Blue and we were trying to beat Garry Kasparov at chess. Then we did another big, huge push on the Jeopardy! program, where we innovated around natural language understanding, speed and scale of processing, and the probability of correctness of answers. And then we kind of carried that right through to the current day, where we're now proliferating AI across all of the functions of IBM. And then, connecting to your comment, Inderpal's comment this morning was around, let's just use all of that for the benefit of other companies. It's not always an exact fit, it's never an exact fit, but there are a lot of pieces that can be replicated and borrowed, either people, process, or technology, from our experience, that would help to accelerate other companies down the same path.

>> One of the questions around AI, though, is: can you trust it? The insights that it derives, are they trustworthy?

>> I'll give a quick answer to that, and then Tom, it's probably something you want to chime in on. There's a lot of danger in AI, and it needs to be monitored closely. There's bias that can creep into the datasets, because the datasets are being enhanced with cognitive techniques. There's bias that can creep into the algorithms, and any kind of learning model can start to spin on its own axis and go in its own direction, and if you're not watching, monitoring, and auditing, then it could start to deliver you crazy answers. Then the other part is, you need to build the trust of the users, because who wants to take an answer that's coming out of a black box? We've launched several AI projects where the answer just comes out naked, if you will, just sitting right there with no context around it, and the users never like that. So we've understood now that you have to put in the context, the underlying calculations, and the assessment of our own probability of being correct. Those are some of the things you can do to get over that. But Tom, do you have anything to add?

>> I'll just give an example. When we were early in analyzing Twitter tweets about a major storm, what we read about was, oh, some celebrity's dog was in danger, like, uh. (Rebecca laughs) This isn't very helpful insight.

>> I'm going to guess, I probably know the celebrity's dog that was in danger. (laughs)

>> (laughs) Actually, stop saying that. So we learned how to filter those things out and say, what are the meaningful keywords that we need to extract, and really then can draw conclusions from?

>> So is Kardashian a meaningful word, (all laughing) I guess that's the question.

>> Trending!

>> Trending now!
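A minimal sketch of that filtering step, assuming a simple keyword approach: keep reports that mention operationally meaningful terms and drop celebrity noise. The keyword lists are illustrative assumptions; the production system likely uses learned classifiers rather than hand-picked lists.

```python
# Toy filter for storm-related tweets: drop celebrity noise, keep
# reports with operationally meaningful keywords. Lists are made up.
MEANINGFUL = {"power outage", "flooding", "storm surge", "road closed",
              "evacuation", "port closed"}
NOISE = {"celebrity", "dog", "kardashian"}

def is_actionable(tweet: str) -> bool:
    text = tweet.lower()
    if any(word in text for word in NOISE):
        return False
    return any(phrase in text for phrase in MEANINGFUL)

tweets = [
    "Some celebrity's dog was rescued from the storm!",
    "Storm surge has closed the port; evacuation routes are jammed.",
]
print([t for t in tweets if is_actionable(t)])  # keeps only the second
```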
>> I want to follow up on that, because as an AI developer, what responsibility do developers have to show their work, to document how their models have worked?

>> Yes. So all of the information that we provide to the users draws back to, here's the original source, here's where the information was taken from. And that's an important part of having a cognitive enterprise data platform where all this information is stored, because then we can refer to it, go deeper, and analyze it further after the fact, right? You can't always respond in the moment, but once you have those records, that's how you can learn from it for the next time around.

>> I understand that in some cases, particularly in deep learning, it's very difficult to build reliable test models. Is that true, and what progress is being made there?

>> In our case, we're into the machine learning dimension; we're not all the way into deep learning in the project that I'm involved with right now. But one reason we're not there is because you need huge, vast amounts of robust data, and that trusted dataset from which to work. So we aspire towards, and we're heading towards, deep learning. We're not quite there yet, but we've started with machine learning insights and we'll progress from there.

>> And one of the interesting things about this AI movement overall is that it's filled with very energetic people; there's kind of a hacker mindset to the whole thing. So people are grabbing and running with code, they're using a lot of open source, there's a lot of integration of the black box from here, from there, and the other place, which all adds to the risk of the output. So that comes back to the original point, which is that you have to monitor, you have to make sure that you're comfortable with it. You can't just let it run on its own course without really testing it to see whether you agree with the output.

>> So what other best practices? There's the monitoring, but at the same time, that hacker culture is not all bad. You want people who are energized by it, trying new things and experimenting. So how do you make sure you let them have sort of enough rein, but not free rein?

>> I would say, what comes to mind is, start with a business problem that's a real problem. Don't make this an experimental data thing. Start with the business problem. Develop a POC, a proof of concept. Small, and here's where the hackers come in: they're going to help you get it up and running in six weeks as opposed to six months. And then once you're at the end of that six-week period, maybe you design one more six-week iteration, and then you know enough to start scaling it, and you scale it big. So you've harnessed the hackers, the energy, the speed, but you're also testing, making sure that it's accurate, and then you're scaling it.

>> Excellent. Well, thank you, Tom and Joe, I really appreciate it. It's great to have you on the show.

>> Thank you!

>> Thank you, Rebecca, for the spot.

>> I'm Rebecca Knight for Paul Gillin, we will have more from the IBM CDO Summit just after this. (light music)
Farah Papaioannou and Kilton Hopkins, Edgeworx.io | CUBEConversation, 2018
(intense orchestral music)

>> Hey, welcome back everybody, Jeff Frick here with theCUBE. We're at our Palo Alto studios for a CUBEConversation, and we're talking about startups today, which we don't often get to do, but it's really one of the more exciting things that we get to do, because that's what keeps Silicon Valley Silicon Valley. And this next new company is playing in a very hot space, which is edge: you're all about cloud, then the next big move is edge, especially with internet of things and industrial internet of things. So we're really happy to welcome Edgeworx here, fresh off the announcement of the new company and their funding. We've got both founders: we have Farah Papaioannou, the President, and Kilton Hopkins, the CEO, both of Edgeworx. Welcome.

>> Thank you,

>> Thanks,

>> thanks for having us.

>> So for those of us that aren't familiar, give us kind of the quick 101 on Edgeworx.

>> So I was looking at the space as a venture capitalist before I joined up with Kilton, and I'd been looking at edge computing for a long time, because it just made intuitive sense to me. You're looking at all these devices that are now not just devices but compute platforms, and they're generating all this data. Well, how are we going to address all that data? If you think about sending all of it back to the cloud: latency, bandwidth, and cost. You talk about breaking the internet; this is what's going to break the internet, not Kim Kardashian's, you know, butt photo, right? (guys laugh) So how do you solve that problem? If you think about autonomous vehicles, for example, these are now computers on wheels, not just a transportation mechanism. If they're generating all this data, and they need to interact with each other and make decisions in near real time, how are they going to do that if they have to send all that data back to the cloud?

>> Right, great.

>> So that's where I came across Kilton's company, or actually the technology that he'd built, and we formed a company together. I looked at everything, and the technology that he'd developed was leaps and bounds beyond anything anyone else had built to date.

>> So, Kilton, how did you start on that project?

>> Yeah, so this actually goes way back, to about 2010. Back in Chicago, I was looking at: what architecture is going to allow us to do the types of processing that are really expensive, and do it close to where the data is? This architecture was in the back of my mind. When I came to the Bay Area, I jumped in with the city of San Francisco as an IOT advisor, and everywhere I looked I saw the same problems. Nobody was doing secure processing at the edge in any kind of way that was manageable, so I started to solve it. Then, years later, after I'd done some deployments myself and seen how the stuff worked, I finally arrived at an architecture where I thought: okay, this thing's passing all these trials, and now I think we've got this pretty well nailed. I basically got into it before the terms fog and edge computing were being thrown around, and just said, this is what has to happen. And then, of course, it turns out that the world catches up, and now there's terms for it, and everyone's talking about the edge.
>> So it's an interesting problem, right? It's the same old problem we've been having forever, which is: do you move the data to the compute, or do you move the compute to the data? And then we've had these other things happening, with suddenly this huge swell of data flow, and that's even before we start on kind of the IOT contribution to the data flow. Luckily the networks are getting faster, 5G's around the corner, chips are getting faster and cheaper, memory's getting cheaper and faster. And then we had the development of the cloud and really the hypergrowth of the public cloud. But that still doesn't help you with these low-latency applications that you have to execute on the edge. And obviously we've talked to GE a lot, and everyone wants to talk about turbines and, you know, harsh conditions and nasty weather; it's not this pristine data center. How do you put compute, and how much compute, at the edge, and how do you manage that data flow? What can you deal with there, and what do you have to send up? And of course there's this pesky thing called physics, and latency, which just prohibits, as you said, the ability to get stuff up to some compute and get it back in time to do something about it. So what is the approach that you guys are taking? What's a little bit different about what you've built with Edgeworx?

>> Sure.

>> So, in most cases, people think about the edge as almost a lead-in to the cloud. They say: how can I pre-process the data, maybe curtail some of the bandwidth volume that I need in order to send data up to the cloud? But that doesn't actually solve the problem; you'll never get rid of cloud latency if you're just sending smaller packages. And in addition, you have done nothing to address the security issues of the edge if you're just trying to package data, maybe reduce it a bit, and send it to the cloud. So what's different about us is, with us you can use the cloud, but you don't have to; we're completely at the edge. You can run software with Edgeworx that stays within the four walls of a factory, if you so choose, and no data will ever leave the building. And that is a stark difference from the approaches that've been taken to date, which've been tied to the cloud with just a little at the edge. It's like, come on, this is real edge.

>> Right, right. And so is it a software layer that sits on top of whatever kind of BIOS and firmware are on a lot of these dumb sensors, is that kind of the idea?

>> Yeah, exactly: it sits above the BIOS level, it sits above the firmware. It creates an application runtime, so it allows developers to write applications that are containerized; we run containers at the edge. That allows our developers to run applications they've already developed for the cloud, or to write new applications, but they don't have to learn an entirely new framework or an entirely new SDK. They can write using tools they already know: Java, C#, C++, Python. If you can write that language, we can run it, at the edge. Which again allows people to use skillsets that they already know; they don't have to learn specialized skillsets for the edge, why should they have to do that, you know?
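The pattern being described is ordinary cloud-style code, packaged as a container, running against a local data stream so nothing has to leave the building. Here is a generic sketch of that idea in plain Python; it is not the Edgeworx SDK (whose actual API is not shown in this conversation), and the broker address, topics, and paho-mqtt 1.x client are assumptions for the example.

```python
# Generic containerized edge worker: subscribe to a local sensor topic,
# compute locally, and publish only the derived result -- the raw data
# never leaves the factory network. Assumes a local MQTT broker and the
# paho-mqtt 1.x client API.
import json
import statistics
import paho.mqtt.client as mqtt

window = []

def on_message(client, userdata, msg):
    reading = json.loads(msg.payload)
    window.append(reading["temperature_c"])
    if len(window) >= 60:  # one minute of 1 Hz samples
        mean_c = statistics.fmean(window)
        window.clear()
        # Only this small derived record goes anywhere beyond the gateway.
        client.publish("factory/line1/temp_mean", json.dumps({"mean_c": mean_c}))

client = mqtt.Client()
client.on_message = on_message
client.connect("gateway.local", 1883)  # broker inside the factory walls
client.subscribe("factory/line1/temp")
client.loop_forever()
```

Packaged in a container, the same code runs unchanged on a gateway or a smart device, which is the point being made about reusing cloud skillsets.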
>> I think, and you know, good for you guys, to get Stacey Higginbotham to write a nice article about the company long before you launched, which is good. But I thought she had a really interesting breakdown on kind of edge computing, and she broke it down into four layers: the devices, the sensors, as you said, as dumb as they can be, right, you want a lot of these things; then this gateway layer that collects the data, some level of compute close to the edge, not necessarily in the camera or in any of these sensors, but close; and then of course a connection back to the cloud. So you guys run in the sensor, or probably more likely in that gateway layer? Or do you see, in some of the early customers you're talkin' to, are they putting in these little micro data centers? I mean, how are you actually seeing this stuff deployed in the field at scale?

>> So we actually gave Stacey that four-layer chart, because we were trying to explain the edge to people who didn't understand what it was, and again, people refer to all these different layers of the edge. We actually think that the layer right above the sensors is the most difficult to solve for. And the reason we don't want to run at the sensor level is that sensors are becoming more and more commoditized; a customer would rather have a thousand dumb sensors, where they could get more and more data, than 10 really smart sensors where they could run compute on them. So, unless there are special circumstances, like the camera we're actually working with that has GPU capability, where we can actually run on the device, we like to run at the level above, and there are a couple of reasons for that. One is, if you run on the devices themselves, you can't really aggregate across devices; a temperature sensor cannot aggregate a pressure sensor's data, you need to sit at a layer above. Also, we're able to serve as a broker between low-level protocols, you know, Wi-Fi and Bluetooth, and high-level protocols, TCP/IP, which you also cannot do at the sensor level. If you were to run at the sensor, you'd basically have to do what Amazon does, which is device-to-cloud, and that doesn't really afford you the capability of running real software at the edge.
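A toy illustration of that layer-above role: the gateway fuses readings from two different sensor types, arriving over two different transports, into one normalized record that neither sensor could produce on its own. The feeds are simulated here, since the point is the aggregation rather than any particular protocol stack.

```python
# Gateway-level fusion sketch: normalize readings from unlike sensors
# and derive a condition no single sensor can see. Feeds are faked;
# names and thresholds are illustrative only.
import json
import time

def ble_temperature_feed():   # stand-in for a Bluetooth temperature sensor
    yield {"celsius": 81.2}

def wired_pressure_feed():    # stand-in for a wired pressure sensor
    yield {"kpa": 412.0}

def fuse(temp, pressure):
    """Combine readings into one gateway-level observation."""
    return {
        "ts": time.time(),
        "temperature_c": temp["celsius"],
        "pressure_kpa": pressure["kpa"],
        # A cross-sensor condition, computable only at this layer:
        "alarm": temp["celsius"] > 80 and pressure["kpa"] > 400,
    }

for t, p in zip(ble_temperature_feed(), wired_pressure_feed()):
    print(json.dumps(fuse(t, p)))  # would be forwarded over TCP/IP upstream
```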
>> Right. So, when you're out, let's just say the camera, we talked a little bit before we turned the cameras on about surveillance and surveillance cameras. I mean, where are those gateways, and where's the power and the connectivity to that gateway? What are you seeing in some of these early examples?

>> So, you know, for cameras you've got basically two choices: either the camera is a dumb camera that puts a video feed to some kind of compute box that's nearby, on a wired network or a wireless network that's private to it, or it's a smart camera. For cameras that are already in place, that are analog, you can put a box in the building that can take the feeds; but the better option is smart cameras, so a new greenfield deploy would probably have smart cameras that can do the AI processing right there in the module. So the answer is: somewhere you have a feed of sensor data, whether it's video, audio, or just, like, temperature, you know, time-series data, and then it hits a point where you're still on the edge, but you can do compute. Sometimes they're in the same unit, sometimes they're a little spread out, sometimes they're over wireless; that first layer up is where we sit, no matter how the compute is done.

>> Okay. And I'm just curious on some of the early use cases. How do people see the opportunity now to have kind of a software-driven IOT device that's separate from the actual firmware that's in the sensor? What is that going to enable them to do that they couldn't do before?

>> Yeah, so if you think about the older model, it's: how can I make this device get its sensor readings and somehow communicate that data? And I'm going to write low-level code, probably C code or whatever, to operate that, and it's, how often do I poll the sensor? You're really thinking about, jeez, I need to get this data somewhere to make it useable. When you use us, you think: okay, I have streams of data, what would I do if I wanted to run software right where the data is? I can increase my sampling frequency, I can do everything we were going to do in the cloud right there, and for free; once it's deployed, there's no bandwidth cost. So it opens up the thinking: we're now running software at the edge, instead of running firmware that just moves the data upstream. You stop moving the data and you start moving the applications, and that's, like, the world changer for everybody.

>> Right, right.

>> Plus, you can use the same skillsets you have for the cloud. Up until now, programming IOT devices has been a matter of saying, oh, you know, if I know how to work the GPIO pins and I can write in C, maybe I can make it work. And now you say: I know Python, and I know how to do data analytics with Python, I can just move that into the sensor module, if it's smart enough, or the gateway right there, and I can pretty much push my code into the factory instead of waiting for the factory to wire the data to me.

>> And we actually have a customer right now that's doing real-time surveillance at the edge. They have smart city deployments, and they're looking at an example of border control. What they want to be able to do is put these cameras out there and say: well, I've detected something on the maritime border here; is it a whale, is it debris, is it a boat full of refugees, or pirates, or migrants? Before, what they would have to do is say, okay, as an edge device, the basic level of processing I could run is to compress that video data, send some of it back, and then do the analysis back there. Well, that's not really going to be that helpful, because if I have to send it back to some cloud and do the analysis there, by the time I've recognized what's out there: too late. What we can do now, because we have our platform running on these cameras, is deploy software that detects right there at the edge what we're seeing. And I don't send back video data, which is heavy on bandwidth and latency, and cost as well; I just send back text data that says, I've actually detected something, so let's take some sort of action on it, and say, okay, the next camera should be able to pick it up, or send some notifications that we need to address it back here. If I'm sending textual data back, and I've already done that processing right there and then, I can run thousands of cameras out there at the edge, versus just 10 or 12 because of the cost and latency. And then the customer can decide: well, you know what, I want to add another application that does target tracking of certain individual terrorists, right? Okay, well, that's easy, because our platform's already running; we can just push it out there to the edge. Or, you know what, I'm able to do model training at the edge and get better detection, going from 80% to 90%; well, I can just push that update right there at the edge, as opposed to going out there and flashing the board, or sending out some sort of firmware upgrade. So it allows a lot of flexibility that we couldn't get before.
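Here is a sketch of that camera scenario: classify frames on the device and ship only small text records upstream, instead of streaming video. The classifier is a stand-in stub, and the camera name and message shape are invented; a real deployment would run model inference on the camera's GPU and push updated models through the platform.

```python
# Edge-camera sketch: run detection locally, publish only text metadata.
# The classifier and frames are fakes standing in for on-device inference.
import json
import time

def classify(frame):
    """Stand-in for an on-device model; returns (label, confidence)."""
    return ("vessel", 0.93) if frame.get("blob") else ("debris", 0.55)

def monitor(frames, publish):
    for frame in frames:
        label, confidence = classify(frame)
        if label == "vessel" and confidence > 0.9:
            # A few hundred bytes of text go upstream instead of video.
            publish(json.dumps({"camera": "maritime-07", "detected": label,
                                "confidence": confidence, "ts": time.time()}))

# Fake frames stand in for the live camera feed in this sketch.
fake_frames = [{"blob": None}, {"blob": "dark shape, 40 m, moving"}]
monitor(fake_frames, publish=print)
```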
>> Right. Well, I just got to ask ya now, you got a pile of money, which is exciting, and congratulations.

>> Thank you.

>> I was going to say, kind of, where do you focus your go-to-market? Within any particular vertical, or any specific horizontal application? But it sounds like, I think we've used cameras now three or four times (laughs) in the last three or four questions, so I'm guessin' that's a, that's a good--

>> That's been a strong one for us.

>> You know, kind of an early-adopter market for you guys.

>> That one's been a strong one for us, yeah. We've had some real success with telcos. Another use case where we've seen some real good traction is detecting quality-of-service issues on Wi-Fi routers, so that's one we're looking at as well. Oil and gas has been pretty strong for us too. So it seems to be kind of a horizontal play for us, and we're excited about the opportunity.

>> Alright. Well, thanks for comin' on and tellin' the story, and congratulations on your funding and launching the company, and,

>> Thank you.

>> And bringin' it to reality.

>> Great, thanks.

>> Alright, Kilton, Farah, I'm Jeff, you're watchin' theCUBE, thanks for watchin', we'll see ya next time. (intense orchestral music)