Scott Noteboom, Litbit – When IoT Met AI: The Intelligence of Things - #theCUBE

>> Announcer: From the Fairmont Hotel in the heart of Silicon Valley, it's The Cube, covering When IoT Met AI: The Intelligence of Things. Brought to you by Western Digital.

>> Hey, welcome back, everybody. Jeff Frick here with The Cube. We're in downtown San Jose at the Fairmont Hotel at an interesting little show called When IoT Met AI: The Intelligence of Things. A lot of cool startups here, along with some big companies. We're really excited to have our next guest, who's taking a little different angle. He's Scott Noteboom, the co-founder and CEO of a company called Litbit. First off, Scott, welcome.

>> Yeah, thank you very much.

>> Absolutely. For folks that aren't familiar, what is Litbit? What's your core mission?

>> Well, probably the simplest way to put it is, in our business we enable users who have a lot of experience in a lot of different areas to take their expertise, even if that expertise isn't coding software, or understanding, or even being able to spell, what an algorithm is from the data science perspective, and we give them an easy interface so they can kind of create their own Siri or Alexa: an AI, but an AI that's based on their own subject matter expertise, that they can put to work in a lot of different ways.

>> So, there's often a lot of talk about tribal knowledge, and how tribal knowledge gets passed down so people know how to do things, whether it's with new employees or, as you were talking about a little bit off camera, at remote locations. And there hasn't really been a great system to do that. So, you're really attacking that, not only with the documentation, but by making it an actionable piece of AI software that can then drive machines, using IoT to do things. Is that correct?

>> That's right. So, take an AI that I've been passionate about, 'cause I ran data centers for a lot of years: Dac. Dac's an AI that has a lot of expertise in how to run a data center, kind of fueled and mentored by a lot of the experts in the industry. So, how can you take Dac and put Dac to work in a lot of places? The people who need the best-trained Dac aren't people who are building apps. They're people who have their own area of subject matter expertise, and we view these AI personas that can be put to work as kind of the apps of the future, where people can subscribe to personas that are built directly by the experts, which is a pretty pure way to connect AIs with the right people, and then be able to get them and put them--

>> So, there's kind of two steps to the process. How does the information get from the experts into your system? How does that training happen?

>> So, here's where we spend a lot of attention. A lot of people question and go, "Well, an AI lives in this virtual, logical world that's disconnected from the physical world." And I always ask people to close their eyes and imagine their favorite person, the person that loves them most in the world. When they picture that person and hear that person's voice in their head, that's actually a very similar virtual world to the one an AI works in. It's not the physical world. What connects us as people to the physical world is our senses: our sight, our hearing, our touch, our feeling. And what we've done is, using IoT sensors, we've combined those sensors with AI to turn sensors into senses, which gives the AI the ability to connect in really meaningful ways to the physical world.

And then the experts can teach the AI: this is what this looks like, this is what this sounds like, this is what it's supposed to feel like. If it's greater than 80 degrees in an office location, it's hot. We're really teaching the AI to form thoughts based on a specific expertise, and then to take the right actions when those thoughts are formed.
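As a concrete illustration of that sensors-into-senses idea, here is a minimal sketch in Python of an expert-taught rule turning a raw IoT reading into a "thought" and then an action. Every name in it (form_thought, act_on_thought, the response chosen for "hot") is hypothetical; it sketches the concept Scott describes, not Litbit's actual API.

```python
# Minimal sketch: an IoT sensor reading becomes a human-taught "thought,"
# which triggers the action the expert prescribed. All names are hypothetical.

OFFICE_HOT_THRESHOLD_F = 80.0  # the expert-taught rule: above 80°F, the office is "hot"

def form_thought(reading_f: float) -> str | None:
    """Turn a raw temperature reading into a taught 'thought', if one applies."""
    if reading_f > OFFICE_HOT_THRESHOLD_F:
        return "hot"
    return None

def act_on_thought(thought: str) -> None:
    """Take the action the expert prescribed for the formed thought."""
    if thought == "hot":
        print("Office is hot: lower the cooling setpoint and notify facilities.")

reading_f = 82.5  # stand-in for a live IoT sensor read
thought = form_thought(reading_f)
if thought is not None:
    act_on_thought(thought)
```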
>> How do you deal with nuance? 'Cause I'm sure there's a lot of times where people, as you said, are sensing or smelling something, but they don't necessarily consciously know that it's an input into their decision process, even though it really is. They just haven't thought of it as a discrete input. How do you separate out all these discrete inputs so you get a great model that represents your best-of-breed technicians?

>> Well, to try to answer the question: first of all, the more training the better. A good way to think of an AI is that, unlike a lot of technologies that typically age and go out of life over time, an AI continuously gets smarter the more it's mentored by people, which would be supervised learning. And the more it can adjust and learn on its own from real day-to-day data activity, combining that supervised learning with an unsupervised learning approach, the more it continuously gets better over time. We've figured out some ways that it can produce some pretty meaningful results with a small amount of training.
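To make that mentoring loop concrete, here is a toy sketch of a persona that starts from an expert's hand-set rule and refines it as experts label more day-to-day readings, a loose stand-in for the supervised-learning side Scott mentions. The class and its methods are invented for illustration and are not drawn from Litbit.

```python
# Toy mentoring loop: the persona starts from the expert's hand-taught 80°F
# rule and moves its decision boundary as experts label real readings.
# Purely hypothetical code, not Litbit's actual training pipeline.

class MentoredThreshold:
    """Learn a 'hot vs. comfortable' boundary from expert-labeled readings."""

    def __init__(self, initial_threshold_f: float) -> None:
        self.threshold_f = initial_threshold_f  # the expert's starting rule
        self.hot: list[float] = []
        self.comfortable: list[float] = []

    def mentor(self, reading_f: float, label: str) -> None:
        """Supervised step: an expert labels one reading."""
        (self.hot if label == "hot" else self.comfortable).append(reading_f)
        if self.hot and self.comfortable:
            # Move the boundary to the midpoint between the class averages.
            hot_mean = sum(self.hot) / len(self.hot)
            comfy_mean = sum(self.comfortable) / len(self.comfortable)
            self.threshold_f = (hot_mean + comfy_mean) / 2

    def judge(self, reading_f: float) -> str:
        return "hot" if reading_f > self.threshold_f else "comfortable"

persona = MentoredThreshold(initial_threshold_f=80.0)
for reading, label in [(84.0, "hot"), (78.0, "comfortable"), (81.0, "hot")]:
    persona.mentor(reading, label)
print(persona.threshold_f)  # boundary has shifted from the hand-set 80.0 to 80.25
print(persona.judge(80.5))  # -> "hot"
```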
>> Okay. What are some of the applications, kind of your initial go-to-market?

>> We're a small startup, and really, what we've done is we've developed a platform, and our goal is for it to be very horizontal in nature. Then the applications, or the AI personas, can be very vertical: subject matter experts across different silos. So what we're doing is working with partners right now in different silos, developing AIs that have expertise in the oil and gas business, in the pharmaceutical space, in the data center space, in the corporate facilities management space, and really making sure that people who aren't technologists in all of those spaces, whether you're a scientist running a lab or a facilities guy in a corporate building, can successfully make that experiential connection between themselves and the AI, and put it to practical use. And then as we go, there's a lot of work that can be very specific to particular silos, whatever they may be.

>> So, those personas are actually roles of individuals, if you will, performing certain tasks within those verticals.

>> Absolutely. What we call them is coworkers, and the way things are designed, one of the things that I think is really important in the AI world is that we approach everything from a human perspective, because it's a big, disruptive shift, and there's a lot of concern over it. So, you get people to connect to it in a humanistic way: coworker Viv works along with coworker Sophia, Viv has this expertise, Sophia has that expertise, and that's a better way to interface with AIs, which have names that aren't a lot different from people's and skill sets that aren't a lot different either. When you look at the AIs, they don't mind working longer hours. Let them work the weekends so I can spend time with my family. Let them work the crazy shifts. So, things are different in that regard. But the relationship aspect of how the workplace works? Try not to disrupt that too much.

>> So, then on the consumption side, with the person, the coworker, that's working with the persona: how do they interact with it, how do they get the data out, and, I guess even more importantly, how do they get new data back in to continue to train the model?

>> So, the biggest thing you have to focus on with a human and machine learning interface that doesn't require a programmer or a data scientist is that the language the AI is taught in is natural human language. So, we've developed a lot of natural human language interfaces that are pretty neat, because a human coworker here in California could be interfacing in English with their AI coworker, and at the same time someone in Shanghai could be interfacing with the same coworker speaking Mandarin, so you get multilingual functionality. Right now, to answer your question, people are doing it in a text-based scenario. But the future vision, I think, when the industry timing is right, is that every one of the coworkers we're developing will have a very distinct, unique fingerprint of a voice. So, therefore, when you're engaging with your coworker using voice, you'll begin to recognize, oh, that's Dac, or that's Viv, or that's Sophia, based on their voice. Like with people, this is how we communicate, with voice, and we believe the same thing's going to occur. A lot of that's in the timing, but that's the direction where things are headed.

>> Interesting. The whole voice aspect is just a whole 'nother interesting thing, in terms of what type of personality attributes get associated with a voice. That's probably going to be a huge piece of the adoption, in terms of having a true coworker experience, if you will.

>> One of the things we haven't figured out, and these are important questions, and there are so many unknowns: we feel really confident that the AI persona should have a unique voice, because then I know who I'm engaging with, and I can connect by ear without them saying their name. But what does an AI persona look like? That's something where, actually, we don't know. We explore different things, and oh, that looks scary, or oh, that doesn't make sense. Should it look like anything? Which has largely been the approach to what an Alexa or a Siri looks like. As you continue to advance those engagements, particularly when augmented reality comes into play, if through augmented reality you're able to look and say, "Oh, a coworker's working over there," there's some value in that. But what is it going to look like? That's interesting, and we don't know.

>> Hopefully, better than those things at the San Jose Airport that are running around.

>> Yeah, exactly.

>> Classic robot. All right, Scott, very interesting story. I look forward to watching you grow and develop over time.

>> Awesome, it's good to talk.

>> Absolutely. All right, he's Scott Noteboom from Litbit. I'm Jeff Frick, you're watching The Cube. We're at When IoT Met AI: The Intelligence of Things, here in San Jose, California. We'll be right back after this short break. Thanks for watching. (upbeat music)

Published Date: Jul 2, 2017
