Dave Tokic, Algolux | Autotech Council 2018
>> Announcer: From Milpitas, California, at the edge of Silicon Valley, it's the Cube, covering autonomous vehicles. Brought to you by Western Digital.

>> Hey, welcome back everybody, Jeff Frick here with the Cube. We're at Western Digital's office in Milpitas, California, at the Autotech Council Autonomous Vehicle event. About 300 people talking about all the various problems that have to be overcome to make this thing reach the vision that we all have in mind and get beyond the cute Waymo cars driving around and actually get to production fleets. So a lot of problems, a lot of opportunity, a lot of startups, and we're excited to have our next guest. He's Dave Tokic, the VP of Marketing and Strategic Partnerships at Algolux. Dave, great to see you.

>> Great, thank you very much, glad to be here.

>> Absolutely. So you guys are really focused on a very specific area, and that's imaging: all the processing of imaging, the intelligence you can pull out of imaging, and getting so much more out of those cameras that we see around all these autonomous vehicles. So give us a little bit of the background.

>> Absolutely. So, Algolux, we're totally focused on driving safety and autonomous vision. It's really about addressing the limitations today in imaging and computer vision systems, for perceiving the surrounding environment and objects much more effectively and robustly, as well as enabling cameras to see more clearly.

>> Right, and we've all seen the demo in our Twitter feeds of the chihuahua and the blueberry muffin, right? This is not a simple equation, and somebody like Google and those types of companies have the benefit of everybody uploading their images, so they can run massive amounts of modeling around that. How do you guys do it in an autonomous vehicle? It's a dynamic situation, it's changing all the time, there are lots of different streets, different situations. So what are some of the unique challenges, and how are you guys addressing them?

>> Great. So today, for both ADAS systems and autonomous driving, the companies out there are focusing on really the simpler problems of being able to properly recognize an object or an obstacle in good conditions: fair weather in Arizona or Mountain View or Tel Aviv, et cetera. But really, we live in the real world. There's bad weather, there's low light, there are lens issues, dirty lenses, and so on. Being able to address those difficult cases is not really being done well today; today's system architectures have difficulty doing that. We take a very different, novel approach to how we process and learn, through deep learning, to do that much more robustly and much more accurately than today's systems.

>> How much of that is done in the car, and how much of it is done where you're building your algorithms offline and then feeding them back into the car? How does that loop work?

>> Great question. The intent is to deploy on systems that are in the car, embedded, right? We're not looking at a cloud-based system where everything is processed in the cloud, with the latency issues and so on that are a problem. Right now, it's focused on the embedded platform in the car, and we do training of the datasets, but we take a novel approach with training as well.
We don't need as much training data, because we augment it with very specific synthetic data that understands the camera itself, as well as taking in the difficult, critical cases like low light and so on.

>> Do you have your own dedicated camera, or is it more of a software solution that you can use for lots of different types of inbound sensors?

>> Yeah, what we have today is, we call it, CANA. It's a full end-to-end stack that starts from the sensor output, say an imaging sensor, or a path to fusion with sensors like LIDAR, radar, et cetera, all the way up to the perception output that would then be used by the car to make a decision like emergency braking or turning or so on. So we provide that full stack.

>> Perception is a really interesting word to use in the context of a car and computer vision, because it really implies a much higher level of understanding of what's going on; it really implies context. So how do you help it get beyond just identifying things to actual perception, so that you can make some decisions about actions?

>> Got it. So yeah, it's all about intelligent decisions, and being able to do that robustly across all types of operating conditions is paramount; it's mission critical. We've seen recent cases, Uber and Tesla and others, where they did not recognize the problem. That's where we start first: making sure that the information that goes up into the stack is as robust and accurate as possible, and from there it's about learning and sharing that information upstream to the control stacks of the car.

>> It's interesting, because we all saw the video from the Uber accident with the fatality of the gal, unfortunately, and what was weird to me on that video is she came into the visible light, at least on the video we saw, very, very late. But you've got to think, right, visible light is a human eye thing, that's not a computer; there are so many other types of sensors. So when you think of vision, is it just visible light, or do you guys work within that whole spectrum?

>> Fantastic question. Really, the challenge with camera-based systems today, starting with cameras, is that the way the images are processed is meant to create a nice displayed image for you to view. There are definite limitations to that. The processing chain removes noise, does deblurring, things of that nature, which removes data from that incoming image stream. We actually do perception prior to that image processing. We learn how to process for the particular task, like seeing a pedestrian or bicyclist, et cetera, and so that's from a camera perspective. It gives us quite the advantage of being able to see more than could be perceived before. We're also doing the same for other sensing modalities, such as LIDAR and radar. That allows us to take in disparate sensor streams and learn the proper way of processing and integrating that information for higher perception accuracy, using those multiple systems for sensor fusion.
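To make the idea of doing perception before the conventional image-processing chain concrete, here is a minimal sketch, assuming a PyTorch-style model and an RGGB sensor. The `RawPerceptionNet` name, the Bayer packing, the layer sizes, and the classification head are illustrative placeholders, not Algolux's actual CANA architecture; the point is only that the network ingests the raw sensor mosaic directly, so the front end is learned for the perception task rather than tuned to produce a display-friendly image.

```python
# Minimal sketch (not Algolux's implementation): a perception network that
# consumes raw Bayer sensor data directly, so the front-end "image processing"
# is learned jointly with the task instead of being tuned for display.
import torch
import torch.nn as nn


def pack_bayer(raw: torch.Tensor) -> torch.Tensor:
    """Split a (N, 1, H, W) RGGB Bayer mosaic into 4 half-resolution planes."""
    return torch.cat(
        [raw[:, :, 0::2, 0::2],   # R sites
         raw[:, :, 0::2, 1::2],   # G sites on red rows
         raw[:, :, 1::2, 0::2],   # G sites on blue rows
         raw[:, :, 1::2, 1::2]],  # B sites
        dim=1,
    )


class RawPerceptionNet(nn.Module):
    """Toy stand-in for an end-to-end, sensor-to-perception stack."""

    def __init__(self, num_classes: int = 3):
        super().__init__()
        # Learned, task-driven front end replacing the hand-tuned ISP steps
        # (denoising, demosaicing, tone mapping) that normally discard data.
        self.front_end = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Tiny classification head; a real system would use a detection head.
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(32, num_classes),
        )

    def forward(self, raw_bayer: torch.Tensor) -> torch.Tensor:
        x = pack_bayer(raw_bayer)   # (N, 4, H/2, W/2)
        x = self.front_end(x)
        return self.head(x)         # per-class logits


if __name__ == "__main__":
    model = RawPerceptionNet(num_classes=3)        # e.g. pedestrian / cyclist / none
    fake_raw_frame = torch.randn(1, 1, 128, 128)   # stand-in for raw sensor output
    print(model(fake_raw_frame).shape)             # torch.Size([1, 3])
```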
>> Right, I want to follow up on sensor fusion, because we see all these startups with their self-driving cars running around Menlo Park and Palo Alto all the time, and some people say, we've got LIDAR, LIDAR's great, LIDAR's expensive, we're trying to do it with just cameras, cameras have limitations. But at the end of the day, there's also all this data that comes off the cars; they're pretty complex data-receiving vehicles as well. So pulling it all together must give you tremendous advantages over relying on one or two or a more singular type of input system.

>> Absolutely. I think cameras will be ubiquitous, right? We know that OEMs and Tier-1s are focused heavily on camera-based systems, with a tremendous amount of focus on other sensing modalities, such as LIDAR, as an example. Being able to kit out a car in a production fashion effectively and economically is a challenge, but that will come down over time with volume. Doing that integration today is a very manually intensive process. Each sensing mode has its own way of processing information, and stitching that together, integrating and fusing it, is very difficult. So taking an approach where you learn through deep learning how to do that is a way of much more quickly getting that capability into the car and also providing higher accuracy, as the merged data is combined for the particular task that you're trying to do.

>> But will your system, at some point, check in, kind of like the Teslas check in at night, get the download, so that you can leverage some of the offline capabilities to do more learning, better learning, aggregating from multiple sources, those types of things?

>> Right, so for us, the type of data that would be most interesting is really the escapes: the cases where the car did not detect something, or told the driver to pay attention or take the wheel, and so on. Those are the corner cases where the system failed. Being able to accumulate those particular, I'll call it, snips of information, send them back, and integrate them into the overall training process will continue to improve robustness. There's definitely a deployed model that goes out that's much more robust than what we've seen in the market today, and then there's the ongoing learning to continue to improve the accuracy and robustness of the system.

>> I think people so underestimate the amount of data that these cars are collecting in terms of just the way streets operate, the way pedestrians operate. Whether there's an incident or not, they're still gathering all that data, making judgments, identifying pedestrians, identifying bicyclists, and capturing what they do, so hopefully the predictiveness will be significantly better down the road.

>> That's the expectation, but as numerous studies have said, there's a lot of data that's collected that's just redundant, so it's really about those corner cases where the system struggled to understand what was going on.

>> So just give us kind of where you are with Algolux: state of the company, number of people, where are you on your lifespan?

>> Algolux is a startup based in Montreal with offices in Palo Alto and Munich. We have about 26 people worldwide, most of them in Montreal, very engineering heavy these days, and we will continue to be so. We have some interesting forthcoming news, so please keep an eye out for it, about accelerating what we're doing.
I'll just hint at it that way. The intent really is to expand the team to continue to productize what we've built and start to scale out, to engage more of the automotive companies we're working with. We are engaged today at the Tier-2, Tier-1, and OEM levels in automotive, and the technology is scalable across other markets as well.

>> Pretty exciting. We look forward to watching, and you're getting the challenges of real weather, unlike the Mountain View guys; we don't really deal with real weather here. (laughing)

>> There ya go. (laughing) Fair enough.

>> All right, Dave, well, thanks for taking a few minutes out of your day, and we, again, look forward to watching the story unfold.

>> Excellent, thank you, Jeff.

>> All right.

>> All right, appreciate it.

>> He's Dave, I'm Jeff, you're watching the Cube. We're at Western Digital in Milpitas at the Autotech Council Autonomous Vehicle event. Thanks for watching, we'll catch ya next time.
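To ground the "escapes" feedback loop Dave describes, here is a minimal, hypothetical sketch of filtering logged frames down to the corner cases worth sending back for retraining, while dropping the redundant bulk of the data. The `LoggedFrame` record, the confidence threshold, and the comparison against a later reference label are illustrative assumptions, not Algolux's actual selection criteria.

```python
# Hypothetical sketch of harvesting "escapes" -- frames where the perception
# system struggled -- so only corner cases are sent back for retraining.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class LoggedFrame:
    frame_id: str
    detector_label: Optional[str]   # what the on-board system reported
    detector_confidence: float      # 0.0 .. 1.0
    reference_label: Optional[str]  # later annotation or driver intervention


def is_escape(frame: LoggedFrame, min_confidence: float = 0.6) -> bool:
    """A frame counts as an escape if the system was unsure or simply wrong."""
    if frame.detector_confidence < min_confidence:
        return True
    if frame.reference_label is not None and frame.detector_label != frame.reference_label:
        return True
    return False


def select_for_retraining(log: List[LoggedFrame]) -> List[LoggedFrame]:
    """Keep only the corner cases; the redundant bulk of the log is dropped."""
    return [frame for frame in log if is_escape(frame)]


if __name__ == "__main__":
    log = [
        LoggedFrame("f001", "pedestrian", 0.95, "pedestrian"),  # easy, redundant
        LoggedFrame("f002", None, 0.20, "cyclist"),             # missed in low light
        LoggedFrame("f003", "vehicle", 0.90, "pedestrian"),     # confident but wrong
    ]
    escapes = select_for_retraining(log)
    print([frame.frame_id for frame in escapes])  # ['f002', 'f003']
```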