Joseph Nelson, Roboflow | Cube Conversation
(gentle music) >> Hello everyone. Welcome to this CUBE conversation here in Palo Alto, California. I'm John Furrier, host of theCUBE. We got a great remote guest coming in. Joseph Nelson, co-founder and CEO of RoboFlow, a hot startup in AI, computer vision. Really interesting topic in this wave of AI next gen hitting. Joseph, thanks for coming on this CUBE conversation. >> Thanks for having me. >> Yeah, I love the startup tsunami that's happening here in this wave. RoboFlow, you're in the middle of it. Exciting opportunities, you guys are on the cutting edge. I think computer vision's been talked about just as much as the large language models and these foundational models that are emerging. You're in the middle of it. What's it like right now as a startup and growing in this new wave hitting? >> It's kind of funny, it's, you know, I kind of describe it like sometimes you're in a garden of gnomes. It's like we feel like we've got this giant headstart with hundreds of thousands of people building with computer vision, training their own models, but that's a fraction of what it's going to be in six months, 12 months, 24 months. So, as you described it, a wave is a good way to think about it. And the wave is still building before it gets to its full size. So it's a ton of fun. >> Yeah, I think it's one of the most exciting areas in computer science. I wish I was in my twenties again, because I would be all over this. It's the intersection, there's so many disciplines, right? It's not just tech computer science, it's computer science, it's systems, it's software, it's data. There's so much aperture of things going on around your world. So, I mean, you got to be batting all the students away kind of trying to get hired in there, probably. I can only imagine your hiring regimen. I'll ask that later, but first talk about what the company is that you're doing. How it's positioned, what's the market you're going after, and what's the origination story? How did you guys get here? How did you just say, hey, want to do this? What was the origination story? What do you do and how did you start the company? >> Yeah, yeah. I'll give you the what-we-do today and then I'll shift into the origin. RoboFlow builds tools for making the world programmable. Like anything that you see should be read-write access if you think about it with a programmer's mind, or legible. And computer vision is a technology that enables software to be added to these real world objects that we see. And so any sort of interface, any sort of object, any sort of scene, we can interact with it, we can make it more efficient, we can make it more entertaining by adding the ability for the tools that we use and the software that we write to understand those objects. And at RoboFlow, we've empowered a little over a hundred thousand developers, including those in half the Fortune 100 so far in that mission. Whether that's Walmart understanding the retail in their stores, Cardinal Health understanding the ways that they're helping their patients, or even electric vehicle manufacturers ensuring that they're making the right stuff at the right time. As you mentioned, it's early. Like I think maybe computer vision has touched one, maybe 2% of the whole economy and it'll be like everything in a very short period of time. And so we're focused on enabling that transformation. I think, as far as I think about it, I've been fortunate to start companies before, start and sell these sorts of things. 
This is the last company I ever wanted to start and I think it will be, should we do it right, the world's largest in riding the wave of bringing together the disparate pieces of that technology. >> What was the motivating point of the formation? Was it, you know, you guys were hanging around? Was there some catalyst? What was the moment where it all kind of came together for you? >> You know what's funny is my co-founder, Brad and I, we were making computer vision apps for making board games more fun to play. So in 2017, Apple released ARKit, the augmented reality kit for building augmented reality applications. And Brad and I are both sort of like hacker persona types. We feel like we don't really understand the technology until we build something with it and so we decided that we should make an app that if you point your phone at a Sudoku puzzle, it understands the state of the board and then it kind of magically fills in that experience with all the digits in real time, which totally ruins the game of Sudoku to be clear. But it also just creates this like aha moment of like, oh wow, like the ability for our pocket devices to understand and see the world as good or better than we can is possible. And so, you know, we actually did that as I mentioned in 2017, and the app went viral. It was, you know, top of some subreddits, top of Imgur, Reddit, the hacker community as well as Product Hunt really liked it. So it actually won Product Hunt AR app of the year, which was the same year that the Tesla Model 3 won the product of the year. So we joked that we share an award with Elon our shared (indistinct) But frankly, so that was 2017. RoboFlow wasn't incorporated as a business until 2019. And so, you know, when we made Magic Sudoku, I was running a different company at the time, Brad was running a different company at the time, and we kind of just put it out there and were excited by how many people liked it. And we assumed that other curious developers would see this inevitable future of, oh wow, you know. This is much more than just a pedestrian point-your-phone-at-a-board-game. This is everything can be seen and understood and rewritten in a different way. Things like, you know, maybe your fridge. Knowing what ingredients you have and suggesting recipes or auto ordering for you, or we were talking about some retail use cases of automated checkout. Like anything can be seen and observed and we presumed that that would kick off a Cambrian explosion of applications. It didn't. So you fast forward to 2019, we said, well we might as well be the guys to start to tackle this sort of problem. And because of our success with board games before, we returned to making more board game solving applications. So we made one that solves Boggle, you know, the four by four word game, we made one that solves chess, you point your phone at a chess board and it understands the state of the board and then can make move recommendations. And with each additional board game that we added, we realized that the tooling was really immature. The process of collecting images, knowing which images are actually going to be useful for improving model performance, training those models, deploying those models. And if we really wanted to make the world programmable, developers waiting for us to make an app for their thing of interest is a lot less efficient, less impactful than taking our tool chain and releasing that externally. And so, that's what RoboFlow became. 
RoboFlow became the internal tools that we used to make these game changing applications readily available. And as you know, when you give developers new tools, they create new billion dollar industries, let alone all sorts of fun hobbyist projects along the way. >> I love that story. Curious, inventive, a little radical. Let's break the rules, see how we can push the envelope on the board games. That's how companies get started. It's a great story. I got to ask you, okay, what happens next? Now, okay, you realize this new tooling, but this is like how companies get built. Like they solve their own problem that they had 'cause they realized there's one, but then there has to be a market for it. So you guys actually knew that this was coming around the corner. So okay, you got your hacker mentality, you did that thing, you got the award and now you're like, okay, wow. Were you guys conscious of the wave coming? Was it one of those things where you said, look, if we do this, we solve our own problem, this will be big for everybody. Did you have that moment? Was that in 2019 or was that more of like, it kind of was obvious to you guys? >> Absolutely. I mean Brad puts this pretty effectively where he describes how we lived through the initial internet revolution, but we were kind of too young to really recognize and comprehend what was happening at the time. And then mobile happened and we were working on different companies that were not in the mobile space. And computer vision feels like the wave that we've caught. Like, this is a technology and capability that rewrites how we interact with the world, how everyone will interact with the world. And so we feel we've been kind of lucky this time, right place, right time of every enterprise will have the ability to improve their operations with computer vision. And so we've been very cognizant of the fact that computer vision is one of those groundbreaking technologies that every company will have as a part of their products and services and offerings, and we can provide the tooling to accelerate that future. >> Yeah, and the developer angle, by the way, I love that because I think, you know, as we've been saying in theCUBE all the time, developers are the new de facto standard bodies because what they adopt is pure, you know, meritocracy. And they pick the best. If it's self-service and it's good and it's got an open source community around it, it's all in. And they'll vote. They'll vote with their code and that is clear. Now I got to ask you, as you look at the market, we were just having this conversation on theCUBE in Barcelona at the recent Mobile World Congress, now called MWC, around 5G versus wifi. And the debate was specifically computer vision, like facial recognition. We were talking about how the Cleveland Browns were using facial recognition for people coming into the stadium, and they were using it for ships in international ports. So the question was 5G versus wifi. My question is what infrastructure or what are the areas that need to be in place to make computer vision work? If you have developers building apps, apps got to run on stuff. So how do you sort that out in your mind? What's your reaction to that? >> A lot of the times when we see applications that need to run in real time and on video, they'll actually run at the edge without internet. And so a lot of our users will actually take their models and run it in a fully offline environment. 
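(As a concrete illustration of the fully offline deployment Joseph describes, here is a minimal sketch of running a locally exported detection model at the edge. This is a generic example, not Roboflow's own SDK: the model file, input name, and 640x640 input size are assumptions, and it uses ONNX Runtime and OpenCV only because they are common choices for on-device inference.)

```python
# Hypothetical offline edge inference: the model file lives on the device
# (e.g. a Jetson), and no network connection is needed to produce predictions.
import cv2
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("detector.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

frame = cv2.imread("frame.jpg")                      # or a frame from cv2.VideoCapture(0)
blob = cv2.resize(frame, (640, 640)).astype(np.float32) / 255.0
blob = np.transpose(blob, (2, 0, 1))[np.newaxis, :]  # HWC -> NCHW

outputs = session.run(None, {input_name: blob})      # runs entirely on-device
print("raw output shapes:", [o.shape for o in outputs])
```

Relaying what the model found (counts, alerts, summaries) is the part that needs connectivity, which is exactly the distinction drawn in the next answer.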
Now to act on that information, you'll often need to have internet signal at some point 'cause you'll need to know how many people were in the stadium or what shipping crates are in my port at this point in time. You'll need to relay that information somewhere else, which will require connectivity. But actually using the model and creating the insights at the edge does not require internet. I mean we have users that deploy models on underwater submarines just as much as in outer space actually. And those are not very friendly environments to internet, let alone 5G. And so what you do is you use an edge device, like an Nvidia Jetson is common, mobile devices are common. Intel has some strong edge devices, the Movidius family of chips for example. And you use that compute that runs completely offline in real time to process those signals. Now again, what you do with those signals may require connectivity and that becomes a question of the problem you're solving of how soon you need to relay that information to another place. >> So, that's an architectural issue on the infrastructure. If you're a tactical edge war fighter for instance, you might want to have highly available and maybe high availability. I mean, these are words that mean something. You got storage, but it's not at the edge in real time. But you can trickle it back and pull it down. That's management. So that's more of a business by business decision or environment, right? >> That's right, that's right. Yeah. So I mean we can talk through some specifics. So for example, RoboFlow actually powers the broadcaster that does the tennis ball tracking at Wimbledon. That runs completely at the edge in real time and, you know, technically to track the tennis ball and point the camera, you actually don't need internet. Now they do have internet of course to do the broadcasting and relay the signal and feeds and these sorts of things. And so that's a case where you have both edge deployment of running the model and high availability to act on that model. We have other instances where customers will run their models on drones and the drone will go and do a flight and it'll say, you know, this many residential homes are in this given area, or this many cargo containers are in this given shipping yard. Or maybe we saw these environmental considerations of soil erosion along this riverbank. The model in that case can run on the drone during flight without internet, but then you only need internet once the drone lands and you're going to act on that information because for example, if you're doing like a study of soil erosion, you don't need to be real time. You just need to be able to process and make use of that information once the drone finishes its flight. >> Well I can imagine a zillion use cases. I heard of a use case interview at a company that does computer vision to help people see if anyone's jumping the fence on their company. Like, they know what a body looks like climbing a fence and they can spot it. Pretty easy use case compared to probably some of the other things, but this is the horizontal use cases, it's so many use cases. So how do you guys talk to the marketplace when you say, hey, we have generative AI for computer vision. You might know language models, that's a completely different animal because vision's like the world, right? So you got a lot more to do. What's the difference? How do you explain that to customers? What can I build and what's their reaction? 
>> Because we're such a developer centric company, developers are usually creative and show you the ways that they want to take advantage of new technologies. I mean, we've had people use things for identifying conveyor belt debris, doing gas leak detection, measuring the size of fish, airplane maintenance. We even had someone with like a hobby use case where they did like a specific sushi identifier. I dunno if you know this, but there's a specific type of whitefish that if you grew up in the western hemisphere and you eat it in the eastern hemisphere, you get very sick. And so there was someone that made an app that tells you if you happen to have that fish in the sushi that you're eating. But security camera analysis, transportation flows, plant disease detection, really, you know, smarter cities. We have people that are doing curb management identification, and a lot of these use cases, the fantastic thing about building tools for developers is they're a creative bunch and they have these ideas that if you and I sat down for 15 minutes and said, let's guess every way computer vision can be used, we would need weeks to list all the example use cases. >> We'd miss everything. >> And we'd miss. And so having the community show us the ways that they're using computer vision is impactful. Now that said, there are of course commercial industries that have discovered the value and been able to be out of the gate. And that's where we have the Fortune 100 customers, like we do. Like the retail customers in the Walmart sector, healthcare providers like Medtronic, or vehicle manufacturers like Rivian who all have very difficult either supply chain, quality assurance, in stock, out of stock, anti-theft protection considerations that require successfully making sense of the real world. >> Let me ask you a question. This is maybe a little bit in the weeds, but it's more developer focused. What are some of the developer profiles that you're seeing right now in terms of low-hanging fruit applications? And can you talk about the academic impact? Because I imagine if I was in school right now, I'd be all over it. Are you seeing master's theses being worked on with some of your stuff? Is the uptake in both areas of younger pre-graduates? And then inside the workforce, what are some of the devs like? Can you share just either what their makeup is, what they work on, give a little insight into the devs you're working with. >> Leading developers that want to be on state-of-the-art technology build with RoboFlow because they know they can use the best in class open source. They know that they can get the most out of their data. They know that they can deploy extremely quickly. That's true among students as you mentioned, just as much as in industries. So we welcome students and I mean, we have research grants that we'll regularly support for people to publish. I mean we actually have a channel inside our internal Slack where every day, more student publications that cite building with RoboFlow pop up. And so, that helps inspire some of the use cases. Now what's interesting is that the use case is relatively, you know, useful or applicable for the business or the student. In other words, if a student does a thesis on how to do, we'll say like shingle damage detection from satellite imagery and they're just doing that as a master's thesis, in fact most insurance businesses would be interested in that sort of application. 
So, that's kind of how we see uptake and adoption both among researchers who want to be on the cutting edge and publish, both with RoboFlow and making use of open source tools in tandem with the tool that we provide, just as much as industry. And you know, I'm a big believer in the philosophy that kind of like what the hackers are doing nights and weekends, the Fortune 500 are doing in a pretty short period of time and we're experiencing that transition. Computer vision used to be, you know, kind of like a PhD, multi-year investment endeavor. And now with some of the tooling that we're working on in open source technologies and the compute that's available, these science fiction ideas are possible in an afternoon. And so you have this idea of maybe doing asset management or the aerial observation of your shingles or things like this. You have a few hundred images and you can de-risk whether that's possible for your business today. So there's pretty broad-based adoption among both researchers that want to be on the state of the art, as much as companies that want to reduce the time to value. >> You know, Joseph, you guys and your partner have got a great front row seat, ground floor, present at the creation of this wave here. I'm seeing a pattern emerging from all my conversations on theCUBE with founders that are successful, like yourselves, that there's two kind of real things going on. You got the enterprises grabbing the products and retrofitting into their legacy and rebuilding their business. And then you have startups coming out of the woodwork. Young, seeing greenfield, or picking a specific niche or focus and making that the signature lever to move the market. >> That's right. >> So can you share your thoughts on the startup scene, other founders out there and talk about that? And then I have a couple questions for like the enterprises, the old school, the existing legacy. Little slower, but the startups are moving fast. What are some of the things you're seeing as startups are emerging in this field? >> I think you make a great point that independent of RoboFlow, very successful, especially developer focused businesses, kind of have three customer types. You have the startups, and maybe like series A, series B startups, that you're building a product as fast as you can to keep up with them, and they're really moving just as fast as you are and pulling the product out of you for things that they need. The second segment that you have might be, call it SMB but not enterprise, who are able to purchase and aren't, you know, as fast moving, but are stable and getting value and able to get to production. And then the third type is enterprise, and that's where you have typically larger contract value sizes, slower moving in terms of adoption and feedback for your product. And I think what you see is that successful companies balance having those three customer personas because you have the small startups, small fast moving upstarts that are discerning buyers who know the market and elect to build on tooling that is best in class. And so you basically kind of pass the smell test of companies who are quite discerning in their purchases, plus are moving so quick they're pulling their product out of you. Concurrently, you have a product that's enterprise ready to service the scalability, availability, and trust of enterprise buyers. And that's ultimately where a lot of companies will see tremendous commercial success. 
I mean I remember seeing the Twilio IPO, Uber being like a full 20% of their revenue, right? And so there's this very common pattern where you have the ability to find some of those upstarts that you make bets on, like the next Ubers of the world, the smaller companies that continue to get developed with the product, and then the enterprise, which allows you to really fund the commercial success of the business and validate the size of the opportunity in the market that's being created. >> It's interesting, there's so many things happening there. It's like, in a way it's a new category, but it's not a new category. It becomes a new category because of the capabilities, right? So, it's really interesting, 'cause that's what you're talking about is a category, creating. >> I think developer tools. So people often talk about B to B and B to C businesses. I think developer tools are in some ways a third way. I mean ultimately they're B to B, you're selling to other businesses and that's where your revenue's coming from. However, you look kind of like a B to C company in the ways that you measure product adoption and kind of go to market. In other words, you know, we're often tracking the leading indicators of commercial success in the form of usage, adoption, retention. Really consumer app, traditionally based metrics of how to know you're building the right stuff, and that's what product led growth companies do. And then you ultimately have commercial traction in a B to B way. And I think that that actually kind of looks like a third thing, right? Like you can do these sort of funny zany marketing examples that you might see historically from consumer businesses, but yet you ultimately make your money from the enterprise who has these de-risked high value problems you can solve for them. And I selfishly think that that's the best of both worlds because I don't have to be like Evan Spiegel, guessing the next consumer trend or maybe creating the next consumer trend and catching lightning in a bottle over and over again on the consumer side. But I still get to have fun in our marketing and make sort of fun, like we're launching the world's largest game of rock paper scissors being played with computer vision, right? Like that's sort of like a fun thing you can do, but then you can concurrently have the commercial validation and customers telling you the things that they need to be built for them next to solve commercial pain points for them. So I really do think that you're right by calling this a new category and it really is the best of both worlds. >> It's a great call out, it's a great call out. In fact, I always joke with the VCs. I'm like, it's so easy. Your job is so easy to pick the winners. What are you talking about, it's so easy? I go, just watch what the developers jump on. And it's not about who started it, it could be someone in the dorm room to the boardroom person. You don't know because that B to C, the C, it's B to D you know? You know it's developer 'cause that's a human right? That's a consumer of the tool which influences the business that never was there before. So I think this direct business model evolution, whether it's media going direct or going direct to the developers rather than going to a gatekeeper, this is the reality. >> That's right. >> Well I got to ask you while we got some time left to describe, I want to get into this topic of multi-modality, okay? And can you describe what that means in computer vision? 
And what's the state of the growth of that portion of this piece? >> Multimodality refers to using multiple traditionally siloed problem types, meaning text, image, video, audio. So you could treat an audio problem as only processing audio signal. That is not multimodal, but you could use the audio signal at the same time as a video feed. Now you're talking about multimodality. In computer vision, multimodality is predominantly happening with images and text. And one of the biggest releases in this space, it's actually two years old now, was CLIP, contrastive language-image pre-training, which took 400 million image-text pairs and basically, instead of previously when you do classification, you basically map every single image to a single class, right? Like here's a bunch of images of chairs, here's a bunch of images of dogs. What CLIP did is, you can think about it like, the class for an image being the Instagram caption for the image. So it's not one single thing. And by training on understanding the corpora, you basically see which words, which concepts are associated with which pixels. And this opens up the aperture for the types of problems and generalizability of models. So what does this mean? This means that you can get to value more quickly from an existing trained model, or at least validate that what you want to tackle with computer vision, you can get there more quickly. It also opens up the, I mean, CLIP has been the bedrock of some of the generative image techniques that have come to bear, just as much as some of the LLMs. And increasingly we're going to see more and more of multimodality being a theme simply because at its core, you're including more context into what you're trying to understand about the world. I mean, in its most basic sense, you could ask yourself, if I have an image, can I know more about that image with just the pixels? Or if I have the image and the sound of when that image was captured, or I had someone describe what they see in that image when the image was captured, which one's going to be able to get you more signal? And so multimodality helps expand the ability for us to understand signal processing. >> Awesome. And can you just real quick, define CLIP for the folks that don't know what that means? >> Yeah. CLIP is a model architecture, it's an acronym for contrastive language-image pre-training, and like, you know, model architectures that have come before it, it captures the, almost like, models are kind of like brands. So I guess it's a brand of a model where you've done these 400 million image-text pairs to match up which visual concepts are associated with which text concepts. And there have been new releases of CLIP, just at bigger sizes, of bigger encodings, of longer strings of text, or larger image windows. But it's been a really exciting advancement that OpenAI released in January, 2021. >> All right, well great stuff. We got a couple minutes left. Just I want to get into more of a company-specific question around culture. All startups have, you know, some sort of cultural vibe. You know, Intel has Moore's Law, doubles every whatever, six months. What's your culture like at RoboFlow? I mean, if you had to describe that culture, obviously love the hacking story, you and your partner with the games going number one on Product Hunt next to Elon and Tesla and then hey, we should start a company two years later. That's kind of like a curious, inventing, building, hard charging, but laid back. That's my take. 
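(To make the CLIP idea above concrete, here is a minimal zero-shot image-text matching sketch. It assumes the Hugging Face transformers and Pillow packages and a local image file, and it is purely an illustration rather than anything Roboflow-specific.)

```python
# Score how well each caption matches an image, CLIP-style.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("board_game.jpg")  # assumed local image file
captions = ["a photo of a chess board", "a photo of a sudoku puzzle", "a photo of a dog"]

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=-1)  # image-to-text match probabilities
for caption, p in zip(captions, probs[0].tolist()):
    print(f"{p:.2f}  {caption}")
```

Because the model learned which words go with which pixels, the captions act like the "Instagram caption" classes Joseph describes, with no task-specific retraining needed.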
How would you describe the culture? >> I think that you're right. The culture that we have is one of shipping, making things. So every week each team shares what they did for our customers on a weekly basis. And we have such a strong emphasis on being better week over week that those sorts of things compound. So one big emphasis in our culture is getting things done, shipping, doing things for our customers. The second is we're an incredibly transparent place to work. For example, how we think about making decisions, where we're progressing against our goals, what problems are biggest and most important for the company is all open information for those that are inside the company to know and progress against. The third thing that I'd use to describe our culture is one that thrives with autonomy. So RoboFlow has a number of individuals who have founded companies before, some of which have sold their businesses for a hundred million plus upon exit. And the way that we've been able to attract talent like that is because the problems that we're tackling are so immense, yet individuals are able to charge at it with the way that they think is best. And this is what pairs well with transparency. If you have a strong sense of what the company's goals are, how we're progressing against it, and you have this ownership mentality of what can I do to change or drive progress against that given outcome, then you create a really healthy pairing of, okay cool, here's where the company's progressing. Here's where things are going really well, here's the places that we most need to improve and work on. And if you're inside that company as someone who has a propensity to be a self-starter and even a history of building entire functions or companies yourself, then you're going to be in a place where you can really thrive. You have the inputs of the things where we need to work on to progress the company's goals. And you have the background of someone that is just necessarily a fast moving and ambitious type of individual. So I think the best way to describe it is a transparent place with autonomy and an emphasis on getting things done. >> Getting shit done as they say. Getting stuff done. Great stuff. Hey, final question. Put a plug out there for the company. Who are you going to hire? What's your pipeline look like for people? What jobs are open? I'm sure you got hiring all around. Give a quick plug for the company and what you're looking for. >> I appreciate you asking. Basically you're either building the product or helping customers be successful with the product. So in the building product category, we have platform engineering roles, machine learning engineering roles, and we're solving some of the hardest and most impactful problems of bringing such a groundbreaking technology to the masses. And so it's a great place to be where you can kind of be your own user as an engineer. And then if you're enabling people to be successful with the products, I mean you're working in a place where there's already such a strong community around it and you can help shape, foster, cultivate, activate, and drive commercial success in that community. So those are roles that lend themselves to being those that build the product, or developer advocacy, those that are account executives that are enabling our customers to realize commercial success, and even hybrid roles like we call it field engineering, where you are a technical resource to drive success within customer accounts. 
And so all this is listed on roboflow.com/careers. And one thing that I actually kind of want to mention, John, that's kind of novel about working at RoboFlow. So there's been a lot of discussion around remote companies and there's been a lot of discussion around in-person companies and do you need to be in the office? And one thing that we've kind of recognized is you can actually chart a third way. You can create a third way which we call satellite, which basically means people can work from where they most like to work and there's clusters of people, regular onsites. And at RoboFlow everyone gets, for example, $2,500 a year that they can use to spend on visiting coworkers. And so what's sort of organically happened is team members have started to pool together these resources and rent out like, lavish Airbnbs for like a week and then everyone kind of like descends in and works together for a week and makes and creates things. And we call these lighthouses because you know, a lighthouse kind of brings ships into harbor and we have an emphasis on shipping. >> Yeah, quality people that are creative and doers and builders. You give 'em some cash and let the self-governing begin, you know? And like, creativity goes through the roof. It's a great story. I think that sums up the culture right there, Joseph. Thanks for sharing that and thanks for this great conversation. I really appreciate it and it's very inspiring. Thanks for coming on. >> Yeah, thanks for having me, John. >> Joseph Nelson, co-founder and CEO of RoboFlow. Hot company, great culture in the right place in a hot area, computer vision. This is going to explode in value. The edge is exploding. More use cases, more development, and developers are driving the change. Check out RoboFlow. This is theCUBE. I'm John Furrier, your host. Thanks for watching. (gentle music)
SUMMARY :
John Furrier talks with Joseph Nelson, co-founder and CEO of RoboFlow, about building developer tools that make the world programmable with computer vision. Nelson traces the company's origin from viral board game apps like Magic Sudoku to a platform used by over a hundred thousand developers, including half the Fortune 100, covers edge deployment without internet connectivity, multimodality and CLIP, and the developer-tools business model, and closes with RoboFlow's shipping-focused culture and the roles listed at roboflow.com/careers.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Brad | PERSON | 0.99+ |
Joseph | PERSON | 0.99+ |
Joseph Nelson | PERSON | 0.99+ |
January, 2021 | DATE | 0.99+ |
John Furrier | PERSON | 0.99+ |
Medtronic | ORGANIZATION | 0.99+ |
Walmart | ORGANIZATION | 0.99+ |
2019 | DATE | 0.99+ |
Uber | ORGANIZATION | 0.99+ |
Apple | ORGANIZATION | 0.99+ |
John | PERSON | 0.99+ |
400 million | QUANTITY | 0.99+ |
Evan Spiegel | PERSON | 0.99+ |
24 months | QUANTITY | 0.99+ |
2017 | DATE | 0.99+ |
RoboFlow | ORGANIZATION | 0.99+ |
15 minutes | QUANTITY | 0.99+ |
Rivian | ORGANIZATION | 0.99+ |
12 months | QUANTITY | 0.99+ |
20% | QUANTITY | 0.99+ |
Cardinal Health | ORGANIZATION | 0.99+ |
Palo Alto, California | LOCATION | 0.99+ |
Barcelona | LOCATION | 0.99+ |
Wimbledon | EVENT | 0.99+ |
roboflow.com/careers | OTHER | 0.99+ |
first | QUANTITY | 0.99+ |
second segment | QUANTITY | 0.99+ |
each team | QUANTITY | 0.99+ |
six months | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
both worlds | QUANTITY | 0.99+ |
2% | QUANTITY | 0.99+ |
two years later | DATE | 0.98+ |
Mobile World Congress | EVENT | 0.98+ |
Ubers | ORGANIZATION | 0.98+ |
third way | QUANTITY | 0.98+ |
one | QUANTITY | 0.98+ |
a week | QUANTITY | 0.98+ |
Magic Sudoku | TITLE | 0.98+ |
second | QUANTITY | 0.98+ |
Nvidia | ORGANIZATION | 0.98+ |
Sudoku | TITLE | 0.98+ |
MWC | EVENT | 0.97+ |
today | DATE | 0.97+ |
billion dollar | QUANTITY | 0.97+ |
one single thing | QUANTITY | 0.97+ |
over a hundred thousand developers | QUANTITY | 0.97+ |
four | QUANTITY | 0.97+ |
third | QUANTITY | 0.96+ |
Elon | ORGANIZATION | 0.96+ |
third thing | QUANTITY | 0.96+ |
Tesla | ORGANIZATION | 0.96+ |
Jetson | COMMERCIAL_ITEM | 0.96+ |
Elon | PERSON | 0.96+ |
RoboFlow | TITLE | 0.96+ |
ORGANIZATION | 0.95+ | |
Twilio | ORGANIZATION | 0.95+ |
twenties | QUANTITY | 0.95+ |
Product Hunt AR | TITLE | 0.95+ |
Moore | PERSON | 0.95+ |
both researchers | QUANTITY | 0.95+ |
one thing | QUANTITY | 0.94+ |
Monica Livingston, Intel | HPE Discover 2020
>> Narrator: From around the globe, it's theCUBE! Covering HPE Discover Virtual Experience, brought to you by HPE. >> Artificial Intelligence, Monica Livingston, hey Monica, welcome to theCUBE! >> Hi Lisa, thank you for having me. >> So, AI is a big topic, but let's just get an understanding, Intel's approach to artificial intelligence? >> Yeah, so at Intel, we look at AI as a workload and a tool that is becoming ubiquitous across all of our compute solutions. We have customers that are using AI in the Cloud, in the data center, at the Edge, so our goal is to infuse as much performance as we can for AI into our base platform and then where acceleration is needed we will have accelerator solutions for those particular areas. An example of where we are infusing AI performance into our base platform is the Intel Deep Learning Boost feature set which is in our second generation Intel Xeon Scalable Processors and this feature alone provides up to 30x performance improvement for Deep Learning inference on the CPU over the previous generation, and we are continuing infusing AI into our base platform with the third generation Intel Xeon Scalable Processors which are launching later this month. Intel will continue that leadership by including support for bfloat16. Bfloat16 is a new format that enables Deep Learning training with similar accuracy but essentially using less data, so it increases AI throughput. Another example is memory, so both inference and training require quite a bit of memory and with Intel Optane for system memory, customers are able to expand large pools of memory closer to the CPU, and where that's particularly relevant is in areas where data sets are large, like imaging, with lots of images and lots of high resolution images, like medical diagnostic or seismic imaging, we are able to perform some of these models without tiling, and tiling is where, if you are memory-constrained, you essentially have to take that picture and chop it up in little pieces and process each piece and then stitch it back together at the end, whereas that loses a lot of context for the AI model, so if you're able to process that entire picture, then you are getting a much better result and that is the benefit of having that memory accessible to the compute. So, when you are buying the latest and greatest HPE servers, you will have built-in AI performance with Intel Xeon Scalable and Optane for system memory. >> A couple things that you said that piqued my interest are the 30x improvement in performance, if you talk about that with respect to the Deep Learning Boost, 30x is a huge factor, and you also said that your solution from a memory perspective doesn't require tiling, and I heard context. Context is key, to have context in the data, to be able to understand and interpret and make inferences, so, talk to me about some of those big changes that you're releasing, what were some of the customer-compelling events or maybe industry opportunities that drove Intel to make such huge performance gains in second generation. 
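(As an aside on the bfloat16 capability Monica describes, a quick illustration from a developer's point of view. This is a generic sketch, not an Intel- or HPE-specific recipe: it assumes a recent PyTorch build whose CPU autocast supports bfloat16, and the toy model and batch size are made up.)

```python
# Run a small model's forward pass on the CPU in bfloat16 via autocast.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10)).eval()
batch = torch.randn(32, 512)

with torch.no_grad(), torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    logits = model(batch)  # matmuls run in bfloat16 where the hardware supports it

print(logits.dtype)  # typically torch.bfloat16 inside the autocast region
```

The same reduced-precision idea is what lets bfloat16-capable CPUs push more training and inference throughput without standing up a separate accelerator.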
>> Right, so second generation, these are the processors that are out now, so these are features that our customers are using today, third generation is coming out this month but for second generation, Deep Learning Boost, what's really important is the software optimization and the fact that we're able to use the hooks that we've built into the hardware but then use software to make sure that we are optimizing performance on those platforms and it's extremely relevant to talk about software in the AI space because AI solutions can get super expensive, you can easily pay 2 to 3x what you should be paying if you don't have optimized software because then what you do is you're just throwing more and more compute, more and more hardware at the problem, but it's not optimized and so what's really impactful is being able to run a vast number of AI applications on your base platform, that essentially means that you can run that in a mixed workload environment together with your other applications and you're not standing up separate infrastructure. Now, of course, there will be some applications that do need separate infrastructure that do need alliances and accelerators and for that, we will have a host of accelerators, we have FPGAs today for real time low latency inference, we have Movidius VPU for low-power vision applications at the Edge, but by and large, if you're looking at classical machine learning, if you're looking at analytics, Deep Learning inference, that can run on a base platform today and I think that's what's important in ensuring that more and more customers are able to run AI at scale, it's not just a matter of running a POC in a back lab, you do that on the infrastructure that you have available, not an issue, but when you are looking to scale, the cost is going to be significantly important and that's why it's important for us to make sure that we are building in as much performance as is feasible into the base platform and then offering software tools to allow customers to see that performance. >> Okay, so talking about the technology components, performance, memory, what's needed to scale on the technology side, I want to then kind of look at the business side, because we know a lot of customers in any industry undertake AI projects and they run into pitfalls where they're not able to even get off the ground, so converse to the technology side, what is it that you're seeing, what are the pitfalls that customers can avoid on the business side to get these AI projects designed and launched? >> Yeah, so on the business side, I mean you really have to start with a very solid business plan for why you're doing AI and it's even less about just the AI piece, but you have to have a very solid business plan for your solution as a whole. If you're doing AI just to do AI because you saw that it's a top trend for 2020 so you must do AI, that's likely going to not result in success. 
You have to make sure that you're understanding why you're doing AI, if you have a workload that could be easily solved, or a problem that could be easily solved with data analytics, use data analytics, AI should be used where appropriate, as a way to provide true benefit and I think if you can demonstrate that, you're a long way in getting your project off the ground, and then there's several other pitfalls like data, do you have enough data, is it close enough to your compute in order to be accessible and feasible, do you have the resources that are skilled in AI that can get your solution off the ground, do you have a plan for what to do after you've deployed your solution because these models need to be maintained on a regular basis, so some sort of maintenance program needs to be in place and then infrastructure, cost can be prohibitive a lot of times if you're not able to leverage a good amount of your base infrastructure and that's really where we spend a lot of time with customers in trying to understand what their model is trying to do and can they use their base infrastructure, can they reuse as much of what they have, what is their current utilization, do they maybe have cycles in off times if their utilization is diurnal and during the night they have lower utilization, can you train your models at night rather than putting up a whole new set of infrastructure that likely will not be approved by management, let's be honest. >> And I imagine that that is all part of the joint marketing strategy that HPE and Intel have together, to have such conversations like that with customers, to help really build a robust business plan. >> Yeah, so HPE's fantastic at consulting with customers from beginning to end, looking at solutions and they've got a whole suite of storage solutions as well which are crucial for AI and Intel works together with HPE to create reference architectures for AI and then we do joint training as well. But yes, talking to your HPE rep and leveraging your ecosystem I think is incredibly important because the ecosystem is so diverse and there are a lot of resources available from ISVs to hardware providers to consulting companies that are able to support with AI. >> So Monica, the ecosystem is incredibly important, but how do you work with customers, HPE and Intel together, to help the customer, whether it's in biotech or manufacturing, to build an ecosystem or partnership that can help the customer really define the business plan of what they want to do, to get that cross-functional collaboration and buy-in and support, and launch a successful AI project? >> Yeah it really does take a village, but both Intel and HPE have an extensive partner network, these are partners that we work with to optimize their solution, in HPE's case, they validate their solutions on HPE hardware to ensure that it runs smoothly and for our customers, we have the ability to match-make with partners in the ecosystem and generally, the way it works, is in specific segments, we have a list of partners that we can draw from and we introduce those to the customer, the customer generally has a couple of meetings with them to see which one is a better fit, and then they go from there, but essentially, it is just making sure that solutions are validated and optimized and then giving our customers a choice of which partners are the best fit for them. 
>> Last question for you, Monica, we are in the middle of COVID-19 and we see things on the news every day about contact tracing, for example, social distancing, and a lot of the things that are talked about on the news are human contact tracers, people being involved in manual processes, what are some of the opportunities that you see for AI to really help drive some of these, because time is of the essence, yet, there's the ethics issue with AI, right? >> Yes, yes, and the ethics issue is not something that AI can solve on its own, unfortunately, the ethics conversation is something we need to have more broadly as a society and from a privacy perspective, how are we going to be mindful and respectful while also being able to use some of the data to protect society especially in a situation like this, so, contact tracing is extremely important, this is something that in areas that have a wide system of cameras installed, that's something that is doable from an algorithmic perspective and there's several partners of ours that are looking at that, and actually, the technology itself, I don't think is as insurmountable as the logistical aspect and the privacy and the ethical aspect and regulation around it, making sure that it's not used for the wrong purposes, but certainly with COVID, there is a new aspect of AI use cases, and contact tracing is obviously one of them, the others that we are seeing is essentially, companies are adapting a lot of their existing AI solutions or solutions that use AI to accommodate or to account for COVID, like, companies that have observations done and so if they were doing facial recognition either in metro stations or stadiums or banks, they now are adding features to their systems to detect social distancing, for example, or detect if somebody is wearing a mask. The technology, again, itself is not that difficult, but in the implementation and the use and the governance around it, I think, is a lot more complex, and then, I would be remiss not to mention remote learning which is huge now, I think all of our children are learning remotely at this point and being able to use AI in curriculums and being able to really pinpoint where a child is having a hard time understanding a concept and then giving them more support in that area is definitely something that our partners are looking at and it's something that (webcam scrambles) with my children and the tools that they're using and so instead of reading to their teacher for their reading test, they're reading to their computer and the computer's able to pinpoint some very specific issues that maybe a teacher would not see as easily and then of course, the teacher has the ability to go back with you and listen and make sure that there weren't any issues with dialects or anything like that, so it's really just an interesting reinforcement of the teacher/student learning with the added algorithmic impact as well. >> Right, a lot of opportunity is going to come out of COVID, some maybe more accelerated than others because as you mentioned, it's very complex. Monica, I wish we had more time, this has been a really fascinating conversation about what Intel and HPE are doing with respect to AI. Glad to have you back 'cause this topic is just too big, but we thank you so much for your time. >> Thank you. >> For my guest Monica Livingston, I'm Lisa Martin, you're watching theCUBE's coverage of HPE Discover 2020, thanks for watching.
SUMMARY :
Lisa Martin talks with Monica Livingston of Intel about Intel's approach to AI on HPE servers: Deep Learning Boost and bfloat16 in Xeon Scalable processors, Optane system memory for large imaging workloads, and accelerators such as FPGAs and Movidius VPUs where needed. Livingston also covers the business-side pitfalls of AI projects, how Intel and HPE match customers with their partner ecosystem, and emerging COVID-19 use cases from contact tracing and mask detection to remote learning.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Lisa Martin | PERSON | 0.99+ |
Monica Livingston | PERSON | 0.99+ |
Monica | PERSON | 0.99+ |
Lisa | PERSON | 0.99+ |
2020 | DATE | 0.99+ |
2 | QUANTITY | 0.99+ |
COVID-19 | OTHER | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
HPE | ORGANIZATION | 0.99+ |
each piece | QUANTITY | 0.99+ |
third generation | QUANTITY | 0.99+ |
second generation | QUANTITY | 0.99+ |
30x | QUANTITY | 0.98+ |
3x | QUANTITY | 0.98+ |
Octane | COMMERCIAL_ITEM | 0.97+ |
HPE Discover 2020 | TITLE | 0.97+ |
today | DATE | 0.96+ |
bfloat16 | COMMERCIAL_ITEM | 0.95+ |
both | QUANTITY | 0.95+ |
third generation | QUANTITY | 0.95+ |
Bfloat16 | COMMERCIAL_ITEM | 0.93+ |
this month | DATE | 0.93+ |
Xeon Scalable | COMMERCIAL_ITEM | 0.92+ |
later this month | DATE | 0.92+ |
Xeon | COMMERCIAL_ITEM | 0.91+ |
theCUBE | ORGANIZATION | 0.86+ |
one of them | QUANTITY | 0.83+ |
several partners | QUANTITY | 0.8+ |
up to 30x | QUANTITY | 0.76+ |
lot | QUANTITY | 0.76+ |
customers | QUANTITY | 0.75+ |
time | QUANTITY | 0.69+ |
couple things | QUANTITY | 0.63+ |
COVID | OTHER | 0.62+ |
Movidius | ORGANIZATION | 0.6+ |
Processors | COMMERCIAL_ITEM | 0.58+ |
Octane | ORGANIZATION | 0.56+ |
Learning Boost | OTHER | 0.56+ |
day | QUANTITY | 0.55+ |
Deep | COMMERCIAL_ITEM | 0.55+ |
images | QUANTITY | 0.53+ |
couple of | QUANTITY | 0.51+ |
Last | QUANTITY | 0.5+ |
Scalable | OTHER | 0.45+ |
COVID | TITLE | 0.43+ |
Deep Learning Boost | COMMERCIAL_ITEM | 0.39+ |
VPU | TITLE | 0.34+ |
Jonathan Ballon, Intel | AWS re:Invent 2018
>> Live from Las Vegas, it's theCUBE, covering AWS re:Invent 2018. Brought to you by Amazon Web Services, Intel, and their Ecosystem partners. >> Oh welcome back, to theCUBE. Continuing coverage here from AWS re:Invent, as we start to wind down our coverage here on the second day. We'll be here tomorrow as well, live on theCUBE, bringing you interviews from Hall D at the Sands Expo. Along with Justin Warren, I'm John Walls, and we're joined by Jonathan Ballon, who's the Vice President of the Internet of Things at Intel. Jonathan, thank you for being with us today. Good to see you, >> Thanks for having me guys. >> All right, interesting announcement today, and last year it was all about DeepLens. This year it's about DeepRacer. Tell us about that. >> What we're really trying to do is make AI accessible to developers and democratize various AI tools. Last year it was about computer vision. The DeepLens camera was a way for developers to very inexpensively get a hold of a camera, the first camera that was a deep-learning enabled, cloud connected camera, so that they could start experimenting and see what they could do with that type of device. This year we took the camera and we put it in a car, and we thought what could they do if we add mobility to the equation, and specifically, wanted to introduce a relatively obscure form of AI called reinforcement learning. Historically this has been an area of AI that hasn't really been accessible to most developers, because they haven't had the compute resources at their disposal, or the scale to do it. And so now, what we've done is we've built a car, and a set of tools that help the car run. >> And it's a little miniature car, right? I mean it's a scale. >> It's 1/18th scale, it's an RC car. It's four-wheel drive, four-wheel steering. It's got GPS, it's got two batteries. One that runs the car itself, one that runs the compute platform and the camera. It's got expansion capabilities. We've got plans for next year of how we can turbo-charge the car. >> I love it. >> Right now it's baby steps, so to speak, and basically giving the developer the chance to write a reinforcement learning model, an algorithm that helps them to determine what is the optimum way that this car can move around a track, but you're not telling the car what the optimum way is, you're letting the car figure it out on its own. And that's really the key to reinforcement learning, is you don't need a large dataset to begin with that's pre-labeled. You're actually letting, in this case, a device figure it out for itself, and this becomes very powerful as a tool, when you think about it being applied to various industries, or various use-cases, where we don't know the answer today, but we can allow vast amounts of computing resources to run a reinforcement model over and over, perhaps millions of times, until they find the optimum solution. >> So how do you, I mean that's a lot of input right? That's a lot, that's a crazy number of variables. So, how do you do that? So, how do you, like in this case, provide a car with all the multiple variables that will come into play. How fast it goes, and which direction it goes, and all that, and on different axes and all those things, to make its own determinations, and how will that then translate to a real specific case in the workplace? >> Well, I mean the obvious parallel is of course autonomous driving. 
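(To give a sense of what "writing a reinforcement learning model" for a car like this actually looks like, here is a minimal sketch of a DeepRacer-style reward function. The parameter names follow AWS's documented convention for DeepRacer reward functions, but treat the exact keys and thresholds as illustrative assumptions.)

```python
# A simple reward function: stay on the track and hug the center line.
# The simulator calls this with a dict describing the car's state after each step.
def reward_function(params):
    all_wheels_on_track = params["all_wheels_on_track"]
    distance_from_center = params["distance_from_center"]
    track_width = params["track_width"]

    if not all_wheels_on_track:
        return 1e-3  # near-zero reward for leaving the track

    # Reward shrinks linearly as the car drifts away from the center line.
    half_width = track_width / 2.0
    reward = max(1e-3, 1.0 - (distance_from_center / half_width))
    return float(reward)
```

The training loop then tries huge numbers of throttle and steering variations, keeping whatever behavior earns the most cumulative reward, which is the "car figures it out on its own" idea described above.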
AWS had Formula One on stage today during Andy Jassy's keynote, that's also an Intel customer, and what Formula One does is they have the fastest cars in the world, and they have over 120 sensors on that car that are bringing in over a million pieces of data per second. Being able to process that vast amount of data that quickly, which includes a variety of data, like it's not just, it's also audio data, it's visual data, and being able to use that to inform decisions in close to real time, requires very powerful compute resources, and those resources exist both in the cloud as well as close to the source of the data itself at the edge, in the physical environment. >> So, tell us a bit about the software that's involved here, 'cause people think of Intel, you know that some people don't know about the software heritage that Intel has. It's not just about, the Intel inside isn't just the hardware chips that's there, there's a lot of software that goes into this. So, what's the Intel angle here on the software that powers this kind of distributed learning? >> Absolutely, software is a very important part of any AI architecture, and for us we've made a tremendous amount of investment. It's almost perhaps equal investment in software as we do in hardware. In the case of what we announced today with DeepRacer and AWS, there's some toolkits that allow developers to better harness the compute resources on the car itself. Two things specifically, one is we have a tool called RL Coach, or Reinforcement Learning Coach, that is integrated into SageMaker, AWS' machine learning toolkit, that allows them to get better performance in the cloud from the data that's coming off their model and into their cloud. And then we also have a toolkit called OpenVINO. It's not about drinking wine. >> Oh darn. >> Alright. >> Open means it's an open source contribution that we made to the industry. Vino, V-I-N-O, is Visual Inference and Neural Network Optimization, and this is a powerful tool, because so much of AI is about harnessing compute resources efficiently, and as more and more of the data that we bring into our compute environments is actually taking place in the physical world, it's really important to be able to do that in a cost-effective and power-efficient way. OpenVINO allows developers to actually isolate individual cores or an integrated GPU on a CPU without knowing anything about hardware architecture, and it allows them then to apply different applications, or different algorithms, or inference workloads very efficiently onto that compute architecture, but it's abstracted away from any knowledge of that. So, it's really designed for an application developer, who maybe is working with a data scientist that's built a neural network in a framework like TensorFlow, or ONNX, or PyTorch, any tool that they're already comfortable with, abstract away from the silicon and optimize their model onto this hardware platform, so it performs at orders of magnitude better performance than what you would get from a more traditional GPU approach. >> Yeah, and that kind of decision making about understanding chip architectures to be able to optimize how that works, that's some deep magic really. 
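(A minimal sketch of the workflow Jonathan describes: compile a converted model for a target device without touching hardware details. The API shown matches recent OpenVINO Python releases; the model path, input shape, and device name are assumptions, and "CPU" could just as easily be a GPU or Myriad VPU target.)

```python
# Load an IR model and run inference on a chosen device with OpenVINO.
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")                      # IR produced by the model converter
compiled = core.compile_model(model, device_name="CPU")   # device choice is the only hardware knob

input_tensor = np.random.rand(1, 3, 224, 224).astype(np.float32)  # assumed NCHW input
result = compiled([input_tensor])                         # maps output ports to arrays
print({out.get_any_name(): val.shape for out, val in result.items()})
```

The point is the one made above: the application developer picks a device string, and the toolkit handles how cores, integrated GPUs, or VPUs actually execute the network.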
The amount of understanding that you would need to have to do that as a human is enormous, but as a developer, I don't know anything about chip architectures, so it sounds like — and it's a thing that we've been hearing over the last couple of days — these tools allow developers to have essentially superpowers, so you become an augmented intelligence yourself. Rather than just giving everything to an artificial intelligence, these tools actually augment the human intelligence and allow you to do things that you wouldn't otherwise be able to do. >> And that's I think the key to getting mass market adoption of some of these AI implementations. So, for the last four or five years, since ImageNet solved the image recognition problem — and now we have greater accuracy from computer models than we do from our own human eyes — AI was really limited to academia, or large IT tech companies, or proof-of-concepts. It didn't really scale into these production environments, but what we've seen over the last couple of years is really a democratization of AI by companies like AWS and Intel that are making tools available to developers, so they don't need to know how to code in Python to optimize a compute module, or they don't need to, in many cases, understand the fundamental underlying architectures. They can focus on whatever business problem they're trying to solve, or whatever AI use-case it is that they're working on. >> I know you talked about DeepLens last year, and now we've got DeepRacer this year, and you've got the contest going on throughout this coming year with DeepRacer, and we're going to have a big race at AWS re:Invent 2019. So what's next? I mean, or what are you thinking about conceptually to, I guess, build on what you've already started there? >> Well, I can't reveal what next year's, >> Well that I understand >> Project will be. >> But generally speaking. >> But what I can tell you, what I can tell you is what's available today in these DeepRacer cars is a level playing field. Everyone's getting the same car and they have essentially the same tool sets, but I've got a couple of pro-tips for your viewers if they want to win some of these AWS Summits that are going to be around the world in 2019. Two pro-tips: one is they can leverage the OpenVINO toolkit to get much higher inference performance from what's already on that car. So, I encourage them to work with OpenVINO. It's integrated into SageMaker, so that they have easy access to it if they're an AWS developer. But also we're going to allow an expansion of, almost an accelerator of the car itself, by being able to plug in an Intel Neural Compute Stick. We just released the second version of this stick. It's a USB form factor. It's got a Movidius Myriad X vision processing unit inside. This year's version is eight times more powerful than last year's version, and when they plug it into the car, all of that inference workload, all of those images and information that's coming off those sensors will be put onto the VPU, allowing all the CPU and GPU resources to be used for other activities. It's going to allow that car to go at turbo speed. >> To really cook. >> Yeah. (laughing) >> Alright, so now you know, you have no excuse, right? I mean Jonathan has shared the secret sauce, although I still think when you said OpenVINO you got Justin really excited. >> It is vino time. >> It is five o'clock actually. >> Alright, thank you for being with us. >> Thanks for having me guys. >> And good luck with DeepRacer for the coming year.
>> Thank you. >> It looks like a really, really fun project. We're back with more, here at AWS re:Invent on theCUBE, live in Las Vegas. (rhythmic digital music)
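Before the summary, a brief illustrative aside on the reinforcement learning setup described in this interview — the developer writes a reward signal and the car discovers its own driving line. A rough sketch of what such a reward function can look like follows; the parameter names are assumptions based on commonly published DeepRacer examples, so treat the whole thing as a sketch rather than official sample code.

```python
# A sketch of a DeepRacer-style reward function, for illustration only.
# The environment calls this with a dict of state variables and the
# reinforcement learning algorithm maximizes the returned value over many runs.
def reward_function(params):
    # Parameter names are assumed from commonly published DeepRacer examples.
    all_wheels_on_track = params["all_wheels_on_track"]
    distance_from_center = params["distance_from_center"]
    track_width = params["track_width"]
    speed = params["speed"]

    if not all_wheels_on_track:
        return 1e-3  # near-zero reward for leaving the track

    # Reward staying near the center line, scaled to half the track width...
    reward = max(1.0 - distance_from_center / (track_width / 2.0), 1e-3)

    # ...plus a small bonus for carrying speed.
    reward += 0.1 * speed
    return float(reward)
```

Nothing in this function tells the car how to steer; over many simulated episodes the learning algorithm settles on whatever driving behavior maximizes the signal, which is exactly the "let the car figure it out on its own" property Ballon emphasizes.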
SUMMARY :
Jonathan Ballon, Vice President of the Internet of Things at Intel, joins John Walls and Justin Warren on theCUBE at AWS re:Invent 2018 to talk about DeepRacer, a 1/18th-scale autonomous car that follows last year's DeepLens camera as a way to democratize AI — this time introducing developers to reinforcement learning, where the car discovers the optimal way around a track on its own rather than being told. Ballon explains the Intel software behind it: the RL Coach library integrated with SageMaker, and the OpenVINO toolkit, which lets application developers optimize models built in TensorFlow, ONNX, or PyTorch onto Intel hardware without knowing the silicon underneath. He closes with two pro tips for the 2019 DeepRacer league: use OpenVINO for higher inference performance, and plug in the second-generation Intel Neural Compute Stick with its Myriad X VPU so inference moves off the CPU and GPU and the car can run at turbo speed.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Justin Warren | PERSON | 0.99+ |
Amazon Web Services | ORGANIZATION | 0.99+ |
Jonathan Ballon | PERSON | 0.99+ |
Jonathan | PERSON | 0.99+ |
John Walls | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
last year | DATE | 0.99+ |
AWS' | ORGANIZATION | 0.99+ |
Last year | DATE | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
Andy Jassy | PERSON | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
2019 | DATE | 0.99+ |
one | QUANTITY | 0.99+ |
Python | TITLE | 0.99+ |
next year | DATE | 0.99+ |
Justin | PERSON | 0.99+ |
two batteries | QUANTITY | 0.99+ |
first camera | QUANTITY | 0.99+ |
This year | DATE | 0.99+ |
second version | QUANTITY | 0.99+ |
tomorrow | DATE | 0.99+ |
eight times | QUANTITY | 0.99+ |
five o'clock | DATE | 0.99+ |
Two things | QUANTITY | 0.99+ |
this year | DATE | 0.98+ |
Two pro-tips | QUANTITY | 0.98+ |
over a million pieces | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
over 120 sensors | QUANTITY | 0.98+ |
OpenVINO | TITLE | 0.98+ |
One | QUANTITY | 0.98+ |
four-wheel | QUANTITY | 0.97+ |
Sands Expo | EVENT | 0.97+ |
DeepRacer | ORGANIZATION | 0.97+ |
SageMaker | TITLE | 0.96+ |
Myriad X Vision | COMMERCIAL_ITEM | 0.95+ |
DeepLens | COMMERCIAL_ITEM | 0.95+ |
V-I-N-O | TITLE | 0.94+ |
second day | QUANTITY | 0.94+ |
TensorFlow | TITLE | 0.94+ |
both | QUANTITY | 0.94+ |
millions of times | QUANTITY | 0.92+ |
Pytorch | TITLE | 0.92+ |
Onyx | TITLE | 0.91+ |
Neural Compute Stick | COMMERCIAL_ITEM | 0.91+ |
RL Coach | TITLE | 0.91+ |
Movidius | ORGANIZATION | 0.89+ |
Invent 2018 | EVENT | 0.86+ |
coming year | DATE | 0.86+ |
Reinforcement Learning Coach | TITLE | 0.85+ |
this coming year | DATE | 0.82+ |
ImageNet | ORGANIZATION | 0.82+ |
theCUBE | ORGANIZATION | 0.82+ |
five years | QUANTITY | 0.81+ |
re:Invent 2019 | EVENT | 0.8+ |
Vino | TITLE | 0.78+ |
last couple of days | DATE | 0.77+ |
Formula One | TITLE | 0.75+ |
AWS re:Invent 2018 | EVENT | 0.72+ |
Hall D | LOCATION | 0.71+ |
couple of years | QUANTITY | 0.71+ |
four | QUANTITY | 0.71+ |
data per second | QUANTITY | 0.69+ |
1/18th scale | QUANTITY | 0.67+ |
DeepRacer | COMMERCIAL_ITEM | 0.67+ |
Formula | ORGANIZATION | 0.67+ |
DeepRacer | TITLE | 0.65+ |
Bill Jenkins, Intel | Super Computing 2017
>> Narrator: From Denver, Colorado, it's theCUBE. Covering Super Computing 17. Brought to you by Intel. (techno music) Hey, welcome back, everybody. Jeff Frick here with theCUBE. We're in Denver, Colorado at the Super Computing Conference 2017. About 12 thousand people, talking about the outer edges of computing. It's pretty amazing. The keynote was huge. The Square Kilometre Array, a new vocabulary word I learned today. It's pretty exciting times, and we're excited to have our next guest. He's Bill Jenkins. He's a Product Line Manager for AI on FPGAs at Intel. Bill, welcome. Thank you very much for having me. Nice to meet you, and nice to talk to you today. So you're right in the middle of this machine-learning AI storm, which we keep hearing more and more about. Kind of the next generation of big data, if you will. That's right. It's the most dynamic industry I've seen since the telecom industry back in the 90s. It's evolving every day, every month. Intel's been making some announcements. Using this combination of software programming and FPGAs on the acceleration stack to get more performance out of the data center. Did I get that right? Sure, yeah, yeah. Pretty exciting. The use of both hardware, as well as software on top of it, to open up the solution stack, open up the ecosystem. Which of those things are you working on specifically? I really build first the enabling technology that brings the FPGA into that Intel ecosystem, where Intel is trying to provide that solution from top to bottom to deliver AI products. >> Jeff: Right. Into that market. FPGAs are a key piece of that because we provide a different way to accelerate those machine-learning and AI workloads, where we can be an offload engine to a CPU. We can be inline analytics to offload the system, and get higher performance that way. We tie into that overall Intel ecosystem of tools and products. Right. So that's a pretty interesting piece because real-time streaming data is all the rage now, right? Not batch. You want to get it now. So how do you get it in? How do you get it written to the database? How do you get it into the micro-processor? That's a really, really important piece. That's different than even two years ago. You didn't really hear much about real-time. I think it's, like I said, it's evolving quite a bit. Now, a lot of people deal with training. It's the science behind it. The data scientists work to figure out what topologies they want to deploy and how they want to deploy 'em. But now, people are building products around it. >> Jeff: Right. And once they start deploying these technologies into products, they realize that they don't want to compensate for limitations in hardware. They want to work around them. A lot of this evolution that we're building is to try to find ways to more efficiently do that compute. What we call inferencing, the actual deployed machine-learning scoring, if you will. >> Jeff: Right. In a product, it's all about how quickly can I get the data out. It's not about waiting two seconds to start the processing. You know, in an autonomously driven car where someone's crossing the road, I'm not waiting two seconds to figure out it's a person. Right, right. I need it right away. So I need to be able to do that with video feeds, right off a disk drive, from the Ethernet data coming in. I want to do that directly in line, so that my processor can do what it's good at, and we offload that processor to get better system performance. Right.
And then on the machine-learning specifically, 'cause that is all the rage. And it is learning. So there is a real-time aspect to it. You talked about autonomous vehicles. But there's also continuous learning over time, that's not necessarily dependent on learning immediately. Right. But continuous improvement over time. What are some of the unique challenges in machine-learning? And what are some of the ways that you guys are trying to address those? Once you've trained the network, people always have to go back and retrain. They say okay, I've got good accuracy, but I want better performance. Then they start lowering the precision, and they say well, today we're at 32-bit, maybe 16-bit. Then they start looking into eight. But the problem is, their accuracy drops. So they retrain that network at eight bits, that topology, to get the performance benefit, but with the higher accuracy. The flexibility of the FPGA actually allows people to take that network at 32-bit, with the 32-bit trained weights, but deploy it in lower precision. Because the hardware's so flexible, we can abstract that away and do what we call 11-bit floating point, or even 8-bit floating point. Even here today at the show, we've got a binary and ternary demo, showcasing the flexibility that the FPGA can provide today with that building-block piece of hardware that the FPGA can be. And really provide not only the topologies that people are trying to build today, but tomorrow. >> Jeff: Right. Future-proofing their hardware. But then the precisions that they may want to do. So that they don't have to retrain. They can get less than a 1% accuracy loss, but they can lower that precision to get all the performance benefits without the data scientist having to come up with a new architecture. Right. But it's interesting 'cause there's trade-offs, right? >> Bill: Sure. There's no optimum solution. It's optimum relative to what you're trying to optimize for. >> Bill: Right. So really, the ability to change, the ability to continue to work on those learning algorithms, to be able to change your priority, is pretty key. Yeah, a lot of times today, you want this. So this has been the mantra of the FPGA for 30 plus years. You deploy it today, and it works fine. Maybe you build an ASIC out of it. But what you want tomorrow is going to be different. So maybe, if it's not changing so rapidly, you build the ASIC because there's runway to that. But if there isn't, you may just say, I have the FPGA, I can just reprogram it to do what's the next architecture, the next methodology. Right. So it gives you that future proofing. That capability to sustain different topologies. Different architectures, different precisions. To kind of keep people going with the same piece of hardware. Without having to say, spin up a new ASIC every year. >> Jeff: Right, right. Which, even then, it's so dynamic it's probably changing faster than every year, the way things are going today. So the other thing you mentioned is topography, and it's not the same topography you mentioned, but this whole idea of edge. Sure. So moving more and more compute, and store, and smarts to the edge. 'Cause there's just not going to be time, you mentioned autonomous vehicles, a lot of applications to get everything back up into the cloud. Back into the data center. You guys are pushing this technology, not only in the data center, but progressively closer and closer to the edge. Absolutely. The data center has a need. It's always going to be there, but they're getting big.
The amount of data that we're trying to process every day is growing. I always say that the telecom industry started the Information Age. Well, the Information Age has done a great job of collecting a lot of data. We have to process that. If you think about where, maybe I'll allude back to autonomous vehicles. You're talking about thousands of gigabytes, per day, of data generated. Smart factories. Exabytes of data generated a day. What are you going to do with all that? It has to be processed. We need that compute in the data center. But we have to start pushing it out into the edge, where I start thinking, well even a show like this, I want security. So, I want to do real-time weapons detection, right? Security prevention. I want to do smart city applications. Just monitoring how traffic moves through a mall, so that I can control lighting and heating. All of these things at the edge, in the camera, that's deployed on the street. In the camera that's deployed in a mall. All of that, we want to make those smarter, so that we can do more compute. To offload the amount of data that needs to be sent back to the data center. >> Jeff: Right. As much as possible. Relevant data gets sent back. No shortage of demand for compute store networking, is there? No, no. It's really a heterogeneous world, right? We need all the different compute. We need all the different aspects of transmission of the data with 5G. We need disk space to store it. >> Jeff: Right. We need cooling to cool it. It's really becoming a heterogeneous world. All right, well, I'm going to give you the last word. I can't believe we're in November of 2017. Yeah. Which is bananas. What are you working on for 2018? What are some of your priorities? If we talk a year from now, what are we going to be talking about? Intel's acquired a lot of companies over the past couple years now on AI. You're seeing a lot of merging of the FPGA into that ecosystem. We've got the Nervana. We've got Movidius. We've got Mobileye acquisitions. Saffron Technologies. All of these things, when the FPGA is kind of a key piece of that because it gives you that flexibility of the hardware, to extend those pieces. You're going to see a lot more stuff in the cloud. A lot more stuff with partners next year. And really enabling that edge to data center compute, with things like binary neural networks, ternary neural networks. All the different next generation of topologies to kind of keep that leading edge flexibility that the FPGA can provide for people's products tomorrow. >> Jeff: Exciting times. Yeah, great. All right, Bill Jenkins. There's a lot going on in computing. If you're not getting your computer science degree, kids, think about it again. He's Bill Jenkins. I'm Jeff Frick. You're watching theCUBE from Super Computing 2017. Thanks for watching. Thank you. (techno music)
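A small NumPy sketch of the precision trade-off Jenkins describes — take weights trained at 32-bit, deploy them at a lower precision without retraining, and check how much numerical accuracy is given up. This is a toy illustration of post-training quantization under invented data and layer sizes, not Intel's FPGA tool flow.

```python
# Toy illustration of deploying 32-bit trained weights at lower precision.
# It mimics the requantize-without-retraining idea from the interview; it is
# not Intel's FPGA tool flow, and the layer size and data are invented.
import numpy as np

rng = np.random.default_rng(0)
weights_fp32 = rng.normal(0.0, 0.05, size=(256, 128)).astype(np.float32)
activations = rng.normal(0.0, 1.0, size=(128,)).astype(np.float32)

def quantize_int8(w):
    """Symmetric linear quantization of a weight tensor to int8."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

q_weights, scale = quantize_int8(weights_fp32)

# Compare a single layer's output at full precision vs. dequantized int8.
out_fp32 = weights_fp32 @ activations
out_int8 = (q_weights.astype(np.float32) * scale) @ activations

rel_error = np.linalg.norm(out_fp32 - out_int8) / np.linalg.norm(out_fp32)
print(f"relative output error at int8: {rel_error:.4%}")
```

Errors introduced this way are often small enough that, as Jenkins notes for the FPGA flow, end-to-end accuracy drops by less than a percent while the lower-precision arithmetic runs substantially faster.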
SUMMARY :
Bill Jenkins, Product Line Manager for AI on FPGAs at Intel, talks with Jeff Frick at the Super Computing Conference 2017 in Denver about FPGAs as offload engines and inline analytics for machine learning inference. He explains how the FPGA's flexibility lets developers deploy networks trained at 32-bit in much lower precisions — 16-bit, 8-bit floating point, even binary and ternary — with less than 1% accuracy loss and without retraining, future-proofing hardware as topologies keep changing. He also describes the push of compute toward the edge, from autonomous vehicles generating thousands of gigabytes of data per day to smart factories and smart cities, and previews 2018, when Intel's AI acquisitions — Nervana, Movidius, Mobileye, Saffron — come together with the FPGA as a flexible building block from edge to data center.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Jeff | PERSON | 0.99+ |
Jeff Frick | PERSON | 0.99+ |
Bill Jenkins | PERSON | 0.99+ |
two seconds | QUANTITY | 0.99+ |
2018 | DATE | 0.99+ |
November of 2017 | DATE | 0.99+ |
8-bit | QUANTITY | 0.99+ |
16-bit | QUANTITY | 0.99+ |
32-bit | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
next year | DATE | 0.99+ |
Bill | PERSON | 0.99+ |
30 plus years | QUANTITY | 0.99+ |
11-bit | QUANTITY | 0.99+ |
tomorrow | DATE | 0.99+ |
Denver, Colorado | LOCATION | 0.99+ |
Intel | ORGANIZATION | 0.98+ |
eight | QUANTITY | 0.98+ |
Movidius | ORGANIZATION | 0.98+ |
Super Computing Conference 2017 | EVENT | 0.98+ |
a day | QUANTITY | 0.96+ |
Saffron Technologies | ORGANIZATION | 0.96+ |
thousands of gigabytes | QUANTITY | 0.95+ |
Mobileye | ORGANIZATION | 0.95+ |
About 12 thousand people | QUANTITY | 0.95+ |
two years ago | DATE | 0.95+ |
90s | DATE | 0.94+ |
less than a 1% | QUANTITY | 0.94+ |
Nervana | ORGANIZATION | 0.94+ |
FPGA | ORGANIZATION | 0.9+ |
both hardware | QUANTITY | 0.89+ |
first | QUANTITY | 0.84+ |
Exabytes of data | QUANTITY | 0.76+ |
Super Computing 2017 | EVENT | 0.75+ |
past couple years | DATE | 0.73+ |
every year | QUANTITY | 0.69+ |
year | QUANTITY | 0.69+ |
per day | QUANTITY | 0.6+ |
5G | QUANTITY | 0.58+ |
Super Computing 17 | EVENT | 0.55+ |
theCUBE | ORGANIZATION | 0.52+ |
FPGA | TITLE | 0.42+ |
Patrick Moorhead, Moor Insights & Strategy | Samsung Developer Conference 2017
>> Narrator: Live from San Francisco, it's theCUBE covering Samsung Developer Conference 2017, brought to you by Samsung. >> Hello, everyone. Welcome back to theCUBE's live coverage, exclusive coverage of Samsung Developer Conference, SDC 2017. I'm John Furrier, the co-founder of SiliconANGLE Media. Next guest is Patrick Moorhead, who is the president and principal analyst at Moor Insights and Strategy, friend of theCUBE. We see him everywhere we go. He's quoted in the Wall Street Journal, New York Times, all the top publications, and today, he was just on Power Lunch on CNBC. Here for our Power Cube segment, welcome to theCUBE. Good to see you again. >> Hey, thanks for being here, and I appreciate you putting up with me heckling you from outside of theCUBE. >> Always great to have you on. Hard hitting, you're one of the best analysts in the business. We know you work hard, we see you at all the events that we go to. I got to get your take on Samsung. Obviously you run in parallel at some point with Amazon, obviously winning in the cloud. Samsung is downplaying their cloud, but talking about SmartThings. I get that, the cloud is kind of fragmented, they're trying to hide the ball there, I get that. But they talk about IoT, and you can't talk about IoT without talking about cloud, so what's your analysis of Samsung? >> Yeah so first off, Samsung is a collection of really, really successful stovepiped companies, right? You have displays, you have semiconductors, you have mobile phones, you have all these different areas, and they say a lot of times your strength is sometimes your weakness, and the divisions just don't talk a whole lot. But what they did, and this is the first time I've seen this in a long time, is they got on the same page and said you know, we have to work together because IoT and connected and intelligent connectedness can't be done in stovepipes, we can't all go do our thing. So they're agreeing on standards, they're doing some really good stuff. >> And obviously we know the cloud game, now go back to the enterprises, more consumer, backing in from the edge, obviously the edge being devices and other things, I get that. But now the horizontally scalable nature of the cloud is the holy grail. We've seen Amazon's success continue to boom; they do more compute than any other cloud out there, I think combined. Maybe outside Google with their internal cloud. That horizontal resource pool, serverless as an example trend, IoT — you got to have it, the stovepipes got to be decimated. However, you need specialism at the application level. >> That's exactly right, and a smartphone will act a little bit differently from a camera, which would be different from a refrigerator, as we saw, right? Samsung wants the new meeting area to be, well not the new meeting area, we all meet in the kitchen, but the connected meeting area. So they all act differently, so even though they're different devices, they have to connect into that horizontal cloud to make it efficient enough and effective enough for good responsiveness. >> I like the message of SmartThings, I think that's phenomenal, and I like that 'cause it connects their things, which are consumer things, and people like 'em, like you said, very successful stovepipes. The question that I ask here, and I try to get the execs to talk about it but they weren't answering yet, and I think it's by design: they're not talking about the data.
Because again, at the end of the day, what's different — Alibaba, again, last week when I was in China, they are very up front: we're all about data acquisition and using the data to fuel the user experience. >> Right. >> That has to traverse across stovepipes. So is Samsung baked in that area, they have things going on, what's your analysis of data traversal across, is Bixby 2.0 the answer? >> So companies, particularly consumer companies related to the cloud, have to take one of two paths. The one that says, we're going to mine personal data to either sell you products or run ads — so Facebook, AWS, and even Google, that's their business model — and then on the other side you have people like Apple who are only going to use the data to make the products and experiences better. I think, I'll just pontificate here, the reason you're not getting a straight answer is I don't think they know exactly what they want to do yet. Because look at the market cap of Facebook. Apple, and even Amazon is planning to start and expand their own ad network. So I just don't think they know yet. Now what I would recommend to them is- >> Or they might not have visibility on it product-wise. So there's knowing what to do, or how to do it, versus the product capability. >> Well they have access to a ton of data, so if you're using Samsung Mail, if you're using... they know every application that gets deleted, the usage models of those applications. So they know a lot more than I think people think. They have a lot more data than people probably give them credit for. >> So they're going to hide the ball; I think they said that they're buying more time. I would agree with you there. Alright, question on IoT. Do you think that hangs together, that strategy? Obviously security updates to the chip level, that's one thing; can they succeed with IoT in this emerging stovepipe-collapse fabric that they're bringing out? >> So I need to do a little bit more research on the security and also their scalability. 'Cause if you're going to connect billions of devices you have to have scalability, and we already saw what GE Predix did, right? They did an about-face and partnered up with AWS, realizing they just couldn't handle the scale and the complexity. And the second thing is the security model, and how things like ARM Mbed Cloud and the latest announcements from Intel — which is how, from a gateway perspective, you secure this — work. So I have to go do some research on this. >> And by the way it's a moving train. You mentioned the GE thing, great example, I mean let's take that example. I got to ask you about cloud, because let's talk about Amazon, Cloud Foundry. Cloud Foundry became this thing and Pivotal tried to take and shape it, now they're claiming huge success, some are questioning the numbers. They're claiming victory on one hand, and I hear record, record, record! But I just don't see any cloud on Cloud Foundry out there. >> Yeah and I think the reason is, PCF, Pivotal Cloud Foundry, is a Fortune 500 thing. And if I compare Fortune 500 to startups and other people, there's not nearly as much activity in the Fortune 500 as there is with the startups and the cloud native companies. So I'm optimistic. >> So you're saying Pivotal Cloud is more Fortune 500, less cloud native? >> Exactly, exactly. >> How about Amazon, what's your take, I know you were on Power Lunch kind of, now you're on the Power Cube, our new segment that you just invented by being here.
(laughing) What is the Amazon take, 'cause that re:Invent event's coming up, what's the preview? Obviously we're going to have some one-on-ones with Jassy and the team beforehand, theCUBE will be there with two sets, so come on — if you're going to be there I'd love to have you on. >> I'd love to. >> Again, what's the preview for AWS re:Invent? >> AWS, right, they had a seven-year headstart on almost everybody, and then Azure and GCP just recently jumped in, and if you notice, over the past year they've been firing cannons at each other. One vendor says hey, I do by-the-minute pricing, and then another one says, oh, I have by-the-second pricing, right, and I'm going to accept VMware, oh no I'm not doing VMware, I'm doing SAP. So what you have now is a feature fest and a fistfight. AWS is no longer the only man standing here. So what I'm expecting is they are going to come in and make the case that, okay, we still are the best choice not just for IaaS but also for PaaS, okay? Because they have a lot of competition. And also I think they're going to fill in gaps in some of the regional services where, oh, they don't have GPUs in a certain country. Oh, I don't have FPGAs over here. I think they're going to fill that in to look better against GCP and Azure. >> I know you cover Intel as well, I was just over there and saw some of the folks there, I saw some of the Linux Foundation folks. Obviously you're seeing Intel be more a computing company, not a chip company anymore; they have that 5G end-to-end story from Mobile World Congress, talked a little bit about 5G. End-to-end is a big message here at Samsung. How is Intel positioned in all this, what's your take on Intel? >> Yes so I think related to Intel, I think in some areas they're competitors, because they have their own gateway solutions — they don't have cloud solutions but they have the gateway solutions. Regarding some of the endpoints, Intel has exited the small Quark endpoints in watches, so I would say right now there's less overlap with Intel. >> From a Samsung perspective? >> Exactly, now on the back end it's more than likely, there's a 99% chance, that the back end doing the cloud processing is going to be Intel. >> If I'm Samsung, why wouldn't I want to partner with Intel? 'Cause they make their own chips, is that the issue or is it more a...? >> No, I think Samsung up until this point hasn't taken a lot of responsibility for the cloud. So this is a first step, and I think it would make a good partnership. >> And Intel could get the home theater market, the home, how connected the home is; every CES going back 10 years has been a connected home theme. Finally they could get it here. >> That's right, and I have seen Intel get into things, a lot of Amazon's products with the cameras in the bedroom and in the bathroom, scary stuff. But Movidius, silicon that's doing object recognition, that is a place where I think they compete, which frankly Samsung could develop the silicon, but they just don't have it. Their silicon doesn't have the capability that a Movidius has. That can be used in any type of camera. >> Okay so final question, I know we got to break here, and I appreciate you coming on, making room for you, PowerCUBE segment here in San Francisco at SDC 2017.
Ecosystem — we hear the host of SDC, Thomas Coe, come up and say we're going to be honest and transparent to the community here at large in San Francisco and around the globe, kind of concurring that they've been kind of stovepiped and they're going to open up; they believe in open cloud, open IoT, and he talks about ecosystem. I'm not seeing a lot of ecosystem partners around here. What does Samsung need to do to — well first of all, what's your letter grade on the ecosystem? And certainly they've got an opportunity. What moves should they be making to build a robust, healthy ecosystem? Because we know you can't do it end to end without support in the white spaces. >> Yeah so I go to a lot of the developer conferences, whether it's Microsoft Build, Apple WWDC, and even the enterprise ones, and this is a smaller, low-key event, and I think first and foremost, the operating system drives a lot of the ecosystem. And other than Tizen they don't have an operating system. So what they're doing is they're working on the connectedness of it, which is a different kind of ecosystem; it's farther up in the stack. But I think what they can do is they have to be very clear and differentiated, and I think back to our earlier, our first conversation: they're not going to mine the data, therefore they're the safe place for you, the consumer, and for the SmartThings ecosystem, to put your data. And we're going to help you make money to do that, because I don't think Google is as interested in that and I don't think Amazon is as interested in that either. >> They were clear, they said permission-based, and even if they don't know what their permission offering is, we're going to take the conservative route and protect the data — but they still got to use the data. They got to get their cloud story together; if they want to do the data play, cloud has to be more clear, at least in my mind. >> Well I think what they can do is they're sitting on, and they will sit on, a bigger treasure trove of data that can help their partners deliver better experiences and products, because if you're at the epicenter and you're at that SmartThings hub? You know everything that's going on in that home, whether it's your stuff or your partner's stuff. >> Yeah, and they got to be trusted, and they got to be transparent, okay. Patrick Moorhead from Moor Insights here on theCUBE, great analyst, follow him everywhere on Twitter. Your Twitter handle is, let me just get the Twitter handle. >> It's @patrickmoorhead. >> Okay, @patrickmoorhead on Twitter. He travels the world, gets the data, and so does theCUBE, traveling for you. This is John Furrier. More after this short break. (electronic beats)
SUMMARY :
Patrick Moorhead, president and principal analyst at Moor Insights & Strategy, joins John Furrier on theCUBE at Samsung Developer Conference 2017 to assess Samsung's push to connect its successful but stovepiped divisions through SmartThings and Bixby 2.0. He argues Samsung hasn't decided whether to monetize data the way Facebook, Google, and Amazon do or to follow Apple and use data only to improve products, and suggests Samsung position itself as the trusted, privacy-safe home for consumer and SmartThings data. Along the way he previews AWS re:Invent and the feature fight among AWS, Azure, and GCP, characterizes Pivotal Cloud Foundry as a Fortune 500 play, sizes up Intel's role from gateways to the cloud back end, and grades Samsung's developer ecosystem, which lacks an operating system beyond Tizen.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Amazon | ORGANIZATION | 0.99+ |
Patrick Moorhead | PERSON | 0.99+ |
Apple | ORGANIZATION | 0.99+ |
John Furrier | PERSON | 0.99+ |
Samsung | ORGANIZATION | 0.99+ |
Alibaba | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
AWS | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
San Francisco | LOCATION | 0.99+ |
China | LOCATION | 0.99+ |
Thomas Coe | PERSON | 0.99+ |
one | QUANTITY | 0.99+ |
seven-year | QUANTITY | 0.99+ |
last week | DATE | 0.99+ |
Jassy | PERSON | 0.99+ |
SiliconANGLE Media | ORGANIZATION | 0.99+ |
GE | ORGANIZATION | 0.99+ |
@patrickmoorhead | PERSON | 0.99+ |
SDC 2017 | EVENT | 0.98+ |
CES | EVENT | 0.98+ |
Intel | ORGANIZATION | 0.98+ |
first step | QUANTITY | 0.98+ |
two sets | QUANTITY | 0.98+ |
Samsung Developer Conference 2017 | EVENT | 0.98+ |
Samsung Developer Conference | EVENT | 0.98+ |
two paths | QUANTITY | 0.98+ |
CNBC | ORGANIZATION | 0.98+ |
Linux Foundation | ORGANIZATION | 0.98+ |
VMWare | TITLE | 0.97+ |
first | QUANTITY | 0.97+ |
Moor Insights and Strategy | ORGANIZATION | 0.97+ |
Reinvent | EVENT | 0.97+ |
first time | QUANTITY | 0.97+ |
today | DATE | 0.97+ |
PCF | ORGANIZATION | 0.97+ |
first conversation | QUANTITY | 0.96+ |
second thing | QUANTITY | 0.96+ |
Pivotal | ORGANIZATION | 0.96+ |
theCUBE | ORGANIZATION | 0.96+ |
one thing | QUANTITY | 0.95+ |
Pivotal Cloud | ORGANIZATION | 0.94+ |
Fortune 500 | ORGANIZATION | 0.94+ |
fistfight | QUANTITY | 0.94+ |
Movidius | ORGANIZATION | 0.93+ |
second | QUANTITY | 0.91+ |
New York Times | ORGANIZATION | 0.9+ |
AI for Good Panel - Autonomous World | SXSW 2017
>> Welcome everyone. Thank you for coming to the Intel AI lounge and joining us here for this economist world event. My name is Jack. I'm the chief architect of our autonomous driving solutions at Intel, and I'm very happy to be here and to be joined by an esteemed panel of colleagues who are here to, I hope, engage you all in dialogue and discussion. There will be time for questions as well, so keep your questions in mind. Jot them down so you can ask them later. So first, let me introduce the panel. Next to me we have Michelle, who's the co-founder and CEO of FindMine. She just did an interview here a short while ago. FindMine is a company that provides a technology platform for retailers and brands with artificial intelligence at the heart of the experiences that her company's technology provides. Joe from Intel is the head of partnerships and acquisitions for artificial intelligence and software technologies. He participated in the recent acquisition of Movidius, a computer vision company that Intel recently acquired, and is involved in a lot of smart city activities as well. And then finally, Sarush, who is a data scientist by training, but now runs JDA Labs, which is researching emerging technologies and their application in the supply chain worldwide. So at the end of the day, the Internet of Things and artificial intelligence really promise to improve our lives in quite incredible ways and change the way that we live and work. Oftentimes the first thing that we think about when we think about AI is Skynet, but we at Intel believe in AI for good, and that there's a lot that can happen to improve the way people live, work, and enjoy life. So as things become connected, smart, and automated, artificial intelligence is really going to be at the heart of those new experiences. So as I said, my role is the architect for autonomous driving. It's a common place for people to start when they think about artificial intelligence, because what we're trying to do is replace a human brain with a machine brain, which means we need to endow that machine with intelligent thoughts, contexts, experiences — all of these things that sort of make us human. So computer vision is the space, obviously, with cameras in your car, that people often think about, but it's actually more complicated than that. How many of us have been in a situation on a two-lane road: maybe there's a car coming towards us, there's a road off to the right, and you sort of sense, "You know what? That car might turn in front of me." There's no signal. There's no real physical cue, but just something about what that driver's doing, where they're looking, tells us. So what do we do? We take our foot off the accelerator. We maybe hover it over the brake, just in case, right? But that's intelligence that we take for granted through years and years and years of driving experience that tells us something interesting is happening there. And so that's the challenge that we face in terms of how to bring that level of human intelligence into machines to make our lives better and richer. So enough about automated vehicles though, let's talk to our panelists about some of the areas in which they have expertise. So first for Michelle, I'll ask... Many of us probably buy stuff online every day, every week, every hour — hourly delivery now. So a lot has been written about the death of traditional retail experiences.
How will artificial intelligence and the technology that your company has rejuvenate that retail experience, whether it be online or in the traditional brick and mortar store? >> Yeah, excuse me. So one of the things that I think is a common misconception — you hear about the death of the brick and mortar store, the growth of e-commerce. It's really that e-commerce is beating brick and mortar in growth only, and over 90% of the world's commerce is still done in physical brick and mortar stores. So e-commerce, while it has the growth, has a really long way to go, and I think one of the things that's going to be really hard to replace is the very human element of interaction and connection that you get by going to a store. So just because a robot named Pepper comes up to you and asks you some questions, they might get you the answer you need faster and maybe more efficiently, but I think as humans we crave interaction, and shopping for certain products especially is an experience better enjoyed in person with other people, whether that's an associate in the store or people you come with to the store to enjoy that experience with you. So I think artificial intelligence can help it be a more frictionless experience, whether you're in store or online, to get you from point A to buying the thing you need faster, but I don't think that it's going to ever completely replace the joy that we get by physically going out into the world and interacting with other people to buy products. >> You said something really profound. You said that the real revolution for artificial intelligence in retail will be invisible. What did you mean by that? >> Yeah, so right now I think that most of the artificial intelligence that's being applied in the retail space is actually not something that shoppers like you and I see when we're on a website or when we're in the store. It's actually happening behind the scenes. It's happening to dynamically change the webpage to show you different stuff. It's happening further up the supply chain, right? With how the products are getting manufactured, put together, packaged, shipped, delivered to you, and that efficiency is just helping retailers be smarter and more effective with their budgets. And so, as they can save money in the supply chain, as they can sell more product with less work, they can reinvest in experience, they can reinvest in the brand, they can reinvest in the quality of the products, so we might start noticing those things change, but you won't actually know that that has anything to do with artificial intelligence, because it's not always a robot that's rolling up to you in an aisle. >> So you mentioned the supply chain. That's something that we hear about a lot, but frankly for most of us, I think it's very hard to understand what exactly that means, so could you educate us a bit on what exactly is the supply chain and how is artificial intelligence being applied to improve it? >> Sure, sure. So for a lot of us, supply chain is a term that we picked up when we went to school or we read about it every so often, but we're not that far away from it. It is in fact a key part of what Michelle calls the invisible part of one's experience. So when you go to a store and you're buying a pair of shoes or you're picking up a box of cereal, how often do we think about, "How did it ever make its way here?" Or the constituent components — they probably came from multiple countries, and so they had to be manufactured. They had to be assembled in these plants.
They had to then be moved, either through an ocean vessel or through trucks. They probably have gone through multiple warehouses and distribution centers and then finally into the store. And what do we see? We want to make sure that when I go to pick up my favorite brand of cereal, it better be there. And so, one of the things where AI is going to help, and we're doing a lot of active work in this, is in the notion of the self-learning supply chain. And what that means is really bringing in these various assets and actors of the supply chain. First of all, through IoT and others, generating the data, obviously connecting them, and through AI driving the intelligence, so that I can dynamically figure out the fact that the ocean vessel that left China on its way to Long Beach has been delayed by 24 hours. What does that mean when you go to a Foot Locker to buy your new pair of shoes? Can I come up with alternate sourcing decisions? So it's not just predicting. It's prescribing and recommending as well. So behind the scenes, it's generating a lot of the data, connecting a lot of these actors, and then really deriving the smarts. That's what the self-learning supply chain is all about. >> Are supply chains always international or can they be local as well? >> Definitely local as well. I think what we've seen over the last decades is it's kind of gotten more and more global, but a lot of the supply chain can really just be within the store as well. You'd be surprised at how often retailers do not know where their product is. Even, is it in the front of the store? Is it in the back of the store? Is it in the fitting room? Even that local information is not really available. So to have sensors to discover where things are and to really provide that efficiency, which right now doesn't exist, is a key part of what we're doing. >> So Joe, as you look at companies out there to partner or potentially acquire, do you tend to see technologies that are very domain specific for retail or supply chain, or do you see technologies that could bridge multiple different domains in terms of the experiences we could enjoy? >> Yeah, definitely. So both. A lot of infant technologies start out in very niche use cases, but then there are technologies that are pervasive across multiple geographies and multiple markets. So, smart cities is a good way to look at that. So let's level set really quick on smart cities and how we think about that. I have a little sheet here to help me. Alright, so, if anybody here has played SimCity before, you have your little city that's a real world that sits here, okay? So this is reality and you have little buildings and cars and they all travel around, and you have people walking around with cell phones. And what's happening is, as we develop smart cities, we're putting sensors everywhere. We're putting them around utilities, energy, water. They're in our phones. We have cameras and we have audio sensors in our phones. We're placing these on light poles, which are existing, sustained power points around the city. So we have all these different sensors, and they're not just cameras and microphones, but they're particulate sensors. They're able to do environmental monitoring and things like that. And so, what we have is we have this physical world with all these sensors here.
And then what we have is we've created basically this virtual world that has a great memory, because it has all the data from all the sensors, and those sensors really act as ties — if you think of it like a quilt, tying a quilt together. You bring it down together and everywhere you have a stitch, you're stitching that virtual world on top of the physical world, and that just enables incredible amounts of innovation and creation for developers, for entrepreneurs, to do whatever they want to do to create and solve specific problems. So what really makes that possible is communications, connectivity. So that's where 5G comes in. So with 5G it's not just a faster form of connectivity. It's new infrastructure. It's new communication. It includes multiple types of communication and connectivity. And what it allows is that all those little sensors can talk to each other again. So the camera on the light pole can talk to the vehicle driving by or the sensor on the light pole. And so you start to connect everything, and that's really where artificial intelligence can now come in and sense what's going on. It can then reason, which is neat — to have a computer or some sort of algorithm that actually reasons based on a situation that's happening in real time. And it acts on that, but then you can iterate on that or you can adapt that in the future. So if we think of an actual use case, we'll think of a camera on a light post that observes an accident. Well, it's programmed to automatically notify emergency services that there's been an accident. But it knows the difference between a fender bender and an actual major crash where we need to send an ambulance or maybe multiple firetrucks. And then you can create iterations and it learns to become smarter. Let's say there was a vehicle that was in the accident that had a little yellow placard on it that said hazard. You're going to want to send different types of emergency services out there. So you can iterate on what it actually does, and that's a fantastic world to be in, and that's where I see AI really playing. >> That's a great example of what it's all about in terms of making things smart, connected, and autonomous. So Michelle, as somebody who has founded a company in this space with technology that's trying to bring some of these experiences to market, there may be folks in the audience who have aspirations to do the same. So what have you learned over the course of starting your company and developing the technology that you're now deploying to market? >> Yeah, I think because AI is such a buzzword — you can get a .ai domain now — that doesn't mean that you should use it for everything. Maybe 7, 10, 15 years ago... These trends have happened before. In the late 90s, it was technology, and there were technology companies and they sat over here and there was everybody else. Well, that's not true anymore. Every company uses technology. Then fast forward a little bit, and social media was a thing. Social media was these companies over here and then there was everybody else, and now every company needs to use social media — or actually maybe not. Maybe it's a really bad idea for you to spend a ton of money on social media, and you have to make that choice for yourself. So the same thing is true with artificial intelligence, and what I tell... I did a panel on AI for venture capitalists last week, trying to help them figure out when to invest and how to evaluate and all that kind of stuff.
And what I would tell other aspiring entrepreneurs is "AI is a means to an end. It's not an end in itself." So unless you're a Ph.D. in machine learning and you want to start an AI-as-a-service business, you're probably not going to start an AI-only company. You're going to start a company for a specific purpose, to solve a problem, and you're going to use AI as a means to an end — maybe, if it makes sense to get there — to make it more efficient and all that stuff. But if you wouldn't get up every day for ten years to do this business that's going to solve whatever problem you're solving, or if you wouldn't invest in it if AI didn't exist, then adding .ai at the end of a domain is not going to work. So don't think that that will help you make a better business. >> That's great advice. Thank you. Sarush, as you talked about the automation of the supply chain, what about people? What about the workers whose jobs may be lost or displaced because of the introduction of this automation? What's your perspective on that? >> Well, that's a great question. It's one that I'm asked quite a bit. So if you think about the supply chain, with a lot of the manufacturing plants, with a lot of the distribution centers, a lot of the transportation, not only are we talking about driverless cars as in cars that you and I own, but we're talking about driverless delivery vehicles. We're talking about drones, and all of these on the surface appear like they're going to displace human beings. What humans used to do, now machines will do and potentially do better. So what are the implications around human beings? So I'm asked that question quite a bit, especially from our customers, and my general perception on this is that I'm actually cautiously optimistic that human beings will continue to do things that are strategic. Human beings will continue to do things that are creative, and human beings will probably continue to handle things that are truly catastrophic, that machines simply have not been able to learn because it doesn't happen very often. One thing that comes to mind is when ATM machines came about several years ago, before my time: that displaced a lot of teller jobs in the banking industry, but the banking industry did not go belly up. They found other things to do. If anything, they offered more services. There were more branches that were closed, and if I were to ask any of you now if you would go back and not have 24/7 access to cash, you would probably laugh at me. So the thing is, this is AI for good. I think these things might have temporary impact in terms of what it will do to labor and to human beings, but I think we as human beings will find bigger, better, different things to do, and that's just in the nature of the human journey. >> Yeah, there's definitely a social acceptance angle to this technology, right? Many of us technologists in the room, it's easier for us to understand what the technology is, how it works, how it was created, but for many of our friends and family, they don't. So there's a social acceptance angle to this. So Michelle, as you see this technology deployed in retail environments, which is a space where almost every person in every country goes, how do you think about making it feel comfortable for people to interact with this kind of technology and not be afraid of the robots or the machines behind the curtain? >> Yeah, that's a great question.
I think that user experience always has to come first, so if you're using AI for AI's sake or for the cool factor, the wow factor, you're already doing it wrong. Again, it needs to solve a problem, and what I tend to tell people who are like, "Oh my God. AI sounds so scary. We can't let this happen." I'm like, "It's already happening and you're already liking it. You just don't know, because it's invisible in a lot of ways." So if you can point to those scenarios where AI has already benefited you and it wasn't scary because it was a friendly kind of interaction, you might not even have realized it was there, versus something that looks so different and... like, causes panic. I think that's why the driverless car thing is a big deal, because you're so used to seeing, in America at least, someone on the left side of the car in the front seat. And not seeing that is like, woah, crazy. So I think that it starts with the experience and making it an acceptable kind of interface or format that doesn't give you that "Oh my God. Something is wrong here" kind of feeling. >> Yeah, that's a great answer. In fact, it reminds me there was this really amazing study by Professor Nicholas Epley that was published in a journal of social psychology, and the name of this study was The Mind in the Machine. And what he did was he took subjects and had a fully functional automated vehicle and then a second identical fully functional automated vehicle, but this one had a name and it had a voice and it had sort of a personality. So it had human anthropomorphic characteristics. And he took people through these two different scenarios, and in both scenarios he — evilly — introduced a crash in the scenario where it was unavoidable. There was nothing you could do about it. You were going to get into an accident in these cars. And then afterwards, he polled the subjects and said, "Well, what did you feel about that accident? First, what did you feel about the car?" They were more comfortable in the one that had anthropomorphic features. They felt it was safer and they'd be more willing to get into it, which is not terribly surprising, but the kicker was the accident. In the vehicle that had a voice and a name, they actually didn't blame the self-driving car they were in. They blamed the other car. But in the car that didn't have anthropomorphic features, they blamed the machine. They said there's something wrong with that car. So it's one of my favorite studies because I think it does illustrate that we have to remember the human element to these experiences, and as artificial intelligence begins to replace humans, or some of us even, we need to remember that we are still social beings, and how we interact with other things, whether they be human or non-human, is important. So, Joe, you talk about evaluating companies. Michelle started a company. She's gotten funding. As you go out and look at new companies that are starting up, there's just so much activity, companies that just add .ai to the name, as Michelle said — how do you cut through the noise and try to get to the heart of whether there's any value in a technology that a company's bringing or not? >> Definitely. Well, each company has its unique, special sauce, right? And so, just to reiterate what Michelle was talking about, we look for companies that are really good at doing what they do best, whatever that may be, whatever the problem they're solving that a customer's willing to pay for — we want to make sure that that company's doing that.
No one wants a company that just has AI in the name. So we look for that, number one, and the other thing we do is, once we establish that we have a need, or we're looking at a company based on either talent or intellectual property, we'll go in and we'll have to do a vetting process, and it takes a while. It's a very long process and there's legal involved, but at the end of the day, the most important thing for the startup to remember is to continue doing what they do best and continue to build upon their special sauce and make sure that it's very valuable to their customer. And if someone else wants to look at them for acquisition, so be it, but you need to be maniacally focused on your own customer. That's my two cents. >> I'm thinking again about this concept of embedding human intelligence, but humans have biases, right? And sometimes those biases aren't always good. So how do we as technologists in this industry try to create AI for good and not unintentionally put some of our own human biases into models that we train about what's socially acceptable or not? Anyone have any thoughts on that? >> I actually think that the hype about AI taking over and destroying humanity — it's possible, and I don't want to disagree with Stephen Hawking as he's way smarter than I am. But he kind of recognizes it could go both ways, and so right now, we're in a world where we're still feeding the machine. And so, there's a bunch of different issues that came up with humans feeding the machine with their foibles of racism and hatred and bias, and humans experience shame, which causes them to lash out and want to put somebody else down. And so we saw that with Tay, the Microsoft chatbot. We saw that with even Google's fake news. They're like picking sources now to answer the question in the top box that might be the wrong source. Ads that Google serves often show men high-paying jobs, $200,000-a-year jobs, and women don't get those same ones. So if you trace that back, it's always coming back to the inputs and the lens that humans are coming at it from. So I actually think that we could be in a way better place after this singularity happens and the machines are smarter than us and they take over and they become our overlords. Because when we think about the future, it's a very common tendency for humans to fill in the blanks of what you don't know in the future with what's true today. And I was talking to you guys at lunch. We were talking about this Harvard psychology professor who wrote a book, and in the book he was talking about how in the 1950s, they were imagining the future in all these sci-fi stories, and they have flying cars and hovercrafts and they're living in space, but the woman still stays at home and everyone's white. So they forgot to extrapolate the social things to paint the picture in, but I think when we're extrapolating into the future where the computers are our overlords, we're painting them with our current reality, which is where humans are kind of terrible (laughs). And maybe computers won't be, and they'll actually create this utopia for us. So it could be positive. >> That's a very positive view. >> Thanks. >> That's great. So do we have this all figured out? Are there any big challenges that remain in our industries?
We're good to go. And one of the things that I share with them is something that I'm sure everyone here can relate to. If a kindergartner goes to school and starts to spell profanity, that's not because the kid knows anything good or bad. That is what the kid has learned at home. Likewise, if we don't train machines well, their training will in fact be biased, to your point. So one of the things that we have to keep in mind when we talk about this is that we have to be careful as well, because we're the ones doing the training. It doesn't automatically know what is good or bad unless that set of data is also fed to it. So I just wanted to kind of add to your... >> Good. Thank you. So why don't we open it up a little bit for questions. Any questions in the audience for our panelists? There's one there, looks like (laughs). Emily, we'll get to you soon. >> I had a question for Sarush based on what you just said about us training, or you all training, these models and teaching them things. So when you deploy these models to the public, with them being machine learning and AI based, is it possible for us to retrain them, and how do you build in redundancies against the public, like, throwing off your model and things like that? What are some of the considerations that go into that? >> Well, one thing for sure is training is continuous. So no system should be trained once, deployed, and then forgotten. That is something that we as AI professionals absolutely need to keep doing, because trends change as well. What was optimal two years ago is no longer optimal. So that part needs to continue to happen, and that's where the whole IoT space is so important: it will continue to generate relevant, consumable data that these machines can continuously learn from. >> So how do you decide what data, though, is good or bad, as you retrain and evolve that data over time? As a data scientist, how do you do selection on data? >> So, I want to piggyback on what Michelle said because she's spot on. What is the problem that you're trying to solve? It always starts from there, because we have folks, CIOs, who come in and say, "Oh look, when big data was hot, we started to collect a lot of the data, but nothing has happened." But data by itself doesn't automatically do magic for you, so we ask, "What kind of problem are you trying to solve? Are you trying to figure out what kinds of products to sell? Are you trying to figure out the optimal assortment mix for you? Are you trying to find the shortest path in order to get to your stores?" And then the question is, "Do you now have the right data to solve that problem?" A lot of times we put the science first, and I'm a data scientist by training, I would love to talk about the science, but really, it's the problem first. The data and the science, they come after. >> Thanks, good advice. Any other questions in the audience? Yes, one right up here. (laughing) >> Test, test. Can you hear me? >> Yep. >> So with AI machinery becoming more commonplace and more accessible to developers and visionaries and thinkers alike, rather than being just a giant warehouse of a ton of machines where you get one tiny machine learning, do you foresee more governance coming into play in terms of what AI is allowed to do and the decisions of what training data is allowed to be fed to AIs in terms of influence?
You talk about data determining if AI will become good or bad, but humans being the ones responsible for the training in the first place, obviously, they can use that data to influence it as they wish; so, just the governance and the influence piece. >> Jack: Who wants to take that one? >> I'll take a quick stab at it. So, yes, it's going to be an open discussion. It's going to have to take place, because really, they're just machines. It's machine learning. We teach it. We teach it what to do, how to act. It's just an extension of us, and in fact, I think you had a really great conversation, or a statement, at lunch where you talked about your product being an extension of a designer, and we can get into that a little bit, but really, it's just going to do what we tell it to do. So there are definitely going to have to be discussions about what type of data we feed it. It's all going to be centered around the use case and what solves that use case. But I imagine that will be a topic of discussion for a long time, about what we're going to decide to do. >> Jack: Michelle, do you want to comment on this thought of taking a designer's brain and putting it into a model somehow? >> Well, actually, what I wanted to say was that I think that the regulation and the governance around it is going to be self-imposed by the developer and data science community first, because I feel like even experts who have been doing this for a long time don't really have their arms fully around what we're dealing with here. And so to expect our senators, our congressmen and women, to actually make regulation around it is a lot, because they're not technologists by training. They have a lot of other stuff going on. If the community that's already doing the work doesn't quite know what we're dealing with, then how can we expect them to get there? So I feel like that's going to be a long way off, but I think that the people who touch and feel and deal with models and with data sets and stuff every day are the kind of people who are going to get together and self-regulate for a while, if they're good-hearted people. And we talk about AI for good. Some people are bad. Those people won't respect those covenants that we come up with, but I think that's the place we have to start. >> So really you're saying, I think, for data scientists and those of us working in this space, we have a social, ethical, or moral obligation to humanity to ensure that our work is used for good. >> Michelle: No pressure. (laughing) >> None taken. Any other questions? Anything else? >> I just wanted to talk about the second part of what she said. We've been working with a company that builds robots for the store, a store associate if you will. And one of their very interesting findings was that the greatest acceptance of it right now has been at car dealerships, because when someone goes to the car dealer, and we all have had terrible experiences doing that, that's why we try to buy it online, there's just this perception that a robot would be unbiased, that it will give you the information without trying to push you one way or the other. >> The hard sell. >> So there's that perception side of it too. It isn't the governance part of your question, but more the biased-perception side of what you said. I think it's fascinating how we're already trained to think that this is going to have an unbiased opinion, whether or not that's true. >> That's fascinating. Very cool. Thank you, Sarush. Any other questions in the audience? No, okay.
Michelle, could I ask, you've got a station over there that talks a little bit more about your company, but for those that haven't seen it yet, could you tell us a little bit about what the experience is like, or how the shopping experience is different for someone using your company's technology than what it was before? >> Oh, free advertising. I would love to. No, but actually, I started this company because as a consumer, going back to the user experience piece, I found myself constantly frustrated with the user experience of buying products one at a time and then getting zero help. And then here I am having to google how to wear a white blazer so I don't look like an idiot in the morning when I get dressed with my white blazer that I just bought and was excited about. And it's a really simple thing, which is how do I use the product that I'm buying, and that really simple thing has been just abysmally handled in the retail industry, because the only tools that the retailers have right now are manual. So in fashion, one of our fashion customers, John Varvatos, is an example we have over there; it's a designer brand for high-end men's clothing, and John Varvatos is a person, it's not just the name of the company. He's an actual person and he has a vision for what he wants his products to look like, and the aesthetic and the style, and there's a rockstar vibe, and to get that information into the organization, he would share it verbally, with PDFs, things like that. And then his team of merchandisers would literally go manually make outfits on one page and then go make an outfit on another page with the same exact items, and then products would go out of stock and they'd go around in circles, and that's a terrible, terrible job. So to the conversation earlier about people losing jobs because of artificial intelligence: I hope people do lose jobs, and I hope they're the terrible jobs that no one wanted to do in the first place, because the merchandisers that we help, like the one from John Varvatos, literally said she was weeks away from quitting, and she got a new boss and said, "If you don't fix this part of my job, I'm out of here." And he had heard about us. He knew about us, and so he brought us in to solve that problem. So I don't think it's always a bad thing, because if we can take that rote, boring, repetitive task off of humans' plates, what more amazing things can we do with our brains, which are only human and very unique to us, and how much more can we advance ourselves and our society by giving the boring work to a robot or a machine? >> Well, that's fantastic. So Joe, when you talk about smart cities, it seems like people have been talking about smart cities for decades, and often people cite funding issues, the regulatory environment, or a host of other reasons why these things haven't happened. Do you think we're on the cusp of breaking through there, or what challenges still remain for fulfilling that vision of a smart city? >> I do, I do think we're on the cusp. I think a lot of it has to do, largely actually, with 5G and connectivity, the ability to process and send all this data that needs to be shared across the system. I also think that we're getting closer and more conscientious about security, which is a major issue with IoT, making sure that our end devices, our edge devices, those things out there sensing, are secure. And I think interoperability is something that we need to champion as well, and make sure that we basically work together to enable these systems.
It's very, very difficult to create little, tiny walled gardens of solutions in a smart city. You may corner a certain part of the market, but you're definitely not going to have that ubiquitous benefit to society if you establish those little walled gardens, so those are the areas I think we need to focus on, and I think we are making serious progress in all of them. >> Very good. Michelle, you mentioned earlier that artificial intelligence is all around us in lots of places and things that we do on a daily basis, but we probably don't realize it. Could you share a couple of examples? >> Yeah, so I think everything you do online, for the most part, literally anything you might do, whether that's googling something or going to some article, the ads might be dynamically picked for you using machine learning models that have decided what is appropriate based on you and your treasure trove of data that you have out there, that you're giving up all the time and not really understanding that you're giving up. >> The shoes that follow you around the internet, right? >> Yeah, exactly. So that's basically anything online. I'm trying to give one in the real world. I think that, to your point earlier about the supply chain, just picking a box of cereal off the shelf and taking it home, there's no artificial intelligence in that at all, but there is in the supply chain behind it. The supply chain is behind pretty much everything we do, even in television, like how media gets to us and gets consumed. At some point in the supply chain, there's artificial intelligence playing in there as well. >> So, Sarush, in the supply chain we can now get same-day, even within-the-hour delivery. How do you get better than that? What's coming that's innovative in the supply chain that will be new in the future? >> Well, that is one example of it, but you'd be surprised at how inefficient the supply chain is, even with all the advances that have already gone in, whether it's physical advances around building modern warehouses and modern manufacturing plants, or whether it's through software and other things that really help schedule and optimize. What has happened in the supply chain, just given how it's evolved, is that it's very siloed, so a lot of times the manufacturing plant does things that the distribution folks do not know. The distribution folks do things that the transportation folks don't know, and then the store folks know nothing other than when the truck pulls up; that's the first time they find out about things. So where the great opportunity is in my mind, in the space that I'm in, is really the generation of data, the connection of data, and finally, deriving the smarts that really help us improve efficiency. There's huge opportunity there. And again, we don't know it because it's all invisible to us. >> Good. Let me pause and see if there are any questions in the audience. There, we got one there. >> Thank you. Hi guys, you alright? I just had a question about ethics and the teaching of ethics. As you were saying, we feed the artificial intelligence, whereas in a scenario which is probably a little bit more attuned to automated driving, in a car crash scenario, between crashing into these two people or those three people, I would be choosing two, whereas the scenario may be that it's actually better to just crash the car and kill myself. That thought would never go through my mind, because I'm human. My rule number one is self-preservation. So how do we teach the computer this sort of side of it?
Is the AI's ethic actually going to be better than our own ethics? How do we start? >> Yeah, that's a great question. I think the opportunity is there, as Michelle was talking about earlier: maybe when you cross that chasm and you get this new singularity, maybe the AI ethics will be better than human ethics, because the machine will be able to think about greater concerns, perhaps, other than ourselves. But I think just from my point of view, working in the space of automated vehicles, it is going to have to be something that the industry works out, and societies are different, across different geographies and different countries. We have different ways of looking at the world. Cultures value different things, and so I think technologists in those spaces are going to have to get together and agree amongst the community, from a social contract theory standpoint perhaps, in a way that's going to be acceptable to everyone who lives in that environment. I don't think we can come up with a uniform model that would apply to all spaces, but it's got to be something that we all, as members of a community, can accept as the right thing to do in that situation, and that's not going to be an easy task by any means. Which is, I think, one of the reasons why you'll continue to see humans have an important role to play in automated vehicles, so that the human can take over in exactly that kind of scenario, because the machines perhaps aren't quite smart enough to do it, or maybe it's not the smarts or the processing capability; maybe it's that we haven't, as technologists and ethicists, gotten together long enough to figure out what those moral and ethical frameworks are that we could apply to those situations. Any other thoughts? >> Yeah, I wanted to jump in there real quick. Those are absolutely questions that need to be answered, but let's come together and make the solutions that need those questions answered. So let's come together first and fix the problems that need to be fixed now, so that we can build out those types of scenarios and then put our brainpower to work to decide what to do next. There was a quote, I believe by Andrew Ng, where he was saying, concerning deep questions about what's going to happen in the future with AI, are we going to have AI overlords or anything like that, that it's kind of like worrying about overpopulation on Mars. Because maybe we're going to get there someday, and maybe we're going to send people there, and maybe we're going to establish a human population on Mars, and then maybe it will get too big and then maybe we'll have problems on Mars, but right now we haven't landed on the planet. And I thought that really does a good job of putting in perspective that overall concern about AI taking over. >> So when you think about AI being applied for good, and Michelle, you talked about don't do AI just for AI's sake, have a problem to solve, I'll open it up to any of the three of you: what's a problem in your life or in your work experience that you'd love somebody out here to go solve with AI? >> I have one. Sorry, I wanted to do this real quick. There are roads blocked off and it's raining, and I have to walk a mile to find a taxi in the rain right now after this to go home.
I would love for us to have some sort of ability to manage parking spaces and determine when and who can come into which parts of the city, and when there's a spot downtown, I want my autonomous vehicle to know which one's available and go directly to that spot, and I want it to be queued in a certain manner so that I'm next in line and I know it. So I would love for someone to go solve that problem. There's been some development on the infrastructure side for that kind of solution. We have a partnership that Intel does with GE where we're putting in sensors; it's an IoT sensor, basically. It's called CityIQ. It has environmental monitoring and audio and visual sensors, and it allows this type of use case to take place. So I would love to see iterations on that. I would love to see, sorry, there's another one that I'm particular about. Growing up, I lived in Southern California in a housing development right against the hills, and back in those hills there was not a factory, but a bunch of oil derricks. I would love to have had a sensor that senses the particulates in the air to see if there were too many fumes coming from that oil field into my yard when I was growing up as a little kid. I would love for us to solve problems like that. Those are the types of innovations that will be able to take place once we have these sensors in place, so I'm going to sit down on that one and let someone else take over. >> I'm really glad you said the second one, because I was thinking, "What I'm about to say is totally going to trivialize Joe's pain and I don't want to do that." But cancer is my answer, because there's so much data in health, and all these patterns are there waiting to be recognized. There are so many things we don't know about cancer, and so many indicators that we could capture if we were just able to unmask the data and take a look. I knew a brilliant company that was using artificial intelligence, specifically around image processing, to look at CAT scans and figure out what the leading indicators might be in a cancerous scenario. And they pivoted to some way more trivial problem, which is still a problem, and not to trivialize parking and whatnot, but it's not cancer. They pivoted away from this amazing opportunity because of the privacy and the issues with HIPAA around health data. And I understand there's a ton of concern with it getting into the wrong hands and hacking and all of this stuff. I get that, but the opportunity in my mind far outweighs the risk, and the fact that they had to change their business model and change their company essentially broke my heart, because they were really onto something.
And so on the other side of the country, somebody could go in and get a scan, and maybe that scan is brand new to that facility, so they don't know how to treat it. But if you had an opportunity with machine learning to compare scans from people, not just in this country but around the world, and understand globally all of the hundreds of different treatment paths that were applied to that particular kind of cancer, think how many lives could be saved, because then you're sharing knowledge of what courses of treatment worked. But it's one of those things, like you say, where sometimes it's the regulatory environment or other factors that hold us back from applying this technology to do some really good things, so it's a great example. Okay, any other questions in the audience? >> I have one. >> Good, Emily. >> So this goes off of the HIPAA question, and you were talking about dynamically displaying ads earlier. What does privacy look like in a fully autonomous world? Anybody can answer that one. Are we still private citizens? What does it look like? >> How about from a supply chain standpoint? You can learn a lot about somebody in terms of the products that they buy, and I think for all of us, we sort of know maybe somebody's tracking what we're buying, but it's still creepy when we think about how people could potentially use that against us. So, how do you, from a supply chain standpoint, approach that problem? >> Yeah, and it's something that comes up in my life almost every day, because one of the things we'd like to do is understand consumer behavior. How often am I buying? What kinds of products am I buying? What am I returning? And for that you need transactional data. You really get to understand the individual. That then starts to get into this area of privacy. Do you know too much about me? And so a lot of times what we do is the data is clearly anonymized, so all we know is customer A has this tendency, customer B has this tendency. That then helps the retailers offer the right products to these customers, but to your point, there are those privacy concerns, and I think issues around governance, issues around ethics, issues around privacy, these will continue to be ironed out. I don't think there's a solid answer for any of these just yet. >> And it's largely a reflection of society. How comfortable are we with how much privacy? Right now I believe we put the individual in control of as much information as possible, which they are able to release or not. And so, a lot of what you said, everyone's anonymizing everything at the moment, but that may change as society's values change slightly, and we'll be able to adapt to what's necessary. >> Why don't we try to stump the panel? Anyone have any ideas on things in your life you'd like to see solved with AI for good? Any suggestions out there that we could then hear about from our data scientists and technologists and folks here? Any ideas? No? Alright, good. Alright, well, thank you everyone. Really appreciate your time. Thank you for joining Intel here at the AI Lounge at Autonomous World. We hope you've enjoyed the panel and we wish you a great rest of your event here at South by Southwest. (audience clapping) (bright music)