Pierluca Chiodelli, Dell Technologies & Dan Cummins, Dell Technologies | MWC Barcelona 2023


 

(intro music) >> "theCUBE's" live coverage is made possible by funding from Dell Technologies, creating technologies that drive human progress. (upbeat music) >> Hey everybody, welcome back to the Fira in Barcelona. My name is Dave Vellante, I'm here with Dave Nicholson, day four of MWC23. I mean, Dave, it's still really busy. And walking the floors, you've got to stop and start. >> It's surprising. >> People are cheering. They must be winding down, giving out the awards. Really excited. Pierluca Chiodelli is here. He's the vice president of Engineering Technology for Edge Computing Offers Strategy and Execution at Dell Technologies, and he's joined by Dan Cummins, who's a fellow and vice president in the Edge Business Unit at Dell Technologies. Guys, welcome. >> Thank you. >> Thank you. >> I love when I see the term fellow. You know, they don't just give those away. What have you got to do to be a fellow at Dell? >> Well, you know, fellows are senior technical leaders within Dell, and they're usually tasked to help Dell solve, you know, a very large business challenge. There are only, I think, 17 of them inside of Dell, so it is a small crowd. You know, previously, what really got me to fellow is my continued contribution to transforming Dell's mid-range business, you know, VNX2, and then Unity, and then PowerStore. And then after that, you know, they asked me to come and help drive the technology vision for how Dell wins at the Edge. >> Nice. Congratulations. Now, Pierluca, I'm looking at this kind of cool chart here, which is the Edge platform by Dell Technologies. >> Aka Project Frontier. >> Yeah. So tell us about the Edge platform. What's your point of view on all that at Dell? >> Yeah, absolutely. 
So basically, when we created the Edge business, and even before then, when I was brought aboard to create this vision of the platform, and now building the platform when we announced Project Frontier, the goal was to create solutions for the Edge. Dell has been at the edge for 30 years. We sold a lot of compute. But the reality was, people want outcomes. And the Edge is a new market, very exciting, but very siloed. And people at the Edge have different personas. So we quickly realized that we needed to bring into Dell people with expertise, and realized as well that doing all these solutions was not enough. There were a lot of problems to solve, because the Edge is outside of the data center. You are outside of the walls of the data center, and what happens is, obviously, you are in no man's land. And so you have millions of devices, thousands of millions of devices. All of us at home have connected things. And so we understood that the capability of Dell was to bring in technology to secure, manage, and deploy the Edge with zero touch and zero trust. And all the edge we're speaking about right now, we are focused on everything that is outside of a normal data center. So, how we married the compute that we have had for many years with the new gateways that we created: having the best portfolio, number one, having the best solutions, but now transforming the way that people deploy the Edge, and secure the Edge, through a software platform that we created. >> You mentioned Project Frontier. I like that Dell has started to do these sorts of projects. Project Alpine was sort of the multi-cloud storage, I call it "The Super Cloud," and now Project Frontier. It's almost mission based. Like, "Okay, that's our North Star." People hear Project Frontier and they know, you know, internally what you're talking about. Maybe you use it for external communications too, but what have you learned since launching Project Frontier? What's different about the Edge? 
I mean, you're talking about harsh environments, you're talking about new models of connectivity. So, what have you learned from Project Frontier? I'd love to hear the fellow perspective as well, and what you guys are learning so far. >> Yeah, I'll start and then I'll leave it to Dan, but we learned a lot. The first thing we learned is that we are on the right path. So that's good, because in every conversation we have, there is nobody saying to us, you know, "You are crazy. This is not needed." Every conversation we have this week starts with the telco thing, but after five minutes it goes to, okay, how can I solve the Edge, how can I bring the compute near where the data is created, and how can I do that securely, at scale, and with the right price. And then we can speak about how we're doing that. >> Yeah, yeah. But before that, we have to really back up and understand what Dell is doing with Project Frontier, which is an Edge operations platform to simplify your Edge use cases. Now, Pierluca and his team have a number of verticalized applications. You want to be able to securely deploy those, you know, at the Edge. But you need a software platform that's going to simplify both the life cycle management and the security at the Edge, with the ability to construct and deploy distributed applications. Customers are looking to derive value near the point of generation of data. We see a massive explosion of data. But in particular, what's different about the Edge is the different computing locations, and the constraints that are on those locations. You know, for example, in a far Edge environment, the people that service that equipment are not trained in IT. They're trained in the safety and security protocols of that environment. So you can't necessarily apply the same IT techniques when you're managing infrastructure and deploying applications, or servicing, in those locations. 
So Frontier was designed to solve for those constraints. You know, often we see competitors that are doing similar things, that are starting from an IT mindset, and trying to shift down to cover Edge use cases. What we've done with Frontier, is actually first understood the constraints that they have at the Edge. Both the operational constraints and technology constraints, the service constraints, and then came up with a, an architecture and technology platform that allows them to start from the Edge, and bleed into the- >> So I'm laughing because you guys made the same mistake. And you, I think you learned from that mistake, right? You used to take X86 boxes and throw 'em over the fence. Now, you're building purpose-built systems, right? Project Frontier I think is an example of the learnings. You know, you guys an IT company, right? Come on. But you're learning fast, and that's what I'm impressed about. >> Well Glenn, of course we're here at MWC, so it's all telecom, telecom, telecom, but really, that's a subset of Edge. >> Yes. >> Fair to say? >> Yes. >> Can you give us an example of something that is, that is, orthogonal to, to telecom, you know, maybe off to the side, that maybe overlaps a little bit, but give us an, give us an example of Edge, that isn't specifically telecom focused. >> Well, you got the, the Edge verticals. and Pierluca could probably speak very well to this. You know, you got manufacturing, you got retail, you got automotive, you got oil and gas. Every single one of them are going to make different choices in the software that they're going to use, the hyperscaler investments that they're going to use, and then write some sort of automation, you know, to deploy that, right? And the Edge is highly fragmented across all of these. So we certainly could deploy a private wireless 5G solution, orchestrate that deployment through Frontier. 
We can also orchestrate other use cases like connected worker, or overall equipment effectiveness in manufacturing. But Pierluca, you have a number. >> Well, just to be clear, from your perspective, the whole idea of, for example, private 5G, it's a feature- >> Yes. >> That might be included. It's a network topology, a network function that might be a feature of an Edge environment. >> Yes. But it's not the center of the discussion. >> So, it enables the outcome. >> Yeah. >> Okay. >> So this week is a clear example where we confirmed and established this. As you say correctly, we learned very fast, right? We brought people in who came from industries that were not the IT industry. And we are Dell, so we have the luxury to be able to interview hundreds of customers that are just now trying to connect the OT with the IT. And so what we learned is, really, at the Edge there are different personas. The person that decides what to do at the Edge is not the normal IT administrator, is not the normal telco. >> Who is it? Is it an engineer, or is it... >> It's, for example, the store manager. >> Yeah. >> It's, for example, the person that is responsible for the manufacturing process. Those people are not technology people by any means. But they have a business goal in mind. Their goal is, "I want to raise my productivity by 30%," hence, I need to have a preventive maintenance solution. How do we prescribe this preventive maintenance solution? He doesn't prescribe the preventive maintenance solution. He goes out to a consultant, or does it himself, to deploy that solution, and he chooses a different fit. Now, the example that I was making about our houses: all of us, we have connected devices. 
The fact that in my house I have a solar system that produces energy, the only thing I care about is that I can read, on my phone, how much energy I produce and how much energy I send back to get paid. That's the only thing. The fact that inside there is compute that is called Dell or something else is not important to me. Same persona. Now, if I can solve the security challenge that the SI, or the user, needs to solve to implement this technology, because it goes everywhere; and I can manage this extensively, and I can put the supply chain of Dell on top of that; and I can go to every part of the world, no matter if I'm in Papua New Guinea or I have an oil rig in Texas; that's the winning strategy. That's why people are very interested in this, including telco. The B2B business in telco is looking very, very hard at how they recoup the investment in 5G. One of the ways is to reach out with solutions. And if I can control and deploy things, more than just SD-WAN or other things, or private mobility, that's the key. >> So you said manufacturing, retail, automotive, oil and gas. You have solutions for each of those, or you're building those, or... >> Right now we have a solution for manufacturing with, for example, PTC. That is the biggest company. It's actually based in Boston. >> Yeah. Yeah, it is. There's a company that the market's just coming right to them. >> We have another very interesting solution with Litmus, a startup that also does manufacturing aggregation. We have retail with Deep North. So we can do detection in the store: how many people pass, what they're doing, all of that. And all these solutions, when we have Frontier in the market, will also be in Frontier. We are also expanding to energy, and we're going vertical by vertical. But what did we really learn, right? You said, you know, you are an IT company. To me, the Edge is a pre-virtualization area. 
It's like when we had, you know, I'm, I've been in the company for 24 years coming from EMC. The reality was before there was virtualization, everybody was starting his silo. Nobody thought about, "Okay, I can run this thing together "with security and everything, "but I need to do it." Because otherwise in a manufacturing, or in a shop, I can end up with thousand of devices, just because someone tell to me, I'm a, I'm a store manager, I don't know better. I take this video surveillance application, I take these things, I take a, you know, smart building solution, suddenly I have five, six, seven different infrastructure to run this thing because someone say so. So we are here to democratize the Edge, to secure the Edge, and to expand. That's the idea. >> So, the Frontier platform is really the horizontal platform. And you'll build specific solutions for verticals. On top of that, you'll, then I, then the beauty is ISV's come in. >> Yes. >> 'Cause it's open, and the developers. >> We have a self certification program already for our solution, as well, for the current solution, but also for Frontier. >> What does that involve? Self-certification. You go through you, you go through some- >> It's basically a, a ISV can come. We have a access to a lab, they can test the thing. If they pass the first screen, then they can become part of our ecosystem very easily. >> Ah. >> So they don't need to spend days or months with us to try to architect the thing. >> So they get the premature of being certified. >> They get the Dell brand associated with it. Maybe there's some go-to-market benefits- >> Yes. >> As well. Cool. What else do we need to know? >> So, one thing I, well one thing I just want to stress, you know, when we say horizontal platform, really, the Edge is really a, a distributed edge computing problem, right? And you need to almost create a mesh of different computing locations. 
So for example, even though Dell has Edge optimized infrastructure, that we're going to deploy and lifecycle manage, customers may also have compute solutions, existing compute solutions in their data center, or at a co-location facility that are compute destinations. Project Frontier will connect to those private cloud stacks. They'll also collect to, connect to multiple public cloud stacks. And then, what they can do, is the solutions that we talked about, they construct that using an open based, you know, protocol, template, that describes that distributed application that produces that outcome. And then through orchestration, we can then orchestrate across all of these locations to produce that outcome. That's what the platform's doing. >> So it's a compute mesh, is what you just described? >> Yeah, it's, it's a, it's a software orchestration mesh. >> Okay. >> Right. And allows customers to take advantage of their existing investments. Also allows them to, to construct solutions based on the ISV of their choice. We're offering solutions like Pierluca had talked about, you know, in manufacturing with Litmus and PTC, but they could put another use case that's together based on another ISV. >> Is there a data mesh analog here? >> The data mesh analog would run on top of that. We don't offer that as part of Frontier today, but we do have teams working inside of Dell that are working on this technology. But again, if there's other data mesh technology or packages, that they want to deploy as a solution, if you will, on top of Frontier, Frontier's extensible in that way as well. >> The open nature of Frontier is there's a, doesn't, doesn't care. It's just a note on the mesh. >> Yeah. >> Right. Now, of course you'd rather, you'd ideally want it to be Dell technology, and you'll make the business case as to why it should be. >> They get additional benefits if it's Dell. 
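The template-driven orchestration described above — a declarative description of a distributed application, matched against a mesh of compute destinations (on-prem edge nodes, co-location racks, public cloud) — can be sketched roughly as follows. This is purely illustrative: the destination names, template fields, and scheduling rule are invented for the example and are not Project Frontier's actual API.

```python
# Illustrative sketch of a software-orchestration mesh: a declarative
# template maps each workload of a distributed application to the first
# registered compute destination that satisfies its constraints.

DESTINATIONS = [
    {"name": "store-edge-01", "location": "edge", "gpu": False},
    {"name": "colo-rack-07", "location": "colo", "gpu": True},
    {"name": "cloud-west", "location": "public-cloud", "gpu": True},
]

TEMPLATE = [  # the distributed application, described declaratively
    {"workload": "camera-inference", "needs": {"location": "edge"}},
    {"workload": "model-retraining", "needs": {"gpu": True}},
]

def schedule(template, destinations):
    """Assign each workload to the first destination meeting its needs."""
    plan = {}
    for item in template:
        for dest in destinations:
            if all(dest.get(k) == v for k, v in item["needs"].items()):
                plan[item["workload"]] = dest["name"]
                break
    return plan

print(schedule(TEMPLATE, DESTINATIONS))
# → {'camera-inference': 'store-edge-01', 'model-retraining': 'colo-rack-07'}
```

A real orchestrator would also handle placement failures, lifecycle management, and re-scheduling when a destination goes offline; the point here is only the shape of the template-to-destination matching.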
Pierluca talked a lot about, you know, deploying infrastructure outside the walls of an IT data center. You know, this stuff can be tampered with. Somebody can move it to another room, somebody can open up. In the supply chain with, you know, resellers that are adding additional people, can open these devices up. We're actually deploying using an Edge technology called Secure Device Onboarding. And it solves a number of things for us. We, as a manufacturer can initialize the roots of trust in the Dell hardware, such that we can validate, you know, tamper detection throughout the supply chain, and securely transfer ownership. And that's different. That is not an IT technique. That's an edge technique. And that's just one example. >> That's interesting. I've talked to other people in IT about how they're using that technique. So it's, it's trickling over to that side of the business. >> I'm almost curious about the friction that you, that you encounter because the, you know, you paint a picture of a, of a brave new world, a brave new future. Ideally, in a healthy organization, they have, there's a CTO, or at least maybe a CIO, with a CTO mindset. They're seeking to leverage technology in the service of whatever the mission of the organization is. But they've got responsibilities to keep the lights on, as well as innovate. In that mix, what are you seeing as the inhibitors? What's, what's the push back against Frontier that you're seeing in most cases? Is it, what, what is it? >> Inside of Dell? >> No, not, I'm saying out, I'm saying with- >> Market friction. >> Market, market, market friction. What is the push back? >> I think, you know, as I explained, do yourself is one of the things that probably is the most inhibitor, because some people, they think that they are better already. They invest a lot in this, and they have the content. But those are again, silo solutions. 
So, if you go into some of the huge things that they have already established, thousands of stores and stuff like that, there is an opportunity there, because they also want to have a refresh cycle. So when we speak about software, software, software: when you are at the Edge, the software needs to run on something that is there. So the combination that we offer, controlling the security of the hardware plus the operating system, and providing an end-to-end platform, allows them to solve a lot of problems that today they are solving by themselves. Now, I met a lot of customers, one actually here in Spain, I will not name them, but it's a large automotive company. They have the same challenge. They try to build, but the problem is, this is just for them. And they want to use something that is backed up and provided with the Dell service, Dell's supply chain capability all over the world, and the diversity of the portfolio we have. These guys right now need to go out and find different types of compute, or try to adjust things, or they need to have 20 people there just to prepare the devices. We will take out all of this. So I think the majority of the pushback is from people that have already established infrastructure, and they want to use that. But really, there is an opportunity here. Because, as I said, IT/OT coming together now, it's a reality. Three years ago when we had our initiative, they pointed that out, sarcastically. We, we- >> Just trying to be honest. (laughing) >> I can't let you get away with that. >> And we failed because it was too early. And we were too focused on pushing ourselves to the boundary of the IoT. This platform is open. You want to run EdgeX, you run EdgeX; you want OpenVINO, you run OpenVINO; you want Microsoft IoT, you run Microsoft IoT. We do not prescribe the top. We are locking down the bottom. >> What you described is the inertia of sunk dollars, or sunk euros, into an infrastructure, and now they're hanging onto that. 
>> Yeah. >> But, I mean, you know, I, when we say horizontal, we think scale, we think low cost, at volume. That will, that will win every time. >> There is a simplicity at scale, right? There is a, all the thing. >> And the, and the economics just overwhelm that siloed solution. >> And >> That's inevitable. >> You know, if you want to apply security across the entire thing, if you don't have a best practice, and a click that you can do that, or bring down an application that you need, you need to touch each one of these silos. So, they don't know yet, but we going to be there helping them. So there is no pushback. Actually, this particular example I did, this guy said you know, there are a lot of people that come here. Nobody really described the things we went through. So we are on the right track. >> Guys, great conversation. We really appreciate you coming on "theCUBE." >> Thank you. >> Pleasure to have you both. >> Okay. >> Thank you. >> All right. And thank you for watching Dave Vellante for Dave Nicholson. We're live at the Fira. We're winding up day four. Keep it right there. Go to siliconangle.com. John Furrier's got all the news on "theCUBE.net." We'll be right back right after this break. "theCUBE," at MWC 23. (outro music)

Published Date: Mar 2, 2023


SENTIMENT ANALYSIS:

ENTITIES

Entity | Category | Confidence
Telco | ORGANIZATION | 0.99+
Dave Vellante | PERSON | 0.99+
Dan Cummins | PERSON | 0.99+
Dave Nicholson | PERSON | 0.99+
Spain | LOCATION | 0.99+
Dell | ORGANIZATION | 0.99+
Elias | PERSON | 0.99+
Pierluca | PERSON | 0.99+
Texas | LOCATION | 0.99+
Papua New Guinea | LOCATION | 0.99+
Pierluca Chiodelli | PERSON | 0.99+
30% | QUANTITY | 0.99+
Boston | LOCATION | 0.99+
Dave Nicholson | PERSON | 0.99+
Glenn | PERSON | 0.99+
telco | ORGANIZATION | 0.99+
Dell Technologies | ORGANIZATION | 0.99+
30 years | QUANTITY | 0.99+
Dave | PERSON | 0.99+
Frontier | ORGANIZATION | 0.99+
Edge | ORGANIZATION | 0.99+
John Furrier | PERSON | 0.99+
Litmus | ORGANIZATION | 0.99+
20 people | QUANTITY | 0.99+
five | QUANTITY | 0.99+
hundreds | QUANTITY | 0.99+
Barcelona | LOCATION | 0.99+
24 years | QUANTITY | 0.99+
EMC | ORGANIZATION | 0.99+
PTC | ORGANIZATION | 0.99+
siliconangle.com | OTHER | 0.99+
one example | QUANTITY | 0.99+
this week | DATE | 0.99+
five minutes | QUANTITY | 0.99+
Microsoft | ORGANIZATION | 0.99+
first screen | QUANTITY | 0.98+
six | QUANTITY | 0.98+
both | QUANTITY | 0.98+
one | QUANTITY | 0.98+
Pier | PERSON | 0.98+
seven | QUANTITY | 0.98+
Three years ago | DATE | 0.98+
Edge | TITLE | 0.98+
OpenVINO | TITLE | 0.97+
Project Frontier | ORGANIZATION | 0.97+
first | QUANTITY | 0.97+
thousand | QUANTITY | 0.97+
Both | QUANTITY | 0.96+
first thing | QUANTITY | 0.96+
EdgeX | TITLE | 0.96+

Bill Pearson, Intel | CUBE Conversation, August 2020


 

>> Narrator: From theCUBE studios in Palo Alto and Boston, connecting with our leaders all around the world. This is a theCUBE conversation. >> Welcome back everybody. Jeff Frick here with theCUBE. We are in our Palo Alto studios today. We're still getting through COVID; thankfully media was deemed a necessary industry, so we've been able to come in and keep a small COVID crew, but we can still reach out to the community, and through the magic of the internet and cameras on laptops, we can reach out and touch base with our friends. So we're excited to have somebody who's talking about and working on kind of the next big edge, the next big cutting-edge thing going on in technology. And that's the Internet of Things. You've heard about it, the industrial Internet of Things; there are a lot of different words for it. But at the foundation of it is this company, Intel. We're happy to have joining us Bill Pearson. He is the Vice President of Internet of Things, often said IoT, for Intel. Bill, great to see you. >> Same, Jeff. Nice to be here. 
But the thing that's so interesting on the things part of the Internet of Things and even though people are things too, is that the scale and the pace of data that's coming off, kind of machine generated activity versus people generated is orders of magnitude higher in terms of the frequency, the variety, and all kind of your classic big data meme. So that's a very different challenge then, you know, kind of the growth of data that we had before and the types of data, 'cause it's really gone kind of exponential across every single vector. >> Absolutely. It has, I mean, we've seen estimates that data is going to increase by about five times as much as it is today, over the next, just a couple years. So it's exponential as you said. >> Right. The other thing that's happened is Cloud. And so, you know, kind of breaking the mold of the old mold roar, all the compute was either in your mini computer or data center or mainframe or on your laptop. Now, you know, with Cloud and instant connectivity, you know, it opens up a lot of different opportunities. So now we're coming to the edge and Internet of Things. So when you look at kind of edge in Internet of Things, kind of now folding into this ecosystem, you know, what are some of the tremendous benefits that we can get by leveraging those things that we couldn't with kind of the old infrastructure and our old way kind of gathering and storing and acting on data? >> Yeah. So one of the things we're doing today with the edge is really bringing the compute much closer to where all the data is being generated. So these sensors and devices are generating tons and tons of data and for a variety of reasons, we can't send it somewhere else to get processed. You know, there may be latency requirements for that control loop that you're running in your factory or there's bandwidth constraints that you have, or there's just security or privacy reasons to keep it onsite. 
And so you've got to process a lot of this data onsite and maybe some estimates or maybe half of the data is going to remain onsite here. And when you look at that, you know, that's where you need compute. And so the edge is all about taking compute, bringing it to where the data is, and then being able to use the intelligence, the AI and analytics to make sense of that data and take actions in real time. >> Right, right. But it's a complicated situation, right? 'Cause depending on where that edge is, what the device is, does it have power? Does it not have power? Does it have good connectivity? Does it not have good connectivity? Does it have even the ability to run those types of algorithms or does it have to send it to some interim step, even if it doesn't have, you know, kind of the ability to send it all the way back to the Cloud or all the way back to the data center for latency. So as you kind of slice and dice all these pieces of the chain, where do you see the great opportunity for Intel, where's a good kind of sweet spot where you can start to bring in some compute horsepower and you can start to bring in some algorithmic processing and actually do things between just the itty-bitty sensor at the itty-bitty end of the chain versus the data center that's way, way upstream and far, far away. >> Yeah. Our business is really high performance compute and it's this idea of taking all of these workloads and bringing them in to this high performance compute to be able to run multiple software defined workloads on single boxes, to be able to then process and analyze and store all that data that's being created at the edge, do it in a high performance way. And whether that's a retail smart shelf, for example, that we can do realtime inventory on that shelf, as things are coming and going, or whether it's a factory and somebody's doing, you know,real time defect detection of something moving across their textile line. 
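The pattern Pearson describes — processing high-rate sensor data where it is produced, keeping the raw samples local, and uplinking only a compact result — can be sketched in a few lines. The field names, threshold, and readings below are invented for illustration:

```python
# Sketch of edge-local processing: reduce a burst of raw sensor
# readings to the small summary worth sending upstream. The raw
# readings themselves never leave the site.

def summarize_onsite(readings, limit=100.0):
    """Summarize one burst of readings; flag values needing local action."""
    alerts = [r for r in readings if r > limit]  # handled at the edge, in real time
    return {
        "count": len(readings),
        "mean": sum(readings) / len(readings),
        "alerts": len(alerts),
    }

print(summarize_onsite([98.2, 99.1, 104.7, 97.5]))
# → {'count': 4, 'mean': 99.875, 'alerts': 1}
```

The bandwidth, latency, and privacy constraints mentioned in the interview all fall out of this shape: the control-loop decision (the alert) happens onsite, and only the summary crosses the network.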
So all of that comes down to being able to have the compute horsepower, to make sense of the data and do something with it. >> Right, right. So you wouldn't necessarily like in your shelf example that the compute might be done there at the local store or some aggregation point beyond just that actual, you know, kind of sensor that's underneath that one box of tide, if you will. >> Absolutely. Yeah, you could have that on-prem, a big box that does multiple shelves, for example. >> Okay, great. So there's a great example and you guys have the software development kit, you have a lot of resources for developers and in one of the case studies that I just wanted to highlight before we jump into the dev side was I think Audi was the customer. And it really illustrates a point that we talked about a lot in kind of the big data meme, which is, you know, people used to take action on a sample of data after the fact. And I think this case here we're talking about running 1,000 cars a day through this factory, they're doing so many welds, 5 million welds a day, and they would pull one at the end of the day, sample a couple welds and did we have a good day or not? Versus what they're doing now with your technology is actually testing each and every weld as it's being welded, based on data that's coming off the welding machine and they're inspecting every single weld. So I just love you've been at this for a long time. When you talk to customers about what is possible from a business point of view, when you go from after the fact with a sample of data, to in real time with all the data, how that completely changes your view and ability to react to your business. >> Yeah. I mean, it makes people be able to make better decisions in real time. 
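The shift in the Audi example — from sampling a few welds after the fact to pass/failing every weld in-line from machine telemetry — is easy to sketch. The telemetry fields and limits here are made up for illustration; a real system would derive them from the welding process:

```python
# Sketch of 100% in-line inspection: every weld is checked the moment
# its telemetry arrives, instead of sampling a couple at end of day.

def weld_ok(weld, max_resistance=0.8, min_current=95.0):
    """Pass/fail a single weld from its machine telemetry."""
    return weld["resistance"] <= max_resistance and weld["current"] >= min_current

stream = [  # telemetry arriving weld by weld
    {"id": 1, "resistance": 0.55, "current": 101.0},
    {"id": 2, "resistance": 0.91, "current": 99.0},   # too resistive
    {"id": 3, "resistance": 0.60, "current": 90.0},   # current dipped
]

defects = [w["id"] for w in stream if not weld_ok(w)]
print(defects)  # welds flagged as they occur, not hours later
# → [2, 3]
```

The business change the interview highlights is exactly this loop: the defect is caught on weld number 2, not discovered in a sampled pull at the end of a 5-million-weld day.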
You know, as you've got cameras on things like textile manufacturers or footwear manufacturers, or even these realtime inventory examples you mentioned, people are going to be able to make and can make decisions in real time about how to stock that shelf, what to order about what to pull off the line, am I getting a good product or not? And this has really changed, as you said, we don't have to go back and sample anymore. You can tell right now as that part is passing through your manufacturing line, or as that item is sitting on your shelf, what's happening to it. It's really incredible. >> So let's talk about developers. So you've got a lot of resources available for developers and everyone knows Intel obviously historically in PCs and data centers. And you would do what they call design wins back when I was there, many moons ago, right? You try to get a design win and then, you know, they're going to put your microprocessors and a bunch of other components in a device. When you're trying to work with, kind of Cutting Edge Developers in kind of new fields and new areas, this feels like a much more direct touch to the actual people building the applications than the people that are really just designing the systems of which Intel becomes a core part of. I wonder if you could talk about, you know, the role developers and really Intel's outreach to developers and how you're trying to help them, you know, kind of move forward in this new crazy world. >> Yeah, developers are essential to our business. They're essential to IoT. Developers, as you said, create the applications that are going to really make the business possible. And so we know the value of developers and want to make sure that they have the tools and resources that they need to use our products most effectively. 
We've done some things around the OpenVINO toolkit as an example, to really try and simplify and democratize AI applications so that more developers can take advantage of this and, you know, take the ambitions that they have to do something really interesting for their business, and then go put it into action. And our whole purpose is making sure we can actually accomplish that. >> Right. So let's talk about OpenVINO, it's an interesting topic. I actually found out what OpenVINO means: Open Visual Inference and Neural Network Optimization toolkit. So it's a lot about computer vision. And computer vision is an interesting early AI application that I think a lot of people are familiar with through Google Photos or other things where, you know, suddenly they're putting together little highlight movies for you, or they're pulling together all the photos of a particular person or a particular place. So computer vision is pretty interesting, and inference is a special subset of AI. So I wonder, you know, you guys are the team behind OpenVINO, where do you see the opportunities in vision? What are some of the instances you're seeing of developers out there doing innovative things around computer vision? >> Yeah, there's a whole variety of use cases with computer vision. You know, one that we talked about earlier here was defect detection. There's a company that we work with that has a 360 degree view; they use cameras all around their manufacturing line. From there, they know what a good part looks like, and using inference and OpenVINO, they can tell when a bad part goes through, or there's a defect in their line, and they can go and pull that and make corrections as needed. We've also seen, you know, use cases like smart shopping, where there's point-of-sale fraud detection. You know, is the item being scanned the same as the item that is actually going through the line?
And so we can be much smarter about understanding retail. One example that I saw was a customer who was trying to detect whether it was vodka or potatoes being scanned in an automated checkout system. And again, using cameras and OpenVINO, they can tell the difference. >> We haven't talked about natural language processing yet, we're still sticking with computer vision. I know one of the areas you're interested in, and it's only going to increase in importance, is education. Especially with what's going on, I keep waiting for someone to start rolling out some national, you know, best-practice education courses for kindergartners and third graders and sixth graders, and all these poor teachers that are learning to teach on the fly from home. You guys are doing a lot of work in education; I wonder if you can share, I think you're doing some work with Udacity. What are you doing? Where do you see the opportunity to apply some of this AI and IoT in education? >> Yeah, we launched the Nanodegree with Udacity, and it's all about OpenVINO and Edge AI. The idea is, again, to get more developers educated on this technology: take a leader like Udacity, partner with them to make the coursework available, and get more developers understanding, using, and building things with Edge AI. And so we partnered with them as part of their million-developer goal; we're trying to get as many developers as possible through that. >> Okay. And I would be remiss if we talked about IoT and didn't throw 5G into the conversation. So 5G is a really big deal; I know Intel has put a ton of resources behind it and has been talking about it for a long, long time. You know, I think the huge value in 5G is a lot around IoT, as opposed to my handset going faster, which is funny, given that they're actually releasing 5G handsets out there.
But when you look at 5G combined with the other capabilities in IoT, again, how do you see 5G being this kind of step function in the ability to do real-time analysis and make real-time business decisions? >> Well, I think it brings more connectivity, certainly, and bandwidth, and reduces latency. But the cool thing about it is when you look at the applications of it. You know, we talked about factories; a lot of those factories may want to have private 5G networks running inside that factory, running all the machines or robots or things in there. And so, you know, it brings capabilities that actually make a difference in the world of IoT and in the things that developers are trying to build. >> That's great. So before I let you go, you've been at this for a while, you've been at Intel for a while. You've seen a lot of big sweeping changes come through the industry. As you sit back with a little bit of perspective, and it's funny, even IoT, like you said, you've been talking about it for five years, and 5G, we've been waiting for it, but the waves keep coming, right? That's kind of the fun of being in this business. As you sit there today, looking forward the next couple of years, four or five years, what has surprised you beyond compare, and what are you still surprised is a little bit lagging, where you would have expected to see more progress at this point? >> You know, to me the incredible thing about the computing industry is just the insatiable demand that the world has for compute. It seems like our customers always come up with more and more uses for this compute power. You know, as we've talked about data and the exponential growth of data, now we need to process and analyze and store that data.
It's impressive to see developers just constantly thinking about new ways to apply their craft and, you know, new ways to use all that available computing power. And, you know, I'm delighted, 'cause I've been at this for a while, as you said, and I just see this continuing as far as the eye can see. >> Yeah, yeah. I think you're right, there's no shortage of opportunity. I mean, the data explosion is kind of funny; the data has always been there, we just weren't keeping track of it before. And the other thing, as I look at your Internet of Things toolkit, you guys have such a broad portfolio now. A lot of times people think of Intel pretty much as a CPU company, but as you mentioned, you've got FPGAs and VPUs and vision solutions. Intel has really done a good job of broadening the portfolio to go after, you know, this kind of sharding, if you will, of all these different types of compute applications that have very different demands in terms of power and bandwidth and utilization (indistinct). >> Yeah, absolutely. The various compute architectures are really there to help our customers with their needs, whether it's high performance or low power, or a mixture of both. Being able to use all of those heterogeneous architectures with a tool like OpenVINO, so you can write once and then run your application across any of those architectures, helps simplify the life of our developers, but also gives them the compute performance the way that they need it. >> Alright Bill, well, keep at it. Thank you for all your hard work, and hopefully it won't be five years before we're checking in to see how far this IoT thing has gone. >> Hopefully not, thanks Jeff. >> Alright. He's Bill, I'm Jeff, you're watching theCUBE. Thanks for watching, we'll see you next time. (upbeat music)

Published Date : Sep 1 2020



Naveen Rao, Intel | AWS re:Invent 2019


 

>> Announcer: Live from Las Vegas, it's theCUBE! Covering AWS re:Invent 2019. Brought to you by Amazon Web Services and Intel, along with its ecosystem partners. >> Welcome back to the Sands Convention Center in Las Vegas everybody, you're watching theCUBE, the leader in live tech coverage. My name is Dave Vellante, I'm here with my cohost Justin Warren, this is day one of our coverage of AWS re:Invent 2019, Naveen Rao here, he's the corporate vice president and general manager of artificial intelligence, AI products group at Intel, good to see you again, thanks for coming to theCUBE. >> Thanks for having me. >> Dave: You're very welcome, so what's going on with Intel and AI, give us the big picture. >> Yeah, I mean actually the very big picture is I think the world of computing is really shifting. The purpose of what a computer is made for is actually shifting, and I think from its very conception, from Alan Turing, the machine was really meant to be something that recapitulated intelligence, and we took sort of a divergent path where we built applications for productivity, but now we're actually coming back to that original intent, and I think that hits everything that Intel does, because we're a computing company, we supply computing to the world, so everything we do is actually impacted by AI, and will be in service of building better AI platforms, for intelligence at the edge, intelligence in the cloud, and everything in between. >> It's really come full circle, I mean, when I first started this industry, AI was the big hot topic, and really, Intel's ascendancy was around personal productivity, but now we're seeing machines replacing cognitive functions for humans, that has implications for society. But there's a whole new set of workloads that are emerging, and that's driving, presumably, different requirements, so what do you see as the sort of infrastructure requirements for those new workloads, what's Intel's point of view on that? 
Well, so maybe let's focus that on the cloud first. Any kind of machine learning algorithm typically has two phases to it. One is called training or learning, where we're really iterating over large data sets to fit model parameters. And once that's been done to the satisfaction of whatever performance metrics are relevant to your application, it's rolled out and deployed; that phase is called inference. These two are actually quite different in their requirements, in that inference is all about the best performance per watt: how much processing can I shove into a particular time and power budget? On the training side, it's much more about what kind of flexibility I have for exploring different types of models, and training them very, very fast. When this field started taking off in 2013, 2014, training a model back then would typically take a month or so. Those models now take minutes to train, but the models have grown substantially in size, so we've still kind of gone back to a couple of weeks of training time, and anything we can do to reduce that is very important. >> And why the compression, is that because of just so much data? >> It's data, the sheer amount of data, the complexity of data, and the complexity of the models. A very rough categorization of the complexity can be the number of parameters in a model. Back in 2013, there were, call it 10 million, 20 million parameters, which was very large for a machine learning model. Now they're in the billions; one or two billion is sort of the state of the art. To give you bearings on that, the human brain is about a 300 to 500 trillion parameter model, so we're still pretty far away from that. We've got a long way to go.
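As a toy illustration of the two phases Naveen describes, here is a sketch (my own, not Intel's code) where "training" iterates over a small labeled data set to fit the parameters of a 1-D logistic regression, and "inference" is then just a cheap forward pass with the parameters frozen:

```python
import math
import random

random.seed(1)

# Phase 1 -- training/learning: iterate over a labeled data set to fit
# model parameters (here a 1-D logistic regression, standing in for a
# much larger network).
data = [(random.gauss(-2, 1), 0) for _ in range(200)] + \
       [(random.gauss(+2, 1), 1) for _ in range(200)]

w, b, lr = 0.0, 0.0, 0.1

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

for _ in range(50):                     # epochs of gradient ascent
    for x, y in data:
        p = sigmoid(w * x + b)
        w += lr * (y - p) * x           # log-likelihood gradient
        b += lr * (y - p)

# Phase 2 -- inference: parameters are frozen and deployed; from here on
# what matters is the performance per watt of this single forward pass.
def predict(x):
    return 1 if sigmoid(w * x + b) >= 0.5 else 0

print(predict(-3.0), predict(3.0))      # one point from each class
```

The asymmetry Naveen points to falls out directly: the training loop touches every example many times, while a deployed prediction is one multiply-add and a sigmoid.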
>> Yeah, so one of the things about these models is that once you've trained them, that then they do things, but understanding how they work, these are incredibly complex mathematical models, so are we at a point where we just don't understand how these machines actually work, or do we have a pretty good idea of, "No no no, when this model's trained to do this thing, "this is how it behaves"? >> Well, it really depends on what you mean by how much understanding we have, so I'll say at one extreme, we trust humans to do certain things, and we don't really understand what's happening in their brain. We trust that there's a process in place that has tested them enough. A neurosurgeon's cutting into your head, you say you know what, there's a system where that neurosurgeon probably had to go through a ton of training, be tested over and over again, and now we trust that he or she is doing the right thing. I think the same thing is happening in AI, some aspects we can bound and say, I have analytical methods on how I can measure performance. In other ways, other places, it's actually not so easy to measure the performance analytically, we have to actually do it empirically, which means we have data sets that we say, "Does it stand up to all the different tests?" One area we're seeing that in is autonomous driving. Autonomous driving, it's a bit of a black box, and the amount of situations one can incur on the road are almost limitless, so what we say is, for a 16 year old, we say "Go out and drive," and eventually you sort of learn it. Same thing is happening now for autonomous systems, we have these training data sets where we say, "Do you do the right thing in these scenarios?" And we say "Okay, we trust that you'll probably "do the right thing in the real world." 
But we know that Intel has partnered with AWS around autonomous driving with their DeepRacer project, and I believe on Thursday is the grand final. It was announced on theCUBE last year, I think, and there's been a whole bunch of competitions running all year, basically training models that run on this Intel chip inside a little model car that drives around a race track. So, speaking of empirical testing of whether or not it works, lap times give you a pretty good idea. What have you learned from that experience of having all of these people go out and learn how to use these ML models on a real live race car, racing around a track? >> I think there are several things. I mean, one thing is, when you turn loose a number of developers on a competitive thing, you get really interesting results, where people find creative ways to use the tools to try to win, so I always love that process; I think competition is how you push technology forward. On the tool side, it's actually more interesting to me that we had to come up with something that was adequately simple, so that a large number of people could get going on it quickly. You can't have somebody who spends a year just getting the basic infrastructure to work, so we had to put that in place. And really, I think that's still an iterative process; we're still learning what we can expose as knobs, what kinds of areas of innovation we allow the user to explore, and where we sort of lock it down to make it easy to use. So I think that's the biggest learning we get from this: how I can deploy AI in the real world, and what's really needed from a tool chain standpoint. >> Can you talk more specifically about what you guys each bring to the table with your collaboration with AWS? >> Yeah, AWS has been a great partner.
Obviously AWS has a huge ecosystem of developers, all kinds of different developers; I mean, web developers are one sort of developer, database developers are another, and AI developers are yet another. We're kind of partnering together to empower that AI base. What we bring from a technological standpoint is of course the hardware, our CPUs, which are AI-ready now, along with a lot of software that we've been putting out in open source. And then other tools like OpenVINO, which make it very easy to start using AI models on our hardware. So we tie that into the infrastructure that AWS is building for something like DeepRacer, and then help build a community around it, an ecosystem of developers. >> I want to go back to the point you were making about the black box, AI. People are concerned about that; they're concerned about explainability. Do you feel like that's a function of just the newness, that we'll eventually get over? And I mean, I can think of so many examples in my life where I can't really explain how I know something, but I know it, and I trust it. Do you feel like it's sort of a tempest in a teapot? >> Yeah, I think it depends on what you're talking about. If you're talking about the traceability of a financial transaction, we kind of need that, maybe for legal reasons, so even for humans we do that. You've got to write down everything you did, why you did this, why you did that; so we actually want traceability even for humans. In other places, I think it really is about the newness. Do I really trust this thing? I don't know what it's doing. Trust comes with use; after a while it becomes pretty straightforward. I mean, I think that's probably true for a cell phone. I remember the first smartphones coming out in the early 2000s: I didn't trust how they worked, I would never do a credit card transaction on them, these kinds of things. Now it's taken for granted; I've done it a million times, and I've never had any problems, right?
It's the opposite with social media, for most people. >> Maybe that's the opposite, let's not go down that path. >> I quite like Dr. Kate Darling's analogy from the MIT lab, which is that we already have AI, and we're quite used to them: they're called dogs. We don't fully understand how a dog makes a decision, and yet we use them every day, in collaboration with humans. So a dog sort of replaces a particular job, but then again it doesn't; I don't particularly want to go and sniff things all day long. So having AI systems that can actually take over some of those jobs, actually, that's kind of great. >> Exactly, and think about it like this: if we can build systems that are tireless, and we can basically give them more power and they keep going, that's a big win for us. And actually, the dog analogy is great, because I think, at least my eventual goal as an AI researcher is to make the interface for intelligent agents be like a dog: to train it like a dog, reinforce it for the behaviors you want, and keep pushing it in new directions that way, as opposed to having to write code that's kind of esoteric. >> Can you talk about GANs? What does GAN stand for, what does it mean? >> Generative Adversarial Networks. What this means is that you can think of it as two competing sides solving a problem. So if I'm trying to make a fake picture of you that makes it look like you have no hair, like me, you can see a Photoshop job and you can kind of tell it's not so great. So one side is trying to make the picture, and the other side is trying to guess whether it's fake or not. We have two neural networks that are working against each other: one's generating stuff, and the other one's saying, is it fake or not? And then eventually they keep improving each other; this one tells that one "No, I can tell," this one goes and tries something else, this one says "No, I can still tell."
Once the discerning network can't tell anymore, you've built something that's really good; that's sort of the general principle here. So we basically have two things fighting each other to get better and better at a particular task. >> Like deepfakes. >> I use that example because it is relevant in this case; that's kind of where deepfakes came from, from GANs.
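Naveen's two-network tug-of-war can be shown with an intentionally tiny sketch: a two-parameter "generator" tries to mimic samples from a target distribution, while a logistic "discriminator" tries to tell real from fake. Everything here (the 1-D data, the linear generator, the learning rate) is a made-up minimal example of the principle, not a practical GAN.

```python
import math
import random

random.seed(0)

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

# Real data ~ N(4, 0.5); the generator g(z) = a*z + b starts far from it.
a, b = 1.0, 0.0        # generator parameters
w, c = 0.0, 0.0        # discriminator d(x) = sigmoid(w*x + c)
lr = 0.01

for _ in range(5000):
    z = random.gauss(0, 1)
    real = random.gauss(4, 0.5)
    fake = a * z + b

    # Discriminator step: push d(real) toward 1 and d(fake) toward 0.
    dr, df = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * ((1 - dr) * real - df * fake)
    c += lr * ((1 - dr) - df)

    # Generator step: adjust (a, b) so the discriminator calls fake real.
    df = sigmoid(w * (a * z + b) + c)
    a += lr * (1 - df) * w * z
    b += lr * (1 - df) * w

gen_mean = sum(a * random.gauss(0, 1) + b for _ in range(2000)) / 2000
print(round(gen_mean, 1))  # the generator's output has drifted toward 4
```

Neither network ever sees a label saying "aim for 4"; the generator only sees whether it fooled the discriminator, which is exactly the "two things fighting each other" dynamic described above.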
No no no, what it means is one radiologist plus AI equals 100 radiologists. I can take that person's capabilities and scale them almost freely to millions of other people. It basically increases the accessibility of expertise; we can scale expertise, and that's a good thing. It solves problems like the ones we have in healthcare today. All right, that's where we should be going with this. >> Well, a good example would be, and probably part of the answer is "today": when will machines make better diagnoses than doctors? I mean, in some cases it probably exists today, but not broadly. But that's a good example, right? >> It is, but it's a tool, so I look at it more as giving a human doctor more data to make a better decision on. What AI really does for us is remove the limit on the amount of data on which we can make decisions. As a human, all I can do is read so much, or hear so much, or touch so much; that's my limit of input. If I have an AI system out there listening to billions of observations, and presenting data in a form that I can make better decisions on, that's a win. It allows us to actually move science forward, to move the accessibility of technologies forward. >> So, keeping the context of that timeframe I said, someday in our lifetimes, however you want to define that, when do you think, or do you think, that driving your own car will become obsolete? >> I don't know that it'll ever be obsolete, and I'm a little bit biased on this, because I actually race cars. >> Me too, and I drive a stick, so. >> I race them semi-professionally, so I don't want that to go away. But it's the same thing: we don't need to ride horses anymore, but we still do for fun, so I don't think it'll completely go away. Now, what I think will happen is that commutes will change; we will use autonomous systems for those, and I think five to seven years from now, we will be using autonomy much more on prescribed routes.
It won't be that it completely replaces a human driver, even in that timeframe, because it's a very hard problem to solve in a completely general sense. So, it's going to be a kind of gentle evolution over the next 20 to 30 years. >> Do you think that AI will change the manufacturing pendulum, and perhaps some of that would swing back to on-shore manufacturing, in this country anyway? >> Yeah, perhaps. I was in Taiwan a couple of months ago, and we're actually seeing that already: things that maybe were much more labor-intensive before, because of economic constraints, are becoming more mechanized using AI. AI as inspection: did this machine install this thing right? So you have an inspector tool and an AI machine building it; it's a little bit like a GAN, you can think of it that way, right? So this is happening already, and I think that's one of the good parts of AI: it takes away those harsh conditions that humans had to be in before to build devices.
Moving money between people, moving money between different types of assets, like stocks and Bitcoins and things like that, and I think AI, it's a critical component that people don't see, because it actually allows you to make sure that first you're doing a transaction that makes sense, when I move from this currency to that one, I have some sense of what's a real number. It's much harder to defraud, and that's a critical element to making these technologies work. So you need AI to actually make that happen. >> All right, we'll give you the last word, just maybe you want to talk a little bit about what we can expect, AI futures, or anything else you'd like to share. >> I think it's, we're at a really critical inflection point where we have something that works, basically, and we're going to scale it, scale it, scale it to bring on new capabilities. It's going to be really expensive for the next few years, but we're going to then throw more engineering at it and start bringing it down, so I start seeing this look a lot more like a brain, something where we can start having intelligence everywhere, at various levels, very low power, ubiquitous compute, and then very high power compute in the cloud, but bringing these intelligent capabilities everywhere. >> Naveen, great guest, thanks so much for coming on theCUBE. >> Thank you, thanks for having me. >> You're really welcome, all right, keep it right there everybody, we'll be back with our next guest, Dave Vellante for Justin Warren, you're watching theCUBE live from AWS re:Invent 2019. We'll be right back. (techno music)

Published Date : Dec 3 2019



Jonathan Ballon, Intel | AWS re:Invent 2018


 

>> Live from Las Vegas, it's theCUBE, covering AWS re:Invent 2018. Brought to you by Amazon Web Services, Intel, and their ecosystem partners. >> Oh, welcome back to theCUBE. Continuing coverage here from AWS re:Invent, as we start to wind down our coverage here on the second day. We'll be here tomorrow as well, live on theCUBE, bringing you interviews from Hall D at the Sands Expo. Along with Justin Warren, I'm John Walls, and we're joined by Jonathan Ballon, who's the vice president of the Internet of Things at Intel. Jonathan, thank you for being with us today. Good to see you. >> Thanks for having me, guys. >> All right, interesting announcement today, and last year it was all about DeepLens. This year it's about DeepRacer. Tell us about that.
We've got plans for next year of how we can turbo-charge the car. >> I love it. >> Right now it's baby steps, so to speak: basically giving the developer the chance to write a reinforcement learning model, an algorithm that helps them determine the optimum way that this car can move around a track. But you're not telling the car what the optimum way is; you're letting the car figure it out on its own. And that's really the key to reinforcement learning: you don't need a large, pre-labeled dataset to begin with. You're actually letting, in this case, a device figure it out for itself, and this becomes very powerful as a tool when you think about it being applied to various industries or use-cases where we don't know the answer today, but we can allow vast amounts of computing resources to run a reinforcement learning model over and over, perhaps millions of times, until they find the optimum solution. >> So how do you, I mean, that's a lot of input, right? That's a crazy number of variables. So, how do you do that? How do you, in this case, provide a car with all the variables that will come into play, how fast it goes, which direction it goes, on different axes and all those things, to make its own determinations? And how will that then translate to a real, specific case in the workplace? >> Well, the obvious parallel is of course autonomous driving. AWS had Formula One on stage today during Andy Jassy's keynote. That's also an Intel customer, and what Formula One does is they have the fastest cars in the world, and they have over 120 sensors on each car that are bringing in over a million pieces of data per second.
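To make the reinforcement learning piece concrete: in DeepRacer, the developer's main artifact is a reward function that the training service calls over and over while the virtual car explores the track. The sketch below is a minimal, illustrative example in the general shape AWS documents for DeepRacer, a Python `reward_function(params)` where `params` carries track-state values; the specific keys and weightings here are assumptions for illustration, not a competitive model.

```python
def reward_function(params):
    """Toy DeepRacer-style reward: prefer staying near the track center.

    The `params` keys used here (all_wheels_on_track, track_width,
    distance_from_center) follow the interface AWS documents for
    DeepRacer; the thresholds and weights are illustrative choices.
    """
    if not params["all_wheels_on_track"]:
        return 1e-3  # near-zero reward when the car leaves the track

    # Offset from the center line, as a fraction of half the track width.
    half_width = 0.5 * params["track_width"]
    offset = params["distance_from_center"] / half_width

    if offset <= 0.1:
        return 1.0   # hugging the center line
    elif offset <= 0.5:
        return 0.5   # acceptable drift
    else:
        return 0.1   # close to the edge


# Training calls this millions of times; the policy that maximizes
# cumulative reward is what "the car figures out on its own".
```

No dataset is supplied anywhere here, which is the point Jonathan makes: the developer only shapes the reward, and the optimum driving behavior emerges from repeated trials.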
Being able to process that vast amount of data that quickly, and it's a variety of data, audio data as well as visual data, and being able to use it to inform decisions in close to real time, requires very powerful compute resources. Those resources exist both in the cloud and close to the source of the data itself at the edge, in the physical environment. >> So, tell us a bit about the software that's involved here, 'cause people think of Intel, and some people don't know about the software heritage that Intel has. Intel inside isn't just the hardware chips; there's a lot of software that goes into this. So, what's the Intel angle here on the software that powers this kind of distributed learning? >> Absolutely, software is a very important part of any AI architecture, and for us it's a tremendous amount of investment, almost, perhaps, an equal investment in software as in hardware. In the case of what we announced today with DeepRacer and AWS, there are some toolkits that allow developers to better harness the compute resources on the car itself. Two things specifically. One is a tool called RL Coach, or Reinforcement Learning Coach, that is integrated into SageMaker, AWS's machine learning toolkit, and that allows them to get better performance in the cloud from the data coming off their model. And then we also have a toolkit called OpenVINO. It's not about drinking wine. >> Oh darn. >> Alright. >> Open means it's an open-source contribution that we made to the industry.
Vino, V-I-N-O, is Visual Inference and Neural Network Optimization, and this is a powerful tool, because so much of AI is about harnessing compute resources efficiently, and as more and more of the data that we bring into our compute environments originates in the physical world, it's really important to be able to do that in a cost-effective and power-efficient way. OpenVINO allows developers to isolate individual cores, or an integrated GPU on a CPU, without knowing anything about the hardware architecture, and it then allows them to apply different applications, algorithms, or inference workloads very efficiently onto that compute architecture, abstracted away from any knowledge of it. So, it's really designed for an application developer, who maybe is working with a data scientist who has built a neural network in a framework like TensorFlow, ONNX, or PyTorch, any tool that they're already comfortable with, to abstract away from the silicon and optimize their model onto this hardware platform, so it performs orders of magnitude better than what you would get from a more traditional GPU approach. >> Yeah, and that kind of decision-making about understanding chip architectures to be able to optimize how that works, that's some deep magic really. The amount of understanding you would need to do that as a human is enormous, but as a developer, I don't know anything about chip architectures. So it sounds like, and it's a thing we've been hearing over the last couple of days, these tools allow developers to have essentially superpowers, so you become an augmented intelligence yourself. Rather than just giving everything to an artificial intelligence, these tools actually augment the human intelligence and allow you to do things that you wouldn't otherwise be able to do. >> And that's, I think, the key to getting mass-market adoption of some of these AI implementations.
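The hardware abstraction described here can be sketched in a few lines of plain Python. To be clear, this is a toy stand-in and not OpenVINO's actual API; every name below is invented purely to illustrate the idea of targeting a device by name while the device-specific kernels stay hidden from the application developer.

```python
# Toy illustration (NOT the real OpenVINO API) of device-abstracted
# inference: the developer names a target device, and the toolkit maps
# the same model onto it without exposing hardware details.

def run_on_cpu(weights, x):
    # Stand-in for a CPU-optimized inference kernel.
    return sum(w * v for w, v in zip(weights, x))

def run_on_gpu(weights, x):
    # Stand-in for an integrated-GPU kernel; same math, different backend.
    return sum(w * v for w, v in zip(weights, x))

BACKENDS = {"CPU": run_on_cpu, "GPU": run_on_gpu}

class CompiledModel:
    """One interface per device target, as the application developer sees it."""
    def __init__(self, weights, device):
        if device not in BACKENDS:
            raise ValueError(f"unsupported device: {device}")
        self._kernel = BACKENDS[device]
        self._weights = weights

    def infer(self, x):
        return self._kernel(self._weights, x)

# Same application code, retargeted by a string: the abstraction the
# real toolkit provides over cores, integrated GPUs, and VPUs.
model_cpu = CompiledModel([0.5, 2.0], "CPU")
model_gpu = CompiledModel([0.5, 2.0], "GPU")
```

The real toolkit does the hard part, mapping a trained network onto cores, integrated GPUs, or VPUs, but the developer-facing shape is similar: load a model, name a device, call infer.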
So, for the last four or five years, since ImageNet solved the image-recognition problem and we got greater accuracy from computer models than we do from our own human eyes, AI was really limited to academia, large IT tech companies, or proofs-of-concept. It didn't really scale into production environments. But what we've seen over the last couple of years is really a democratization of AI by companies like AWS and Intel that are making tools available to developers, so they don't need to know how to code in Python to optimize a compute module, or, in many cases, they don't need to understand the fundamental underlying architectures. They can focus on whatever business problem they're trying to solve, or whatever AI use-case they're working on. >> I know you talked about DeepLens last year, and now we've got DeepRacer this year, and you've got the contest going on throughout this coming year with DeepRacer, and we're going to have a big race at AWS re:Invent 2019. So what's next? I mean, what are you thinking about conceptually to, I guess, build on what you've already started there? >> Well, I can't reveal what next year's, >> Well, that I understand. >> Project will be. >> But generally speaking. >> But what I can tell you is what's available today in these DeepRacer cars is a level playing field. Everyone's getting the same car, and they have essentially the same tool sets, but I've got a couple of pro-tips for your viewers if they want to win at some of these AWS Summits that are going to be around the world in 2019. Two pro-tips. One: they can leverage the OpenVINO toolkit to get much higher inference performance from what's already on that car. So, I encourage them to work with OpenVINO.
It's integrated into SageMaker, so they have easy access to it if they're an AWS developer. But also, we're going to allow an expansion, almost an accelerator, of the car itself, by being able to plug in an Intel Neural Compute Stick. We just released the second version of this stick. It's a USB form factor, with a Movidius Myriad X vision processing unit inside. This year's version is eight times more powerful than last year's version, and when they plug it into the car, all of that inference workload, all of those images and information coming off the sensors, will be put onto the VPU, allowing all the CPU and GPU resources to be used for other activities. It's going to allow that car to go at turbo speed. >> To really cook. >> Yeah. (laughing) >> Alright, so now you know, you have no excuse, right? I mean, Jonathan has shared the secret sauce, although I still think when you said OpenVINO you got Justin really excited. >> It is vino time. >> It is five o'clock, actually. >> Alright, thank you for being with us. >> Thanks for having me, guys. >> And good luck with DeepRacer for the coming year. >> Thank you. >> It looks like a really, really fun project. We're back with more, here at AWS re:Invent on theCUBE, live in Las Vegas. (rhythmic digital music)

Published Date: Nov 29, 2018

