Michael Greene, Intel - #SparkSummit - #theCUBE

>> Announcer: Live from San Francisco, it's theCUBE covering Spark Summit 2017. Brought to you by Databricks.

>> Welcome back to theCUBE. Continuing our coverage here at Spark Summit 2017. What a great lineup of guests. I can't wait to introduce this gentleman. We have Intel's VP of the Software and Services Group, Mr. Michael Greene. Michael, welcome.

>> Thank you for having me.

>> All right, we also have George with us over here, and George and I will both be peppering you with questions. Are you ready for that?

>> I am. I've got the salt to go with the pepper. (laughs)

>> Well, you just got off the stage. You did the keynote this morning. What do you think was the most important message you delivered in your keynote?

>> Well, it was interesting. One of the things that we're looking at with BigDL, the BigDL framework, is that we're hearing a lot about the challenges of making sure that these AI-type workloads scale easily. And when we open-sourced BigDL, we really designed it to leverage Spark's ability to scale massively from the beginning. So I thought that connected with several of the keynotes ahead of me: if this is your challenge, here is one of many solutions, but a very good one, that will let you take advantage of the scale that people already have in their infrastructure - lots of Xeons out there - and make sure they're fully utilized running the workloads of the future, AI.

>> Okay, so Intel, not just a hardware company. You do software, right? (laughs)

>> Well, you know, Intel's a solutions company, right? And hardware's awesome, but hardware without software is a brick. Maybe a warm one, but it doesn't do much-

>> Not a data brick.

>> That's right, not a data brick, just a brick.

>> And not melted down, either.

>> That's right, that's right. So sand without software doesn't go very far. And I see it as: software is used to ignite the hardware so that you actually get useful productivity out of it. And customers have problems to solve. It's rare that they come in and say, "Nope, I just need a nail," right? They're usually like, "I need a home." Well, you can't just provide the nail, you have to provide all the pieces, and one of the things that's exciting for me being part of Intel is that we provide silicon, of course, right? The processors - Xeon, accelerators - and now software, tools, frameworks, to make sure that a customer can really get the value of the entire solution.

>> Host: Okay, go ahead, George.

>> So Michael, help those of us who've been watching from afar but aren't up-to-date on the day-to-day tactics and strategy of what Intel's doing with (mumbles). Where does BigDL fit? And then the acquisition of the floating point (mumbles) technology, so that there's special-purpose acceleration on the chip. How do those two work together, along with the rest of the ecosystem?

>> Sure, great question. So if you think of Intel, really, we're always looking at how we can leverage Moore's Law to get more and more integrated into the solution. If you quickly step through a brief history: at one point we had the 386, which was a great integer processor, partnered with the 387 for floating point acceleration. The 486 combined the two, because we were able to leverage Moore's Law to bring them together, and we got a lot of reuse of the instruction set with the acceleration. As we bring in Altera - recently integrated into Intel - they come with a suite of incredible FPGAs and accelerators, as well as another company, Nervana, that also has accelerators, and we're looking at those special-case opportunities to accelerate the user experience. So we're going to continue to follow that trend and make sure that you have the general purpose capabilities where new workloads are coming in, and we really see a lot of growth in AI. As I think I said in the keynote, about 12x growth by 2020. We need to make sure that we have the silicon as well as the software, and that's where BigDL pulls those two together, to make sure that we're getting the full benefit of the solution.

>> So a couple of years ago, we were told that Intel actually thought there were going to be more Hadoop servers - Hadoop as an umbrella term for the ecosystem - than database servers in three to five years' time. When you look at deep learning, because we know it's so much more compute-intensive than traditional statistical machine learning, if you look out three to five years, how much of the compute cycles, the share of workloads, do you see deep learning comprising?

>> I think that in the last year, deep learning, or AI, as a workload was maybe about seven percent. But if you grow by 12x, it's definitely growing quickly. So what we're expecting is that AI will become inherent in pretty much every application. An example: at one point, facial detection was the new thing. Now you can't buy a camera that doesn't do it. So if you pull up your camera and you see the little square show up, it's just commonplace. We're expecting that AI will become an integral part of solutions, not a solution in and of itself. It's there to make software solutions smarter, it's there to make them go further. It's not there to be independent. It's like, "Wow, we've identified a cat." That's cool, but if we're identifying problems, or making sure that the autonomous delivery systems don't kill a cat, there's a little bit more that needs to go on, so it's going to be part of the solution.

>> What about the trade-off between processing at the edge and learning in the cloud? I mean, you can learn on the edge, you can learn in the cloud, you can do the analysis on either end of the run time. How do you guys see that being split up in the future?

>> Absolutely. I think that with deep learning training, there are always opportunities to go through vast amounts of data to figure out how to identify what's interesting, to identify new insights. Once you have those models trained, then you want to use them everywhere, and then we're switching from training to inference. Inference at the edge allows you to be more real-time. In some cases - imagine a smart camera - even from the camera's point of view, do I send the entire data stream to the data center? Well, maybe not. Let's assume it's being used for highway patrol. If you identify a car speeding, then send the information - except leave me out. (laughs) Kidding on that. But it's that kind of piece where you allow both sides to be smart. More information for the continual training in the cloud, but also more ability to add compute to the edge, so that we can do some really cool activities right at the edge, in real time, without having to send all the information.

>> If you had to describe to people working on architectures for the new distributed computing in IoT, what would an edge device look like in its hardware footprint, in terms of compute, memory, connectivity?

>> So in terms of connectivity, we're expecting an explosion of 5G. A lot of high bandwidth, multiple things being connected with some type of communication, 5G capability. It won't just be about, let's say, cars feeding back where they are from their GPS; it's going to be cars talking to other cars. Maybe one needs to move over a lane. Can they adjust? We're talking about an autonomous world. There's going to be so much interconnection through 5G, so I expect to see 5G show up in most edge devices. And to your point, I think it's very important to add that we expect edge devices to all have some kind of compute capability. Not just sensors, but the ability to sense and make some decisions based on what they're sensing. We're going to continue to see more and more compute go to the edge devices. So again, as we leverage the power of Moore's Law, we're going to be able to move that compute outward. Today, the cloud is just incredible with its collective compute power, but that will slowly move out. We've seen that move from mainframes to workstations to PCs to phones, and now to edge devices. I think that trend will continue, and we'll continue to see bigger data centers for other use cases that require deeper analysis. So from a developer's point of view, if you're working on an edge device, make sure it has great connectivity and compute.

>> So one last follow-up from me. Google is making a special effort to build their own framework, open-source TensorFlow, and then marry it to specialized hardware, Tensor Processing Units. So, specialization versus generalization. For someone who's running TPUs in the cloud, do you have a sense for whether, if they're learning TensorFlow models or TensorFlow-based models, there would be an advantage for that narrow set running on Tensor Processing Units? Or would that be supported just as well on Intel hardware?

>> You know, specialization is anything that's purpose-built. As you said, it's just not general purpose, but as I mentioned, over time the specialized capabilities slide into general purpose opportunities. Recently, we added AES-NI - instructions that accelerate the AES encryption algorithm - into our processors, very specialized for encryption/decryption. But because it became so generally used, it's now just part of our processor offering, it's just part of our instruction set. I expect to continue to see that trend: many things may start off specialized, which is great, it's a great way to innovate, and then, over time, if it becomes so widely used that everyone depends on it, it's no longer special purpose, and it slides into the general purpose opportunity. I think that will be a continuation. We've seen it since the dawn of the computer: specialized memory, specialized compute, specialized floating point capabilities are now just generally available. And so when we deploy things like BigDL, a lot of the benefit is that we know the Xeon processor has so much capability, because it has pulled in, over time, the best of the specialized use cases that are now generally used.

>> Great deep-dive questions, George. We have a couple of minutes left, so I know you brought a lot to this conference. They put you up on stage. So what were you hoping to gain from the conference? Maybe you came here to learn, or have you had any interesting conversations so far?

>> You know, what I'm always excited about at these conferences is that the open-source community is one that is so incredibly adaptive and innovative, so we're always out there looking to see where the world is going. By doing that, we're learning - because again, where the software goes, we want to make sure that the hardware that supports it is there to meet its needs. So today, we're learning about new frameworks coming out, the next Spark on the roadmap, what they're looking at doing. I expect that we'll hear a little more about scripting languages as well. All of that is just fantastic. I've come to expect a lot of innovation, but I'm still impressed by the amount of it. So it's good to be in the right place, and as we approach things from an Intel point of view, we know we approach it as a portfolio solution set. It's not just silicon, it's not just an accelerator, it's from the hardware through the software solution. So we know that we can really help to accelerate and usher in the next compute paradigm. So this has been fun.

>> That would be a great ending, but I've got to ask you this. When you're sitting in this chair next year at Spark Summit 2018, what do you hope to be talking about?

>> Well, one of the things that we're looking at and talking about is these massive amounts of data. I would love to be here next year talking more about the new memory technologies that are coming out that allow for tremendously more storage at incredible speeds, better SSDs, and how they will impact the performance of the overall solution. And of course, we're going to continue to accelerate our processing cores and accelerators for unique capabilities. I want to come back in and say, "Wow, what did we 10x this year?" That's always fun. It's a great challenge to the engineering team, who just heard that and said, "Ugh, he's starting off with 10x again?" (laughs)

>> Great, Michael. That's a great wrap-up, too. We appreciate you coming on and sharing with theCUBE audience the exciting things happening at Intel with Spark.

>> Well, thank you for the time. I really appreciate it.

>> All right, and thank you all for joining us for this segment. We'll be back with more guests in just a few. You're watching theCUBE. (electronic music)

Published Date : Jun 6 2017

