
Search Results for Programmable Solutions Group:

Bernhard Friebe, Intel Programmable Solutions Group | Super Computing 2017


 

>> Announcer: From Denver, Colorado, it's theCUBE. Covering Super Computing 2017, brought to you by Intel. (upbeat music)

>> Hey, welcome back everybody. Jeffrey Frick here with theCUBE. We're in Denver, Colorado at Super Computing 17. I think it's the 20th year of the convention. 12,000 people. We've never been here before. It's pretty amazing. Amazing keynote, really talking about space and really big, big, big computing projects, so, excited to be here, and we've got our first guest of the day. He's Bernhard Friebe, he is the Senior Director of FPGA Software Solutions, I'll get that right by the end of the day, for the Intel Programmable Solutions Group. First off, welcome, Bernhard.

>> Thank you. I'm glad to be here.

>> Absolutely. So, have you been to this conference before?

>> Yeah, a couple of times before. It's always a big event. Always a big show for us, so I'm excited.

>> Yeah, and it's different, too, because it's got a lot of academic influence as well, as you walk around the outside. It's pretty hardcore.

>> Yes, it's wonderful, and you see a lot of innovation going on, and we need to move fast. We need to move faster. That's what it is. And accelerate.

>> And that's what you're all about, acceleration, so Intel's making a lot of announcements, really, about acceleration with FPGAs, for acceleration in data centers and in big data and all these big applications. So, explain just a little bit how that space is evolving and what some of the recent announcements are all about.

>> The world of computing must accelerate. I think we all agree on that. We all see that that's a key requirement. And FPGAs are a truly versatile, multi-function accelerator. They accelerate so many workloads in the high-performance computing space, be it financial, genomics, oil and gas, data analytics, and the list goes on. Machine learning is a very big one. The list goes on and on. And, so, we're investing heavily in providing solutions which make it much easier for our users to develop and deploy FPGAs in a high-performance computing environment.

>> You guys are taking a lot of steps to make the software programming of FPGAs a lot easier, so you don't have to be a hardcore hardware engineer, so you can open it up to a broader ecosystem and get a broader solution set. Is that right?

>> That's right, and it's not just the hardware. How do you unlock the benefits of FPGAs as a versatile accelerator, their parallelism, their ability to do real-time, low-latency acceleration of many different workloads, and how do you enable that in an environment which is truly dynamic and multi-function, like a data center? And so, the product we've recently announced is the acceleration stack for Intel Xeon with FPGAs, which enables that usage model.

>> So, what are the components of that stack?

>> It starts with hardware. So, we are building a hardware accelerator card, it's a PCI Express plug-in card, it's called the Programmable Acceleration Card. We have integrated solutions where you have everything, with the FPGA, in one package. But what's common is a software framework, a solution stack, which sits on top of these different hardware implementations, which really makes it easy for a developer to develop an accelerator, for a user to then deploy that accelerator and run it in their environment, and it also enables a data center operator to basically enable the FPGA like any other compute resource by integrating it into their orchestration framework. So, multiple levels, taking care of all those needs.
>> It's interesting, because there are a lot of big trends that you guys are taking advantage of. Obviously, we're at Super Computing, but big data and streaming analytics are all the rage now, so more data faster, reading it in real time, pumping it into the database in real time, and then, right around the corner, we have IoT, the internet of things, and all these connected devices. So the demand for increased speed, to get that data in, get that data processed, get the analytics back out, is only growing exponentially.

>> That's right, and FPGAs, due to their flexibility, have distinct advantages there. The traditional model is look-aside, or offload, where you have a processor, and then you offload your tasks to your accelerator. The FPGA, with its flexible I/Os and flexible core, can actually run directly in the data path; that's what we call in-line processing. And what that allows people to do is, whatever the source is, be it cameras, be it storage, be it through the network, through Ethernet, it can stream directly into the FPGA, and you do your acceleration as the data comes in, in a streaming way. And FPGAs provide really unique advantages there versus other types of accelerators: low latency, very high bandwidth, and they're flexible in the sense that our customers can build different interfaces, different connectivity, around those FPGAs. So, it's really amazing how versatile the usage of FPGAs has become.

>> It is pretty interesting, because you're combining all the benefits that come from hardware-based solutions, you just get a lot of benefits when things are hardwired, with the software component, and enabling a broader ecosystem to write ready-made solutions and integrations into the solutions they already have. Great approach.

>> The acceleration stack provides a consistent interface to the developer and the user of the FPGA. What that allows our ecosystem and our customers to do is to define these accelerators based on this framework, and then they can easily migrate those between different hardware platforms, so we're building in future improvements of the solution, and the consistent interfaces then allow our customers and partners to build their software stacks on top of it. So their investment, once they target our Arria 10 Programmable Acceleration Card, can easily be leveraged and moved forward into the next generation, Stratix 10, and beyond. We really enable and encourage a broad ecosystem to build solutions. You'll see that here at the show: many partners now have demos, and they show their solutions built on Intel FPGA hardware and the acceleration stack.

>> OK, so I'm going to put you on the spot. So, these are announced; what's the current state of the general availability?

>> We're sampling the cards now, and the acceleration stack is available for delivery to customers. A lot of it is open source, by the way, so it can already be downloaded from GitHub. And the partners are developing the solutions they are demonstrating today. The product will go into volume production in the first half of next year. So, we're very close.

>> All right, very good. Well, Bernhard, thanks for taking a few minutes to stop by.

>> Oh, it's my pleasure.

>> All right. He's Bernhard, I'm Jeff. You're watching theCUBE from Super Computing 17. Thanks for watching. (upbeat music)
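Editor's note: Friebe's description of a common software framework with consistent interfaces maps to the open-source pieces of the acceleration stack he says are downloadable from GitHub. As a rough illustration only, the minimal C sketch below shows what host code against an OPAE-style discovery API roughly looks like: enumerate an accelerator, open a handle, and clean up. The header, function names, and signatures are assumptions based on the open-source OPAE library and may differ between releases.

/* Minimal sketch (assumed OPAE-style API): discover and open an FPGA
 * accelerator through the open-source acceleration stack library. */
#include <stdio.h>
#include <stdint.h>
#include <opae/fpga.h>

int main(void)
{
    fpga_properties filter = NULL;   /* search filter for enumeration */
    fpga_token      token;           /* identifies a matching device  */
    fpga_handle     handle;          /* open session with the device  */
    uint32_t        num_matches = 0;

    /* Build a filter that matches any accelerator function unit. */
    if (fpgaGetProperties(NULL, &filter) != FPGA_OK)
        return 1;
    fpgaPropertiesSetObjectType(filter, FPGA_ACCELERATOR);

    /* Enumerate matching accelerators and take the first one found. */
    if (fpgaEnumerate(&filter, 1, &token, 1, &num_matches) != FPGA_OK
        || num_matches == 0) {
        fprintf(stderr, "no FPGA accelerator found\n");
        fpgaDestroyProperties(&filter);
        return 1;
    }

    /* Open a handle; a real application would now map MMIO registers
       and share buffers with the accelerator. */
    if (fpgaOpen(token, &handle, 0) == FPGA_OK) {
        printf("accelerator opened\n");
        fpgaClose(handle);
    }

    fpgaDestroyToken(&token);
    fpgaDestroyProperties(&filter);
    return 0;
}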

Published Date : Nov 14 2017

SUMMARY :

Jeff Frick interviews Bernhard Friebe, Senior Director of FPGA Software Solutions in the Intel Programmable Solutions Group, at Super Computing 2017 in Denver. Friebe positions FPGAs as versatile, multi-function accelerators for high-performance computing workloads such as financial, genomics, oil and gas, data analytics, and machine learning, and walks through the newly announced acceleration stack for Intel Xeon with FPGAs: a PCI Express Programmable Acceleration Card plus a common software framework with consistent interfaces that lets developers build accelerators, users deploy them, and data center operators integrate FPGAs into their orchestration frameworks. He contrasts in-line streaming processing with traditional look-aside offload, notes that much of the stack is open source on GitHub, and says the Arria 10-based card is sampling now with volume production planned for the first half of next year.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Bernard | PERSON | 0.99+
Bernard Friebe | PERSON | 0.99+
Bernhard Friebe | PERSON | 0.99+
Jeffrey Frick | PERSON | 0.99+
Jeff | PERSON | 0.99+
Intel Programmable Solutions Group | ORGANIZATION | 0.99+
12,000 people | QUANTITY | 0.99+
Denver, Colorado | LOCATION | 0.99+
20th year | QUANTITY | 0.98+
Super Computing 17 | EVENT | 0.97+
FPGA | ORGANIZATION | 0.97+
Super Computing 2017 | EVENT | 0.97+
today | DATE | 0.96+
First | QUANTITY | 0.96+
GitHub | ORGANIZATION | 0.95+
first half of next year | DATE | 0.95+
first guest | QUANTITY | 0.95+
Intel | ORGANIZATION | 0.95+
FPGA | TITLE | 0.85+
theCube | ORGANIZATION | 0.84+
Arria 10 | COMMERCIAL_ITEM | 0.73+
theCUBE | ORGANIZATION | 0.54+
Super | EVENT | 0.41+
Computing | TITLE | 0.39+
17 | EVENT | 0.36+

John Sakamoto, Intel | The Computing Conference


 

>> SiliconANGLE Media presents theCUBE! Covering Alibaba Cloud's annual conference. Brought to you by Intel. Now, here's John Furrier...

>> Hello there, and welcome to theCUBE, here on the ground in China at Intel's booth at the Alibaba Cloud event. I'm John Furrier, the co-founder of SiliconANGLE, Wikibon, and theCUBE. We're here with John Sakamoto, who is the vice president of the Programmable Solutions Group. Thanks for stopping by.

>> Thank you for having me, John.

>> So FPGAs, field-programmable gate arrays, kind of a geeky term, but it's really about software these days. What's new with your group? You came to Intel through an acquisition. How's that going?

>> Yeah, so far it's been great. Being part of a company with the resources of Intel, and really having access to data center customers and some of the data center technologies and frameworks that they've developed, and integrating FPGAs into that, it's been a great experience.

>> One of the hot trends here, I just interviewed Dr. Wong at Alibaba Cloud, the founder, and we were talking about Intel's relationship, but one of the things he mentioned that was striking to me is that they've got this big City Brain IoT project, and I asked him about the compute at the edge and how data moves around, and he said "for all the silicon at the edge, one piece of silicon at the edge is going to be 10X inside the data center, inside the cloud or data center," which is fundamentally the architecture these days. So it's not just about the edge, it's about how the combination of software and compute is moving around.

>> Right.

>> That means the data center is still relevant for you guys. What is the impact of FPGAs in the data center?

>> Well, I think FPGAs are really a great play in the data center. You mentioned City Brain. City Brain is a great example where they're streaming live video into the data center for processing, and that kind of processing power, to do video live, really takes a lot of horsepower, and that's really where FPGAs come into play. One of the reasons that Intel acquired Altera was really to bring that acceleration into the data center, and really that is a great complement to Xeon.

>> Take a minute on FPGAs. Do you have to be a hardware geek to work with FPGAs? I mean, obviously, software is a big part of it. What's the difference between the hardware side and the software side on the programmability?

>> Yes, that's a great question. Most people think FPGAs are hard to use and that they're for hardware geeks. The traditional flow had been RTL-based, and really what we've recognized is that to get FPGA adoption very high within the data center, we have to make it easier, and we've invested quite a bit in the acceleration stack to really make it easier for FPGAs to be used within the data center. And what we've done is we've created frameworks and pre-optimized accelerators for the FPGAs to make it easy for people to access that FPGA technology.

>> What's the impact on developers, because you look at the Acceleration Stack that you guys announced last month?

>> Yes, that's correct.

>> Okay, so last month. This is going to move more into a software model. So it's almost programmability as dev-ops, kind of a software mindset. So the hardware can be programmed.

>> Right.

>> What's the impact on the developer makeup, and how does that change the solutions? How does that impact the environment?
>> So the developer makeup, what we're really targeting is guys that have traditionally developed software, and they're used to higher-level frameworks, or they're used to designing in C. So what we're trying to do is really enable those designers, those developers, to use the languages and frameworks they're used to and be able to target the FPGA. And that's what the acceleration stack's all about. And our goal is to really obfuscate that we actually have an FPGA as that accelerator. And so we've created, kind of, standard APIs to that FPGA. So they don't really have to be an FPGA expert, and we've basically standardized some things like the connection to the processor, or connections to memory, or to networking, and made that very easy for them to access.

>> We see a lot of that maker culture, kind of vibe and orientation, come into this new developer market. Because when you think of a field-programmable gate array, the first thing that pops into my mind is, oh my God, I've got to be a computer engineering geek. Motherboards, the design, all these circuits, but it's really not that. You're talking about Acceleration-as-a-Service.

>> That's right.

>> This is super important, because this brings that software mindset to the marketplace for you guys. So talk about that Acceleration-as-a-Service. What is it? What does it mean? Define it and then let's talk about what it means.

>> Yeah. Okay, great. So Acceleration-as-a-Service is really having pre-optimized software or applications that are running on the FPGA. So the user that's coming in and trying to use that acceleration service doesn't necessarily need to know there's an FPGA there. They're just calling in and wanting to access the function, and it just happens to be accelerated by the FPGA. And that's why, one of the things we've been working on with Alibaba, they announced their F1 service that's based on Intel's Arria 10 FPGAs. And again, we've created a partner ecosystem that has developed pre-optimized accelerators for the FPGA. So users are coming in and doing things like genomics sequencing or database acceleration, and they don't necessarily need to know that there's an FPGA actually doing that acceleration.

>> So that's just a standard developer, just focusing in on an app or a use case with big data, and that can tap into the hardware.

>> Absolutely, and they'll get a huge performance increase. So we have a partner in Falcon Computing, for example, that can really increase the performance of the algorithm and get a 3X improvement in the overall gene sequencing, and really improve the time it takes to do that.

>> Yeah, I mean, cloud and what you're doing is just changing society. Congratulations, that's awesome. All right, I want to talk about Alibaba. What is the relationship between Intel and Alibaba? We've been trying to dig that out on this trip. For your group, obviously you mentioned City Brain. You mentioned the Acceleration-as-a-Service, the F1 instances.

>> Right.

>> What specifically is the relationship, how tight is it? What are you guys doing together?

>> Well, the Intel PSG group, our group, has been working very closely with Alibaba in a number of areas. So clearly the acceleration, the FPGA acceleration, is one of the areas of big, big investment. We announced the Arria 10 version today, but we'll continue to develop with them on the next generation of Intel FPGAs, such as Stratix 10, which is based on 14 nanometer.
And eventually with our Falcon Mesa product, which is a 10 nanometer product. So clearly, acceleration is a focus. Building that ecosystem out with them is going to be a continued focus. We're also working with them on servers and trying to enhance the performance

>> Yeah.

>> of those servers.

>> Yeah.

>> And I can't really talk about the details of all of those things, but certainly there are certain applications where FPGAs, they're looking to accelerate the overall performance of their custom servers, and we're partnering with them on that.

>> So one of the things I'm getting out of this show here, besides the conversion stuff, eCommerce, entertainment, and web services, which is Alibaba's, kind of like, aperture, is that it's more of a quantum mindset. And we talked about blockchain in my last interview. You see quantum computing up on their patent board.

>> Yeah.

>> Some serious IT kinds of things, but from a data perspective. How does that impact your world, because you provide acceleration?

>> Right.

>> You've got the City Brain thing, which is a huge IoT and AI opportunity.

>> Right.

>> How does someone attack that solution with FPGAs? How do you get involved? What's your role in that whole play?

>> Again, we're trying to democratize FPGAs. We're trying to make it very easy for them to access that, and really that's what working with Alibaba is about.

>> Yeah.

>> They are enabling FPGA access via their cloud. Really in two aspects, one of which we talked about: we have some pre-optimized accelerators that people can access, so applications that people can access that are running on FPGAs. But we're also enabling a developer environment where people can use the traditional RTL flow, or they can use an OpenCL flow to take their code, compile it into the FPGA, and really get that acceleration that FPGAs can provide. So it's not only bringing that ecosystem of accelerators, but also enabling developers to develop on that platform.

>> You know, we do a lot of cloud computing coverage, and a lot of people really want to know what's inside the cloud. So, it's one big operation, that's the way I look at it. But there's a lot going on there under the hood. What are some of the things that Alibaba's saying to you guys in terms of how the relationship's translating into value for them? You mentioned the F1 instances; any anecdotal soundbites you can share on the feedback and their direction?

>> Yeah, so one of the things they're trying to do is lower the total TCO of the data center. And one of the things they have is, when you look at the infrastructure cost, such as networking and storage, these are cycles that are running on the processor. And when those cycles are running on the processor, they can't monetize them with their customers. So one of the areas we're working on is how do we accelerate networking and storage functions on an FPGA, and therefore free up those cycles so they can monetize them with their own customers.

>> Yeah.

>> And really that's the way we're trying to bring the TCO down with Alibaba, but also increase the revenue opportunity they have.

>> What are some updates from the field from you guys? Obviously, acceleration's pretty hot. Everyone wants low latency. With IoT, you need to have low latency. You need compute at the edge. More application development is coming in with vertical specialty, if you will. City Brain is more of an IoT play, but the app is traffic, right?

>> Yeah.

>> So that, managing traffic, there's going to be a million more use cases.
What are some of the things that you guys are doing with FPGAs outside of the Alibaba work?

>> Well, I think really what we're trying to focus on is three areas. One is to lower the cost of infrastructure, which I mentioned: networking and storage functions that today people are running on processors, and trying to lower that cost and bring those functions into the FPGA. The second thing we're trying to do is, if you look at high-cycle apps such as AI applications, really trying to bring AI into FPGAs, and creating frameworks and tool chains to make that easier.

>> Yeah.

>> And then we already talked about the application acceleration, things like database, genomics, and financial, and really getting those applications running much quicker and more efficiently on FPGAs.

>> This is the big dev-ops movement we've seen with cloud. Infrastructure as code, it used to be called. I mean, that's the new normal now. Software guys programming infrastructure.

>> Absolutely.

>> Well, congratulations on the great step. John Sakamoto, here inside theCUBE Studios at the Intel booth; we're getting all the action, roving reporter. We had CUBE conversations here in China, getting all the action about Alibaba Cloud. I'm John Furrier, thanks for watching.
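Editor's note: Sakamoto mentions that developers can target the FPGA through either a traditional RTL flow or an OpenCL flow. As a hedged sketch of what the host side of such an OpenCL flow roughly looks like: FPGA kernels are compiled offline into a bitstream, so the host loads a precompiled binary with clCreateProgramWithBinary rather than compiling from source at run time. The file name "accel.aocx" and kernel name "vector_add" are hypothetical placeholders, and error handling is abbreviated.

/* Hedged sketch of OpenCL host code for an FPGA flow: load an
 * offline-compiled bitstream and prepare a kernel for launch. */
#include <stdio.h>
#include <stdlib.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platform;
    cl_device_id   device;
    cl_int         err;

    clGetPlatformIDs(1, &platform, NULL);
    /* FPGA boards typically enumerate as accelerator devices. */
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_ACCELERATOR, 1, &device, NULL);

    cl_context context = clCreateContext(NULL, 1, &device, NULL, NULL, &err);
    cl_command_queue queue = clCreateCommandQueue(context, device, 0, &err);

    /* Load the precompiled FPGA bitstream from disk (hypothetical name). */
    FILE *f = fopen("accel.aocx", "rb");
    fseek(f, 0, SEEK_END);
    size_t bin_size = (size_t)ftell(f);
    rewind(f);
    unsigned char *binary = malloc(bin_size);
    fread(binary, 1, bin_size, f);
    fclose(f);

    cl_program program = clCreateProgramWithBinary(
        context, 1, &device, &bin_size,
        (const unsigned char **)&binary, NULL, &err);
    clBuildProgram(program, 1, &device, NULL, NULL, NULL);

    /* "vector_add" is a hypothetical kernel name inside the bitstream. */
    cl_kernel kernel = clCreateKernel(program, "vector_add", &err);

    /* From here the host would create buffers with clCreateBuffer, set
       arguments with clSetKernelArg, and launch with clEnqueueTask or
       clEnqueueNDRangeKernel, exactly as on any other OpenCL device. */

    clReleaseKernel(kernel);
    clReleaseProgram(program);
    clReleaseCommandQueue(queue);
    clReleaseContext(context);
    free(binary);
    return 0;
}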

Published Date : Oct 24 2017

SUMMARY :

At Intel's booth at the Alibaba Cloud conference in China, John Furrier talks with John Sakamoto, vice president of the Intel Programmable Solutions Group. Sakamoto explains how the acceleration stack, standard APIs, and pre-optimized accelerators let software developers target FPGAs without being hardware experts, describes Acceleration-as-a-Service and Alibaba's F1 instances built on Intel Arria 10 FPGAs, and cites partners such as Falcon Computing delivering roughly 3X gains in gene sequencing. He outlines the roadmap with Alibaba toward Stratix 10 and the 10-nanometer Falcon Mesa, and frames the group's focus as lowering infrastructure cost by moving networking and storage functions onto FPGAs, bringing AI workloads to FPGAs, and accelerating applications such as database, genomics, and financial workloads.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Alibaba | ORGANIZATION | 0.99+
John Sakamoto | PERSON | 0.99+
John Furrier | PERSON | 0.99+
China | LOCATION | 0.99+
John | PERSON | 0.99+
Alibaba Cloud | ORGANIZATION | 0.99+
Wong | PERSON | 0.99+
10 nanometer | QUANTITY | 0.99+
two aspects | QUANTITY | 0.99+
second | QUANTITY | 0.99+
last month | DATE | 0.99+
one | QUANTITY | 0.99+
Intel | ORGANIZATION | 0.99+
SiliconANGLE | ORGANIZATION | 0.99+
Wikibon | ORGANIZATION | 0.99+
Falcon Computing | ORGANIZATION | 0.99+
14 nanometer | QUANTITY | 0.99+
Programmable Solutions Group | ORGANIZATION | 0.98+
first thing | QUANTITY | 0.98+
theCUBE | ORGANIZATION | 0.98+
3X | QUANTITY | 0.98+
10X | QUANTITY | 0.98+
One | QUANTITY | 0.98+
today | DATE | 0.96+
SiliconANGLE Media | ORGANIZATION | 0.95+
Dr. | PERSON | 0.95+
Altera | ORGANIZATION | 0.94+
City Brain | ORGANIZATION | 0.93+
Alibaba Cloud | EVENT | 0.92+
one piece | QUANTITY | 0.91+
Edge | ORGANIZATION | 0.91+
Xeon | ORGANIZATION | 0.91+
Arria 10 FPGAs | COMMERCIAL_ITEM | 0.9+
Stratix 10 | COMMERCIAL_ITEM | 0.88+
OpenCL Flow | TITLE | 0.88+
three areas | QUANTITY | 0.86+
Computing Conference | EVENT | 0.79+
a million more use cases | QUANTITY | 0.77+
one big operation | QUANTITY | 0.73+
Falcon | ORGANIZATION | 0.71+
Intel PSG | ORGANIZATION | 0.71+
Arria 10 | COMMERCIAL_ITEM | 0.71+
Mesa | COMMERCIAL_ITEM | 0.68+
Cloud | EVENT | 0.61+
Cloud | ORGANIZATION | 0.51+
City Brains | TITLE | 0.44+
F1 | EVENT | 0.4+
HORVS | ORGANIZATION | 0.38+
Studios | ORGANIZATION | 0.33+