Scott Masepohl, Intel PSG | AWS re:Invent
>> Narrator: Live from Las Vegas, it's theCUBE covering AWS re:Invent 2017. Presented by AWS, Intel, and our ecosystem of partners. >> Hey, welcome back everyone. We are here live at AWS re:Invent in Las Vegas. There are 45,000 people here inside the Sands Convention Center at the Venetian, the Palazzo, and theCUBE is here for the fifth straight year. We're excited to be here, and I wanna say it's our fifth year, we've got two sets, and I wanna thank Intel for their sponsorship, and of course our next guest is from Intel. Scott Masepohl, director of the CTO's office at Intel PSG. Welcome to theCUBE. >> Thank you. >> Thanks for coming on. So, we've had a lot of Intel guests on, a lot of great guests from customers of Amazon, Amazon executives, Andy Jassy coming on tomorrow. The big story is all this acceleration of software development. >> Scott: Right. >> You guys at the FPGA group within Intel are doing acceleration at a whole other level. 'Cause these clouds have data centers, they have to power the machines even though it's going serverless. What's going on with FPGAs, and how does that relate to the cloud world? >> Well, FPGAs I think have a unique place in the cloud. They're used in a number of different areas, and I think the great thing about them is they're inherently parallel. So you know, they're programmable hardware, so instead of something like a GPU or a purpose-built accelerator, you can make them do a whole bunch of different things, so they can do compute acceleration, they can do network acceleration, and they can do those at the same time. They can also do things like machine learning, and there's structures built inside of them that really help them achieve all of those tasks. >> Why is it picking up lately? What are they doing differently now with FPGAs than they were before? Because there's more talk of that now more than ever. 
>> You know, I mean, I think it's just finally come to a confluence where the programmability is finally really needed. It's very difficult to actually create customized chips for specific markets, and it takes a long time to actually go do that. So by the time you actually create this chip, you may have not had the right solution. FPGAs are unique in that they're programmable, and you can actually create the solution on the fly, and if the solution's not correct you can go and you can actually change that, and they're actually pretty performant now. So the performance has continued to increase generation to generation, and I think that's really what sets them apart. >> So what's the relationship with Amazon? Because now I'm kinda connecting the dots in my head. Amazon's running full speed ahead. >> Scott: Yeah. >> And they're moving fast, I mean thousands of services. Do FPGAs give you guys faster time to market when they do joint designs with Intel? And how does your relationship with Amazon connect on all this? >> Absolutely, we have a number of relationships with Amazon, clearly the Xeon processors being one of them. The FPGAs are something that we continue to try to work with them on, but we're also in a number of their other applications, such as Alexa, and there's actually technology within Alexa that we could take and implement either in Xeon CPUs or actually in FPGAs to further accelerate those, so a lot of the speech processing, a lot of the AI that's behind that, and that's something that's not very prevalent now, but I think it'll be in the future. >> So, all that speech stuff matters for you guys, right? That helps you guys, the speech, all the voice stuff that's happening, and the Alexa news, machine learning. >> Right. >> That's good for you, right? I mean, that, I mean... >> It's very good, and it's actually, it's really in the FPGA sweet spot. There's a lot of structures within the FPGAs that make them a lot better for AI than a GPU. 
So for instance, they have a lot of memory on the inside of the device, and you can actually do the compute and the memory right next to where it needs to be, and that's actually very important, because you want the latency to be very low so that you can process these things very quickly. And there's just a phenomenal amount of bandwidth inside of an FPGA today. There's over 60 terabytes a second of bandwidth in our mid-range Stratix 10 device. And when you couple that together with the unique math capabilities, you can really build exactly what you want. So when you look at GPUs, they're kinda limited to double-precision floating point, single precision, or integer. The FPGAs can do all of those and more, and you can actually custom build your mathematical path to what you need, save power, be more efficient, and lower the latency. So... >> So Andy Jassy talked about how this is a builder's conference. The developers, giving the developers the tools they need to create amazing things. One of the big announcements was the bare metal servers from AWS. >> Scott: Yeah. >> How do you see something like an FPGA playing in a service like that? >> Well, the FPGAs could be used to help provide security for that. They could obviously be used to help do some of the network processing as well. In addition, they could be used in a lot of classical modes, whether it's like an attached solution for pure acceleration. So just because it's bare metal doesn't mean it can't be bare metal with an FPGA to do acceleration. >> And then, let's talk about some of the... You guys, FPGAs are pretty big in the networking space. >> Scott: Yeah. >> Let's talk about some of the surrounding Intel technologies around FPGAs. How are you guys enabling your partners, network partners, to take advantage of x86, Xeon, FPGAs, and accelerating networking services inside of a solution like Amazon? 
>> We have a number of solutions that we're developing, both with partners and ourselves, to attach to our NICs, and other folks' NICs, to help accelerate those. We've also released what's called the acceleration stack, and what that's about is really just kinda lowering the barrier of entry for FPGAs, and it actually has a driver solution that goes with it as well, called OPAE. What that driver solution does is it actually creates kind of a containerized environment with an open source software driver, so that it really helps remove the barrier of, you know, you have this FPGA next to a CPU. How do I talk to it? How can we connect to it with our software? And so we're trying to make all of this a lot simpler, and then we're making it all open so that everybody can contribute and the market can grow faster. >> Yeah, and let's talk about the ecosystem around data, the telemetry data coming off of systems. A lot of developers want as much telemetry data, even from AWS, as possible. >> Scott: Yeah. >> Are you guys looking to expose any of that to developers? >> It's always something under consideration, and one of the things that FPGAs are really good at is that you can kinda put them towards the edge so that they can actually process the data, so that you don't have to dump the full stream of data that gets generated down off to some other processing vehicle, right? So you can actually do a ton of the processing and then send limited packets off of that. >> So we looked at the camera today, a super small device doing some really amazing things. How do FPGAs play a role in that, the IoT? >> FPGAs are great for image processing. They can do that actually much quicker than most other things. When you start listening, or reading a little bit about AI, you'll see that a lot of times when you're processing images, you'll have to take a whole batch of them for GPUs to be efficient. 
FPGAs can operate down at a batch size of one, so they can respond very quickly. They can work on individual images, and again, they can actually do it not just efficiently in terms of, kinda, the amount of hardware that you implement, but efficiently in the power that's required to go do that. >> So when we look at advanced IoT use cases, what are some of the things that end-user customers will be able to do potentially with FPGAs out at the edge? Of course less data, less power needed to go back to the cloud, but practically, what are some of the business outcomes from using FPGAs out at the edge? >> You know, there's a number of different applications, you know, for the edge. If you go back to Alexa, there's a lot of processing smarts that actually go on there. This is an example where the FPGA could actually be used right next to the Xeons to further accelerate some of the speech, and that's stuff that we're looking at now. >> What's the number one use case you're seeing that people could relate to? Is it Alexa? Is it the video-- >> For the edge, or? >> Host: For FPGAs, the value of accelerating. >> For FPGAs, I mean, there's usage well beyond the data center, you know. There's the classic what we would call wireline market, where it's used in everything today. You know, if you're making a cellphone call, it likely goes through an FPGA at some point. In terms of the data center, I think where it's really being used today, there's been a couple of very public announcements. Obviously in network processing in some of the top cloud providers, as well as AI. So, you know, I think a lot of people were surprised by some of those announcements, but as people look into them a little further, I think they'll see that there's a lot of merit to that. >> The devices get smaller and faster, and just the DeepLens device has got a graphics engine that would've been on a mainframe a few years ago. 
I mean, it's huge software power. >> Yeah. >> You guys accelerate that, right? I mean I'm looking, is that a direction? What is the future direction for you guys? What's the future look like for FPGAs? >> It's fully programmable, so, you know, it's really limited by what our customers and us really wanna go invest in. You know, one of the other things that we're trying to do to make FPGAs more usable is remove the kind of barrier where people traditionally do RTL, if you're familiar with that, where they actually do the design, and really make it a lot more friendly for software developers, so that they can write things in C or OpenCL, and that application will actually end up on the inside of the FPGA using some of these other frameworks that I talked about, the acceleration stack. So they don't have to really go and build all the guts of the FPGA, they just focus on their application. You have the FPGA here, whether it's attached to the network, coherently attached to a processor, or next to a processor on PCI Express; all of those can be supported, and there's a nice software model to help you do all that development. >> So you wanna make it easy for developers. >> Scott: We wanna make it very easy. >> What specifically do you have for them right now? >> We have what they call the DLA framework, the deep learning framework that we released. As I said before, we have the acceleration stack, we have OPAE, which is the driver stack that goes along with that, as well as all our, what we call our high-level synthesis tools, HLS, and that supports C and OpenCL. So it basically will take your classic software and convert it into gates, and help you get that on the FPGA. >> Will bots be programming this soon? Soon AI's going to be programming the FPGAs? Software programming software? >> That might be a little bit of a stretch right now, but you know, in the coming years perhaps. >> Host: Scott, thanks for coming onto theCUBE, really appreciate it. 
>> Thanks for having me. >> Scott Masepohl is with Intel, he's the director of the CTO's office at Intel PSG. They make FPGAs, a really instrumental device in software, helping accelerate the chips, make things better for developers, and power your phone, Alexa, pretty much everything in our life. Thanks for coming on theCUBE, appreciate it. >> Thank you. >> We'll be back with more live coverage. 45,000 people here in Las Vegas, it's crazy. It's Amazon Web Services re:Invent, we'll be right back. (soft electronic music)
John Sakamoto, Intel | The Computing Conference
>> SiliconANGLE Media presents theCUBE! Covering Alibaba Cloud's annual conference. Brought to you by Intel. Now, here's John Furrier... >> Hello there, and welcome to theCUBE here on the ground in China at Intel's booth at the Alibaba Cloud event. I'm John Furrier, the co-founder of SiliconANGLE, Wikibon, and theCUBE. We're here with John Sakamoto, who is the vice president of the Programmable Solutions Group. Thanks for stopping by. >> Thank you for having me, John. >> So FPGAs, field-programmable gate arrays, kind of a geeky term, but it's really about software these days. What's new with your group? You came to Intel through an acquisition. How's that going? >> Yeah, so far it's been great. Being part of a company with the resources of Intel, and really having access to data center customers and some of the data center technologies and frameworks they've developed, and integrating FPGAs into that, it's been a great experience. >> One of the hot trends here, I just interviewed Dr. Wang at Alibaba Cloud, the founder, and we were talking about Intel's relationship, but one of the things he mentioned that was striking to me is that they've got this big City Brain IoT project, and I asked him about the compute at the edge and how data moves around, and he said for all the silicon at the edge, one piece of silicon at the edge is going to be 10X inside the data center, inside the cloud or data center, which is fundamentally the architecture these days. So it's not just about the edge, it's about how the combination of software and compute are moving around. >> Right. >> That means the data center is still relevant for you guys. What is the impact of FPGA in the data center? >> Well, I think FPGA is really our great play in the data center. You mentioned City Brain. 
City Brain is a great example where they're streaming live video into the data center for processing, and that kind of processing power to do video live really takes a lot of horsepower, and that's really where FPGAs come into play. One of the reasons that Intel acquired Altera was really to bring that acceleration into the data center, and really that is a great complement to Xeons. >> Take a minute on FPGA. Do you have to be a hardware geek to work with FPGAs? I mean, obviously, software is a big part of it. What's the difference between the hardware side and the software side on the programmability? >> Yes, that's a great question. So most people think FPGAs are hard to use, and that they were for hardware geeks. The traditional flow had been using RTL-based flows, and really what we've recognized is that to get FPGA adoption very high within the data center, we have to make it easier, and we've invested quite a bit in the acceleration stack to really make it easier for FPGAs to be used within the data center. And what we've done is we've created frameworks and pre-optimized accelerators for the FPGAs to make it easy for people to access that FPGA technology. >> What's the impact on developers? Because you look at the acceleration stack that you guys announced last month. >> Yes, that's correct. >> Okay, so last month. This is going to move more into a software model. So it's almost programmability as dev-ops, kind of a software mindset. So the hardware can be programmed. >> Right. >> What's the impact on the developer makeup, and how does that change the solutions? How does that impact the environment? >> So the developer makeup, what we're really targeting is folks that have traditionally developed software, and they're used to higher-level frameworks, or they're used to designing in C. 
So what we're trying to do is really make it so those designers, those developers, are able to use the languages and frameworks they're used to and be able to target the FPGA. And that's what the acceleration stack's all about. And our goal is to really obfuscate the fact that we actually have an FPGA as that accelerator. And so we've created kind of standard APIs to that FPGA. So they don't really have to be an FPGA expert, and we've basically standardized some things like the connection to the processor, or connections to memory, or to networking, and made that very easy for them to access. >> We see a lot of that maker culture, kind of vibe and orientation, come into this new developer market. Because when you think of a field-programmable gate array, the first thing that pops into my mind is, oh my God, I've got to be a computer engineering geek. Motherboards, the design, all these circuits, but it's really not that. You're talking about Acceleration-as-a-Service. >> That's right. >> This is super important, because this brings that software mindset to the marketplace for you guys. So talk about that Acceleration-as-a-Service. What is it? What does it mean? Define it and then let's talk about what it means. >> Yeah. Okay, great. So Acceleration-as-a-Service is really having pre-optimized software or applications that are running on the FPGA. So the user that's coming in and trying to use that acceleration service doesn't necessarily need to know there's an FPGA there. They're just calling in and wanting to access the function, and it just happens to be accelerated by the FPGA. And that's why, one of the things we've been working on with Alibaba, they announced their F1 service that's based on Intel's Arria 10 FPGAs. And again, we've created a partner ecosystem that has developed pre-optimized accelerators for the FPGA. 
So users are coming in and doing things like genomics sequencing or database acceleration, and they don't necessarily need to know that there's an FPGA actually doing that acceleration. >> So that's just a standard developer focusing in on an app or a use case with big data, and that can tap into the hardware. >> Absolutely, and they'll get a huge performance increase. So we have a partner in Falcon Computing, for example, that can really increase the performance of the algorithm, and really get a 3X improvement in the overall gene sequencing, and really improve the time it takes to do that. >> Yeah, I mean, cloud and what you're doing is just changing society. Congratulations, that's awesome. Alright, I want to talk about Alibaba. What is the relationship with Intel and Alibaba? We've been trying to dig that out on this trip. For your group, obviously you mentioned City Brain. You mentioned the Acceleration-as-a-Service, the F1 instances. >> Right. >> What specifically is the relationship, how tight is it? What are you guys doing together? >> Well, the Intel PSG group, our group, has been working very closely with Alibaba in a number of areas. So clearly the acceleration, the FPGA acceleration, is one of those areas of big, big investment. We announced the Arria 10 version today, but we'll continue to develop with them on the next generation of Intel FPGAs, such as Stratix 10, which is based on 14 nanometer, and eventually with our Falcon Mesa product, which is a 10 nanometer product. So clearly, acceleration's a focus. Building that ecosystem out with them is going to be a continued focus. We're also working with them on servers and trying to enhance the performance >> Yeah. >> of those servers. >> Yeah. >> And I can't really talk about the details of all of those things, but certainly there are certain applications where they're looking to FPGAs to accelerate the overall performance of their custom servers, and we're partnering with them on that. 
>> So one of the things I'm getting out of this show here, besides the conversion stuff, eCommerce, entertainment, and web services, which is Alibaba's kind of aperture, is that it's more of a quantum mindset. And we talked about blockchain in my last interview. You see quantum computing up on their patent board. >> Yeah. >> Some serious IT kinds of things, but from a data perspective. How does that impact your world, because you provide acceleration. >> Right. >> You got the City Brain thing, which is a huge IoT and AI opportunity. >> Right. >> How does someone attack that solution with FPGAs? How do you get involved? What's your role in that whole play? >> Again, we're trying to democratize FPGAs. We're trying to make it very easy for them to access that, and really that's what working with Alibaba is about. >> Yeah. >> They are enabling FPGA access via their cloud, really in two aspects. One, which we talked about, is that we have some pre-optimized accelerators that people can access. So applications that people can access that are running on FPGAs. But we're also enabling a developer environment where people can use the traditional RTL flow, or they can use an OpenCL flow, to take their code, compile it into the FPGA, and really get that acceleration that FPGAs can provide. So it's not only bringing that ecosystem of accelerators, but also enabling developers to develop on that platform. >> You know, we do a lot of cloud computing coverage, and a lot of people really want to know what's inside the cloud. So, it's one big operation, that's the way I look at it. But there's a lot going on there under the hood. What are some of the things that Alibaba's saying to you guys in terms of how the relationship's translating into value for them? You've mentioned the F1 instances; any anecdotal soundbites you can share on the feedback, and their direction? >> Yeah, so one of the things they're trying to do is lower the TCO of the data center. 
And one of the things they have is, when you look at the infrastructure cost, such as networking and storage, these are cycles that are running on the processor. And when there's cycles running on the processor, they can't monetize those with the customers. So one of the areas we're working with them on is how do we accelerate networking and storage functions on an FPGA, and therefore free up cores that they can monetize with their own customers. >> Yeah. >> And really that's the way we're trying to drop the TCO down with Alibaba, but also increase the revenue opportunity they have. >> What are some updates from the field from you guys? Obviously, acceleration's pretty hot. Everyone wants low latency. With IoT, you need to have low latency. You need compute at the edge. More application development is coming in with vertical specialty, if you will. City Brain is more of an IoT play, but the app is traffic, right? >> Yeah. >> So that's managing traffic, and there's going to be a million more use cases. What are some of the things that you guys are doing with FPGAs outside of the Alibaba thing? >> Well, I think really what we're trying to do is focus on three areas. One is to lower the cost of infrastructure, which I mentioned: networking and storage functions that today people are running on processors, and trying to lower that cost and bring that into the FPGA. The second thing we're trying to do is, you look at high-cycle apps such as AI applications, and we're really trying to bring AI into FPGAs, and creating frameworks and tool chains to make that easier. >> Yeah. >> And then we already talked about the application acceleration, things like database, genomics, financial, and really those applications running much quicker and more efficiently in FPGAs. 
>> Absolutely. >> Well congratulations on the great step. John Sakamoto, here inside theCUBE. Studios here at the Intel booth, we're getting all the action roving reporter. We had CUBE conversations here in China, getting all the action about Alibaba Cloud. I'm John Furrier, thanks for watching.