John Lockwood, Algo Logic Systems | Super Computing 2017
>> Narrator: From Denver, Colorado, it's theCUBE. Covering Super Computing '17, brought to you by Intel. (electronic music)
>> Hey, welcome back everybody. Jeff Frick here with theCUBE. We're at Denver, Colorado at Super Computing 2017. 12,000 people, our first trip to the show. We've been trying to come for a while, and it's pretty amazing. A lot of heavy science in terms of the keynotes. All about space and looking into brain mapping, and it's heavy lifting, academics all around. We're excited to have our next guest, who's an expert, all about speed, and that's John Lockwood. He's the CEO of Algo-Logic. First off, John, great to see you.
>> Yeah, thanks Jeff, glad to be here.
>> Absolutely, so for folks that aren't familiar with the company, give them kind of the quick overview of Algo.
>> Yes, Algo-Logic puts algorithms into logic. So our main focus is taking things that are typically done in software and putting them into FPGAs, and by doing that we make them go faster.
>> So it's a pretty interesting phenomenon. We've heard a lot from some of the Intel execs about kind of the software overlay that now brings, I guess, a broader ecosystem of programmers into hardware, but then still leveraging the speed that you get in hardware. So it's a pretty interesting combination to get those latencies down, down, down.
>> Right, right. I mean, Intel certainly made a shift to go into heterogeneous compute. And so in this heterogeneous world, we've got software running on Xeons, Xeon Phis. And we've also got the need, though, to use new compute in more than just the traditional microprocessor. And so with the acquisition of Altera, Intel customers can now use FPGAs in order to get the benefit in speed. And so at Algo-Logic, we typically provide applications with software APIs, so it makes it really easy for end customers to deploy FPGAs into their data center, into their hosts, into their network, and start using them right away.
>> And you said one of your big customer sets is financial services and trading desks. So low latency there is critical, as millions if not billions of dollars are at stake.
>> Right. So at Algo-Logic we have a whole product line of high-frequency trading systems. And so our Tick-To-Trade system is unique in the fact that it has a sub-microsecond trading latency, and this means going from market data that comes in, for example on CME for options and futures trading, to the time that we can place a FIX order back out to the market. All of that happens in an FPGA. That happens in under a microsecond. So under a millionth of a second, and that beats every other software system that's being used.
>> Right, which is a game changer, right? Wins or losses can be made on those time frames.
>> It's become a must-have: if you're trading on Wall Street or trading in Chicago and you're not trading with an FPGA, you're trading at a severe disadvantage. And so we make a product that enables all the trading firms to be playing on a fair, level playing field against the big firms.
>> Right. So it's interesting, because with the adoption of Flash and some of these other kind of speed accelerator technologies that have come along over the last several years, people are kind of getting accustomed to the fact that speed is better, but often it was kind of set aside for high-value applications like financial services and not really proliferating to a broader set of applications.
I wonder if you're seeing that kind of change a little bit, where people are seeing the benefits of real time and speed beyond kind of the classic high-value applications?
>> Well, I think the big change that's happened is that it's become machine-to-machine now. And so humans, for example in trading, are not part of the loop anymore, and so it's not a matter of am I faster than another person? It's am I faster than the other person's machine? And so this notion of having compute that goes fast has suddenly become dramatically more important, because everything now is going to machine versus machine. And so if you're an ad tech advertiser, how quickly you can do an auction to place an ad matters, and if you can get a higher-value ad placed because you're able to do a couple rounds of an auction, that's worth a lot. And so, again, with Algo-Logic we make things go faster, and that time benefit means that, all else being the same, you're the first to come to a decision.
>> Right, right. And then of course machine-to-machine obviously brings up the hottest topic that everybody loves to talk about, which is autonomous vehicles and networked autonomous vehicles, and just the whole IoT space with the compute moving out to the edge. So these machine-to-machine systems are only growing in importance, and in their percentage of the total compute consumption, by far.
>> That's right, yeah. So last year at Super Computing, we demonstrated a drone, bringing in realtime data from a drone. So doing realtime data collection and doing processing with our Key Value Store. So this year, we have a machine learning application, a Markov Decision Process, where we show that we can scale out a machine learning process and teach cars how to drive in a few minutes.
>> Teach them how to drive in a few minutes?
>> Right.
>> So that's their learning. That's not somebody programming the commands. They're actually going through a process of learning?
>> Right. Well, so the Key Value Store is just a part of this. We're just the part of the system that makes the scale-out run well in a data center. And so we're still running the Markov Decision Process in simulations in software. So we have a couple Xeon servers that we brought with us to do the machine learning, and a data center would scale out to be dozens of racks. But even with a few machines, for simple highway driving, what we can show is that we start off with the system untrained, and in the Markov Decision Process we reward the final state of not having accidents. And so at first, the cars drive and they're bouncing into each other. It's like bumper cars. But within a few minutes, and after about 15 million simulations, which can be run that quickly, the cars start driving better than humans. And so I think that's a really phenomenal step, the fact that you're able to get to a point where you can train a system how to drive and give it 15 man-years of experience in a matter of minutes with scale-out compute systems.
>> Right, 'cause then you can put in new variables, right? You can change that training and modify it over time as conditions change, throw in snow or throw in urban environments and other things.
>> Absolutely, right. And we're not pretending that our machine learning, the application we're showing here, is an end-all solution. But as you bring in other factors like pedestrians, deer, other cars running different algorithms, or crazy drivers, you want to expose the system to those conditions as well.
And so one of the questions that came up to us was, "What machine learning application are you running?" So we're showing all 25 cars running one machine-learned application, and that's incrementally getting better as they learn to drive. But we could also have every car running a different machine learning application and see how different AIs interact with each other. And I think that's what you're going to see on the highway: as we have more self-driving cars running different algorithms, we have to make sure they all play nice with each other.
>> Right, but it's really a different way of looking at the world, right? Using machine learning, machine-to-machine, versus a single person or a team of people writing a piece of software to instruct something to do something, and then you've got to go back and change it. This is a much more dynamic, realtime environment that we're entering into with IoT.
>> Right. I mean, machine-to-human, which was kind of last year and the years before, was, "How do you make interactions between the computer and the human better?" But now it's about machine-to-machine, and it's, "How do you make machines interact better with other machines?" And that's where it gets really competitive. I mean, you can imagine with drones, for example, for applications where you have drones against drones, the drones that are faster are going to be the ones that win.
>> Right, right. It's funny, we were just here last week at the commercial drone show, and it's pretty interesting how they're designing the drones now into a three-part platform. So there's the platform that flies around. There's the payload, which can be different sensors or whatever it's carrying; it could be herbicide if it's an agricultural drone. And then they've opened up the SDKs, both on the control side as well as the mobile side, in terms of the controls. So it's a very interesting way that all these things now, via software, could tie together. But as you say, using machine learning you can train them to work together even better, quicker, faster.
>> Right. I mean, having a swarm or a cluster of these machines that work with each other, you could really do interesting things.
>> Yeah, that's the whole next thing, right? Instead of one-to-one it's many-to-many.
>> And then when swarms interact with other swarms, then I think that's really fascinating.
>> So alright, is that what we're going to be talking about? So if we connect in 2018, what are we going to be talking about? The year's almost over. What are your top priorities for next year?
>> Our top priority, we think, is that FPGAs are going to play this important part. A GPU, for example, became a very big part of the supercomputing systems here at this conference. But the other side of heterogeneous is the FPGA, and the FPGA has seen almost, just very minimal adoption so far. But the FPGA has the capability, especially when it comes to doing network IO transactions, of speeding up realtime interactions; it has an ability to change the world again for HPC. And so I'm expecting that in a couple years, at this HPC conference, what we'll be talking about with the biggest Top 500 supercomputers is how big their FPGAs are, not how big their GPUs are.
>> All right, time will tell. Well, John, thanks for taking a few minutes out of your day and stopping by.
>> Okay, thanks Jeff, great to talk to you.
>> All right, he's John Lockwood, I'm Jeff Frick. You're watching theCUBE from Super Computing 2017. Thanks for watching.
>> Bye.
(electronic music)
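Editor's note: to make the Markov Decision Process training Lockwood describes concrete, here is a minimal, illustrative sketch of reward-driven learning in a toy lane-changing world. It is not Algo-Logic's system: the three-lane setup, the reward values, and the tabular Q-learning method are assumptions chosen for clarity, and the real demo scales millions of such simulations out across Xeon servers and a key-value store.

```python
# Illustrative toy MDP only: a "car" learns to dodge an occupied lane.
# Reward shaping mirrors the demo described above: accidents are penalized,
# surviving a step is rewarded, and the policy emerges from experience.
import random
from collections import defaultdict

ACTIONS = ["stay", "left", "right"]
N_LANES = 3

def move(lane, action):
    """Apply a lane-change action, clamped to the road."""
    if action == "left":
        return max(0, lane - 1)
    if action == "right":
        return min(N_LANES - 1, lane + 1)
    return lane

q = defaultdict(float)                 # Q[(lane, blocked_lane, action)]
alpha, gamma, eps = 0.1, 0.9, 0.1      # learning rate, discount, exploration

for episode in range(100_000):         # the demo cited ~15 million simulations
    lane = random.randrange(N_LANES)
    blocked = random.randrange(N_LANES)        # lane occupied by another car
    for _ in range(50):
        # Epsilon-greedy: mostly exploit the learned policy, sometimes explore.
        if random.random() < eps:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(lane, blocked, a)])
        new_lane = move(lane, action)
        crashed = (new_lane == blocked)
        reward = -100.0 if crashed else 1.0    # reward "no accidents"
        new_blocked = random.randrange(N_LANES)
        best_next = 0.0 if crashed else max(q[(new_lane, new_blocked, a)] for a in ACTIONS)
        # One-step Q-learning update toward reward plus discounted future value.
        q[(lane, blocked, action)] += alpha * (reward + gamma * best_next - q[(lane, blocked, action)])
        if crashed:
            break                      # the early "bumper cars" phase
        lane, blocked = new_lane, new_blocked
```

Early episodes end almost immediately in collisions; as the negative reward propagates through the Q-table, the learned policy steers around the occupied lane, the same "bumper cars to better than humans" progression described in the interview.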
John Sakamoto, Intel | The Computing Conference
>> SiliconANGLE Media presents theCUBE! Covering Alibaba Cloud's annual conference. Brought to you by Intel. Now, here's John Furrier...
>> Hello there, and welcome to theCUBE here on the ground in China, at Intel's booth at the Alibaba Cloud event. I'm John Furrier, the co-founder of SiliconANGLE, Wikibon, and theCUBE. We're here with John Sakamoto, who is the vice president of the Programmable Solutions Group. Thanks for stopping by.
>> Thank you for having me, John.
>> So FPGAs, field-programmable gate arrays, kind of a geeky term, but it's really about software these days. What's new with your group? You came to Intel through an acquisition. How's that going?
>> Yeah, so far it's been great. Being part of a company with the resources of Intel, and really having access to data center customers and some of the data center technologies and frameworks that they've developed, and integrating FPGAs into that, it's been a great experience.
>> One of the hot trends here, I just interviewed Dr. Wong at Alibaba Cloud, the founder, and we were talking about Intel's relationship, but one of the things he mentioned that was striking to me is that they've got this big City Brain IoT project, and I asked him about the compute at the edge and how data moves around, and he said "for all the silicon at the edge, one piece of silicon at the edge is going to be 10X inside the data center, inside the cloud or data center," which is fundamentally the architecture these days. So it's not just about the edge, it's about how the combination of software and compute are moving around.
>> Right.
>> That means that the data center is still relevant for you guys. What is the impact of FPGAs in the data center?
>> Well, I think the FPGA is really our great play in the data center. You mentioned City Brain. City Brain is a great example where they're streaming live video into the data center for processing, and that kind of processing power to do video live really takes a lot of horsepower, and that's really where FPGAs come into play. One of the reasons that Intel acquired Altera was really to bring that acceleration into the data center, and really that is a great complement to Xeons.
>> Take a minute on FPGAs. Do you have to be a hardware geek to work with FPGAs? I mean, obviously, software is a big part of it. What's the difference between the hardware side and the software side on the programmability?
>> Yes, that's a great question. So most people think FPGAs are hard to use, and that they were for hardware geeks. The traditional flow had been RTL-based flows, and really what we've recognized is that to get FPGA adoption very high within the data center, we have to make it easier, and we've invested quite a bit in acceleration stacks to really make it easier for FPGAs to be used within the data center. And what we've done is we've created frameworks and pre-optimized accelerators for the FPGAs to make it easy for people to access that FPGA technology.
>> What's the impact on developers, because you look at the Acceleration Stack that you guys announced last month?
>> Yes, that's correct.
>> Okay, so last month. This is going to move more into a software model. So it's almost programmability as a dev-ops, kind of a software mindset. So the hardware can be programmed.
>> Right.
>> What's the impact on the developer makeup, and how does that change the solutions? How does that impact the environment?
>> So the developer makeup, what we're really targeting is guys that have traditionally developed software, and they're used to higher-level frameworks, or they're used to designing in C. So what we're trying to do is really make those designers, those developers, able to use those languages and frameworks they're used to and be able to target the FPGA. And that's what the acceleration stack's all about. And our goal is to really obfuscate the fact that we actually have an FPGA as that accelerator. And so we've created, kind of, standard APIs to that FPGA. So they don't really have to be an FPGA expert, and we've taken things, basically standardized some things like the connection to the processor, or connections to memory, or to networking, and made that very easy for them to access.
>> We see a lot of that maker culture, kind of vibe and orientation, come into this new developer market. Because when you think of a field-programmable gate array, the first thing that pops into my mind is, oh my God, I've got to be a computer engineering geek. Motherboards, the design, all these circuits. But it's really not that. You're talking about Acceleration-as-a-Service.
>> That's right.
>> This is super important, because this brings that software mindset to the marketplace for you guys. So talk about that Acceleration-as-a-Service. What is it? What does it mean? Define it and then let's talk about what it means.
>> Yeah. Okay, great. So Acceleration-as-a-Service is really having pre-optimized software or applications that really are running on the FPGA. So the user that's coming in and trying to use that acceleration service doesn't necessarily need to know there's an FPGA there. They're just calling in and wanting to access the function, and it just happens to be accelerated by the FPGA. And that's why, one of the things we've been working on with Alibaba, they announced their F1 service that's based on Intel's Arria 10 FPGAs. And again, we've created a partner ecosystem that has developed pre-optimized accelerators for the FPGA. So users are coming in and doing things like genomics sequencing or database acceleration, and they don't necessarily need to know that there's an FPGA actually doing that acceleration.
>> So that's just a standard developer, focusing in on an app or a use case with big data, and that can tap into the hardware.
>> Absolutely, and they'll get a huge performance increase. So we have a partner in Falcon Computing, for example, that can really increase the performance of the algorithm and really get a 3X improvement in the overall gene sequencing, and really improve the time it takes to do that.
>> Yeah, I mean, Cloud and what you're doing is just changing society. Congratulations, that's awesome. Alright, I want to talk about Alibaba. What is the relationship between Intel and Alibaba? We've been trying to dig that out on this trip. For your group, obviously you mentioned City Brain. You mentioned the Acceleration-as-a-Service, the F1 instances.
>> Right.
>> What specifically is the relationship, how tight is it? What are you guys doing together?
>> Well, the Intel PSG group, our group, has been working very closely with Alibaba in a number of areas. So clearly the acceleration, the FPGA acceleration, is one of those areas where we're big, big investors. We announced the Arria 10 version today, but we'll continue to develop with them on the next generation of Intel FPGAs, such as Stratix 10, which is based on 14 nanometer.
And eventually with our Falcon Mesa product, which is a 10 nanometer product. So clearly, acceleration's a focus. Building that ecosystem out with them is going to be a continued focus. We're also working with them on servers and trying to enhance the performance
>> Yeah.
>> of those servers.
>> Yeah.
>> And I can't really talk about the details of all of those things, but certainly there are certain applications where, with FPGAs, they're looking to accelerate the overall performance of their custom servers, and we're partnering with them on that.
>> So one of the things I'm getting out of this show here, besides the conversion stuff, eCommerce, entertainment, and web services, which is Alibaba's, kind of like, aperture, is that it's more of a quantum mindset. And we talked about Blockchain in my last interview. You see quantum computing up on their patent board.
>> Yeah.
>> Some serious IT kinds of things, but from a data perspective. How does that impact your world, because you provide acceleration.
>> Right.
>> You got the City Brain thing, which is a huge IoT and AI opportunity.
>> Right.
>> How does someone attack that solution with FPGAs? How do you get involved? What's your role in that whole play?
>> Again, we're trying to democratize FPGAs. We're trying to make it very easy for them to access that, and really that's what working with Alibaba's about.
>> Yeah.
>> They are enabling FPGA access via their Cloud. Really in two aspects: one, which we talked about, where we have some pre-optimized accelerators that people can access. So applications that people can access that are running on FPGAs. But we're also enabling a developer environment where people can use the traditional RTL flow, or they can use an OpenCL flow to take their code, compile it into the FPGA, and really get that acceleration that FPGAs can provide. So it's not only building, bringing that ecosystem of accelerators, but also enabling developers to develop on that platform.
>> You know, we do a lot of Cloud computing coverage, and a lot of people really want to know what's inside the Cloud. So, it's one big operation, that's the way I look at it. But there's a lot going on there under the hood. What are some of the things that Alibaba's saying to you guys in terms of how the relationship's translating into value for them? You've mentioned the F1 instances; any anecdotal soundbites you can share on the feedback, and their direction?
>> Yeah, so one of the things they're trying to do is lower the TCO of the data center. And one of the things they have is, when you look at the infrastructure cost, such as networking and storage, these are cycles that are running on the processor, and cycles running on the processor are what they monetize with the customers. So one of the areas we're working on is how do we accelerate networking and storage functions on an FPGA, and therefore free up cycles that they can monetize with their own customers.
>> Yeah.
>> And really that's the way we're trying to drop the TCO down with Alibaba, but also increase the revenue opportunity they have.
>> What are some updates from the field from you guys? Obviously, acceleration's pretty hot. Everyone wants low latency. With IoT, you need to have low latency. You need compute at the edge. More application development is coming in with vertical specialty, if you will. City Brain is more of an IoT play, but the app is traffic, right?
>> Yeah.
>> So with that managing traffic, there's going to be a million more use cases.
What are some of the things that you guys are doing with FPGAs outside of the Alibaba thing?
>> Well, I think really what we're trying to do is focus on three areas. One is to lower the cost of infrastructure, which I mentioned: networking and storage functions that today people are running on processors, and trying to lower that cost and bring it into the FPGA. The second thing we're trying to do is, you look at high-cycle apps such as AI applications, and really try to bring AI into FPGAs, creating frameworks and tool chains to make that easier.
>> Yeah.
>> And then we already talked about the application acceleration: things like database, genomics, financial, and really those applications running much quicker and more efficiently in FPGAs.
>> This is the big dev-ops movement we've seen with Cloud. Infrastructure as code, it used to be called. I mean, that's the new normal now. Software guys programming infrastructure.
>> Absolutely.
>> Well, congratulations on the great step. John Sakamoto, here inside theCUBE Studios at the Intel booth, your roving reporter getting all the action. We had CUBE conversations here in China, getting all the action about Alibaba Cloud. I'm John Furrier, thanks for watching.
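Editor's note: to make the "OpenCL flow" Sakamoto mentions concrete, here is a minimal host-side sketch using the pyopencl bindings. The vector-add kernel is an assumed stand-in for a real workload, and on an Intel FPGA the kernel would normally be compiled offline into a bitstream with the FPGA SDK for OpenCL and loaded as a binary; it is JIT-built here only for brevity.

```python
# Minimal OpenCL offload sketch (assumed vector-add workload).
# On an Intel FPGA the kernel would be an offline-compiled bitstream;
# the host-side calling pattern below stays essentially the same.
import numpy as np
import pyopencl as cl

KERNEL_SRC = """
__kernel void vadd(__global const float *a,
                   __global const float *b,
                   __global float *out) {
    int i = get_global_id(0);
    out[i] = a[i] + b[i];
}
"""

a = np.random.rand(1 << 20).astype(np.float32)
b = np.random.rand(1 << 20).astype(np.float32)
out = np.empty_like(a)

ctx = cl.create_some_context()              # picks an available platform/device
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags
buf_a = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
buf_b = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
buf_out = cl.Buffer(ctx, mf.WRITE_ONLY, out.nbytes)

prog = cl.Program(ctx, KERNEL_SRC).build()  # an offline step in an FPGA flow
prog.vadd(queue, a.shape, None, buf_a, buf_b, buf_out)   # enqueue the kernel
cl.enqueue_copy(queue, out, buf_out)        # read results back to the host
queue.finish()

assert np.allclose(out, a + b)
```

The point Sakamoto makes about standard APIs shows up in the shape of the host code: buffers, a queue, and a kernel launch. Nothing in the calling pattern changes if the device behind the context is a CPU, a GPU, or an FPGA.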
Michael Greene, Intel - #SparkSummit - #theCUBE
>> Announcer: Live from San Francisco, it's theCUBE, covering Spark Summit 2017. Brought to you by Databricks.
>> Welcome back to theCUBE, continuing our coverage here at Spark Summit 2017. What a great lineup of guests. I can't wait to introduce this gentleman. We have Intel's VP of the Software and Services Group, Mr. Michael Greene. Michael, welcome.
>> Thank you for having me.
>> All right, we also have George with us over here, and George and I will both be peppering you with questions. Are you ready for that?
>> I am. I've got the salt to go with the pepper. (laughs)
>> Well, you just got off the stage. You did the keynote this morning. What do you think was the most important message you delivered in your keynote?
>> Well, it was interesting. One of the things that we're looking at with BigDL, the BigDL framework, was that we're hearing a lot about the challenges of making sure that these AI-type workloads scale easily. And when we open-sourced BigDL, we really were designing it to leverage Spark's scalability for massive scale from the beginning. So I thought that was one of the things that connected with several of the keynotes ahead of me: if this is your challenge, here is one of many solutions, but a very good one, that will let you take advantage of the scale that people have in their infrastructure, lots of Xeons out there, and make sure you fully utilize it running the workloads of the future, AI.
>> Okay, so Intel, not just a hardware company. You do software, right? (laughs)
>> Well, you know, Intel's a solutions company, right? And hardware's awesome, but hardware without software is a brick. Maybe a warm one, but it doesn't do much.
>> Not a data brick.
>> That's right, not a data brick, just a brick.
>> And not melted down, either.
>> That's right, that's right. So sand without software doesn't go very far. And I see it as: software is used to ignite the hardware so that you actually get useful productivity out of it. So as a software solution, and as customers, they have problems to solve. It's rare that they come in and say, "Nope, I just need a nail," right? They're usually like, "I need a home." Well, you can't just provide the nail, you have to provide all the pieces, and one of the things that's exciting for me being part of Intel is that we provide silicon, of course, right? The processors, Xeon, accelerators, and now software, tools, frameworks, to make sure that a customer can actually really get the value of the entire solution.
>> Host: Okay, go ahead, George.
>> So Michael, help those of us who've been watching from afar but aren't up to date on the day-to-day tactics and strategy of what Intel's doing with (mumbles), in terms of where does BigDL fit? And then the acquisition of the floating point (mumbles) technology, so that there's special-purpose acceleration on the chip. So how do those two work together, along with the rest of the ecosystem?
>> Sure, great question. So if you think of Intel, really, we're always looking at how we can leverage Moore's Law to get more and more integrated into the solution. And if you quickly step through a brief history: at one point, we had a 386, which was a great integer processor, which was partnered with a 387 for the floating point acceleration. The 486 combined the two, because we were able to leverage Moore's Law to bring them together, and got a lot of reuse of the instruction set with the acceleration.
As we bring in Altera, which was recently integrated into Intel, they come with a suite of incredible FPGAs and accelerators, as well as another company, Nervana, that also does accelerators, and we're looking at those special-case opportunities to accelerate the user experience. So we're going to continue to follow that trend and make sure that you have the general-purpose capabilities where new workloads are coming in, and we really see a lot of growth in AI. As I think I said in the keynote, about 12x growth by 2020. We need to make sure that we have the silicon as well as the software, and that's where BigDL pulls those two together, to make sure that we're getting the full benefit of the solution.
>> So a couple years ago, we were told that Intel actually thought that there were going to be more Hadoop servers, and Hadoop is an umbrella term for the ecosystem, than database servers in three to five years' time. When you look at deep learning, because we know it's so much more compute-intensive than traditional statistical machine learning, if you look out three to five years, how much of the compute cycles, share of workloads, do you see deep learning comprising?
>> I think that maybe in the last year, deep learning, or AI, as a workload is about seven percent. But if you grow by 12x, it's definitely growing quickly. So what we're expecting is that AI will become inherent in pretty much every application. An example of this is, at one point, facial detection was the new thing. Now you can't buy a camera that doesn't do that. So if you pull up your camera and you see the little square show up, it's just commonplace. We're expecting that AI will just become an integral part of solutions, not a solution in and of itself. It's there to make software solutions smarter, it's there to make them go further. It's not there to be independent. It's like, "Wow, we've identified a cat." That's cool, but if we're identifying problems, or making sure that the autonomous delivery systems don't kill a cat, there's a little bit more that needs to go on, so it's going to be part of the solution.
>> What about the trade-off between processing at the edge and learning in the cloud? I mean, you can learn on the edge, you can learn in the cloud, you can do the analysis on either end of the run time. How do you guys see that being split up in the future?
>> Absolutely. I think that for deep learning training, there are always opportunities to go through vast amounts of data to figure out how to identify what's interesting, identify new insights. Once you have those models trained, then you want to use them everywhere, and then we're switching from training to inference. Inference at the edge allows you to be more real-time. In some cases, imagine a smart camera: even from a smart camera point of view, do I send all the data stream to the data center? Well, maybe not. Let's assume that it's being used for highway patrol. If you identify the car speeding, then send the information. Except leave me out. (laughs) Kidding on that. But it's that kind of piece where you allow both sides to be smart. More information for the continual training in the cloud, but also more ability to add compute to the edge, so that we can do some really cool activities right at the edge, real-time, without having to send all the information.
>> If you had to describe to people working on architectures for the new distributed computing in IoT, what would an edge device look like in its hardware footprint, in terms of compute, memory, connectivity?
>> So in terms of connectivity, we're expecting an explosion of 5G. A lot of high bandwidth, multiple things being connected with some type of communication, 5G capability. It won't just be about, let's just say, cars feeding back where they are from their GPS, but it's going to be cars talking to other cars. Maybe one needs to move over a lane. Can they adjust? We're talking an autonomous world. There's going to be so much interconnection through 5G, so I expect to see 5G show up in most edge devices. And to your point, I think it's very important to add that we expect edge devices to all have some kind of compute capability. Not just sensors, but the ability to sense and make some decisions based on what they're sensing. We're going to continue to see more and more compute go to the edge devices. So again, where we look at leveraging the power of Moore's Law, we're going to be able to move that compute around. Today the cloud is just incredible with its collective compute power, but that compute will slowly move outward. We've seen that from mainframes to workstations to PCs, the phones, and to edge devices. I think that trend will continue, and we'll continue to see bigger data centers and other use cases that require deeper analysis. So from a developer's point of view, if you're working on an edge device, make sure it has great connectivity and compute.
>> So one last follow-up from me. Google is making a special effort to build their own framework, open source TensorFlow, and then marry it to specialized hardware, tensor processing units. So specialization versus generalization. Do you have a sense, for someone who's running TPUs in the cloud, for whether, if they're learning TensorFlow models or TensorFlow-based models, there would be an advantage for that narrow set running on tensor processing units? Or would that be supported just as well on Intel hardware?
>> You know, specialization is anything that's purpose-built. As you said, it's just not general purpose, but as I mentioned, over time, the specialized capabilities slide into general-purpose opportunities. Recently, we added AES-NI, which accelerates an encryption algorithm, into our processors, very specialized for encryption/decryption. But because it's so generally used now, it's just part of our processor offering, it's just part of our instruction set. I expect to continue to see that trend, so many things may start off specialized, which is great, it's a great way to innovate, and then, over time, if it becomes general purpose, or if it's so specialized that everyone's using it, it's now general purpose and it slides into the general-purpose opportunity. I think that will be a continuation. We've seen that since the dawn of the computer: specialized memory, specialized compute, specialized floating point capabilities are now just generally available. And so when we deploy things like BigDL, a lot of the benefit of it is that we know the Xeon processor has so much capability because it has pulled in, over time, the best of the specialized use cases that are now generally used.
>> Great deep-dive questions, George. We have a couple of minutes left, so I know you brought a lot to this conference. They put you up on stage. So what were you hoping to gain from the conference?
Maybe you came here to learn, or have you had any interesting conversations so far?
>> You know, what I'm always excited about at these conferences is that the open-source community is one that is so incredibly adaptive and innovative, so we're always out there looking to see where the world is going. By doing that, we're learning, because again, where the software goes, we want to make sure that the hardware that supports it is there to meet their needs. So today, we're learning about new frameworks coming out, the next Spark on the roadmap, what they're looking at doing. I expect that we'll hear a little more about scripting languages as well. All of that is just fantastic, because I've come to expect a lot of innovation, but I'm still impressed by the amount of innovation. So it's good to be in the right place, and as we approach things from an Intel point of view, we know we approach it from a portfolio solution set. It's not just silicon, it's not just an accelerator, but it's from the hardware through the software solution. So we know that we can really help to accelerate and usher in the next compute paradigm. So this has been fun.
>> That would be a great ending, but I got to ask you this. When you're sitting in this chair next year at Spark 2018, what do you hope to be talking about?
>> Well, one of the things that we're looking at and talking about is this massive amount of data. I would love to be here next year talking more about the new memory technologies that are coming out that allow for tremendously more storage at incredible speeds, better SSDs, and how they will impact the performance of the overall solution. And of course, we're going to continue to accelerate our processing cores, accelerators for unique capabilities. I want to come back in and say, "Wow, what did we 10x this year?" That's always fun. It's a great challenge to the engineering team, who just heard that and said, "Ugh, he's starting off with 10x again?" (laughs)
>> Great, Michael. That's a great wrap-up, too. We appreciate you coming on and sharing with theCUBE audience the exciting things happening at Intel with Spark.
>> Well, thank you for the time. I really appreciate it.
>> All right, and thank you all for joining us for this segment. We'll be back with more guests in just a few. You're watching theCUBE. (electronic music)
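Editor's note: Greene's highway-patrol example captures the edge-inference pattern in a sentence: score everything locally, transmit only the events that matter. Below is a minimal Python sketch of that pattern; the model stub, the endpoint URL, and the threshold are hypothetical, invented purely for illustration.

```python
# Illustrative only: inference at the edge, with the cloud receiving just
# the rare events. detect_speed() stands in for a real on-device model;
# the endpoint below is a placeholder, not a real service.
import json
import time
import urllib.request

SPEED_LIMIT_KPH = 100
CLOUD_ENDPOINT = "https://example.invalid/events"   # hypothetical URL

def detect_speed(frame):
    """Placeholder for on-device inference estimating vehicle speed."""
    return frame["estimated_kph"]

def report(event):
    """Send one small JSON event upstream instead of the raw video stream."""
    req = urllib.request.Request(
        CLOUD_ENDPOINT,
        data=json.dumps(event).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

def run(camera_frames):
    for frame in camera_frames:
        kph = detect_speed(frame)       # inference happens at the edge
        if kph > SPEED_LIMIT_KPH:       # only exceptional frames leave
            report({"ts": time.time(), "kph": kph, "plate": frame.get("plate")})
```

The whole video stream stays at the edge; what crosses the network is a few hundred bytes per event, which is the bandwidth and latency win Greene describes.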
Lisa Spelman, Intel - Google Next 2017 - #GoogleNext17 - #theCUBE
(bright music)
>> Narrator: Live from Silicon Valley, it's theCUBE, covering Google Cloud Next '17.
>> Okay, welcome back, everyone. We're live in Palo Alto for theCUBE's special two-day coverage here in Palo Alto. We have reporters, we have analysts on the ground in San Francisco analyzing what's going on with Google Next; we have all the great action. Of course, we also have reporters at Open Compute Summit, which is also happening in San Jose, and Intel's at both places, and we have an Intel senior manager on the line here, on the phone: Lisa Spelman, vice president and general manager of the Xeon product line, with product management responsibility as well as marketing across the data center. Lisa, welcome to theCUBE, and thanks for calling in and dissecting Google Next, as well as teasing out maybe a little bit of OCP around the Xeon processor. Thanks for calling.
>> Lisa: Well, thank you for having me, and it's hard to be in many places at once, so it's a busy week and we're all over, so that's that. You know, we'll do this on the phone, and next time we'll do it in person.
>> I'd love to. Well, more big news: obviously Intel has a big presence at Google Next, and tomorrow there's going to be some activity with some of the big-name executives at Google. Talking about your relationship with Google, aka Alphabet, what are some of the key things that you guys are doing with Google that people should know about? Because this is a very turbulent time in the ecosystem of the tech business. You saw Mobile World Congress last week, we've seen the evolution of 5G, we have network transformation going on. Data centers are moving to a hybrid cloud in some cases, cloud native's exploding. So an all-new kind of computing environment is taking shape. What is Intel doing here at Google Next that's a proof point for the trajectory of the business?
>> Lisa: Yeah, you know, I'd like to think it's not too much of a surprise that we're there, arm in arm with Google, given all of the work that we've done together over the last several years in that tight engineering and technical partnership that we have. One of the big things that we've been working with Google on is, as they move from delivering cloud services for their own usage and for their own applications that they provide out to others, they're now transitioning into being a cloud service provider for enterprises and other IT shops as well. So they've recently launched their Google Cloud Platform, just in the last week or so, and did a nice announcement about the partnership that we have together, and how the Google Cloud Platform is now available and running and open for business on our latest next-generation Intel Xeon product, codenamed Skylake. That's something that we've been working on with them since the inception of the design of the product, so it's really nice to have it out there in the market and available for customers, and we very much value partnerships like the one we have with Google, where we have that deep technical engagement to really get to the heart of the workload that they need to provide, and then can design product and solution around that. So you don't just look at it as a one-off project or a one-time investment; it's an ongoing continuation and evolution of new products, new features, new capabilities to continue to improve their total cost of ownership and their customer experience.
>> Well, Lisa, this is your baby, the Xeon, codenamed Skylake, and I love that name.
Intel always has great codenames, by the way; we love that. But it's real technology. Can you share some specific features of what's different around these new workloads? Because, you know, we've been teasing this out over the past day, and we're going to be talking tomorrow as well about these new use cases, because you're looking at a plethora of use cases, from the IoT edge all the way down into cloud native applications. What specific things is Xeon doing that's next generation, that you could highlight, that points to this new cloud operating system, the cloud service providers, whether it's managed services to full-blown, down-and-dirty cloud?
>> Lisa: So it is my baby, I appreciate you saying that, and it's so exciting to see it out there, starting to get used and picked up, and to be unleashing it on the world. With this next generation of Xeon, it's always about the processor, but what we've done goes so much beyond that. We have a ton of what we call platform-level innovation coming in; we really see this as one of our biggest kind of step-function improvements in the last 10 years that we've offered. Some of the features that we've already talked about are things like AVX-512 instructions, which I know just sounds fun and rolls off the tongue, but really it's very specific workload acceleration for things like high performance computing workloads. And high performance computing is something that we see more and more getting used and accessed in cloud-style infrastructure. So it's this perfect marrying of that workload specifically deriving benefit from the new platforms, and seeing really strong performance improvements. It also speaks to the way, with Intel and the Xeon families, because remember, with Xeon, we have Xeon Phi, you've got standard Xeon, you've got Xeon D, you can use these instructions across the families and have workloads that can move to the most optimized hardware for whatever you're trying to drive. Some of the other things that we've talked about and announced: we'll have our next generation of Intel Resource Director Technology, which really helps you manage and provide quality of service within your application, which is very important to cloud service providers, giving them control over hardware and software assets so that they can deliver the best customer experience to their customers based on the service level agreement they've signed up for. And then the other one is Intel Omni-Path Architecture. So again, a fairly high performance computing focused product; Omni-Path is a fabric, and we're going to offer it in an integrated fashion with Skylake, so that you can get an even higher level of performance and capability. So we're looking forward to a lot more that we have to come. The whole of the product line will continue to roll out in the middle of this year, but we're excited to be able to offer an early version to the cloud service providers, get them started, get it out in the market, and then do that full-scale enterprise validation over the next several months.
>> So I've got to ask you the question, because this is something that's coming up. We're seeing a transition; also, the digital transformation's been talked about for a while. Network transformation, IoT's all around the corner, we've got autonomous vehicles, smart cities, on and on. But I've got to ask you, though: the cloud service providers seem to be coming out of this show as a key storyline in Google Next, as the multi-cloud architectures become very clear.
So it's become clear, not just at this show, but it's been building up to this: it's pretty clear that it's going to be a multi-cloud world. As well, you're starting to see the providers talk about their SaaS offerings: Google talking about G Suite, Microsoft talks about Office 365, Oracle has their apps, IBM's got Watson, so you have this SaaSification. So this now creates a whole other category of what cloud is. If you include SaaS, you're really talking about Salesforce, Adobe, you know, on and on down the list; everyone is potentially going to become a SaaS provider, whether they're a unique cloud or partnering with some other cloud. What does that mean for a cloud service provider? What do they need for application support requirements to be successful?
>> So when we look at the cloud service provider market inside of Intel, we are talking about infrastructure as a service, platform as a service, and software as a service, so cutting across the three major categories. Up until now, infrastructure as a service has gotten a lot of the airtime or focus, but SaaS is actually the bigger business, and that's why you see, I think, people moving towards it, especially as enterprise IT becomes more comfortable with using SaaS applications. You know, maybe first they started with offloading their expense report tool, but over time, they've moved into more sophisticated offerings that free up resources for them to do their most critical or business-critical applications, the ones that they require to stay in more of a private cloud. I think that evolution to a multi cloud, a hybrid cloud, has happened across the entire industry, whether you are an enterprise or whether you are a cloud service provider. And then the move to SaaS is logical, because people are demanding just more and more services. One of the things, through all our years of partnering with the biggest to the smallest cloud service providers and working so closely on those technical requirements, that we've continued to find is that total cost of ownership really is king. It's that performance per dollar, TCO, that they can provide and derive from their infrastructure, and we've focused a lot of our engineering and our investment in our silicon design around providing that. We have multiple generations that we've provided, even just in the last five years, to continue to drive those step-function improvements and really optimize our hardware and the code that runs on top of it, to make sure that it does continue to deliver on those demanding workloads. The other thing that we see the providers focusing on is what's their differentiation. So you'll see cloud service providers that will look through the various silicon features that we offer, and they'll pick and choose based on whatever their key workload is, or whatever their key market is, and really kind of hone in and optimize for those silicon features so that they can have a differentiated offering in the market for what capabilities and services they'll provide. So it's an area where we continue to really focus our efforts: understand the workload, drive the TCO down, and then focus in on the design point of what's going to give that differentiation and acceleration.
>> It's interesting; on the definition, I'd also agree with you that the cloud service provider is a huge market when you even look at the SaaS. 'Cause whether you're talking about Uber or Netflix, for instance, examples people know about in real life, you can't ignore these new, diverse use cases coming out.
For instance, I was just talking with Stu Miniman, one of our analysts here at Wikibon, and Riot Games could be considered a cloud, right? I mean, 'cause it's a SaaS platform, it's gaming. You're starting to see these new apps coming out of the woodwork. There seems to be a requirement for being agile as a cloud provider. How do you enable that? What specifically can you share, if I'm a cloud service provider, to be ready to support anything that's coming down the pike?
>> Lisa: You know, we do do a lot of workload and market analysis inside of Intel and the data center group, and you've seen over the past five years, again, how much we've expanded and broadened our product portfolio. So again, it will still be built upon that foundation of Xeon and what we have there, but we've gone on to offer a lot of varieties. So again, I mentioned Xeon Phi: Xeon Phi at 72 cores, a bootable Xeon, but with specific workload acceleration targeted at high performance computing and other analytics workloads. And then you have things at the other end. You've got Xeon D, which is really focused on more frontend web services and storage and network workloads, or Atom, which is even lower power and more focused on cold and warm storage workloads, and again, that network function. So you could then say we're not just sticking with one product line and saying this is the answer for everything; we're saying here's the core of what we offer, and the features people need, and finding options, whether they range from low power to high power, high performance, kind of mixed across that whole workload spectrum. And then we've broadened around the CPU into a lot of other silicon innovation. So I don't know if you guys have had a chance to talk about some of the work that we're doing with FPGAs, with our FPGA group, driving and delivering cloud and network acceleration through FPGAs. We've also introduced new products in the last year, like Silicon Photonics, so dealing with network traffic crossing through--
>> Well, the FPGAs, that's the Altera stuff. We did talk with them; they're doing the programmable chips.
>> Lisa: Exactly. So it requires a level of sophistication and understanding of what you need the workload to accelerate, but once you have it, it is a very impressive and powerful performance gain for you. So the cloud service providers are a perfect market for that, because they have very sophisticated IT and very technically astute engineering teams that are able to really, again, go back to the workload, understand what they need, and figure out the right software solution to pair with it. So that's been a big focus of our targeting. And then, like I said, we've added all these different things, different new products to the platform, that start to, over time, just work better and better together. So when you have things like Intel SSDs there together with Intel CPUs and Intel Ethernet and Intel FPGAs and Intel Silicon Photonics, you can start to see how the whole package, when it's designed together under one house, can offer a tremendous amount of workload acceleration.
>> I've got to ask you a question, Lisa, 'cause this comes up: while you're talking, I'm just in my mind visualizing a new kind of virtual computer server; the cloud is one big server, so it's a design challenge.
And what was teased out very clearly at Mobile World Congress was this new end-to-end architecture, re-imagined. If you have these processors with unique, use-case-specific capabilities, in a way you guys are now providing a portfolio of solutions that can almost be customized for a variety of cloud service providers. Am I getting that right? Is that how you see this happening, where you can just say, "Hey, mix and match what you want and you're good"? >> Lisa: Well, we try to provide a little bit more guidance than "as you wish," though of course people have their options to choose. With the cloud service providers, that's what we have: really tight engineering engagement, so that we can, again, understand what they need, what their design point is, what they're honing in on. You might work with one cloud service provider that is very facilities limited, another that is space limited, another that's power limited, and another where performance is king, so we can cut SKUs to help meet each of those needs [a rough sketch of this design-point matching follows this exchange]. Another good example is in the artificial intelligence space, where we did another acquisition last year, a company called Nervana that's working on optimized silicon for neural networks. And so now we've put together this AI portfolio, so instead of saying, "Oh, here's one answer for artificial intelligence," it's, "Here's a multitude of answers." If you've got Xeon capacity to utilize and are starting down your artificial intelligence journey, just use that Xeon capacity with an optimized framework and you'll get great results to start your journey. If you are monetizing and running your business based on what AI can do for you and you are leading the pack out there, with the best data scientists, algorithm writers, and peak-performance experts in the world, then you're going to want to use something like the silicon that we acquired from the Nervana team, and that codename is Lake Crest, speaking of some lakes there. And you'll want to use something like Xeon with Lake Crest to get that ultimate workload acceleration. So we have a whole portfolio that goes from Xeon to Xeon Phi to Xeon with FPGAs or Xeon with Lake Crest. Depending on what you're doing and, again, what your design point is, we have a solution for you. And of course, when we say solution, we don't just mean hardware; we mean the optimized software frameworks and the libraries and all of that, that actually give you something that can perform. >> On the competitive side, we've seen the processor landscape heat up in the server and the cloud space, whether it's from a competitor or a homegrown foundry, whatever fabs are out there. Intel's always had a great partnership with cloud service providers. Vis-a-vis the competition, and in that context, what are you guys doing specifically, and how do you approach the marketplace in light of competition? >> Lisa: So we do operate in a highly competitive market, and we always take all competitors seriously. So far we've seen the press heat up, which is different than seeing all of the deployments, so what we look for is to continue to offer the highest performance and lowest total cost of ownership for all our customers, and in this case, the cloud service providers, of course.
And what we do is kind of stick with our game plan of putting the best silicon in the world into the market on a regular beat rate and cadence. So there's always news, there's always an interesting story, but when you look at having had eight new products and new generations in market since the last major competitive x86 product, that's kind of what we do: just keep delivering, so that our customers know they can bet on us to always be there and not have these massive gaps. And then I also talked to you about portfolio expansion; we don't bet on just one horse, we give our customers the choice to optimize for their workloads. You can go up to 72 cores with Xeon Phi if that's important, or as low as two cores with Atom if that's what works for you. Just an example of how we try to address all of our customer segments with the right product at the right time. >> And IoT certainly brings a challenge too, when you hear about the network edge; that's a huge, huge growth area, I mean, you can't deny that's going to be amazing. You look at cars, they're data centers these days, right? >> Lisa: A data center on wheels. >> Data center on wheels. >> Lisa: That's one of the fun things about my role, even in the last year: that growing partnership inside of Intel with our IoT team, and really going through all of the products that we have in development and how many of them can be reused and driven towards IoT solutions. The other thing is, if you look into the data center space, I genuinely believe we have the world's best ecosystem; you can't find an ISV that we haven't worked with to optimize their solution to run best on Intel architecture and get that workload acceleration. And now we have the chance to put that same playbook into play in the IoT space. It's a somewhat nascent but growing market with a ton of opportunity, a ton of standards still to be built, and a lot of full solution kits to be put together. And that's kind of what Intel does, you know, we don't just throw something out to the market and say, "Good luck," we actually put the ecosystem together around it so that it performs. I don't know if you guys saw our Intel GO announcement, but that's what you see there: it's really the software development kit and the whole product offering for what you need to truly deliver automated vehicles. >> Well, Lisa, I've got to say, you guys have a great formula, why fix what's not broken: stay with Moore's law, keep that cadence going. But what's interesting is you are listening and adapting to the architectural shifts, which is smart, so congratulations. As the cloud service provider world changes, and certainly in the data center, it's going to be a turbulent time, but a lot of opportunity, so it's good to have that reliability, and if you can make the software go faster, then they can write more software faster, so-- >> Lisa: Yup, and that's what we've seen: every time we deliver a step function improvement in performance, we see a step function improvement in demand, and so the world is still hungry for more and more compute, and we see this across all of our customer bases. And every time you make that compute more affordable, they come up with new, innovative, different ways to get things done and new services to offer, and that fundamentally is what drives us: that desire to continue to be the backbone of that industry innovation.
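[To make the design-point matching Lisa describes a bit more concrete, here is a minimal, hypothetical sketch. It is not an Intel tool; the workload-to-product mappings are simplified from her comments, and the function and type names are invented for illustration.]

```python
# Hypothetical sketch (not an Intel tool): matching a cloud provider's
# design point to a portfolio option, simplified from the interview.
from dataclasses import dataclass

@dataclass
class DesignPoint:
    workload: str                    # e.g. "ai-training", "hpc", "web-frontend"
    power_limited: bool = False
    space_limited: bool = False
    performance_first: bool = False

def recommend(dp: DesignPoint) -> str:
    """Return an illustrative silicon option for a given design point."""
    if dp.workload == "ai-training" and dp.performance_first:
        return "Xeon + Lake Crest"           # dedicated neural-network silicon
    if dp.workload == "ai-training":
        return "Xeon + optimized framework"  # start on existing capacity
    if dp.workload == "hpc":
        return "Xeon Phi (up to 72 cores)"   # bootable, highly parallel
    if dp.workload == "network-acceleration":
        return "Xeon + FPGA"                 # reprogrammable per workload
    if dp.power_limited or dp.space_limited:
        return "Xeon D or Atom"              # low-power, dense form factors
    return "Xeon"                            # general-purpose default

# Example: a performance-first AI shop versus a power-limited web frontend.
print(recommend(DesignPoint("ai-training", performance_first=True)))
print(recommend(DesignPoint("web-frontend", power_limited=True)))
```

[The shape of the decision is the point: the constraint (power, space, performance) and the workload together pick the part, which is the "mix and match" framing from the question above.]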
>> If you could sum up in a bumper sticker what that step function is, what is that new step function? >> Lisa: Oh, when we say step functions of improvement, I mean we're always targeting over 20% performance improvement per generation, and then on top of that we've added a bunch of other capabilities beyond it. So it might show up as, say, a security feature as well: you're getting the massive performance improvement gen to gen, and then you're also getting new capabilities like security features added on top. So you'll see more and more of those types of announcements from us, where we highlight not just the performance but what else comes with it, so that you can continue to address the growing needs that are out there. All we're trying to say is, stay a step ahead. >> All right, Lisa Spelman, VP and GM of the Xeon product family as well as data center marketing. Thank you for spending the time and sharing your insights on Google Next, and for giving us a peek at the portfolio of the Xeon next generation, really appreciate it, and again, keep on bringing that power, Moore's law, more flexibility. Thank you so much for sharing. We're going to have more live coverage here in Palo Alto after this short break. (bright music)
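[For scale on that step function: Lisa's target of over 20% per generation, compounded across the eight generations she cited earlier, works out to roughly a 4x cumulative gain. A minimal back-of-the-envelope sketch, assuming the 20% applied uniformly:]

```python
# Back-of-the-envelope illustration, assuming a uniform 20% gain per generation.
gain_per_gen = 1.20
generations = 8
cumulative = gain_per_gen ** generations
print(f"Cumulative speedup after {generations} generations: {cumulative:.1f}x")
# -> roughly 4.3x, before counting added capabilities like security features
```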