Search Results for "eight FPGAs":

Ravi Pendekanti, Dell EMC | Dell Technologies World 2018


 

(upbeat music) >> Announcer: Live, from Las Vegas, it's theCUBE, covering Dell Technologies World 2018. Brought to you by Dell EMC and its ecosystem partners. >> Welcome back to theCUBE, day three in Las Vegas at Dell Technologies World. I am Lisa Martin with John Troyer. We have been here for three days, there's over 14,000 people here, 30,000 plus more engaging with video content livestream on demand. We're excited to welcome back to theCUBE, not just back to theCUBE, but back today for a second appearance, he's so in demand, Ravi Pendekanti, Senior Vice President, Servers and Systems Product Management and Marketing at Dell EMC, welcome back! >> Thank you, Lisa, great to be here. >> So, you have so much energy for day three, but so much excitement, lots of announcements. >> Ravi: Yes. >> The theme of this event, "Make It Real," is provocative. We've heard a lot of >> Yes it is. >> Lisa: Interpretations about what that means for different customers and different industries who are looking to take advantage of emerging technologies: AI, machine learning, deep learning, IoT, to make digital transformation real. What's going on in the world of AI and machine learning? >> Lisa, a lot. Now, having said that, I don't think there's a single industry in the, in any part of the world today that we talk to that's not interested in AI, machine learning, for that matter, deep learning. Why is that so? Just think about the fact that each one of us today is probably creating and generating two and a half times more data than a year ago. It's huge. I mean, when I started out, people used to think megabytes is huge, then it went to terabytes, petabytes, exabytes, and now I think very soon we're going to talk about zettabytes, right? I'll leave it to you guys to talk about the number of zeros, but setting that aside, data by itself again, the second they went, so of much of data is being created, data in my view has absolutely no value until you create information out of it. >> Lisa: Absolutely. >> And that's where I think companies are becoming more aware of the fact that you need to start getting some information out of it, wherein starts the whole engine, first of all about going about collecting all of the data. And we have all kinds of data. We have got structured data, unstructured data, and now it's important that we actually get all of the disparate data into a format that can now be executed upon. So that's first and foremost what customers are trying to figure out. And then from there comes all the elements that the data analytics part, and then you can go into the machine learning and deep learning. So that's the way people are looking at it, and you made an interesting comment, Lisa, which is making it real. This is where people are looking at things beyond the buzzwords, right? It's sufficed to say AI is not a new term. I recall as a kid, we used to talk about AI. But now is when businesses are depending on it to ensure they have the competitive edge. >> So, Ravi, you know the pendulum swings, right, and ten years ago, >> It does. >> John: Software is eating the world and the cloud is coming, and at one point it looked like a future of undifferentiated x86 compute somewhere. It turns out, hardware actually matters, and as our application and data needs have grown, the hardware matters. >> It does. >> John: And so, part of your portfolio is the PowerEdge set of PowerEdge servers. 
I mean, how are you approaching that of making the needs of this new generation of software, this massive data parallelism and throughput real? >> Great question, John. It's interesting, yes, the pendulum keeps swinging, right? And the beauty is, as... It's my only hope that, as the pendulum swings, we're actually learning, too, and we're not making the same thing, the same mistakes. Thankfully, we are not. Now, when people talk about cloud, guess what? To your point, it has to run on something, software has to run on something. So, obviously the hardware. Now, to keep up with the changing tide and the needs, some of the recent things we have done, as an example, with our R840 launch yesterday, you know, NVMe is the talk of the town, too, talking about some of the new technologies. And customers want us to go out and provide a better way and a faster way for them to get access to the data in a much more faster way closer to the compute, so that's where the NVMe drives come in. We have got 24 NVMe drives on R840 today, which is two times more than the closest competitor. More into the R940xa; xa stands for extreme acceleration. Again, we have never had an xa product, this is the first of its kind that we are bringing out, and the beauty of this is, we wanted to makes sure there is a one to one relationship between the GPU and the CPU. So, for every CPU you have a GPU. It's a one to one relationship. If you look at the R940 we introduced earlier, it had, just to give the context to your question, John, it had, it could support four CPUs but only two GPUs. So if we are, think of it this way, if we are doubling the number of GPUs, and that's not it, we are actually enabling our customers to add up to eight FPGAs if they want. Nobody else does it, and this goes back to, I think Lisa, I think when we start to talk about FPGAs, too, and therein comes the issue, wherein customers don't have the flexibility in most of the cases in a lot of products out there. We have decided that flexibility has to be given to our customers because the changing, workload's changing, technologies, and even most customers today, they go in thinking that that's all they need, but sooner or later they realize that they need more than what they planned for. So our goal is to ensure that there is enough of scalability and headroom to enable that to happen. So that's how we, as PowerEdge Team, are building servers today, which actually enables us to provide our customers with an ability to have a headroom and at the same time give them the flexibility to change, whether it is NVMe drives or any kind of SSD drive, GPUs, FPGAs, so there's all the flexibility built into it along with ease of management. >> A couple things that you mention that I think are really important is that data doesn't have any value unless you're able to extract insights from it. >> Ravi: Yeah. >> Companies that are transforming digitally well are able to combine and recombine the same data using it as catalysts across many different applications within a business, that agility is key, that speed is key. >> Ravi: Yes. >> How are you, what are some of the things that you're hearing from the 14,000 plus people that I'm sure are all lined up to want to talk to you this week about what, for example, PowerEdge is going to enable them to do? You talked about flexibility, you talked about speed, what are some of the real applications that you're hearing feedback-wise from some of these new features that you've announced? 
>> Oh, great, so I think, again, an excellent question in terms of how the customers are reacting to and what are we doing. So now, talking about AI machine learning, think of it this way, right, the permutations and combinations are way too many. And the reason I say that is, keeping the hardware aside, when you talk about frameworks that are available today for most of the AI or machine learnings applications, people talk about TensorFlow, people talk about Caffe2, people talk about CNTK, I mean, there's a whole plethora of frameworks. And then there are different neural network methodologies, right? You hear of DNN, deep neural network, right? And then you hear of things called RNN, there is something called CNN, my point is, there is so many permutations and combinations in the mix that what our customers have come back and told us, going back to where we were earlier, talking about the flexibility in the architecture that we are providing, where we provide seamless scalability on any of the vectors, that they actually love that we are giving them the flexibility because when there are so many software options with frameworks and every other methodology, we wanted to make sure that we also provided the flexibility and the scalability. And our scalability comes in, whether it is the I/O connectability, we talked about PowerEdge MX that's going to be coming up soon that was a preview, but that's where we talked about something called the kinetic infrastructure, which essentially enables our customers to go out and run multiple workloads on the same modular infrastructure. Never happened before, right? Or, you know, the seamless way we do it now is a lot better than anything else. Likewise, to go back into the R940xa. We have the ability to go out and support hard drives, SSDs, FPGAs, GPUs, so the feedback has been that our customers are really excited about the fact that we're giving them the flexibility and agility to go out and match to the needs of their different workloads and the different options they have. So, they love it. >> Ravi, I was talking to some of your team yesterday and I was really impressed as they talked about the product development cycle. They said that we start with the customers and we start with applications. >> Ravi: Yes. >> And then we figure out what technologies are now appropriate to build in what combinations. They don't just start from let's throw the newest thing in because we can. As you talk to CIOs and enterprise architects, it used to be if you just do a server refresh and just check the box and push the button, now you've got to look at cloud readiness and what I keep on prim and what I keep off prim and what's going to fit my applications. What are you hearing from customers and how are you trying to educate them on how to approach their next refresh, well, I think even refresh is probably a bad frame, their next set of applications that they're going to have to build in this digital transformation? >> You know, John, this is actually no different, I mean let's step aside from the compute world for a minute, let's pick up an automobile industry, right? If you get into the automobile industry, a family might say they need a sedan, or a family of five or six with young kids might say they want a minivan, right? And maybe now the kids are grown up or you're still in your 20s or 30s and some of the folks would love to have a sports car, like the McLaren that up >> I'll take that one! 
>> Ravi: On the stage with Jeff; I know, I would love that too, right? (Lisa laughing) So my point is, when people are trying to decide on what is it they really want to buy, they actually know what they're looking for, right? A family of four doesn't go in and say, "I need a two-seat car," for example. It's a similar thing here, as people start looking at the workload first, they come in and start looking at mapping, "Hey, this is the kind of workload we have now," now let's start looking at what infrastructure can we provide behind it? You know, even if you look at our, something that we have announced in the past, but the 740xd. So, we have a 740 version and 740xd version; xd there stands for extreme density. So, if customers want a 2-CPU box, a 2-U box, a server, but they want more storage, then they have xd version. But they decide that storage is not really crucial, they just need the compute, then we provide the 740 on its own, the R740. So my point being that, accentuating the point you raised, is it's always nice to look at the application, look at what its needs are, whether it's memory, whether it's storage, whether it's the GPUs, the CPUs, and then look at how it transposes itself over the next few years because you really don't want to acquire something and then really decide later that you've run out of room. It's like buying a home and then you know you're going to have your kids or you're going to raise a family, you don't probably want to start off with a single bedroom and you know you're going to have a family in a couple of years. My point again being that, that is where the planning becomes absolutely important. So we are planning, and the planning phase is crucial because once you have that right, you now can rest at ease for the next few years and as we do that, one of the other fundamental design principles of PowerEdge is that we want to really support the platforms for multiple generations. Case in point, when we came out with our PowerEdge m1000e, we said that we will guarantee support for three generations of processors. We actually are up to the fifth generation as we speak right now. And our customers love it, because nobody really wants to go ahead and buy more servers every few years if they can go back with their investment they have made and ensure that there is room to grow. So, to your point, absolutely the right spot to start is start looking at the workload, start looking, once you have pegged it, then start looking at really at growing and what your needs could be. And then start connecting the dots and I think you would be coming out with the better outcome for the long run. >> We had the opportunity to talk, John and I just an hour or two ago, with the CIO, with Bask Iyer, and one of the things that was interesting is we talked to him about how the role of the CIO is changing to be really part of corporate strategy, >> Ravi: Yeah. >> And business strategy; as you talk with customers about building this infrastructure, to set them up for the flexibility and the agility that they need, allowing them to make the right decisions for what they need but also scale it over time, how much are you seeing the boots on the street that you're talking to have to sell this up the stack as this is fundamental to transforming IT, which is fundamental to transforming our business into a digital business? >> Very, very true. By the way, Bask is a great friend and a collaborator, we certainly look to, as the saying goes, "Eat your own dog food." 
So we work with Bask and team very closely because, as a CIO for a large corporation himself, we learn a lot; there's nothing better than trying to walk in the shoes of our customers so, going back to the comment you made, Lisa, is most of the, by the way, most of the customers today, the CIOs, who are now becoming not cost centers, they're becoming profit centers >> Profit centers, >> Lisa: That's what Michael Dell said on Monday. >> Absolutely, and he's absolutely right, Michael is absolutely right because most of the organizations we speak to today on an average, I would think that the number of CIOs we talk to has probably been dialed up, because we see the kind of questions that they're being asked of, right, to the point that we're making earlier, they're not looking at making point purchases for something that will satisfy them for the next 12 months or 18 months. They're looking at the next horizon, they're looking at a long-term strategy, and then they're looking back at the ROI. So what is it I'm able to go back in and provide to my customers internally, whether it is in terms of the number of users or the performance, whatever the SLAs, the Service Level Agreements may be internally, that's what they're looking for. So, towards that end, the whole concept of ROI and TCO, the total cost of ownership and the return of investment nowadays is probably a much bigger talking point that we need to support with the right factoids. I think that's becoming crucial, and the CIOs are getting more engaged in the discussions than ever in the past, and so it's just not about feeds and speeds, which I guess anyone can look at spec sheets, not as exciting, but at things beyond that that I think are getting more crucial. >> Well, Bask said, "Drinking your own champagne, eating your own dog food." I like champagne and dogs, although I'll go with both. >> I, why not. I just... >> We've got the therapy dogs next door. >> Therapy dogs, exactly. >> Lisa: Isn't that fantastic? >> They're great, they're great. >> So, last question in the last 30 seconds or so, biggest event, 14,000 as I said, expected live over the last three days, and tens of thousands more engaging, any one thing really stand out to you at this inaugural Dell Technologies World? >> The most important thing that has stuck for me is that human progress is indeed possible through technology. And this is the best showcase possible, and when you can enable human progress, which cuts across boundaries of nationality, and boundaries of any other kind, I think we are in the winning streak. >> Well said. Ravi, thanks so much for coming back today, couple times in hanging out with us on theCUBE and sharing some of the insights that you're seeing and that you're enabling your customers to achieve. >> Thank you, Lisa; thank you, John, it's been awesome. It's always wonderful being with you guys, so thank you. >> We want to thank you for watching theCUBE again. Lisa Martin with John Troyer live, day three of Dell Technologies World. Stick around, we'll be right back after a short break. (upbeat music)

Published Date: May 2, 2018

SENTIMENT ANALYSIS:

ENTITIES

Entity | Category | Confidence
Jeff | PERSON | 0.99+
Lisa Martin | PERSON | 0.99+
John | PERSON | 0.99+
Lisa | PERSON | 0.99+
Michael | PERSON | 0.99+
Ravi Pendekanti | PERSON | 0.99+
John Troyer | PERSON | 0.99+
Ravi | PERSON | 0.99+
Monday | DATE | 0.99+
Dell EMC | ORGANIZATION | 0.99+
two times | QUANTITY | 0.99+
Michael Dell | PERSON | 0.99+
Las Vegas | LOCATION | 0.99+
yesterday | DATE | 0.99+
both | QUANTITY | 0.99+
today | DATE | 0.99+
Bask | PERSON | 0.99+
Bask Iyer | PERSON | 0.99+
740 | COMMERCIAL_ITEM | 0.99+
R840 | COMMERCIAL_ITEM | 0.99+
tens of thousands | QUANTITY | 0.99+
six | QUANTITY | 0.99+
20s | QUANTITY | 0.98+
14,000 plus people | QUANTITY | 0.98+
three days | QUANTITY | 0.98+
single bedroom | QUANTITY | 0.98+
second appearance | QUANTITY | 0.98+
two GPUs | QUANTITY | 0.98+
14,000 | QUANTITY | 0.98+
30s | QUANTITY | 0.98+
Dell Technologies World 2018 | EVENT | 0.97+
each one | QUANTITY | 0.97+
this week | DATE | 0.97+
over 14,000 people | QUANTITY | 0.97+
one | QUANTITY | 0.97+
day three | QUANTITY | 0.96+
740xd | COMMERCIAL_ITEM | 0.96+
first | QUANTITY | 0.96+
a year ago | DATE | 0.96+
R740 | COMMERCIAL_ITEM | 0.96+
McLaren | ORGANIZATION | 0.96+
Dell Technologies World | EVENT | 0.94+
R940xa | COMMERCIAL_ITEM | 0.94+
fifth generation | QUANTITY | 0.94+
three generations | QUANTITY | 0.94+
R940 | COMMERCIAL_ITEM | 0.94+
Bask | ORGANIZATION | 0.92+
four | QUANTITY | 0.92+
2-CPU | QUANTITY | 0.91+
single industry | QUANTITY | 0.89+

Ashley Gorakhpurwalla, Dell EMC | Dell Technologies World 2018


 

>> Announcer: Live from Las Vegas it's theCUBE, covering Dell Technologies World 2018. Brought to you by Dell EMC and its ecosystem partners. >> And welcome back. We are live here in Las Vegas. We're in the Sands right now of day two of Dell Technologies World 2018. I'm John Walls along with Stu Miniman, and it's a pleasure now to welcome Ashley Gorakhpurwalla, who is the President and GM of Server and Infrastructure Systems at Dell EMC. Ashley, good afternoon to you. >> Thank you, great pronunciation of my last name. >> Well, thank you very much, I've worked-- >> Not an easy thing to do. >> I worked on that, how about that? Stu and I were just talking briefly with you. What a cool exhibit floor, right? >> It really is. >> There's just a lot of-- What have you seen out there that's kind of caught your eye so far? >> Well, we brought in a lot of customers this time to show their outcomes. So I'm a car guy, so you know I went straight for the McLaren. >> How 'about that McLaren out there, right? Yeah. >> My son would love the F1 setup with the gaming, virtual reality. Top Golf is a great VxRail customer. We have GoalControl. Try to beat the AI and see if you can score a goal. I mean, there's some very cool demos back there. >> And then overall, just I'm curious about your thoughts about the show then because that's a part of it. >> That's a part of it. >> A lot of client relations you're doing here, business relations. >> Sure. We're only about half way through, but so far very, very positive energy I get. I don't know if you caught or already talked to Michael after the keynote, but certainly. >> Stu did today. Certainly, Michael was on fire at the keynote, and I really, really enjoyed the discussion with Dr. Chip Plater about, and Jeffrey Wright about, how technology connects to helping people. A lot of times engineers, stuck in a lab, looking at R&D, trying to figure out a problem, lose sight of what they're doing. Great opportunity for the team to see that and kind of expand and understand where their technology is going, what it's doing for the world, what the impact is that they're having. >> So, Ashley, your team's been real busy leading up to this, seeing some of the new products in the announcement. Before we get into this though, your role expanded a little bit since the last time we talked about, talked to Tom Burns yesterday as there was the group formerly known as VCE that turned into CPSG, It was split into some pieces, and HCI is now under your domain. >> That's right. So in addition to our server businesses, which are kind of the mainstream PowerEdge business, our Extreme Scale business, our OEM business. We had a reorganization to really kind of unlock the potential that we have in a great product set, a product set before my organization was already number one. It's a position of strength. What we're trying to do is accelerate from that. So if you think about the HCI marketplace, I think you have to be in the server business to win in the HCI business. I don't envy anyone trying to do this from a position of weakness or trying to adopt other people's technology. Our supply chain, our reach, our global services and support, and then the underlying ability to invest in the server technology and beyond and differentiate, innovate on top of that is what it's going to take to win, and maybe not tomorrow, but in the future as HCI takes off. We wanted to really accelerate that by shortening the decision-making loop, making it one mission for the team, and so that came in. 
In addition, maybe a quick call-out to the storage and data protection platform engineering team who also came into my group to, again, really put our best hardware and platform of systems engineers together from servers and data protection storage and kind of create a powerhouse of R&D. >> Yeah, Ashley, it's actually, it's not surprising to us, From our research side at Wikibon, we actually called it server SAN because it was really taking the functionality and what customers wanted as a business outcome from the SAN and was pulling it closer to the server. But at its core, it's really about software. One of the things that has struck me in the last few years, comparing this to EMC worlds in the past and now Dell, is what I used to see at Dell World, which was Dell is a platform that lots of things live on. So there's lots of storage partners that live on side of Dell. There's HCI partners. Of course, you've got a broad portfolio all from the Dell families, and then OEMs and other partners that fit there. 'Cause you're a team, it makes sense that HCI comes in there because you've got that platform at the server, >> Right. >> and it grows from there. >> If you circle back to just the Dell Legacy world perhaps, much more platform oriented, infrastructure at our heart, bringing that value with prop to our customers. And I've said it before, I think if you give any segment or capability time, I think a standard kind of open infrastructure hardware platform wins. It may not be a server, but it's going to look something like a server going forward. And the specialization and the value move into the IP stack and into the software. So you better be a company that can do the scale of a standards-based platform. You better have the IP, the specialized stacks, as we do in our VM-ware stacks, in our IP stacks or in data protection, storage, networking. You can see where Michael's kind of putting those two together. It's not a tomorrow thing but five, 10 years from now. We've seen it in the carrier space. We've seen it in storage. Everywhere you go, the commoditization curve takes us to standards, infrastructure, and IP in the software. >> You made an interesting point there, saying it might not necessarily be a server. Give us kind of, if you could step back for a second, the state of compute. >> Sure. >> There's compute in the cloud, there's compute at the edge, there's (chuckles) compute all over the place. A few years ago, it was like, ah, it's all going white box and undifferentiated. And in the public cloud, I say, there's probably more skews and compute in the public cloud than if I went to Dell and picked that up there. Whether that's a good or bad thing, you could probably have some insight on. But give us your view on kind of the state of compute in the industry today. >> Sure. So if I think back 10 years when we started our business with the hyper-scale, building those infrastructures as a service, multi-tenant public clouds, there really wasn't any other choice. You either did it in a legacy mode with your IT, maybe slightly modernizing, but you're still probably siloed. You probably had storage admins and networking admins, compute admins, or you went cloud. And it was such a different experience. Since then, what customers have said consistently is, why am I having to make that choice? I either go to this rent version, which is very expensive as I scale up, or I own it or I have to own it and it's different. So multi-cloud, hybrid cloud, private cloud, however you want to instantiate it. 
And something like hyper converged infrastructure just didn't exist. They didn't have a choice. Now, with a pushing of a few buttons, you can scale up your infrastructure, perhaps on prem or in a hosted environment. That is fairly seamless with that, and now you have that portability. >> Yeah, and I'm sorry, Ashley. I wasn't trying to poke at the cloud piece. Compute at edge use cases is a little bit different than traditional-- >> Yep, absolutely. >> Servers, what's happening with the Blade market. Definitely need to, I know we need to talk about the new PowerEdges. But there's the MX we're going to cover, too, but was just kind of, if there are form factors of servers. >> You bring up a good point. It's maybe emerging, so there's probably a little bit more hype than there is reality behind it. But there are going to be billions of sensors, trillions of sensors or things that create data outside of data center environments. That's where all the data's going to be produced, and that's where decisions are going to be made. Today, the theory is, it has to go back somewhere, although I don't think any of us are getting in an autonomous car if it has to talk back to a data center and decide what to do. >> Right. >> So there's already examples of what I would call edge compute. But what if your data center has to live at a base of a cell tower at the end of a 30 mile dirt road where someone only visits 45 days apart, and they're not an IT individual? How do you extend that infrastructure, that management domain, that security domain? How do you bring it all the way out there? How do you ruggedize it? Well, you're probably going to start with a company that's been doing fresh air cooling with 13, 14 billion server hours now, operating in fresh air environments. We understand how to bring that environment the way we've been working on that remote management, lights out management style, our security. I'll give you another emerging trend that's going to come out of that. Just at the time where we're going to extend our environments out of the safety of the data center, we're also going to go back to a stateful compute. With persistent memory, nonvolatile memory, storage class memories, and security paradigms are already shifting. We're getting ahead of that with our customers of what if it wasn't just the hard drive you had to protect but almost everything in that edge device. So the form factors will change, the connectivity will change, but what we know is, you'll likely gather as much data as you can. You'll throw some of it away 'cause it won't be useful. Right now, there's a sensor telling this building that these lights are on. Until they go off, it's not useful data. But in a car, it's very useful data. Some of that data will go back, it'll get trained because humans won't be able to take in all this data. You'll need a machine. You can't write the algorithm ahead of time. You have to learn something. Back goes that IP into the edge, and then decisions will be made at that stage. >> Before we head off, we've talked about some new products. You've alluded a little bit. So you've had a launch this week. Just run through that, if you would, real quick. >> Ashley: Sure, sure, we had a few things. >> It's nice to have a new baby to talk about. >> Sure, it's pretty exciting. And it really does stem from what we just talked about. 
So if I start on the PowerEdge side, if you have a strategy that is to help your customers with that digital transformation from cloud to data center and core all the way to edge, you can start to see why we're launching certain products and why they have certain technologies in them and innovations. So starting with the 940 XA, extreme acceleration, might have to rename it if you watched the keynote. Jeff called it extreme performance. He is the boss, so I think it's XP now. (Stu and John laughing) No, we'll keep it at extreme acceleration for now. That really is about large datasets training very quickly in database environments. So you want host to GPGPU to be a one-to-one ratio. You want large datasets to be local, so you need massive storage, 32 drives for instance. And you need the capability to, again, make sure it brings the tenets of security, manageability, the ecosystem with it. So, very excited about that one. I think there's some use cases we're just not even ready for. We've already have the technology today to put eight FPGAs in that system, direct connect. And there's very few workloads or even talent in the customer set to be able to enable that, but you got to get there first with the technology to allow that innovation to happen. And we want to stoke that. Then on the R840, this really was about, once you get the data in, you're going to have to make decisions. You need, still, that processing power. Maybe you don't need 20,000 cores in the box like a 940 XA. Maybe you need a little bit less, but you do need a massive storage localized in VME direct connect. That's more direct connect that any server, I think, period in the industry. And it's really about streaming those analytics, making those realtime choices. So it really fits into the strategy that we're undertaking. >> All right, Ashley, last thing I wanted to cover. It's a bit of preview that you showed at the show. The PowerEdge MX. >> Yep. >> Modular infrastructure, no midplane, should be able to upgrade it a lot more. So are we beyond where Blade Servers have gone? Do you consider this to fit into, some call it composable infrastructure. How would you position this kind of-- >> Well, I don't have some positioning yet. It's just a sneak peek. But let me tell you how we thing about it. Is it a Blade Server or not? I'm not sure the question is something we've considered yet. It's a form factor that we think for the future is really necessary, which is, we want to get to a stage, and we're putting our research into a stage of a journey where we want to get to the point where you can utilize the resources that you bring into your environment, whether they be your environment or someone else's. Today, so much is stranded connected to a CPU, and it's just the architecture that we have today. Whether it's memory, source class memory, persistent memory, GPGPUs, heterogeneous compute FPGAs, ASICs, memory semantics, IO semantics, have to leave the box. Then we can get you things like pooled up resources that can be utilized unbound, put together, then composed, if you want to use your word, or really just aligned around a workload then retired and put back in. APIs and software, we're starting to build that out. It's starting to emerge from certain management orchestration layers we have today. But we're going to need that fabric. 
And so, as you know, we're showing actually here today a Gen-Z demo where we're starting to build that fabric that has the latency, almost a memory-like latencies from load to store and usage, all the way out to it has the memory semantics that go all the way through from CPU all the way out to memory so that, all sudden, the node no longer traps and stands the resources. How do you do that? You better have an architecture that treats everything in the box, not just the compute part, as a first class citizen for power, for thermals, for management. Second thing, if you have a midplane, you have a point of failure, but you also are not upgradeable to these fabrics that are coming and these capabilities that are on the horizon, some of which are not even in Silicon or in a lab just yet. So when you build infrastructure, let me call it infrastructure for a second, people want it as an investment. That's the part we've talked about. There's a lot more to come, so the team's excited to get it out there. I tried to hold them back a little bit, but we cheated a little bit a showed it. >> A little demo goes a long way. Ashley, thanks for being with us. Thanks for telling your story, we appreciate the time. Look forward to seeing you down the road. >> Appreciate it, thanks, guys. >> You bet. Back with more. We are live here in Las Vegas at Dell Technologies World 2018. (electronic musical flourish)
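Gorakhpurwalla's point above about "stateful compute" at the edge, where persistent memory means far more than the hard drive has to be protected, can be made concrete with a short sketch. One common way to program persistent memory today is Intel's PMDK library libpmem; the example below is an editorial illustration of that idea, not Dell's or Intel's code, and it assumes a DAX-mounted persistent-memory filesystem at a placeholder path (/mnt/pmem) with placeholder sizes.

```c
/*
 * Minimal sketch: keeping application state directly in persistent memory
 * with PMDK's libpmem. Assumes a DAX-mounted pmem filesystem at /mnt/pmem
 * (placeholder path). Build, illustratively: cc edge_state.c -lpmem
 */
#include <libpmem.h>
#include <stdio.h>
#include <string.h>

#define POOL_PATH "/mnt/pmem/edge-state"   /* placeholder path */
#define POOL_SIZE (4 * 1024 * 1024)        /* 4 MiB of persistent state */

int main(void)
{
    size_t mapped_len;
    int is_pmem;

    /* Create (or open) a file on persistent memory and map it into memory. */
    char *state = pmem_map_file(POOL_PATH, POOL_SIZE, PMEM_FILE_CREATE,
                                0666, &mapped_len, &is_pmem);
    if (state == NULL) {
        perror("pmem_map_file");
        return 1;
    }

    /* Update state in place, as if it were ordinary DRAM... */
    strcpy(state, "sensor-calibration: v42");

    /* ...then make it durable. On real pmem this is a cache flush plus a
     * fence, not a block-I/O round trip to a drive. */
    if (is_pmem)
        pmem_persist(state, mapped_len);
    else
        pmem_msync(state, mapped_len);     /* fallback for non-pmem media */

    pmem_unmap(state, mapped_len);
    return 0;
}
```

The design point the sketch highlights is exactly the one raised in the interview: once application state survives power loss in memory itself, that state (not just the drive) becomes something the platform has to secure and manage, especially in unattended edge deployments.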

Published Date: May 2, 2018

SENTIMENT ANALYSIS:

ENTITIES

Entity | Category | Confidence
Ashley | PERSON | 0.99+
Michael | PERSON | 0.99+
Jeff | PERSON | 0.99+
Stu | PERSON | 0.99+
Jeffrey Wright | PERSON | 0.99+
Stu Miniman | PERSON | 0.99+
John Walls | PERSON | 0.99+
Ashley Gorakhpurwalla | PERSON | 0.99+
Dell | ORGANIZATION | 0.99+
Dell EMC | ORGANIZATION | 0.99+
John | PERSON | 0.99+
20,000 cores | QUANTITY | 0.99+
45 days | QUANTITY | 0.99+
Tom Burns | PERSON | 0.99+
yesterday | DATE | 0.99+
30 mile | QUANTITY | 0.99+
VCE | ORGANIZATION | 0.99+
Las Vegas | LOCATION | 0.99+
two | QUANTITY | 0.99+
Today | DATE | 0.99+
Wikibon | ORGANIZATION | 0.99+
McLaren | ORGANIZATION | 0.99+
Chip Plater | PERSON | 0.99+
this week | DATE | 0.99+
tomorrow | DATE | 0.99+
today | DATE | 0.98+
Dell Technologies World 2018 | EVENT | 0.98+
32 drives | QUANTITY | 0.98+
trillions of sensors | QUANTITY | 0.97+
Dr. | PERSON | 0.97+
first | QUANTITY | 0.97+
billions of sensors | QUANTITY | 0.96+
one mission | QUANTITY | 0.96+
One | QUANTITY | 0.96+
five | QUANTITY | 0.96+
CPSG | ORGANIZATION | 0.95+
HCI | ORGANIZATION | 0.95+
Second thing | QUANTITY | 0.94+
R840 | COMMERCIAL_ITEM | 0.94+
eight FPGAs | QUANTITY | 0.91+
day two | QUANTITY | 0.91+
PowerEdges | COMMERCIAL_ITEM | 0.91+
PowerEdge | ORGANIZATION | 0.91+
Dell World | ORGANIZATION | 0.9+
first class | QUANTITY | 0.9+
last few years | DATE | 0.89+
10 years | QUANTITY | 0.88+
13, 14 billion server hours | QUANTITY | 0.86+
few years ago | DATE | 0.86+
940 XA | COMMERCIAL_ITEM | 0.86+
940 XA | COMMERCIAL_ITEM | 0.77+
one | QUANTITY | 0.76+
about | QUANTITY | 0.64+
PowerEdge MX | COMMERCIAL_ITEM | 0.63+
second | QUANTITY | 0.62+
Sands | LOCATION | 0.61+
EMC | ORGANIZATION | 0.61+
F1 | ORGANIZATION | 0.59+
VME | ORGANIZATION | 0.43+

John Sakamoto, Intel | The Computing Conference


 

>> SiliconANGLE Media Presents the CUBE! Covering Alibaba's Cloud annual conference. Brought to you by Intel. Now, here's John Furrier... >> Hello there, and welcome to theCUBE here on the ground in China for Intel's booth here at the Alibaba Cloud event. I'm John Furrier, the co-founder of SiliconANGLE, Wikibon, and theCUBE. We're here with John Sakamoto who is the vice president of the Programmable Solutions Group. Thanks for stopping by. >> Thank you for having me, John. >> So FPGAs, field-programmable gate arrays, kind of a geeky term, but it's really about software these days. What's new with your group? You came to the Intel through an acquisition. How's that going? >> Yeah, so far it's been great. As being part of a company with the resources like Intel and really having access to data center customers, and some of the data center technologies and frameworks that they've developed and integrating MPJs into that, it's been a great experience. >> One of the hot trends here, I just interviewed Dr. Wong, at Alibaba Cloud, the founder, and we were talking about Intel's relationship, but one of the things he mentioned was striking to me is that, they got this big city brain IOT project, and I asked him about the compute at the Edge and how data moves around, and he said "for all the Silicon at the Edge, one piece of Silicon at the Edge is going to be 10X inside the data center, inside the cloud or data center," which is fundamentally the architecture these days. So it's not just about the Edge, it's about how the combination of software and compute are moving around. >> Right. >> That means that data center is still relevant for you guys. What is the impact of FPGA in the data center? >> Well, I think FPGA is really our great play in the data center. You mentioned City Brain. City Brain is a great example where they're streaming live video into the data center for processing, and that kind of processing power to do video live really takes a lot of horsepower, and that's really where FPGAs come into play. One of the reasons that Intel acquired Altera was really to bring that acceleration into the data center, and really that is a great complement to Xeon's. >> Take a minute on FPGA. Do you have to be a hardware geek to work with FPGA? I mean, obviously, software is a big part of it. What's the difference between the hardware side and the software side on the programmability? >> Yes, that's a great question. So most people think FPGAs are hard to use, and that they were for hardware geeks. The transitional flow had been using RTL-based flows, and really what we've recognized is to get FPGA adoption very high within the data center, we have to make it easier, and we've invested quite a bit in acceleration stacked to really make it easier for FPGAs to be used within the data center. And what we've done is we've created frameworks and pre-optimized accelerators for the FPGAs to make it easy for people to access that FPGA technology. >> What's the impact of developers because you look at the Acceleration Stack that you guys announced last month? >> Yes, that's correct. >> Okay, so last month. This is going to move more into software model. So it's almost programmability as a dev-ops, kind of a software mindset. So the hardware can be programmed. >> Right. >> What's the impact of the developer make up, and how does that change the solutions? How does that impact the environment? 
>> So the developer make up, what we're really targeting is guys that really have traditionally developed software, and they're used to higher level frameworks, or they're used to designing INSEE. So what we're trying to do is really make those designers, those developers, really to be able to use those languages and frameworks they're used to and be able to target the FPGA. And that's what the acceleration stack's all about. And our goal is to really obfuscate that we actually have an FPGA that's that accelerator. And so we've created, kind of, standard API's to that FPGA. So they don't really have to be an FPGA expert, and we've taken things, basically standardized some things like the connection to the processor, or connections to memory, or to networking, and made that very easy for them to access. >> We see a lot of that maker culture, kind of vibe and orientation come in to this new developer market. Because when you think of a field-programmable gate array, the first thing that pops into my mind is oh my God, I got to be a computer engineering geek. Motherboards, the design, all these circuits, but it's really not that. You're talking about Acceleration-as-a-Service. >> That's right. >> This is super important, because this brings that software mindset to the marketplace for you guys. So talk about that Accelerations-as-a-Service. What is it? What does it mean? Define it and then let's talk about what it means. >> Yeah. Okay, great. So Acceleration-as-a-Service is really having pre-optimized software or applications that really are running on the FPGA. So the user that's coming in and trying to use that acceleration service, doesn't necessarily need to know there's an FPGA there. They're just calling in and wanting to access the function, and it just happens to be accelerated by the FPGA. And that's why one of the things we've been working with with Alibaba, they announce their F1 service that's based on Intel's Arria 10 FPGAs. And again we've created a partner ecosystem that have developed pre-optimized accelerators for the FPGA. So users are coming in and doing things like Genomics Sequencing or database acceleration, and they don't necessarily need to know that there's an FPGA actually doing that acceleration. >> So that's just a standard developer just doing, focusing in on an app or a use case with big data, and that can tap into the hardware. >> Absolutely, and they'll get a huge performance increase. So we have a partner in Falcon Computing, for example, that can really increase the performance of the algorithm, and really get a 3X improvement in the overall gene sequencing. And really improve the time it takes to do that. >> Yeah, I mean, Cloud and what you're doing is just changing society. Congratulations, that's awesome. Alright, I want to talk about Alibaba. What is the relationship with Intel and Alibaba? We've been trying to dig that out on this trip. For your group, obviously you mentioned City Brain. You mentioned the accelerations of service, the F1 instances. >> Right. >> What specifically is the relationship, how tight is it? What are you guys doing together? >> Well the Intel PSG group, our group, has been working very closely with Alibaba on a number of areas. So clearly the acceleration, the FPGA acceleration is one of those areas that are big, big investors. We announced the Arria 10 version today, but will continue to develop with them in the next generation Intel FPGAs, such as Stratix 10 which is based on 14 nanometer. 
And eventually with our Falcon Mesa product which is a 10 nanometer product. So clearly, acceleration's a focus. Building that ecosystem out with them is going to be a continued focus. We're also working with them on servers and trying to enhance the performance >> Yeah. >> of those servers. >> Yeah. >> And I can't really talk about the details of all of those things, but certainly there are certain applications that FPGAs, they're looking to accelerate the overall performance of their custom servers, and we're partnering with them on that. >> So one of the things I'm getting out of this show here, besides the conversion stuff, eCommerce, entertainment, and web services which is Alibaba's, kind of like, aperture is that it's more of a quantum mindset. And we talked about Blockchain in my last interview. You see quantum computing up on their patent board. >> Yeah. >> Some serious IT kinds of things, but from a data perspective. How does that impact your world, because you provide acceleration. >> Right. >> You got the City Brains thing which is a huge IOT and AI opportunity. >> Right. >> How does someone attack that solution with FPGAs? How do you get involved? What's your role in that whole play? >> Again, we're trying to democratize FPGAs. We're trying to make it very easy for them to access that, and really that's what working with Alibaba's about. >> Yeah. >> They are enabling FPGA access via their Cloud. Really in two aspects, one which we talked about which we have some pre-optimized accelerators that people can access. So applications that people can access that are running on FPGAs. But we're also enabling a developer environment where people can use the tradit RTL flow, or they can use an OpenCL Flow to take their code, compile it into the FPGA, and really get that acceleration that FPGAs can provide. So it's not only building, bringing that ecosystem accelerators, but also enabling developers to develop on that platform. >> You know, we do a lot of Cloud computing coverage, and a lot of people really want to know what's inside the Cloud. So, it's one big operation, so that's the way I look at it. But there's a lot going on there under the hood. What is some of the things that Alibaba's saying to you guys in terms of how the relationship's translating into value for them. You've mentioned the F1 instances, any anecdotal soundbites you can share on the feedback, and their direction? >> Yeah, so one of the things they're trying to do is lower the total TCO of the data center. And one of the things they have is when you look at the infrastructure cost, such as networking and storage, these are cycles that are running on the processor. And when there's cycles running on the processor, they monetize that with the customers. So one of the areas we're working with is how do we accelerate networking and storage functions on a FPGA, and therefore, freeing up HORVS that they can monetize with their own customers. >> Yeah. >> And really that's the way we're trying to drop the TCO down with Alibaba, but also increase the revenue opportunity they have. >> What's some updates from the field from you guys? Obviously, Acceleration's pretty hot. Everyone wants low latency. With IOT, you need to have low latency. You need compute at the edge. More application development is coming in with Vertical Specialty, if you will. City Brains is more of an IOT, but the app is traffic, right? >> Yeah. >> So that managing traffic, there's going to be a million more use cases. 
What are some of the things that you guys are doing with the FPGAs outside of the Alibaba thing. >> Well I think really what we're trying to do is really focus on three areas. If you look at, one is to lower the cost of infrastructure which I mentioned. Networking and storage functions that today people are using running those processes on processors, and trying to lower that and bring that into the FPGA. The second thing we're trying to do is, you look at high cycle apps such as AI Applications, and really trying to bring AI really into FPGAs, and creating frameworks and tool chains to make that easier. >> Yeah. >> And then we already talked about the application acceleration, things like database, genomics, financial, and really those applications running much quicker and more efficiently in FPGAs. >> This is the big dev-ops movement we've seen with Cloud. Infrastructure as code, it used to be called. I mean, that's the new normal now. Software guys programming infrastructure. >> Absolutely. >> Well congratulations on the great step. John Sakamoto, here inside theCUBE. Studios here at the Intel booth, we're getting all the action roving reporter. We had CUBE conversations here in China, getting all the action about Alibaba Cloud. I'm John Furrier, thanks for watching.
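Sakamoto describes two ways into the FPGA: calling a pre-optimized accelerator through a standard API, or using an OpenCL flow that takes the developer's code and compiles it onto the FPGA. The sketch below shows what the host side of that second flow can look like in plain OpenCL C. Because FPGA kernels are compiled offline into a bitstream (in Intel's FPGA OpenCL tooling, typically an .aocx file), the host loads a binary rather than building from source; the file name "vadd.aocx", the kernel name "vadd", and the buffer sizes are illustrative placeholders, not part of Intel's acceleration stack, and error handling is trimmed for brevity.

```c
/*
 * Illustrative OpenCL host flow for an FPGA accelerator: the kernel is
 * loaded as an offline-compiled bitstream, then invoked like any other
 * OpenCL kernel. Names and sizes are placeholders.
 */
#include <CL/cl.h>
#include <stdio.h>
#include <stdlib.h>

static unsigned char *load_file(const char *path, size_t *len)
{
    FILE *f = fopen(path, "rb");
    if (!f) { perror(path); exit(1); }
    fseek(f, 0, SEEK_END);
    *len = (size_t)ftell(f);
    rewind(f);
    unsigned char *buf = malloc(*len);
    if (fread(buf, 1, *len, f) != *len) { fclose(f); exit(1); }
    fclose(f);
    return buf;
}

int main(void)
{
    enum { N = 1024 };
    float a[N], b[N], c[N];
    for (int i = 0; i < N; i++) { a[i] = (float)i; b[i] = 2.0f * i; }

    cl_platform_id platform;
    cl_device_id device;
    clGetPlatformIDs(1, &platform, NULL);
    /* FPGA boards generally enumerate as accelerator-type devices. */
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_ACCELERATOR, 1, &device, NULL);

    cl_int err;
    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);
    cl_command_queue q = clCreateCommandQueue(ctx, device, 0, &err);

    /* Load the offline-compiled bitstream and build the program from it. */
    size_t bin_len;
    unsigned char *bin = load_file("vadd.aocx", &bin_len);      /* placeholder */
    cl_program prog = clCreateProgramWithBinary(ctx, 1, &device, &bin_len,
                                                (const unsigned char **)&bin,
                                                NULL, &err);
    clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "vadd", &err);            /* placeholder */

    cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               sizeof a, a, &err);
    cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               sizeof b, b, &err);
    cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof c, NULL, &err);

    clSetKernelArg(k, 0, sizeof(cl_mem), &da);
    clSetKernelArg(k, 1, sizeof(cl_mem), &db);
    clSetKernelArg(k, 2, sizeof(cl_mem), &dc);

    size_t global = N;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, dc, CL_TRUE, 0, sizeof c, c, 0, NULL, NULL);

    printf("c[10] = %f\n", c[10]);
    clReleaseMemObject(da); clReleaseMemObject(db); clReleaseMemObject(dc);
    clReleaseKernel(k); clReleaseProgram(prog);
    clReleaseCommandQueue(q); clReleaseContext(ctx);
    free(bin);
    return 0;
}
```

The Acceleration-as-a-Service model Sakamoto describes effectively hides this host-side plumbing behind a higher-level service API, so a developer calls a genomics or database function and never sees that an FPGA bitstream is doing the work underneath.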

Published Date: Oct 24, 2017

SENTIMENT ANALYSIS:

ENTITIES

Entity | Category | Confidence
Alibaba | ORGANIZATION | 0.99+
John Sakamoto | PERSON | 0.99+
John Furrier | PERSON | 0.99+
China | LOCATION | 0.99+
John | PERSON | 0.99+
Alibaba Cloud | ORGANIZATION | 0.99+
Wong | PERSON | 0.99+
10 nanometer | QUANTITY | 0.99+
two aspects | QUANTITY | 0.99+
second | QUANTITY | 0.99+
last month | DATE | 0.99+
one | QUANTITY | 0.99+
Intel | ORGANIZATION | 0.99+
SiliconANGLE | ORGANIZATION | 0.99+
Wikibon | ORGANIZATION | 0.99+
Falcon Computing | ORGANIZATION | 0.99+
14 nanometer | QUANTITY | 0.99+
Programmable Solutions Group | ORGANIZATION | 0.98+
first thing | QUANTITY | 0.98+
theCUBE | ORGANIZATION | 0.98+
3X | QUANTITY | 0.98+
10X | QUANTITY | 0.98+
One | QUANTITY | 0.98+
today | DATE | 0.96+
SiliconANGLE Media | ORGANIZATION | 0.95+
Dr. | PERSON | 0.95+
Altera | ORGANIZATION | 0.94+
City Brain | ORGANIZATION | 0.93+
Alibaba Cloud | EVENT | 0.92+
one piece | QUANTITY | 0.91+
Edge | ORGANIZATION | 0.91+
Xeon | ORGANIZATION | 0.91+
Arria 10 FPGAs | COMMERCIAL_ITEM | 0.9+
Stratix 10 | COMMERCIAL_ITEM | 0.88+
OpenCL Flow | TITLE | 0.88+
three areas | QUANTITY | 0.86+
Computing Conference | EVENT | 0.79+
a million more use cases | QUANTITY | 0.77+
one big operation | QUANTITY | 0.73+
Falcon | ORGANIZATION | 0.71+
Intel PSG | ORGANIZATION | 0.71+
Arria 10 | COMMERCIAL_ITEM | 0.71+
Mesa | COMMERCIAL_ITEM | 0.68+
Cloud | EVENT | 0.61+
Cloud | ORGANIZATION | 0.51+
City Brains | TITLE | 0.44+
F1 | EVENT | 0.4+
HORVS | ORGANIZATION | 0.38+
Studios | ORGANIZATION | 0.33+

Lisa Spelman, Intel - Google Next 2017 - #GoogleNext17 - #theCUBE


 

(bright music) >> Narrator: Live from Silicon Valley. It's theCUBE, covering Google Cloud Next 17. >> Okay, welcome back, everyone. We're live in Palo Alto for theCUBE special two day coverage here in Palo Alto. We have reporters, we have analysts on the ground in San Francisco, analyzing what's going on with Google Next, we have all the great action. Of course, we also have reporters at Open Compute Summit, which is also happening in San Hose, and Intel's at both places, and we have Intel senior manager on the line here, on the phone, Lisa Spelman, vice president and general manager of the Xeon product line, product manager responsibility as well as marketing across the data center. Lisa, welcome to theCUBE, and thanks for calling in and dissecting Google Next, as well as teasing out maybe a little bit of OCP around the Xeon processor, thanks for calling. >> Lisa: Well, thank you for having me, and it's hard to be in many places at once, so it's a busy week and we're all over, so that's that. You know, we'll do this on the phone, and next time we'll do it in person. >> I'd love to. Well, more big news is obviously Intel has a big presence with the Google Next, and tomorrow there's going to be some activity with some of the big name executives at Google. Talking about your relationship with Google, aka Alphabet, what are some of the key things that you guys are doing with Google that people should know about, because this is a very turbulent time in the ecosystem of the tech business. You saw Mobile World Congress last week, we've seen the evolution of 5G, we have network transformation going on. Data centers are moving to a hybrid cloud, in some cases, cloud native's exploding. So all new kind of computing environment is taking shape. What is Intel doing here at Google Next that's a proof point to the trajectory of the business? >> Lisa: Yeah, you know, I'd like to think it's not too much of a surprise that we're there, arm in arm with Google, given all of the work that we've done together over the last several years in that tight engineering and technical partnership that we have. One of the big things that we've been working with Google on is, as they move from delivering cloud services for their own usage and for their own applications that they provide out to others, but now as they transition into being a cloud service provider for enterprises and other IT shops as well, so they've recently launched their Google Cloud platform, just in the last week or so. Did a nice announcement about the partnership that we have together, and how the Google Cloud platform is now available and running and open for business on our latest next generation Intel Xeon product, and that's codenamed Skylake, but that's something that we've been working on with them since the inception of the design of the product, so it's really nice to have it out there and in the market, and available for customers, and we very much value partnerships, like the one we have with Google, where we have that deep technical engagement to really get to the heart of the workload that they need to provide, and then can design product and solution around that. So you don't just look at it as a one off project or a one time investment, it's an ongoing continuation and evolution of new product, new features, new capabilities to continue to improve their total cost of ownership and their customer experience. >> Well, Lisa, this is your baby, the Xeon, codename Skylake, which I love that name. 
Intel always has great codenames, by the way, we love that, but it's real technology. Can you share some specific features of what's different around these new workloads because, you know, we've been teasing out over the past day and we're going to be talking tomorrow as well about these new use cases, because you're looking at a plethora of use cases, from IoT edge all the way down into cloud native applications. What specific things is Xeon doing that's next generation that you could highlight, that points to this new cloud operating system, the cloud service providers, whether it's managed services to full blown down and dirty cloud? >> Lisa: So it is my baby, I appreciate you saying that, and it's so exciting to see it out there and starting to get used and picked up and be unleashing it on the world. With this next generation of Xeon, it's always about the processor, but what we've done has gone so much beyond that, so we have a ton of what we call platform level innovation that is coming in, we really see this as one of our biggest kind of step function improvements in the last 10 years that we've offered. Some of the features that we've already talked about are things like AVX-512 instructions, which I know just sounds fun and rolls of the tongue, but really it's very specific workload acceleration for things like high performance computing workloads. And high performance computing is something that we see more and more getting used in access in cloud style infrastructure. So it's this perfect marrying of that workload specifically deriving benefit from the new platforms, and seeing really strong performance improvements. It also speaks to the way with Intel and Xeon families, 'cause remember, with Xeon, we have Xeon Phi, you've got standard Xeon, you've got Xeon D. You can use these instructions across the families and have workloads that can move to the most optimized hardware for whatever you're trying to drive. Some of the other things that we've talked about announced is we'll have our next generation of Intel Resource Director technology, which really helps you manage and provide quality of service within you application, which is very important to cloud service providers, giving them control over hardware and software assets so that they can deliver the best customer experience to their customers based on the service level agreement they've signed up for. And then the other one is Intel Omni-Path architecture, so again, fairly high performance computing focused product, Omni-Path is a fabric, and we're going to offer that in an integrated fashion with Skylake so that you can get even higher level of performance and capability. So we're looking forward to a lot more that we have to come, the whole of the product line will continue to roll out in the middle of this year, but we're excited to be able to offer an early version to the cloud service providers, get them started, get it out in the market and then do that full scale enterprise validation over the next several months. >> So I got to ask you the question, because this is something that's coming up, we're seeing a transition, also the digital transformation's been talked about for a while. Network transformation, IoTs all around the corner, we've got autonomous vehicles, smart cities, on and on. But I got to ask you though, the cloud service providers seems to be coming out of this show as a key storyline in Google Next as the multi cloud architectures become very clear. 
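For readers who want to see what the AVX-512 instructions Spelman mentions look like from a developer's point of view, here is a minimal C sketch using the AVX-512F intrinsics: a SAXPY loop that processes 16 single-precision floats per fused multiply-add. It is a generic editorial illustration rather than Intel or Google code, and it assumes a compiler flag such as -mavx512f and a Skylake-SP class Xeon (or later) to run.

```c
/*
 * Illustrative AVX-512F example: y[i] = a * x[i] + y[i] (SAXPY), 16 floats
 * per iteration via a fused multiply-add.
 * Build, illustratively: cc -O2 -mavx512f saxpy512.c
 */
#include <immintrin.h>
#include <stdio.h>

static void saxpy_avx512(float a, const float *x, float *y, size_t n)
{
    size_t i = 0;
    __m512 va = _mm512_set1_ps(a);              /* broadcast a to 16 lanes */

    for (; i + 16 <= n; i += 16) {
        __m512 vx = _mm512_loadu_ps(x + i);     /* load 16 floats from x */
        __m512 vy = _mm512_loadu_ps(y + i);     /* load 16 floats from y */
        vy = _mm512_fmadd_ps(va, vx, vy);       /* a*x + y in one instruction */
        _mm512_storeu_ps(y + i, vy);
    }
    for (; i < n; i++)                          /* scalar tail */
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    enum { N = 1000 };
    float x[N], y[N];
    for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy_avx512(3.0f, x, y, N);
    printf("y[0] = %.1f, y[999] = %.1f\n", y[0], y[999]);  /* both print 5.0 */
    return 0;
}
```

In practice much HPC code reaches these instructions through compiler auto-vectorization or tuned math libraries rather than hand-written intrinsics; the sketch simply makes visible the 16-wide register work that the "workload acceleration" claim refers to.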
>> So I've got to ask you the question, because this is something that's coming up. We're seeing a transition, and the digital transformation has been talked about for a while: network transformation, IoT is right around the corner, we've got autonomous vehicles, smart cities, on and on. But I've got to ask you, the cloud service providers seem to be coming out of this show as a key storyline at Google Next, as the multi-cloud architectures become very clear. It's become clear, not just at this show but building up to it, that it's going to be a multi-cloud world. You're also starting to see the providers talk about their SaaS offerings: Google talking about G Suite, Microsoft talks about Office 365, Oracle has their apps, IBM's got Watson, so you have this SaaSification. This now creates a whole other category of what cloud is. If you include SaaS, you're really talking about Salesforce, Adobe, and on and on down the list; everyone is potentially going to become a SaaS provider, whether they're a unique cloud or partnering with some other cloud. What does that mean for a cloud service provider, and what do they need in terms of application support requirements to be successful?

>> Lisa: So when we look at the cloud service provider market inside of Intel, we are talking about infrastructure as a service, platform as a service, and software as a service, cutting across the three major categories. I'd say that, up until now, infrastructure as a service has gotten a lot of the airtime and focus, but SaaS is actually the bigger business, and that's why you see, I think, people moving towards it, especially as enterprise IT becomes more comfortable with using SaaS applications. You know, maybe first they started with offloading their expense report tool, but over time they've moved into more sophisticated offerings, which frees up resources for the most critical or business-critical applications that they require to stay in more of a private cloud. I think that evolution to a multi-cloud, a hybrid cloud, has happened across the entire industry, whether you are an enterprise or a cloud service provider. And then the move to SaaS is logical, because people are demanding more and more services. One of the things we've continued to find, through all our years of partnering with the biggest to the smallest cloud service providers and working so closely on those technical requirements, is that total cost of ownership really is king. It's that performance per dollar, the TCO that they can provide and derive from their infrastructure, and we've focused a lot of our engineering and our investment in our silicon design around providing that. We have delivered multiple generations, even just in the last five years, to continue to drive those step-function improvements and really optimize our hardware and the code that runs on top of it, to make sure it continues to deliver on those demanding workloads. The other thing we see the providers focusing on is their differentiation. So you'll see cloud service providers that will look through the various silicon features we offer and pick and choose based on whatever their key workload or key market is, and really hone in and optimize for those silicon features so that they can have a differentiated offering in the market for the capabilities and services they'll provide. So it's an area where we continue to really focus our efforts: understand the workload, drive the TCO down, and then focus in on the design point that's going to give that differentiation and acceleration.
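Since "performance per dollar" carries most of the weight in that answer, here is a purely illustrative sketch of the comparison a provider might run. The inputs are invented for illustration and do not reflect any real pricing or benchmark data.

```c
#include <stdio.h>

/* Performance per dollar: sustained throughput divided by annualized cost,
 * where the annualized cost is capex spread over the service life plus
 * yearly operating cost (power, cooling, space, etc.). */
static double perf_per_dollar(double requests_per_sec,
                              double capex, double service_years,
                              double opex_per_year) {
    double annual_cost = capex / service_years + opex_per_year;
    return requests_per_sec / annual_cost;
}

int main(void) {
    /* Hypothetical numbers, purely to show the shape of the comparison. */
    double prev_gen = perf_per_dollar(50000.0, 8000.0, 4.0, 1200.0);
    double next_gen = perf_per_dollar(65000.0, 9000.0, 4.0, 1100.0);

    printf("previous gen: %.1f req/s per annual dollar\n", prev_gen);
    printf("next gen:     %.1f req/s per annual dollar (%.0f%% better)\n",
           next_gen, (next_gen / prev_gen - 1.0) * 100.0);
    return 0;
}
```

The point is only the shape of the metric: throughput over annualized cost, so a new generation can win either by delivering more work or by lowering the denominator.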
>> It's interesting, and the definition is also where I would agree with you: the cloud service provider is a huge market, even when you just look at SaaS, because whether you're talking about Uber or Netflix, for instance, examples people know from real life, you can't ignore these new, diverse use cases coming out. For instance, I was just talking with Stu Miniman, one of our analysts here at Wikibon, and Riot Games could be considered a cloud, right? Because it's a SaaS platform, it's gaming. You're starting to see these new apps coming out of the woodwork, and there seems to be a requirement for being agile as a cloud provider. How do you enable that? What specifically can you share, if I'm a cloud service provider, to be ready to support anything that's coming down the pike?

>> Lisa: You know, we do a lot of workload and market analysis inside of Intel and the data center group, and you've seen over the past five years, again, just to stick with the near term, how much we've expanded and broadened our product portfolio. It will still be built upon that foundation of Xeon and what we have there, but we've gone on to offer a lot of varieties. So again, I mentioned Xeon Phi: Xeon Phi, at 72 cores, is a bootable Xeon with specific workload acceleration targeted at high performance computing and other analytics workloads. And then you have things at the other end. You've got Xeon D, which is really focused on more frontend web services and storage and network workloads, or Atom, which is even lower power and more focused on cold and warm storage workloads and, again, that network function. So we're not just sticking with one product line and saying this is the answer for everything; we're saying here's the core of what we offer and the features people need, and providing options that range from low power to high-power, high-performance, mixed across that whole workload spectrum. And then we've broadened around the CPU into a lot of other silicon innovation. I don't know if you guys have had a chance to talk about some of the work that we're doing with FPGAs, with our FPGA group, driving and delivering cloud and network acceleration through FPGAs. We've also introduced new products in the last year like Silicon Photonics, so dealing with network traffic crossing through--

>> Well, the FPGAs, that's the Altera stuff; we did talk with them, they're doing the programmable chips.

>> Lisa: Exactly. It requires a level of sophistication in understanding what you need the workload to accelerate, but once you have it, it is a very impressive and powerful performance gain, so the cloud service providers are a perfect market for that, because they have very sophisticated IT and very technically astute engineering teams that are able to, again, go back to the workload, understand what they need, and figure out the right software solution to pair with it. So that's been a big focus of our targeting. And then, like I said, we've added all these different new products to the platform that start to, over time, just work better and better together. So when you have things like Intel SSDs together with Intel CPUs and Intel Ethernet and Intel FPGAs and Intel Silicon Photonics, you can start to see how the whole package, when it's designed together under one house, can offer a tremendous amount of workload acceleration.
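One concrete way software follows a workload to "the most optimized hardware" across a portfolio like that is runtime feature dispatch. The sketch below is hypothetical, assumes a GCC or Clang compiler, and uses stub functions in place of real kernels.

```c
#include <stdio.h>

typedef void (*kernel_fn)(void);

/* Stand-in kernels; a real application would point these at
 * scalar, AVX2, and AVX-512 implementations of the same routine. */
static void kernel_scalar(void)  { puts("running the baseline scalar kernel"); }
static void kernel_avx2(void)    { puts("running the AVX2 kernel"); }
static void kernel_avx512(void)  { puts("running the AVX-512 kernel"); }

/* Pick the widest vector path the running CPU actually supports. */
static kernel_fn select_kernel(void) {
    if (__builtin_cpu_supports("avx512f")) return kernel_avx512;
    if (__builtin_cpu_supports("avx2"))    return kernel_avx2;
    return kernel_scalar;
}

int main(void) {
    kernel_fn kernel = select_kernel();
    kernel();
    return 0;
}
```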
>> I've got to ask you a question, Lisa, because this comes up: while you're talking, in my mind I'm visualizing a new kind of virtual computer server, where the cloud is one big server, so it's a design challenge. And what was teased out at Mobile World Congress, very clearly, was this new end-to-end architecture, re-imagined. If you have these processors with unique, use-case-specific capabilities, in a way you guys are now providing a portfolio of solutions that can almost be customized for a variety of cloud service providers. Am I getting that right? Is that how you see this happening, where you can just say, "Hey, just mix and match what you want and you're good"?

>> Lisa: Well, we try to provide a little bit more guidance than "as you wish." Of course people have options to choose from, but with the cloud service providers, that's where we have really tight engineering engagement, so that we can, again, understand what they need, what their design point is, what they're honing in on. You might work with one cloud service provider that is very facilities limited, another that is space limited, another that is power limited, and another where performance is king, so we can cut some SKUs to help meet each of those needs. Another good example is in the artificial intelligence space, where we did another acquisition last year, a company called Nervana, that's working on optimized silicon for neural networks. And so now we have put together this AI portfolio, so instead of saying, "Oh, here's one answer for artificial intelligence," it's, "Here's a multitude of answers." You've got Xeon, so if you have underutilized capacity and are starting down your artificial intelligence journey, just use your Xeon capacity with an optimized framework and you'll get great results and you can start your journey. If you are monetizing and running your business based on what AI can do for you, and you are leading the pack out there with the best data scientists, algorithm writers, and performance experts in the world, then you're going to want to use something like the silicon that we acquired from the Nervana team, and that codename is Lake Crest, speaking of lakes. And you'll want to use something like Xeon with Lake Crest to get that ultimate workload acceleration. So we have a whole portfolio that goes from Xeon to Xeon Phi to Xeon with FPGAs or Xeon with Lake Crest. Depending on what you're doing, and again what your design point is, we have a solution for you. And of course, when we say solution, we don't just mean hardware; we mean the optimized software frameworks and libraries and all of that, which actually give you something that can perform.
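The "Xeon capacity with an optimized framework" starting point she describes usually bottoms out in tuned math libraries, since dense matrix multiply does most of the work inside neural-network frameworks. Here is a minimal, hypothetical sketch through the standard CBLAS interface, assuming a CBLAS implementation such as OpenBLAS or Intel MKL is installed and linked:

```c
#include <stdio.h>
#include <cblas.h>   /* OpenBLAS / netlib CBLAS header; with Intel MKL use mkl.h */

int main(void) {
    enum { M = 2, N = 2, K = 3 };

    /* Row-major inputs: C = 1.0 * A(MxK) * B(KxN) + 0.0 * C */
    float A[M * K] = { 1,  2,  3,
                       4,  5,  6 };
    float B[K * N] = { 7,  8,
                       9, 10,
                      11, 12 };
    float C[M * N] = { 0 };

    cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                M, N, K, 1.0f, A, K, B, N, 0.0f, C, N);

    /* Expected result: [58 64; 139 154] */
    printf("C = [%.0f %.0f; %.0f %.0f]\n", C[0], C[1], C[2], C[3]);
    return 0;
}
```

With a tuned BLAS underneath, the same call runs whatever vendor-optimized kernels exist for the Xeon it lands on, which is the sense in which the framework, not the application, does the optimizing.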
>> On the competitive side, we've seen the processor landscape heat up in the server and cloud space, whether it's from a competitor or a homegrown foundry, whatever fabs are out there. Intel has always had a great partnership with the cloud service providers. Vis-a-vis the competition, and in that context, what are you guys doing specifically, and how are you approaching the marketplace in light of competition?

>> Lisa: So we do operate in a highly competitive market, and we always take all competitors seriously. So far we've seen the press heat up, which is different than seeing all of the deployments, so what we look to do is continue to offer the highest performance and lowest total cost of ownership for all our customers, and in this case the cloud service providers, of course. And what we do is kind of stick with our game plan of putting the best silicon in the world into the market on a regular beat rate and cadence. There's always news, there's always an interesting story, but when you look at us having had eight new products and new generations in market since the last major competitive x86 product, that's kind of what we do: just keep delivering, so that our customers know they can bet on us to always be there and not have these massive gaps. And then I also talked to you about portfolio expansion. We don't bet on just one horse; we give our customers the choice to optimize for their workloads, so you can go up to 72 cores with Xeon Phi if that's important, or as low as two cores with Atom, if that's what works for you. It's just an example of how we try to address all of our customer segments with the right product at the right time.

>> And IoT certainly brings a challenge too. When you hear about the network edge, that's a huge, huge growth area, I mean, you can't deny that that's going to be amazing; you look at how cars are data centers these days, right?

>> Lisa: A data center on wheels.

>> Data center on wheels.

>> Lisa: That's one of the fun things about my role, even in the last year: that growing partnership, even inside of Intel, with our IoT team, and really going through all of the products that we have in development and how many of them can be reused and driven towards IoT solutions. The other thing is, if you look into the data center space, I genuinely believe we have the world's best ecosystem; you can't find an ISV that we haven't worked with to optimize their solution to run best on Intel architecture and get that workload acceleration. And now we have the chance to put that same playbook into play in the IoT space. It's a somewhat nascent but growing market with a ton of opportunity, a ton of standards still to be built, and a lot of full solution kits to be put together. And that's kind of what Intel does, you know; we don't just throw something out to the market and say, "Good luck," we actually put the ecosystem together around it so that it performs. I think that's what you see with, I don't know if you guys saw our Intel GO announcement, but it's really the software development kit and the whole product offering for what you need to truly deliver automated vehicles.

>> Well, Lisa, I've got to say, you guys have a great formula: why fix what's not broken, stay with Moore's law, keep that cadence going. But what's interesting is that you are listening and adapting to the architectural shifts, which is smart, so congratulations. I think, as the cloud service provider world changes, and certainly in the data center, it's going to be a turbulent time, but a lot of opportunity, and so it's good to have that reliability. And if you can make the software go faster, then they can write more software faster, so--

>> Lisa: Yup, and that's what we've seen: every time we deliver a step-function improvement in performance, we see a step-function improvement in demand, so the world is still hungry for more and more compute, and we see this across all of our customer bases. And every time you make that compute more affordable, they come up with new, innovative, different ways to do things, to get things done, and new services to offer, and that fundamentally is what drives us: that desire to continue to be the backbone of that industry innovation.
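A brief aside on the core-count range mentioned above, from a 2-core Atom up to a 72-core Xeon Phi: software meant to span that spectrum typically discovers the available parallelism at runtime rather than hard-coding it. A hypothetical, Linux/glibc-flavored sketch:

```c
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* Logical CPUs currently online; a glibc/POSIX-style query. */
    long online = sysconf(_SC_NPROCESSORS_ONLN);
    if (online < 1)
        online = 1;

    /* Example policy: leave one CPU for housekeeping, never below one worker. */
    long workers = online > 1 ? online - 1 : 1;

    printf("detected %ld logical CPUs, sizing the worker pool to %ld threads\n",
           online, workers);
    return 0;
}
```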
>> If you could sum up in a bumper sticker what that step function is, what is that new step function?

>> Lisa: Oh, when we say step-function improvements, I mean we're always targeting over 20% performance improvement per generation, and then on top of that we've added a bunch of other capabilities beyond it. So it might show up as, say, a security feature as well, so you're getting the massive performance improvement gen to gen, and then you're also getting new capabilities, like security features, added on top. So you'll see more and more of those types of announcements from us, where we highlight not just the performance but what else comes with it, so that you can continue to address, again, the growing needs that are out there. All we're trying to do is stay a step ahead.

>> All right, Lisa Spelman, VP and GM of the Xeon product family, as well as marketing across the data center. Thank you for spending the time and sharing your insights on Google Next, and giving us a peek at the portfolio of the next-generation Xeon, really appreciate it, and again, keep on bringing that power, Moore's law, more flexibility. Thank you so much for sharing. We're going to have more live coverage here in Palo Alto after this short break.

(bright music)
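One closing note on the "over 20% per generation" figure cited above: generational gains compound, which is what makes a steady cadence add up. A small illustrative calculation, taking 20% as the stated floor and an arbitrary number of generations:

```c
#include <stdio.h>

int main(void) {
    double per_gen = 1.20;     /* the stated >20% floor, as a multiplier */
    double cumulative = 1.0;

    for (int gen = 1; gen <= 4; gen++) {
        cumulative *= per_gen;
        /* 1.20x, 1.44x, 1.73x, 2.07x over the baseline */
        printf("after %d generation(s): %.2fx the baseline\n", gen, cumulative);
    }
    return 0;
}
```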

Published Date: Mar 9, 2017
