Action Item | Why Hardware Matters
>> Hi, I'm Peter Burris, and welcome to Wikibon's Action Item. (funky electronic music) We're broadcasting, once again, from theCUBE studios in lovely Palo Alto, and I've got the Wikibon research team assembled here with me. I want to introduce each of them. David Floyer. >> Hi. >> And George Gilbert, both here in the studio with me. Remote, we have Jim Kobielus, Stu Miniman, and Neil Raden. Thanks, everybody, for joining.

Now, we're going to talk about something that is increasingly overlooked but that we still think has enormous importance in the industry, and that is: does hardware matter? For 50 years, in many respects, the rate of change in the industry has been strongly influenced, if not determined, by the rate of change in the underlying hardware technologies. As hardware technologies improved, software developers would create software that filled up that capacity. But we're experiencing a period where some of the traditional approaches to improving hardware performance are slowing down. We're also seeing an enormous, obvious move to the cloud, and the cloud is promising different ways of procuring the infrastructure capacity that businesses need. So that raises the question: with potential technology constraints on the horizon, and an increasing emphasis on utilization of the cloud, are systems integration and hardware going to continue to be a viable business option, and something users are going to have to consider as they think about how to source their infrastructure? Now, there are a couple of considerations today that make this important right now. Jim Kobielus, what are some of those considerations that increase the likelihood that we'll see some degree of specialization that's likely to turn into different hardware options?

>> Yeah Peter, hi everybody. I think one of the core considerations is that edge computing has become the new approach to architecting enterprise and consumer grade applications everywhere. And edge computing is nothing without hardware on the edge, devices as well as hubs and gateways and so forth, to offload and handle much of the processing needed. And increasingly, it's AI, artificial intelligence: deep learning, machine learning. So going forward, looking at how it's shaping up, hardware is critically important. Burning AI onto chipsets, low-power, low-cost chips that can do deep learning, machine learning, and natural language processing fast and cheaply in an embedded form factor, is critically important for the development of edge computing as a truly end-to-end distributed fabric for the next generation of applications.

>> So Jim, are we likely to see greater specialization of some of those AI algorithms and data structures and whatnot drive specialization in the characteristics of the chips that support them, or is it all going to default down to TensorFlow or GPUs?

>> It has been GPUs for AI. Much of AI, in terms of training and inferencing, has been in the cloud, and much of it has been based, historically, heretofore, on GPUs, with Nvidia being the predominant provider. However, GPUs historically have not been optimized for AI, because they've been built for gaming and consumer applications.
However, the current generation of chipsets for AI from Nvidia and others, in the cloud and in other form factors, incorporates what's called tensor core processing: highly, densely packed tensor core components able to handle deep learning neural networks very fast and very efficiently for inferencing and training. So Nvidia and everybody else is now making a big bet on tensor core processing architectures. Of course, Google's got one of the more famous ones, their TPU architecture, but they're not the only ones. So going forward in the AI ecosystem, especially for edge computing, there will increasingly be a blend of GPUs for cloud-based core processing and TPUs or similar architectures for device-level processing. But also, FPGAs, ASICs, and CPUs are not out of the running, because, for example, CPUs are critically important for systems on a chip, which are fundamentally important for unattended as well as attended operation in edge devices, handling things like natural language processing for conversational UIs.

>> So that suggests we're going to see a lot of new architecture thinking introduced as a consequence of trying to increase the parallelism through a system by incorporating more processing at the edge. >> Jim: Right. >> That's going to have an impact on volume economics and where the industry goes from an architecture standpoint. David Floyer, does that ultimately diminish the importance of systems integration as we move from the edge back towards the core and towards cloud, in whatever architectural form it takes?

>> I think the opposite; systems integration actually becomes more important. The key question has been: can software do everything? Do we need specialized hardware for anything? And the answer is yes, because standard x86 systems are just not improving in speed at all.

>> Why not?

>> There's a long answer to that, but it's to do with the amount of heat that's produced and the degree of density that you can achieve. Even the chip itself-- >> So the ability to control bits flying around the chip-- >> Correct. >> Is going down-- >> Right. >> As a consequence of dispersion of energy and heat into the chip. >> Right, and there are a lot of other factors as well. >> Other reasons as well, sure.

>> But the important thing is, how do you increase the speed? A standard x86 cycle time, with its instruction set, is now fixed. So what can you do? Well, you can obviously reduce the number of instructions and then parallelize those instructions within that same space, and that's going to give you a very significant improvement. That's the basis of GPUs and FPGAs. With GPUs, for example, you can have floating point arithmetic, standard numbers or extended floating point arithmetic; all of those help in large-scale calculations. FPGAs are much more flexible; they can be reprogrammed in very good ways, so they're useful for smaller-volume applications. ASICs are important too, but what we're seeing is a movement to specialized hardware to process AI in particular.
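To make that point about fewer, wider instructions concrete, here is a minimal sketch, in NumPy, of the operation that GPUs, tensor cores, and TPUs are all built to accelerate: the dense matrix multiply at the heart of a deep learning layer. The array sizes and the reduced-precision variant are illustrative assumptions, not anything specified in the discussion.

```python
# Minimal sketch: one dense layer of a neural network is a matrix multiply.
# Specialized hardware (GPU tensor cores, TPUs) exists to make exactly this
# one operation massively parallel instead of issuing scalar instructions.
import numpy as np

rng = np.random.default_rng(0)
batch, n_in, n_out = 64, 1024, 1024
x = rng.standard_normal((batch, n_in), dtype=np.float32)  # input activations
w = rng.standard_normal((n_in, n_out), dtype=np.float32)  # layer weights

# One vectorized call stands in for roughly 67 million multiply-adds that a
# general-purpose instruction stream would otherwise issue one at a time.
y = x @ w

# Reduced precision is the other lever mentioned above: half precision halves
# memory traffic, which is one reason tensor cores operate on float16 inputs.
y_half = x.astype(np.float16) @ w.astype(np.float16)
print(y.shape, y_half.dtype)
```

In silicon, tensor cores execute small tiles of exactly this product as single fused operations; that is what reducing the number of instructions and parallelizing them looks like in practice.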
One area that's very interesting to me is the devices at the edge, what we call the level one systems. Those devices need to be programmed very, very specifically for what is happening there. They bring all the data in, they make that first-line reduction of the data, they make the inferences, they take decisions based on the information coming in, and then they send much less data up to the level twos above them.
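A hedged sketch of what that level one loop might look like in practice: ingest raw readings, make a first-line reduction, run a local inference, act, and ship only a compact summary up to level two. Every name here (read_sensor, run_local_model, send_upstream) is hypothetical, standing in for device- and vendor-specific APIs.

```python
# Hypothetical level one edge loop: heavy data in, small summaries out.
import statistics

def read_sensor() -> list[float]:
    """Stand-in for a burst of raw readings from a local sensor."""
    return [20.0 + 0.01 * i for i in range(1000)]  # placeholder data

def run_local_model(features: dict) -> str:
    """Stand-in for on-device inference (GPU, ASIC, or FPGA accelerated)."""
    return "anomaly" if features["mean"] > 25.0 else "normal"

def send_upstream(summary: dict) -> None:
    """Stand-in for publishing a compact summary to the level two system."""
    print("to level two:", summary)

for _ in range(3):  # one iteration per sampling window
    raw = read_sensor()                        # ~1,000 raw points come in
    features = {                               # first-line data reduction
        "mean": round(statistics.fmean(raw), 3),
        "max": max(raw),
    }
    decision = run_local_model(features)       # inference and decision locally
    send_upstream({**features, "decision": decision})  # 3 values out, not 1,000
```

The point of the sketch is the ratio: a thousand readings come in, three values go out, which is why the level one device needs its own inference horsepower rather than shipping raw data upstream.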
>> So what are examples of this type of system that exist now? Because in hardware, volume matters: the more units you produce, the more dramatically costs go down. >> And in software too; in the computing industry, volume matters. >> Absolutely, absolutely. >> I think it's pretty safe to say that. >> Yeah, absolutely.

So volume matters, and it's interesting to look at one of the first real volume AI applications, which is in the iPhone X. Apple has introduced its latest chipset there. It has neural networks within it, it has GPUs built in, and it's being used for things like face recognition and other areas of AI. And the interesting thing is the cost of this. The cost of the chip itself is $27. The total cost, with all the sensors and everything, to do that sort of AI work, is $100. That's a very low bar, and very, very difficult to match in other ways. So this level of integration for the consumer business, in my opinion, is going to have a very significant effect on the choices made by manufacturers of devices going into industry and other areas. They're going to take advantage of this in a big way.

>> So Neil Raden, we've been down the FPGA road before. In data warehousing, for example, it was thought that data warehouse workloads, which did not necessarily lend themselves to a lot of the prevailing architectures in the early 90s, could get enormous acceleration by giving users greater programmable control over the hardware. How'd that work out?

>> Well, for Netezza, for example, it actually worked out pretty well for a while. What they did is they used the FPGA to handle the low-level data work, reducing the complexity of the query before it was passed on to the CPUs, where things ran in parallel. But that was before Intel introduced multi-core chips, and that kind of killed the effectiveness. The other thing was, it was highly proprietary, which made it impossible to take up to the cloud. And there was no end-user programming. I always laugh when people say FPGA, because it should have been called FGA: there was no end-user programming of an FPGA.

>> So although we still think we're going to see some benefit from this, it kind of brings us back to the cloud, because if hardware economics improve with scale, then a few companies are likely to drive a lot of the integration issues. If things like FPGAs don't get broadly diffused and programmed by large numbers of people, even though we can see how they could dramatically improve the performance and quality of workloads, then it suggests that some of the hyperscalers are going to have an enormous impact on defining what constitutes systems integration. Stu, take us through some of the challenges we've heard recently on theCUBE at re:Invent and other places, about how we're starting to see some of the hyperscalers make commitments about specialized hardware and the role systems integration is going to play, and then we'll talk about whether that could be replicated across more on-premises types of systems.

>> Sure Peter, and to go back to your opening remarks for this segment: does hardware matter? When we first saw cloud computing roll out, many people thought that this was just undifferentiated commodity equipment. But if you really dig in and understand what the hyperscalers, the public cloud companies, are doing, they really do what I've called hyperoptimize the solution. When James Hamilton at AWS talks about their infrastructure, they don't just take a bunch of off-the-shelf components and throw them out there. They build a configuration for every application, and they scale that to tens of thousands of nodes. So like what we had done in the enterprise before, which was build a stack for an application, now the public cloud does that for the services and applications they're building up the stack. So hardware absolutely matters. And if we look not only at the public cloud but, as you mentioned, at the enterprise side: where do I need to think about hardware? Where do I need to put time and effort? What David Floyer has talked about is that integration is still critically important, but the enterprise should not be worrying about taking all of the pieces and putting them together. They should be able to buy solutions and leverage platforms that take care of that environment. It's a very timely discussion, given all of the Intel issues that are happening. If I'm using a public cloud, I still need to worry that there was an issue, but I go to my supplier (chuckles) and make sure that they are handling it. And if I'm using serverless technology, I'm a little bit detached from whether or not I even have that issue and how it gets resolved. So absolutely, hardware is important. It's just a question of who manages that hardware, which pieces I need to think about, and where that happens. And the fascinating stuff happening in the AI pieces that Jim's been talking about is where you're really seeing some of the differentiation and innovation happening at the hardware level, to make sure it can react for those applications that need it.

>> So we've got this tension in the marketplace right now, where a lot of the new design decisions are going to be driven by what's happening at the edge, as we try to put more software out to where more human and system activity is actually taking place, while at the same time a lot of the new design and architecture decisions are first being identified and encountered by some of the hyperscalers. The workloads are at the edge, the new design decisions are at the hyperscalers, and latency, as well as cost, is going to ensure that a lot of workload remains at the edge. So what does that mean for that central class of system? Are we going to see, as we talk about it, TPC, true private cloud, becoming a focal point for new classes of designs, new classes of engineering? Are we going to see a Dell-EMC box that says "designed in Texas" or "designed in Hopkinton," and is that going to matter to users? David Floyer, what do we think?

>> It's really important from the customer's point of view that they can deal with a total system. So if they want a system at the very edge, the level one systems we were talking about, to do something in manufacturing, they may go to Dell, but they may also go to Sony, or they may go to Honeywell or NCL-- >> Huawei, or who knows. >> Huawei, yes, Alibaba. There are a whole number of probably new people that are going to be in that space.
When you're talking about systems on site for the higher level systems, level two and above, it will be very important to customers that the service level, the integration of all the different components, both software and hardware, comes from the manufacturer, who is organizing it from a service perspective. All of those things actually become more important in this environment. It's more complex; there are more components. There are more FPGAs and GPUs and all sorts of other things connected together, and it will be the manufacturer's responsibility, as the deliverer of a solution, to put that together, to make sure it works, and to make sure it can be serviced.

>> And very importantly, as you said, to make sure that it works and that it can be serviced. >> Yeah. >> So that's going to be there. So the differentiation will be: does the design and engineering lead to simpler configuration, simpler change-- >> Absolutely. >> Accommodate the programming requirements, accommodate the application requirements, all of which are-- >> All in there, yes. >> Proximate to the realities of where data needs to be. George, you had a comment?

>> Yeah, I've got to say, having gone to IBM's IoT event a year ago in Munich, it was pretty clear that when you're selling these new types of systems that we're alluding to here, it's like a turnkey appliance. It's not just bringing the Intel chip down. As David and Jim pointed out, it's a system on a chip that's got transistor real estate for specialized functions. And because it's not running the same scalable clustered software that you'd find in the cloud, you have small-footprint software that's highly verticalized or specialized. So we're looking at lower-volume, specialized turnkey appliances that don't really share the architectural and compatibility traits of their enterprise and true private cloud cousins. And we're selling them, for the most part, to new customers, the operations technology folks, not IT. And often you're selling them in conjunction with the supply chain master; in other words, an auto OEM might go to its suppliers, in conjunction with another vendor, and sell these edge devices or edge gateways.

>> And so that raises another very important question. Stu, I'm going to ask this of you. We're not going to be able to answer it today; it's a topic for another conversation. But one of the things the industry is not spending enough time talking about is that we are in the midst of a pretty consequential shift from a product orientation in business models to a service orientation. We talk about APIs, we talk about renting, we talk about pay-as-you-go. And there is still an open question about how well those models are going to work out on premises in a lot of circumstances. But Stu, when we think about this notion of the cloud experience providing a common way of thinking about a cloud operating model, clearly the design decisions that will have to be made by the traditional providers of integrated systems are going to have to start factoring in that question of how to move from a product to a service orientation, along with their business models, their ways of financing, et cetera. What do you think is happening? Where's the state of the art in that today?

>> Yeah, and Peter, it actually goes back to when we at Wikibon launched the true private cloud research a little over two years ago. It was not just saying, "How do we do something better than virtualization?"
It was really looking at, as you said, that cloud operating model. And what we're hearing very loudly from customers today is that it's not that they have a public cloud strategy and a private cloud strategy. They have a cloud strategy (chuckles). And one of the challenges they're really having is, how do they get their arms around that? Because today, their private cloud and their public cloud are, a lot of the time, different suppliers and different operating environments, as you said. We could spend a whole other call just discussing some of the nuances and pieces here. But the real trend we've been seeing, in the second half of last year and, I'm sure, through this year, is: what are the solutions, and how can customers manage this much more simply? What are the technology pieces and operational paradigms that are going to help them through this environment? And yeah, it's a little bit detached from some of the hardware discussion we're having here, because of course, at the end of the day, it shouldn't matter what hardware or what locale I'm in; it's how I manage the entire environment.

>> But it does (laughs). >> Yeah. >> It shouldn't matter, but the reality is, I think we're concluding that it does.

>> Right. Think back to the early days: "Oh, virtualization, great. I can take any x86. Oh wait, but I had a BIOS problem, and that broke things." So when containers rolled out, we had the same kind of discussion: "Oh wait," there was something down at the storage or networking layer that broke. So it's always: where is the proper layer? How do we manage that?
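That "proper layer" point can be made concrete with a short sketch. A framework such as PyTorch, used here purely as a familiar example and not something named in the discussion, lets the same line of code run on whatever hardware sits underneath; yet the hardware still determines the throughput and, with reduced precision, even the numerics you get back.

```python
# The abstraction layer: write the workload once, run it on whatever
# accelerator is present. The call is identical; the hardware is not.
import torch

# Pick the best available device; the rest of the code never mentions it again.
device = "cuda" if torch.cuda.is_available() else "cpu"

x = torch.randn(2048, 2048, device=device)
w = torch.randn(2048, 2048, device=device)

y = x @ w  # same code on CPU or GPU; very different throughput underneath

print(f"ran on {device}, result shape {tuple(y.shape)}")
```

The layer hides where the work runs, which is exactly what customers want from a cloud operating model, but as the panel notes, the hardware underneath still matters when something breaks or slows down.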
>> Right. I, for one, just continue to hope that we're going to see the Harry Potter computing model show up at some point. But until then, magic is not going to run software. It's going to have to run on hardware, and that has physical and other realities. All right, thanks guys. Let's wrap this one up. Let me sum up what the action item is.

So this week we've talked about the importance of hardware in the marketplace going forward. And partly it's catalyzed by an event that occurred this week: a security firm discovered a couple of flaws in some of the predominant, common, standard volume CPUs, including Intel's, that have long-term ramifications. While one of the flaws is not going to be easy to fix, the other can be fixed by software. But the suggestion is that the software fix would take away as much as 30% of the computing power of the chip. And we were thinking to ourselves: what would happen if the world suddenly lost 30% of its computing power overnight? The reality is, a lot of bad things would happen. And it's very clear that hardware still matters.

We have this tension between what's happening at the edge, where we're starting to see a need for greater distribution of function, performing increasingly specialized workloads and utilizing increasingly new technology that the prevailing stack is not necessarily built for. So the edge is driving new opportunities for design, which will turn into new requirements for hardware, which will only be possible if there are new volume markets capable of supporting it and new suppliers bringing it to market. That doesn't mean, however, that the whole concept of systems integration goes away. On the contrary, even though we're going to see this enormous amount of change at the edge, there's an enormous amount of net new invention in what it means to do systems integration. We're seeing a lot of that happen in the hyperscalers first, in companies like Amazon and Google and elsewhere. But don't be fooled: the HPEs, the IBMs, the Dell-EMCs are all very cognizant of these approaches, these changes, and these challenges. And in many respects, a lot of the original work, a lot of the original invention, is still being performed in their labs.

So the expectation is that the new design model being driven by the edge, plus the new engineering model being driven by the hyperscalers, will not mean that it all ends up in two tiers. We will see a need for modern systems integration happening in the true private cloud, on premises, where a lot of the data, a lot of the workloads, and a lot of the intellectual property are still going to reside. That, however, does not mean the model going forward is the same. Some of the new engineering and design dynamics will have to start factoring in how the hardware simplifies configuration. For example, FPGAs have been around for a long time, but end users don't program FPGAs. So what good does it do to put FPGA capability inside a true private cloud box if the user doesn't have any simple, straightforward, meaningful way to make use of it? So expect a lot of new emphasis on improved manageability, AI for ITOM, and ways of providing application developers access to accelerated devices. This is where the new systems and design issues are going to manifest themselves in the marketplace. Underneath this, when we talk about UniGrid, we're talking about some pretty consequential changes, ultimately, in how design and engineering of some of these big systems work.

So our conclusion is that hardware still matters, but the industry will continue to move in a direction that reduces the complexity of the underlying hardware. That doesn't mean users aren't going to encounter serious decisions and serious issues regarding which suppliers they should work with. So the action item is this: as we move from a product to a service orientation in the marketplace, hardware is still going to matter. That creates a significant challenge for a lot of users, because now we're talking about how that hardware is rendered as platforms that will have long-term consequences inside a business. So CIOs, start thinking about 2018 as the year in which you begin to consider the new classes of platforms you're going to move to, because those platforms will be the basis for simplifying a lot of underlying decisions about where the best design and engineering of infrastructure happens going forward.

Once again, I want to thank my Wikibon teammates, George Gilbert, David Floyer, Stu Miniman, Neil Raden, and Jim Kobielus, for a great Action Item. From theCUBE studios in Palo Alto, this has been Action Item. Talk to you soon. (funky electronic music)