Action Item | Converged & Hyper Converged Infrastructure

Hi, I'm Peter Burris, and welcome to Wikibon's Action Item. (electronic music) Every week, we bring together the Wikibon research team and we present the action items that we believe are most crucial for users to focus on against very important topics. This week, I'm joined by George Gilbert and David Floyer here in the Cube studios in Palo Alto. And on the phone we have Ralph Phinos, Dave Vellante, and Jim Kobielus. Thank you guys, thank you team for being part of today's conversation. What we're going to talk about today in Action Item is the notion of what we're calling enterprise hyperscale. Now we're going to take a route to get there that touches upon many important issues, but fundamentally the question is, at what point should enterprises choose to deploy their own hardware at scale to support applications that will have a consequential business impact on their shareholder, customer, and employee value? Now to kick us off here, 'cause this is a very complex topic, and it involves a lot of different elements, David Floyer, first question to you. What is the core challenge that enterprises face today as they think about build, buy, or rent across this increasingly blurred hardware continuum, or system continuum? >> So the biggest challenge with the traditional way that enterprises have put together systems is that the cost and the time to manage these systems is going up and up. And as we go from just systems of record, with analytic systems being mainly in batch mode, towards systems of intelligence, where the real-time analytics are going to combine with the systems of record, the complexity of the systems and the software layers is getting greater and greater. And it takes more and more time and effort and elapsed time to keep things current. >> Why is it that not everybody can do this, David? Is there a fundamental economic reason at play here? >> Well, if you take systems and build them yourself and put them together yourself, you'll always end up with the cheapest system. The issue is that the cost of maintaining those systems, and even more, the elapsed-time cost of maintaining those systems, the time to value of putting in new releases, etc., has been extending. And there comes a time when that cost of delaying the implementation of new systems overwhelms the cost that you can save in the hardware itself. >> So there are some scale efficiencies in thinking about integration from a time standpoint. Dave Vellante, we've been looking at this for quite some time, and we think about true private cloud, for example. But if you would, kind of give us that core dynamic in simple terms between what is valuable to the business, what isn't valuable to the business, and the different options between renting and building yourself. What is that core dynamic at play? >> OK, so as we talked about a lot in our true private cloud research, hyper-converged systems are an attempt to substantially mimic public cloud environments on-prem. And this creates a bifurcated buying dynamic that I think is worth exploring a little bit. The big cloud players, as everybody talks about, have lots of engineers running around, they have skill, and they have time. So they'll spend time to build proprietary technologies and use their roll-your-own components to automate processes. In other words, they'll spend time to save money. This is essentially hyperscale as a form of their R&D, and they have an n-year lead, whether it's four, five, or six years, on the enterprise. 
And that's not likely to change, that dynamic. The enterprise buyers, on the other hand, don't have the resources, they're stretched thin, so they'll spend money to save time. So enterprises want to cut labor costs and shift low-value IT labor to so-called vendor R&D. To wit, our forecasts show that about $150 billion is going to come out of low-value IT operations over the next ten years, and will shift to integrated products. >> So ultimately we end up seeing the vendors effectively capturing a lot of that spend that otherwise had been spent internally. Now this raises a new dynamic, when we think about this, David Floyer, in that there are still vendors that have to return something to their shareholders. There's this increased recognition that businesses or enterprises want this cloud experience, but not everybody is able to offer it, and we end up then with some really loosely-defined definitions. What's the continuum of where systems are today, from traditional all the way out to cloud? What does that look like? >> So a useful way of looking at it is to see what has happened over time and where we think it's going. We started with completely separate systems. Converged systems then came in, where the vendor put them together and reduced the time to value a little bit. But really the maintenance was still a responsibility of-- >> [Peter] But what was brought together? >> [David F] It was the traditional arrays, it was the servers-- >> Racks, power supplies-- >> All of that stuff put together, and delivered as a package. The next level up was so-called hyper-converged, where certainly some of the hyperconverged vendors went and put in software for each layer, software for the storage layer, software for the networking layer, and put in more management. But a lot of vendors really took hyperconverged as being the old stuff with a few extra flavors. >> So they literally virtualized those underlying hardware resources, got some new efficiencies and economies. >> That's right, so they software-virtualized each of those components. When you look at the cloud vendors, just skipping one there, they have gone hyperscale. And they have put in, as Dave spoke about earlier, all of their software to make that hyperscale work. What we think sits in the middle of that is enterprise hyperscale, which is coming in, where you have what we call Server SAN. You have the storage capability, the networking capability, and the CPU capabilities all separated, able to be scaled in whatever direction is required, and any processor able to get at any data through that network with very, very little overhead. And it's software for the storage, it's software and firmware for the networking. The processor is relieved of all that processing. We think that architecture is going to mimic what the hyperscale players have. But the vendors now have an opportunity of putting in the software to emulate that cloud experience, and take away from the people who want on-site equipment all of the work that's necessary to keep that software stack up to date. The vendors are going to maintain that software stack as high up as they can go. >> So David, is this theory, or are there practical examples of this happening today? >> Oh, absolutely, there are practical examples of this happening. There are practical examples at the lower levels, with people like Micron and its SolidScale platform. 
That's at a technology level, when we're talking about hyperscale-- Well, if you're looking at it from a practical point of view, Oracle has put it into the marketplace: Oracle cloud on-premises, Oracle converged systems, where they are taking the responsibility of maintaining all of the software, all the way up to the database stack. And in the future, probably beyond that, towards the Oracle applications as well. So they're taking that approach, putting it in, and arguing, persuasively, that the customer should focus on time to value as opposed to just the cost of the hardware. >> Well, we can also look at SaaS vendors, right, many of whom have come off of infrastructure as a service, deployed their own enterprise hyperscale, and are increasingly starting to utilize some of this hyperscale componentry as a basis for building things out. Now one of the key reasons why we want to do this, and George, I'll turn it to you, is because, as David mentioned earlier, the idea is we want to bring analytics and operations more closely together to improve automation, augmentation, and other types of workloads. What is it about that effort that's encouraging this kind of adoption of these new approaches? >> [George] Well, databases typically make great leaps forward when we have changes in the underlying trade-offs or relative price performance of compute, storage, and networking. What we're talking about with hyperscale, I guess either on-prem or the cloud version, is that we can build scale-out that databases can support without having to be rewritten, so that they work just the way they did on tightly-coupled, shared-memory symmetric multiprocessors. And so now they can go from a few nodes, or half a dozen nodes, or even say a dozen nodes, to thousands. And as David's research has pointed out, the latency to get to memory in any node from any other node is about five microseconds. So building up from that, the point is we can now build databases that really do have the horsepower to handle the analytics to inform the transactions in the same database. Or, if you do separate them, because you don't want to touch a current system of record, you have a very powerful analytic system that can apply more data and do richer analytics to inform a decision in the form of a transaction than you could with traditional architectures. >> So it's the data that's driving the need for a data-rich system that's architected in the context of data needs, that's driving a lot of this change. Now, David Floyer, we've talked about data tiering. We've talked about the notion of primary, secondary, and tertiary data. Without revisiting that entirely, what is it about this notion of enterprise hyperconvergence that's going to make it easier to naturally place data where it belongs in the infrastructure? >> Well, underlying this is that moving data is extremely expensive, so you want to, where possible, move the processing to the data itself. The origin of that data may be at the edge, for example, in IoT. It may be in a large central headquarters. It may be in the cloud; it may be operational data, end-user data from people using their phones, which is available from the cloud. So there are multiple sources. So you want to place the processing as close to that data as possible so that you have both the least cost of moving it and the lowest latency. And that's particularly important when you've got systems of intelligence where you want to combine the two. 
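To make the point about the expense of moving data concrete, here is a minimal, hypothetical sketch comparing the time to ship raw edge data to a central site against processing it in place and shipping only the results. The data volume, WAN bandwidth, and reduction factor below are illustrative assumptions, not figures from this discussion.

```python
# Hypothetical illustration of "move the processing to the data."
# All figures below are assumptions chosen for illustration only.

EDGE_DATA_PER_DAY_GB = 500          # assumed raw sensor/IoT data per site per day
WAN_BANDWIDTH_MBPS = 100            # assumed sustained uplink to the central site
REDUCTION_FACTOR = 1000             # assumed raw-data-to-results ratio when
                                    # the processing runs locally at the edge

def transfer_hours(gigabytes, bandwidth_mbps):
    """Time to push a payload over the WAN at the assumed bandwidth."""
    bits = gigabytes * 8 * 1e9
    seconds = bits / (bandwidth_mbps * 1e6)
    return seconds / 3600

ship_raw = transfer_hours(EDGE_DATA_PER_DAY_GB, WAN_BANDWIDTH_MBPS)
ship_results = transfer_hours(EDGE_DATA_PER_DAY_GB / REDUCTION_FACTOR,
                              WAN_BANDWIDTH_MBPS)

print(f"Shipping raw data:      ~{ship_raw:.1f} hours of transfer per day")
print(f"Processing at the edge: ~{ship_results * 60:.1f} minutes of transfer per day")
```

With these assumptions, moving the raw data consumes roughly eleven hours of transfer per day, while processing at the edge and moving only the results takes well under a minute; that, plus the latency penalty of hauling data across a WAN, is the economic logic behind placing processing as close to the data as possible.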
>> So Jim Kobielus, it seems as though there's a compelling case to be made here to focus on time: time to value, time to deploy, on the one hand, as well as another aspect of time, the time associated with latency, the time associated with reducing path length and optimizing for path length, which again has a scale impact. What are developers thinking? Are developers actually going to move the market to these kinds of solutions, or are they going to try to do something different? >> I think what developers will do is that they will begin to move the market towards hyperconverged systems. Much of the development that's going on now is for artificial intelligence, deep learning, and so forth, where you're building applications that have an increasing degree of autonomy, being able to make decisions based on system-of-record data, system-of-engagement data, and system-of-insight data in real time. What that increasingly requires, Peter, is a development platform that combines those different types of databases, or data stores, and also combines the processing for deep learning, machine learning, and so forth, on devices that are getting tinier and tinier and embedded in mobile devices and whatnot. So what I'm talking about here is an architecture for development where developers are going to say, I want to be able to develop it in the cloud, I'm going to need to, 'cause we have huge teams of specialists who are building and training and deploying and iterating these in a cloud environment, a centralized modeling context, but then deploying the results of their work down to the smallest systems where these models will need to run, if not autonomously, then in some loosely-coupled fashion with tier-two and tier-three systems, which will also be hyperconverged. And each of those systems in each of those tiers will need a self-similar data fabric and an AI processing fabric. So what developers are saying is, I want to be able to take it and model it, and deploy it to these increasingly nanoscopic devices at the edge, and I need each of those components at every tier to have the same capabilities in hyperconverged form factors, essentially. >> For hyperscale, so here's where we are, guys. Where we are is that there are compelling economic reasons why we're going to see this notion of enterprise hyperscale emerge. It appears that the workloads are encouraging that. Developers seem to be moving towards adopting these technologies. But there's another group that we haven't talked about. Dave Vellante, the computing industry is not a simple go-to-market model. There are a lot of reasons why channels, partnerships, etc. are so complex. How are they going to weigh in on this change? >> [Dave Vellante] Well, the cloud clearly is having an impact on the channel. I mean, if you look at sort of the channel guys, you've got the sort of box sellers, which still comprise most of the channel. You've got more solution orientation, and then increasingly, you know, the developers are becoming a form of a channel. And I think the channel still has a lot of influence over how customers buy, and I think one of the reasons that people still buy roll-your-own, and it's somewhat artificial, is that the channel oftentimes prefers it that way. It's more complicated, and as their margins get squeezed, the channel players can maintain services on top of those roll-your-own components. 
So I think buyers have got to be careful, and they've got to make sure that their service provider's motivations align with, you know, their desired outcomes, and that they're not doing the roll-your-own bespoke approach for the wrong reasons. >> Yeah, and we've seen that a fair amount as we've talked to senior IT folks: there's often a clear misalignment between what's being pushed from a technology standpoint and what the application actually requires, and that's one of the reasons why this question is so rich and so important. But Ralph Phinos, kind of sum up: when you think about some of these issues as they pertain to where to make investments and how to make investments, from our perspective, is there a relatively simple approach to thinking this through and understanding how best to put your money to get the most value out of the technologies that you choose? (static hissing) Alright, I think we've lost Ralph there, so I'll try to answer the question myself. (chuckles) (David laughs) So here's how we would look at it, and David Floyer, help me out and see if you disagree with me. At the end of the day, we're suggesting that customers that have a cost orientation should worry a little bit less about risk, a little bit less about flexibility, and they can manage how that cost happens. The goal is to try to reduce the cost as fast as possible, and not worry so much about the future options that they'll face in terms of how to reduce future types of cost out. And so that might push them more towards this public hyperscale approach. But for companies that are thinking in terms of revenue, that have to ensure that their systems are able to respond to competitive pressures and customer needs, that are increasingly worried about buying future options with today's technology choices, there's a scale there, but that's the group that's going to start looking more at enterprise hyperscale. Clearly that's where SaaS players are. Yeah. And then the question, and what requires further research, is: where is that break point going to be? So if I'm looking at this from an automation, from a revenue standpoint, then I need a little bit greater visibility into where that break point's going to be between controlling my own destiny, with the technology that's crucial to my business, versus not having to deal with the near-term costs associated with doing the integration myself. But this time to value, I want to return to this time to value. >> [David] It's time to value that is the crucial thing here, isn't it? >> [Peter] Time to value now, and time to future value. >> And time to future value, yes. The consequence of doing everything yourself is that the time to put in new releases, the time to put in patches, the time to make your system secure, keeps increasing. And the more that you integrate systems into systems of intelligence, with the analytics and the systems of record, the more you start to integrate, the more complex the total environment, and the more difficult it's going to be for people to manage that themselves. So in that environment, you would be pushing towards getting systems where the vendor is doing as much of that integration as they can-- and that's where they get the economies from. The vendors get the economies of scale because they can feed back into the system faster than anybody else. 
Rather than taking a snowflake approach, they're taking a volume approach, and they can feed back, for example, artificial intelligence in operational efficiency and in security. There are many, many opportunities for vendors to push those findings down into the marketplace. And those vendors can be cloud vendors as well. If you look at Microsoft, they can push down into their Azure Stack what they're finding in terms of artificial intelligence and in terms of capabilities. They can push those down into the enterprises themselves. So the more that they can go up the stack, into the database layers, maybe even into the application layers, the higher they can go, the lower the cost and the lower the time to value will be for them to deploy applications using that. >> Alright, so we've very quickly got some great observations on this important dynamic. It's time for action items. So Jim Kobielus, let me start with you. What's the action item for this whole notion of hyperscale? Action items, Jim Kobielus. >> Yeah, the action item for hyperscale is to consider the degree of convergence you require at the lowest level of the system, the edge device. How much of that needs to be converged down to a commoditized component that can be flexible enough that you can develop a wide range of applications on top of that-- >> Excellent, hold on, OK. George Gilbert, action item. >> Really quickly, you have to determine: are you going to keep your legacy system-of-record database and add, say, an analytic database on a hyperscale infrastructure, so that you're not doing a heart-and-lung transplant on an existing system? If you can do that, and you can manage the latency between the existing database and the culling of data to the analytic database, that's great. Then there's little disruption. Otherwise you have to consider integrating the analytics into a hyperscale-ready legacy database. >> David Vellante, action item. >> Tasks like LUN management, server provisioning, and general infrastructure management are non-strategic. So as fast as possible, shift your "IT labor resources" up the stack toward more strategic initiatives, whether they're digital initiatives, data orientation, or other value-producing activities. >> David Floyer, action item. >> Well, I was just about to say what Dave Vellante just said. So let me focus a little bit more on a step in order to get to that position. >> So Dave Floyer, action item. (David laughs) >> So the action item that I would choose would be that you have to know what your costs are, and you have to be able to, as senior management, look at those objectively and say, "What is my return on spending all of this money and making the system operate?" The more that you can reduce the complexity, buying in converged systems, hyperconverged systems, hyperscale systems that are going to put that responsibility onto the vendors themselves, the better position you're going to be in to really add value to the bottom line with applications that really can start to use all of this capability, the advanced analytics that's coming into the marketplace. 
And companies that are going to build their business with technology and digital are going to have to move to a new relationship management framework. Alright, so let's summarize today's Action Item meeting. First off, I want to thank very much George Gilbert and David Floyer, here in the studio with me, and David Vellante, Ralph Phinos, and Jim Kobielus on the phone. Today we talked about enterprise hyperscale. This is part of a continuum that we see happening, because the economics of technology are continuing to assert themselves in the marketplace, and it's having a significant range of impacts on all venues. When we think about scale economies, we typically think about how many chips we're going to stamp out, or how many copies of an operating system are going to be produced, and that still obtains, and it's very important. But increasingly, users have to focus their attention on how we're going to generate economies out of the IT labor that's necessary to keep digital businesses running. If we can shift some of those labor costs to other players, then we want to support those technology sets that embed those labor costs directly in the form of technology. So over the next few years, we're going to see the emergence of what we're calling enterprise hyperscale, which embeds labor costs directly into hyperscale packaging, so that companies can focus more on generating revenue out of technology and spend less time on the work of integration. The implication of that is that the traditional buying process of trying to economize on the time to purchase, the time to get access to the piece parts, is going to give way to a broader perspective on the time to ultimate value of the application or of the outcome that we seek. And that's going to have a number of implications that CIOs have to worry about. From an external standpoint, it's going to mean valuing technology differently, valuing packaging differently. It means less of a focus on the underlying hardware, and more of a focus on this common set of capabilities that allow us to converge applications. So whereas converged infrastructure talked about converging hardware, enterprise hyperscale increasingly is about converging applications against common data, so that we can run more complex, interesting, and revenue-producing workloads without scaling the labor and management costs of those workloads. A second key issue is that we have to step back and acknowledge that sometimes the way products go to market and our outcomes or our desires do not align. There is a residual reality in the marketplace that large numbers of channel partners and vendors have an incentive to try to push more complex technologies that require more integration, because it creates a greater need for them and creates margin opportunities. So ensure that as you try to achieve this notion of converged applications, and not necessarily converged infrastructure, you are working with a partner who follows that basic program. And the last thing, as I noted a second ago, is that this is going to require a new approach to thinking about strategic vendor management. For the last 30 years, we've done a phenomenal job of taking cost out of technology by focusing on procurement and trying to drive every single dime out of a purchase that we possibly could, even if we didn't know what that was going to mean from an ongoing maintenance, integration, and risk-cost standpoint. What we need to think about now is what will be the cost to the outcome. 
And not only this outcome, but because we're worried about digital business, future outcomes, that are predicated on today's decisions. So the whole concept here is, from a relationship management standpoint, the idea of what relationship is going to provide us the best time to value today, and streams of time to value in the future. And we have to build our relationships around that. So once again I want to thank the team. This is Peter Burris. Thanks again for participating or listening to the Action Item. From the Cube studios in Palo Alto, California, see you next week. (electronic music)
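One way to make the build-buy-rent break point that runs through this conversation concrete is a back-of-the-envelope comparison of a roll-your-own build against an integrated, enterprise-hyperscale-style system. The sketch below is hypothetical: the hardware prices, integration labor, months to value, and monthly business value are illustrative assumptions rather than Wikibon figures, and the model ignores ongoing maintenance and risk. It simply shows how the cost of delayed time to value can overwhelm savings on hardware.

```python
# Hypothetical illustration of the "spend time to save money" versus
# "spend money to save time" trade-off. All figures are placeholders.

def total_cost_of_option(hardware_cost, integration_labor_cost,
                         months_to_value, monthly_app_value):
    """Crude model: cash outlay plus the value forgone while the system
    is being integrated rather than running in production."""
    cost_of_delay = months_to_value * monthly_app_value
    return hardware_cost + integration_labor_cost + cost_of_delay

# Roll-your-own: cheapest hardware, but more labor and a longer wait.
roll_your_own = total_cost_of_option(
    hardware_cost=1_000_000,          # assumed cheaper components
    integration_labor_cost=600_000,   # assumed in-house engineering time
    months_to_value=9,                # assumed elapsed integration time
    monthly_app_value=250_000)        # assumed business value per month

# Integrated system: higher sticker price, vendor carries the
# integration, faster time to value.
integrated = total_cost_of_option(
    hardware_cost=1_250_000,
    integration_labor_cost=100_000,
    months_to_value=3,
    monthly_app_value=250_000)

print(f"Roll-your-own effective cost:     ${roll_your_own:,.0f}")
print(f"Integrated system effective cost: ${integrated:,.0f}")
```

With these placeholder numbers the integrated option comes out ahead despite the higher sticker price; for a purely cost-oriented workload with little revenue at stake, the same arithmetic can flip toward rolling your own, which is exactly the break-point question the panel leaves open for further research.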

Published Date: Nov 10, 2017
