

Wikibon 2017 Predictions


 

>> Hello, Wikibon community, and welcome to our 2017 predictions for the technology industry. We're very excited to be able to do this today. This is one of the first times that Wikibon has undertaken something like this. I've been here since about April 2016, and it's certainly the first time that I've been part of a gathering like this, with so many members of the Wikibon community. Today I'm joined by Dave Vellante, who's our co-CEO. I'm the Chief Research Officer here, and you can see me there on the left; that's from our being on theCUBE at Big Data NYC this past September, and there's Dave on the right-hand side. Dave, you want to say hi? >> Dave: Hi everybody; welcome. >> So, there are a few things that we're going to do here. The first thing I want to note is that we've got a couple of relatively simple webinar housekeeping issues. The first thing to note is that everyone is muted. There is a Q&A option. You can hit the tab and a window will pop up where you can ask questions. So if you hear anything that requires an answer, something we haven't covered or you'd like to hear again, by all means, hit that window, ask the question, and we'll do our best to get back to you. If you're a Wikibon customer, we'll follow up with you shortly after the call to make sure you get your question answered. If, however, you want to chat with other members of the community, or with either Dave or myself, or you want to comment, then there's also a chat option. On some of the toolbars, it's listed under the More button. So if you go to the More button and you want to chat, you can probably find that there. Finally, we're also recording the webinar, and we will turn this into a Wikibon deliverable for the overall community. So, very excited to be doing this. Now, Dave, one of the things that we note on this slide is that we have theCUBE in the lower left-hand corner. Why don't you take us through a little bit about who we are and what we're doing? >> Okay, great; thanks, Peter. So I think many of you or most of you know that SiliconANGLE Media Inc. is sort of the umbrella company, and underneath SiliconANGLE, we have three brands: the Wikibon research brand, which was started in the 2007 time frame and is a community of IT practitioners; theCUBE, which is what some people call the ESPN of tech. We'll do 100 events this year, and we extensively use theCUBE as a data-gathering mechanism and a way to communicate with our community. We've got some big shows coming up, pretty much every week, but of course we've got AWS re:Invent coming up, and we'll be in London with HPE Discover. And so, we cover the world and cover technology, particularly in the enterprise. And then there's the SiliconANGLE publishing team, headed up by Rob Hof. It was founded by my co-CEO John Furrier, and Rob Hof, formerly of BusinessWeek, is now leading that team. So those are the three main brands. We've got a new website coming out this month on SiliconANGLE, so really excited about that, and just thank the community for all your feedback and participation. So Peter, back to you. >> Thank you, Dave. So what you're going to hear today is what the analyst team here at Wikibon has pulled together as some of the most interesting things that we think are going to happen over the coming years. Wikibon has been known for looking at disruptive technologies, and so while the focus, from a practical standpoint, is on 2017, we do go further out. What is the overarching theme?
Well, the overarching theme of our research and our conversations with the community is very simple. It's: put more data to work. The industry has developed incredible tools to gather data, to do analysis on data, to have applications use data and store data; I could go on with that list. But the data tends to be quite segmented and quite siloed to a particular application, a particular group, or a particular other activity. And the goal of digital business, in very simple terms, is to find ways to turn that data into an asset, so that it can be applied to other forms of work. That data could include customer data, operational data, financial data, virtually any data that we can imagine. And the number of sources that we're going to have over the next few years is going to be astronomical. Now, what we want to do is find ways for that data to be freed up, almost like energy in a physical sense, to dramatically improve the quality of the work that a firm produces, whether it's from an engagement standpoint, or a customer experience standpoint, or actual operations, and increasingly automation. So that's the underlying theme. And as we go through all of these predictions, that theme will come out, and we'll reinforce that message during the course of the session. So, how are we going to do this? The first thing we're going to do is walk through six predictions that focus on 2017. Those six predictions are going to answer crucial questions that we're getting from the community. The first one is: what's driving system architecture? Are there new use cases, new applications, new considerations that are going to influence not only how technology companies create the systems and the storage and the networking and the database and the middleware and the applications, but also how users are going to evolve the way they think about investing? The second one is: do microprocessor options matter? For 20 years now, we've pretty much focused on one limited class of microprocessor: the x86 architecture. But will these new workloads drive opportunities or options for new microprocessors? Do we have to worry about that? Thirdly, all this data has to be stored somewhere. Are we going to continue to store it only on HDDs, or are other technologies going to come into vogue? Fourthly, in the 2017 time frame, we see a lot happening in the cloud: professional developers have flocked to it, and enterprises are starting to move to it in a big way. What does it mean to code in the cloud? What kinds of challenges are we going to face? Are they technological? Are they organizational, institutional? Are they sourcing? Related to that, obviously, is that Amazon has had enormous momentum over the past few years. Do we expect that to continue? Is everybody else going to keep playing catch-up? And the last question for 2017 that we think is going to be very important is this notion of big data complexity. Big data has promised big things, and quite frankly has, except in some limited cases, been a little bit underwhelming, as, some would argue, this last election showed. Now, we're going to move, after those six predictions, to 2022, where we'll have three predictions that we're going to focus on. One is: what is the new IT mandate? Is there a new IT mandate? Is it going to be the same old, same old, or is IT going to be asked to do new things?
Secondly, when we think about the Internet of Things, and we think about augmented reality or virtual reality, or some of these other new ways of engaging people, is that going to draw out new classes of applications? And then finally, after years of investing heavily in mobile applications, in mobile websites, and any number of other things, and thinking that there was this tight linkage where mobile equaled digital engagement, we're starting to see that maybe that's breaking, and we have to ask the question: is that all there is to digital engagement, or is there something else on the horizon that we're going to have to do? For the last prediction, in 2027, we're going to take a stab here and ask: will we all work for AI? So, these are the questions that we hear frequently from our clients, from our community. These are the predictions we're going to attend to and address. If you have others, let us know. If there are other things that you want us to focus on, let us know, but here's where we're starting. Alright. So let's start with 2017. What's driving system architecture? Our prediction for 2017 regarding this is very simple: IoT edge use cases begin shaping decisions in system and application architecture. Now, on the right-hand side, if you look at that chart, you can see a very, very important result from a piece of research that David Floyer recently did. It shows three-year costs for three IoT edge options. From left to right: moving all the data into the cloud over a normal telecommunications circuit; in the middle, moving that data into a central location using cellular network technologies, which have different performance and security attributes; and then finally, keeping 95 percent of the data at the edge and processing it locally. We can see that the costs overwhelmingly favor being smarter about how we design these applications and keeping more of that data local. And in fact, we think that so long as data and communications costs remain what they are, there's going to be irrevocable pressure to alter key application architectures and ways of thinking to keep more of that processing at the edge. The first point to note here is that it means data doesn't tend to move to the center as much as many are predicting; rather, the cloud moves to the edge. The reason for that is that data movement isn't free. It also means we're going to have even more distributed, highly autonomous apps, all of which have to be managed in ways that sustain the firm's behavior in a branded, consistent way. And very importantly, because these apps are going to be distributed and autonomous, close to the data, it ultimately means that there are going to be a lot of operational technology players that impact the key decisions here, as we think about the new technologies that are going to be built by vendors and the application architectures that are going to be deployed by users. >> So, Peter, let me just add to that. I think the key takeaway there, as you mentioned, and I just don't want it to get lost, is that 95 percent of the data, we're predicting, will stay at the edge. That's a much larger figure than I've seen from other firms or other commentary, and that's substantial, that's significant; it says the data is not going to move. It's probably going to sit on flash, and the analytics will be done at the edge, as opposed to that first bar, being cloud only. That 95 percent figure has been debated.
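To make that three-year cost comparison concrete, here is a minimal back-of-the-envelope sketch in Python. Every input, the daily data volume, the per-terabyte transfer prices, and the infrastructure costs, is an invented placeholder for illustration; these are not the figures from David Floyer's model, which should be consulted directly.

```python
# Toy three-year cost comparison for the three IoT edge options discussed
# above. All inputs are invented placeholders, not Wikibon's actual figures.

def three_year_costs(tb_per_day, wan_price_per_tb, cellular_price_per_tb,
                     edge_fraction_kept, edge_infra_cost, central_infra_cost):
    """Return (cloud_only, cellular_to_center, edge_heavy) three-year costs."""
    total_tb = tb_per_day * 3 * 365

    # Option 1: ship everything to the cloud over a standard WAN circuit.
    cloud_only = total_tb * wan_price_per_tb + central_infra_cost

    # Option 2: ship everything to a central site over cellular links.
    cellular = total_tb * cellular_price_per_tb + central_infra_cost

    # Option 3: keep and process most data locally; move only the remainder.
    moved_tb = total_tb * (1.0 - edge_fraction_kept)
    edge_heavy = moved_tb * wan_price_per_tb + edge_infra_cost

    return cloud_only, cellular, edge_heavy

# Hypothetical inputs: 1 TB/day of sensor data, $50/TB WAN transfer,
# $400/TB cellular, and 95 percent of the data kept at the edge.
cloud, cell, edge = three_year_costs(1.0, 50.0, 400.0, 0.95, 20_000.0, 15_000.0)
print(f"cloud-only ${cloud:,.0f} | cellular ${cell:,.0f} | edge-heavy ${edge:,.0f}")
```

Even with generous placeholder prices, the edge-heavy option wins, because transport charges scale with every byte moved while edge infrastructure is bought once. That is the shape of the argument, independent of the exact numbers.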
It's somewhat controversial, but that's where we are today. Just wanted to point that out. >> Yeah, that's a great point, Dave. And the one thing to note here, which is very important, is that this is partly driven by the cost of telecommunications or data communications, but there are also physical realities that have to be addressed. So: physics, the round-trip times because of the speed of light, the need for greater autonomy and automation at the edge, OT and the decisions and the characteristics there. All of these will contribute strongly to this notion that the edge is increasingly going to drive application architectures and new technologies. So what's going to power those technologies? What's going to be behind those technologies? Let's start by looking at the CPUs. Do microprocessor options matter? Well, our prediction is that evolution in workloads, the edge, and big data, under which we would, for now, put AI and machine learning and cognitive, almost as application forms, creates an opening for new microprocessor technologies, which are going to start grabbing market share from x86 servers in the next few years. Two to three percent next year, in 2017. And we can see a scenario where that number grows to double digits in the next three or four years, easily. Now, these microprocessors are going to come from multiple sources, but the factors driving this are, first off, the unbelievable explosion in devices served. It's just going to require more processing power all over the place, and the processing power has to become much more cost-effective and much more tuned to serving those specific types of devices. Data volumes and data complexity are another reason. Consumer economics is clearly driving a lot of these factors, has been for years, and is going to continue to do so. But we will see new ARM-based processors, and GPUs for big data apps, which have the advantage of also being supported in many of the consumer applications out there, driving this new trend. Now, the other two factors. Moore's Law is not out of room, and we don't want to suggest that it is, but it's not the factor that it used to be. We can't presume that we're going to get double the performance out of a single class of technology every year or so, and that that's going to crowd out any and all other types of microprocessors. So there's just not as much headroom. There's going to be an opportunity now to drive at these new workloads with more specialized technology. And the final one is: the legacy software issue is never going to go away; it's a big issue, and it's going to remain a big issue. But these new workloads are going to create so much new value in digital business settings, we believe, that it will moderate the degree to which legacy software keeps a hold on the server marketplace. So, we expect a lot of ARM-based servers that are lower cost, tuned, and specialized, supporting different types of apps. A lot of significant opportunity for GPUs for big data apps, which do a great job running those kinds of graph-based data models. And a lot of room, still, for RISC in pre-packaged HCI solutions, which we call single managed entities; others call them appliances. So we see a lot of room for new microprocessors in the marketplace over the next few years.
>> I guess I'll add to that, and I'll be brief, just in the interest of time. The industry has marched to the cadence of Moore's Law for, as we know, many, many decades, and that's been the fundamental source of innovation. We see the innovation curve shifting and changing to become combinatorial: a combination of technologies. Peter mentioned GPUs, and certainly visualization's in there, AI, machine learning, deep learning, graph databases, combining to be the fundamental driver of innovation going forward. So the answer here is: yes, they matter. Workloads are obviously the key. >> Great, Dave. So let's go to the next one. We talked about CPUs; well now, let's talk about HDDs, and more broadly, storage. The prediction is that anything in a data center that physically moves gets less useful and loses share of wallet. Now, clearly that includes tape, but now it's starting to include HDDs. In our overall enterprise storage systems revenue forecast, which is going to be published very, very shortly, we've identified that we think the revenue attributable to HDD-based enterprise storage systems is going to drop over the next few years, while flash-based enterprise storage system revenue rises dramatically. Now, we're talking about storage system revenue here, Dave. We're not just talking about the HDDs themselves. The HDD market itself continues to grow, perhaps not as fast, partly because, even as the performance side of the HDD market starts to fade a bit, replaced by flash, the bulk, high-volume part of the HDD marketplace starts to substitute for tape. So, why is this happening? One of the main reasons is that storage systems revenue is very strongly influenced by software, and those software revenues are being bundled into the flash-based systems. Now, there are a couple of reasons for this. First off, as we've predicted for quite some time, we do see a flash-only data center option on the horizon. It's coming well into focus. Number two, the good news is that flash-based product prices are starting to come down and are within sight of HDD-based products at the performance level. But very importantly, and here's one of the key notions of the value of data and of finding new ways to increase the use of data: flash, our research shows, offers superior business value, too, precisely because you can make so many copies of the data and have a single set of data serve so many different applications and so many users, at scales that just aren't possible with traditional HDD-based enterprise storage systems. Now, this applies to labor, too, Dave, doesn't it? >> Yeah, so a couple of points here. Yes, labor is one of those areas that Peter's talking about that are in jeopardy. We see about $200 billion over the next 10 years shifting from what we often refer to as non-differentiated IT labor, in provisioning, network configuration, laying cable, et cetera, shifting from where it is today, in services and/or on-prem IT labor, to vendor R&D or the cloud. So that's a very important point. I just wanted to add some color to what you were talking about before, when you talked about HDD revenue continuing to grow: I think you were talking specifically about the enterprise, in this storage systems view.
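The data-sharing point about flash is easier to see in code. Below is a deliberately tiny copy-on-write sketch: a "snapshot" is just new metadata pointing at the same blocks, so many logical copies cost roughly one physical copy until writes diverge. This illustrates the concept only; it is not how any particular array implements it.

```python
# Toy copy-on-write snapshot: many "copies" of a dataset share the same
# underlying blocks until someone writes. Illustrative only; real storage
# snapshots are far more involved.

class Volume:
    def __init__(self, blocks):
        self.blocks = dict(blocks)   # block_id -> data (values are shared)

    def snapshot(self):
        # A snapshot is just a new mapping to the same block data:
        # metadata-sized, not data-sized. Ten dev/test copies cost
        # roughly one copy of physical space.
        return Volume(self.blocks)

    def write(self, block_id, data):
        # Only now does this copy get its own private block.
        self.blocks[block_id] = data

prod = Volume({0: b"customers", 1: b"orders"})
dev = prod.snapshot()            # instant, near-zero extra space
dev.write(1, b"orders-masked")   # diverges only where modified
print(prod.blocks[1], dev.blocks[1])  # b'orders' b'orders-masked'
```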
And the other thing I want to add is that Peter referenced the business value of flash. As many of you know, David Floyer and Wikibon predicted, very early on, the impact that flash would have on spinning disk, not only because of cost related to compression and de-duplication, but also because of this notion that Peter's talking about, of data sharing: the ability of development organizations to use the same data and minimize the number of copies. Now, the thing to watch here, and kind of the wildcard, is the hyperscale model. Hyperscalers, as we know, are consuming many, many exabytes and petabytes of data. They do things differently than is done in the enterprise, so that's something we're watching very closely: how the hyperscale model mimics or doesn't mimic what traditionally has occurred in the enterprise, and how that will affect adoption of both flash and spinning disk. But as Peter said, we'll be releasing this data very shortly, and you'll be able to dig into it with us. >> And very importantly, Dave, in response to one of the comments in the chat, we're not talking about duplication of data everywhere; we're talking about the ability to provide logical and effective copies from single data sources, precisely because you can drive a lot more throughput that way. So, that's the HDD story. Now, let's turn to this notion of coding the cloud. What are we going to do with code in the cloud? Well, our prediction is that the new cloud development stack, which is centered on containers and APIs, matures rapidly, but institutional habits in development constrain change. Now, why do we say that? I want to draw your attention to the graphic on the right-hand side. This is what we think the maturing cloud development stack looks like. As you can see, it features a lot of notions of containers, along with a lot of other types of technologies. We'll see APIs interspersed throughout here as a primary way of getting to some of these container-based applications, services, microservices, et cetera. But this same, exact chart could be mapped back to SOA from 10 years ago, and even to some of the distributed computing environments that were put forward 20 years ago. The challenge here is that a sizable percentage, and we're estimating about 80 percent, of in-house development is still set up to work the old way. And so long as development organizations are structured to build monolithic apps or take care of monolithic apps, they will tend to create monolithic apps, with whatever technology is available to them. So, while we see these stacks becoming more in vogue and more in use, we may not see, in 2017, shops being able to take full advantage of them, precisely because the institutional work forms are going to change more slowly. Now, big data will partly contravene these habits. Why? Because big data is going to require quite different development approaches, because of the complexity associated with building analytic pipelines, managing data, figuring out how to move things from here to there, et cetera; there's some very, very complex data movement that takes place within big data applications. And some of these new application services, like cognitive, et cetera, will require some new ways of thinking about how to do development. So, there will be a contravening force here, which we'll get to shortly. But the last one is: ultimately, we think time-to-value metrics are going to be key.
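As a small illustration of the stack being described here, below is a minimal stateless service with one narrow API, the kind of unit that gets packaged into a container and composed with other services. It uses only the Python standard library; the service name and route are hypothetical.

```python
# Minimal sketch of the kind of small, stateless service the new cloud stack
# favors: one narrow API and no local state, so it can be packaged in a
# container and scaled or replaced independently. Names are made up.

from http.server import BaseHTTPRequestHandler, HTTPServer
import json

class PriceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/v1/price":
            body = json.dumps({"sku": "demo", "price": 42.0}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    # In a containerized deployment, this process is the image's whole job.
    HTTPServer(("0.0.0.0", 8080), PriceHandler).serve_forever()
```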
As KPIs move from project cost and taking care of the money, et cetera, and move more towards speed; as Agile starts to assert itself; as organizations not only build part of the development organization around Agile, but Agile also starts bleeding into other management locations, even finance; then we'll start to see these new technologies really assert themselves and have a big impact. >> So, I would add to that this notion of the iron triangle being these embedded processes; as we all know, of people, process, and technology, people and process are the hardest to change. I'm interested, Peter, in your thoughts: you hear a lot about Waterfall versus Agile; how will that affect organizations, in terms of their ability to adopt some of these new methodologies like Agile and Scrum? >> Well, the thing we're saying is that the technology is going to happen fast. The Agile processes are being well adopted and are certainly being used in development, but I have had lots of conversations with CIOs, for example, over the last year and a half to two years, where they observed that they're having a very difficult time reconciling the impedance mismatch between Agile development and non-Agile budgeting. And so, a lot of that still has to be worked out, and it's going to be tied back to how we think about the value of data, to be sure. But ultimately, again, it comes back to this notion of people, Dave: if the organization is not set up to fully take advantage of these new classes of technologies, if it's set up to deliver and maintain more monolithic applications, then that's what's going to tend to get built, that's what the organization is going to tend to have, and that's going to undermine some of the new value propositions that these technologies put forward. Well, what about the cloud? What kind of momentum does Amazon have? Our prediction for 2017 is that Amazon is going to have yet another banner year, but customers are going to start demanding a simplicity reset. Now, theCUBE is going to be at AWS re:Invent; John Furrier and Stu Miniman are going to be up there, I believe, Dave, and we're very excited. There's a lot of buzz happening about re:Invent, so follow us up there, through theCUBE at re:Invent. What I've shown on the right-hand side is a piece of Wikibon research. What we did is an analysis of all of the AWS cases put forward on their website about how people are using AWS. There are well over 650, or at least there were when we looked at it, and we examined about two-thirds of them. Here's what we came up with: somewhere in the vicinity of 80 percent or so of those cases are tied back to firms that we might regard as professional software delivery organizations, whether they're SaaS providers or business services or games or social networks. There's a smaller piece of the pie that's dedicated to traditional enterprise-type classes of customers. That's a growing and important piece, and we're not diminishing it at all. But the broad array of this pie chart is folks who are relatively able to hire the people, apply the skills, and devote the time necessary to learn some of the more complex, 75-plus Amazon services that are now available. For the other part of the marketplace, the part that's moving into Amazon, the promise of Amazon is that it's simple, it's straightforward, and it is.
Certainly more so than other options. But we anticipate that Amazon is going to have to work even harder to simplify its offerings as it tries to attract more of that enterprise crowd. It's true that the flexibility of Amazon is certainly spawning complexity. We expect to see new tools; in fact, there are new tools on the market, from companies like Appfield, for example, for handling and managing AWS billing and services, and our CIOs are telling us they're actually very helpful and very powerful in helping to manage those relationships. But the big issue here is that other folks, like VMware, have done research suggesting that the average shop has two to three big cloud relationships. That makes a lot of sense to us. As we start adding hybrid cloud into this, and the complexities of inter-cloud communication and inter-cloud orchestration start to become very real, that's going to add even more complexity overall. >> So I'd add to that, just in terms of Amazon momentum: obviously, those of you who follow what I write know I've been covering this for quite some time, but to me, the marginal economics of Amazon's model continue to be increasingly attractive. You can see it in the operating profits. Amazon's GAAP operating profits are in the mid-20s: 25, 26 percent. Just to give you a sense, EMC, which is an incredibly profitable company, has GAAP operating profits in the teens. Amazon's non-GAAP operating profits are up around 30 percent, so it's an incredibly profitable business, and the more it grows, the more profitable it gets. Having said that, I think we agree with what Peter's saying in terms of complexity; think about API creep in Amazon, and the different proprietary APIs for each of the data services, whether it's Kinesis or EC2 or S3 or DynamoDB or EMR, et cetera. The data complexity and the complexity of the data pipeline are growing, and I think that opens the door for the on-prem folks to at least mimic the public cloud experience to a great degree; as great a degree as possible. And you're seeing companies do that in their marketing, certainly, and starting to do it in the solutions that they're delivering. So by no means are we saying Amazon takes over the world, despite the momentum. There's a window open for those that can mimic, to a large extent, the public cloud capabilities. >> Yeah, very important point there. And as we said earlier, we do expect to see the cloud move closer to the edge, and that includes on-prem, in a managed way, as opposed to presuming that everything ends up in the cloud. Physics has something to say about that, as do some of the costs of data movement. Alright, so we've got one more 2017 prediction, and you can probably guess what it is. We've spent a lot of years on, and have a pretty significant presence in, big data, and we've been pretty aggressive about publishing what we think is going to happen, and what is happening, in big data over the last year or so. One of the reasons why we think Amazon's momentum is going to increase is precisely because we think it's going to become a bigger target for big data. Why? Because big data complexity is a serious concern in many organizations today.
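Dave's "API creep" point can be seen in a few lines of code. The sketch below writes one record to three AWS data services using boto3. The calls are real boto3 APIs, but the bucket, stream, and table names are made up, and AWS credentials would need to be configured for it to run. Note that each service has its own vocabulary and parameter shapes for essentially the same act.

```python
# A small illustration of "API creep": even a trivial pipeline touches
# several AWS services, each with its own client, verbs, and parameter
# shapes. Resource names below are hypothetical.

import json
import boto3

s3 = boto3.client("s3")
kinesis = boto3.client("kinesis")
dynamodb = boto3.client("dynamodb")

event = {"device": "sensor-17", "temp_c": 21.4}

# Three services, three different vocabularies for "store this record":
s3.put_object(Bucket="example-raw-events", Key="events/1.json",
              Body=json.dumps(event).encode())

kinesis.put_record(StreamName="example-events", Data=json.dumps(event),
                   PartitionKey=event["device"])

dynamodb.put_item(TableName="example-events",
                  Item={"device": {"S": event["device"]},
                        "temp_c": {"N": str(event["temp_c"])}})
```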
Now, it's a serious concern because the bespoke nature of the tools that are out there, many of which are individually extremely good, means that shops are spending an enormous amount of time just managing the underlying technology, and not as much time as they need learning how to solve big data problems, doing a great job of piloting applications, and demonstrating to the business that the financial returns are there. As a result of these bespoke big data tool aggregates, we get multi-sourcing, where things need to be cobbled together from a lot of different technology sources; a lot of uncoordinated software and hardware updates that dramatically drive up the cost of on-prem administration; a lot of conflicting commitments, both from the business as well as from the suppliers; and very, very complex contracts. And as a result of that, we think that's been one of the primary reasons why there have been so many pilot failures and why big data has not taken off the way that it probably should have. We think, however, that in 2017, and here's our prediction, we're going to see failure rates for big data pilots drop by 50 percent, as big vendors, IBM, Microsoft, AWS, and Google amongst the chief ones, and we'll see if Oracle gets onto that list, bring pre-packaged, problem-based analytic pipelines to market. And that's what we mean by this concept of big data single managed entities. The idea is that a company can pull together all the various elements necessary to provide the underlying infrastructure, so that a shop can spend more time making sure it understands the use case, understands how to go get the data necessary to serve that use case, and understands how to pilot and deploy the application, because the underlying hardware and system software comes pre-packaged. Now, we think that the SMEs that are going to be most successful will be the ones that are not predicated only on proprietary software, but utilize a lot of open-source software. The ones that we see being most successful today are in fact combining the pre-packaging of technology with access to the enormous value that the open-source market continues to build as it constructs new tools and delivers them out to big data applications. Ultimately, and you've heard this before from us: time-to-value becomes the focus. Similar to development, and we think that's one of the convergences that we have here. We think that big data apps, or app patterns, will start to solidify. George Gilbert has done some leading-edge research on what some of those application patterns are going to be, how those application patterns are going to drive analytic pipeline decisions, and, very importantly, the process of building out the business capabilities necessary to deliver repeatable big data services to the business. Now, very importantly, some of these app patterns are going to look like machine learning or cognitive AI; in many respects, all of these are part of this use-case-to-app trend that we see. So, we think that big data is kind of an umbrella for all of those different technology classes. There's going to be a lot of marketing done that tries to differentiate machine learning, cognitive, and AI.
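The "pre-packaged, problem-based analytic pipeline" idea can be shown in miniature with scikit-learn's Pipeline, here standing in for the much larger hardware-plus-software bundles the prediction describes: the stages are wired together once and managed as one unit, so attention shifts to the use case and the data. Toy data; illustrative only.

```python
# The "pre-packaged analytic pipeline" pattern in miniature: fixed stages
# managed as a single entity, rather than hand-wired point tools.

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

pipeline = Pipeline([
    ("scale", StandardScaler()),       # data preparation stage
    ("model", LogisticRegression()),   # analytics stage
])

pipeline.fit(X, y)                     # one managed entity, end to end
print("training accuracy:", pipeline.score(X, y))
```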
Technically, there are some differences, but from our perspective, they're all part of the effort of trying to ensure that we can pull together the technology in a simpler way, so that it can be applied to complex business problems more easily. One more point I'll note, Dave, and you cover that world a lot, so I'd love to get your comments on this: one of the more successful single managed entities out there is, in fact, Watson from IBM, and it's actually a set of services, not just a device that you buy. >> Yeah, so a couple of comments there. One is that you can see the complexity in the market data, and we've been covering big data markets for a long time now. There were two things that stood out when we started covering this. One is that software, as a percentage of the total revenue, is much lower than you would expect in most markets. And that's because of the open-source contribution and the multi-year collapse that we've seen in infrastructure software pricing, largely due to open source and cloud. The other piece of that is professional services, which have dominated spending within big data because of the complexity. I think you're right when you look at what happened at World of Watson: what IBM and others in your prediction there are trying to do is put together a full, end-to-end data pipeline to do ingest and data wrangling and collaboration between data scientists, data engineers, application developers, and data quality people, and then bring in the analytics piece. And essentially, what many companies have done, IBM included, is cobble together sets of tools and layer on a way to interact with those tools. So the integration has still been slow in coming, but that's where the market is headed, so that we can actually build commercial, off-the-shelf applications. There's been a lack of those applications. I remember, probably four years ago, Mike Olson at a (unintelligible) predicted: this will be the year of the big data app. And it still has not happened; until it does, that complexity is going to reign. >> Yeah, and again, as we said earlier, we anticipate that the need for developers to become more a part of the big data ecosystem, and the need for developers to get more value out of some of the other new cloud stacks, are going to come together and reinforce each other over the course of the next 24 to 36 months. So those were our 2017 predictions. Now let's take a look at our 2022 predictions, and we've got three. The first one is that we do think a new IT mandate is on the horizon, consistent with all these trends we've talked about: new ways of thinking about infrastructure and application architecture, based on the realities of the edge; new ways of thinking about how application developers need to participate in the value-creating activities of big data; and new ways of organizing to take greater advantage of the new processes and new technologies for development. We think, very strongly, that IT organizations will organize work to generate greater value from data assets by engineering proximity of applications and data. What do we mean by that?
Well, proximity can mean physical proximity, but we also mean it in terms of governance, tool similarity, and infrastructure commonality. We think that over the next four to five years, we'll see a lot of effort to increase proximity not only of data assets from a raw-data standpoint, but also of understanding, from an infrastructure, governance, skill set, et cetera, standpoint. So that we can actually do a better job of, again, generating more work out of our data, by finding new and interesting ways of weaving together systems of record, big analytics, IoT, and a lot of other new application forms we see on the horizon, including one that I'll talk about in a second. Data value becomes a hot topic. We're going to have to do a better job, as a community, of talking about how data is valuable: how it creates (unintelligible) in the business, and how it has to be applied, or thought of, as a source of value in building out those systems. We talked earlier about the notion of people, process, and technology; well, we have to add to that: data. Data needs to be an asset that gets consumed as we think about how business changes. So data value is going to become a hot topic, and it's something we're focused on, as to what it means. We think, as Dave mentioned earlier, it's going to catalyze true private cloud solutions for legacy applications. Now, I know, Dave, you're going to want to talk about, in a second, what this might mean; for example, things like the recent Amazon and VMware announcement. But it also means that strategic sourcing becomes a reality. The idea of just going after the cheapest solution, or the cost-optimized solution, which, don't get us wrong, is not going to go away, but it means that increasingly we're going to focus on new sourcing arrangements that facilitate creating greater proximity for those crucial aspects that make our shop run. >> Okay, so a couple of thoughts there, Peter. You know, there was a lot of talk a couple of years ago, and it's slowly beginning to happen, of bringing transaction and analytic systems together. What that oftentimes means is somebody takes their mainframe for the transactions and sticks an InfiniBand pipe into an Exadata. I don't think that's what everybody envisioned when they started to discuss that theme. So that's happening slowly, but it's something that we're watching. This notion of data value, and shifting from really a process economy to a data, or insight, economy, is something that's also occurring. You're seeing the emergence of the chief data officer, and our research shows that there are five things a chief data officer must do to really get started. The first is to understand data value, and how data contributes to the monetization of their company. So not monetizing the data per se, and I think that's a mistake a lot of people made early on, trying to figure out how to sell their data; it's really to understand how data contributes to value for your organization. The second piece is how to access that data: who gets access to it, and what data sources you have. And the third is the quality and trust of that data. Those are sequential things that our research shows a chief data officer has to do. And then the other, sort of parallel, items are the relationship with the line of business and re-skilling.
And those are complicated issues for most organizations to undertake, and something that's going to take many, many years to play out. The vast majority of customers that we talk to say they're data-driven, but aren't necessarily data-driven. >> Right. So, the one other thing I wanted to mention, Dave, is that we did some research, for example, on the VMware and Amazon relationship, and the reason why we were positive on it is quite simple: it provides a path for VMware's customers, with their legacy applications running under VMware, to move those applications, and the data associated with them, if they choose to, closer to some of the new big data applications that are showing up in Amazon. So there's an example of this notion of making applications and data more proximate, based on physics, based on governance, based on overall tooling and skilling, and we anticipate that that's going to become a new design center for a lot of shops over the course of the next few years. Now, coming to this notion of a new design center, the next thing we want to note is that IoT, the Internet of Things, plus augmented reality, is going to have an impact on the marketplace. We got very excited about IoT simply by thinking about the things, but our perspective is, increasingly, that we have to recognize that people are always going to be a major feature, and perhaps the greatest value-creating feature, of these systems. And augmented reality is going to emerge as a crucial actuator for the Internet of Things and People. That's what we mean: augmented reality becomes an actuator for people, as will chatbots and other types of technologies. Now, an actuator, in an IoT sense, is the device or set of capabilities that takes the results of models and actually turns them into real-world behavior. So, if we think about this virtuous cycle that we have on the right-hand side, these are the three capabilities that we think firms are going to have to build out. They're going to have to build out an Internet of Things and People that is capable of capturing data and turning analogue data into digital data, so that it can be moved into these big data applications, again with machine learning and AI and cognitive sort of being part of that, or underneath that umbrella, so that we can then build more models, more insights, more software that then translates into what we're calling systems of enaction. Or systems of "enaction," not "inaction." Systems of enaction. Businesses still serve customers, and these systems of enaction are going to generate real-world outcomes from those models and insights, and those real-world outcomes will certainly be associated with things, but they will also be associated with human beings and people. And as a consequence of this, which we think is so powerful and is going to be so important over the course of the next five years, we anticipate that we will see a new set of disciplines focused on social discovery. Historically, in this industry, we've been very focused on turning insights or discoveries about physics into hardware. Well, over the next few years, and Dave mentioned moving from the process economy to some new economy, we're going to see an enormous investment in understanding the social dynamics of how people work together and turning that into software.
Not just how accountants do things, but how customers and enterprises come together to make markets happen, and, through that social discovery, create these systems of enaction so that businesses can successfully attend to and deliver on the promises and the predictions that they're making through the other parts of their big data applications. >> So, Peter, you've pointed out many times the big change relative to processes: historically, in the IT business, we've known what the processes are; the technology was sort of unknown and mysterious. That's flipped. Now, really, the process is the unknown piece. That's the mysterious part. The technology is pretty well understood. I think, as it relates to what you're talking about here with IoT and AR, what the practitioners who are struggling with this tell us is, first of all, that there's so much analogue data that people are trying to digitize. The other piece is that there's a limited budget that folks have, and they're trying to figure out: alright, do I spend it on getting more data, which will improve my data and increase my observation space? Or do I spend it on better models, improving my models and iterating? That's a trade-off people have to make, and of course the answer is "both," but how those funds are allocated is something that organizations are really trying to better understand. There's a lot of trial and error going on, because obviously, more data, in theory anyway, means you can make better decisions. But it's that iteration of the model, that trial and error and constant improvement, and both of those take significant resources. And budgets are still tight. >> Very true, Dave. And in fact, George Gilbert's research with the community is starting to demonstrate that more of the value is going to come from the models, as opposed to the raw data. We need the raw data to get to the models, but more of the value is going to come from the models. So that's where we think more people are going to focus their time and attention, because the value will be in the insights and the models. But to go back to your point: where do you put your money? Well, you've got to run these pilots, you've got to keep up with your competitors, you've got to serve customers better, so you're going to have to build all these models, sometimes in a bespoke way. But George is publishing an enormous amount of research right now, very valuable to a lot of our community members, that shows how those analytic pipelines, and the capabilities associated with them, are starting to become better understood, so that we can actually start gaining experience and generating efficiencies or scale out of those analytic pipelines. And that's going to be a major feature underlying this basic trend. Now, this notion of people is really crucial, because as we think about the move to the Internet of Things and People, we have to ask ourselves: has digital engagement really, fully considered what it means to engage people throughout their customer journey? And the answer is: no, it hasn't. We believe that by 2022, IT will be given greater responsibility for management of demand chains, working to unify customer journey designs and operations across all engagement functions. And by engagement functions we mean marketing, sales, product, service, and fulfillment. That doesn't mean that they all report to IT; we don't mean that at all.
But it means that IT is going to have to, again, find ways to apply data from all these different sources so that it can simplify and unify and bring together consistent design and operations, so that all these functions can be successful, and support reorganization if necessary, because the underlying systems provide that degree of unity and focus on customer success. Now, this is in strong opposition to the prediction made a few years ago that marketing was going to emerge as the be-all and end-all and spend more than IT. That was silly; it hasn't happened, and you'd have to redefine marketing very aggressively to see it actually happening. But when we think about this notion of putting more data to work, the first thing we note, and this is what all the digital natives have shown us, is that data can transform a product into a service. That is the basis for a lot of the new business models we're talking about, a lot of these digital-native business models and the successes they've had, and we think it's going to be a primary feature of the IT mandate to help the business understand how more data can be put to work, transforming products into services. It also means, at a tactical level, that mobile applications have been way too focused on solving the seller's problems. We want to caution folks: don't presume that because your mobile application has gotten lost in some online store somewhere, that means digital engagement is a failure. No, it means that you have to focus digital engagement on providing value throughout the customer journey, and not just from the problem to the solution, where the transaction for money takes place. Too many mobile applications have been focused, in a limited way, on the marketer's problem within the business of trying to generate awareness and demand. Mobile has to be applied in a coherent and comprehensive way, across the entire journey. And ultimately, I hate to say this, but we think collaboration is going to make a comeback. But collaboration to serve customers: the business can collaborate better inside, but in support of serving the customers. A major, major feature of what we think is going to happen over the course of the next couple of years. >> I think the key point there is that there are many mobile apps that we love and utilize, but there are a lot that are not so great. And the point that we've made to the community, quite often, is that it used to be that the brands had all the power; they had all the information; there was an asymmetry of information, and the customer, the consumer, didn't really know much about pricing. The web, obviously, has leveled that playing field, and what many brands are trying to do is recreate that asymmetry, and maybe they got over their skis a little bit, before providing value to the customers. And I think your point is that, to the extent that you can provide value to that customer, that information advantage will come back to you. But you can't start with that information advantage. >> Great point, Dave. But it also means that IT needs to look at the entire journey, and see transactions and the discover, evaluate, buy, apply, use, and fix phases throughout this entire journey, and find ways of designing systems that provide value to customers at all times and in all places.
So the demand chain notion, which historically has been focused on optimizing the value that the buyer gets in the buying process, in a cost-effective way, has to be applied to the entire engagement lifecycle. Alright, so that's 2022. Let's take a crack at our big prediction for 2027. And it's on a lot of people's minds: will we all work for AI? There have been a lot of studies done over the course of the past year, year and a half, that have suggested that 47 percent of jobs are going to go away, for example. And that's not even the high end; folks have actually suggested much more, over the next 10 to 15 years. Now, if you take a look at the right-hand side, you see a robot thinker. You may not know this, but when Rodin first sculpted The Thinker, what he was envisioning was actually someone looking down into the circles of Hell as described by Dante. And I think a lot of people would agree that the notion of no work is a Hell for a lot of people. We don't think that's going to happen in the way that most folks do. We believe that AI technology advances will far outpace the social advances. Some tasks will be totally replaced, but most jobs will only be partially replaced. We have to draw a clear distinction between the idea that a job is only this or that task, as opposed to a job, or an individual employee, being part of a complex community that ensures a business is capable of serving customers. It doesn't mean we're not going to see more automation, but automation is going to focus mostly on replacing tasks. And to the degree that the task set of a particular job is replaced, then those jobs will be replaced. But ultimately, there's going to be a lot of social friction that gates how fast this happens. One of the key reasons for the social friction is something in behavioral economics known as loss aversion: people are more afraid of losing something than they are motivated by gaining something. And whether it's a union, or regulations, or any number of other factors, that's going to gate the rate at which this notion that AI crushes employment occurs. AI will tend to complement, not substitute for, labor. And that's been a feature of technology for years. Again, it doesn't mean that some tasks and some task sets, sort of those in line with jobs, aren't replaced; there will be people put out of work as a consequence of this. But for the most part, we will see AI tend to complement, not fully substitute for, most jobs. Now, this also creates a new design consideration. Historically, as technologists, we've looked at what can be done with technology, and we've asked: can we do it? And if the answer is "yes," we tend to go off and do it. Now, we need to start asking ourselves: should we do it? And this is not just a moral imperative; it has other implications as well. Consider, for example, the remarkably bad impact that a lot of automated call centers have had on customer service from a customer experience standpoint. This has to become one of the features of how we think about bringing together, in these systems of enaction, all the different functions that are responsible for serving a customer: asking ourselves, we can do it from a technical standpoint, but should we do it from a customer experience, from a community relations, and even from a cultural imperative standpoint, as we move forward?
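A toy model helps show why replacing tasks is not the same as replacing jobs. The automatable shares below are invented purely for illustration; the point is the arithmetic, not the estimates.

```python
# Toy model of "tasks, not jobs": each job is a bundle of tasks, automation
# removes some tasks, and a job only disappears if nearly all of its tasks
# go. The shares below are invented for illustration.

jobs = {
    # job: assumed fraction of its task-hours that are automatable
    "call-center agent": 0.70,
    "radiologist":       0.40,
    "field technician":  0.25,
    "data analyst":      0.50,
}

fully_displaced = [j for j, share in jobs.items() if share > 0.9]
avg_task_hours_automated = sum(jobs.values()) / len(jobs)

print(f"average share of task-hours automated: {avg_task_hours_automated:.0%}")
print(f"jobs fully displaced: {len(fully_displaced)} of {len(jobs)}")
# Roughly 46% of task-hours are automated, yet none of these jobs vanish
# outright; the tasks are replaced while the jobs are reshaped.
```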
>> Okay, I'll be brief, because we're wrapping up here, but first of all, machines have always replaced humans. Largely that's been with physical tasks; now we're seeing it occur with cognitive tasks. People are concerned, as Peter said. The middle class is obviously under fire: the median income in the United States has dropped from $55,000 in 1999 to just above $50,000 today. So something's going on, and clearly you can look around, whether it's at an airport with kiosks or at billboards, and see electronic machines and cognitive functions replacing human functions. Having said that, we're sanguine, because the story I'll tell is that the greatest chess player in the world is not a machine. When Deep Blue beat Garry Kasparov, what Kasparov did was start a competition in which human chess players collaborate with machines to beat the machine, and they succeeded at that. So again, I come back to this combination of technologies: combinatorial technologies are really what's going to drive the innovation curve over the next, we think, 20 to 50 years. So, it's something that is far out there, in terms of our predictions, but it's also something that is relevant to society, and obviously to the technology industry. So thank you, everybody. >> So, we have one more slide, the conclusions slide, so let me hit these really quickly. Before I do, let me note that our big data analyst is George Gilbert: G-I-L-B-E-R-T. Alright, so, very quickly. The tech architecture question: we think edge IoT is going to have a major effect on how we think about the architecture of the future. Microprocessor options? Yup, new microprocessor options are going to have an impact in the marketplace. Whither HDDs? For the performance side of storage, flash is coming on strong. Code in the cloud? Yes, the technologies are great, but development has to change its habits. Amazon momentum? Absolutely going to continue. Big data complexity? It's bad, and we have to find ways to make it simpler so that we can focus more on the outcomes and the results, as opposed to the infrastructure and the tooling. For 2022, a new IT mandate? Drive the value of that data; get more value out of your data. The Internet of Things and People is going to become the proper way of thinking about how these new systems of enaction work, and we anticipate that demand chain management is going to be crucial to extending the idea of digital engagement. Will we all work for AI? As Dave just mentioned, and as we said, there's going to be dislocation, and there are going to be tasks that are replaced, but not by 2027. Alright, so thank you very much for your time today. Here is how you can contact Dave and myself. We will be publishing the slides and this broadcast. Wikibon is going to deliver three coordinated predictions talks over the course of the next two days, so look for that. Go up to SiliconANGLE; we're up there a fair amount. Follow us on Twitter, and we want to thank you very much for staying with us during the course of this session. Have a great day.

Published: November 17, 2016
