Wikibon Action Item, Quick Take | Neil Raden, 5/4/2018
>> Hi, I'm Peter Burris. Welcome to a Wikibon Action Item Quick Take. Neil Raden, Teradata announced earnings this week. What does it tell us about Teradata and the overall market for analytics? >> Well, Teradata announced their first quarter earnings and they beat estimates for both earnings and revenues, but they announced lower guidance for the fiscal year, which, you know, failed to impress Wall Street. But recurring Q1 revenue was up 11% year over year to $302 million. Perpetual revenue was down 23% from Q1 2017. Consulting was up to $135 million for the quarter. Not altogether shabby for a company in transition. But I think what it shows is that Teradata is executing its transition program, and there are some pluses and minuses, but they're making progress. The jury's out, but I think overall I'd consider it a good quarter. >> What does it tell us about the market? Is there anything we can glean from Teradata's results about the market overall, Neil? >> It's hard to say. You know, at the ATW conference last week I listened to the keynote from Mike Ferguson. I've known Mike for years, and I always think that Mike's the real deal, because he spends all of his time doing consulting, and when he speaks he's there to tell us what's happening. He gave a great presentation about the data warehouse versus the data lake, and if he's correct, there is still a market for a company like Teradata. So, you know, we'll just have to see. >> Excellent. Neil Raden, thanks very much. This has been a Wikibon critique, or actually, it's been a Wikibon Action Item Quick Take. Talk to you again.
ENTITIES
Entity | Category | Confidence |
---|---|---|
Neil Raden | PERSON | 0.99+ |
Mike Ferguson | PERSON | 0.99+ |
Mike | PERSON | 0.99+ |
5/4/2018 | DATE | 0.99+ |
Teradata | ORGANIZATION | 0.99+ |
last week | DATE | 0.99+ |
Peter Burris | PERSON | 0.99+ |
23% | QUANTITY | 0.98+ |
this week | DATE | 0.97+ |
Neil | PERSON | 0.96+ |
both | QUANTITY | 0.96+ |
up to 135 million | QUANTITY | 0.94+ |
first quarter | DATE | 0.88+ |
Wall Street | ORGANIZATION | 0.87+ |
three hundred and two million dollars | QUANTITY | 0.85+ |
years | QUANTITY | 0.8+ |
11% | QUANTITY | 0.8+ |
ATW conference | EVENT | 0.77+ |
one | QUANTITY | 0.76+ |
seventeen | QUANTITY | 0.75+ |
Action Item Quick Take | Neil Raden - Mar 2018
(upbeat music) >> Hi, I'm Peter Burris with another Wikibon Action Item Quick Take. Neil Raden. What's going on with Tableau? >> Well, you know, Tableau software has been a huge success story over the years. Ten years or more. But in the last couple of years they've really exploded. What they did is they allowed end users to take data, analytical data, build some models and generate all sorts of beautiful visualizations from it. The problem was, the people who use Tableau had no tools to work with to prep the data, and that was causing the problem. They work with partners and so forth. But that's all changing. Last year they announced Project Maestro, which is their own data prep product. It's built on an in-memory, column-oriented database called Hyper that they bought, and my information, coming from developers who are using the beta, is that Maestro is going to be a huge success for them. >> Excellent. >> And one other thing, I think it points out that a pure play visualization vendor can't survive. They have to expand horizontally. And it remains to be seen what Tableau will do after this. This is clearly not its last act. >> Great. Neil Raden talking about Tableau and Project Maestro and expectations for it. This is Peter Burris. Thanks again for watching another Wikibon Action Item Quick Take. (upbeat music)
ENTITIES
Entity | Category | Confidence |
---|---|---|
Peter Burris | PERSON | 0.99+ |
Neil Raden | PERSON | 0.99+ |
Last year | DATE | 0.99+ |
Ten years | QUANTITY | 0.99+ |
Mar 2018 | DATE | 0.99+ |
Maestro | ORGANIZATION | 0.98+ |
one | QUANTITY | 0.94+ |
last couple of years | DATE | 0.88+ |
Tableau | TITLE | 0.81+ |
Wikibon | ORGANIZATION | 0.77+ |
Hyper | TITLE | 0.75+ |
Project Maestro | TITLE | 0.69+ |
Tableau | ORGANIZATION | 0.62+ |
Action Item Quick Take | Neil Raden - Feb 2018
(upbeat electronic music) >> Hi, I'm Peter Burris with another Wikibon Action Item Quick Take. Neil Raden, you've been out visiting clients this week. What's the buzz about data and big data and related stuff? >> Well, the first thing about big data is the product development cadence is so fast now that organizations can't absorb it. Every week something new comes out, and their decision process is longer than that. Not one person decides to bring in Plume. It's a committee decision. So that's part of the problem. The other part of the problem is they still run on their legacy systems and are having a hard time figuring out how to make the two work together. The third thing, though, is I want to disagree with something Dave Vellante said about the insurance industry. Insurance tech is exploding. That industry is in the midst of a huge digital transformation, and perhaps Dave and I could work together on that and do some research and show some of the very, very interesting things that are happening there. But oh, GDPR. I'm sorry, GDPR is like a runaway train. It reminds me of Y2K without the lead time. Everybody is freaked out about it because it infests every system they have, and they don't even know where to start. So we'll need to keep an eye on that. >> Alright, this is Peter Burris, Neil Raden, another Wikibon Action Item Quick Take. (upbeat electronic music)
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave | PERSON | 0.99+ |
Neil Raden | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Peter Burris | PERSON | 0.99+ |
Feb 2018 | DATE | 0.99+ |
two | QUANTITY | 0.99+ |
third thing | QUANTITY | 0.99+ |
GDPR | TITLE | 0.99+ |
this week | DATE | 0.97+ |
one person | QUANTITY | 0.96+ |
first thing | QUANTITY | 0.93+ |
Wikibon | ORGANIZATION | 0.92+ |
Y2K | ORGANIZATION | 0.91+ |
Plume | ORGANIZATION | 0.33+ |
Action Item, Graph Databases | April 13, 2018
>> Hi, I'm Peter Burris. Welcome to Wikibon's Action Item. (electronic music) Once again, we're broadcasting from our beautiful theCUBE Studios in Palo Alto, California. Here in the studio with me, George Gilbert, and remote, we have Neil Raden, Jim Kobielus, and David Floyer. Welcome, guys! >> Hey. >> Hi, there. >> We've got a really interesting topic today. We're going to be talking about graph databases, which probably just immediately turned off everybody. But we're actually not going to talk so much about it from a technology standpoint. We're really going to spend most of our time talking about it from the standpoint of the business problems that IT and technology are being asked to address, and the degree to which graph databases, in fact, can help us address those problems, and what do we need to do to actually address them. Human beings tend to think in terms of relationships of things to each other. So what the graph community talks about is graphed-shaped problems. And by graph-shaped problem we might mean that someone owns something and someone owns something else, or someone shares an asset, or it could be any number of different things. But we tend to think in terms of things and the relationship that those things have to other things. Now, the relational model has been an extremely successful way of representing data for a lot of different applications over the course of the last 30 years, and it's not likely to go away. But the question is, do these graph-shaped problems actually lend themselves to a new technology that can work with relational technology to accelerate the rate at which we can address new problems, accelerate the performance of those new problems, and ensure the flexibility and plasticity that we need within the application set, so that we can consistently use this as a basis for going out and extending the quality of our applications as we take on even more complex problems in the future. So let's start here. Jim Kobielus, when we think about graph databases, give us a little hint on the technology and where we are today. >> Yeah, well, graph databases have been around for quite a while in various forms, addressing various core-use cases such as social network analysis, recommendation engines, fraud detection, semantic search, and so on. The graph database technology is essentially very closely related to relational, but it's specialized to, when you think about it, Peter, the very heart of a graph-shaped business problem, the entity relationship polygram. And anybody who's studied databases has mastered, at least at a high level, entity relationship diagrams. The more complex these relationships grow among a growing range of entities, the more complex sort of the network structure becomes, in terms of linking them together at a logical level. So graph database technology was developed a while back to be able to support very complex graphs of entities, and relationships, in order to do, a lot of it's analytic. A lot of it's very focused on fast query, they call query traversal, among very large graphs, to find quick answers to questions that might involve who owns which products that they bought at which stores in which cities and are serviced by which support contractors and have which connections or interrelationships with other products they may have bought from us and our partners, so forth and so on. When you have very complex questions of this sort, they lend themselves to graph modeling. 
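To make "graph-shaped query" concrete, here is a minimal sketch in plain Python. The nodes, edge types, and data below are invented for illustration, and a real graph database would express the traversal declaratively (for example in a query language such as Cypher or Gremlin), but the hop-by-hop shape of the question is the same:

```python
# Toy property graph stored as adjacency lists keyed by (node, edge_type).
# All names and values here are hypothetical illustration data.
EDGES = {
    ("alice", "OWNS"): ["product_1", "product_2"],
    ("product_1", "BOUGHT_AT"): ["store_12"],
    ("product_2", "BOUGHT_AT"): ["store_40"],
    ("store_12", "LOCATED_IN"): ["chicago"],
    ("store_40", "LOCATED_IN"): ["denver"],
    ("store_12", "SERVICED_BY"): ["contractor_7"],
}

def traverse(start, edge_types):
    """Follow a fixed sequence of edge types out from a start node.

    Each hop is a cheap dictionary lookup here; in SQL the same
    question typically becomes one self-join per hop.
    """
    frontier = [start]
    for edge_type in edge_types:
        frontier = [
            neighbor
            for node in frontier
            for neighbor in EDGES.get((node, edge_type), [])
        ]
    return frontier

# "In which cities are the stores where Alice's products were bought?"
print(traverse("alice", ["OWNS", "BOUGHT_AT", "LOCATED_IN"]))
# -> ['chicago', 'denver']
```

The point is not the code but the shape: the query is a walk across relationships, which is exactly the kind of traversal graph engines optimize.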
And to some degree, to the extent that you need to perform very complex queries of this sort very rapidly, graph databases, and there's a wide range of those on the market, have been optimized for that. But we also have graph abstraction layers over RDBMSes and multi-model databases. You'll find them running in IBM's databases, or Microsoft Cosmos DB, and so forth. You don't need graph-specialized databases in order to do graph queries, in order to manipulate graphs. That's the issue here. When does a specialized graph database serve your needs better than a non-graph-optimized but nonetheless graph-enabling database? That's the core question. >> So, Neil Raden, let's talk a little bit about the classes of business problems that could in fact be served by representing data utilizing a graph model. So these graph-shaped problems, independent of the underlying technology. Let's start there. What kinds of problems can business people start thinking about solving by thinking in terms of graphs of things and relationships amongst things? >> It all comes down to connectedness. That's the basis of a graph database, is how things are connected, either weakly or strongly. And these connected relationships can be very complicated. They can be based on very complex properties. A relational database is not based on, not only is it not based on connectedness, it's not based on connectedness at all. I'd like to say it's based on un-connectedness. And the whole idea in a relational database is that the intelligence about connectedness is buried in the predicate of a query. It's not in the database itself. So I don't know how overlaying graph abstractions on top of a relational database are a good idea. On the other hand, I don't know how stitching a relational database into your existing operation is going to work, either. We're going to have to see. But I can tell you that a major part of data science, machine learning, and AI is going to need to address the issue of causality, not just what's related to each other. And there's a lot of science behind using graphs to get at the causality problem. >> And we've seen, well, let's come back to that. I want to come back to that. But George Gilbert, we've kind of experienced a similar type of thing back in the '90s with the whole concept of object-orientated databases. They were represented as a way of re-conceiving data. The problem was that they had to go from the concept all the way down to the physical thing, and they didn't seem to work. What happened? >> Well it turns out, the big argument was, with object-oriented databases, we can model anything that's so much richer, especially since we're programming with objects. And it turns out, though, that theoretically, especially at that time, you could model anything down at the physical level or even the logical level in a relational database, and so those code bases were able to handle sort of similar, both ends of the use cases, both ends of the spectrum. But now that we have such extreme demands on our data management, rather than look at a whole application or multiple applications even sharing a single relational database, like some of the big enterprise apps, we have workloads within apps like recommendation engines, or a knowledge graph, which explains the relationship between people, places, and things. Or digital twins, or mapping your IT infrastructure and applications, and how they all hold together. 
You could do that in a relational database, but in a graph database you can organize it so that you can have really fast analysis of these structures. But the trade-off is, you're going to be much more restricted in how you can update the stuff. >> Alright, so think about what happened, then, with some of the object-oriented technology. In the original object database world, the database was bound to the application, and the developer used the database to tell the application where to go find the data. >> George: Right. >> Relational technology allowed us not to tell the applications where to find things, but rather how to find things, and that was persisted, and it was very successful for a long time. Object-oriented technologies, in many respects, went back to the idea that the developer had to be very concrete about telling the application where the data was, but we didn't want to do things that way. Now, something's happened, David Floyer. One of the reasons why we had this challenge of representing data in a more abstract way across a lot of different forms, without also having it represented physically, and therefore a lot of different copies and a lot of different representations of the data, which broke systems of record and everything else, was that the underlying technology was focused on just persisting data and not necessarily delivering it into these new types of databases, data models, et cetera. But Flash changes that, doesn't it? Can't we imagine a world in which we have our data in Flash, which is a technology that's more focused on delivering data, and then have that data be delivered to a lot of different representations, including things like graph databases and graph models? Is that accurate? >> Absolutely. In a moment I'll take it even further. I think the first point is that when we were designing real-time, transactional applications, we were very constrained indeed by the amount of data that we could get to. So, as a database administrator, I used to have a rule that database developers could not issue more than 100 database calls. They could always do more than that, but the applications became very unstable and very difficult to maintain, and the cost of maintenance went up a lot. The whole area of Flash allows us to do a number of things, and the area of UniGrid enables us to do a number of things very differently, so that we can, for example, share data and have many different views of it. We can use UniGrid to bring far greater amounts of compute power, GPUs, et cetera, to bear on specific workloads. I think the most useful way to think about this is that this type of architecture can be used to create systems of intelligence, where you have the traditional relational databases dealing with systems of record, and then you can have the AI systems, graph systems, and all the other components there looking at the best way of providing data and making decisions in real time that can be fed back into the systems of record. >> Alright, alright. So let's-- >> George: I want to add something on this. >> So, Neil, let me come back to you very quickly, sorry, George. Let me come back to Neil. I want to go back to this question of what a graph-shaped problem looks like. Let's kind of run down it. We talked about AI; what about IoT, guys? Is IoT going to help us? Is it going to drive this notion of looking at the world in terms of graphs more or less?
What do you think, Neil? >> I don't know. I hadn't really thought about it, Peter, to tell you the truth. I think one thing we leave out when we talk about graphs is that we talk about, you know, nodes and edges and relationships and so forth, but you can also build a graph with very rich properties. And one thing you can get from a graph query that you can't get from a relational query, unless you write a careful predicate, is that it can actually do some thinking for you. It can tell you something you don't know. And I think that's important. So, without being too specific about IoT, I have to say that, you know, for streaming data and trying to relate it to other data, getting down very quickly to what's going on, root-cause analysis, I think graph would be very helpful. >> Great, and, Jim Kobielus, how about you? >> I think, yeah, I think that IoT is tailor-made for, or I should say, graph modeling and graph databases are tailor-made for the IoT. Let me explain. In the IoT, the graph is very much a metadata technology; it's expressing context in a connected universe. Where the IoT is concerned, it's all about connectivity, and so increasingly complex graphs of, say, individuals and the devices and the apps they use and locations and various contexts and so forth are increasingly graph-based. They're hierarchical and shifting and changing, and so in order to contextualize and personalize experience in an IoT world, I think graph databases will be embedded in the very fabric of these environments. Microsoft has a strategy they announced about a year ago to build more of an intelligent edge around a distributed graph across all their offerings. So I think graphs will become more important in this era, undoubtedly. >> George, what do you think? Business problems? >> Business problems on IoT. The knowledge graph and the digital twin it holds together both lend themselves to graph modeling, but to use the object-oriented databases as an example, where object modeling took off was in the application server, where you had the ability to program in an object-oriented language, and that mapped to a relational database. And that is an option, not the only one, but it's an option for handling graph-model data like a digital twin or IT operations. >> Well, that suggests that what we're thinking about here, if we talk about graph as metadata, and I think, Neil, this partly answers the question that you had about why anybody would want to do this, is that we're representing the output of a relational query as a node in a network of data types or data forms, so that the data itself may still be relationally structured, but from an application standpoint, the output of that query is, itself, a thing that is then used within the application. >> But to expand on that, if you store it underneath as fully normalized, in relational language, laid out so that there are no duplicates and things like that, it gives you much faster update performance, but the really complex queries typical of graph data models would be very, very slow. So, once we have, say, more in-memory technology, or we can manage under the covers the sort of multiple representations of the data-- >> Well, that's what Flash is going to allow us to do. >> Okay. >> What David Floyer just talked about. >> George: Okay. >> So we can have a single, persistent, physical storage >> Yeah. >> but it can be represented in a lot of different ways, so that we avoid some of the problems that you're starting to raise.
If we had to copy the data and have physical, physical copies of the data on disc in a lot of different places then we would run into all kinds of consistency and update. It would probably break the model. We'd probably come back to the notion of a single data store. >> George: (mumbles) >> I want to move on here, guys. One really quick thing, David Floyer, I want to ask you. If there's, you mentioned when you were database administrator and you put restrictions on how many database actions an application or transaction was allowed to generate. When we think about what a business is going to have to do to take advantage of this, are there any particular, like one thing that we need to think about? What's going to change within an IT organization to take advantage of graph database? And we'll do the action items. >> Right. So the key here is the number of database calls can grow by a factor of probably a thousand times what it is now with what we can see is coming as technologies over the next couple of years. >> So let me put that in context, David. That's a single transaction now generating a hundred thousand, >> Correct. >> a hundred thousand database calls. >> Well, access calls to data. >> Right. >> Whatever type of database. And the important thing here is that a lot of that is going to move out, with the discussion of IoT, to where the data is coming in. Because the quicker you can do that, the earlier you can analyze that data, and you talked about IoT with possible different sources coming in, a simple one like traffic lights, for example. The traffic lights are being affected by the traffic lights around them within the city. Those sort of problems are ideal for this sort of graph database. And having all of that data locally and being processed locally in memory very, very close to where those sensors are, is going to be the key to developing solutions in this area. >> So, Neil, I've got one question from you, or one question for you. I'm going to put you on the spot. I just had a thought. And here's the thought. We talk a lot about, in some of the new technologies that could in fact be employed here, whether it be blockchain or even going back to SOA, but when we talk about what a system is going to have the authority to do about the idea of writing contracts that describe very, very discretely, what a system is or is not going to do. I have a feeling those contracts are not going to be written in relational terms. I have a feeling that, like most legal documents, they will be written in what looks more like graph terms. I'm extending that a little bit, but this has rights to do this at this point in time. Is that also, this notion of incorporating more contracts directly to how systems work, to assure that we have the appropriate authorities laid out. What do you think? Is that going to be easier or harder as a consequence of thinking in terms of these graph-shaped models? >> Boy, I don't know. Again, another thing I hadn't really thought about. But I do see some real gaps in thinking. Let me give you an analogy. OLAP databases came on the scene back in the '90s whatever. People in finance departments and whatever they loved OLAP. What they hated was the lack of scalability. And now what we see now is scalability isn't a problem and OLAP solutions are suddenly bursting out all over the place. So I think there's a role for a mental model of how you model your data and how you use it that's different from the relational model. 
I think the relational model has prominence and has that advantage of, what's it called? Occupancy or something. But I think that the graph is going to show some real capabilities that people are lacking right now. I think some of them are at the very high end, things, like I said, getting to causality. But I think that graph theory itself is so much richer than the simple concept of graphs that's implemented in graph databases today. >> Yeah, I agree with that totally. Okay, let's do the action item round. Jim Kobielus, I want to start with you. Jim, action item. >> Yeah, for data professionals and analytic professionals, focus on what graphs can't do, cannot do, because you hear a lot of hyperbolic, they're not useful for unstructured data or for machine learning in database. They're not as useful for schema on read. What they are useful for is the same core thing that relational is useful for which is schema on write applied to structured data. Number one. Number two, and I'll be quick on this, focus on the core use cases that are already proven out for graph databases. We've already ticked them off here, social network analysis, recommendation engines, influencer analysis, semantic web. There's a rich range of mature use cases for which semantic techniques are suited. And then finally, and I'll be very quick here, bear in mind that relational databases have been supporting graph modeling, graph traversal and so forth, for quite some time, including pretty much all the core mature enterprise databases. If you're using those databases already, and they can perform graph traversals and so forth reasonably well for your intended application, stick with that. No need to investigate the pure play, graph-optimized databases on the market. However, that said, there's plenty of good ones, including AWS is coming out with Neptune. Please explore the other alternatives, but don't feel like you have to go to a graph database first and foremost. >> Alright. David Floyer, action item. >> Action item. You are going to need to move your data center and your applications from the traditional way of thinking about it, of handling things, which is sequential copies going around, usually taking it two or three weeks. You're going to have to move towards a shared data model where the same set of data can have multiple views of it and multiple uses for multiple different types of databases. >> George Gilbert, action item. >> Okay, so when you're looking at, you have a graph-oriented problem, in other words the data is shaped like a graph, question is what type of database do you use? If you have really complex query and analysis use cases, probably best to use a graph database. If you have really complex update requirements, best to use a combination, perhaps of relational and graph or something like multi-model. We can learn from Facebook where, for years, they've built their source of truth for the social graph on a bunch of sharded MySQL databases with some layers on top. That's for analyzing the graph and doing graph searches. I'm sorry, for updating the graph and maintaining it and its integrity. But for reading the graph, they have an entirely different layer for comprehensive queries and manipulating and traversing all those relationships. So, you don't get a free lunch either way. You have to choose your sweet spots and the trade-offs associated with them. >> Alright, Neil Raden, action item. >> Well, first of all, I don't think the graph databases are subject to a lot of hype. 
I think it's just the opposite. I think they haven't gotten much hype at all. And maybe we're going to see that. But another thing is, a fundamental difference when you're looking at a graph and a graph query, it uses something called open world reasoning. A relational database uses closed world reasoning. I'll give you an example. Country has capital city. Now you have in your graph that China has capital city Beijing, China has capital city Beijing. That doesn't violate the graph. The graph simply understands and intuits that they're different names for the same thing. Now, if you love to write correlated sub-queries for many, many different relationships, I'd say stick to your relational database. I see unique capabilities in a graph that would be difficult to implement in a relational database. >> Alright. Thank you very much, guys. Let's talk about what the action item is for all of us. This week we talked about graph databases. We do believe that they have enormous potential, but we first off have to draw a distinction between graph theory, which is a way of looking at the world and envisioning and conceptualizing solutions to problems, and graph database technology, which has the advantages of being able, for certain classes of data models, to be able to very quickly both write and read data that is based on relationships and hierarchies and network structures that are difficult to represent in a normalized relational database manager. Ultimately, our expectation is that over the next few years, we're going to see an explosion in the class of business problems that lend themselves to a graph-modeling orientation. IoT is an example, very complex analytics systems will be an example, but it is not the only approach or the only way of doing things. But what is interesting, what is especially interesting, is over the last few years, a change in the underlying hardware technology is allowing us to utilize and expand the range of tools that we might use to support these new classes of applications. Specifically, the move to Flash allows us to sustain a single physical copy of data and then have that be represented in a lot of different ways to support a lot of different model forms and a lot of different application types, without undermining the fundamental consistency and integrity of the data itself. So that is going to allow us to utilize new types of technologies in ways that we haven't utilized before, because before, whether it was object-oriented technology or OLAP technology, there was always this problem of having to create new physical copies of data which led to enormous data administrative nightmares. So looking forward, the ability to use Flash as a basis for physically storing the data and delivering it out to a lot of different model and tool forms creates an opportunity for us to use technologies that, in fact, may more naturally map to the way that human beings think about things. Now, where is this likely to really play? We talked about IoT, we talked about other types of technologies. Where it's really likely to play is when the domain expertise of a business person is really pushing the envelope on the nature of the business problem. Historically, applications like accounting or whatnot, were very focused on highly stylized data models, things that didn't necessarily exist in the real world. You don't have double-entry bookkeeping running in the wild. 
You do have it in the legal code, but for some of the things that we want to build in the future, people, the devices they own, where they are, how they're doing things, that lends itself to a real-world experience and human beings tend to look at those using a graph orientation. And the expectations over the next few years, because of the changes in the physical technology, how we can store data, we will be able to utilize a new set of tools that are going to allow us to more quickly bring up applications, more naturally manage data associated with those applications, and, very important, utilize targeted technology in a broader set of complex application portfolios that are appropriate to solve that particular part of the problem, whether it's a recommendation engine or something else. Alright, so, once again, I want to thank the remote guys, Jim Kobielus, Neil Raden, and David Floyer. Thank you very much for being here. George Gilbert, you're in the studio with me. And, once again, I'm Peter Burris and you've been listening to Action Item. Thank you for joining us and we'll talk to you again soon. (electronic music)
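As a footnote to Neil's "open world" example above: it lends itself to a short sketch. Assuming facts are stored as a set of subject-predicate-object triples (a simplified stand-in for how triple stores behave, not any particular product's API), asserting a fact that is already known changes nothing, whereas a relational table with a uniqueness constraint on country would have to reject or overwrite the second row:

```python
# Facts as (subject, predicate, object) triples held in a set.
graph = set()

def assert_fact(subject, predicate, obj):
    """Adding a known fact is a no-op, not a constraint violation."""
    graph.add((subject, predicate, obj))

assert_fact("China", "has_capital", "Beijing")
assert_fact("China", "has_capital", "Beijing")  # duplicate assertion

print(len(graph))  # 1 -- the graph still holds a single fact
```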
ENTITIES
Entity | Category | Confidence |
---|---|---|
David Floyer | PERSON | 0.99+ |
Jim Kobielus | PERSON | 0.99+ |
Neil Raden | PERSON | 0.99+ |
George | PERSON | 0.99+ |
Neil | PERSON | 0.99+ |
George Gilbert | PERSON | 0.99+ |
Peter Burris | PERSON | 0.99+ |
David | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
April 13, 2018 | DATE | 0.99+ |
Peter | PERSON | 0.99+ |
Jim | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
one question | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
ORGANIZATION | 0.99+ | |
Palo Alto, California | LOCATION | 0.99+ |
Beijing | LOCATION | 0.99+ |
single | QUANTITY | 0.99+ |
three weeks | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
This week | DATE | 0.99+ |
One | QUANTITY | 0.99+ |
first point | QUANTITY | 0.98+ |
MySQL | TITLE | 0.98+ |
more than 100 database calls | QUANTITY | 0.98+ |
China | LOCATION | 0.98+ |
Flash | TITLE | 0.98+ |
one | QUANTITY | 0.98+ |
today | DATE | 0.97+ |
theCUBE Studios | ORGANIZATION | 0.95+ |
one thing | QUANTITY | 0.94+ |
'90s | DATE | 0.91+ |
single data store | QUANTITY | 0.88+ |
double | QUANTITY | 0.87+ |
both ends | QUANTITY | 0.85+ |
a year ago | DATE | 0.85+ |
first | QUANTITY | 0.84+ |
Number two | QUANTITY | 0.84+ |
next couple of years | DATE | 0.83+ |
years | DATE | 0.82+ |
hundred thousand | QUANTITY | 0.79+ |
last 30 years | DATE | 0.72+ |
hundred thousand database | QUANTITY | 0.72+ |
thousand times | QUANTITY | 0.72+ |
Flash | PERSON | 0.68+ |
Cosmos | TITLE | 0.67+ |
Wikibon | ORGANIZATION | 0.63+ |
database | QUANTITY | 0.57+ |
about | DATE | 0.57+ |
Number one | QUANTITY | 0.56+ |
Action Item | How to get more value out of your data, April 06, 2018
>> Hi, I'm Peter Burris and welcome to another Wikibon Action Item. (electronic music) One of the most pressing strategic issues that businesses face is how to get more value out of their data. In our opinion, that's the essence of a digital business transformation: using data as an asset to improve your operations and take better advantage of market opportunities. The problem with data, though, is that it's shareable, it's copyable, it's reusable. It's easy to create derivative value out of it. One of the biggest misnomers in the digital business world is the notion that data is the new fuel or the new oil. It's not. You can only use oil once. You can apply it to a purpose and not multiple purposes. Data you can apply to a lot of purposes, which is why you are able to get such interesting and increasing returns to that asset if you use it appropriately. Now, this becomes especially important for technology companies that are attempting to provide digital business technologies or services or other capabilities to their customers. In the consumer world, it has started to reach a head. Questions about Facebook's reuse of a person's data through an ad-based business model are now starting to lead people to question whether the information asymmetry, between what I'm giving and how they're using it, is really worth the value that I get out of Facebook. That's something that consumers and certainly governments are starting to talk about. It's also one of the bases for GDPR, which is going to start enforcing significant fines in the next month or so. In the B2B world that question is going to become especially acute. Why? Because as we try to add intelligence to the services and the products that we are utilizing within digital business, some of that requires a relationship in which some amount of data is passed to improve the models and machine learning and AI that are associated with that intelligence. Now, some companies have come out and said flat out they're not going to reuse a customer's data. IBM is a good example of that: Ginni Rometty at IBM Think said, we're not going to reuse our customers' data. The question for the panel here is, is that going to be a part of a differentiating value proposition in the marketplace? Are we going to see circumstances in which some companies keep products and services cheap by reusing a client's data, while others, sustaining their experience and sustaining a trust model, say they won't? How is that going to play out in front of customers? So joining me today here in the studio, David Floyer. >> Hi there. >> And on the remote lines we have Neil Raden, Jim Kobielus, George Gilbert, and Ralph Finos. Hey, guys. >> All: Hey. >> All right so... Neil, let me start with you. You've been in the BI world as a user and as a consultant for many, many years. Help us understand the relationship between data, assets, ownership, and strategy. >> Oh, God. Well, I don't know that I've been in the BI world. Anyway, as a consultant, when we would do a project for a company, there were very clear lines of what belonged to us and what belonged to the client. They were paying us generously. They would allow us to come into their company and do things that they needed, and in return we treated them with respect. We wouldn't take their data. We wouldn't take the data models that we built, for example, and sell them to another company. As far as I'm concerned, that's just theft.
So if I'm housing another company's data because I'm a cloud provider or some sort of application provider, and I say, well, you know, I can use this data too, to me the analogy is: I'm a warehousing company, and independently I go into the warehouse and say, you know, these guys aren't moving their inventory fast enough, I think I'll sell some of it. It just isn't right. >> I think it's a great point. Jim Kobielus. As we think about the roles that data and machine learning play in training models and delivering new classes of services, we don't have a clean answer right now. So what's your thought on how this is likely to play out? >> I agree totally with Neil, first of all. If it's somebody else's data, you don't own it, therefore you can't sell it and you can't monetize it, clearly. But where you have derivative assets, like machine learning models that are derivative from data, it's the same phenomenon, it's the same issue at a higher level. You can build and train, or should, your machine learning models only from data that you have legal access to, that you own or have a license to and so forth. So as you're building these derivative assets, first and foremost, make sure, as you're populating your data lake to build and do the training, that you have clear ownership over the data. So with GDPR and so forth, we have to be doubly, triply vigilant to make sure that we're not using data that we don't have authorized ownership of or access to. That is critically important. And so, I get kind of queasy when I hear some people say we use blockchain to make the sharing of training data more distributed and federated or whatever. It's like, wait a second. That doesn't solve the issues of ownership. That makes it even more problematic. If you get this massive blockchain of data coming from hither and yon, who owns what? How do you know? Do you dare build any models whatsoever from any of that data? That's a huge gray area that nobody's really addressed yet. >> Yeah well, it might mean that the blockchain has been poorly designed. I think we talked in one of the previous Action Items about the role that blockchain design is going to play. But moving aside from the blockchain, it seems as though we generally agree that data is typically owned by somebody, and that the ownership of it, as Neil said, means that you can't intercept it at some point in time just because it is easily copied, and then generate rents on it yourself. David Floyer, what does that mean from an ongoing systems design and development standpoint? How are we going to assure, as Jim said, not only that we know what data is ours, but that we have the right protection strategies in place to make sure that as the data moves, we have some influence and control over it? >> Well, my starting point is that AI and AI-infused products are fueled by data. You need that data, and Jim and Neil have already talked about that. In my opinion, the most effective way of improving a company's products, whatever the products are, from manufacturing to agriculture to financial services, is to use AI-infused capabilities. That is likely to give you the best return on your money, and businesses need to focus on their own products. That's the first place you are trying to protect from anybody coming in. Businesses own that data. They own the data about their products in use by their customers. Use that data to improve your products with AI-infused function, and use it before your competition eats your lunch. >> But let's build on that.
So we're not saying, for example, that if you're a storage system supplier, since that's a relatively easy one, with very, very fast SSDs and very, very fast NVMe over Fabric, great technology, you can collect data about how that system is working, but that doesn't give you rights to then also collect data about how the customer's using the system. >> There is a line which you need to make sure that you are covering. For example, Call Home on a product, any product: whose data is that? You need to make sure that you can use that data, that you have some sort of agreement with the customer, and that's a win-win, because you're using that data to improve the product and prove things about it. But it's very, very clear that you should have a contractual relationship, as Jim and Neil were pointing out. You need the right to use that data. It can't come beyond the hand. But you must get it, because if you don't get it, you won't be able to improve your products. >> Now, we're talking here about technology products, which often have very concrete and obvious ownership and people who are specifically responsible for administering them. But when we start getting into the IoT domain, or other places where the device is infused with intelligence and might be collecting data that's not directly associated with its purpose, just by virtue of the nature of sensors that are out there, the whole concept of the digital twin introduces some tension in all this. George Gilbert, take us through what's been happening with the overall suppliers of technology that are related to digital twin building, designing, etc. How are they securing, or making promises committing to their customers, that they will not cross this data boundary as they improve the quality of their twins? >> Well, as you quoted Ginni Rometty starting out, she's saying IBM, unlike its competitors, will not take advantage of and leverage and monetize your data. But it's a little more subtle than that, and digital twins are just another manifestation of the industry-specific solution development that we've done for decades. The difference, as Jim and David have pointed out, is that with machine learning it's not so much code that's at the heart of these digital twins, it's the machine learning models, and the data is what informs those models. Now... you don't want all your secret sauce to go from Mercedes-Benz to BMW, but at the same time the economics of industry solutions mean that you do want some of the repeatability that we've always gotten from industry solutions. You might have parts that are just company-specific. And so in IBM's case, if you really parse what they're saying, they take what they learn in terms of the models from the data when they're working with BMW, and some of that is going to go into the industry-specific models that they're going to use when they're working with Mercedes-Benz. If you really, really peel the onion back and ask them, it's not the models, it's not the features of the models, but it's the coefficients that weight the features or variables in the models that they will keep segregated by customer. So in other words, you get some of the benefits, the economic benefits, of reuse across customers with similar expertise, but you don't actually get all of the secret sauce. >> Now, Ralph Finos-- >> And I agree with George here. I think that's an interesting topic. That's one of the important points.
It's not kosher to monetize data that you don't own, but conceivably, if you can abstract from that data at some higher level, like George is describing, in terms of weights and coefficients and so forth in a neural network that's derivative from the data, then at some point in the abstraction you should be able to monetize. I mean, it's like a paraphrase of some copyrighted material. I'm not a lawyer, but you can sell a paraphrase, because it's your own original work that's based, obviously, on your reading of Moby Dick or whatever it is you're paraphrasing. >> Yeah, I think-- >> Jim I-- >> Peter: Go ahead, Neil. >> I agree with that, but there's a line. There was a guy who worked at Capital One, this was about ten years ago, and he was their chief statistician or whatever. This was before we had words like machine learning and data science; it was called statistics and predictive analytics. He left the company, formed his own company, and rewrote and recoded all of the algorithms he had for about 20 different predictive models. Then he licensed that stuff to Sybase and Teradata and whatnot. Now, the question I have is, did that cross the line or didn't it? These were algorithms actually developed inside Capital One. Did he have the right to use those, even if he wrote new computer code to make them run in databases? So it's more than just data, I think. It's a marketplace, and I think that if you own something, someone should not be able to take it and make money on it. But that doesn't mean you can't make an agreement with them to do that, and I think we're going to see a lot of that. IMSN gets data on prescription drugs, and IRI and Nielsen get scanner data, and they pay for it and then they add value to it and resell it. So I think that's really the issue: the use has to be understood by all the parties, and the compensation has to be appropriate to the use. >> All right, so Ralph Finos. As a guy who looks at market models and handles a lot of the fundamentals for how we do our forecasting, look at this from the standpoint of how people are going to make money, because what we're talking about sounds like the idea that any derivative use is embedded in algorithms. Seeing how those contracts get set up, and I've got a comment on that in a second, but the promise, a number of years ago, was that people were going to start selling data willy-nilly as a basis for their economics, a way of capturing value out of their economic or business activities. That hasn't matured yet, generally. Do we see this brand new data economy, where everybody's selling data to each other, being the way that this all plays out? >> Yeah, I'm having a hard time imagining this as a marketplace. I think we pointed at the manufacturing and technology industries, where some of this makes some sense. But from a practitioner perspective, you're looking for variables that are meaningful, that are in a form you can actually use to make predictions, where you understand the history and the validity of that data. And in a lot of cases there's a lot of garbage out there that you can't use. And the notion of paying for something that ultimately you look at and say, oh crap, this isn't really helping me, is going to be... maybe not an insurmountable barrier, but it's going to create some obstacles in the market for adoption of this kind of thought process.
We have to think about the utility of the data that feeds your models. >> Yeah, I think there are going to be a lot of legal questions raised, and I recommend that people go look at a recent SiliconANGLE article written by Mike Wheatley and edited by our Editor in Chief Robert Hof about Microsoft letting technology partners own rights to joint innovations. This is quite a change for Microsoft, who used to send you, if you sent an email with an idea to them, an email back saying, oh, just to let you know, any correspondence we have here is the property of Microsoft. So there clearly is tension in the model about how we're going to utilize data and enable derivative use, and how we're going to share and appropriate value and share in the returns of that. I think this is going to be an absolutely central feature of business models, certainly in the digital business world, for quite some time. The last thing I'll note, and then I'll get to the Action Items: one of the biggest challenges whenever we start talking about how we set up businesses and institutionalize the work that's done is to look at the nature and scope of the assets, and in circumstances where an asset is used by two parties and is generating a high degree of value, as measured by the transactions against that asset, there's always going to be a tendency for one party to try to take ownership of it. The party that's able to generate greater returns than the other almost always makes moves to try to take more control of that asset, and that's the basis of governance. Everybody talks about data governance as though it's something that you worry about with your backup and restore. That's important, but this notion of data governance increasingly is going to become a feature of strategy and boardroom conversations about what it really means to create data assets, sustain those data assets, get value out of them, and how we determine whether or not the right balance is being struck between the value that we're getting out of our data and what third parties are getting out of our data, including customers. So with that, let's do a quick Action Item round. David Floyer, I'm looking at you. Why don't we start here. David Floyer, Action Item. >> So my Action Item is for businesses: you should focus. Focus on data about your products in use by your customers to help improve the quality of your products, and fuse AI into those products as one of the most efficient ways of adding value to them. And do that before your competition has a chance to come in and get data that will stop you from doing that. >> George Gilbert, Action Item. >> I guess mine would be that in most cases you want to embrace some amount of reuse, because of the economics involved in your joint development with a solution provider. But if others are going to get some benefit from reusing some of the intellectual property that informs models that you build, make sure you negotiate with your vendor that for any upgrades to those models, whether they're digital twins or in other forms, there's a canonical version that can come back and be an upgrade path for you as well. >> Jim Kobielus, Action Item. >> My Action Item is for businesses to regard your data as a product that you monetize yourself.
Or, if you are unable to monetize it yourself and there is a partner, like a supplier or a customer, who can monetize that data, then negotiate the terms of that monetization in your relationship and be vigilant about it, so you get a piece of that stream even if the bulk of the work is done by your partner. >> Neil Raden, Action Item. >> It's all based on transparency. Your data is your data. No one else can take it without your consent. That doesn't mean that you can't get involved in relationships where there's an agreement to do that. But the problem is that most agreements, especially when you look at business-to-consumer agreements, are so onerous that nobody reads them and nobody understands them. So the person providing the data has to have an unequivocal right to sell it to you, and the person buying it has to really understand the limits on what they can do with it. >> Ralph Finos, Action Item. You're muted, Ralph. But it was brilliant, whatever it was. >> Well, it was, and I really can't say much more than that. (Peter laughs) I understand, from a manufacturing perspective, how the value could be there. But as a practitioner, if you're fishing for data out there that someone has that might look like something you can use, chances are it's not. And you need to be real careful about spending money to get data that you're not really clear is going to help you. >> Great. All right, thanks very much, team. So here's our Action Item conclusion for today. The whole concept of digital business is predicated on the idea of using data assets in a differential way to better serve your markets and improve your operations. It's your data. Increasingly, that is going to be the base for differentiation. And any weak undertaking to allow that data to get out has the potential that someone else can, through their data science and their capabilities, re-engineer much of what you regard as your differentiation. We've had conversations with leading data scientists who say that if someone were to sell customer data into an open marketplace, it would take about four days for a great data scientist to re-engineer almost everything about your customer base. So as a consequence, we have to tread lightly here as we think about what it means to release data into the wild. Ultimately, the challenge for any business will be: how do I establish the appropriate governance and protections, not just looking at the technology but rather looking at the overall notion of the data assets? If you don't understand how to monetize your data and nonetheless enter into a partnership with somebody else, by definition that partner is going to generate greater value out of your data than you are. There are significant information asymmetries here. So every company must undertake an understanding of how to generate value out of their data. We don't think that there's going to be a general-purpose marketplace for sharing data in a lot of ways.
This is going to be a heavily contracted arrangement, but it doesn't mean that we should not take important steps right now to start doing a better job of instrumenting our products and services, so that we can start collecting data about our products and services, because the path forward is going to demonstrate that we're going to be able to dramatically improve the quality of the goods and services we sell by reducing the asset specificities for our customers by making them more intelligent and more programmable. Finally, is this going to be a feature of a differentiated business relationship through trust? We're open to that. Personally, I'll speak for myself, I think it will. I think there is going to be an important element, ultimately, of being able to demonstrate to a customer base, to a marketplace, that you take privacy, data ownership, and intellectual property control of data assets seriously, and that you are very, very specific, very transparent, in how you're going to use those in derivative business transactions. All right. So once again, David Floyer, thank you very much here in the studio. On the phone: Neil Raden, Ralph Finos, Jim Kobielus, and George Gilbert. This has been another Wikibon Action Item. (electronic music)
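George's earlier point about reusing model structure while segregating coefficients by customer can be sketched in a few lines of Python. The feature names and weights below are invented, and a real system would involve trained models rather than hand-set numbers, but the segregation principle is the same: the industry-level structure is shared, while each customer's learned weights stay in their own bucket:

```python
# Shared, industry-level model structure (reusable across customers).
SHARED_FEATURES = ["mileage", "engine_temp", "vibration"]

# Customer-specific coefficients, trained only on each customer's own
# data and never pooled or resold. All values are hypothetical.
CUSTOMER_WEIGHTS = {
    "automaker_a": {"mileage": 0.42, "engine_temp": 1.31, "vibration": 0.07},
    "automaker_b": {"mileage": 0.39, "engine_temp": 0.88, "vibration": 0.21},
}

def score(customer, observation):
    """Score an observation using the shared structure but private weights."""
    weights = CUSTOMER_WEIGHTS[customer]
    return sum(weights[f] * observation[f] for f in SHARED_FEATURES)

reading = {"mileage": 1.0, "engine_temp": 0.5, "vibration": 2.0}
print(score("automaker_a", reading))  # uses only automaker_a's coefficients
```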
ENTITIES
Entity | Category | Confidence |
---|---|---|
Jim | PERSON | 0.99+ |
David Floyer | PERSON | 0.99+ |
Jim Kobielus | PERSON | 0.99+ |
Neil | PERSON | 0.99+ |
George Gilbert | PERSON | 0.99+ |
Peter Burris | PERSON | 0.99+ |
George | PERSON | 0.99+ |
Neil Raden | PERSON | 0.99+ |
BMW | ORGANIZATION | 0.99+ |
Mike Wheatley | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Ginni Rometty | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
IRI | ORGANIZATION | 0.99+ |
Nielsen | ORGANIZATION | 0.99+ |
April 06, 2018 | DATE | 0.99+ |
Peter | PERSON | 0.99+ |
David | PERSON | 0.99+ |
Ralph Finos | PERSON | 0.99+ |
one party | QUANTITY | 0.99+ |
two parties | QUANTITY | 0.99+ |
Mercedes-Benz | ORGANIZATION | 0.99+ |
| ORGANIZATION | 0.99+ |
Mercedes Benz | ORGANIZATION | 0.99+ |
One party | QUANTITY | 0.99+ |
Robert Hof | PERSON | 0.99+ |
Capital One | ORGANIZATION | 0.99+ |
first | QUANTITY | 0.99+ |
Ralph | PERSON | 0.99+ |
one | QUANTITY | 0.99+ |
today | DATE | 0.98+ |
One | QUANTITY | 0.98+ |
IMSN | ORGANIZATION | 0.98+ |
GDPR | TITLE | 0.98+ |
Teradata | ORGANIZATION | 0.98+ |
next month | DATE | 0.96+ |
Moby Dick | TITLE | 0.95+ |
about 20 different predictive models | QUANTITY | 0.95+ |
Sybase | ORGANIZATION | 0.95+ |
decades | QUANTITY | 0.93+ |
about ten years ago | DATE | 0.88+ |
about four days | QUANTITY | 0.86+ |
second | QUANTITY | 0.83+ |
once | QUANTITY | 0.82+ |
Wikibon | ORGANIZATION | 0.8+ |
of years ago | DATE | 0.77+ |
Action | ORGANIZATION | 0.68+ |
SiliconANGLE | TITLE | 0.66+ |
twins | QUANTITY | 0.64+ |
Editor In Chief | PERSON | 0.61+ |
Items | QUANTITY | 0.58+ |
twin | QUANTITY | 0.48+ |
Think | ORGANIZATION | 0.46+ |
Action Item | March 30, 2018
>> Hi, I'm Peter Burris and welcome to another Wikibon Action Item. (electronic music) Once again, we're broadcasting from theCUBE studios in beautiful Palo Alto. Here in the studio with me are George Gilbert and David Floyer. And remote, we have Neil Raden and Jim Kobielus. Welcome everybody. >> David: Thank you. >> So this is kind of an interesting topic that we're going to talk about this week. And it really is how we're going to find new ways to generate derivative use out of many of the applications, especially web-based applications, that have been built over the last 20 years. A basic premise of digital business is that the difference between business and digital business is the data, and how you craft data as an asset. Well, as we all know, in any universal Turing machine data is the basis for representing both the things that you're acting upon and the algorithms, the software itself. Software is data, and the basic principles of how we capture software-oriented data assets, turn them into derivative sources of value, and then reapply them to new types of problems are going to become an increasingly important issue as we think about how the world of digital business is going to play out over the course of the next few years. Now, there are a lot of different domains where this might work, but one in particular that's especially important is the web application world, where we've had a lot of application developers and a lot of tools be a little bit more focused on how we use web-based services to manipulate things and get software to do the things we want to do, and it's also a source of a lot of the data that's been streaming into big data applications. And so it's a natural place to think about how we're going to be able to create derivative use or derivative value out of crucial software assets. How are we going to capture those assets, turn them into something that has a different role for the business, performs different types of work, and then reapply them? So to start the conversation, Jim Kobielus, why don't you take us through what some of these tools start to look like? >> Hello, Peter. Yes, so really what we're looking at here, in order to capture these assets, the web applications, we first have to generate those applications, and the bulk of that work, of course, is and remains manual. And in fact, there is a proliferation of web application development frameworks on the market, and the range of them continues to grow. Everything from React to Angular to Ember and Node.js and so forth. So one of the core issues that we're seeing out there in the development world is: are there too many of these? Is there any prospect for simplification, consolidation, and convergence on a web application development framework to make the front-end choices for developers a bit easier and more straightforward, in terms of the front-end development of JavaScript and HTML as well as the back-end development of the logic to handle the interactions, not only with the front end on the UI side but also with the infrastructure web services and so forth? Once you, a professional programmer, have developed the applications, then and only then can we consider the derivative uses you're describing, such as incorporation or orchestration of web apps through robotic process automation and so forth. So the issue is: how can we simplify, or is there a trend toward simplification, or will there soon be a trend toward simplification, of front-end manual development?
And right now, I'm not seeing a whole lot of action in the direction of simplification of front-end development. It's just a fact. >> So we're not seeing a lot of simplification and convergence on the actual frameworks for creating software or creating these types of applications. But we're starting to see some interesting trends for stuff that's already been created. How can we generate derivative use out of it? And also, per some of our augmented programming research, new ways of envisioning the role that artificial intelligence, machine learning, etc. can play in identifying patterns of utilization so that we are better able to target those types of things that could be applied to derivative use. Have I got that right, Jim? >> Yeah, exactly. With AI within robotic process automation, anything that has already been built can be captured through natural language processing, through computer image recognition, OCR, and so forth. In that way, it's an asset that can be repurposed in countless ways, and that's the beauty of RPA, or where it's going. So the issue is then not so much capture of existing assets but how can we speed up and really automate the original development of all that UI logic? I think RPA is part of the solution but not the entire solution, meaning RPA provides visual front-end tools for the rest of us to orchestrate more of the front-end development of the application UI and interaction logic. >> And it's also popping up-- >> That's part of broader low-code-- >> Yeah, it's also popping up at a lot of the interviews that we're doing with CIOs about related types of things, but I want to scope this appropriately. So we're not talking about how we're going to take those transaction processing applications, David Floyer, and envelope them and containerize them and segment them and apply new software. That's not what we're talking about, nor are we talking about the machine-to-machine world. Robotic process automation really is a tool for creating robots out of human-facing interfaces that can scale the amount of work and recombine it in different ways. But we're not really talking about the two extremes, the hardcore IoT or the hardcore systems of record. Right? >> Absolutely. But one question I have for Jim and yourself: the philosophy for most people developing these days is mobile first. The days of having an HTML layout on a screen have gone. If you aren't mobile first, that's going to be pretty well a disaster for any particular development. So Jim, how does RPA, and how does your discussion, fit in with mobile and all of the complexity that mobile brings? All of the alternative ways that you can do things with mobile. >> Yeah. Well David, of course, low-code tools, there are many. There are dozens out there. There are many of those that are geared primarily towards supporting fast automated development of mobile applications to run on a variety of devices and, you know, mobile UIs. That's part of the solution, as it were. But also in the standard web application development world, you know, there are these frameworks that I've described. Everything from React to Angular to Vue to Ember, everything else, are moving towards a concept, more than a concept, it's a framework or paradigm, called progressive web apps.
And what progressive web apps are all about, and that's really the mainstream of web application development now, is blurring the distinction between mobile and web and desktop applications, because you build applications, JavaScript applications, for browsers. The apps look and behave as if they were real-time, interactive, in-memory mobile apps. What that means is that they download fresh content throughout a browsing session, progressively. I'm putting "progressively" in air quotes, because that's where the progressive in progressive web app comes in. And they don't require the end user to visit an app store or download software. They don't require any special capabilities in terms of synchronizing data from servers to run in memory natively inside of web-accessible containers that are local to the browser. They just feel mobile even though they, excuse me, they may be running on a standard desktop with narrowband connectivity and so forth. So they scream, and they scream in the context of a standard JavaScript and Ajax browser session. >> So when we think about this, jeez Jim, it almost sounds like client-side Java, but I think we're talking about something, as you said, that evolves as the customer uses it, and there's a lot of techniques and approaches that we've been using to do some of those things. But George Gilbert, the reason I bring up the notion of client-side Java is because we've seen other initiatives over the years try to do this. Now, partly they failed because, David Floyer, they focused on too much and tried to standardize or presume that everything required a common approach, and we know that that's always going to fail. But what are some of the other things that we need to think about as we think about ways of creating derivative use out of software or digital assets? >> Okay, so I come at it from two angles. And as Jim pointed out, there's been a Cambrian explosion of creativity and innovation on, frankly, client-side development and server-side development. But if you look at how we're going to recombine our application assets, we tried this 20 years ago with EAI, which was sort of like MuleSoft but only for on-prem apps. And it didn't work because every app was bespoke essentially-- >> Well, it worked for point-to-point classes of applications. >> Yeah, but it required bespoke development for every-- >> Peter: Correct. >> Every instance, because the apps were so customized. >> Peter: And the interfaces were so customized. >> Yes. At the same time, we were trying to build higher-level application development capabilities on desktop productivity tools with macros, and then scripting languages, cross-application, and visual development, or using applications as visual development building blocks. Now, you put those two things together and you have the ability to work with applications that have user interfaces, and you have the functionality that's in the richer enterprise applications, and now we have the technology to say let's program by example on essentially a concrete use case and a concrete workflow. And then you go back in and you progressively generalize it so it can handle more exception conditions and edge conditions. In other words, you start with the concrete and you get progressively more abstract. >> Peter: You start with the work that the application performs. >> Yeah. >> And not knowledge of the application itself. >> Yes.
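To ground Jim's description of progressive web apps above, here is a minimal sketch of the mechanism that lets an app "download fresh content throughout a browsing session progressively": a service worker that pre-caches the application shell and then refreshes its cache as the user browses. It's an illustrative sketch, not anything a panelist showed; the file names and cache name are hypothetical. George picks the recombination thread back up right after it.

```typescript
// sw.ts -- compiled to sw.js and registered from the page with
// navigator.serviceWorker.register('/sw.js')
declare const self: ServiceWorkerGlobalScope;

const CACHE = 'pwa-shell-v1'; // hypothetical cache name
const SHELL = ['/', '/index.html', '/app.js', '/styles.css']; // hypothetical assets

// On install, pre-cache the application shell so the app can load offline.
self.addEventListener('install', (event: ExtendableEvent) => {
  event.waitUntil(caches.open(CACHE).then((cache) => cache.addAll(SHELL)));
});

// On fetch, answer from the cache first and fall back to the network,
// caching fresh responses as the browsing session progresses.
self.addEventListener('fetch', (event: FetchEvent) => {
  event.respondWith(
    caches.match(event.request).then(
      (hit) =>
        hit ??
        fetch(event.request).then((response) => {
          const copy = response.clone();
          caches.open(CACHE).then((cache) => cache.put(event.request, copy));
          return response;
        })
    )
  );
});
```

The same compiled file works unchanged on a phone or a narrowband desktop, which is the blurring of mobile, web, and desktop that Jim is describing.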
But the key thing is, as you said, recombining assets, because we're sort of marrying the best of the EAI world with the best of the visual client-side development world. Where, as Jim points out, machine learning is making it easier for the tools to stay up to date as the user interfaces change across releases. This means that, I wouldn't say this is as easy as spreadsheet development, it's just not. >> It's not like building spreadsheet macros, but it's more along those lines. >> Yeah, but it's not as low-level as just building raw JavaScript either, per Jim's great example of the JavaScript client-side frameworks. Look at our Gmail inbox application that millions of people use. That just downloads a new version whenever they want to drop it, and they're just shipping JavaScript over to us. But the key thing, and this is, Peter, your point about digital business: by combining user interfaces, we can bridge applications that were silos, then we can automate the work the humans were doing to bridge those silos, and then we can reconstitute workflows in a much more efficient-- >> Around the digital assets, which is kind of how business ultimately evolves. And that's a crucial element of this whole thing. So let's change direction a little bit, because, as Jim said, we've been talking about the fact that there are all these frameworks out there. There may be some consolidation on the horizon; we're researching that right now, although there's not a lot of evidence that it's happening. But there clearly is an enormous number of digital assets in place inside these web-based applications, whether relative to mobile or something else. And we want to create derivative use out of them, and there are some new tools that allow us to do that in a relatively simple, straightforward way, like RPA, and there are certainly others. But that's not where this ends up. We know that this is increasingly going to be a target for AI, what we've been calling augmented programming, and the ability to use machine learning and related types of technologies to reveal, make transparent, and gain visibility into patterns within applications and within the use of data, and then have that become a crucial feature of the development process. And increasingly, even potentially, to start actually creating code automatically based on very clear guidance about what work needs to be performed. Jim, what's happening in that world right now? >> Oh, let's see. So basically, I think what's going to happen over time is that more of the development cycle for web applications will incorporate not just the derivative assets, the AI to be able to decompose existing UI elements and recombine them, enabling flexible and automated recombination in various ways, but will also enable greater tuning of the UI in an automated fashion through A/B testing that's inline to the development cycle, based on metrics that AI is able to sift through. Different UI designs can be put out into production applications in real time and really tested with different categories of users, and then the best-suited or best-fit design, based on things like reducing user abandonment rates and speeding up access to commonly required capabilities, can be selected. The metrics can be rolled inline into the automation process to automatically select the best-fit UI design that had been developed through automated means.
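As a hedged illustration of what Jim is describing, inline metrics automatically promoting the best-fit UI design, here is a minimal epsilon-greedy sketch. The variant names and the completion-rate metric (the inverse of an abandonment rate) are assumptions for illustration, not a description of any particular product; Jim continues the point right after it.

```typescript
// Epsilon-greedy selection among candidate UI designs, scored by an
// inline metric: session completion rate, i.e. one minus abandonment.
interface DesignStats { shows: number; completions: number; }

const stats: Record<string, DesignStats> = {
  'checkout-v1': { shows: 0, completions: 0 }, // hypothetical variants
  'checkout-v2': { shows: 0, completions: 0 },
};

const EPSILON = 0.1; // fraction of traffic reserved for exploration

function completionRate(name: string): number {
  const s = stats[name];
  return s.shows === 0 ? 0 : s.completions / s.shows;
}

// Called per session: usually exploit the best variant, sometimes explore.
function pickDesign(): string {
  const names = Object.keys(stats);
  if (Math.random() < EPSILON) {
    return names[Math.floor(Math.random() * names.length)];
  }
  return names.reduce((best, name) =>
    completionRate(name) > completionRate(best) ? name : best
  );
}

// The serving layer records an impression, and later the outcome, so the
// best-fit design is promoted automatically rather than by a manual test.
function recordShow(name: string): void { stats[name].shows += 1; }
function recordCompletion(name: string): void { stats[name].completions += 1; }
```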
In other words, this real-world experimentation of the UI has been going on for quite some time in many enterprises, and increasingly it involves data scientists who are managing the predictive models to very much drive the whole process of promoting the best-fit design to production status. I think this will accelerate. We'll take more of these inline metrics on UI and bring them, I believe, into more RPA-style environments, so the rest of us building out these front ends can automate more of our transactions, and many more of the UIs can take advantage of the fact that we'll let the infrastructure choose the best fit of the designs for us, without us having to worry about doing A/B testing and all that stuff. The cloud will handle it. >> So it's a big vision: this notion of, eventually, through more concrete, standard, well-understood processes, applying some of these AI and ML technologies to choosing options for the developer, and even automating some elements of those options based on policy and rules. Neil Raden, again, we've been looking at similar types of things for years. How has that worked in the past, and let's talk a bit about what needs to happen now to make sure that if it's going to work, it's going to work this time. >> Well, it really hasn't worked very well. And the reason it hasn't worked very well is because no one has figured out a representational framework to really capture all the important information about these objects. It's just too hard to find them. Everybody knows that when you develop software, 80% of it is grunt work. It's just junk. You know, it's taking out the trash and it's setting things up and whatever. And the real creative stuff is a very small part of it. So if you could alleviate the developer from having to do all that junk by just picking up pieces of code that have already been written and tested, that would be big. But the idea of this has been overwhelmed by the scale and the complexity. And people have tried to create libraries like JavaBeans and object-oriented programming and that sort of thing. They've tried to create catalogs of these things. They've used relational databases; doesn't work. My feeling, and I hate to use the word because it always puts people to sleep, is that we need some kind of ontology that's deep enough and rich enough to really do this. >> Oh, hold on Neil, I'm feeling... (laughs) >> Yeah. Well, I mean, what good is it? I mean, go to Git, right? You can find a thousand things, but you don't know which one is really going to work for you, because it's not rich enough, it doesn't have enough information. It needs to have quality metrics. It needs to have reviews by people who have used it, and so forth. So that's where I think we run into trouble. >> Yeah, I know. >> As far as robots, yeah? >> Go ahead. >> As far as robots writing code, you're going to have the same problem. >> No, well here's where I think it's different this time, and I want to throw it out to you guys and see if it's accurate, and we'll get to the action items. Here's where I think it's different. In the past, partly perhaps because it's where developers were most fascinated, we tried to create object-oriented databases and object-oriented representations of data, using object-oriented models as a way of thinking about it, and object-oriented code and object-oriented this and that, and a lot of it was relatively low in the stack.
And we tried to create everything from scratch, and it turned out that whenever we did that, it was almost like CASE from many years ago. You create it in the tool, and then you maintain it out of the tool, and you lose all organization of how it worked. What we're talking about here, and the reason why I think this is different, I think Neil is absolutely right: it's because we're focusing our attention on the assets within an application that create the actual business value. What does the application do? And we try to encapsulate those and render those as things that are reusable, without necessarily doing an enormous amount of work on the back end. Now, we have to be worried about the back end. It's not going to do any good to do a whole bunch of RPA or related types of stuff on the front end that kicks off an enormous number of transactions that go after a little server that's 15 years old and has historically only handled a few transactions a minute. So we have to be very careful about how we do this. But nonetheless, by focusing more attention on what is generating value in the business, namely the actions that the application delivers, as opposed to knowledge of the application itself, namely how it does it, then I think that we're constraining the problem pretty dramatically, subject to the realities of what it means to actually be able to maintain and scale applications that may be asked to do more work. What do you guys think about that? >> Now Peter, let me say one more thing about this, about robots. I think you're all a lot more sanguine about AI and robots doing these kinds of things. I'm not. Let me read to you three pickup lines that a deep neural network developed after being trained to do pickup lines. You must be a tringle? 'Cause you're the only thing here. Hey baby, you're to be a key? Because I can bear your toot? Now, what kind of code would-- >> Well look, we go back 50 years to ELIZA and the whole notion of, whatever it was, the interactive psychology. Look, let's be honest about this. Neil, you're making a great point. I don't know that any of us are more or less sanguine, and that probably is a good topic for a future Action Item: what are the practical limits of AI, and how will that change over time? But let's be relatively simple here. The good news about applying AI inside IT problems is that you're starting with engineered systems, with engineered data forms and engineered data types, and you're working with engineers, and a lot of that stuff is relatively well structured, certainly more structured than the outside world, and it starts with digital assets. That's why AI for IT operations management is more likely to work. That's why AI for application programming is more likely to work, as opposed to AI to do pickup lines, which, as you said, is semantically all over the place. There are very, very few people that are going to conform to a set of conventions for... well, I want to move away from the concept of pickup lines, for other social interactions that are very, very complex. We don't look at a face and get excited or not in a way that corresponds to an obvious, well-understood semantic problem. >> Exactly. The value that these applications deliver is in their engagement with the real world of experience, and you can't encode the real world of human lived experience in a crisp, clear way.
It simply has to be proven out in the application's engagement, through people or not through people, with the real-world outcome. And some outcomes, like the ones that Neil read off there, those ridiculous pickup lines... most of those kinds of automated solutions won't make a freaking bit of sense, because you need humans with their brains. >> Yeah, you need human engagement. So coming back to this key point, there's a constraint that we're putting on this right now, and it's the reason why perhaps I'm a little bit more ebullient than you might be, Neil. But I want to be careful about this, because I also have some pretty strong feelings about what the limits of AI are, regardless of what Elon Musk says. At the end of the day, we're talking about digital objects, not real objects, that are engineered, that haven't evolved over a few billion years, to deliver certain outputs, and data that's been tested and relatively well verified, as opposed to an unlimited, at least from a human experience standpoint, potential set of outcomes. So in that small world, and certainly the infrastructure universe is part of that, and what we're saying is that increasingly the application development universe is going to be part of it as part of the digital business transformation, I think it's fair to say that we're going to start seeing AI, machine learning, and some of these other things being applied to that realm with some degree of success. But, something to watch for. All right, so let's do action items. David Floyer, why don't we start with you. Action item. >> In addressing this, I think that the key in terms of business focus is first of all mobile: you have to design things for mobile. So any use of any particular platform or particular set of tools has to lead to mobile being first. And mobiles are changing rapidly, with the amount of data that's being generated on the mobile itself and around the mobile. So that's the first point I would make from a business perspective. And the second is that, from a business perspective, one of the key things is that you can reduce cost. Automation must be a key element of this, and therefore designing things that will take out tasks and remove tasks, make things more efficient, is going to be an incredibly important part of this. >> And reduce errors. >> And reduce errors, absolutely. Probably most important is to reduce errors, to take those out of the chain, and where you can, speed things up by removing human intervention and human tasks and raising what humans are doing to a higher level. >> Other things. George Gilbert, action item. >> Okay, so really quickly, on David's point, we have many more application forms and expressions that we have to present, like mobile first. And going back to using RPA as an example: the UiPath product that we've been working with, the core of its capability is to be able to identify specific UI elements in a very complex presentation, whether it's on a web browser or whether it's on a native app on your desktop or whether it's mobile. I don't know how complete they are on mobile, because I'm not sure if they did that first. But that core capability, to identify, in a complex presentation, essentially a collection and hierarchy of UI elements, that's what makes it powerful. Now, on the AI part, I don't think it's as easy as pointing it at one app and then another and saying go make them talk.
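To make George's point concrete: the core RPA capability he names is locating specific UI elements in a complex hierarchy and driving them the way a person would. As a stand-in sketch, here is the same idea expressed with the selenium-webdriver package against a browser UI; UiPath's own selector format is different, and the URL, selectors, and form fields below are hypothetical. George qualifies the point right after the sketch.

```typescript
// Locate UI elements by their place in the page hierarchy, not via an API,
// and drive them programmatically -- the essence of a software robot.
import { Builder, By, until } from 'selenium-webdriver';

async function reenterInvoice(invoiceId: string, amount: string): Promise<void> {
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    await driver.get('https://legacy-app.example.com/invoices'); // hypothetical app
    // Wait for the target element to appear in the UI hierarchy.
    await driver.wait(until.elementLocated(By.css('#invoice-form')), 10_000);
    // Fill the form exactly as a human clerk would.
    await driver.findElement(By.name('invoiceId')).sendKeys(invoiceId);
    await driver.findElement(By.name('amount')).sendKeys(amount);
    await driver.findElement(By.css('#invoice-form button[type="submit"]')).click();
  } finally {
    await driver.quit();
  }
}
```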
It's more like helping you on the parts where they might be a little ambiguous, like if pieces move around from release to release, things like that. So my action item is to start prototyping with the RPA tools, because they're probably robust enough to start integrating your enterprise apps. And the only big new wrinkle that's come out in the last several weeks, that is now in everyone's consciousness, is the MuleSoft acquisition by Salesforce, because that's going back to the EAI model. And we will see more app-to-app integration at the cloud level that's now possible. >> Neil Raden, action item. >> Well, you know, Mark Twain said there are only two kinds of people in the world: the kind who think there are only two kinds of people in the world, and the ones who know better. I'm going to deviate from that a little and say that there are really two kinds of software developers in the world. There are the true computer scientists who want to write great code. It's elegant, it's maintainable, it adheres to all the rules, it's creative. And then there's an army of people who are just trying to get something done. So the boss comes to you and says we've got to get a new website up apologizing for selling the data of 50 million of our customers, and you need to do it in three days. Now, those are the kind of people who need access to things that can be reused. And I think there's a huge market for that, as well as all these other software development robots, so to speak. >> Jim Kobielus, action item. >> Yeah, for simplifying web application development, I think developers need to distinguish between back-end and front-end frameworks. There's a lot of convergence around the back-end framework, specifically Node.js. So you can basically decouple the decision in terms of front-end frameworks from that, and you need to, right upfront, make sure that you have a back end that supports many front ends, because there are many front ends in the world. Secondly, the front ends themselves seem to be moving towards React and Angular and Vue as the predominant ones. You'll find more programmers who are familiar with those. And then thirdly, as you move towards consolidation onto fewer frameworks on the front end, move towards low-code tools that allow you, just with the push of a button, you know, visual development, to deploy the built-out UI to a full range of mobile devices and web applications. And to close my action item, I'll second what David said. Move toward a mobile-first development approach for web applications, with a focus on progressive web applications that can run on mobiles and elsewhere, where they give a mobile experience: with intermittent connectivity, with push notifications, with a real-time, in-memory, fast experience. Move towards a mobile-first development paradigm for all of your browser-facing applications, and that really is the simplification strategy you can and should pursue right now on the development side, because web apps are so important, you need a strategy. >> Yeah, so mobile first, irrespective of the underlying biology or what have you of the user. All right, so here's our action item. Our view on digital business is that a digital business uses data differently than a normal business. And a digital business transformation ultimately is about how we increase our visibility into our data assets and find new ways of creating new types of value, so that we can better compete in markets.
Now, that includes data, but it also includes application elements, which also are data. And we think increasingly enterprises must take a more planful and purposeful approach to identifying new ways of deriving additional streams of value out of application assets, especially web application assets. Now, this is a dream that's been put forward for a number of years, and sometimes it's worked better than others. But in today's world we see a number of technologies emerging that are likely, at least in this more constrained world, to present a significant new set of avenues for creating new types of digital value. Specifically, tools like RPA, robotic process automation, that look at the outcomes of an application and allow programmers to use a by-example approach to start identifying what the UI elements are, what those UI elements do, and how they could be combined, so that they can be composed into new things and thereby provide a new application integration approach, one which is not at the data and not at the code, but more at the work that a human being would naturally do. These allow for greater scale, greater automation, and a number of other benefits. The reality, though, is that you also have to be very cognizant as you do this, even though you can find these assets, find a new derivative form, and apply them very quickly to new potential business opportunities, of what's happening at the back end as well. Whether it's how you go about creating the assets with some of the front-end tooling, being very cognizant of which front ends are going to be better or worse at creating these more reusable assets. Or whether you're talking about still relatively mundane things, like how a database serializes access to data and will fall over because you've created an automated front end that's just throwing a lot of transactions at it. The reality is there's always going to be complexity. We're not going to see all the problems being solved, but some of the new tools allow us to focus more attention on where the real business value is created by apps, find ways to reuse that, apply it, and bring it into a digital business transformation approach. All right. Once again: George Gilbert, David Floyer, here in the studio. Neil Raden, Jim Kobielus, remote. You've been watching Wikibon Action Item. Until next time, thanks for joining us. (electronic music)
ENTITIES
Entity | Category | Confidence |
---|---|---|
Jim | PERSON | 0.99+ |
David Floyer | PERSON | 0.99+ |
Jim Kobielus | PERSON | 0.99+ |
Neil Raden | PERSON | 0.99+ |
David | PERSON | 0.99+ |
Peter Burris | PERSON | 0.99+ |
George Gilbert | PERSON | 0.99+ |
Mark Twain | PERSON | 0.99+ |
Neil | PERSON | 0.99+ |
Peter | PERSON | 0.99+ |
March 30, 2018 | DATE | 0.99+ |
80% | QUANTITY | 0.99+ |
50 million | QUANTITY | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
Node.js | TITLE | 0.99+ |
Java | TITLE | 0.99+ |
Salesforce | ORGANIZATION | 0.99+ |
two kinds | QUANTITY | 0.99+ |
second | QUANTITY | 0.99+ |
first point | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
Angular | TITLE | 0.99+ |
JavaScript | TITLE | 0.99+ |
Elon Musk | PERSON | 0.99+ |
MuleSoft | ORGANIZATION | 0.99+ |
two angles | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
Gmail | TITLE | 0.98+ |
millions of people | QUANTITY | 0.98+ |
two things | QUANTITY | 0.98+ |
two extremes | QUANTITY | 0.98+ |
three days | QUANTITY | 0.98+ |
dozens | QUANTITY | 0.98+ |
one question | QUANTITY | 0.98+ |
React | TITLE | 0.98+ |
one app | QUANTITY | 0.97+ |
Ember | TITLE | 0.97+ |
Vue | TITLE | 0.97+ |
first | QUANTITY | 0.96+ |
20 years ago | DATE | 0.96+ |
today | DATE | 0.96+ |
this week | DATE | 0.95+ |
Secondly | QUANTITY | 0.94+ |
Ajax | TITLE | 0.94+ |
JavaBeans | TITLE | 0.93+ |
RPA | TITLE | 0.91+ |
Wikibon | TITLE | 0.91+ |
thirdly | QUANTITY | 0.9+ |
theCUBE | ORGANIZATION | 0.88+ |
CASE | TITLE | 0.88+ |
Wikibon Action Item | March 23rd, 2018
>> Hi, I'm Peter Burris, and welcome to another Wikibon Action Item. (funky electronic music) This was a very interesting week in the tech industry, specifically because IBM's Think conference aggregated a large number of people. Now, theCUBE was there. Dave Vellante, John Furrier, and myself all participated in somewhere in the vicinity of 60 or 70 interviews with thought leaders in the industry, including a number of very senior IBM executives. The reason why this becomes so important is because IBM made a proposal to the industry about how some of the digital disruption that the market faces is likely to unfold. The normal approach, or the normal mindset that people have used, is that startups, digital-native companies, were going to change the way that everything was going to operate, and the dinosaurs were going to go by the wayside. IBM's interesting proposal is that the dinosaurs actually are going to learn to dance, playing on a book title from a number of years ago. And the specific argument was laid out by Ginni Rometty in her keynote, when she said that there are a number of factors that are especially important here. Factor number one is that increasingly, businesses are going to recognize that the role their data plays in competition is ascending. It's getting more important. Now, this is something that Wikibon's been arguing for quite some time. In fact, we have said that the whole key to digital disruption and digital business is to acknowledge that the difference between business and digital business is the role that data and data assets play in your business. So we have strong agreement there. But on top of that, Ginni Rometty made the observation that 80% of the data that could be accessed and put to work in business has not yet been made available to the new activities, the new processes that are essential to changing the way customers are engaged, businesses operate, and overall change and disruption occurs. So her suggestion is that that 80%, that vast amount of data that could be applied that's not being tapped, is embedded deep within the incumbents. And so the core argument from IBM is that the incumbent companies, not the digital natives, not the startups, but the incumbent companies, are poised to have a significant role in disrupting how markets operate, because of the value of their data that hasn't currently been put to work and made available to new types of work. That was the thesis that we heard this week, and that's what we're going to talk about today. Are the incumbents really going to strike back? So Dave Vellante, let me start with you. You were at Think, you heard the same type of argument. What did you walk away with? >> So when I first heard the term incumbent disruptors, I was very skeptical, and I still am. But I like the concept, and I like it a lot. So let me explain why I like it and why I think there are some real challenges. If I'm a large incumbent, a global 2,000 company, I'm not going to just roll over because the world is changing and software is eating my world. Rather, what I'm going to do is use my considerable assets to compete, and that includes my customers, my employees, my ecosystem, the partnerships that I have there, et cetera. The reason why I'm skeptical is because incumbents aren't organized around their data assets. Their data assets are stovepiped; they're all over the place.
And the skills to leverage that data value, monetize that data, understand the contribution that data makes toward monetization, those skills are limited. They're bespoke and they're very narrow. They're within lines of business or divisions. So there's a huge AI gap between the true digital business and an incumbent business. Now, I don't think all is lost. I think a lot of strategies can work, from M&A to transformation projects, joint ventures, spin-offs. IBM gave some examples. They put up Verizon and American Airlines; I don't see them yet as the incumbent disruptors. But then there were other examples: IBM and Maersk doing some very interesting and disruptive things, Royal Bank of Canada doing some pretty interesting things. >> But in a joint venture form, Dave, to your point, they specifically set up a joint venture that would be organized around this data, didn't they? >> Yes, and that's really the point I'm trying to make. All is not lost. There are certain things that you can do, many things that you can do as an incumbent. And it's really game on for the next wave of innovation. >> So we agree as a general principle that data is really important, David Floyer. And that's been our thesis for quite some time. But Ginni Rometty, my good friend Ginni Rometty, put something out there: that 80% of the data that could be applied to disruption, better customer engagement, better operations, new markets, is not being utilized. What do we think about that? Is that number real? >> If you look at the data inside any organization, there's a lot of structured data. And that has a better ability to move through an organization. Equally, there's a huge amount of unstructured data that goes in emails. It goes in voicemails, it goes in shared documents. It goes in diagrams, PowerPoints, et cetera, and that also is data which is very much locked up, in the way that Dave Vellante was talking about, in a particular process or in a particular area. So is there a large amount of data that could be used inside an organization? Is it private, is it theirs? Yes, there is. The question is, how do you tap that data? How do you organize around that data to release it? >> So this is kind of a chicken-and-egg kind of a problem. Neil Raden, I'm going to turn to you. When we think about this chicken-and-egg problem, the question is, do we organize in anticipation of creating these assets? Do we establish new processes in anticipation of creating these data assets? Or do we create the data assets first and then re-institutionalize the work? And the reason why it's a chicken-and-egg kind of problem is because it takes an enormous amount of leadership will to affect the way a business works before the asset's in place. But it's unclear that we're going to get the asset that we want unless we effect the reorganization and re-institutionalization. Neil, is it going to be the chicken? Is it going to be the egg? Or is this one of the biggest problems that these guys are going to have? >> Well, I'm a little skeptical about this 80% number. I need some convincing before I comment on that. But I would rather see, when David mentioned the PowerPoint slides or email or that sort of thing, I would rather see that information curated by the application itself, rather than dragged out as raw data and reinterpreted in something else. I think that's very dangerous. I think we saw that in data warehousing.
(mumbling) But when you look at building data lakes, you throw all this stuff into a data lake, and then after the fact, somebody has to say, "Well, what does this data mean?" So I find it kind of a problem. >> So Jim Kobielus, a couple weeks ago Microsoft actually introduced a technology, or a toolkit, that could in fact be applied to move this kind of advanced processing, for dragging value out of a PowerPoint or a Word document or something else, close and proximate to the application. I mean, what Neil just suggested I think is a very, very good point. Are we going to see these kinds of new technologies directly embedded within applications to help users narrowly, but businesses more broadly, lift that information out of these applications so it can be freed up for other uses? >> I think, yeah, on some level, Peter, this is a topic called dark data. It's been discussed in data management circles for a long time. The vast majority, I think 75 to 80% is the number that I see in the research, is locked up in the sense that it's not searchable, it's not easily discoverable. It's not mashupable, I'm making up a word. The term mashup hasn't been used in years, but I think it's a good one. What it's all about is, if we want to make the most out of our incumbent's data, then we need to give the business, the business people, the tools to find the data where it is, to mash it up into new forms and analytics and so forth, in order to monetize it and sell it, make money off of it. So there are a wide range of data discovery and other tools that support a fairly self-service combination and composition of composite data objects. I don't know, however, that the culture of monetizing existing datasets and pulling dark data into productized forms has taken root in any organization anywhere. I think that's just something that consultants talk about as something that, gee, should be done, but I don't think it's happening in the real world. >> And I think you're probably correct about that, but I still think Neil raised a great point. And I would expect, and I think we all believe, that increasingly this is not going to come as a result of massive changes in the adoption of new data science-like practices everywhere, but as an embedding of these technologies, machine learning algorithms, approaches to finding patterns within application data, in the applications themselves, which is exactly what Neil was saying. So I think that what we're going to see, and I want some validation from you guys about this, is increasingly tools being used by application providers to reveal data that's in applications, and not open-source, independent toolchains that then, ex post facto, get applied to all kinds of different data sources in an attempt for the organization to pull the stuff out. David Floyer, what do you think? >> I agree with you. I think there's a great opportunity for the IT industry in this area to put together solutions which can go and fit in. On the basis of existing applications, there's a huge amount of potential, for example, for ERP systems to link in with IoT systems and provide data across an organization. Rather than designing your own IoT system, I think people are going to buy in pre-made ones. They're going to put the devices in, the data's going to come in, and the AI work will be done as part of that, as part of implementing that.
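A hedged sketch of what making "dark data" searchable and discoverable can look like at its simplest, assuming plain text documents sitting in a shared folder: walk the directory, tokenize each file, and build a tiny inverted index so buried documents can be found by term. The folder path and query term are hypothetical; David picks his point back up right after it.

```typescript
// Make "dark" documents discoverable: build a small inverted index
// (term -> set of file paths) over a folder of text files.
import * as fs from 'fs';
import * as path from 'path';

type Index = Map<string, Set<string>>;

function buildIndex(dir: string): Index {
  const index: Index = new Map();
  for (const name of fs.readdirSync(dir)) {
    const file = path.join(dir, name);
    if (!fs.statSync(file).isFile()) continue;
    const text = fs.readFileSync(file, 'utf8').toLowerCase();
    for (const term of text.split(/\W+/).filter((t) => t.length > 2)) {
      if (!index.has(term)) index.set(term, new Set());
      index.get(term)!.add(file);
    }
  }
  return index;
}

// Which buried documents mention a given term?
const index = buildIndex('./shared-documents'); // hypothetical folder
console.log([...(index.get('contract') ?? [])]);
```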
And right across the board, there is tremendous opportunity to improve the applications that currently exist, or to put in new versions of applications, to address this question of data sharing across an organization. >> Yeah, I think that's going to be a big piece of what happens. And it also says, Neil Raden, something about whether or not enormous machine learning deities in the sky, some of which might start with the letter W, are going to be the best and only way to unlock this data. Or is this going to be something that's increasingly distributed closer to applications, less invasive and disruptive to people, more invasive and disruptive to the applications and the systems that are in place? What do you think, Neil? Is that a better way of thinking about this? >> Yeah, let me give you an example. Data science the way it's been practiced is a mess. You have one person who's trying to find the data, trying to understand the data, doing feature selection, designing experiments, doing runs, and so forth, coming up with formulas and then putting them in the cluster with funny names so they can try to remember which one was which. And now what you have are a number of software companies who've come up with brilliant ways of managing that process, of really helping the data scientist create a work process in curating the data and so forth. So if you want to know something about this particular model, you don't have to go to the person and say, "Why did you do that model? What exactly were you thinking?" That information would be available right there in the workbench. And I think that's a good model for, frankly, everything. >> So let's-- >> Development pipeline toolkits. That's a hot theme. >> Yeah, it's a very hot theme. But Jim, I don't think you think this, but I'm going to test it. I don't think we're going to see AI pipeline toolkits be immediately accessed by your average end user who's putting together a contract, so that that data is automatically ingested and munched by some AI pipeline. This is going to happen in an application. So the person's going to continue to do their work, and then the tooling will or will not grab that information and combine it with other things through the application itself into the pipeline. Have we got that right? >> Yeah, but I think this is all being... everything you described is being embedded in applications that are making calls to back-end cloud services that have themselves been built by data scientists and exposed through REST APIs. Steve, Peter, everything you're describing is coming to applications fairly rapidly. >> I think that's a good point, but I want to test it. I want to test that. So Ralph Finos, you've been paying a lot of attention during reporting season to what some of the big guys are saying on some of their calls and in some of their public statements. One company in particular, Oracle, has been finessing a transformation, shall we say? What are they saying about how this is going, as we think about their customer base, the transformation of their customer base, and the degree to which applications are or are not playing a role in those transformations? >> Yeah, I think in their last earnings call a couple days ago, the point that they were making around the decline and the-- >> Again, this is Oracle. So in Oracle's last earnings call, yeah. >> Yeah, I'm sorry, yeah.
The decline in the revenue growth rate in the public cloud, the SaaS end of their business, was really a function of a slowdown as the original acquisitions they made, to kind of show up as being a transformative cloud vendor, are basically beginning to run out of gas. And I think if you're looking at marketing applications and sales-related applications and content types of applications, those are kind of hitting a natural high in growth. And I think what they were saying is that from a migration perspective on ERP, that's going to take a while to get done. They were saying something like 10 or 15% of their customer base had just begun doing some sort of migration. And that's data around ERP and those kinds of applications. So it's a long slog ahead of them, but I'd rather be in their shoes, I think, for the long run, than trying to jazz up in the near term some kind of pseudo-SaaS cloud growth based on acquisition and low-hanging fruit. >> Yeah, because they have a public cloud, right? I mean, at least they're in the game. >> Yeah, and they have to show they're in the game. >> Yeah, and specifically they're talking about their applications as clouds themselves. So they're not just saying here's a set of resources that you can build to. They're saying here's a set of SaaS-based applications that you can build around. >> Dave: Right. Go ahead, Ralph, sorry. >> Yeah, yeah. And I think the notion there is the migration to their ERP and their systems-of-record applications; they're saying this is going to take a long time for people to do that migration, because of complexity in process. >> So the last point, or Dave Vellante, did you have a point you wanted to make before I jump into a new thought here? >> I just want to compare and contrast IBM and Oracle. They have public clouds, they have SaaS. Many others don't. I think this is a major point of differentiation. >> Alright, so we've talked about whether or not this notion of data as a source of value is important, and we agree it is. We still don't know whether or not 80% is the right number, but it is some large number that's currently not being utilized and applied to work differently than the data currently is. And that likely creates some significant opportunities for transformation. Do we ultimately think that the incumbents, and again, I mentioned the chicken-and-egg problem... Is this going to be a test of whether or not the incumbents are going to be around in 10 years: the degree to which they enact the types of transformation we talked about? Dave Vellante, you said you were skeptical. You heard the story. We've had the conversation. Will incumbents who do this in fact be in a better position? >> Well, incumbents that do take action absolutely will be in a better position. But I think that's the real question. I personally believe that every industry is going to get disrupted by digital, and I think a lot of companies are not prepared for this and are going to be in deep trouble. >> Alright, so one more thought, because we're talking about industries overall. There are so many elements we haven't gotten to, but there's one thing I absolutely want to talk about: specifically, the difference between B2C and B2B companies. Clearly the B2C industries have been disrupted, many of them pretty significantly, over the last few years.
Not too long ago, I had multiple not-necessarily-good memories of running the aisles of Toys R Us sometime after 10 o'clock at night, right around December 24th. I can't do that anymore, or I soon won't be able to, and it's not because my kids are grown. So B2C industries seem to have moved faster, because the digital natives are able to take advantage of the fact that a lot of these B2C industries did not have direct and strong relationships with those customers. I would posit that a lot of the B2B industries are really where the action's going to take place. And the way I would think about it, and David Floyer, I'll turn to you first, is that in the B2C world, it's new markets and new ways of doing things where the disruption's going to take place, so more of a substitution as opposed to a churn. But in the B2B markets, the disruption is about greater efficiencies, greater automation, greater engagement with existing customers, as well as finding new businesses and opportunities. What do you think about that? >> I think the B2B market is much more stable. Relationships, business relationships, are very, very important. They take a long time to change. >> Peter: But much of that isn't digital. >> A lot of that is not digital. I agree with that. However, I think that the underlying change that's happening is one of automation. B2B companies are struggling to put into place automation, with robots, automation everywhere. What you see, for example, in Amazon is a dedication to automation, to making things more efficient. And that's, to me, the biggest challenge: owning up to the fact that they have to change their automation and get themselves far more efficient. And if they don't succeed in doing that, then their ability to survive, or their likelihood of being taken over in a reverse takeover, becomes higher and higher. So how you go about that huge increase in automation that is needed to survive is, I think, the biggest question for B2B players. >> And when we think about automation, David Floyer, we're not talking about the manufacturing arms, or not only talking about the manufacturing arms. We're talking about a lot of new software automation. Dave Vellante, Jim Kobielus, RPA is kind of a new thing. Dave, we saw some interesting things at Think. Bring us up to speed quickly on what the community at Think was talking about with RPA. >> Well, I tell you, there were a lot of people in financial services, which is IBM's stronghold. And they're using software robots to automate a lot of the back-end stuff that humans were doing. That's a major, major use case. I would say 25 to 30% of the financial services organizations that I talked to had active RPA projects ongoing at the moment. I don't know. Jim, what are your thoughts? >> Yeah, I think back-end automation is where B2B disruption is happening. The organizations that are able to automate more of their back end, digitize more of their back-end functions and accelerate them and improve the throughput of transactions, are those that will clean up. I think for the B2C space, it's the front-end automation, the digitalization of the engagement channels.
But RPA is essentially a key that's unlocking back-end automation for everybody, because it allows more of the front-end business analysts, and those who are not traditionally BPM, business process re-engineering, professionals, to begin to take standard administrative processes and automate them from, as it were, the outside in, in a greater way. So I think RPA is a secret key for that. I think we'll see some of the more disruptive organizations and businesses take RPA and use it to essentially reverse-engineer, as it were, existing processes, but in an automated fashion, and drive that improvement in the back end with AI. >> I just love the term software robots. I think that so strongly evokes what's going to happen here. >> If I could add, I think there's a huge need to simplify that space. The other thing I witnessed at IBM Think is that it's still pretty complicated. It's still a heavy lift. There's a big services component to this, which is probably why IBM loves it. But there's a massive market, I think, to simplify the adoption of RPA. >> I completely agree. We have to open the aperture as well. Again, the goal is not to train people on new things, new data science, new automation stuff, but to provide tools and increasingly embed those tools into stuff that people are already using, so that the disruption and the changes happen more as a consequence of continuing to do what the people do. Alright, so let's hit the action items, guys. It's been a great conversation. Again, we haven't talked about GDPR. We haven't talked about a wide array of different factors that are going to be an issue; those are things we're going to talk about in the future. But on the narrow issue of can the incumbents strike back: Neil Raden, let's start with you. Neil Raden, action item. >> I've been saying since 1975 that I should be hanging around with a better class of people, but I do spend a lot of time in the insurance industry. And I have been picking up a consensus that in the next five to 10 years, there will no longer be underwriters or claims adjusters. That business is ready for massive, massive change. >> And those are disruptors, largely. Jim Kobielus, action item. >> Action item: in terms of business disruption, just don't imagine that because you were the incumbent in the past era in some solution category that's declining, that automatically makes your data fit for seizing opportunities in the future. As we've learned from Blockbuster Video, the fact that they had all this customer data didn't give them any defenses against Netflix coming along and cleaning their clock, putting them out of business. So the next generation of disruptors will not have any legacy data to work from, and they'll be able to work miracles because they made a strategic bet on some front-end digital channel that made all the difference. >> Ralph Finos, action item. >> Yeah, I think there's a notion here of a siege mentality. The incumbents are inside the castle walls, and the disruptors are outside the castle walls. And sometimes the disruptors, you know, scale the walls. Sometimes they don't. But I think being inside the walls is a tougher place to be in the long run. >> Dave Vellante, action item. >> I want to pick up on something Neil said.
I think it's alluring for some of these industries, like insurance and financial services and healthcare, even parts of government, that really haven't been disrupted in a huge way yet, to say, "Well, I'll wait and see what happens." I think that's a huge mistake. I think you have to start immediately thinking about strategies, particularly around your data, as we talked about earlier. Maybe it's M&A, maybe it's joint ventures, maybe it's spinning out new companies. But the time when you could afford to wait is past; you should be acting. >> David Floyer, action item. >> I think that it's easier to focus on something that you can actually do. So my action item is that the focus of most B2B companies should be looking at all of their processes and incrementally automating them, taking out the people cost and the other costs, automating those processes as much as possible. That, in my opinion, is the most likely path to staying in a position where you can continue to be competitive. Without that focus, it's likely that you're going to be disrupted. >> Alright. So the one thing I'll say about that, David, is when you say people cost, I think you mean the administrative cost associated with people. >> And people doing things, automating jobs. >> Alright, so we have been talking here in today's Wikibon Action Item about the question, will the incumbents be able to strike back? The argument we heard at IBM Think this past week, and this is the third week of March, was that data is an asset that can be applied to significantly disrupt industries, and that incumbents have a lot of data that hasn't been brought into play in the disruptive flow. And IBM's argument is that we're going to see a lot of incumbents start putting more of their data assets into play. And that's going to have a significant impact ultimately on industry structure, customer engagement, and the nature of the products and services that are available over the course of the next decade. We generally agree. We might nitpick about whether it's 80% or 60%. But in general, the observation is that an enormous amount of data that exists within a large company, related to how it conducts business, is siloed and locked away, used once and then left dark, not made available for derivative uses. Unlocking it could, in fact, lead to significant, consequential improvements in how a business's transaction costs are ultimately distributed. Automation's going to be a big deal. David Floyer's mentioned this in the past. I'm also of the opinion that there are going to be a lot of new opportunities for revenue enhancement and products. I think that's going to be as big, but it's very clear that to start, it makes an enormous amount of sense to take a look at where your existing transaction costs are, where existing information asymmetries exist, and see what you can do to unlock that data, make it available to other processes, and start to do a better job of automating, locally and specifically, those activities. And we generally ask our clients to take a look at: what is your value proposition? What are the outcomes that are necessary for that value proposition? What activities are most important to creating those outcomes? And then find the activities that you can better automate by doing a better job of unlocking new data. In general, our belief is that there's a significant difference between B2C and B2B businesses. Why?
Because a lot of B2C businesses never really had that direct connection, and therefore never really had as much of the market and customer data about what was going on. A lot of point-of-sale data perhaps, but not a lot of other types of data. And then the disruptors stepped in, created direct relationships, gathered that data, and were able to rapidly innovate products and services that served consumers differently. Where a lot of the new opportunity exists is in the B2B world. And here's where the real incumbents are going to start flexing their muscles over the course of the next decade, as they find those opportunities to engage differently, to automate existing practices and activities, change their cost model, and introduce new approaches to operating that are cloud-based, blockchain-based, data-based, and find new ways to utilize their people. If there's one big caution we have about this, it's this: ultimately, the tooling is not broadly mature. The people necessary to build a lot of these tools are increasingly moving into the traditional disruptors, the legacy disruptors if you will: AWS, Netflix, Microsoft, companies more along those lines. That talent is still very dear in the industry, and it's going to require an enormous effort to bring along the new types of technologies that can in fact liberate some of this data. We looked at things like RPA, robotic process automation. We look to the big application providers to increasingly imbue their products and services with some of these new technologies. And ultimately, paradoxically perhaps, we look for the incumbent disruptors to find ways to disrupt without disrupting their own employees and customers. So, embedding more of these new technologies in an ethical way directly into the systems and applications that serve people, so that people face minimal changes in learning new tricks, because the systems themselves have gotten much more automated and are able to learn, evolve, and adjust much more rapidly, in a way that still corresponds to the way people do work. So our action item: any company in the B2B space that is waiting for data to emerge as an asset in its business before it does all the re-institutionalizing of work, the reorganizing of work, and the new types of investment, is not going to be in business in 10 years. Or it's going to have a very tough time of it. The big challenge for the board and the CIO, and it has not often been done successfully in the past, is to start the process today, without necessarily having access to the data, of thinking through how the work's going to change and how their organization is going to have to be set up. This is not business process re-engineering. This is organizing around the future value of data, the options that data can create, and employing that approach to start doing local automation, serve customers, and change the way partnerships work, and ultimately to plan out, for an extended period of time, how their digital business is going to evolve. Once again, I want to thank David Floyer here in the studio with me. Neil Raden, Dave Vellante, Ralph Finos, Jim Kobielus remote. Thanks very much guys. For all of our clients, once again, this has been a Wikibon Action Item. We'll talk to you again. Thanks for watching. (funky electronic music)
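To make the panel's software-robot idea concrete, here is a minimal sketch of the kind of backend automation described above: a robot that polls a folder of exported invoice records, applies a simple routing rule, and posts each valid record to a backend service. Everything specific here is an assumption for illustration, the folder layout, the endpoint URL, and the record fields are all hypothetical, and real RPA products typically drive existing application screens rather than calling an API directly.

```python
import json
import time
from pathlib import Path
from urllib import request

INBOX = Path("invoices/inbox")        # hypothetical folder an ERP exports to
DONE = Path("invoices/processed")
BACKEND_URL = "http://localhost:8080/api/invoices"  # hypothetical endpoint

def post_invoice(record: dict) -> None:
    """Replay one record against the backend, as a human clerk would."""
    body = json.dumps(record).encode("utf-8")
    req = request.Request(BACKEND_URL, data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        resp.read()

def run_once() -> int:
    """One polling pass: validate each new file, post it, archive it."""
    INBOX.mkdir(parents=True, exist_ok=True)
    DONE.mkdir(parents=True, exist_ok=True)
    handled = 0
    for path in sorted(INBOX.glob("*.json")):
        record = json.loads(path.read_text())
        # The rule a business analyst would codify: route, don't reason.
        if record.get("amount", 0) <= 0 or "vendor" not in record:
            continue  # leave malformed records for a human to review
        post_invoice(record)
        path.rename(DONE / path.name)
        handled += 1
    return handled

if __name__ == "__main__":
    while True:            # the robot works around the clock
        run_once()
        time.sleep(60)     # poll once a minute
```

The point of the sketch is the "outside-in" quality the panel highlights: the rule lives next to the process, not inside the backend system, which is what lets non-BPM professionals automate it.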
ENTITIES
Entity | Category | Confidence |
---|---|---|
David | PERSON | 0.99+ |
Jim | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
David Floyer | PERSON | 0.99+ |
Ginni Rometty | PERSON | 0.99+ |
Verizon | ORGANIZATION | 0.99+ |
Jim Kobielus | PERSON | 0.99+ |
Peter Burris | PERSON | 0.99+ |
Neil Raden | PERSON | 0.99+ |
Neil | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Steve | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Netflix | ORGANIZATION | 0.99+ |
Ralph | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Peter | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
75 | QUANTITY | 0.99+ |
American Airlines | ORGANIZATION | 0.99+ |
Ralph Finos | PERSON | 0.99+ |
March 23rd, 2018 | DATE | 0.99+ |
25 | QUANTITY | 0.99+ |
John Furrier | PERSON | 0.99+ |
10 | QUANTITY | 0.99+ |
Toys R Us | ORGANIZATION | 0.99+ |
80% | QUANTITY | 0.99+ |
60% | QUANTITY | 0.99+ |
Think | ORGANIZATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
15% | QUANTITY | 0.99+ |
Ginni | PERSON | 0.99+ |
60 | QUANTITY | 0.99+ |
PowerPoint | TITLE | 0.99+ |
10 years | QUANTITY | 0.99+ |
1975 | DATE | 0.99+ |
Word | TITLE | 0.99+ |
Royal Bank of Canada | ORGANIZATION | 0.99+ |
first | QUANTITY | 0.99+ |
today | DATE | 0.98+ |
this week | DATE | 0.98+ |
Wikibon Action Item | De-risking Digital Business | March 2018
>> Hi I'm Peter Burris. Welcome to another Wikibon Action Item. (upbeat music) We're once again broadcasting from theCUBE's beautiful Palo Alto, California studio. I'm joined here in the studio by George Gilbert and David Floyer. And then remotely, we have Jim Kobielus, David Vellante, Neil Raden and Ralph Finos. Hi guys. >> Hey. >> Hi >> How you all doing? >> This is a great, great group of people to talk about the topic we're going to talk about, guys. We're going to talk about the notion of de-risking digital business. Now, the reason why this becomes interesting is that the Wikibon perspective for quite some time has been that the difference between business and digital business is the role that data assets play in a digital business. Now, think about what that means. Every business institutionalizes its work around what it regards as its most important assets. A bottling company, for example, organizes around the bottling plant. A financial services company organizes around the regulatory impacts or limitations on how it shares information, and what is regarded as fair use of data and other resources and assets. The same thing exists in a digital business. There's a difference between, say, Sears and Walmart. Walmart makes use of data differently than Sears. And the specific assets that are employed have had a significant impact on how the retail business was structured. Along comes Amazon, which is even deeper in the use of data as a basis for how it conducts its business, and Amazon is institutionalizing work in quite different ways and has been incredibly successful. We could go on and on and on with a number of different examples of this, and we'll get into that. But what it means ultimately is that the tie between data and what is regarded as valuable in the business is becoming increasingly clear, even if it's not perfect. And so traditional approaches to de-risking data, through backup and restore, now need to be re-thought, so that it's not just de-risking the data, it's de-risking the data assets. And since those data assets are so central to the business operations of many of these digital businesses, that means de-risking the whole business. So, David Vellante, give us a starting point. How should folks think about this different approach to envisioning business, and digital business, and the notion of risk? >> Okay thanks Peter, I mean I agree with a lot of what you just said and I want to pick up on that. I see the future of digital business as really built around data, sort of agreeing with you and building on what you just said. Really, organizations are putting data at the core, and increasingly I believe that organizations that have traditionally relied on human expertise as the primary differentiator will be disrupted by companies where data is the fundamental value driver, and I think there are some examples of that and I'm sure we'll talk about it. And in this new world, humans have expertise that leverages the organization's data model and creates value from that data with augmented machine intelligence. I'm not crazy about the term artificial intelligence. And you hear a lot about data-driven companies, and I think such companies are going to have a technology foundation that is increasingly described as autonomous, aware, anticipatory, and, importantly in the context of today's discussion, self-healing: able to withstand failures and recover very quickly.
So de-risking a digital business is going to require new ways of thinking about data protection and security and privacy. Specifically as it relates to data protection, I think it's going to be a fundamental component of the so-called data-driven company's technology fabric. This can be designed into applications, into data stores, into file systems, into middleware, and into infrastructure, as code. And many technology companies are going to try to attack this problem from a lot of different angles, trying to infuse machine intelligence into the hardware, software and automated processes. And the premise is that many companies will architect their technology foundations not as a set of remote cloud services that they're calling, but rather as a ubiquitous set of functional capabilities that largely mimic a range of human activities, including storing, backing up, and virtually instantaneous recovery from failure. >> So let me build on that. So what you're kind of saying, if I can summarize, and we'll get into whether or not it's human expertise or some other approach or notion of business. But you're saying that increasingly, patterns in the data are going to have absolutely consequential impacts on how a business ultimately behaves. We got that right? >> Yeah absolutely. And how you construct that data model, and provide access to the data model, is going to be a fundamental determinant of success. >> Neil Raden, does that mean that people are no longer important? >> Well no, no I wouldn't say that at all. I was talking with the head of a medical school a couple of weeks ago, and he said something that really resonated. He said that there are as many doctors who graduated at the bottom of their class as the top of their class. And I think that's true of organizations too. You know, 20 years ago I had the privilege of interviewing Peter Drucker for an hour, and he foresaw this 20 years ago. He said that people who run companies have traditionally had IT departments that provided operational data, but they needed to start to figure out how to get value from that data, and not only get value from that data but get value from data outside the company, not just internal data. So he kind of saw this big data thing happening 20 years ago. Unfortunately, he had a prejudice for senior executives. You know, he never really thought about any other people in an organization except the highest people. And I think what we're talking about here is really the whole organization. I have some concerns about the ability of organizations to really implement this without a lot of fumbles. I mean, it's fine to talk about the five digital giants, but there are a lot of companies out there where, you know, the bar isn't really that high for them to stay in business. And they just seem to get along. And I think if we're going to de-risk, we really need to help companies understand the whole process of transformation, not just the technology. >> Well, take us through it. What is this process of transformation, that includes the role of technology but is bigger than the role of technology? >> Well, it's like anything else, right? There has to be communication, there has to be some element of control, there has to be a lot of flexibility, and most importantly, I think there has to be acceptance, by the people who are going to be affected by it, that it is the right thing to do.
And I would say you start with assumptions. I call it assumption analysis; in other words, let's all get together and figure out what our assumptions are, and see if we can't line 'em up. Typically IT is not good at this. So I think it's going to require the help of a lot of practitioners who can guide them. >> So Dave Vellante, reconcile one point that you made. I want to come back to this notion of how we're moving from businesses built on expertise and people to businesses built on expertise resident as patterns in the data, or data models. Why is it that the most valuable companies in the world seem to be the ones that have the most real hardcore data scientists? Isn't that expertise and people? >> Yeah it is, and I think it's worth pointing out. Look, the stock market is volatile, but right now the top five companies, Apple, Amazon, Google, Facebook and Microsoft, in terms of market cap, account for about $3.5 trillion, and there's a big distance between them and the rest; they've clearly surpassed the big banks and the oil companies. Now again, that could change, but I believe that it's because they are data-driven. So-called data-driven. Does that mean they don't need humans? No, but human expertise surrounds the data, as opposed to most companies, where human expertise is at the center and the data lives in silos, and I think it's very hard to protect data, and leverage data, that lives in silos. >> Yes, so here's where I'll take exception to that, Dave. And I want to get everybody to build on top of this just very quickly. I think that human expertise has surrounded, in other businesses, the buildings, or the bottling plant, or the wealth management practice, or the platoon. So I think that the organization of assets has always been the determining factor in how a business behaves, and we institutionalized work, in other words where we put people, based on the business' understanding of assets. Do you disagree with that? Are we wrong in that regard? I think data scientists are an example of reinstitutionalizing work around a very core asset, in this case data. >> Yeah, you're saying that the most valuable asset is shifting from some of those physical assets, the bottling plant et cetera, to data. >> Yeah we are, we are. Absolutely. Alright, David Floyer. >> Neil: I'd like to come in. >> Panelist: I agree with that too. >> Okay, go ahead Neil. >> I'd like to give an example from the news. Cigna's acquisition of Express Scripts for $67 billion. Who the hell is Cigna, right? Connecticut General was just a sleepy life insurance company, and INA was a second-tier property and casualty company. They merged a long time ago, they got into health insurance, and suddenly, who's Express Scripts? I mean, that's a company that nobody ever even heard of. They're a pharmacy benefit manager; what is that? They're an information management company, period. That's all they do. >> David Floyer, what does this mean from a technology standpoint? >> So I wanted to emphasize one thing that evolution has always taught us: that you have to be able to come from where you are. You have to be able to evolve from where you are and take the assets that you have. And the assets that people have are their current systems of record, other things like that. They must be able to evolve into the future to better utilize what those systems are. And the other thing I would like to say-- >> Let me give you an example just to interrupt you, because this is a very important point.
One of the primary reasons why the telecommunications companies, whom so many people, so many analysts, believed had this fundamental advantage because so much information's flowing through them, never capitalized on it is this: when you're writing assets off over 30 years, that kind of locks you into an operational mode, doesn't it? >> Exactly. And the other thing I want to emphasize is that the most important thing is sources of data, not the data itself. So for example, real-time data is very, very important. So what is the source of your real-time data? If you've given that away to Google or your IoT vendor, you have made a fundamental strategic mistake. So understanding the sources of data, and making sure that you have access to that data, is going to enable you to build the sorts of processes that data digitization requires. >> So let's turn that concept into kind of a Geoffrey Moore kind of strategy bromide. At the end of the day, you look at your value proposition, and then what activities are central to that value proposition, and what data is thrown off by those activities and what data's required by those activities. >> Right, both internal-- >> We got that right? >> Yeah. Both internal and external data. What are those sources that you require? Yes, that's exactly right. And then you need to put together a plan which takes you from where you are, with the sources of data you have, and then focuses on how you can use that data to either improve revenue or to reduce costs, or a combination of those two things, as a series of specific exercises. And in particular, using that data to automate in real-time as much as possible. That to me is the fundamental requirement to actually be able to do this and make money from it. If you look at every example, it's all real-time. It's real-time bidding at Google, it's real-time allocation of resources by Uber. That is what people need to focus on. So it's those steps, practical steps, that organizations need to take, that I think we should be giving a lot of focus to. >> You mention Uber. David Vellante, we're not just talking, once again, about the Uberization of things, are we? Or is that what we mean here? So, what we'll do is we'll turn the conversation very quickly over to you, George. There exist today a number of different domains where we're starting to see a new emphasis on how we start pricing some of this risk. Because when we think about de-risking as it relates to data, give us an example of one. >> Well, we were talking earlier: in financial services, risk itself is priced just the way time is priced, in terms of what premium you'll pay in terms of interest rates. But there's also something softer that's come into much more widely-held consciousness recently, which is reputational risk. Which is different from operational risk. Reputational risk is about: are you a trusted steward for data? Some of that could be personal information, and a use case that's very prominent now with the European GDPR regulation is, you know, if I ask you as a consumer or an individual to erase my data, can you say with extreme confidence that you have? That's just one example. >> Well I'll give you a specific number on that. We've mentioned it here on Action Item before. I had a conversation with a Chief Privacy Officer a few months ago who told me that they had priced out what the fines to Equifax would have been had the problem occurred after GDPR fines were enacted. It was $160 billion, was the estimate.
There are not a lot of companies on the planet that could deal with a $160 billion liability. Like that. >> Okay, so we have a price now for something that might have been kind of mushy before. And the notion of trust hasn't really changed over time; what's changed is the technical implementations that support it. And in the old world, with systems of record, we basically collected from our operational applications as much data as we could, put it in the data warehouse and its data mart satellites, and we tried to govern it within that perimeter. But now we know that data basically originates and goes just about anywhere. There's no well-defined perimeter. It's much more porous, far more distributed. You might think of it as a distributed data fabric, and the only way you can be a trusted steward of that is if you can now, across the silos, without trying to centralize all the data that's in or across them, enforce who's allowed to access it, what they're allowed to do, and audit who's done what to what type of data, when and where. And then there's a variety of approaches. Just to pick two: one is discovery-oriented, to figure out what's going on with the data estate using machine learning; Alation is an example. And then there's another, which is where you try and get everyone to plug into what's essentially a new system catalog. That catalog is not inside a DBMS, but it acts like the fabric for your data fabric. >> That's an example of one of the ways of coming at this. But Dave Vellante, coming back to you for a second. When we think about the conversation, there's been a lot of presumption, a lot of bromides. Analysts like to say, don't get Uberized. We're not just talking about getting Uberized. We're talking about something a little bit different, aren't we? >> Well yeah, absolutely. I think Uber's going to get Uberized, personally. But I think there's a lot of evidence; I mentioned the big five, but if you look at Spotify, Waze, Airbnb, yes Uber, yes Twitter, Netflix, Bitcoin is an example, 23andMe. These are all examples of companies that, I'll go back to what I said before, are putting data at the core and building human expertise around that core to leverage it. And I think it's easy for some companies to sit back and say, "Well, I'm going to wait and see what happens." But to me anyway, there's a big gap between kind of the haves and the have-nots. And I think that gap is around applying machine intelligence to data and applying cloud economics: zero marginal cost economics, the API economy, an always-on sort of mentality, et cetera, et cetera. And that's what the economy, in my view anyway, is going to look like in the future. >> So let me put out a challenge. Jim, I'm going to come to you in a second, very quickly, on some of the things that start looking like data assets. But today, when we talk about data protection, we're talking simply about a whole bunch of applications and a whole bunch of devices spinning data off so that we have it at a third site, and then, if there's a catastrophe, large or small, being able to restore it, often in hours or days. So we're talking about an improvement on RPO and RTO. But when we talk about data assets, and I'm going to come to you in a second with that, David Floyer, we're talking about not only the data, the bits.
We're talking about the relationships, and the organization, and the metadata as being key elements of that. So David, I'm sorry, Jim Kobielus, just really quickly, thirty seconds: models. What do they look like? What does the new nature of some of these assets look like? >> Well, the new nature of these assets is the machine learning models that are driving so many business processes right now. And so really the core assets there are the data, obviously, from which they are developed and on which they are trained, but also very much the knowledge of the data scientists and engineers who build and tune this stuff. And so really, what you need to do is protect that knowledge, and grow that knowledge base of data science professionals in your organization in a way that builds on it. And hopefully you keep the smartest people in house. And they can encode more of their knowledge in automated programs to manage the entire pipeline of development. >> We're not talking about files. We're not even talking about databases, are we, David Floyer? We're talking about something different. Algorithms and models: are today's technologies really set up to do a good job of protecting the full organization of those data assets? >> I would say that they're not even being thought about yet. And going back to what Jim was saying, those data scientists are the only people who understand that, in the same way as in the year 2000, the COBOL programmers were the only people who understood what was going on inside those applications. And we as an industry have to allow organizations to protect the assets inside their applications, and to use AI, if you like, to actually understand what is in those applications and how they are working. And an incredibly important piece of de-risking is ensuring that you're not dependent on a few experts who could leave at any moment, in the same way as the COBOL programmers could have left.
The cost of moving data around an organization from inside to out, is crazy. >> So companies that keep data in place, or technologies to keep data in place, are going to have an advantage. >> Much, much, much greater advantage. Sure, there must be backups somewhere. But you need to keep the working copies of data where they are because it's the real-time access, usually that's important. So if it originates in the cloud, keep it in the cloud. If it originates in a data-provider, on another cloud, that's where you should keep it. If it originates on your premise, keep it where it originated. >> Unless you need to combine it. But that's a new origination point. >> Then you're taking subsets of that data and then combining that up for itself. So that would be my first point. So organizations are going to need to put together what George was talking about, this metadata of all the data, how it interconnects, how it's being used. The flow of data through the organization, it's amazing to me that when you go to an IT shop they cannot define for you how the data flows through that data center or that organization. That's the requirement that you have to have and AI is going to be part of that solution, of looking at all of the applications and the data and telling you where it's going and how it's working together. >> So the second thing would be companies that are able to build or conceive of networks as data. Will also have an advantage. And I think I'd add a third one. Companies that demonstrate perennial observations, a real understanding of the unbelievable change that's required you can't just say, oh Facebook wants this therefore everybody's going to want it. There's going to be a lot of push marketing that goes on at the technology side. Alright so let's get to some Action Items. David Vellante, I'll start with you. Action Item. >> Well the future's going to be one where systems see, they talk, they sense, they recognize, they control, they optimize. It may be tempting to say, you know what I'm going to wait, I'm going to sit back and wait to figure out how I'm going to close that machine intelligence gap. I think that's a mistake. I think you have to start now, and you have to start with your data model. >> George Gilbert, Action Item. >> I think you have to keep in mind the guardrails related to governance, and trust, when you're building applications on the new data fabric. And you can take the approach of a platform-oriented one where you're plugging into an API, like Apache Atlas, that Hortonworks is driving, or a discovery-oriented one as David was talking about which would be something like Alation, using machine learning. But if, let's say the use case starts out as an IOT, edge analytics and cloud inferencing, that data science pipeline itself has to now be part of this fabric. Including the output of the design time. Meaning the models themselves, so they can be managed. >> Excellent. Jim Kobielus, you've been pretty quiet but I know you've got a lot to offer. Action Item, Jim. >> I'll be very brief. What you need to do is protect your data science knowledge base. That's the way to de-risk this entire process. And that involves more than just a data catalog. You need a data science expertise registry within your distributed value chain. And you need to manage that as a very human asset that needs to grow. That is your number one asset going forward. >> Ralph Finos, you've also been pretty quiet. Action Item, Ralph. 
>> Yeah, I think you've got to be careful about what you're trying to get done. Whether it's, it depends on your industry, whether it's finance or whether it's the entertainment business, there are different requirements about data in those different environments. And you need to be cautious about that and you need leadership on the executive business side of things. The last thing in the world you want to do is depend on data scientists to figure this stuff out. >> And I'll give you the second to last answer or Action Item. Neil Raden, Action Item. >> I think there's been a lot of progress lately in creating tools for data scientists to be more efficient and they need to be, because the big digital giants are draining them from other companies. So that's very encouraging. But in general I think becoming a data-driven, a digital transformation company for most companies, is a big job and I think they need to it in piece parts because if they try to do it all at once they're going to be in trouble. >> Alright, so that's great conversation guys. Oh, David Floyer, Action Item. David's looking at me saying, ah what about me? David Floyer, Action Item. >> (laughing) So my Action Item comes from an Irish proverb. Which if you ask for directions they will always answer you, "I wouldn't start from here." So the Action Item that I have is, if somebody is coming in saying you have to re-do all of your applications and re-write them from scratch, and start in a completely different direction, that is going to be a 20-year job and you're not going to ever get it done. So you have to start from what you have. The digital assets that you have, and you have to focus on improving those with additional applications, additional data using that as the foundation for how you build that business with a clear long-term view. And if you look at some of the examples that were given early, particularly in the insurance industries, that's what they did. >> Thank you very much guys. So, let's do an overall Action Item. We've been talking today about the challenges of de-risking digital business which ties directly to the overall understanding of the role of data assets play in businesses and the technology's ability to move from just protecting data, restoring data, to actually restoring the relationships in the data, the structures of the data and very importantly the models that are resident in the data. This is going to be a significant journey. There's clear evidence that this is driving a new valuation within the business. Folks talk about data as the new oil. We don't necessarily see things that way because data, quite frankly, is a very very different kind of asset. The cost could be shared because it doesn't suffer the same limits on scarcity. So as a consequence, what has to happen is, you have to start with where you are. What is your current value proposition? And what data do you have in support of that value proposition? And then whiteboard it, clean slate it and say, what data would we like to have in support of the activities that we perform? Figure out what those gaps are. Find ways to get access to that data through piecemeal, piece-part investments. That provide a roadmap of priorities looking forward. Out of that will come a better understanding of the fundamental data assets that are being created. New models of how you engage customers. New models of how operations works in the shop floor. New models of how financial services are being employed and utilized. 
And use that as a basis for then starting to put forward plans for bringing technologies in, that are capable of not just supporting the data and protecting the data but protecting the overall organization of data in the form of these models, in the form of these relationships, so that the business can, as it creates these, as it throws off these new assets, treat them as the special resource that the business requires. Once that is in place, we'll start seeing businesses more successfully reorganize, reinstitutionalize the work around data, and it won't just be the big technology companies who have, who people call digital native, that are well down this path. I want to thank George Gilbert, David Floyer here in the studio with me. David Vellante, Ralph Finos, Neil Raden and Jim Kobelius on the phone. Thanks very much guys. Great conversation. And that's been another Wikibon Action Item. (upbeat music)
ENTITIES
Entity | Category | Confidence |
---|---|---|
Jim Kobielus | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
David Vellante | PERSON | 0.99+ |
David | PERSON | 0.99+ |
Apple | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
Microsoft | ORGANIZATION | 0.99+ |
Neil | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
Walmart | ORGANIZATION | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
David Floyer | PERSON | 0.99+ |
George Gilbert | PERSON | 0.99+ |
Jim Kobelius | PERSON | 0.99+ |
Peter Burris | PERSON | 0.99+ |
Jim | PERSON | 0.99+ |
Geoffrey Moore | PERSON | 0.99+ |
George | PERSON | 0.99+ |
Ralph Finos | PERSON | 0.99+ |
Neil Raden | PERSON | 0.99+ |
INA | ORGANIZATION | 0.99+ |
Equifax | ORGANIZATION | 0.99+ |
Sears | ORGANIZATION | 0.99+ |
Peter | PERSON | 0.99+ |
March 2018 | DATE | 0.99+ |
Uber | ORGANIZATION | 0.99+ |
TIBCO | ORGANIZATION | 0.99+ |
DISCO | ORGANIZATION | 0.99+ |
David Vallante | PERSON | 0.99+ |
$160 billion | QUANTITY | 0.99+ |
20-year | QUANTITY | 0.99+ |
30 years | QUANTITY | 0.99+ |
Ralph | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Netflix | ORGANIZATION | 0.99+ |
Peter Drucker | PERSON | 0.99+ |
Express Scripts | ORGANIZATION | 0.99+ |
Veritas | ORGANIZATION | 0.99+ |
David Foyer | PERSON | 0.99+ |
Veeam | ORGANIZATION | 0.99+ |
$67 billion | QUANTITY | 0.99+ |
Palo Alto, California | LOCATION | 0.99+ |
first point | QUANTITY | 0.99+ |
thirty seconds | QUANTITY | 0.99+ |
second | QUANTITY | 0.99+ |
Spotify | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
Connecticut General | ORGANIZATION | 0.99+ |
two things | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
about $3.5 trillion | QUANTITY | 0.99+ |
Hortonworks | ORGANIZATION | 0.99+ |
Cigna | ORGANIZATION | 0.99+ |
Both | QUANTITY | 0.99+ |
2000 | DATE | 0.99+ |
today | DATE | 0.99+ |
one | QUANTITY | 0.99+ |
Dylon Sears | ORGANIZATION | 0.98+ |
Peter Burris Big Data Research Presentation
(upbeat music) >> Announcer: Live from San Jose, it's theCUBE, presenting Big Data Silicon Valley, brought to you by SiliconANGLE Media and its ecosystem partner. >> What am I going to spend the next 15, 20 minutes or so talking about? I'm going to answer three things. Our research has gone deep into this, so: where is the big data community going, number one. Number two, how are we going to get there? And number three, what do the numbers say about where we are? So those are the three things. Now, since we all want to get out of here, I'm going to fly through some of these slides, but there's a lot of opportunity for additional conversation, because we're all about having conversations with the community. So let's start here. The first thing to know, when we think about where this is all going, is that it is inextricably bound up with digital transformation. Well, what is digital transformation? We've done a lot of research on this. It was Peter Drucker who famously said, many years ago, that the purpose of a business is to create and keep a customer. That's what a business is. Now, what's the difference between a business and a digital business? What's the difference between Sears Roebuck and Amazon? It's data. A digital business uses data as an asset to create and keep customers. It infuses data in operations differently, to create more automation. It infuses data in engagement differently, to catalyze superior customer experiences. It reformats and restructures its concept of value proposition and product, to move from a product to a services orientation. The role of data is the centerpiece of digital business transformation, and in many respects, where we're going is toward an understanding and appreciation of that. Now, we think there are a number of strategic capabilities that will have to be built out to make that possible. First off, we have to start thinking about what it means to put data to work. The whole notion of an asset is that it is something that can be applied to a productive activity. Data can be applied to a productive activity. Now, there are a lot of very interesting implications that we won't get into now, but essentially, if we're going to treat data as an asset and think about how we could put more data to work, we're going to focus on three core strategic capabilities to make that possible. One, we need to build a capability for collecting and capturing data. That's a lot of what IoT is about. It's a lot of what mobile computing is about. There are going to be a lot of implications around how to ethically and properly do some of those things, but a lot of that investment is about finding better and superior ways to capture data. Two, once we are able to capture that data, we have to turn it into value. That, in many respects, is the essence of big data: how we turn data into data assets, in the form of models, in the form of insights, in the form of any number of other approaches to thinking about how we're going to appropriate value out of data. But it's not enough just to create value and have it sit there as potential value. We have to turn it into kinetic value, to actually do work with it, and that is the last piece. We have to build new capabilities for how we're going to apply data to perform work better, to act based on data.
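Those three capabilities, capture data, turn it into value, and act on it, form a loop, and a toy version of the loop fits in a few lines. The sketch below is purely illustrative: the simulated sensor, the threshold "model", and the print-statement actuator are stand-ins assumed for the example, not a reference architecture.

```python
import random

def capture() -> float:
    """Capability 1: collect a reading (stand-in for an IoT sensor)."""
    return random.gauss(70.0, 5.0)   # e.g., a temperature in Fahrenheit

def build_model(history: list[float]) -> float:
    """Capability 2: turn captured data into a (trivial) data asset.
    Here the 'model' is just an alert threshold fitted to the data."""
    mean = sum(history) / len(history)
    var = sum((x - mean) ** 2 for x in history) / len(history)
    return mean + 2 * var ** 0.5     # flag readings ~2 sigma above the mean

def act(reading: float, threshold: float) -> None:
    """Capability 3: a system of agency acts on the model's output."""
    if reading > threshold:
        print(f"actuate: {reading:.1f} exceeds {threshold:.1f}, open vent")

if __name__ == "__main__":
    history = [capture() for _ in range(1000)]   # capture
    threshold = build_model(history)             # create the data asset
    for _ in range(10):
        act(capture(), threshold)                # put the asset to work
```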
Now, we've got a concept we're researching now that we call systems of agency, which is the idea that there are going to be a lot of new approaches, new systems, with a lot of intelligence and a lot of data, that act on behalf of the brand. I'm not going to spend a lot of time going into this, but remember that word, because I will come back to it. Systems of agency are about how you're going to apply data to perform work, with automation, augmentation, and actuation on behalf of your brand. Now, all this is going to happen against the backdrop of cloud optimization. I'll explain what we mean by that right now. Very importantly, how you create value out of data, how you create future options on the value of your data, is increasingly going to drive your technology choices. For the first 10 years of the cloud, the presumption was that all data was going to go to the cloud. We think a better way of thinking about it is to ask how the cloud experience is going to come to the data. We've done a lot of research on the cost of data movement, both in terms of the actual out-of-pocket costs and also the potential uncertainty, the transaction costs, etc., associated with data movement. And how we think about data movement is going to be one of the fundamental elements of how we think about the future of big data and how digital business works. I'll come to that in a bit. But our proposition is that, increasingly, we're going to see architectural approaches that focus on how we're going to move the cloud experience to the data. We've got this notion of true private cloud, which is effectively the idea of the cloud experience on or near premise. That doesn't diminish the role that the cloud's going to play in the industry, or say that Amazon and AWS and Microsoft Azure and all the other options are not important. They're crucially important, but it means we have to start thinking architecturally about how we're going to create value out of data, and recognize that we have to start envisioning how our organization and infrastructure are going to be set up so that we can use data where it needs to be, or where it's most valuable, and often that's close to the action. So if we think about that very quickly, because it's a backdrop for everything: increasingly, we're going to start talking about the idea of where the workload is going to go. Where is the workload going to be, against this kind of backdrop of the divorce of infrastructure? We believe, and our research pretty strongly shows, that a lot of workloads are going to go to true private cloud, but a lot of big data is moving into the cloud. This is a prediction we made a few years ago, and it's clearly happening and underway, and we'll get into what some of the implications are. So again, when we say that a lot of the big data elements, a lot of the process of creating value out of data, is going to move into the cloud, that doesn't mean that all the systems of agency that build on or rely on that data, the inference engines, etc., are also going to be in a public cloud. A lot of them are going to be distributed out to the edge, out to where the action needs to be, because of latency and other types of issues. This is a fundamental proposition, and I know I'm going fast, but hopefully I'm being clear. All right, so let's now get to the second part. This is kind of where the industry's going: data is an asset.
Invest in strategic business capabilities to create those data assets and appreciate the value of those assets, and utilize the cloud intelligently to generate and ensure increasing returns. So the next question is, well, how will we get there? Right now, not too far from here, Neil Raden, for example, was on the show floor yesterday. Neil made the observation that, as he wandered around, he only heard the word big data two or three times. The concept of big data is not dead. Whether the term is or is not is somebody else's decision. Our perspective, very simply, is that the notion is bifurcating. And it's bifurcating because we see different strategic imperatives happening at two different levels. On the one hand, we see infrastructure convergence: the idea that increasingly we have to think about how we're going to bring and federate data together, both from a systems and a data management standpoint. And on the other hand, we're going to see application specialization. That's going to have enormous implications over the next few years, if only because there just aren't enough people in the world who understand how to create value out of data. And there's going to be a lot of effort made over the next few years to find new ways to go from that one expertise group to billions of people, billions of devices. Those are the two dominant considerations in the industry right now: how can we converge data physically and logically, and, on the other hand, how can we liberate more of the smarts associated with this very, very powerful approach, so that more people get access to the capacities and the capabilities and the assets that are being generated by that process? Now, we've done at Wikibon, probably, I don't know, 18, 20, 23 predictions overall on the changes being wrought by digital business. Here I'm going to focus on four of them that are central to our big data research. We have many more, but I'm just going to focus on four. The first one: when we think about infrastructure convergence, we worry about hardware. Here's a prediction about what we think is going to happen with hardware, and our observation is, we believe pretty strongly, that future systems are going to be built on the concept of how you increase the value of data assets. The technologies are all in place. Simpler parts that bind more successfully, specifically silicon, storage and network, are going to play together. Why? Because increasingly that's the fundamental constraint: how do I make data available to other machines, actors, sources of change, sources of process within the business? Now, we envision, or we are watching before our very eyes, new technologies that allow us to take these simple piece parts and weave them together in very powerful fabrics or grids, what we call UniGrid, so that there is almost no latency between data that exists within one of these, call it a molecule, and anywhere else in that grid or lattice. Now again, these are not systems that are five years out. All the piece parts are here today, and there are companies that are actually delivering them. If you take a look at what Micron has done with Mellanox and other players, that's an example of one of these true-private-cloud-oriented machines in place. The bottom line, though, is that there is a lot of room left in hardware. A lot of room.
This is what cloud suppliers are building and are going to build, but increasingly, as we think about true private cloud, enterprises are going to look at this as well. So: future systems for improving data assets. The capacity of this type of system, with low latency amongst any sources of data, means that we can now think about data not as a set of sources and sinks, each individually having some control over its own data, woven together by middleware and applications, but literally as networks of data. As we start to think about distributing data, and distributing the control and authority associated with that data, more broadly across systems, we now have to think about what it means to create networks of data. Because that, in many respects, is how these assets are going to be forged. I haven't even mentioned the role that security is going to play in all of this, by the way, but fundamentally that's how it's likely to play out. We'll have a lot of different sources, but from a business standpoint, we're going to think about how those sources come together into a persistent network that can be acted upon by the business. One of the primary drivers of this is what's going on at the edge. Marc Andreessen famously said that software is eating the world. Well, our observation is: great, but if software's eating the world, it's eating it at the edge. That's where it's happening. Secondly, there's this notion of agency zones. I said I was going to bring that word up again. How systems act on behalf of a brand, or act on behalf of an institution or business, is very, very crucial, because the time necessary to do the analysis, apply the intelligence, and then take action is a real constraint on how we do things. And our expectation is that we're going to see what we call an agency zone, or a hub zone, or a cloud zone, defined by latency, and we'll architect data to get the data that's necessary to perform that piece of work into the zone where it's required. Now, the implication of this is that none of it is going to happen if we don't use AI and related technologies to increasingly automate how we handle infrastructure. And technologies like blockchain have the potential to provide an interesting way of imagining how these networks of data actually get structured. It's not going to solve everything. There are some people who think that blockchain is kind of everything that's necessary, but it will be a way of describing a network of data. So we see those technologies on the ascent. But what does it mean for DBMS? In the old world, the old way of thinking, the database manager was the control point for data. In the new world, these networks of data are going to exist beyond a single DBMS, and in fact, over time, that concept of federated data actually has the potential to become real. When we have these networks of data, we're going to need people to act upon them, and that's essentially a lot of what the data scientist is going to be doing: identifying the outcome, identifying the data that's required, and weaving that data, through the construction, management, and manipulation of pipelines, to ensure that the data as an asset can persist for the purposes of solving a near-term problem, or over whatever duration is required to solve a longer-term problem.
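The idea that an agency zone is defined by latency reduces to simple arithmetic: the action's deadline, minus the round trip to wherever the data and model live, minus the inference time, tells you where the work can run. The round-trip figures below are assumptions picked purely for illustration; the decision rule, not the numbers, is the point.

```python
# Hypothetical round-trip times (milliseconds) from the point of action
# to each place a model and its data could live.
ZONE_RTT_MS = {
    "on-device": 1,        # model embedded at the edge
    "edge-hub": 10,        # a nearby aggregation point
    "regional-cloud": 60,  # true-private-cloud or a nearby region
    "public-cloud": 150,   # a distant public-cloud region
}
INFERENCE_MS = 5  # assumed time to score the model once data arrives

def feasible_zones(deadline_ms: float) -> list[str]:
    """Return every zone whose RTT plus inference fits the action's deadline."""
    return [zone for zone, rtt in ZONE_RTT_MS.items()
            if rtt + INFERENCE_MS <= deadline_ms]

if __name__ == "__main__":
    for action, deadline in [("brake a vehicle", 10),
                             ("re-route a delivery", 100),
                             ("retrain the model overnight", 8 * 3600 * 1000)]:
        print(f"{action} ({deadline} ms): {feasible_zones(deadline)}")
```

Under these assumed numbers, a 10-millisecond actuation can only happen on-device, while an overnight retraining job can run anywhere, which is exactly the split between edge systems of agency and cloud-side value creation described above.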
Data scientists remain very important, but we're going to see, as a consequence of improvements in tooling capable of doing these things, an increasing recognition that there's a difference between a data scientist and a data scientist. There are going to be a lot of folks who participate in the process of manipulating, maintaining, and managing these networks of data to create business outcomes, but we're going to see specialization in those ranks as the tooling is more targeted to specific types of activities. So the data scientist will remain an important job, though it's going to lose a little bit of its luster as it becomes clearer what it means. Some data scientists will probably become, let's call them data network administrators, or networks-of-data administrators. And very importantly, as I said earlier, there just aren't enough of these people on the planet, so increasingly, when we think again about digital business and the idea of creating data assets, a central challenge is going to be how to turn all the data that can be captured into assets that can be applied to a lot of different uses. There are two fundamental changes on the horizon to the way we are currently conceiving of the big data world. One: well, it's pretty clear that Hadoop can only go so far. Hadoop is a great tool for certain types of activities and certain numbers of individuals. So Hadoop solves problems for an important but relatively limited subset of the world. Some of the new data science platforms that I just talked about will certainly also help, with a degree of specialization that hasn't been available before in the data world, but they too will only take it so far. The real way that we see the work the big data community is performing turned into sources of value that extend into virtually every corner of humankind is going to be through the cloud services that are being built, and increasingly through packaged applications. A lot of computer science still exists between what I just said and when this actually happens. But in many respects, that's the challenge of the vendor ecosystem: how to reconstruct the idea of packaged software, which has historically been built around operations and transaction processing, with a known data model, a known process, and some technology challenges. How do we reapply that to a world where we don't know exactly what the process is, because the data tells us, in the moment, what action is going to take place? It's a very different way of thinking about application development, a very different way of thinking about what's important in IT, and a very different way of thinking about how business is going to be constructed and how strategy is going to be established. Packaged applications are going to be crucially important. So, in the last few minutes here, what are the numbers? This is kind of the basis for our analysis: digital business, and the role of data as an asset, having an enormous impact on how we think about hardware, how we think about database management or data management, how we think about the people involved in this, and ultimately how we think about how we're going to deliver all this value out to the world. And the numbers are starting to reflect that. So why don't you keep four numbers in mind as I go through the next two or three slides.
103 billion, 68%, 11%, and 2017. Of all the numbers you will see, those are four of the most important. So let's start by looking at the total marketplace. This is the growth of the hardware, software, and services pieces of the big data universe. Now, we have a fair amount of additional research that breaks all of this down into tighter segments, especially on the software side, but the key point here is that we're talking about big numbers: 103 billion dollars over the course of the next ten years. And let's be clear, that 103 billion dollars actually has a dramatic amplification effect on the rest of the computing industry, because a lot of the pricing models, especially in software, are tied back to open source, which has its own issues. Very importantly, the services business is going to go through an enormous amount of change over the next five years as service companies better understand how to deliver some of these big-data-rich applications.

The second point to note is that it was in 2017 that the software market surpassed the hardware market in big data. For the first several years, we focused on buying the hardware and the system software associated with it, and the software value was something we hoped to discover. I was having a conversation here in theCUBE with the CEO of Transwarp, a very interesting Chinese big data company, and I asked: what's the difference between how you do things in China and how we do things in the US? He said, well, in the US you guys focus on proof of concept. You spend an enormous amount of time asking, does the hardware work? Does the database software work? Does the data management software work? In China, we focus on the outcome. That's what we focus on. Here, you have to placate the IT organization and make sure that everybody in IT is comfortable with what's about to happen; in China, we're focused on the business people. This is the first year that software is bigger than hardware, and it's only going to get bigger over time. That doesn't mean hardware is dead or unimportant. It's going to remain very important, but it does mean that the locus of the industry is moving.

Now, when we look at market shares, it's a very fragmented market: 68% of the market is still "other." This is a highly immature market that's going to go through a number of changes over the next few years, partly catalyzed by that notion of infrastructure convergence. So in four years, our expectation is that that 68% is going to start coming down pretty fast as we see greater consolidation. Now, IBM is the biggest player, on the basis of the fact that they operate in all of these segments, hardware, software, and services, but especially because they're very strong in the services business.

The last one I want to point your attention to is this one. I mentioned earlier that our expectation is that the market is increasingly going to move to a packaged application or packaged services orientation as a way of delivering big data expertise to customers. Splunk is the leading software player right now. Why? Because that's the perspective they've taken.
Now, perhaps it's for a limited subset of individuals, or markets, or sectors, but Splunk takes a packaged application, weaves these technologies together, and applies them to an outcome, and we think this presages more of that kind of activity over the course of the next few years. Oracle has taken a somewhat different approach, and we'll see how that plays out over the next five years as well. Okay, so those are the numbers. Again, there are a lot more numbers, and a lot of people you can talk to.

Let me give you some action items. First one: if data were a core asset, how would IT, how would your business, be different? Stop and think about that. If it wasn't your buildings that were the asset, it wasn't the machines that were the asset, it wasn't your people by themselves who were the asset, but data was the asset, how would you re-institutionalize work? That's what every business is starting to ask, even if they don't ask it in the same way. And our advice is: then do it, because that's the future of business. Not that data is the only asset, but data becomes a recognized central asset, and that's going to have enormous impacts on a lot of things.

The second point I want to leave you with: tens of billions of users, and I'm including people and devices, are dependent on thousands of data scientists. That's an impedance mismatch that cannot be sustained. Packaged apps and these cloud services are going to be the way to bridge that gap. I'd love to tell you that it's all going to be about tools, that millions, or tens of millions, or hundreds of millions of data scientists will suddenly emerge out of the woodwork. It's not going to happen.

The third thing is that we think big businesses, enterprises, have to master what we call the big tech inflection. The first fifty years of computing were about known process and unknown technology: how do I take an accounting package, and do I put it on a mainframe, or a minicomputer, or client/server, or do I do it on the web? Unknown technology. Increasingly today, all of us have a pretty good idea what the base technology is going to be. Does anybody doubt it's going to be the cloud? What we don't know is: what are the new problems we can attack, that we can address with data-rich approaches, as we turn those systems into actors on behalf of our business and customers?

So, I'm a couple of minutes over, I apologize. I want to make sure everybody can get over to the keynotes if you want to. Feel free to stay; theCUBE's going to be live at 9:30, if I've got that right. It's actually pretty exciting, so if anybody wants to see how it works, feel free to stay. George is here, Neil's here, I'm here. I mentioned Greg Terrio, Dave Vellante, John Greco; I think I saw Sam Kahane back in the corner. Any questions, come and ask us, we'll be more than happy. Thank you very much for... oh, David Vellante.
>> David: I have a question.
>> Yes.
>> David: Do you have time?
>> Yep.
>> David: So you talk about data as a core asset. If you look at the top five companies by market cap in the US, Google, Amazon, Facebook, etc., they're data companies; they've got data at the core, which is kind of what your first bullet here describes. How do you see traditional companies closing that gap, where humans, buildings, etc. are at the core, as we enter this machine intelligence era? What's your advice to the traditional companies on how they close that gap?
>> All right.
So the question was: the most valuable companies in the world are companies that are well down the path of treating data as an asset; how does everybody else get going? Our observation is, go back to the value proposition. What actions are most important? What data is necessary to perform those actions? Can changing the way the data is orchestrated, organized, and put together inform or change the cost of performing that work by changing the cost of transactions? Can you introduce a new service along the same lines? And then architect your infrastructure and your business to make sure that the data is near the action, in time for the action, so that it's absolute genius to your customer. It's a relatively simple thought process. That's how Amazon thought. Apple increasingly thinks like that: they design the experience, and then they ask what data is necessary to deliver that experience. It's a simple approach, but it works. Yes, sir.
>> Audience Member: On the slide you had a few slides ago, the market share, the big spenders, you asked the question, do any of us doubt that cloud is the future? I'm with Snowflake. I don't see many of those large vendors in the cloud, and I was wondering if you could speak to what you're seeing in terms of emerging vendors in that space.
>> What a great question. So the question was: when you look at the companies that are catalyzing a lot of the change, you don't see a lot of the big companies in the leadership, and someone from Snowflake just asked, well, who's going to lead it? That's a big question with a lot of implications, but at this point in time it's very clear that the big companies are suffering a bit from the old, trying to remember the name, the RCA syndrome. I think Clay Christensen talked about this: the innovator's dilemma. RCA was one of the first creators; they created the transistor and held a lot of the original patents on it, and they put that incredible new technology, back in the forties and fifties, under the control of the people who ran the vacuum tube business. When was the last time anybody bought RCA stock? The same problem exists today.

Now, how is that going to play out? Are we going to see, as we've always seen, a lot of new vendors emerge out of this industry and grow into big vendors, with IPO-related exits to try to scale their businesses? Or are we going to see a whole bunch of gobbling up? That's what I'm not clear on. But it's pretty clear at this point in time that a lot of the technology, and a lot of the science, is being done in smaller places. The moderating feature of that is the services side, because there are limited groupings of expertise, and the companies that today are able to attract that expertise, the Googles, the Facebooks, the AWSs, the Amazons, are doing so in support of a particular service. IBM and others are trying to attract that talent so they can apply it to customer problems. We'll see over the next few years whether the IBMs and the Accentures and the big service providers are able to attract the kind of talent necessary to diffuse that knowledge into the industry faster. So it's the rate at which the idea of internet-scale computing, the idea of big data being applied to business problems, can diffuse into the marketplace through services. If it can diffuse faster, that will have an accelerating impact for smaller vendors, as it has in the past.
But it may also, again, have a moderating impact, because a lot of that expertise that comes out of IBM, IBM is going to find ways to drive into product faster than it ever has before. So it's a complicated answer, but that's our thinking at this point in time.
>> Dave: Can I add to that?
>> Yeah. (audience member speaking faintly)
I think that's true now, but, not to argue with Dave, this is part of what we do, I think the real question is how that knowledge is going to diffuse into the enterprise broadly, because Airbnb, I doubt, is going to get into the business of providing services. (audience member speaking faintly) So I think the whole concept of community, partnership, and ecosystem is going to remain very important, as it always has, and we'll see how fast those service companies that are dedicated to diffusing knowledge into customer problems actually do it. Our expectation is that as the tooling gets better, we will see more people able to present themselves as truly capable of doing this, and that will accelerate the process. But the next few years are going to be really turbulent, and we'll see which way it actually ends up going. (audience member speaking faintly)
>> Audience Member: So I'm with IBM, and I can tell you 100% for sure that we are. I hired literally 50 data scientists in the last three months to go out and do exactly what you're saying: sit down with clients and help them figure out how to do data science in the enterprise. So we are in fact scaling it. We're getting people who have done this at Google and Facebook, though not a whole lot of those, because we want to do it with people who have actually done it in legacy Fortune 500 companies, right? Because there's a little bit of a difference there.
>> So...
>> Audience Member: So we are doing exactly what you said, and Microsoft is doing the same thing, Amazon is actually doing the same thing too, Domino Data Lab...
>> They don't like talking about it too much, but they're doing it.
>> Audience Member: All the big players in the data science platform game are doing this at a different scale.
>> Exactly.
>> Audience Member: IBM is doing it on a much bigger scale than anyone else.
>> And that will have an impact on how the market ultimately gets structured and who the winners end up being.
>> Audience Member: To add to that, a lot of people thought, you mentioned the Red Hat of big data, a lot of people thought Cloudera was going to be the Red Hat of big data, and look at what's happened to their business. (background noise drowns out other sounds) They're getting surrounded by the cloud. We look at, like, how can we get closer to companies like AWS? That was a wild card that wasn't expected.
>> Yeah, but look, at the end of the day, Red Hat isn't even the Red Hat of open source. So the bottom line is, the thing to focus on is how this knowledge is going to diffuse. That's the thing to focus on. And there are a lot of different ways; some of it's going to diffuse through tools. If it diffuses through tools, it increases the likelihood that we'll have more people capable of doing this, so IBM and others can hire more, Citibank can hire more. That's an important play. But it also says we're going to see more of the packaged applications emerge, because that facilitates the diffusion.
We haven't figured out, I don't know exactly, nobody knows exactly, the exact shape it's going to take. But that's the centerpiece of our big data research: how is that diffusion process going to happen and accelerate, what's the resulting structure going to look like, and ultimately, how are enterprises going to create value with whatever results? Yes, sir. (audience member asks question faintly)

So the recap of the question is: you see more people coming in and promising the moon but being incapable of delivering, partly because the technology is uncertain, and for other reasons. Here's our observation, and we actually did a fair amount of research on this. When you take an approach to doing big data that's optimized for the cost of procurement, i.e., let's get the simplest combination of infrastructure, the simplest combination of open-source software, the simplest contracting, then, if you have enough expertise, you can stand up a proof of concept very quickly, but the process of turning that proof of concept into an actual production system extends dramatically. That's one of the reasons why the Clouderas did not take over the universe. There are other reasons. As George Gilbert's research has pointed out, Cloudera is spending 53 to 55% of its money right now just integrating all the stuff it put into the distribution five years ago, which is hardly a great recipe for creating customer value.

The bottom line, though, is that if we focus on time to value in production, we end up taking a different path. We don't focus as much on whether the hardware is going to work, whether the network is going to work, whether the storage can be integrated, how it's going to impact the database, what it's going to mean for our Oracle license pool, and all the other things people tend to think about when they're focused on the technology. As a consequence, you get better time to value if you focus on bringing in the domain expertise, working with the right partner, and working with the appropriate approach: what's the value proposition, what actions are associated with that value proposition, what data is needed to perform those actions, how can I take transaction costs out of performing those actions, where does the data need to be, what infrastructure do I require? Focus on time to value, not time to procure. And that's not what a lot of professional IT-oriented people do, because many of them, I hate to say it, acquire new technology with the promise of helping the business, but with a stronger focus on what it's going to mean for their careers.

All right, I want to be respectful of everybody's time. The keynotes start in about five minutes, which means you've just got time. If you want to stay, feel free to stay; we'll be here, and we'll be happy to talk. But I think that's pretty much going to close our presentation broadcast. Thank you very much for being an attentive audience, and I hope you found this useful. (upbeat music)
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Volante | PERSON | 0.99+ |
Marc Andreessen | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Neil | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
Sam Kahane | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
Neil Raden | PERSON | 0.99+ |
2017 | DATE | 0.99+ |
John Greco | PERSON | 0.99+ |
Citibank | ORGANIZATION | 0.99+ |
Greg Terrio | PERSON | 0.99+ |
China | LOCATION | 0.99+ |
David Volante | PERSON | 0.99+ |
Apple | ORGANIZATION | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Clay Christensen | PERSON | 0.99+ |
David | PERSON | 0.99+ |
Sears Roebuck | ORGANIZATION | 0.99+ |
100% | QUANTITY | 0.99+ |
Domino Data Lab | ORGANIZATION | 0.99+ |
Peter Drucker | PERSON | 0.99+ |
US | LOCATION | 0.99+ |
Amazons | ORGANIZATION | 0.99+ |
two | QUANTITY | 0.99+ |
11% | QUANTITY | 0.99+ |
George Gilbert | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
San Jose | LOCATION | 0.99+ |
68% | QUANTITY | 0.99+ |
millions | QUANTITY | 0.99+ |
53, 55 % | QUANTITY | 0.99+ |
60% | QUANTITY | 0.99+ |
Peter Burris | PERSON | 0.99+ |
Facebooks | ORGANIZATION | 0.99+ |
103 billion | QUANTITY | 0.99+ |
Googles | ORGANIZATION | 0.99+ |
second part | QUANTITY | 0.99+ |
second point | QUANTITY | 0.99+ |
IBMs | ORGANIZATION | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
AWSs | ORGANIZATION | 0.99+ |
Accentures | ORGANIZATION | 0.99+ |
Hadoop | TITLE | 0.99+ |
One | QUANTITY | 0.99+ |
SiliconANGLE Media | ORGANIZATION | 0.99+ |
Snowflake | ORGANIZATION | 0.99+ |
four | QUANTITY | 0.99+ |
Hundred | QUANTITY | 0.99+ |
Transwarp | ORGANIZATION | 0.99+ |
Mellanox | ORGANIZATION | 0.99+ |
tens of millions | QUANTITY | 0.99+ |
three things | QUANTITY | 0.99+ |
Micron | ORGANIZATION | 0.99+ |
50 data scientists | QUANTITY | 0.99+ |
First | QUANTITY | 0.99+ |
yesterday | DATE | 0.99+ |
three times | QUANTITY | 0.99+ |
103 billion dollars | QUANTITY | 0.99+ |
Red Hat | TITLE | 0.99+ |
first bullet | QUANTITY | 0.99+ |
Two | QUANTITY | 0.99+ |
Airbnb | ORGANIZATION | 0.99+ |
Secondly | QUANTITY | 0.99+ |
five years | QUANTITY | 0.98+ |
one | QUANTITY | 0.98+ |
both | QUANTITY | 0.98+ |
hundreds of millions | QUANTITY | 0.98+ |
first | QUANTITY | 0.98+ |
Action Item | Big Data SV Preview Show - Feb 2018
>> Hi, I'm Peter Burris, and once again, welcome to a Wikibon Action Item. (lively electronic music) We are again broadcasting from the beautiful theCUBE Studios here in Palo Alto, California, and we're joined today by a relatively larger group. So let me take everybody through who's here in the studio with us: David Floyer, George Gilbert, and, once again, John Furrier, who's one of the key CUBE hosts; and on the remote system, Jim Kobielus, Neil Raden, and another CUBE host, Dave Vellante. Hey, guys.
>> Hi there.
>> Good to be here.
>> Hey.
>> So, one of the reasons why we have a little bit larger group here is that we're going to be talking about a community gathering taking place in the big data universe in a couple of weeks. Large numbers of big data professionals will be descending upon Strata to better understand what's going on within the big data universe. Now, we run a CUBE show next to that event, in which we get the best thought leaders available at Strata, bring them onto theCUBE, and really help separate the signal from the noise that Strata has historically represented. We want to use this show to preview what we think that signal's going to be, so we can help the community better understand what to look for, where to go, and what kinds of things to be talking about with each other, so that it can get more out of that important event. Now, George, with that in mind: if there was one thing we'd identify as something that was different two years ago or a year ago, and that's going to be different at this show, what would we say it is?
>> Well, I think the big realization is that we're starting with the end in mind. We know the modern operational analytic applications we want to build: applications that anticipate or influence a user interaction, or inform or automate a business transaction. For several years we were experimenting with big data infrastructure, but it wasn't solution-centric, it was technology-centric. And we realized that the do-it-yourself, assemble-your-own-kit, open-source big data infrastructure created too big a burden on admins. Now we're at the point where we're beginning to see a more converged set of offerings take shape. And by converged, I mean an end-to-end analytic pipeline that is uniform for developers, uniform for admins, and, because it's pre-integrated, lower latency. It helps you put more data through one single analytic latency budget. That's what we think people should look for.

Right now, though, the hottest new tech-centric activity is around machine learning, and I think the big thing we have to recognize is that we're at about the same maturity level as we were with big data several years ago. People who are going to work with it should start with the knowledge, for the most part, that they're going to be experimenting, because the tooling isn't quite mature enough, we don't have enough data scientists for people to be building all these pipelines bespoke, and as for third-party applications, we don't yet have a high volume of them with this embedded.
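George's phrase "one single analytic latency budget" can be made concrete. The sketch below is a toy model with made-up stage timings; it only illustrates the accounting: every hand-off between separately assembled components spends part of the budget, which is what a pre-integrated pipeline squeezes out.

```python
# Each stage is (name, processing_ms, handoff_ms), where handoff_ms stands
# for the cost of moving data between separately assembled components
# (serialization, network hops, format conversion). Numbers are invented.
DIY_PIPELINE = [
    ("ingest", 20, 15), ("clean", 30, 15), ("feature-prep", 40, 15),
    ("score", 10, 15), ("serve", 5, 0),
]
CONVERGED_PIPELINE = [
    ("ingest", 20, 2), ("clean", 30, 2), ("feature-prep", 40, 2),
    ("score", 10, 2), ("serve", 5, 0),
]

def total_latency_ms(stages) -> int:
    return sum(proc + handoff for _, proc, handoff in stages)

BUDGET_MS = 150  # the whole pipeline must answer within one interaction

for label, pipeline in [("DIY", DIY_PIPELINE), ("converged", CONVERGED_PIPELINE)]:
    total = total_latency_ms(pipeline)
    print(f"{label}: {total} ms, {'fits' if total <= BUDGET_MS else 'over budget'}")
# DIY: 165 ms, over budget; converged: 113 ms, fits. Same work per stage;
# the difference is entirely the integration tax between stages.
```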
>> So if I can kind of summarize what you're saying: we're seeing a bifurcation occur within the ecosystem associated with big data. It's driving toward simplification on the infrastructure side, which is increasingly what the term big data is associated with, and toward new technologies that can apply that infrastructure and that data to new applications, including things like AI, ML, and DL, where we think about modeling and services and a new way of building value. Now, that suggests one or the other is more or less hot, but, Neil Raden, I think the practical reality is that here in Silicon Valley we've got to be careful about getting too far out in front of our skis. At the end of the day, there's still a lot of work to be done inside a lot of big enterprises on how you simply do things like move data from one place to another. Would you agree with that?
>> Oh, absolutely. I've been talking to a lot of clients this week, and, you know, we don't talk about the fact that they're still running their businesses on what we would call legacy systems, and they don't know how to get out of them or transform away from them. They're still starting to plan for this. But the problem is, it's like talking about the 27 rocket engines on whatever it was that launched that Tesla into space: you can talk about the engineering of those engines, and that's great, but what about all the other things you're going to have to do to get that (laughs) car into space? And it's the same thing. A year ago we were talking about Hadoop and big data and, to a certain extent, machine learning, maybe more data science. But now people are really starting to ask: how do we actually do this, how do we secure it, how do we govern it, how do we get some sort of metadata or semantics on the data we're working with, so people know what they're using? I think that's where we are in a lot of companies.
>> Great, that's great feedback, Neil. So as we look forward, Jim Kobielus: given the challenge of improving the facilities of your infrastructure, but also using that as a basis for increasing your capability with some of the new application services, what should folks be looking for as they explore the show in the next couple of weeks on the ML side? What new technologies, what new approaches? Going back to what George said, we're in experimentation mode; what are going to be the experiments that generate the greatest results over the course of the next year?
>> Yeah. For the data scientists who flock to Strata and similar conferences, automation of the machine learning pipeline is super hot in terms of investment by the solution providers. Everybody from Google to IBM to AWS and others is investing very heavily in automation, and not just of the data engineering, that problem was tackled a long time ago; it's automation of more of the feature engineering and the training. These very manual, often labor-intensive jobs have to be sped up and automated to a great degree to enable the magic of productivity by data scientists and the new generation of app developers. So look for automation of machine learning to be a super hot focus. Related to that, look for a new generation of development suites that focus on DevOps, speeding machine learning, DL, and AI from modeling through training, evaluation, deployment, and iteration.
We've seen a fair upswing in the number of such toolkits on the market from a variety of startup vendors, like the DataRobots of the world, but also from, say, AWS with SageMaker. That's hot. Also look for development toolkits that automate more of the code generation: low-code tools, but the new generation of low-code tools that, as highlighted in a recent Wikibon study, use ML to drive more of the actual production of fairly decent, good-enough code as a first rough prototype for a broad range of applications. And finally, we're seeing a fair amount of ML-driven code generation inside of things like robotic process automation, RPA, which I believe will be a super hot theme at Strata and other shows this year going forward.
>> So, you mentioned the idea of better tooling for DevOps, and the relationship between big data, ML, and DevOps. One of the key things we've been seeing over the course of the last few years, and it's consistent with the trends we're talking about, is increasing specialization in a lot of the perspectives associated with changes within this marketplace. We've seen other shows emerge that have been very, very important, and that we, for example, participate in. Places like Splunk, for example, which is the vanguard, in many respects, of a lot of these trends in big data and how big data can be applied to business problems. Dave Vellante, I know you've participated in a number of these shows. How does this notion of specialization inform what's going to happen in San Jose, and what kind of advice and counsel should we give people to continue to explore beyond just what's going to happen in San Jose in a couple of weeks?
>> Well, you mentioned Splunk as an example: a very narrow and specialized company that solves a particular problem and has a very enthusiastic ecosystem and customer base around that problem, log files to solve security problems, for example. I would say Tableau is another example, heavily focused on viz. So what you're seeing is these specialized skill sets that go deep within a particular domain. The thing to think about, especially when we're in San Jose next week, is, as we talk about digital disruption, what are the skill sets required beyond just the domain expertise? You're seeing this bifurcated skill set really coming into vogue, where somebody understands, for example, traditional marketing, but also needs to understand digital marketing in great depth, and the skills that go around it. So there's sort of a two-tool player. We talk about the five-tool player in baseball; here, at least a multidimensional skill set in digital.
>> And that's likely to occur not just in a place like marketing, but across the board. David Floyer, as folks go to the show and start to look more specifically at this notion of convergence, are there particular things they should think about? To come back to the notion that hardware is going to make things more or less difficult for what the software can do, and that software will be created to fill up the capabilities of hardware: what are some of the underlying hardware realities that folks going to the show need to keep in mind as they evaluate, especially on the infrastructure side, these different infrastructure technologies that are getting more specialized?
>> Well, if we look historically at the big data area, the solution has been to put in very low-cost equipment as nodes, lots of different nodes, and move the data to those nodes so that you get parallelization of the data handling. That is not the only way of doing it. There are good ways now where you can, in fact, have a single version of that data in one place, in very high-speed storage, on flash storage, for example, and where you can allow very fast communication from all of the nodes directly to that data. And that makes things a lot simpler from an operational point of view. So taking the batch automation techniques that already exist, and looking at them from a new perspective, how do I apply these to big data, how do I automate these things, can make a huge difference in just the practicality and the elapsed time of some of these large training runs, for example.
>> Yeah, in many respects, what you're talking about is bringing things like training under a more traditional...
>> David: Operational, yeah.
>> ...approach and operational set of disciplines.
>> David: Yes, that's right.
>> Very, very important. So, John Furrier, I want to come to you and say that there are some other technologies that, while they're the bright, shiny objects that people think are going to be the new Harry Potter technologies of magic everywhere, blockchain is certainly going to become folded into this big data concept, because blockchain describes how contracts, ownership, and authority ultimately get distributed. What should folks look for as blockchain starts to become part of these conversations?
>> That's a good point, Peter. My summary of the preview for Big Data SV Silicon Valley, which includes the Strata show, is two things: blockchain points to the future, and GDPR points to the present. GDPR is one of the most fundamental impacts on the big data market in a long time. People have been working on it for a year. It is a nightmare. The technical underpinnings of what companies have to do to comply with GDPR are a moving train, and it's complete BS; there are no real complete solutions out there. So if I were going to tell everyone what to look for: what is happening with GDPR, what's the impact on the databases, what's the impact on the architectures? Everyone is faking it till they make it. No one really has anything, in my opinion, from what I can see, so it's a technical nightmare. Where was that database? It's going to impact how you store the data, and then the sovereignty issue is another matter. Blockchain points to the sovereignty issue of the data, in terms of the company, the country, and the user. These things are going to impact software development, application development, and, ultimately, cloud choice and the IoT. So to me, GDPR is not just a one-and-done thing, and blockchain is kind of a future thing to look at. I would look out of those two lenses and ask: do you have a direction or a narrative that supports me today with what GDPR will impact throughout the organization? And then, what's going on with this new decentralized infrastructure and the role of data, and the sovereignty of that data, with respect to company, country, and user? To me, that's the big issue.
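GDPR and data sovereignty come up repeatedly in this conversation. As a toy illustration of why they reach all the way down into the data architecture, here is a minimal sketch of a policy check on data movement. The regions, field names, and rules are invented for illustration only, and are in no way compliance or legal guidance.

```python
# Every record carries residency and consent metadata, and a processing
# request is only allowed if the destination region and the purpose
# satisfy that metadata. Hypothetical policy table and schema throughout.
ALLOWED_REGIONS = {
    "eu": {"eu"},          # EU-resident data stays in EU regions
    "us": {"us", "eu"},    # US-resident data may also be processed in EU
}

def may_process(record: dict, destination_region: str, purpose: str) -> bool:
    residency = record["residency"]
    if destination_region not in ALLOWED_REGIONS.get(residency, set()):
        return False  # sovereignty check: data cannot move there
    return purpose in record["consented_purposes"]  # consent is purpose-bound

record = {
    "user_id": 42,
    "residency": "eu",
    "consented_purposes": {"billing"},
}
print(may_process(record, "eu", "billing"))    # True
print(may_process(record, "us", "billing"))    # False: residency violation
print(may_process(record, "eu", "marketing"))  # False: no consent for purpose
```

The architectural point is that checks like this have to run wherever data moves, which is why middleware, databases, and cloud choice all get pulled into the compliance question.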
>> So, George Gilbert, if we think about this question of the fundamental technologies that are going to become increasingly important here: database managers are not dead as a technology. We've seen a relative explosion over the last few years, at least in invention, even if it hasn't been followed, as Neil talked about, with very practical ways of bringing these new disciplines into a lot of enterprises. What's going to happen in the database world, and what should people be looking for in a couple of weeks to better understand how some of these data management technologies are going to converge and/or evolve?
The converged platform now is shifting, it's center of gravity is shifting to continuous processing, where the data lake is a reference data repository that helps inform the creation of models, but then you run the models against the streaming continuous data for the freshest insights-- >> Okay, Jim Kobielus, action item. >> Yeah, focus on developer productivity in this new era of big data analytics. Specifically focus on the next generation of developers, who are data scientists, and specifically focus on automating most of what they do, so they can focus on solving problems and sifting through data. Put all the grunt work or training, and all that stuff, take and carry it by the infrastructure, the tooling. >> Peter: Neil Raden, action item. >> Well, one thing I learned this week is that everything we're talking about is about the analytical problem, which is how do you make better decisions and take action? But companies still run on transactions, and it seems like we're running on two different tracks and no one's talking about the transactions anymore. We're like the tail wagging the dog. >> Okay, John Furrier, action item. >> Action item is dig into GDPR. It is a really big issue. If you're not proactive, it could be a nightmare. It's going to have implications that are going to be far-reaching in the technical infrastructure, and it's the Sarbanes-Oxley, what they did for public companies, this is going to be a nightmare. And evaluate the impact of Blockchains. Two things. >> David Vellante, action item. >> So we often say that digital is data, and just because your industry hasn't been upended by digital transformations, don't think it's not coming. So it's maybe comfortable to sit back and say, Well, we're going to wait and see. Don't sit back and wait and see. All industries are susceptible to digital transformation. >> Alright, so I'll give the action item for the team. We've talked a lot about what to look for in the community gathering that's taking place next week in Silicon Valley around strata. Our observations as the community, it descends upon us, and what to look for is, number one, we're seeing a bifurcation in the marketplace, in the thought leadership, and in the tooling. One set of group, one group is going more after the infrastructure, where it's focused more on simplification, convergence; another group is going more after the developer, AI, ML, where it's focused more on how to create models, training those models, and building applications with the services associated with those models. Look for that. Don't, you know, be careful about vendors who say that they do it all. Be careful about vendors that say that they don't have to participate in a converged approach to doing this. The second thing I think we need to look for, very importantly, is that the role of data is evolving, and data is becoming an asset. And the tooling for driving velocity of data through systems and applications is going to become increasingly important, and the discipline that is necessary to ensure that the business can successfully do that with a high degree of predictability, bringing new production systems are also very important. A third area that we take a look at is that, ultimately, the impact of this notion of data as an asset is going to really come home to roost in 2018 through things like GDPR. As you scan the show, ask a simple question: Who here is going to help me get up to compliance and sustain compliance, as the understanding of privacy, ownership, etc. 
of data in a big data context starts to evolve? Because there's going to be a lot of specialization over the next few years. And there's a final one we might add: when you go to the show, do not just focus on your favorite brands. There's a lot of new technology out there, including things like blockchain, and they're going to have an enormous impact, ultimately, on how this marketplace unfolds. The kind of miasma that's surrounded big data is starting to specialize and break down, and that's creating new niches and new opportunities for new sources of technology, while at the same time reducing the focus we currently have on things like Hadoop as a centerpiece. A lot of convergence is going to create a lot of new niches, and that's going to require new partnerships, new practices, and new business models. Once again, guys, I want to thank you very much for joining me on Action Item today. This is Peter Burris from our beautiful Palo Alto theCUBE studio. This has been Action Item. (lively electronic music)
ENTITIES
Entity | Category | Confidence |
---|---|---|
David | PERSON | 0.99+ |
Jim Kobielus | PERSON | 0.99+ |
George | PERSON | 0.99+ |
David Floyer | PERSON | 0.99+ |
George Gilbert | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Neil Raden | PERSON | 0.99+ |
Neil | PERSON | 0.99+ |
Peter Burris | PERSON | 0.99+ |
David Vellante | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
San Jose | LOCATION | 0.99+ |
John Furrier | PERSON | 0.99+ |
Peter | PERSON | 0.99+ |
Feb 2018 | DATE | 0.99+ |
Silicon Valley | LOCATION | 0.99+ |
Jim | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
2018 | DATE | 0.99+ |
ORGANIZATION | 0.99+ | |
GDPR | TITLE | 0.99+ |
next week | DATE | 0.99+ |
two things | QUANTITY | 0.99+ |
Palo Alto, California | LOCATION | 0.99+ |
Splunk | ORGANIZATION | 0.99+ |
both | QUANTITY | 0.99+ |
A year ago | DATE | 0.99+ |
two lenses | QUANTITY | 0.99+ |
a year ago | DATE | 0.99+ |
two years ago | DATE | 0.99+ |
this week | DATE | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
first | QUANTITY | 0.99+ |
third area | QUANTITY | 0.98+ |
CUBE | ORGANIZATION | 0.98+ |
one group | QUANTITY | 0.98+ |
second thing | QUANTITY | 0.98+ |
27 rocket | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
next year | DATE | 0.98+ |
Two things | QUANTITY | 0.97+ |
theCUBE Studios | ORGANIZATION | 0.97+ |
two-tool player | QUANTITY | 0.97+ |
five microsecond | QUANTITY | 0.96+ |
One set | QUANTITY | 0.96+ |
Tableau | ORGANIZATION | 0.94+ |
a year | QUANTITY | 0.94+ |
single version | QUANTITY | 0.94+ |
one | QUANTITY | 0.94+ |
Wikibons | ORGANIZATION | 0.91+ |
Wikibon | ORGANIZATION | 0.91+ |
two different tracks | QUANTITY | 0.91+ |
five-tool player | QUANTITY | 0.9+ |
several years ago | DATE | 0.9+ |
this year | DATE | 0.9+ |
Strata | TITLE | 0.87+ |
Harry Potter | PERSON | 0.85+ |
one thing | QUANTITY | 0.84+ |
years | DATE | 0.83+ |
one place | QUANTITY | 0.82+ |
Peter Burris, Wikibon | Action Item, Feb 9 2018
>> Hi, I'm Peter Burris, and welcome to Wikibon's Action Item. (upbeat music) Once again, we're broadcasting from theCUBE studio in beautiful Palo Alto, California, and I have joining me here in the studio George Gilbert and David Floyer, both Wikibon analysts, and, remote, welcome Neil Raden and Jim Kobielus. This week we're going to talk about something that's actually quite important, and it's one of those examples of an innovation in which technologies maturing in multiple domains are brought together in unique and interesting ways to potentially dramatically revolutionize how work gets done. Specifically, we're talking about something we call augmented programming. The notion of augmented programming borrows from some of the technologies associated with new, more declarative low-code development environments, from machine learning, and from an increasing understanding of the role that automation is going to play, specifically as it pertains to human and machine-augmented activities. Now, low-code programming has been around for a while, machine learning has been around for a while, and, increasingly, some of these notions of automation have been around for a while. But it's how they are coming together to create new approaches and new possibilities that can dramatically improve the speed of systems development, the quality of systems development, and, very importantly, the ongoing manageability of those systems. So, Jim Kobielus, let's start with you. What are some of the issues associated with augmented programming that users need to be focused on?
>> Yeah, well, the primary issue, or, really, the driver, is that we need to greatly increase the productivity of developers, because it's required of them to build programs and applications faster, with fewer resources, to deploy them more rapidly in DevOps environments, and to manage and optimize that code for ten zillion downstream platforms, from mobile to web to the Internet of Things and so forth. They need power tooling to drive this process. Now, that whole low-code space has been around for years. It very much evolved from what used to be called rapid application development, which itself evolved from the 4GL languages of decades past. Looking at it now, as we move toward the end of the second decade of this century, the low-code development space has evolved and is rapidly bifurcating into BPM and orchestration modeling tools on the one hand, and robotic process automation on the other, enabling the average end user or business analyst to quickly gin up an application, based on being able to wire together UI components fairly rapidly and drive it from the UI on in. What we're seeing now is that more and more machine learning is being used in the low-code development of applications. Machine learning is being used in a variety of capacities, one of which is simply to infer the appropriate program code from external assets like screenshots and wireframes, but also from database schemas and so forth. A lot of machine learning is coming to this space in a major way.
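To make the program-by-example idea concrete, here is a minimal sketch of how a literal, recorded UI script becomes a parameterized, reusable routine, which is the generalization step that ML-assisted tools in this space aim to automate. Everything in the sketch is hypothetical: the recorded steps, the `LoggingApp` stand-in for a real application connector, and the hand-written mapping from literal values to parameters that an ML-assisted tool would instead infer from multiple recordings.

```python
# A recorded macro is a list of literal UI steps, e.g. captured by watching
# a user re-key an invoice from one application into another.
RECORDED_STEPS = [
    ("click", "New Invoice"),
    ("type", "customer", "Acme Corp"),
    ("type", "amount", "1200.00"),
    ("click", "Save"),
]

def generalize(steps, literal_to_param):
    """Turn literal recorded values into named parameters: the manual
    version of the generalization an ML-assisted tool would infer."""
    def playback(app, **params):
        for step in steps:
            if step[0] == "type" and step[2] in literal_to_param:
                app.type(step[1], params[literal_to_param[step[2]]])
            elif step[0] == "type":
                app.type(step[1], step[2])  # literal that stays literal
            else:
                app.click(step[1])
    return playback

class LoggingApp:
    """Stand-in for a real application connector; just prints each action."""
    def click(self, target): print(f"click {target!r}")
    def type(self, field, value): print(f"type {value!r} into {field!r}")

create_invoice = generalize(
    RECORDED_STEPS,
    {"Acme Corp": "customer", "1200.00": "amount"},
)
create_invoice(LoggingApp(), customer="Globex", amount="80.50")
```

The recorded script does one thing; the generalized routine does a family of things, which is what turns a brittle macro into something a team can reuse and govern.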
So, RPA may be associated with a certain class of applications and environmental considerations, and there'll be other tools, for example, that might be associated with different application considerations and environmental attributes as well. But David Floyer, one of the things that we're concerned about is, a couple weeks ago, we talked about the notion of data-aware middleware, where the idea that, increasingly, we'll see middleware emerge that's capable of moving data in response to the metadata attributes of the data, combined with invisibility to the application patterns. But when we think about this notion of augmented programming, what are some of the potential limits that people have to think about as they consider these tools? >> Peter, that's a very good question. The key for all of these techniques is to use the right tools in the right place. A lot of the environments where the leading edge of this environment assumes an environment where the programmer has access to all of his data, he owns it, and he is the only person there. The challenge is, in many applications, you are sharing data. You are sharing data across the organization, you are sharing data between programmers. Now, this introduces a huge amount of complexity, and there have been many attempts to try and tackle this. There've been data dictionaries, there've been data management, ways of managing this data. They haven't had a very good history. The efforts involved in trying to make those work within an organization have been, at best, spasmodic. >> (laughs) Spasmodic, good word! >> When we go into this environment, I think the key is, make sure that you are applying these tools to the areas initially where somebody does have access to all the data, and then carefully look at it from the point of view of shared data, because you have a whole lot of issues in state environments, which we do not have in non-state environments, and the complexity of locking data, the complexity of many people accessing that data, that requires another set of tools. I'm all in favor of these low-code-type environments, but you have to make sure that you're applying the right tools for the right type of applications. >> And specifically, for example, a lot of metadata that's typically associated with a database is not easily revealed to an application developer, nor an application. And so, you have to be very, very careful about how you exploit that. Now, Neil Raden, there has been over the years, as David mentioned, a number of passes at doing this that didn't go so well, but there are some business reasons to think why this time it might go a little bit better. Talk a little bit about some of the higher-level business considerations that are on the table that may catalyze better adoption this time of these types of tools. >> One thing is that, no matter what kind of an organization you are, whether you're a huge multinational or an SMB or whatever, all of these companies are really rotten with what we call shadow systems. In other words, companies have applications that do what they do, and what they don't do, people cobble together. The vast majority of 'em are done in Access and Excel, still. Even in advanced organizations, you'll find this. If there's a way to eliminate that, because it's a real killer of productivity, then that's a real positive. 
I suppose my concern is that, when you deal at that level, how are you going to maintain coherency and consistency in those systems over time without adding, as was said, orchestration of those systems? What David is saying, I think, is really key.
>> Yeah, I... go ahead, sorry, Neil. Go ahead.
>> No, that's all right. What I was...
>> I think...
>> Peter: Sorry. Bad host.
>> David: You think?
>> Neil: No, go ahead.
>> No, what I was going to say was that a crucial feature of this is that a lot of the time the application is owned by a business line, and the business line presumes that they own their data. They have modeled those systems for a certain type of work, for a certain volume of work, for a certain distribution of control, and when you reveal a lot of this stuff, you sometimes break those assumptions. That can lead to real, serious breaks in the system.
>> You know, they're not always evil, as we like to characterize them. Some of them are actually well-thought-out and really good systems, better than anything they could get from the IT organization. But the point is, they're usually pretty brittle, and they require a lot of effort from the people who developed them to keep them running, because they don't use the kinds of tools, approaches, platforms, and methodologies that lend themselves to good-quality software. I think there's real potential for RPA in that area.
>> I think there are also some interesting platforms that are driving to help in this particular area, particularly for applications that go across departments in an organization. ServiceNow, for example, has a very powerful platform for very high-level production of systems, and it's being used a lot of the time to solve problems of procedures, procedures going across different departments, automating those procedures. I think there are some extremely good tools coming out which will significantly help, but they help more with serial procedures than with concurrent procedures.
>> And there are some expectations about the type of tools you use, the extensibility of those tools, et cetera, which leads me, George, to ask the question about some of the machine learning attributes of this. We've got to be careful about machine learning being positioned as the panacea for all business problems, which too often seems to be the case. But it's certainly reasonable to observe that machine learning can, in fact, help us in important ways in understanding how patterns in applications and data are working, and how people are working together. Talk a little bit about the machine learning attributes of some of these tools.
>> Well, I like to say that every few years we have a technology we get so excited about that we assume it tastes like chocolate, costs a dollar, and cures cancer. Machine learning is that technology right now. The interesting thing about robotic process automation and many low-code environments is that they're sort of inheriting the mantle of the old application macros, and even the cross-application macros, from the early desktop office wars. The difference now is that back then there were APIs those scripts could talk to, so they could treat the desktop applications as an application platform. As David and Neil said, we're going through application user interfaces now, and when you want to do a low-code programming environment, you often want to program by example.
But then you need to generalize parts. You know, when you move this thing to this place, you might want to generalize that. That's where machine learning can start helping take literal scripts and add more abstract constructs to them.
>> So you're literally capturing some of the digital primitives in these applications, and that allows you to reveal data that machine learning can use to make observations and recommendations about patterns, and actually do code generation.
>> And, you know, I would add one thing: it's not just about the UI anymore, because we're surfacing, as we were talking about earlier, the data-driven middleware. Another way of looking at it is as what used to be the system catalog. We had big applications all talking to a central database, but now that we have so many repositories, we're sort of extricating the system catalog, so that we can look at and curate data in many locations. These tools can access that, because it has user interfaces as well as APIs. And then, in addition, you don't have to go against a database that is unprotected by an application's business logic. More and more, we have microservices and serverless functions that embody the business logic, and you can go against them, and they enforce the rules as well.
>> That's great. So, David Floyer...
>> I should point out...
>> Hold on, Jim. David Floyer, this is not a technology set that is suddenly emerging on the scene independent of other changes. There are also some important changes in the hardware itself that are making it possible to reveal data differently, so that these types of tools and technologies can be applied. I'm specifically thinking about something as mundane as SSD, flash-based storage, and other technologies that allow us to do different things with data, so that we can envision working with this stuff differently. Give us a quick rundown on the infrastructure: some of the key technologies making this possible.
This is a very exciting area, but when we're looking at low-code, for example, you're still going to need well-crafted algorithms, well-crafted, very fast code, as one of the tools of programmers. There's still going to be a need for people who can create these very fast algorithms. An exciting time all the way around for programmers. >> What were you going to say, Jim? And I want to come back and have you talk about DevOps for a second. >> Yeah, I'll add to what David was just saying. Most low-code tools are not entirely no-code, meaning what they do is auto-generate code pursuant to some business-declared specification. Professional programmers can go in and modify that code, tweak it, and optimize it. And I want to tie in now to something that George was talking about: the role of ML in this process. ML can make a huge mess, in the sense that ML can be an enabler for more people who don't know a whole lot about development to build stuff willy-nilly, so there's more code out there than you can shake a stick at, and there are no standards. But also, I saw this past week that MIT has a project, and they already have a tool, that's able to use ML to take a segment of code out of one program, transplant it into another application, and modify it so that it fits the context of the new application along various attributes, and so forth. What I'm getting at is that ML can be, according to what, say, MIT has done, a tool for enabling reuse, re-contextualization, and tweaking of code. In other words, ML can be a handmaiden of enforcing standards as code gets repurposed throughout these low-code environments. So ML is a double-edged sword, in terms of enabling stronger or weaker governance over the whole development process. >> Yeah, and I want to add to that, Jim, that it's not just that you can enforce, or at least reveal, standards and compliance; it also decreases the likelihood that we become overly tool-dependent. Going back to what you were talking about, David, it increases the likelihood that people are using the right tool for the right job, which is a pretty crucial element of this, especially as we drive adoption. So, Jim, give us a couple of quick observations on what a development organization is going to have to do differently to get going on utilizing some of these technologies. What are the top two or three things that folks are going to have to think about? >> First of all, in the low-code space, there are general-purpose tools that can bang out code for various target languages and various applications, and there are highly special-purpose tools that can go gangbusters on auto-generating web application code, mobile code, and IoT code. First and foremost, you've got to decide how much of the ocean you want to boil, in terms of low-code. I recommend that if you have a requirement for accelerating, say, mobile code development, then go with low-code tools that are geared to iOS and Android and so forth as your target platforms, and stay there. Don't feel like you have to get some monster suite that can do everything, potentially. That's one critical thing.
Another critical thing is that the tool you adopt needs to be more than just a development tool. It needs to have capabilities built in to help your team govern those code builds within whatever DevOps, CI/CD, or repository environment you have inside your organization; make sure that the tool you've got plays well with your DevOps environment, with your workflows, with your code repositories. And then, number three, we keep forgetting this, but the front-end development is still not a walk in the woods. In fact, specifying the complex business logic that drives all this code generation is stuff for professional developers more often than not. Even RPA tools are, quite frankly, not as user-friendly as they potentially could be down the road, 'cause you still need somebody to think through the end-to-end application, and then to specify, at a declarative level, the steps that need to be accomplished before the RPA tool can do its magic and build something that you might then want to crystallize as a repeatable asset in your organization. >> So it doesn't take the thinking out of application development. >> James: Oh, no, no, no no. >> All right, so, let's do this. Let's hit the action items and see what we all think folks should do next. David Floyer, let me start with you. What's the action item out of this? >> The action item is horses for courses. The right horse for the right course, the right tools for the right job. Understand where things are stateless and where things are stateful, and use the appropriate tools, and, as Jim was just saying, make sure that there is integration of those tools into the current processes and procedures for coding. >> George Gilbert, action item. >> I would say, building on that, start with pilots involving one or a couple of enterprise applications, but with less branching, if-then type logic built in. It could be hardwired-- >> So, simple flows? >> Simple flows, so that over time you can generalize that, and play with how the RPA tools or low-code tools can generalize their auto-generated code. >> Peter: Neil Raden, action item. >> My suggestion is that if you involve someone who's going to learn how to use these tools and develop an application or applications for you, make sure that you're dealing with someone who's going to be around for a while, because otherwise you're going to end up with a lot of orphan code that you can't maintain. We've certainly seen that before. >> David: That's great. >> Peter: Jim Kobielus, action item. >> Yeah, the action item is: approach low-code as tooling for the professional developer, not necessarily as a way to bring in untrained, non-traditional developers. Like Neil said, make sure that the low-code environment itself is there for the long haul, that it'll be managed and used by professional developers, and make sure that they are provided with a front-end visual workspace that helps them do their jobs most effectively, that is user-friendly for them to get stuff done in a hurry. And don't worry about bringing freelance, untrained developers into your organization, or somehow re-tasking your business analysts to become coders. That's probably not the best idea in the long run, for maintainability of the code, if nothing else.
As digital business progresses, it needs to be able to create digital assets that are predicated on valuable data faster, in a more flexible way, with more business knowledge embedded and imbued directly in how the process works. A new class of tools is emerging that we think will allow this to happen more successfully. It combines mature knowledge from the application development world with new insights into how machine learning works, and a new understanding of the impacts of automation on organization. We call these augmented programming tools, and we call them augmented programming because, in this case, the system is taking on some degree of responsibility, on behalf of the business, for generating code, identifying patterns, and ultimately doing a better job of maintaining how applications get organized and run. While these technologies have real potential power, we have to acknowledge that there is never going to be a one-size-fits-all. In fact, we believe very strongly that we're going to see a range of different tools emerge that will allow developers to take advantage of this approach, given the starting point of the artifacts that are available and the characteristics of the applications that have to be built. One that we think is particularly important is robotic process automation, or RPA, which starts with the idea of being able to discover something about the way applications work by looking at how the application behaves onscreen, encapsulating that, and generalizing it so that it can be used as a tool in future application development work. We also note that these application development technologies will not operate independent of other technology and organizational changes within the business. Specifically, on the technology side, we are encouraged that there's a continuing evolution of hardware technology that's going to take advantage of faster data access, utilizing solid-state disks, NVMe over fabric, and new types of system architectures that are much better suited for rapid shared data access. Additionally, we observe that there are new classes of technologies emerging that allow a data control plane to operate based on metadata characteristics, informed by application patterns, often through things like machine learning. One of the organizational issues that we think is really crucial is that folks should not presume that this is going to be a path for taking anybody in the business and turning them into an application developer. You still have to be able to think like an application developer and imagine how you turn a business process into something that looks like a program. But another group that has to be considered here is not just the DevOps people, although that's important: go down a level, to the good old DBAs, who have always suffered through new advances in tools that assume the data in a database is always available, and that they don't have to worry about transaction scaling or the way the database manager is set up. It would be unfortunate if the value of these tools from a collaboration standpoint, to work better with the business, to work better with the younger programmers, ended up failing because developers continue to not pay attention to how the underlying systems that currently control a lot of the data actually operate. Okay, once again, we really appreciate you participating.
Thank you, David Floyer and George Gilbert, and on the remote, Neil Raden and Jim Kobielus. We've been talking about augmented programming. This has been Wikibon Action Item. (upbeat music)
Peter Burris, Wikibon | Action Item Quick Take: Teradata, Feb 2018
(electronic pop music) >> Hi, I'm Peter Burris. Welcome to a Wikibon Action Item Quick Take. This week, Teradata announced some earnings and some changes. Neil Raden, what happened? >> A couple of years ago, and don't hold my feet to the fire, most people considered Teradata to be dying out, a company with great technology that just wasn't current with where things were going. They saw that, too, and they've done a tremendous job of reinventing themselves. The progress was evident in their fourth-quarter and full fiscal-year numbers. They weren't spectacular, but they did beat everybody's estimates, which is a good thing. They also showed something like $250 million in subscription income, which was probably zero a year and a half ago. So that's a good thing. I think it shows that they're making progress. They're not out of the woods yet, obviously, but I think the program is a good program, and the numbers are showing it. The other thing that I really, really like is that they elevated Oliver Ratzesberger to COO. So he's now basically in charge of pretty much everything, right? (laughs) He's going to take charge of the entire organization's sales, marketing, service, and so forth. He was in charge of product before this. Really good things have happened in terms of their technology with Oliver. I've known Oliver for a while; he was with eBay and did a great job there. I think he's going to stick around. Sales, products, services, and marketing under one team, that's a pretty tall order, but I think he's up to it, and I'm looking forward to 2018 and seeing how well they do. >> Excellent, Neil. So, Teradata transitioning and finding people who can make it happen. This has been a Wikibon Action Item Quick Take. (electronic pop music)
Action Item with Peter Burris
>> Hi, I'm Peter Burris. Welcome to Wikibon's Action Item. On Action Item, every week I assemble the core of the Wikibon research team, here in our theCUBE Palo Alto studios as well as remotely, to discuss a seminal topic facing the technology industry, and business overall, as we navigate this complex transition of digital business. Here in the studio with me this week, I have David Floyer. David, welcome. >> Thank you. >> And then remotely, we have George Gilbert, Neil Raden, Jim Kobielus, and Ralph Finos. Guys, thank you very much for joining today. >> Hi, how are you doing? >> Great to be here. >> This week, we're going to discuss something that's a challenge to talk about in a small format, but we're going to do our best, and that is: given that the industry is maneuvering through this significant transformation from a product orientation to a services orientation, what's that going to mean for business models? Now this is not a small question, because there are some very, very big players that the technology industry has been extremely dependent upon to drive forward invention, innovation, new ideas, and customers, and that are entirely dependent upon an ongoing stream of product revenue. On the other hand, we've got companies like AWS and others that are much more dependent upon the notion of services revenue, where the delivery of the value is in a continuous service orientation. And we include most of the SaaS players in that as well, like Salesforce, etc. So how are those crucial companies, that have been so central to the development of the technology industry, and still are essential to its future, going to navigate this transition? Similarly, how are the services companies, in those circumstances where the customer does want a private asset that they can utilize as a basis for performing their core business, going to introduce a product orientation? What's that mix, what's that match going to be? And that's what we're going to talk about today. So David, I've kind of laid it out, but really, where are we in this notion of product to service in some of these business model changes? >> It's early stage, but there are very, very profound changes going on. We can see it from the amount of business the cloud suppliers are providing. You can see that Amazon, Google, IBM, and Microsoft Azure are all putting very large resources into creating services to be provided to the business itself. But equally, we are aware that services themselves need to be on premise as well, so we're seeing the movement to true private cloud, for example, which is going to be provided as a service as well. To take an example, Oracle's Cloud at Customer provides exactly the same service on premise as Oracle provides in the cloud.
The new model coming in here with TPC, true private cloud, with the single throat to choke, is that the vendor will look after the maintenance of everything, putting in new releases and bringing things up to date, and they will have a smaller set of things that they support, and as a result, it's win-win. It's a win for the customer, because his costs are lower and he can concentrate on differentiated services. >> And secure and privatize his assets. >> Right, and the vendor wins because they have economies of scale, so they can provide it at a much lower cost as well. And even more important to both sides is that the time to value of new releases is much, much quicker, and the time to close security exposures, and a whole number of other things, improve with this new model. >> So Jim, when we think about this notion of a services orientation, ultimately it starts to change the relationship between the customer and the vendor. And the consequence of that, not surprisingly, is that a number of different considerations, whether they be metrics or other elements, become more important. Specifically, we start thinking about the experience the customer has of using something. Walk us through this transition to an experience-oriented approach to conceiving of whether or not the business model is being successful. >> Right, your customer will now perceive the experience in the context of an entire engagement that is multi-channel, multi-touchpoint, multi-device, multi-application, and so forth, where they're expecting the same experience, the same value, the same repeatable package of goodies, whatever it is they get from you, regardless of the channel through which you're touching them or they're touching you. That channel may be provided through a private, on-premises implementation of your stack, or through a public cloud implementation of your capability, or most likely through all of the above, combined into a hybrid true private cloud. Regardless of the packaging, the delivery of that value in the context of the engagement is what the customer expects: increasingly self-service, predictable, managed by the solution provider, and guaranteed, with a fast, continuous release and update cycle. So fundamentally it's an experience economy, because the customer has many other options to go to, providers that can give them as good or a better experience across the life cycle of the things you're doing for them. So bottom line, the whole notion of TPC really gets to the notion that the experience is the most important thing, the cloud experience, which can be delivered on-prem or in the public environment. And that's really the new world, with multi-cloud as the master matrix for that seamless cross-channel experience. >> We like to think of the notion of a business model as worrying about three fundamental questions. How are you going to create value? How are you going to deliver value? And how are you going to capture value? The creation is how shared it's going to be: is it going to be a network of providers, are you going to have to work with OEMs? The delivery: is it going to be online, is it going to be on-prem? Those types of questions. But this notion of value capture is a key feature, David, of how this is changing. And George, I want to ask you a question. The historical norm is that value capture took place in the form of, I give you a product, you give me cash.
But when we start moving to a services orientation, where the service is perhaps being operated and delivered by the supplier, it introduces softer types of exchange mechanisms: how are you going to use my data? Are you going to improve the fidelity of the system by pooling me with a lot of other customers? Am I losing my differentiation? My understanding of customers, is that being appropriated and munged with others to create models? Take us through this soft value capture challenge that a service provider has, and, I guess, the real challenge that the customer has as they try to privatize their assets, George. >> So, it's a big question that you're asking, and let me use an example to help make it concrete. Now we're not just selling software; we might be selling analytic data services. Let's say a vendor like IBM works with Airbus to build data services, where the aircraft that Airbus sells to its airline customers provide feedback data that IBM has access to, to improve its models of how the aircraft work, and that data also goes back to Airbus. Airbus can then use that data service to help its customers with prescriptions about how to operate better on certain routes and how to do maintenance better, not just predictive maintenance, but doing it more just in time, with fewer huge manuals. The key here is that since it's a data service that's embedded with the product, multiple vendors can benefit from that data service. And the customer of the traditional software company, in this case Airbus being the customer of IBM, has to negotiate to make sure its IP is protected to some extent, but at the same time, they want IBM to continue working with that data feedback, because it makes the models that Airbus gets access to richer over time.
And what I see happening here is, when the IT business becomes less insular, I think a lot of this tension between IT and the rest of the organization will start to dissipate. And that's what I'm hoping will happen, because they started this concept of IT vs the business, but if you went out in an organization and asked 100 people what they did, not one of them would say, "I'm the business," right? They have a function, but IT created this us vs them thing, to protect themselves, and I think that once they're able to utilize external services for hardware, for software, for whatever else they have to do, they become more like a commercial operation, like supply-side, or procurement, or something, and managing those relationships, and getting the services that they're paying for, and I think ultimately that could really help organizations, by breaking down those walls in IT. >> So it used to be that an IT decision to make an investment would have uncertain returns, but certain costs, and there are multiple reasons why those returns would be uncertain, or those benefits would be uncertain. Usually it was because some other function would see the benefits under their umbrella, you know, marketing might see increased productivity, or finance would see increased productivity as a consequence of those investments, but the costs always ended up in IT. And that's one of the reasons why we yet find ourself in this nasty cycle of constantly trying to push costs down, because the benefits always showed up somewhere else, the costs always showed up inside IT. But it does raise this question ultimately of, does this notion of an ongoing services orientation, is it just another way of saying, we're letting a lock in back in the door in a big way? Because we're now moving from a relationship, a sourcing relationship that's procurement oriented, buy it, spend as little money as possible, get value out of it, as opposed to a services orientation, which is effectively, move responsibility for this part of the function off into some other service provider, perpetually. And that's going to have a significant implication, ultimately, on the question of whether or not we buy services, default to services. Ralph, what do you think, where are businesses going to end up on this, are we just going to see everything end up being a set of services, or is there going to be some model that we might use, and I'll ask the team this, some model that we might use to conceive when it should be a purchase, and when it should be a service? What do you think, Ralph? >> Yeah, I think the industry's gravitating towards a service model, and I think it's a function of differentiation. You know, if you're an enterprise, and you're running a hundred different workloads, and 15 of them are things that really don't differentiate you from your competition, or create value that's differentiable in some kind of way, it doesn't make any sense to own that kind of functionality. And I think, in the long run, more and more aspects, or a higher percentage of workload is going to be in that category. There will always be differentiation workloads, there will always be workloads requiring unique kinds of security, especially around transactions. But in the net, the slow march of service makes a lot of sense to me. >> What do you think, guys? Are we going to see, uh, do we agree with Ralph, number one? And number two, what about those exceptions? 
Is there a framework that we can start to utilize to help folks imagine what the exceptions to that rule are? What do you think, David? >> Sure, I think that there are circumstances when... >> Well first, do we generally agree with the march? >> Absolutely, absolutely. >> I agree too. >> Yes, I fully agree that more and more services are going to be purchased, and a smaller percentage of the IT budget of an enterprise will go into specific purchases of assets. But there are some circumstances where you will want to make sure that you have those assets on premise, where there is no other call on those assets, either from the courts, or from a difference of priority between what you need and what a service provider needs. In both those circumstances, you may well choose to purchase, or to have the asset on premise, so that it's clearly yours, and clearly your priority as to when and how to use it. An example might be, if you are a bank, you need to guarantee that all of that information is yours, because you need to know what assets are owned by whom; and if you give it to a service provider, there are circumstances where there could be a legal claim on that service provider which would mean that you essentially go out of business. So there are very clear examples of where that could happen, but in general, I agree. There's one other thing I'd like to add to this conversation. The interesting thing from an enterprise IT point of view is that you'll have fewer people to do business with; you'll be buying a package of services. So that means many of the traditional people that you did business with, both software and hardware, will not be your suppliers anymore, and they will have to change their business models to deal with this. For example, Permabit has become an OEM supplier of data management capabilities. And Kaminario has just announced that it's becoming a software vendor. >> Nutanix. >> Nutanix is becoming a software vendor, and is either allowing other people to take the single throat to choke, or putting together particular packages where it will be the single throat to choke. >> Even NetApp, which is a pretty consequential business that has been around for a long time, is moving in this direction.
And Kaminario will be be a case in point. They need metadata about the whole system, as a whole, to help them know how to apply the best patches to their piece of software, and the same is true for other suppliers of software, the Permabit, or whoever those are, and it's the responsibility of that owner or the customer to make sure that all of those people can work in that OEM environment effectively, and improve their product as well. >> Yeah, so great conversation guys. This is a very, very rich and fertile domain, and I think it's one that we're going to come back to, if not directly, at least in talking about how different vendors are doing things, or how customers have to, or IT organizations have to adjust their behaviors to move from a procurement to a strategic sourcing set of relationships, etc. But what I'd like to do now, as we try to do every week, is getting to the Action Item round, and I'm going to ask each of you guys to give me, give our audience, give our users, the action item, what do they do differently on next Monday as a consequence of this conversation? And George Gilbert, I'm going to start with you. George, action item. >> Okay, so mine is really an extension of what we were talking about when I was raising my example, which is your OEM supplier, let's say IBM, or a company we just talked to recently, C3 IoT, is building essentially what are application data services that would accompany your products that you, who used to be a customer, are selling a supply chain master, say. So really trying to boil that down is, there is a model of your product or service could be the digital twin, and as your vendor keeps improving it, and you offer it to your customers, you need to make sure that as the vendor improves it, that there is a version that is backward compatible with what you are using. So there's the IP protection part, but then there's also the compatibility protection part. >> Alright, so George, your action item would be, don't focus narrowly on the dollars being spent, factor those soft dollars as well, both from a value perspective, as well an ongoing operational compatibility perspective. Alright, Jim Kobielus, action item. >> Action item's for IT professionals to take a quick inventory of what of your assets in computing you should be outsourcing to the cloud as services, it's almost everything. And also, to inventory, what of your assets must remain in the form of hard discreet tangible goods or products, and my contention is that, I would argue that the edge, the OT, the operational technology, the IOT, sensors and actuators that are embedded in your machine tools and everything else, that you're running the business on, are the last bastion of products in this new marketplace, where everything else becomes a service. Because the actual physical devices upon which you've built your OT are essentially going to remain hard tangible products forevermore, of necessity, and you'll probably want to own those, because those are the very physical fabric of your operation. >> So Jim, your action item is, start factoring the edge into your consideration of the arrangements of your assets, as you think about product vs services. >> Yes. >> Neil Raden, action item. >> Well, I want to draw a distinction between actually, sorry, between actually, ah damn, sorry. (laughs) >> Jim: I like your fan, Neil. >> Peter: Action item, get your monitor right. >> You know. 
I want to draw the distinction between actually moving to a service, as opposed to just doing something that's a funding operate. Suppose we have 500 Oracle applications in our company running on 35 or 40 Oracle instances, and we have this whole army of Oracle DBAs, and programmers, and instance tuners, and we say well, we're going to give all the servers to the Salvation Army, and we're going to move everything to the Oracle cloud. We haven't really changed anything in the way the IT organization works. So if we're really looking for change in culture and operation, and everything else, we have to make sure we're thinking about how we're changing, reading the way things get done and managed in the organization. And I think just moving to the cloud is very often just a budgetary thing. >> So your action item would be, as you go through this process, you're going to re-institutionalize the way you work, get ready to do it. Ralph Finos, action item. >> Yeah, I think if you're a vendor, if you're an IT industry vendor, you kind of want to begin to look a lot like, say, a Honda or Toyota in terms of selling the hardware to get the service in the long term relationship in the lock-in. I think that's really where the hardware vendors, as one group of providers, is going to want to go. And I think you want, as a user and an enterprise, I think you're going to want to drive your vendors in that direction. >> So your action item would be, for a user anyway, move from a procurement orientation that's focused on cost, to a vendor management orientation that's focused on co-development, co-evolution of the value that's being delivered by the service. David Floyer, action item. >> So my action item is for vendors, a whole number of smaller vendors. They have to decide whether they're going to invest in the single most expensive thing that they can do, which is an enterprise sales force, for direct selling of their products to enterprise IT, and-or whether they're going to take an OEM type model, and provide services to a subset, for example, to focus on the cloud service providers, which Kaminario are doing, or focus on selling indirectly to all of the, the vendors who are owning the relationship with the enterprise. So that, to me, is a key decision, very important decision as the number of vendors will decline over the next five years. >> Certainly, what we have, visibility to what we have right now, so your action item is, as a small vendor, choose whose sales force you're going to use, yours or somebody else's. >> Correct. >> Alright. So great conversation guys. Let me kind of summarize this a bit. This week, we talked about the evolving business models in the industry, and the basic notion, or the reason why this has become such an important consideration, is because we're moving from an era where the types of applications that we were building were entirely being used internally, and were therefore effectively entirely private, vs increasingly trying to extend even those high-volume transaction processing applications into other types of applications that deliver things out to customers. So the consequence of the move to greater integration, greater external delivery of things within the business, has catalyzed this movement to the cloud. 
And as a consequence, this significant reformation from a product to a services orientation is gripping the industry, and that's going to have significant implications on how both buyers and users of technology, and sellers and providers of technology, are going to behave. We believe that the fundamental question is going to come down to: what process are you going to use to create value, with partnerships, or going it alone? How are you going to deliver that value, through an OEM sales force, or through a network of providers? And how are you going to capture value out of that process, through money, through capturing of data, or more of an advertising model? These are not just questions that feature in the consumer world; they feature significantly in the B2B world as well. Over the next few years, we expect to see a number of changes start to manifest themselves. We expect to see, for example, a greater drive toward the experience of the customer as a dominant consideration. And today, it's the cloud experience that's driving many of these changes: can we get the cloud experience both in the public cloud and on premise, for example? Secondly, we expect to see a lot of emphasis on how soft exchanges of value take place, and how we privatize those exchanges. Hard dollars are always going to flow back and forth, even if they take on a subscription as opposed to a purchase orientation, but what about the data that comes out of operations? Who owns that, and who gets to lay claim to future revenue streams as a consequence of having that data? Similarly, we expect to see a new model that IT can use to focus its efforts on more of a business orientation, treating IT not as the manager of hardware assets, but rather as the manager of business services that have to remain private to the business. And then finally, our expectation is that this march is going to continue. There will be a significant and ongoing drive to increase the role that a services business model plays in how value is delivered and captured, partly because of the increasingly dominant role that data plays as an asset in digital business. But we do believe that there are some concrete formulas and frameworks that can be applied to best understand how to arrange those assets, and how to institutionalize the work around those assets, and that's a key feature of how we're working with our customers today. Alright, once again, team, thank you very much for this week's Action Item. From theCUBE studios in beautiful Palo Alto, I want to thank David Floyer, George Gilbert, Jim Kobielus, Neil Raden, and Ralph Finos. This has been Action Item.
>> Hi, I'm Peter Burris, Welcome to Wikibon's Action Item. (slow techno music) Once again Wikibon's research team is assembled, centered here in The Cube Studios in lovely Palo Alto, California, so I've got David Floyer and George Gilbert with me here in the studio, on the line we have Neil Raden and Jim Kobielus, thank you once again for joining us guys. This week we are going to talk about an issue that has been dominant consideration in the industry, but it's unclear exactly what direction it's going to take, and that is the role that open source is going to play in the next generation of solving problems with technology, or we could say the role that open source will play in future digital transformations. No one can argue whether or not open source has been hugely consequential, as I said it has been, it's been one of the major drivers of not only new approaches to creating value, but also new types of solutions that actually are leading to many of the most successful technology implementations that we've seen ever, that is unlikely to change, but the question is what formal open source take as we move into an era where there's new classes of individuals creating value, like data scientists, where those new problems that we're trying to solve, like problems that are mainly driven by the role that data as opposed to code plays, and that there are new classes of providers, namely service providers as opposed to product or software providers, these issues are going to come together, and have some pretty important changes on how open source behaves over the next few years, what types of challenges it's going to successfully take on, and ultimately how users are going to be able to get value out of it. So to start the conversation off George, let's start by making a quick observation, what has the history of open source been, take us through it kind of quickly. >> The definition has changed, in its first incarnation it was fixed UNIX fragmentation and the high price of UNIX system servers, meaning UNIX the proprietary UNIX's and the proprietary servers they were built, that actually rather quickly morphed into a second incarnation where it was let's take the Linux stack, Linux, Apache, MySQL, PHP, Python, and substitute that for the old incumbents, which was UNIX, BEA Web Logic, the J2E server and Oracle Database on an EMC storage device. So that was the collapse of the price of infrastructure, so really quickly then it morphed into something very, very different, which was we had the growth of the giant Internet scale vendors, and neither on pricing nor on capacity could traditional software serve their needs, so Google didn't quite do open source, but they published papers about what they did, those papers then were implemented. >> Like Map Produce. Yeah Map Produce, Big Table, Google File System, those became the basis of Hadoop which Yahoo open sourced. 
There is another incarnation going, that's probably getting near its end of life right now, which is sort of a hybrid, where you might take Kafka which is open source, and put sort of proprietary bits around it for management and things like that, same what Cloudera, this is called the open core model, it's not clear if you can build a big company around it, but the principle is, the principle for most of these is, the value of the software is declining, partly because it's open source, and partly because it's so easy to build new software systems now, and the hard part is helping the customer run the stuff, and that's where some of these vendors are capturing it. >> So let's David turn our attention to how that's going to turn into actual money. So in this first generation of open source, I think up until now, certainly Red Hat, Canonical have made money by packaging and putting forward distributions, that have made a lot of money, IBM has been one of the leaders in contributing open source, and then turning that into a services business, Cloudera, Horton Works, NapR, some of these other companies have not generated the same type of market presence that a Red Hat or Canonical have put forward, but that doesn't mean there aren't companies out there that have been very successful at appropriating significant returns out of open source software, mainly however they're doing it as George said, as a service, give us some examples. >> I think the key part of open source is providing a win-win environment, so that people are paid to do stuff, and what is happening now a lot is that people are putting stuff into open source in order that it becomes a standard, and also in order that it is maintained by the community as a whole. So those two functions, those two capabilities of being paid by a company often, by IBM or by whoever it is to do something on behalf of that company, so that it becomes a standard, so that it becomes accepted, that is a good business model, in the sense that it's win-win, the developer gets recognition, the person paying for it achieves their business objective of for example getting a standard recognized-- >> A volume. >> Volume, yes. >> So it's a way to get to volume for the technology that you want to build your business around. >> Yes, what I think is far more difficult in this area is application type software, so where open source has been successful, as George said is in the stacks themselves, the lower end of the stacks, there are a few, and they usually come from very very successful applications like Word, Microsoft Word, or things like that where they can be copied, and be put into open source, but even there they have around them software from a company, Red Hat or whoever it is, that will make it successful. >> Yes but open office wasn't that successful, get to the kind of, today we have Amazon, we have some of the hyper scalars that are using that open core model and putting forward some pretty powerful services, is that the new Red Hat, is that the new Canonical? >> The person who's made most money is clearly Amazon, they took open source code and made it robust, and made it in volume, those are the two key things you to have for success, it's got to be robust, it's got to be in volume, and it's very difficult for the open source community to achieve that on its own, it needs the support of a large company to do that, and it needs the value that that large company is going to get from it, for them to put those resources in. 
So that has been a very successful model a lot of people decry it because they're not giving back, and there's an argument-- >> They being Amazon, have not given back quite as much. >> Yes they have relatively very few commiters. I think that's more of a problem in the T&Cs of the open source contract, so those should probably be changed, to put more onus on people to give back into the pool. >> So let me stop you, so we have identified one thing that is likely going to have to be evolved as we move forward, to prevent problems, some of the terms and conditions, we try to ensure that there is that quid pro quo, that that win-win exists. So Jim Kobielus, let me ask you a question, open source has been, as David mentioned, open source has been more successful where there is a clear model, a clear target of what the community is trying to build, it hasn't been quite successful, where it is in fact is expected that the open source community is going to start with some of the original designs, so for example, there's an enormous plethora of big data tools, and yet people are starting to ask why is big data more successful, and partly it's because putting these tools together is so difficult. So are we going to see the type of artifacts and assets and technologies associated with machine learning, AI, deep learning et cetera, easily lend themselves to an open source treatment, what do you think? >> I think were going to see open source very much take off in the niches of the deep learning and machine learning AI space, where the target capabilities we've built are fairly well understood by our broad community. Machine learning clearly, we have a fair number of frameworks that are already well established, with respect to the core capabilities that need to be performed from modeling and training, and deployment of statistical models into applications. That's where we see a fair amount of takeoff for Tensor Flow, which Google built in an open source, because the core of deep learning in terms of the algorithm, in terms of the kinds of functions you perform to be able to take data and do feature engineering and algorithm selection are fairly well understood, so those are the kinds of very discreet capabilities for which open source code is becoming standard, but there's many different alternative frameworks for doing that, Tensor Flow being one of them, that are jostling for presence in the market. The term is commoditized, more of those core capabilities are being commoditized by the fact that there well understood and agreed to by a broad community. So those are the discrete areas we're seeing the open source alternatives become predominant, but when you take a Tensor Flow and combine it with a Spark, and with a Hadoop and a Kafka and broader collections of capabilities that are needed for robust infrastructure, those are disparate communities that each have their own participants committed and so forth, nobody owns that overall step, there's no equivalent of a lamp stack were all things to do with deep learning machine learning AI on an open source basis come to the fore. If some group of companies is going to own that broadening stack, that would indicate some degree of maturation for this overall ecosystem, that's not happening yet, we don't see that happening right now. 
>> So Jim, I want to, my bias, I hate the term commoditization, but I Want to unify what you said with something that David said, essentially what we're talking about is the agreement in a collaborative open way around the conventions of how we perform work that compute model which then turns into products and technologies that can in fact be distributed and regarded as a standard, and regarded as a commodity around which trading can take place. But what about the data side of things George, we have got, Jim's articulated I think a pretty good case, that we're going to start seeing some tools in the marketplace, it's going to be interesting to see whether that is just further layering on top of all this craziness that is happening in the big data world, and just adding to it in the ML world, but how does the data fit into this, are we going to see something that looks like open source data in the marketplace? >> Yes, yes, and a modified yes. Let me take those in two pieces. Just to be slightly technical, hopefully not being too pedantic, software used to mean algorithms and data structures, so in other words the recipe for what to do, and the buckets for where to put the data, that has changed in the data in terms of machine learning, analytic world where the algorithms and data are so tied together, the instances of the data, not the buckets, that the data changed the algorithms, the algorithms change the data, the significance of that is, when we build applications now, it's never done, and so you go, the construct we've been focusing on is the digital twin, more broadly defined than a smart device, but when you go from one vendor and you sort of partially build it, it's an evergreen thing, it's never done, then you go to the next vendor, but you need to be able to backport some core of that to the original vendor, so for all intents and purposes that's open source, but it boils down to actually the original Berkeley license for open source, not the Apache one everyone is using now. And remind me of the other question? >> The other issue is are we going to see datasets become open source like we see code bases and code fragments and algorithms becoming open source? >> Yes this is also, just the way Amazon made infrastructure commoditized and rentable, there are going to be many datasets were they used to be proprietary, like a Google web crawl, and Google knowledge graph of disambiguation people, places and things, some of these things are either becoming open source, or openly accessible by API, so when you put those resources together you're seeing a massive deflation, or a massive shrinkage in the capital intensity of building these sorts of apps. >> So Neil, if we take a look at where we are this far, we can see that there is, even though we're moving to a services oriented model, Amazon for example is a company that is able to generate commercial rents out of open source software, Jim has made a pretty compelling case that open source software can be, or will emerge out of the tooling world for some of these new applications, there are going to be some examples of datasets, or at least APIs to datasets that will look more open source like, so it's not inconceivable that we'll see some actual open source data, I think GDPR, and some other regulations, we're still early in the process of figuring out how we're going to turn data into commodity, using Jim's words. But what about the personnel, what about the people? 
There were reasons why developers moved to open source, some of the soft reasons that motivated them to do things: who they work with, getting recognition, working on relevant projects, working with relevant technologies. Are we going to see a similar set of soft motivators diffuse into the data scientist world, so that these individuals, the real ones who are creating the real value, are going to have some degree of motivation to participate with each other and collaborate with each other in an open source way? What do you think? >> Good question. I think the answer is absolutely yes, but it's not unique to data scientists. Academics, scientists in molecular biology, civil engineers, they all want to be recognized by their peers on some level, beyond just what they're doing in their organization. But there is another segment of data scientists who are just guys working for a paycheck, generating predictive analysis and helping the company along and so forth, and that's what they're going to do. The whole open source thing, you remember object programming, you remember JavaBeans, you remember Web Services: we tried to turn developers into librarians. And when they wanted to develop something, they'd go to Github. I go to Github right now and say I'm looking for a utility that can figure out why my face is so pink on this camera, and I get 1000 listings of programs and have no idea which ones work and which ones don't. So I think the whole open source thing is about to explode, it already has, in terms of piece parts. But I think managing it in an organization is different, and when I say an organization, there's the Googles and the Amazons and so forth of the world, and then there's everybody else. >> Alright, so we've identified one area where we can anticipate some change will be required to modernize the open source model, the licensing model. We see another one where the open source community is going to have to understand how to move from a product and code orientation to a data and service orientation. Can we think of any others? >> There is one other that I'd like to add, and that is compliance. You addressed it to some extent, but compliance brings some real-world requirements onto code and data. You were saying earlier on that one of the options is bringing code and data together so that they intermingle and change each other. I wonder whether, when you look at it from a compliance point of view, that will actually pass muster, because from a compliance point of view you need to prove, for example in the health service, that it works, and that it works the same way every time. And if you've got a set of code and data that doesn't work the same way every time, you probably are going to get pushback from the people who regulate health care: this is not acceptable, you can't do it that way, you'll have to find another way to do it. Is it the same each time? So the point I'm making-- >> This is a bigger issue than just open source. This is an issue where the idea of continuous refinement of the code, and the data-- >> Automatic refinement. >> Automatic refinement, could in fact mean we're going to have to change some compliance laws. Is it possible the open source community might actually help us understand that problem? >> Absolutely, yes.
>> I think that's a good point, I think that's a really interesting point, because you're right George, the idea of continuous development is not something that, for example, Sarbanes-Oxley really contemplates. Sarbanes-Oxley says, "Oh yeah, I get this: I acknowledge that this data is right, and I acknowledge that the process by which it was created was right." Now, this is another subject, let's bring this up later, but I think it's relevant here, because in many respects it's the difference between an income statement and a balance sheet, right? Saying it's good now is kind of like the income statement. But let's come back to this, because I think it's a bigger issue. You're asserting the open source community in fact may help solve this problem by coming up with new ways of conceiving, say, versioning of things, and stamping things, and what is a distribution, what isn't a distribution, with some of these more tightly bound sets of-- >> What we find normally is that-- >> Jim: I think that we are going to-- >> Peter: Go on Jim. >> Just to elaborate on what Peter was talking about, that whole theme: I think what we're going to see is more open source governance of models and data within distributed development environments, using technologies like blockchain as a core enabler for these workflows, for these, as it were, generally distributed hyperledgers that indicate the latest and greatest version of a given dataset, or a given model, being developed somewhere around some common solution domain. I think those kinds of environments for governance will become critically important as this pipeline for development and training and deployment of these assets gets ever more distributed and virtual. >> By the way Jim, I actually had a conversation with a very large open source distribution company a few months ago about this very point, and I agree. I think blockchain in fact could become a mechanism by which we track intellectual property, track intellectual contributions, and find ways to then monetize those contributions, going back to what you were saying David, and perhaps that becomes something that looks like the basis of a new business model for how open source goes after these looser, goosier problems. >> But also to guarantee integrity without necessarily going through a central-- >> Very important, very important, because at the end of the day George-- >> It's always hard to find somebody to maintain it. >> Right. One of the big challenges that companies today are having as they do open source is that they want to be able to keep track of their intellectual property, both from a contribution standpoint, but also inside their own business, because they're very, very concerned that the stuff that they're creating that's proprietary to their business, in a digital sense, might leave the building, and that's not something a lot of banks, for example, want to see happen.
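(Editor's aside: a concrete, if deliberately simplified, illustration of the governance pattern Jim and Peter are describing. The sketch below hash-chains successive versions of a model artifact so that any tampering with the lineage is detectable. It is a toy, single-node stand-in for a real distributed ledger; the record fields and function names are our own assumptions, not any particular product's API.)

```python
# Toy hash-chained ledger for tracking model/dataset versions.
# A stand-in for the distributed hyperledger idea; not a real blockchain.
import hashlib
import json
import time

def record_version(chain, artifact_bytes, author, note):
    """Append a version record whose hash covers the previous record."""
    prev_hash = chain[-1]["record_hash"] if chain else "0" * 64
    record = {
        "artifact_hash": hashlib.sha256(artifact_bytes).hexdigest(),
        "author": author,
        "note": note,
        "timestamp": time.time(),
        "prev_hash": prev_hash,
    }
    record["record_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    chain.append(record)
    return record

def verify(chain):
    """Recompute every link; any edit to history breaks the chain."""
    prev_hash = "0" * 64
    for record in chain:
        body = {k: v for k, v in record.items() if k != "record_hash"}
        if body["prev_hash"] != prev_hash:
            return False
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != record["record_hash"]:
            return False
        prev_hash = record["record_hash"]
    return True

chain = []
record_version(chain, b"model-weights-v1", "data-scientist-a", "initial training run")
record_version(chain, b"model-weights-v2", "data-scientist-b", "retrained on new data")
print(verify(chain))  # True; alter any byte of history and this returns False
```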
>> I want to stick one step into this logic process that I think we haven't yet discussed, which is, we're talking about now how end customers will consume this, but there's still a disconnect in terms of how the open source software vendors, or even hybrid ones, can get to market with this stuff. Because between open source pricing models and pricing levels, we've seen a slow motion price collapse. And the problem is that the new go to market motion is actually made up of many motions, which are discover, learn, try, buy, recommend, and within each of those, the motion is different. And you hear, it's almost like a reflex, like when your doctor hits you on the knee and your leg kind of bounces, everybody says yeah, we do land and expand. And land was the discover, learn, try, augmented with inside sales. But the recommend and standardize is still traditional enterprise software, where someone's got to talk to IT and procurement about fitting into the broader architecture and infrastructure of the firm. And to do that you still need what has always been called the most expensive migratory workforce in the world, which is an enterprise sales force. >> But I would suggest there's a big move towards standardization of stacks. True private cloud is about having a stack which is well established, along with the relationship between all the different piece parts, and behind the stack there is somebody who is responsible for putting that stack together and maintaining that stack. >> So for a moment pretend that you are a CIO. Are you going to buy OpenStack or are you going to buy the VMware stack? >> I'm going to buy the VMware stack. >> Because that's about open source? >> No, the point I'm making is that those open source communities or pieces would then be absorbed into the stack as an OEM supplier, as opposed to a direct supplier, and I think that's true for all of these stacks. If you look at the stack, for example, and you have code from NetApp or whatever it is that's in that code and they're contributing it, you need an OEM agreement with that provider, and it doesn't necessarily have to be open source. >> Bottom line is, this stuff is still really, really complicated. >> But this model of being an OEM provider is very different from growing an enterprise sales force. You're selling something that goes into the cost of goods sold of your customer, and that cost of goods sold better be less than 15 percent, and preferably less than five percent. >> Your point is, if you can't afford a sales force, an OEM agreement is a much better way of doing it. >> You have to get somebody else's sales force to do it for you. So look, I'm going to do the Action Item on this. I think that this has been a great conversation again. David, George, Neil, Jim, thanks a lot. So here's the Action Item. Nobody argues that open source hasn't been important, and nobody suggests that open source is not going to remain important. What we think, based on our conversation today, is that open source is going to go through some changes, and those changes will occur as a consequence of a few things: new folks that are going to be important to this, like data scientists, tied to some of the new streams of value in the industry, may not have the same motivations that the old developer world had; new types of problems are inherently more data oriented as opposed to process-oriented; and it's not as clear that the whole concept of data as an artifact, data as a convention, data as standards and commodities, is going to be as easy to define as it was in the code world.
As well as, ultimately, IT organizations increasingly moving towards an approach that focuses more on the consumption of services, as opposed to the consumption of product. So for these and many other reasons, our expectation is that the open source community is going to go through its own transformation as it tries to support current and future digital transformations. Now, some of the areas that we think are going to be transformed: we expect that there's going to be some pressure on licensing; we think there's going to be some pressure on how compliance is handled, and we think the open source community may in fact be able to help in that regard; and we think, very importantly, that there will be some pressure on the open source community as it tries to rationalize how it conceives of the new compute models, the new design models. Because where open source has always been very successful is when we have a target we can collaborate to replicate, to replace that target or provide a substitute. I think we can all agree that in 10 years we will be talking about how long it took open source to in fact put forward that TPC stack, to define the true private cloud stack. So our expectation is that open source is going to remain relevant. We think it's going to go through some consequential changes, and we look forward to working with our clients to help them navigate what some of those changes are, both as committers and also as consumers. Once again guys, thank you very much for this week's Action Item. This is Peter Burris, and until next week, thank you very much for participating on Wikibon's Action Item. (slow techno music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
David | PERSON | 0.99+ |
Jim Kobielus | PERSON | 0.99+ |
Neil Raden | PERSON | 0.99+ |
David Floyer | PERSON | 0.99+ |
George Gilbert | PERSON | 0.99+ |
George | PERSON | 0.99+ |
Peter Burris | PERSON | 0.99+ |
Jim | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Peter | PERSON | 0.99+ |
Neil | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Canonical | ORGANIZATION | 0.99+ |
Peter Barris | PERSON | 0.99+ |
Amazons | ORGANIZATION | 0.99+ |
Horton Works | ORGANIZATION | 0.99+ |
Wikibon | ORGANIZATION | 0.99+ |
two pieces | QUANTITY | 0.99+ |
less than five percent | QUANTITY | 0.99+ |
Googles | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
Red Hat | TITLE | 0.99+ |
Yahoo | ORGANIZATION | 0.99+ |
NapR | ORGANIZATION | 0.99+ |
Word | TITLE | 0.99+ |
less than 15 percent | QUANTITY | 0.99+ |
Cloudera | ORGANIZATION | 0.99+ |
two functions | QUANTITY | 0.99+ |
two capabilities | QUANTITY | 0.99+ |
next week | DATE | 0.99+ |
PHP | TITLE | 0.99+ |
Python | TITLE | 0.99+ |
MySQL | TITLE | 0.99+ |
second incarnation | QUANTITY | 0.99+ |
first incarnation | QUANTITY | 0.99+ |
10 years | QUANTITY | 0.98+ |
Palo Alto, California | LOCATION | 0.98+ |
This week | DATE | 0.98+ |
GDPR | TITLE | 0.98+ |
two key | QUANTITY | 0.98+ |
Linux | TITLE | 0.98+ |
today | DATE | 0.97+ |
1000 listings | QUANTITY | 0.97+ |
one | QUANTITY | 0.97+ |
UNIX | TITLE | 0.97+ |
this week | DATE | 0.96+ |
Github | ORGANIZATION | 0.96+ |
first generation | QUANTITY | 0.96+ |
Vmware | ORGANIZATION | 0.96+ |
each | QUANTITY | 0.95+ |
Kafka | TITLE | 0.95+ |
one step | QUANTITY | 0.94+ |
each time | QUANTITY | 0.93+ |
JavaBeans | TITLE | 0.92+ |
both | QUANTITY | 0.91+ |
BEA Web Logic | ORGANIZATION | 0.91+ |
Action Item | Why Hardware Matters
>> Hi, I'm Peter Burris, and welcome to Wikibon's Action Item. (funky electronic music) We're broadcasting, once again, from theCUBE studios in lovely Palo Alto. And I've got the Wikibon research team assembled here with me. I want to introduce each of them. David Floyer. >> Hi. >> And George Gilbert, who are here in the studio with me. Remote we have Jim Kobielus, Stu Miniman, and Neil Raden. Thanks everybody for joining. Now, we're going to talk about something that is increasingly overlooked, but that we still think has enormous importance in the industry. And that is, does hardware matter? For 50 years, in many respects, the rate of change in the industry has been strongly influenced, if not determined, by the rate of change in the underlying hardware technologies. As hardware technologies improved, the result was that software developers would create software that would fill up that capacity. But we're experiencing a period where some of the traditional approaches to improving hardware performance are slowing down. We're also seeing that there is an enormous, obviously, move to the cloud. And the cloud is promising different ways of procuring the infrastructure capacity that businesses need. So that raises the question: with potential technology constraints on the horizon, and an increasing emphasis on utilization of the cloud, are systems integration and hardware going to continue to be a viable business option, and something that users are going to have to consider as they think about how to source their infrastructure? Now there are a couple of considerations today that are making this important right now. Jim Kobielus, what are some of those considerations that increase the likelihood that we'll see some degree of specialization that's likely to turn into different hardware options? >> Yeah Peter, hi everybody. I think one of the core considerations is that edge computing has become the new approach to architecting enterprise and consumer grade applications everywhere. And edge computing is nothing without hardware on the edge, devices as well as hubs and gateways and so forth, to offload and handle much of the processing needed. And increasingly, it's AI, artificial intelligence, deep learning, machine learning. So going forward now, looking at how it's shaping up, hardware's critically important. Burning AI onto chipsets, low power, low cost chips that can do deep learning, machine learning, natural language processing, fast, cheaply, in an embedded form factor, is critically important for the development of edge computing as a truly end-to-end distributed fabric for the next generation of applications. >> So Jim, are we likely to see greater specialization of some of those AI algorithms and data structures and whatnot drive specialization in the characteristics of the chips that support them, or is it all going to just default down to TensorFlow or GPUs? >> It has been GPUs for AI. Much of AI, in terms of training and inferencing, has been in the cloud, and much of it has been based, historically, heretofore, on GPUs, with Nvidia being the predominant provider. However, GPUs historically have not been optimized for AI, because they've been built for gaming and consumer applications.
However, the next generation, the current generation, from Nvidia and others, are chipsets, in the cloud and other form factors, built for AI. They incorporate what's called tensor core processing, really highly densely packed tensor core components, to be able to handle deep learning neural networks very fast and very efficiently for inferencing and training. So Nvidia and everybody else now is making a big bet on tensor core processing architectures. Of course Google's got one of the more famous ones, their TPU architecture, but they're not the only ones. So going forward, in the AI ecosystem, especially for edge computing, there increasingly will be a blend of GPUs, for cloud based core processing, and TPUs or similar architectures for device-level processing. But also, FPGAs, ASICs, and CPUs are not out of the running, because for example, CPUs are critically important for systems on a chip, which are quite fundamentally important for unattended as well as attended operation in edge devices, to handle things like natural language processing for conversational UIs. >> So that suggests that we're going to see a lot of new architecture thinking introduced as a consequence of trying to increase the parallelism through a system by incorporating more processing at the edge. >> Jim: Right. >> That's going to have an impact on volume economics and where the industry goes from an architecture standpoint. David Floyer, does that ultimately diminish the importance of systems integration as we move from the edge back towards the core and towards cloud, in whatever architectural form it takes? >> I think the opposite; it actually means systems integration becomes more important. And the key question has been, can software do everything? Do we need specialized hardware for anything? And the answer is yes, because the standard x86 systems are just not improving in speed at all. >> Why not? >> There's a long answer to that. But it's to do with the amount of heat that's produced, and the degree of density that you can achieve. Even the chip itself-- >> So the ability to control bits flying around the chip-- >> Correct. >> Is going down-- >> Right. >> As a consequence of dispersion of energy and heat into the chip. >> Right. There are a lot of other factors as well. >> Other reasons as well, sure. >> But the important thing is, how do you increase the speed? A standard x86 cycle time, with its instruction set, that's now fixed. So what can you do? Well, you can obviously reduce the number of instructions and then parallelize those instructions within that same space. And that's going to give you a very significant improvement. And that's the basis of GPUs and FPGAs. So with GPUs, for example, you could have floating point arithmetic, or standard numbers, or extended floating point arithmetic. All of those help in calculations, large scale calculations. The FPGAs are much more flexible; they can be programmed in very good ways, so they're useful for smaller volume things. ASICs are important, but what we're seeing is a movement to specialized hardware to process AI in particular. And one area that is very interesting to me is the devices at the edge, what we call the level one systems. Those devices need to be programmed very, very intently for what is happening there.
They are bringing all the data in, they're making that first line reduction of data, they're making the inferences, they're taking the decisions based on that information coming in, and then sending much less data up to the level twos above them. So what are examples of this type of system that exist now? Because in hardware, volume matters: the more of something you produce, the more dramatically the costs go down. >> And software too; in the computing industry, volume matters. >> Absolutely, absolutely. >> I think it's pretty safe to say that. >> Yeah, absolutely. So volume matters, and it's interesting to look at one of the first real volume AI applications, which is in the iPhone X. Apple have introduced the latest chipset. It has neural networks within it, it has GPUs built in, and it's being used for simple things like face recognition and other areas of AI. And the interesting thing is the cost of this. The cost of that whole set, the chip itself, is $27. The total cost with all the sensors and everything, to do that sort of AI work, is $100. And that's a very low bar, and very, very difficult to achieve in other ways. So this level of integration for the consumer business, in my opinion, is going to have a very significant effect on the choices that are made by manufacturers of devices going into industry and other things. They're going to take advantage of this in a big way. >> So Neil Raden, we've been down the FPGA road before. For example, in the past, data warehousing introduced the idea, or it was thought, that data warehouse workloads, which did not necessarily lend themselves to a lot of the prevailing architectures in the early 90s, could get an enormous acceleration by giving users greater programmable control over the hardware. How'd that work out? >> Well, for Netezza, for example, it actually worked out pretty well for a while. What they did is they used the FPGA to handle the low-level data stuff, maybe reducing the complexity of the query before it was passed on to the CPUs, where things ran in parallel. But that was before Intel introduced multi-core chips, and that kind of killed the effectiveness. And the other thing was, it was highly proprietary, which made it impossible to take up to the cloud. And there was no programming. I always laugh when people say FPGA, because it should have been called FGA: there was no end user programming of an FPGA. >> So that means that, although we still think we're going to see some benefit from this, it kind of brings us back to the cloud. Because if hardware economics improve with scale, then that says that there are a few companies that are likely to drive a lot of the integration issues. If things like FPGAs don't get broadly diffused and programmed by large numbers of people, but we can see how they could, in fact, dramatically improve the performance and quality of workloads, then it suggests that some of these hyperscalers are going to have an enormous impact ultimately on defining what constitutes systems integration. Stu, take us through some of the challenges that we've heard recently on the cloud, or on theCUBE at re:Invent and other places, about how we're starting to see some of the hyperscalers make commitments to specialized hardware, the role that systems integration's going to play, and then we'll talk about whether that could be replicated across more on-premises types of systems. >> Sure Peter, and to go back to your opening remarks for this segment, does hardware matter?
When we first saw cloud computing roll out, many people thought that this was just undifferentiated commodity equipment. But if you really dig in and understand what the hyperscalers, the public cloud companies, are doing, they really do what I've called hyperoptimize the solution. So when James Hamilton at AWS talks about their infrastructure, they don't just take components and throw a bunch of stuff from off the shelf out there. They build, for every application, a configuration, and they just scale that to tens of thousands of nodes. So like what we had done in the enterprise before, which was build a stack for an application, now the public cloud does that for the services and applications that they're building up the stack. So hardware absolutely matters. And if we look not only at the public cloud but, as you mentioned, at the enterprise side, it's: where do I need to think about hardware? Where do I need to put time and effort? What David Floyer's talked about is that integration is still critically important, but the enterprise should not be worrying about taking all of the pieces and putting them together. They should be able to buy solutions, leverage platforms that take care of that environment. There's a very timely discussion about all of the Intel issues that are happening. If I'm using a public cloud, well, I don't necessarily have to worry about it; I need to worry about the fact that there was an issue, but I need to go to my supplier (chuckles) and make sure that they are handling it. And if I'm using serverless technology, obviously I'm a little bit detached from whether or not I have that issue, and how that gets resolved. So absolutely, hardware is important. It's just, who manages that hardware, what pieces I need to think about, and where that happens. And there's the fascinating stuff happening in the AI pieces that Jim's been talking about, where you're really seeing some of the differentiation and innovation happening at the hardware level, to make sure that it can react for those applications that need it. >> So we've got this tension in the model right now, this tension in the marketplace, where a lot of the new design decisions are going to be driven by what's happening at the edge, as we try to put more software out to where more human activity or system activity is actually taking place. And at the same time, a lot of the new design and architecture decisions are being first identified and encountered by some of the hyperscalers. The workloads are at the edge, the new design decisions are at the hyperscaler, and latency is going to ensure that a lot of workload remains at the edge, as will cost. So what does that mean for that central class of system? Are we going to see, as we talk about, TPC, true private cloud, becoming a focal point for new classes of designs, new classes of engineering? Are we going to see a Dell-EMC box that says "designed in Texas," or "designed in Hopkinton," and is that going to matter to users? David Floyer, what do we think? >> So it's really important, from the customer's point of view, that they can deal with a total system. So if they want a system at the very edge, the level one we were talking about, to do something in manufacturing, they may go to Dell, but they may also go to Sony, or they may go to Honeywell, or NCL-- >> Huawei, or who knows. >> Huawei, yes, Alibaba. There are a whole number of probably new people that are going to be in that space.
When you're talking about systems on site for the higher level systems, level two and above, then it will be very important to them that the service level comes from the manufacturer, that the integration of all the different components, both software and hardware, comes from that manufacturer, who is organizing it from a service perspective. All of those things become actually more important in this environment. It's more complex, there are more components. There are more FPGAs and GPUs and all sorts of other things connected together. It'll be their responsibility, as the deliverer of a solution, to put that together and to make sure it works, and that it can be serviced. >> And very importantly to make sure, as you said, that it works and it can be serviced. >> Yeah. >> So that's going to be there. So the differentiation will be: does the design and engineering lead to simpler configuration, simpler change-- >> Absolutely. >> Accommodate the programming requirements, accommodate the application requirements, all that are-- >> All in there, yes. >> Proximate to the realities of where data needs to be. George, you had a comment? >> Yeah, I've got to say, having gone to IBM's IoT event a year ago in Munich, it was pretty clear that, when you're selling these new types of systems that we're alluding to here, it's like a turnkey appliance. It's not just bringing the Intel chip down. As David and Jim pointed out, it's a system on a chip that's got transistor real estate for specialized functions. And because it's not running the same scalable clustered software that you'd find in the cloud, you have small footprint software that's highly verticalized or specialized. So we're looking at lower volume, specialized turnkey appliances that don't really share the architectural and compatibility traits of their enterprise and true private cloud cousins. And we're selling them, for the most part, to new customers, the operations technology folks, not IT, and often you're selling in conjunction with the supply chain master. In other words, an auto OEM might go to their suppliers, in conjunction with another vendor, and sell these edge devices or edge gateways. >> And so that raises another very important question. Stu, I'm going to ask this of you. We're not going to be able to answer this question today; it's a topic for another conversation. But one of the things that the industry's not spending enough time talking about is that we are in the midst of a pretty consequential shift from a product orientation in business models to a service orientation in business models. We talk about APIs, we talk about renting, we talk about pay-as-you-go. And there is still an open question about how well those models are going to end up on premises in a lot of circumstances. But Stu, when we think about this notion of the cloud experience providing a common way of thinking about a cloud operating model, clearly the design decisions that are going to have to be made by the traditional providers of integrated systems are going to have to start factoring in that question of how we move from a product to a service orientation, along with their business models, their ways of financing, et cetera. What do you think is happening? Where's the state of the art in that today? >> Yeah, and Peter, it actually goes back to when we at Wikibon launched the true private cloud research a little bit over two years ago. It was not just saying, "How do we do something better than virtualization?"
It was really looking at, as you said, that cloud operating model. And what we're hearing very loud from customers today is, it's not that they have a public cloud strategy and a private cloud strategy; they have a cloud strategy (chuckles). And one of the challenges that they're really having is, how do they get their arms around that? Because today, their private cloud and their public cloud, a lot of times it's different suppliers, it's different operating environments, as you said. We could spend a whole other call just discussing some of the nuances and pieces here. But the real trend we've been seeing, kind of in the second half of last year, and a big thing we'll see, I'm sure, through this year, is: what are the solutions, and how can customers manage this much more simply? What are the technology pieces and operational paradigms that are going to help them through this environment? And yeah, it's a little bit detached from some of the hardware discussion we're having here, because of course, at the end of the day, it shouldn't matter what hardware or what locale I'm in; it's how I manage the entire environment. >> But it does (laughs). >> Yeah. >> It shouldn't matter, but the reality is, I think we're concluding that it does. >> Right. We think back to, oh, back in the early days: "Oh, virtualization, great. I can take any x86. Oh wait, but I had a BIOS problem, and that broke things." So when containers rolled out, we had the same kind of discussion: "Oh wait, there was something down at the storage or networking layer that broke." So it's always, where is the proper layer? How do we manage that? >> Right. I for one just continue to hope that we're going to see the Harry Potter computing model show up at some point in time. But until then, magic is not going to run software. It's going to have to run on hardware, and that has physical and other realities. All right, thanks guys. Let's wrap this one up. Let me give you what the action item is. So this week, we've talked about the importance of hardware in the marketplace going forward. And partly, it's catalyzed by an event that occurred this week. A security firm discovered a couple of flaws in some of the predominant, common, standard volume CPUs, including Intel's, that have long term ramifications. And while one of the flaws is not going to be easy to fix, the other one can be fixed by software. But the suggestion is that that software fix would take out 30% of the computing power of the chip. And we were thinking to ourselves, what would happen if the world suddenly lost 30% of its computing power overnight? And the reality is, a lot of bad things would happen. And it's very clear that hardware still matters. And we have this tension between what's happening at the edge, where we're starting to see a need for greater distribution of function, performing increasingly specialized workloads, utilizing increasingly new technology that the prevailing stack is not necessarily built for. So the edge is driving new opportunities for design that are going to turn into new requirements for hardware, which will only be possible if there are new volume markets capable of supporting it, and new suppliers bringing it to market. That doesn't, however, mean that the whole concept of systems integration goes away. On the contrary, even though we're going to see this enormous amount of change at the edge, there's an enormous amount of net new invention in what it means to do systems integration.
We're seeing a lot of that happen in the hyperscalers first, in companies like Amazon, and Google, and elsewhere. But don't be fooled: the HPEs, the IBMs, the Dell-EMCs are all very cognizant of these approaches, these changes, and these challenges. And in many respects, a lot of the original work, a lot of the original invention, is still being performed in their labs. So the expectation is that the new design model being driven by the edge, plus the new engineering model being driven by the hyperscalers, will not mean that it all ends up in two tiers. Rather, we will see a need for modern systems integration happening in the true private cloud, on premises, where a lot of the data, a lot of the workloads and a lot of the intellectual property is still going to reside. That, however, does not mean that the model going forward is the same. Some of the new engineering dynamics, some of the new design dynamics, will have to start factoring in how the hardware simplifies configuration. For example, FPGAs have been around for a long time, but end users don't program FPGAs. So what good does it do to put FPGA capability inside a box, inside a true private cloud box, if the user doesn't have any simple, straightforward, meaningful way to make use of it? So there will be a lot of new emphasis on improved manageability, AI for ITOM, and ways of providing application developers access to accelerated devices. This is where the new systems and design issues are going to manifest themselves in the marketplace. Underneath this, when we talk about unigrid, we're talking about some pretty consequential changes ultimately in how design and engineering of some of these big systems works. So our conclusion is that the hardware still matters, but the industry will continue to move in a direction that reduces the complexity of the underlying hardware. That doesn't mean that users aren't going to encounter serious decisions and serious issues regarding which supplier they should work with. So the action item is this. As we move from a product to a service orientation in the marketplace, hardware is still going to matter. That creates a significant challenge for a lot of users, because now we're talking about how that hardware is rendered as platforms that will have long-term consequences inside a business. So CIOs, start thinking about 2018 as the year in which you start to consider the new classes of platforms that you're going to move to, because those platforms will be the basis for simplifying a lot of underlying decisions regarding the best design and engineering of infrastructure going forward. Once again, I want to thank my Wikibon teammates, George Gilbert, David Floyer, Stu Miniman, Neil Raden, and Jim Kobielus, for a great Action Item. From theCUBE studios in Palo Alto, this has been Action Item. Talk to you soon. (funky electronic music)
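(Editor's footnote on the level-one Edge pattern David Floyer described in this episode: ingest raw sensor data, act locally, and forward only a reduced summary to the level twos above. The sketch below is a deliberately simplified, hypothetical illustration of that loop; the simulated sensor, thresholds, and forwarding function are all invented for the example.)

```python
# Illustrative level-one Edge loop: reduce raw sensor data locally,
# act on anomalies immediately, and send only summaries upstream.
# All thresholds and the simulated sensor are invented for this sketch.
import random
import statistics

ALERT_THRESHOLD = 90.0  # hypothetical alarm limit for the metric
BATCH_SIZE = 100        # raw readings per upstream summary

def read_sensor():
    """Stand-in for a real device driver: returns one raw reading."""
    return random.gauss(70.0, 10.0)

def act_locally(reading):
    """The low-latency decision that cannot wait for the cloud."""
    if reading > ALERT_THRESHOLD:
        print(f"local actuation: throttling equipment at {reading:.1f}")

def send_upstream(summary):
    """Stand-in for publishing a reduced record to a level-two system."""
    print("upstream summary:", summary)

batch = []
for _ in range(1000):          # 1000 raw readings...
    reading = read_sensor()
    act_locally(reading)       # immediate, autonomous response
    batch.append(reading)
    if len(batch) == BATCH_SIZE:
        send_upstream({        # ...become just 10 upstream records
            "count": len(batch),
            "mean": round(statistics.mean(batch), 2),
            "max": round(max(batch), 2),
            "alerts": sum(r > ALERT_THRESHOLD for r in batch),
        })
        batch = []
```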
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Jim Kobielus | PERSON | 0.99+ |
David Floyer | PERSON | 0.99+ |
Peter Burris | PERSON | 0.99+ |
Neil Raden | PERSON | 0.99+ |
George Gilbert | PERSON | 0.99+ |
Jim | PERSON | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
David | PERSON | 0.99+ |
Peter | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
30% | QUANTITY | 0.99+ |
$100 | QUANTITY | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
$27 | QUANTITY | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Texas | LOCATION | 0.99+ |
George | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
Alibaba | ORGANIZATION | 0.99+ |
Nvidia | ORGANIZATION | 0.99+ |
Wikibon | ORGANIZATION | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
Munich | LOCATION | 0.99+ |
iPhone X. | COMMERCIAL_ITEM | 0.99+ |
two tiers | QUANTITY | 0.99+ |
50 years | QUANTITY | 0.99+ |
Hopkinton | LOCATION | 0.99+ |
Apple | ORGANIZATION | 0.99+ |
this week | DATE | 0.99+ |
first | QUANTITY | 0.99+ |
Sony | ORGANIZATION | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
one | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
Stu | PERSON | 0.99+ |
a year ago | DATE | 0.98+ |
today | DATE | 0.98+ |
each | QUANTITY | 0.98+ |
Intel | ORGANIZATION | 0.98+ |
Rahway | PERSON | 0.98+ |
early 90s | DATE | 0.98+ |
tens of thousands | QUANTITY | 0.98+ |
2018 | DATE | 0.97+ |
Honeywell | ORGANIZATION | 0.96+ |
this year | DATE | 0.96+ |
Dell-EMC | ORGANIZATION | 0.93+ |
theCUBE | ORGANIZATION | 0.92+ |
Intersil | ORGANIZATION | 0.91+ |
second half of last year | DATE | 0.89+ |
one area | QUANTITY | 0.88+ |
two years ago | DATE | 0.88+ |
Harry Potter | PERSON | 0.83+ |
level twos | QUANTITY | 0.83+ |
over | DATE | 0.77+ |
James Hamilton | PERSON | 0.76+ |
Rahway | ORGANIZATION | 0.74+ |
NCL | ORGANIZATION | 0.67+ |
Wikibon Predictions Webinar with Slides
(upbeat music) >> Hi, welcome to this year's Annual Wikibon Predictions. This is our 2018 version. Last year, we had a very successful webinar describing what we thought was going to happen in 2017 and beyond, and we've assembled a team to do the same thing again this year. I'm very excited to be joined by the folks listed here on the screen. My name is Peter Burris. With me is David Floyer; Jim Kobielus is remote; George Gilbert's here in our Palo Alto studio with me; Neil Raden is remote; David Vellante is here in the studio with me; and Stu Miniman is back in our Marlborough office. So thank you, analysts, for attending, and we look forward to a great teleconference today. Now, what we're going to do over the course of the next 45 minutes or so is hit about 13 of the 22 predictions that we have for the coming year. So if you have additional questions, and I want to reinforce this, if you have additional questions or things that don't get answered, and you're a client, give us a call. Reach out to us. We'll leave you with the contact information at the end of the session. But to start things off, we just want to make sure that everybody understands where we're coming from, and let you know who Wikibon is. Wikibon is a company that starts with the idea that what's important emerges from communities. Communities are where the action is. Community is where the change is happening. And community is where the trends are being established. And so we use digital technologies like theCUBE, CrowdChat and others to really ensure that we are surfacing the best ideas that are in a community and making them available to our clients so that they can be more successful in their endeavors. When we do that, our focus has always been on a very simple premise, and that is that we're moving to an era of digital business. For many people, digital business can mean virtually anything. For us it means something very specific. To us, the difference between business and digital business is data. A digital business uses data to differentially create and keep a customer. So borrowing from what Peter Drucker said, if the goal of business is to create and keep customers, the goal of digital business is to use data to do that. And that's going to inform an enormous number of conversations and an enormous number of decisions and strategies over the next few years. We specifically believe that all businesses are going to have to establish what we regard as the five core digital business capabilities. First, they're going to have to put in place concrete approaches to turning more data into work. It's not enough to just accrete data, to capture data, or to move data around. You have to be very purposeful and planful in how you establish the means by which you turn that data into work, so that you can create and keep more customers. Secondly, it's absolutely essential that we build out three core technology capabilities. The first is effectively doing a better job of capturing data, and IoT, the internet of things and people, mobile computing for example, is going to be a crucial feature of that. Then, once you capture that data, you have to turn it into value, and we think this is the essence of what big data and in many respects AI is going to be all about.
And then once you have the possibility, kind of the potential energy of that data, in place, then you have to turn it into kinetic energy and generate work in your business, through what we call systems of agency. Now, all of this is made possible by a significant transformation that happens to be conterminous with this transition to digital business, and that is the emergence of the cloud. The technology industry has always been defined by the problems it was able to solve, catalyzed by the characteristics of the technology that made it possible to solve them. And cloud is crucial to almost all of the new types of problems that we're going to solve. So these are the five digital business capabilities that we're going to talk about, where we're going to have our predictions. Let's start first and foremost with this notion of turning more data into work. So our first prediction relates to how data governance is likely to change on a global basis. If we believe that we need to turn more data into work, well, businesses haven't generally adopted many of the principles associated with those practices. They haven't optimized to do that better. They haven't elevated those concepts within the business as broadly and successfully as they should. We think that's going to change, in part because of the emergence of GDPR, or the General Data Protection Regulation. It goes into full effect in May 2018. A lot has been written about it, a lot has been talked about. But our core observation ultimately is that the dictates associated with GDPR are going to elevate the conversation on a global basis. And it mandates something that's now called the data protection officer. We're going to talk about that in a second, David Vellante. But it is going to have real teeth. We were talking with one chief privacy officer not too long ago who suggested that had the Equifax breach occurred under the rules of GDPR, the fines that would have been levied would have been in excess of 160 billion dollars, which is a little bit more than the zero dollars that has been fined thus far. Now, we've seen new bills introduced in Congress, but ultimately our observation, from our conversations with a lot of chief privacy officers or data protection officers, is that even in the B2B world, GDPR is going to strongly influence businesses' behavior regarding data, not just in Europe but on a global basis. Now, that has an enormous implication, David Vellante, because it certainly suggests that with this notion of a data protection officer, we've now got another potential chief here. How do we think that's going to organize itself over the course of the next few years? >> Well thank you Peter. There are a lot of chiefs (laughs) in the house, and sometimes it gets confusing: there's the CIO, there's the CDO, and that's either chief digital officer or chief data officer. There's the CSO; that could be strategy, sometimes that could be security. There's the CPO; is that privacy or product? As I say, it gets confusing sometimes. On theCUBE we talk to all of these roles, so we wanted to try to add some clarity to that. First thing we want to say is that the CIO, the chief information officer, that role is not going away. A lot of people predict that; we think that's nonsense. They will continue to have a critical role. Digital transformations are the priority in organizations. And so the chief digital officer is evolving from just a strategy role to much more of an operational role.
Generally speaking, these chiefs tend to report, in our observation, to the chief operating officer, the president/COO. And we see the chief digital officer taking on increasing operational responsibility, aligning with the COO and getting incremental responsibility that's more operational in nature. So the prediction really is that the chief digital officer is going to emerge as a charismatic leader amongst these chiefs, and by 2022, nearly 50% of organizations will position the chief digital officer in a more prominent role than the CIO, the CISO, the CDO and the CPO. Those will still be critical roles. The CIO will be an enabler. The chief information security officer has a huge role obviously to play, especially in terms of making security a team sport and not just letting it fall on IT's shoulders or the security team's shoulders. The chief data officer, who really emerged from a records and data management role, in many cases particularly within regulated industries, will still be responsible for the data architecture and data access, working very closely with the emerging chief privacy officer and maybe even the chief data protection officer. Those roles will be pretty closely aligned. So again, these roles remain critical, but the chief digital officer we see as increasing in prominence. >> Great, thank you very much David. So when we think about these two activities, what we're really describing is that over the course of the next few years, we strongly believe data will be regarded more as an asset within business, and we'll see resources devoted to it, and we'll certainly see management devoted to it. Now, that leads to the next set of questions. As data becomes an asset, the pressure to acquire data becomes that much more acute. We believe strongly that IoT has an enormous implication longer term as a basis for thinking about how data gets acquired. Now, operational technology has been in place for a long time. We're not limiting ourselves just to operational technology when we talk about this. We're really talking about the full range of devices that are going to provide and extend information and digital services out to consumers, out to the Edge, out to a number of other places. So let's start here. Over the course of the next few years, Edge analytics are going to be an increasingly important feature of how technology decisions get made, how technology or digital business gets conceived, and even ultimately how business gets defined. Now, David Floyer's done a significant amount of work in this domain, and we've provided that key finding on the right hand side. What it shows is that if you take a stylized Edge based application and you presume that all the data moves back to a centralized cloud, you're going to increase your costs dramatically over a three year period. Now, that motivates the need ultimately for an approach that brings greater autonomy, greater intelligence, down to the Edge itself, and we think that ultimately IoT and Edge analytics become increasingly synonymous. The challenge, though, is that while there is pressure to keep more of the data at the Edge, a lot of the data exhaust can someday become regarded as valuable data. And so as a consequence of that, there's still a countervailing pressure to move all data, not at the moment of automation but for modeling and integration purposes, back to some other location.
The thing that's going to determine that is the rate at which the costs of moving the data around go down. And our expectation, over the next few years, when we think about the implications of some of the big cloud suppliers, Amazon, Google, and others that are building out significant networks to facilitate their business services, is that they may in fact have as great an impact on the common carriers as they have had on any server or other infrastructure company. So our prediction over the next few years is: watch what Amazon and Google do as they try to drive costs down inside their networks, because that will have an impact on how much data moves from the Edge back to the cloud. It won't have an impact on the need for automation at the Edge, because latency doesn't change, but it will have a cost impact. Now, that leads to a second consideration, and the second consideration is that when we talk about greater autonomy at the Edge, we need to think about how that's going to play out. Jim Kobielus. >> Jim: Hey, thanks a lot Peter. Yeah, so what we're seeing at Wikibon is that more and more application development involves AI, and more and more of that AI involves deployment of models, deep learning, machine learning and so forth, to the Edges of the internet of things and people. And much of that AI will be operating autonomously, with little or no round-tripping back to the cloud. In fact, we're seeing really about a quarter of AI development projects (static interference with web-conference) as Edge deployments. What that involves is that more and more of that AI, those applications, will be bespoke. They'll be one of a kind, or unique, or unprecedented applications, and what that means is that, you know, there are a lot of different deployment scenarios within which organizations will need to use new forms of learning to ready those AI applications to do their jobs effectively, be it making predictions in real time, guiding an autonomous vehicle, and so forth. Reinforcement learning is at the core of many of these kinds of projects, especially those that involve robotics. So really, software is eating the world, and the biggest bites are being taken at the Edge; much of that is AI, and much of that is autonomous, where there is no need, or less need, for real time round-tripping, and more need for adaptive components, AI infused components, that can learn by doing: from environmental variables, they can adapt their own algorithms to take the right actions. So they'll have far reaching impacts on application development in 2018. For the developer, the new developer really is a data scientist at heart. They're going to have to tap into a new range of sources of data, especially Edge sourced data from the sensors on those devices. They're going to need to do model training and testing, especially reinforcement learning, which doesn't involve training data so much as it involves being able to build an algorithm that can learn to maximize what's called a cumulative reward function, with the training done adaptively, in real time, at the Edge, and so forth and so on. So really, much of this will be bespoke in the sense that every Edge device increasingly will have its own set of parameters and its own set of objective functions which will need to be optimized.
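(Editor's aside, to ground Jim's "cumulative reward" point: reinforcement learning agents don't learn from labeled training data; they learn by acting and accumulating reward. Below is a minimal epsilon-greedy multi-armed bandit, the simplest setting that exhibits the maximize-cumulative-reward behavior he describes. The reward probabilities and exploration rate are invented for the example.)

```python
# Minimal reinforcement-learning sketch: an epsilon-greedy bandit that
# learns by doing, maximizing cumulative reward with no training dataset.
# Arm payout probabilities and epsilon are invented for illustration.
import random

TRUE_PAYOUTS = [0.2, 0.5, 0.8]   # hidden reward probability of each action
EPSILON = 0.1                    # fraction of the time we explore randomly

counts = [0, 0, 0]               # times each action was tried
values = [0.0, 0.0, 0.0]         # running estimate of each action's reward
cumulative_reward = 0.0

for step in range(10000):
    # Explore occasionally; otherwise exploit the best estimate so far.
    if random.random() < EPSILON:
        action = random.randrange(len(TRUE_PAYOUTS))
    else:
        action = values.index(max(values))

    # Act in the environment and observe the reward.
    reward = 1.0 if random.random() < TRUE_PAYOUTS[action] else 0.0
    cumulative_reward += reward

    # Adapt the estimate from experience (incremental mean update).
    counts[action] += 1
    values[action] += (reward - values[action]) / counts[action]

print("learned values:", [round(v, 2) for v in values])
print("cumulative reward:", cumulative_reward)  # approaches the 0.8 arm's rate
```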
So that's one of the leading edge forces, trends, in development that we see in the coming year. Back to you Peter. >> Excellent Jim, thank you very much. The next question here: how are you going to create value from data? We've gone through a couple of trends, and we have multiple others, about what's going to happen at the Edge. But as we think about how we're going to create value from data: Neil Raden. >> Neil: You know, the problem is that data science emerged rapidly out of sort of a perfect storm of big data and cloud computing and so forth. And people who had been involved in quantitative methods rapidly glommed onto the title because, let's face it, it was very glamorous and paid very well. But there weren't really good best practices. So what we have in data science is a pretty wide field of things that are called data science. My opinion is that the true data scientists are people who are scientists and are involved in developing new, or improving existing, algorithms, as opposed to prepping data and applying models. So the whole field really kind of generated very quickly, really just in a few years. To me, I call it generation zero, which is more like data prep and model management, all done manually. And it wasn't really sustainable in most organizations, for obvious reasons. So in generation one, some vendors stepped up with tool kits or benchmarks or whatever for data scientists, and made it a little better. And generation two is what we're going to see in 2018: the need for data scientists to no longer prep data, or at least not spend very much time on it, and not to do model management, because the software will not only manage the progression of the models but even recommend them and generate them, and select the data, and so forth. So it's in for a very big change, and I think what you're going to see is that the ranks of data scientists are going to sort of bifurcate into the old style, let me sit down and write some spaghetti code in R or Java or something, and those that use these advanced tool kits to really get the work done. >> That's great Neil. And of course, when we start talking about getting the work done, we are becoming increasingly dependent upon tools, aren't we George? But the tool marketplace for data science, for big data, has been somewhat fragmented and fractured, and hasn't necessarily focused on solving the problems of the data scientists, but in many respects on solving the problems that the tools themselves have. What's going to happen in the coming year, when we start thinking about Neil's prescription, as the tools improve? What's going to happen to the tools? >> Okay, so the big thing that we see supporting what Neil was talking about: what Neil described is partly a symptom of a product issue and a go to market issue, where the product issue was that we had a lot of best of breed products that weren't all designed to fit together. In the broader big data space, that's the same issue that we faced more narrowly with on-prem Hadoop, where we were trying to fit together a bunch of open source packages, which imposed an admin and a developer burden. More broadly, what Neil is talking about is a move to richer end to end tools that handle everything from ingest all the way to the operationalization and feedback of the models.
>> All right, so what we're going to see happen in the course of the coming year is a lot of specialization and recognition of what is data science, what are the practices, how is it going to work, supported by an increasing quality of tools, and a lot of tool vendors are going to be left behind. Now, the third kind of notion here for those core technology capabilities is that we still have to act based on data. The good news is that big data is starting to show some returns, in part because of some of the things that AI and other technologies are capable of doing. But we have to move beyond just creating the potential; we have to turn that into work, and that's what we mean ultimately by this notion of systems of agency: the idea that data driven applications will increasingly act on behalf of a brand, on behalf of a company, and building those systems out is going to be crucial. It's going to require a whole new set of disciplines and expertise. So when we think about what's going to be required, it always starts with this notion of AI. A lot of folks are presuming, however, that AI is going to be relatively easy to build or relatively easy to put together. We have a different opinion, George. What do we think is going to happen as these next few years unfold related to AI adoption in large enterprises? >> Okay so, let's go back to the lessons we learned from sort of the big data era, the, you know, let's put a data lake in place, which was sort of the top of everyone's agenda for several years. The expectation was it was going to cure cancer, taste like chocolate and cost a dollar. And uh. (laughing) It didn't quite work out that way. Partly because we had a burden on the administrator, again, of so many tools that weren't all designed to fit together, even though they were distributed together. And then the data scientists, the guys who had to take all this data that wasn't carefully curated yet and turn it into advanced analytics and machine learning models. We have many of the same problems now with tool sets that are becoming more integrated, but at lower levels. This is partly what Neil Raden was just talking about. What we have to recognize is something that we've seen all along, I mean since the beginning of (laughs) corporate computing: we have different levels of abstraction, and you know, at the very bottom, when you're dealing with things like TensorFlow or MXNet, that's not for mainstream enterprises. That's for, you know, the big sophisticated tech companies who are building new algorithms on those frameworks. There's a level above that where you're using, like, a Spark cluster and the machine learning built into that. That's slightly more accessible, but when we talk about mainstream enterprises taking advantage of AI, the low hanging fruit is for them to use the pre-trained models that the public cloud vendors have created with all the consumer data on speech, image recognition, natural language processing. And then some of those capabilities can be further combined into applications like managing a contact center, and we'll see more from the likes of Amazon, like recommendation engines, fulfillment optimization, pricing optimization.
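As a hedged sketch of that low hanging fruit, here is a minimal example of calling a pre-trained cloud NLP model, AWS Comprehend's sentiment API, via boto3; it assumes AWS credentials and a region are already configured, and the input text is made up for illustration.

```python
import boto3

# Call a pre-trained NLP model as a service; no model training required.
# Assumes boto3 can find AWS credentials in the usual places.
comprehend = boto3.client("comprehend", region_name="us-east-1")

resp = comprehend.detect_sentiment(
    Text="The new dashboard is fast and the support team was great.",
    LanguageCode="en",
)
print(resp["Sentiment"], resp["SentimentScore"])
```

The same pattern, one API call against a vendor-trained model, applies to the speech and image recognition services George mentions.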
>> So our expectation ultimately, George, is that we're going to see a lot of AI adoption happen through existing applications, because the software vendors that are capable of acquiring the talent, experimenting and creating value are going to be where a lot of the talent ends up. So Neil, we have an example of that. Give us an example of what we think is going to happen in 2018 when we start thinking about exploiting AI in applications. >> Neil: I think that it's fairly clearly going to be the application of what's called advanced analytics and data science and even machine learning. But really, it's rapidly becoming commonplace in organizations, not just at the bottom of the triangle here. But I like the example of Salesforce.com. What they've done with Einstein is they've made machine learning, and I guess you can say AI applications, available to their customer base, and why is that a good thing? Because their customer base already has a giant database of clean data that they can use. So you're going to see a huge number of applications being built with Einstein against Salesforce.com data. But there's another thing to consider, and that is that a long time ago Salesforce.com built connectors to a zillion types of external data. So, if you're a Salesforce.com customer using Einstein, you're going to be able to use those advanced tools without knowing anything about how to train a machine learning model, and start to build those things. And I think that they're going to lead the industry in that sense. That's going to push their revenue next year to, I don't know, 11 billion or 12 billion dollars. >> Great, thanks Neil. All right, so when we think about further evidence of this and further impacts, we ultimately have to consider some of the challenges associated with how we're going to continually create application value from these tools. And that leads to the idea that one of the cobbler's children that is going to benefit from AI will in fact be the developer organization. Jim, what's our prediction for how auto-programming impacts development? >> Jim: Thank you very much Peter. Yeah, automation, wow. Auto-programming, as I said, is the epitome of enterprise application development for us going forward. People know it as code generation, but that really understates the scope of auto-programming as it's evolving. In 2018, what we're going to see is the machine learning driven code generation approach coming to the forefront of innovation. We're seeing a lot of activity in the industry in which applications use ML to drive the productivity of developers for all kinds of applications. We're also seeing a fair amount of what's called RPA, robotic process automation.
And really, how they differ is that ML will drive code generation from what I call the inside out, meaning creating reams of code that are geared to optimize a particular application scenario. Whereas RPA takes the outside-in approach, which is essentially the evolution of screen scraping: it's able to infer the underlying code needed for applications of various sorts from the external artifacts, the screens, and from sort of the flow of interactions and clicks and so forth for a given application. We're going to see that ML and RPA will complement each other in the next generation of auto-programming capabilities. And so, you know, application development tedium is one of the enemies of productivity. This is a lot of work, very detailed, painstaking work, and what developers need are better, more nuanced and more adaptive auto-programming tools to be able to build code at the pace that's absolutely necessary for this new environment of cloud computing. So really, AI-related technologies can be applied, and are being applied, to application development productivity challenges of all sorts. AI is fundamental to RPA as well. We're seeing a fair number of the vendors in that space incorporate ML-driven OCR and natural language processing and screen scraping and so forth into their core tools, to be able to quickly build up the logic to drive this sort of outside-in automation of fairly complex orchestration scenarios. In 2018, we'll see more of these technologies come together. But, you know, they're not a silver bullet, because fundamentally, organizations that are considering going deep into auto-programming are going to have to factor AI into their overall plans. They need to get knowledgeable about AI. They're going to need to bring more AI specialists into their core development teams to be able to select from the growing range of tools that are out there for RPA and ML-driven auto-programming. Overall, what we're seeing is that the data scientists, who have been the fundamental developers of AI, are coming into the core of development tools and skills in organizations, and they're going to be fundamental to this whole trend in 2018 and beyond. If AI gets proven out in auto-programming, these developers will then be able to evangelize the core utility of this technology, AI, in a variety of other backend but critically important investments that organizations will be making in 2018 and beyond, especially in IT operations and management; AI is big in that area as well. Back to you there, Peter. >> Yeah, we'll come to that a little bit later in the presentation Jim, that's a crucial point. But the other thing we want to note here, regarding ultimately how folks will create value out of these technologies, is to consider the simple question of, okay, how much will developers need to know about infrastructure? And one of the big things we see happening is this notion of serverless. And here we've called it "serverless, developer more." Jim, why don't you take us through why we think serverless is going to have a significant impact on the industry, certainly from a developer perspective and a developer productivity perspective. >> Jim: Yeah, thanks. Serverless is really having an impact already, and has for the last several years now.
Now, many in the developer world are familiar with AWS Lambda, which is really the groundbreaking public cloud service that incorporates these serverless capabilities. Essentially it's an abstraction layer that enables developers to build stateless code that executes in a cloud environment, and to build microservices, without having to worry about the underlying management of containers and virtual machines and so forth. So in many ways, you know, serverless is a simplification strategy for developers. They don't have to worry about the underlying plumbing; they need to worry about the code, of course, what are called Lambda functions, or functional methods, and so forth. Now, functional programming has been around for quite a while, but now it's coming to the fore in this new era of serverless environments. What we're predicting for 2018 is that more than 50% of lean microservices in the public cloud will be deployed in serverless environments. There's AWS, Microsoft has Azure Functions, IBM has its own, Google has its own. And there's a variety of serverless cloud code bases for private deployment that we see evolving and beginning to be deployed in 2018. They all involve functional programming, which, when coupled with serverless clouds, enables greater scale and speed in terms of development. And it's very agile friendly, in the sense that you can quickly pull a functionally programmed serverless microservice from GitHub in a hurry without having to manage state and so forth. It's very DevOps friendly. In a very real sense, it's a lot faster than having to build and manage and tune, you know, containers and VMs and so forth. So it can enable a more real time, rapid and iterative development pipeline going forward in cloud computing. And really, fundamentally, what serverless is doing is pushing more of these Lambda functions to the Edge, to the Edges. If you were at the AWS re:Invent event last week or the week before, you noticed AWS is putting a big push on putting Lambda functions at the Edge, in devices, for the IoT, as we're going to see in 2018. Pretty much the entire cloud arena, everybody, will push more of the serverless, functional programming to the Edge devices. It's just a simplification strategy. And that actually is a powerful tool for speeding up some of the development metabolism.
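For readers who haven't seen one, here is a minimal, hedged sketch of the kind of stateless Lambda function Jim is describing; the handler signature follows AWS's documented Python convention, while the event field and the greeting logic are made-up assumptions.

```python
import json

# A stateless AWS Lambda handler (Python convention: handler(event, context)).
# All state lives outside the function; the platform manages the servers.
def handler(event, context):
    name = event.get("name", "world")  # 'name' is a hypothetical event field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": "hello, " + name}),
    }
```

Deployed behind an API gateway or an IoT trigger, a function like this runs only on demand, which is what makes the model so friendly to the rapid, iterative pipelines Jim mentions.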
>> All right, so Jim, let me jump in here and say that we've now introduced some of these benefits and really highlighted the role that the cloud is going to play. So, let's turn our attention to this question of cloud optimization. And Stu, I'm going to ask you to start us off by talking about what we mean by true private cloud, and ultimately our prediction for private cloud. Why don't you take us through what we think is going to happen in this world of true private cloud? >> Stuart: Sure Peter, thanks a lot. So when Wikibon launched the true private cloud terminology, which was about two years ago next week, it was in some ways the coming together of a lot of trends similar to things that George, Neil and James have been talking about. So, it is nothing new to say that we needed to simplify the IT stack. We all know the tried and true discussion of, you know, way too much of the budget being spent kind of keeping the lights on, when what we'd like to see is it spent on running the business. If you squint through this beautiful chart that we have on here, a big piece of this is operational staffing; that's where we need to be able to make a significant change. And what we've been really excited about, what led us to this initial market segment and where we're continuing to see good growth, is the move from traditional, really siloed infrastructure to infrastructure that is software based. You want IT to really be able to focus on the application services that they're running. And our focus for 2018 is, of course, the central point: it's the data that matters here. The whole reason we have infrastructure is to be able to run applications, and one of the key determiners as to where and what I use is the data: how can I not only store that data but actually gain value from it? That's something we've talked about time and again, and it is a major determining factor as to whether I'm building this in a public cloud, doing it in my core, or whether it's something that is going to live on the Edge. So what we're saying here with the true private cloud is that not only are we going to simplify our environment, it's really the operational model that we talked about. We often say the line: cloud is not a destination, it's an operational model. A true private cloud gives me some of the feel and management capability that I had in the public cloud. It's, as I said, not just virtualization; it's much more than that. But how can I start getting services? And one of the extensions is that true private cloud does not live in isolation. When we have kind of a core, public cloud and Edge deployments, I need to think about the operational models: where data lives, what processing happens in each environment, and what data we'll need to move between them, and of course there are fundamental laws of physics that we need to consider in that. So the prediction, of course, is that we know how much gear and focus has been on the traditional data center, and true private cloud helps that transformation to modernization. The big focus is that many of these applications we've been talking about, and uses of data sets, are starting to come into these true private cloud environments. So, you know, we've had discussions: there's Spark, there are modern databases, and there are going to be many reasons why they might live in the private cloud environment. Therefore that's somewhere we're going to see tremendous growth and a lot of focus. And we're seeing a new wave of companies focusing on this to deliver solutions that do more than just a step function for infrastructure or getting us outside of our silos, but really help us deliver on those cloud native applications, where we pull in things like what Jim was talking about with serverless and the like. >> All right, so Stu, what that suggests ultimately is that data is going to dictate that everything's not going to end up in centralized public clouds, because of latency, costs, data governance and IP protection reasons, and there will be some others. At bare minimum, that means that in most large enterprises we're going to have at least a couple of clouds. Talk to us about what this impact of multi cloud is going to look like over the course of the next few years. >> Stuart: Yeah, critical point there Peter. Because, right, unfortunately, we don't have one solution.
There's nobody we run into who says, oh, you know, I just do a single environment. You know, it would be great if we only had one application to worry about. But as you've shown in this lovely diagram here, we all use lots of SaaS, and increasingly, you know, Oracle, Microsoft, Salesforce are all pushing everybody to multiple SaaS environments, which has major impacts on my security and where my data lives. Public cloud, no doubt, is growing by leaps and bounds, and many customers are choosing applications to live in different places. So just as in data centers, I would kind of look at it from an application standpoint and build up what I need. Often, you know, Amazon is doing phenomenally, but maybe there are things that I'm doing with Azure, maybe there are things that I'm doing with Google or others, as well as with my service providers, for locality, for specialized services; there are reasons why people are doing it. And what customers would love is an operational model that can actually span between those. So we are very early in trying to attack this multi cloud environment. There's everything from licensing to security to, you know, just operationally, how do I manage those. And a piece that we're touching on in this year's predictions is that Kubernetes actually can be a key enabler for that cloud native environment. As Jim talked about with serverless, what we'd really like is for our developers to be able to focus on building their application and not think as much about the underlying infrastructure, whether that be, you know, racks of servers that I built myself or public cloud infrastructure. So we really want to think more at the data and application level; it's SaaS and PaaS as the model, and Kubernetes holds the promise to solve a piece of this puzzle. Now Kubernetes is by no means a silver bullet for everything that we need, but it absolutely is doing very well. Our team was at the Linux Foundation's CNCF show, KubeCon, last week, and there is broad adoption from over 40 of the leading providers, including Amazon, which is now a piece of it. Even Salesforce signed up to the CNCF. So Kubernetes is allowing me to manage multi cloud workflows, and therefore the prediction we have here, Peter, is that 50% of development teams will be building and sustaining multi cloud with Kubernetes as a foundational component of that.
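As a hedged sketch of an operational model that "can actually span between those" clouds, here is a minimal example using the official Kubernetes Python client to list pods across two clusters; the context names are made-up assumptions standing in for, say, an AWS-hosted cluster and an on-premises cluster registered in your kubeconfig.

```python
from kubernetes import client, config

# Hypothetical kubeconfig context names, one per cloud; adjust to your setup.
CONTEXTS = ["aws-cluster", "onprem-cluster"]

for ctx in CONTEXTS:
    # Load credentials for this cluster from the local kubeconfig.
    config.load_kube_config(context=ctx)
    v1 = client.CoreV1Api()
    pods = v1.list_pod_for_all_namespaces()
    print(ctx, "->", len(pods.items), "pods running")
```

The same deployment manifests can then be applied against each context, which is the sense in which Kubernetes underpins the multi-cloud operational model Stu describes.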
>> That's excellent Stu. But when we think about it, especially because of the opportunities associated with true private cloud, the hardware technologies are also going to evolve. There will be enough money here to sustain that investment. David Floyer, we do see another architecture on the horizon where, for certain classes of workloads, we will be able to collapse and replicate many of these things in an economical, practical way on premise. We call that UniGrid, and NVMe over fabric is a crucial feature of UniGrid. >> Absolutely. So, NVMe over fabric, or NVMe-oF, takes NVMe, which is out there as storage, and turns it into a system framework. It's a major change in system architecture. We call this UniGrid, and it's going to be a focus of our research in 2018. Vendors are already out there. This is the fastest movement from early standards into products themselves. You can see on the chart that IBM has come out with NVMe over fabrics, with the FlashSystem 900 storage connected to its Power9 systems. NetApp has the EF570. A lot of other companies are there. Mellanox is out there providing the high speed networks. Excelero has a major part of the storage software. And it's going to be used in particular with things like AI. So what are the drivers and benefits of this architecture? The key is that data is the bottleneck for applications. We've talked about data; the amount of data is key to making applications more effective and higher value. NVMe and NVMe over fabrics allows data to be accessed in microseconds as opposed to milliseconds, and it allows gigabytes of data per second as opposed to megabytes of data per second. It also allows thousands of processors to access all of the data at very, very low latencies, and that gives us amazing parallelism. So what this is about is disaggregation of storage and network and processors. There are some huge benefits from that, not least of which is that you get back about 50% of the processor, because you don't have to do storage and networking on it. And you save on stranded storage; you save on stranded processor and networking capacity. So overall, it's going to be cheaper. But more importantly, it makes it a basis for delivering systems of intelligence. And systems of intelligence are bringing together systems of record, the traditional systems, not rewriting them but attaching them to real time analytics, real time AI, and being able to blend those two systems together, because you've got all of that additional data you can bring to bear on a particular problem.
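To put David's microseconds-versus-milliseconds point in rough perspective, here is a hedged back-of-envelope sketch; the two latency figures are illustrative assumptions, not measurements of any product.

```python
# Back-of-envelope: latency-bound accesses per second for a single thread.
# Assumed figures: ~5 ms for a disk-era random read, ~100 us over NVMe-oF.
DISK_LATENCY_S = 5e-3
NVME_OF_LATENCY_S = 100e-6

disk_rate = 1 / DISK_LATENCY_S
nvme_rate = 1 / NVME_OF_LATENCY_S

print(f"disk era: ~{disk_rate:,.0f} accesses/sec per thread")
print(f"NVMe-oF:  ~{nvme_rate:,.0f} accesses/sec per thread")
print(f"that is ~{nvme_rate / disk_rate:.0f}x before counting any parallelism")
```

The parallelism David mentions multiplies that again, since thousands of processors can issue such accesses concurrently against shared data.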
So systems themselves have reached pretty well the limit of human management. And one of the great benefits of UniGrid is to have a single metadata layer across all of that data, all of those processes. >> Peter: All those infrastructure elements. >> All those infrastructure elements. >> Peter: And applications. >> And applications themselves. So what that leads to is a huge potential to improve automation of the data center and the application of AI to operations, operational AI. >> So George, it sounds like that's going to be one of the key potential areas where we'll see AI be practically adopted within business. What do we think is going to happen as we think about the role that AI is going to play in IT operations management? >> Well, if we go back to the analogy with big data, which we thought was going to, you know, cure cancer, taste like chocolate and cost a dollar, it turned out that the most widespread application of big data was to offload ETL from expensive data warehouses. And what we expect is that the first widespread application of AI will be AI embedded in applications for horizontal use, where Neil mentioned Salesforce and the ability to use Einstein with Salesforce data and connected data. Now, the applications we're building are so complex that, as Stu mentioned, we have this operational model with a true private cloud, and it's actually not just the legacy stuff that's sucking up all the admin overhead. It's the complexity of the new applications, and the stringency of the SLAs, that means we would have to turn millions of people into admins, like the old observation that, when the telephone networks started growing, everyone was going to have to become an operator. The only way we can get past this is if we apply machine learning to IT ops and application performance management. The key here is that the models can learn how the infrastructure is laid out and how it operates. They can also learn how all the application services and middleware work, behaving independently and with each other, and how they tie to the infrastructure. The reason that's important is because all of a sudden you can get very high fidelity root cause analysis. In the old management technology, if you had an underlying problem, you'd have a whole storm of alerts, because there was no reliable way to really triangulate on, or triage, the root cause. Now, what's critical is, if you have high fidelity root cause analysis, you can have really precise recommendations for remediation, or automated remediation, which is something that people will get comfortable with over time; that's not going to happen right away. But this is critical. And this is also the first large scale application of not just machine learning but machine data, and so this topology of collecting widely disparate machine data, then applying models, and then reconfiguring the software is training wheels for IoT apps, where you're going to have far more distributed data and you're actuating devices instead of software.
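As a hedged sketch of the machine-data modeling George describes, here is a minimal anomaly-detection example using scikit-learn's IsolationForest on synthetic infrastructure metrics; the metrics, values and contamination setting are assumptions for illustration, not a production AIOps pipeline.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic machine data: (cpu_utilization, latency_ms) samples from a healthy
# service, plus a few injected anomalies (assumptions made for this sketch).
healthy = rng.normal(loc=[0.4, 20.0], scale=[0.05, 2.0], size=(500, 2))
anomalies = np.array([[0.95, 180.0], [0.05, 150.0], [0.90, 5.0]])
metrics = np.vstack([healthy, anomalies])

# Learn what "normal" looks like across signals, then flag outliers as
# candidate incidents for root cause analysis, instead of a storm of alerts.
model = IsolationForest(contamination=0.01, random_state=0).fit(healthy)
flags = model.predict(metrics)   # +1 = normal, -1 = anomalous
print(np.where(flags == -1)[0])  # indices of the flagged samples
```

The point is the one George makes about alert storms: rather than thresholding every signal independently, the system models normal behavior jointly and surfaces only the genuinely anomalous samples.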
>> That's great, George. So let me sum up, and then we'll take some questions. Very quickly, here are the action items that we have out of this overall session, and again, we have another 15 or so predictions that we didn't get to today. One is, as we said, digital business is the use of data assets to compete, and ultimately this notion is starting to diffuse rapidly. We're seeing it on theCUBE. We're seeing it on the CrowdChats. We're seeing it with our customers. Ultimately, we believe that users need to start preparing for even more business scrutiny over their technology management. For example, something very simple, and David Floyer, you and I have talked about this extensively in our weekly Action Item research meeting: the idea of backing up and restoring a system is no longer enough in a digital business world. It's not just backing up and restoring a system or an application; we're talking about restoring the entire business. That's going to require greater business scrutiny over technology management. It's going to lead to new organizational structures, new challenges of adopting systems, et cetera. But ultimately, our observation is that data is going to indicate technology directions across the board, whether we talk about how businesses evolve, or the roles that technology takes in business, or the key digital business capabilities of capturing data, turning it into value, and then turning that value into work, or how we think about cloud architecture and how we organize the cloud resources we're going to utilize. It all comes back to the role that data's going to play in helping us drive decisions. The last action item we want to put here before we get to the questions is: clients, if we don't get to your question right now, contact us. Send us an inquiry, Support@siliconangle.freshdesk.com, and we'll respond to you as fast as we can over the course of the next day or two to try to answer your question. All right, David Vellante, you've been collecting some questions here. Why don't we see if we can take a couple of them before we close out. >> Yeah, we got about five or six minutes. In the chat room, Jim Kobielus has been awesome helping out, and so there's a lot of detailed answer there. The first, there's some questions and comments. The first one was, are there too many chiefs? And I guess, yeah, there's some title inflation. My comment there would be, titles are cheap, results aren't. So if you're creating chief X officers just to check a box, you're probably wasting money. You've got to give them clear roles. But I think each of these chiefs has clear roles to the extent that they are, you know, empowered. Another comment came up, which is, we don't want, you know, Hadoop spaghetti soup all over again. Well, true that. Are we at risk of having Hadoop spaghetti soup as the centricity of big data moves from Hadoop to AI and ML and deep learning? >> Well, my answer is we are at risk of that, but there's customer pressure and vendor economic pressure to start consolidating. And we'll also see, what we didn't see in the on-prem big data era, with cloud vendors, that they're just going to start making it easier to use some of the key services together. That's just natural. >> And I'll speak for Neil on this one too, very quickly: the idea ultimately is that, as the discipline starts to mature, we won't have people who aren't really capable of doing some of this data science stuff running around and buying a tool to try to supplement their knowledge and their experience. So that's going to be another factor that I think ultimately leads to clarity in how we utilize these tools as we move into an AI oriented world. >> Okay, Jim is on mute, so if you wouldn't mind unmuting him. There was a question: is ML a more informative way of describing AI? Jim, when you and I were in our Boston studio, I sort of asked a similar question. AI is sort of the uber category. Machine learning is math. Deep learning is more sophisticated math. You have a detailed answer in the chat, but maybe you can give a brief summary. >> Jim: Sure, sure. I don't want to be too pedantic here, but deep learning is essentially deeper, more hierarchical stacks of neural network layers that are able to infer high level abstractions from data, you know, face recognition, sentiment analysis and so forth. Machine learning is the broader phenomenon; it simply spans various approaches for distilling patterns, correlations and algorithms from the data itself. What we've seen in the last five, six, ten years, let's say, is that all of the neural network approaches for AI have come to the forefront, and in fact are now often the core of the marketplace and the state of the art. AI is an ancient paradigm, older than probably you or me, that began as, and for the longest time was, rules based systems, expert systems. Those haven't gone away. The new era of AI we see as a combination of statistical approaches as well as rules based approaches, and possibly even orchestration based approaches like graph models, for building broader context into AI for a variety of applications, especially distributed Edge applications. >> Okay, thank you. And then another question slash comment: AI, like graphics in 1985, will move from a separate category to a core part of all apps, AI infused apps. Again, Jim, you have a very detailed answer in the chat room, but maybe you can give the summary version. >> Jim: Well, quickly now, for the most disruptive applications we see across the world, enterprise, consumer and so forth, the advantage involves AI, and at the heart of it is machine learning, often neural networking. I wouldn't say that every single application is doing AI, but the ones that are really blazing the trail in terms of changing the fabric of our lives, most of them have AI at their heart.
That will continue as the state of the art of AI continues to advance. So really, one of the things we've been saying in our research at Wikibon is that the data scientists, or those skills and tools, are the nucleus of the next generation application developer, really in every sphere of our lives. >> Great. A quick comment: we will be sending out these slides to all participants. We'll be posting these slides. So thank you, Kip, for that question. >> And very importantly Dave, over the course of the next few days, most of our predictions docs will be posted up on Wikibon, and we'll do a summary of everything that we've talked about here. >> So now the questions are coming through fast and furious. But let me just try to rapid fire here, 'cause we only got about a minute left. True private cloud definition. Just say this: we have a detailed definition that we can share, but essentially it's substantially mimicking the public cloud experience on prem. The way we like to say it is, bringing the cloud operating model to your data, versus trying to force fit your business into the cloud. So we've got detailed definitions there that frankly are evolving. There's a question about PaaS. I think we have a prediction on it in one of our appendices, but maybe a quick word on PaaS. >> Yeah, a very quick word on PaaS is that there's been an enormous amount of effort put into the idea of a PaaS marketplace. Cloud Foundry and others suggested that a PaaS market would evolve, because you want to be able to effectively have mobility and migration and portability for these large cloud applications. We're not seeing that happen necessarily, but what we are seeing is that developers are increasingly becoming a force in dictating and driving cloud decision making, and developers will start biasing their choices toward the platforms that demonstrate they have the best developer experience. So whether we call it PaaS or we call it something else, providing the best developer experience is going to be really important to the future of the cloud marketplace. >> Okay, great. And then George, George O, George Gilbert, you'll follow up with George O on that other question we need some clarification on. There's a question, really David, I think it's for you: will persistent DIMMs emerge first on public clouds? >> Almost certainly. Public clouds are where everything is going first, and when we talked about UniGrid, that's where it's going first. And then the NVMe over fabrics, that architecture is going to be in public clouds, and it has the same sort of benefits there. And NVDIMMs will again develop pretty rapidly as a part of the NVMe over fabrics architecture. >> Okay, we're out of time. We'll look through the chat and follow up with any other questions. Peter, back to you. >> Great, thanks very much Dave. So once again, we want to thank everybody here who has participated in the webinar today. I apologize, I feel like Han Solo in saying it wasn't my fault, but having said that, nonetheless, I apologize to Neil Raden and everybody who had to deal with us finding and unmuting people. We hope you got a lot out of today's conversation. Look for those additional pieces of research on Wikibon that pertain to the specific predictions on each of these different things that we're talking about.
And by all means, Support@siliconangle.freshdesk.com if you have an additional question; we will follow up with as many as we can from the significant list that's starting to queue up. So thank you very much. This closes out our webinar. We appreciate your time. We look forward to working with you more in 2018. (upbeat music)
ENTITIES
Entity | Category | Confidence |
---|---|---|
David Floyer | PERSON | 0.99+ |
David Vellante | PERSON | 0.99+ |
Jim | PERSON | 0.99+ |
Neil | PERSON | 0.99+ |
David | PERSON | 0.99+ |
Stuart | PERSON | 0.99+ |
Jim Kobielus | PERSON | 0.99+ |
Neil Raden | PERSON | 0.99+ |
Europe | LOCATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
2018 | DATE | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Peter Burris | PERSON | 0.99+ |
George | PERSON | 0.99+ |
Wikibon | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
2017 | DATE | 0.99+ |
Stuart Miniman | PERSON | 0.99+ |
George Gilbert | PERSON | 0.99+ |
Peter Drucker | PERSON | 0.99+ |
May 2018 | DATE | 0.99+ |
Peter | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
General Data Protection Regulation | TITLE | 0.99+ |
Dave | PERSON | 0.99+ |
1985 | DATE | 0.99+ |
50% | QUANTITY | 0.99+ |
Last year | DATE | 0.99+ |
George O | PERSON | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
Hans Solo | PERSON | 0.99+ |
Support@silicongangle.freshdesk.com | OTHER | 0.99+ |
12 billion dollars | QUANTITY | 0.99+ |
second consideration | QUANTITY | 0.99+ |
11 billion dollars | QUANTITY | 0.99+ |
Nine systems | QUANTITY | 0.99+ |
Day One Wrap | HPE Discover 2017 Madrid
>> (Narrator) Live from Madrid, Spain, it's theCUBE. Covering HPE Discover Madrid 2017. Brought to you by Hewlett Packard Enterprise. >> We're back in Espana. theCUBE, the leader in live tech coverage, is here covering HPE Discover Madrid, day one. I'm Dave Vellante with my cohost, Peter Burris. Well, it's all coming into focus, Peter. >> It is, it actually is. >> It is, I mean, it better be after five or six years. It's taking longer than I had hoped. But the story is consistent now. Over the last four Discovers, despite some of the distractions of spin merges and so forth, the story of hybrid IT, the Intelligent Edge, bringing automation, which is somewhat new, to the data center, services-led, starts to actually make sense. >> Peter: Through private cloud. >> Yep, and you know, we talked about it at the top of the show today, the spectrum. We're covering AWS re:Invent, we've got a big presence there. Obviously it's affected the entire industry, and then you've got the likes of HPE, Dell EMC, to a certain extent IBM, basically not giving up, saying wait a minute, these are our customers, they want Cloud on prem, we're gonna deliver it to them. They want Cloud in the Cloud, we'll help them get there.
And it was, you know, a way to compete with Cisco and differentiate, because, hey, they were trying to compete head to head with Cisco, and it was going okay, but not great. Aruba gave them a clear differentiator. And then all of a sudden, the Edge became this tailwind. And it kinda got them there early. >> Well, let's remember what Mark Hurd talked about. He said, well, why are we going after the networking world? I like their 67% gross margins. Okay, so... >> Dave: Talking about 3Com. >> He's talking about 3Com, he's talking about all the things that HP did as it tried to get into the networking business. >> Dave: Cisco, right, yeah. >> It was purely driven by gross margin. They didn't quite have the customer story down. Aruba has always been a great customer story. They've always said, look, this is your business challenge. You know, are you sick and tired of dropping your connection as you go from one conference room to another? These are your security issues. On, and on, and on. They had three or four concrete value propositions that just worked for customers. That acquisition happened about the same time that HP was starting to rededicate itself back to thinking about its customer base. So it's not surprising to me that that integration, or that merger, has been one of the more successful that HP's undertaken. >> So again, the spectrum. You know, you got Andy Jassy on one end who started this whole thing, and you got the likes of HPE on the other end. And you're right, it does align with a lot of things that we've been saying around true private cloud and so forth. Jassy doesn't buy it. He flat out says, this is old guard thinking trying to hang on to the past. But our analysis suggests it's not just old guard thinking. It's customer thinking, because they can't just move their business into the Cloud. Thoughts? >> Totally agree. So I'd say there are a couple of things about it. It's customer thinking based on the realities of the data assets that they're trying to leverage as they transform into a digital business. Data is real, and it's gonna weigh in on how your infrastructure looks. And the Edge is gonna have characteristics that mean you're gonna have to do automation right there, right where the action is. You're not gonna be able to send it up to the Cloud all the time. There's gonna be a lot of business events that take place in that core, in that second tier. So it's not that it's old versus new guard. And here's why I say that, Dave. It's because in many respects, we're giving some props to HP right now, which is great. But, in many respects, the story that HP is telling today is a story that has largely been fashioned by what AWS has done over the last 10 years. And that is, here's what the Cloud experience is. And now HP's adding, "And you want that Cloud experience wherever your data demands." The difference, therefore, between the old guard and the new guard, or the old way and the new way, on premise, is that it used to be pretty clear to me, and I think it was pretty clear to us, that the talk about private Cloud was simply a way of putting new marketing spin on the enhancement, upgrade, replacement cycle for servers and storage. And that did not work. It just flat out didn't work. >> Well, it worked in the sense that it froze the market a little bit. >> Eh, it froze the market a little bit.
But, overall, for the past five or six years, that growth has been slowing down pretty dramatically. So I would say that the data is pretty unassailable. You're not gonna move everything to a central location. But you're gonna want that Cloud experience. And so the question is, are we gonna see a great Cloud experience wherever the physics, the legalities, and intellectual property governance demand that you put your data? >> Well, I thought Jesse St Laurent was gonna talk about the next wave. He mentioned Multi-Cloud. >> Peter: He's CTO of... >> Of SimpliVity, now HPE Hyperconverged. >> Peter: Right. >> I thought he was talking about, he said the next wave is Hyper-V. Okay, check. I mean, that's, to me, a feature of the product. And then he sort of talked about Multi-Cloud, and that's really where I thought he was gonna go, because when you look at what AWS is doing, and I've always contended they're years ahead, we can debate how many. Five, seven, three. Probably closer to five than three. But where they're headed is serverless, you know, functional programming. Stateless, new programming models. It's all about the developer to those guys, and that's the parlance that they speak in. The hyperconverged guys all talk in VM terms. And that's not how Amazon talks or thinks. So, you know, the question is, is that a next wave, and can the Enterprise guys >> Peter: Talk developer? >> Yeah, can they catch that wave? >> So, I think... Look, let's be honest. AWS is a great company. There's no question about it. They've done things that a lot of old style infrastructure jocks thought couldn't be done. And they did it. And they continue to demonstrate that they are really engaging their customers and turning that insight and knowledge into great services. So this is not a knock on AWS. But, and I think AWS is recognizing this as well, because they're starting to talk a lot about IoT and their approach to IoT, they recognize that not all the data is gonna be sourced up in the Cloud. The data is gonna be generated in a lot of other places, and they have to participate there as well. So, from our perspective, ultimately, we would say that Multi-Cloud, the ability to naturally place your data where the data needs to be placed, which increasingly is gonna be closer to the event that needs to be automated, that needs to have that high quality experience, is gonna be the dominant factor in determining the characteristics of the application infrastructure that you put in place. And we'll see what happens. Serverless, yeah, serverless is great. You can do a lot with it. But you can also still build junky applications with serverless. Microservices are great, yeah. But you can still build junky applications with microservices. >> A lot of those services aren't so micro, as Neil Raden would say. >> That's exactly right. So you can still do bad stuff in the Cloud. So, at the end of the day, the whole point is to get a new compact between business people who have the vision of the digital services and digital capabilities they want, IT professionals and developers who are gonna create that value, and then infrastructure people who are allowing the data and the workload to fall where it naturally should fall, and then making it possible for the industry to work together, because that's what users want. >> Okay, so let me ask the question differently.
You agree that the Cloud guys generally, Amazon specifically, are ahead of the Enterprise guys when it comes to infrastructure and services? >> Peter: Yeah, there's no question there. >> Okay, is the lead extending, or is it dwindling? Amazon's lead, in your view. >> Well, so look, first off you have to think about Amazon's lead relative to Microsoft, Oracle, and others. And they're not that far ahead of Microsoft. >> Dave: Right. >> So there's a real battle raging there. Google has at least as good a relationship with a lot of developers as Amazon does. When you think about what a lot of developers are building in the Cloud experience, they're using Kubernetes, they're using TensorFlow, they're increasingly going to use Istio. I mean, there's gonna be increased energy being put forward to try to shape how that Cloud innovation's gonna happen. >> So those are the three hyperscale Cloud guys. >> Those are the three hyperscale Cloud guys. And, as we talked about, they are increasingly defining what the Cloud experience is. I think what we're seeing now is the Enterprise guys stepping back and saying, you know what, we have to define our role in the Cloud experience, and not presume that we're gonna tell everybody what the Cloud experience is. Which is what they were doing for many years, and they failed at it. >> And you could make an argument that HPE, as a smaller company with fewer assets to encumber it, can actually deliver that through partnerships, maybe not as profitably, most definitely not as profitably, but actually can deliver that outcome for customers as a more agile company. >> We'll see, we'll see, because... >> Dave: You could make that argument is all I'm saying. >> Well, you could make that argument, but remember, we're moving, and even HP announced some stuff today with Greenlake, from a product orientation increasingly to a service orientation. And there's demonstration that you can do things with your business model that may allow you to operate at different levels of profitability when you take more of a services approach to things. So I think the most important message that we can leave from today is our observation on that notion of a spectrum, from, you know, put it on public, to a true private orientation, which is hybrid, where an on premise play is gonna be essential. That spectrum seems to be real, number one. Number two, however, it doesn't mean that AWS in particular is not going to be successful at driving the definition of the Cloud experience. And number three, we're now seeing at least one company, but we're also starting to see indications of others, acknowledge that their role in all of this will be to take whatever the leaders in Cloud are talking about and make it possible, make that experience possible, where the data requires, and that will include on premise. >> So, and I agree with you, AWS is defining that Cloud experience. So, as Ana Pinczuk was speaking, I just jotted down, AWS Cloud experience, which they've defined, and HPE Cloud experience. So I've got pay as you go, you know, this kind of flex capacity, kind of. I mean, it's as close as you can probably get. >> Peter: Greenlake. >> Yeah, Greenlake. Kind of. >> Something we all need to learn more about. But it's getting there, it's getting there. >> But it will never get there entirely, right?
Because they're gonna require you to, you know, buy a year's worth of capacity, with thresholds; you're gonna have thresholds above and thresholds below. >> Except, we also heard, again, I think there's more, I don't wanna... I think you're right. >> It's nuanced, it's not 100% of the way there. >> You start throwing the balance sheet and finances in there and how you're gonna do it. >> We'll come back to that. So, elastic? Again, kind of. You know, to a point. Integrated services? Like tons of them, like thousands a year? Some of those, but as I was saying before, HPE's ecosystem play allows them to pick and choose. >> Yeah, but remember Dave, okay, keep going, keep going. >> Security, sort of. Let's call it the Amazon way: here's our security, it's good, but take it or leave it. And then the HPE approach is, your way: HPE, you have security your way. If that's the edict of the organization, we can map to that. One Cloud versus Multi-Cloud. Obviously, HPE has a Multi-Cloud strategy, Amazon doesn't. They don't care about managing multi-clouds. They care about managing their Cloud. And then services as a service. HPE can deliver that, and Amazon, I've got a question mark; it's their ecosystem that's delivering those services. So I guess the point I'm making is, maybe it's not the exact replica of the Amazon experience, but there are attributes of it which appeal to Enterprise IT. >> Peter: That's right. >> Which Amazon is really not interested in delivering. >> Peter: Right. >> Ergo, the assumption is, my assumption is, that business, that on prem business, will be here for a long, long time. >> Peter: Absolutely. >> Indefinitely. >> And we would agree with that. In fact we think, ultimately, that there's gonna be enough uniqueness about how businesses use their data and treat their data that we expect to see this notion of true private Cloud actually be a bigger overall piece of the marketplace than the one size fits all, with a degree of customization possible, that Amazon's providing. But again, we have to be careful here. Because as analysts, we're sort of naturally falling into this trap of setting up AWS and HPE, or any of these folks, in opposition. There are companies that have very, very different, opposed visions of how this is gonna play out. Specifically, we can talk about Amazon saying it's all gonna be IaaS, we're gonna add PaaS in there. And then, increasingly obviously, Microsoft and Oracle saying, oh no, we're gonna have application Clouds. You're gonna buy an application Cloud, and you're gonna do a whole bunch of stuff in that. What we see today is not in opposition, >> Dave: Right. >> to the AWS vision, it's not. It is a, okay, great. But for this type of work, this type of data, this type of workload, this type of reality, chances are you're gonna need to put this type of stuff here, and have it fit into the overall motion of the Cloud experience, and it doesn't have to be a complete substitute. It just has to work for that class of workload. >> Well, but, bringing it back to HP, and we gotta wrap, is HPE does not have an application Cloud, right? >> Peter: They don't. >> And as a result, it's going to be in a knife fight. With Amazon, with Dell EMC, and with China. >> It's gonna be in a knife fight with companies that are like it. China, you know, Huawei, Dell EMC, Cisco. >> You're right, you're right. Amazon's setting the pricing tone and the business model tone.
>> Look, right now it's Amazon and Microsoft that are helping to set the stage for what this is all gonna look like. >> So, again, the bottom line is, it's not a 60% gross margin company, Mark Hurd's vision of going to compete with Cisco. It's a 25 to 32% gross margin business. >> Peter: That's really focused on customer problems. >> Focused on customer problems, throws off a couple billion dollars of cash, it can eke out a little bit of growth. You know, that's what it is. >> Not a bad business. >> No, it's a great business, actually. Alright, Pete, thanks for the wrap on day one. We'll be back tomorrow, 8:30 am local time, right? >> Man: Sure. >> Roughly. >> Man: 8:45. >> 8:45 local time. Check out theCUBE.net, where you'll see this show, and you'll see the other shows that we're doing, including re:Invent; John Furrier and the crew are over there today. That's a wrap for day one, this is theCUBE. We'll see you tomorrow. (upbeat music)
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Cisco | ORGANIZATION | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
Huawei | ORGANIZATION | 0.99+ |
Peter Burris | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Dave Villante | PERSON | 0.99+ |
Peter | PERSON | 0.99+ |
Ana Pinczuk | PERSON | 0.99+ |
25 | QUANTITY | 0.99+ |
Mark Hurd | PERSON | 0.99+ |
Jassy | PERSON | 0.99+ |
Neil Raden | PERSON | 0.99+ |
Meg Whitman | PERSON | 0.99+ |
Mark Hurds | PERSON | 0.99+ |
100% | QUANTITY | 0.99+ |
Andy Jassy | PERSON | 0.99+ |
HP | ORGANIZATION | 0.99+ |
67% | QUANTITY | 0.99+ |
Pete | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
tomorrow | DATE | 0.99+ |
60% | QUANTITY | 0.99+ |
HPE | ORGANIZATION | 0.99+ |
three | QUANTITY | 0.99+ |
Espana | LOCATION | 0.99+ |
3Com | ORGANIZATION | 0.99+ |
Aruba | ORGANIZATION | 0.99+ |
Dell EMC | ORGANIZATION | 0.99+ |
Five | QUANTITY | 0.99+ |
8:45 | DATE | 0.99+ |
Meg | PERSON | 0.99+ |
five | QUANTITY | 0.99+ |
Action Item | AWS re:Invent 2017 Expectations
>> Hi, I'm Peter Burris, and welcome once again to Action Item. (funky electronic music) Every week, Wikibon gathers together the research team to discuss seminal issues that are facing the IT industry. And this week is no different. In the next couple of weeks, somewhere near 100,000 people are gonna be heading to Las Vegas for the Amazon, or AWS, re:Invent show from all over the world. And this week, what we wanna do is provide a preview of what we think folks are gonna be talking about. And I'm joined here in our lovely Palo Alto studio, theCUBE studio, by Rob Hof, who is the editor-in-chief of SiliconANGLE. David Floyer, who's an analyst at Wikibon. George Gilbert, who's an analyst at Wikibon. And John Furrier, who's a CUBE host and co-CEO. On the phone we have Neil Raden, an analyst at Wikibon, and also Dave Vellante, who's co-CEO with John Furrier, and an analyst at Wikibon as well. So guys, let's jump right into it. David Floyer, I wanna hit you first. AWS has done a masterful job of making the whole concept of infrastructure as a service real. Nobody should downplay how hard that was and how amazing their success has been. But they're moving beyond infrastructure as a service. How far up the stack do we expect Amazon to go this year at re:Invent? >> Well, I can say what I'm hoping for. I agree with your premise that they have to go beyond IaaS. The overall market for cloud is much bigger than just IaaS, with SaaS and other clouds as well, both on-premise and off-premise. So I would start with what enterprise CIOs are wanting, and they are wanting to see a multi-cloud strategy, both on-premise and multiple clouds. SaaS clouds, other clouds. So I'm looking for AWS to provide additional services to make that easier. In particular, services like private clouds for enterprises. I'm looking for distributed capabilities, particularly in the storage area, so they can link different clouds together. I want to see edge data management capabilities. I'd love to see that, because the edge itself, especially the low-latency stuff, the real-time stuff, needs specialist services, and I'd like to see them integrate that much better than just Snowball. I want to see more details about AI, I'd love to see what they're doing in that. There's tremendous potential for AI in operations, and to improve security, to improve availability, recovery. That is an area where I think they could be a leader of the IT industry. >> So let me stop you there, and George, I wanna turn to you. So AWS in AI: how do we anticipate that's gonna play out at re:Invent this year? >> I can see three things in decreasing order of likelihood. The first one is, they have to do a better job of tooling, both for, sort of, developers who want to dabble in, well, get their arms around AI, but who aren't real data scientists. And then also hardcore tools for data scientists, who have been well served recently by Microsoft and IBM, among others. So this is this Iron Man Initiative that we've heard about. For the hardcore tools, something from Domino Data Labs, which looks like they're gonna partner with them. It's like a data-science workbench for the collaborative data preparation, modeling, deployment. That whole life cycle.
And then for the developer-ready tooling, I expect to see they'll be working with a company called DataRobot, which has a really nifty tool where you put in a whole bunch of training data, and it trains what could be a couple dozen models that it thinks might fit, and it'll show you the best fits. It'll show you the features in the models that are most impactful. In other words, it provides a lot of transparency. [A rough sketch of this kind of automated model search appears at the end of this segment.] >> So it's kind of like models for models. >> Yes, and it provides transparency. Now that's the highest likelihood. And we have names on who we think the likely suspects are. The next step down, I would put applying machine learning to application performance management and IT operations. >> So that's the whole AI for ITOM that David Floyer just mentioned. >> Yeah. >> Now, presumably, this is gonna have to extend beyond just AI for Amazon or AWS-related ITOM. Our expectation is that we're gonna see greater distribution, or Amazon taking more of a leadership role in establishing a framework that cuts across multi-cloud. Have I got that right, David Floyer? >> Absolutely. A massive opportunity for them to provide the basics on their own platform. That's obviously the starting point. They'll have the best instrumentation for all of the components they have there. But they will need to integrate that in with their own databases, with other people's databases. The more that they can link all the units together and get real instrumentation, from an application point of view, of the whole of the infrastructure, the more value AI can contribute. >> John Furrier, the whole concept of the last few years of AWS is that all roads eventually end up at AWS. However, there's been a real challenge associated with getting this migration momentum to really start to mature. Now we saw some interesting moves that they made with VMware over the last couple of years, and it's been quite successful. And some would argue it might even have given another round of life to VMware. Are there some things we expect to see AWS do this time that are gonna reenergize the ecosystem to start bringing more customers higher up the stack to AWS? >> Yeah, but I think I look at it, quickly, as VMware was a groundbreaking event for both companies, VMware and AWS. We talked about that at that research event we had with them. The issue that is happening is that AWS has had a run in the marketplace. They've been the leader in cloud. Every year, it's been a slew of announcements. This year's no different. They're gonna have more and more announcements. In fact, they had to release some announcements early, before the show, because they have, again, more and more announcements. So they have the under-the-hood stuff going on that David Floyer and George were pointing out. So the classic build strategy is to continue to be competitive by having more services layered on top of each other, upgrading those services. That's a competitive strategy frame that's under the hood. On the business side, you're seeing more competition this year than ever before. Amazon now is highly contested, certainly in the marketplace with competitors. Okay, you're seeing FUD, the uncertainty and doubt from other people, how they're bundling. But it's clear. The cloud visibility is clear to customers. The numbers are coming in, multiple years of financial performance. But now the ecosystem play is, really, the interesting one. I think the VMware move is gonna be a tell sign for other companies that haven't won that top-three position. >> Example?
>> I will say SAP. >> Oh really? You think SAP is gonna have a major play this year, where we might see some more stuff about AWS and SAP? >> I'm hearing rumblings that SAP is gonna be expanding their relationship. I don't have the facts yet on the ground, but from what I'm sensing, this is consistent with what they've been doing. We've seen them at Google Cloud Platform. We talked to them specifically about how they're dealing with cloud. And their strategy is clear. They wanna be on Azure, Google, and Amazon. They wanna provide that database functionality, bring their client base in from HANA, and roll that in. So it's clear that SAP wants to be multi-cloud. >> Well, we've seen Oracle over the past couple of years, or our research has suggested, I would say, that there's been kind of two broad strategies. The application-oriented strategy that goes down to IaaS aggressively. That'd be Oracle and Microsoft. And then the IaaS strategy that's trying to move up through an ecosystem play, which is more AWS. David Floyer and I have been writing a lot of that research. So it sounds like AWS is really gonna start doubling down on an ecosystem, and making strategic bets on software providers who can bring those large enterprise install bases with them. >> Yeah, and the thing that you pointed out is migration. That's a huge issue. Now you can get technical, and say, what does that mean? But Andy Jassy has been clear, and the whole Amazon Web Services team has been clear from day one. They're customer-centric. They listen to the customers. So if they're doing more migration this year, and we'll see, I think they will be, I think that's a good tell sign and a good prediction. That means the customers want to use Amazon more. And VMware was the same way. Their customers were saying, hey, we're ops guys, we want to have a cloud strategy. And it was such a great move for VMware. I think that's gonna lift the fog, if you will, pun intended, between what cloud computing is and other alternatives. And I think companies are gonna be clear that I can party with Amazon Web Services and still run my business in a way that's gonna help customers. I think the number one thing that I'm looking for is, what are the customers looking for in multi-cloud? Or if it's server-less or other things. >> Well, yeah, I agree. Lemme run this by you guys. It sounds as though multi-cloud increasingly is going to be associated with an application set. So, for example, it's very difficult to migrate a database manager from one place to another, as a snowflake. The cost to the customer is extremely high. The cost to the migration team is extremely high, lotta risk. But if you can get an application provider to step up and start migrating elements of the database interface, then you dramatically reduce the overall cost of what that migration might look like. Have I got that right, David Floyer? >> Yeah, absolutely. And I think that's what AWS, what I'm expecting them to focus on, is more integration with more SaaS vendors, making it a better place-- >> Paul: Or just software vendors. >> Or software vendors. Well, SaaS vendors in particular, but software vendors in particular-- >> Well, SAP's not a SaaS player, right? Well, they are a little bit, but most of their installations are still SAP on Oracle, and moving them over en masse is gonna require a significant amount of SAP help. >> And one of the things I would love to see them have is a proper tier-one database as a service.
That's something that's hugely missing at the moment. Using HANA, for example, on SAP, it's a tier-one database in a particular area, but that would be a good move and would help a lot of enterprises to move stuff into AWS. >> Is that gonna be sufficient, though, given how dominant Oracle is in that-- >> No, they need something general purpose which can compete with Oracle, or come to some agreement with Oracle. Who knows what's gonna happen in the future? >> Yeah, I don't know. >> Yeah, that's one we're all kinda ignoring here. It will be interesting to see. But at the end of the day, look, Oracle has an incentive also to render more of what it has as a service at some level. And it's gonna be very difficult to say, we're gonna render this as a service to a customer, but Amazon can't play. Or AWS can't play. That's gonna be a real challenge for them. >> The Oracle thing is interesting, and I bring this up because Oracle has been struggling as a company with cloud-native messaging. In other words, they're putting out, they have a lot of open source, we know what they have for tooling. But they own IT. I mean, if you dug up Oracle, they got the database, as David pointed out, tier one. But they know the IT guys, they've been doing business in IT for years as a legacy vendor. Now they're transforming, and they are trying hard to be the cloud-native path, and they're not making it. They're not getting the credit, and I don't know if that's a cultural issue with Oracle. But Amazon has that positioning from a developer cloud DNA. Now winning real enterprise deals. So the question that I'm looking at is, can Amazon continue to knock down these enterprise deals in lieu of these incumbent or legacy players in IT? So if IT continues to transform more towards cloud native, docker containers, or containers and Kubernetes, these kinds of microservices, I would give the advantage to Amazon over Oracle, even though Oracle has the database, because ultimately the developers are driving the behavior. >> Oh, again, I don't think any of us would disagree with that. >> Yeah, so the trouble though is the cost of migrating the applications and the data. That is huge. The systems of record are there for a reason. So there are two fundamental strategies for Oracle. If they can get their developers to add the AI, add the systems of intelligence, make them systems of intelligence, then they can win with that strategy. Or the alternative is that they move it to AWS and do that movement in AWS. That's a much more risky strategy. >> Right, but I think our kind of concluding point here is that ultimately, if AWS can get big application players to participate and assist and invest in and move customers along with some of these big application migrations, it's good for AWS. And to your point, John, it's probably good for the customers too. >> Absolutely. >> Yeah, I don't think it's mutually exclusive, as David makes a point about migrating for Oracle. I don't see a lot of migration coming off of Oracle. I look at overall database growth as the issue. Right, so Oracle will have that position, but it's kind of like when we argued about internet growth back in 1997. Internet user growth was so great that the rising tide floated everything. So I believe that the database growth is going to happen so fast that Amazon is not necessarily targeting Oracle's market share; they're going after the overall database market, which might be a smaller tier-two kind of configuration, or new architectures that are developing.
So I think it's an interesting dynamic, and Oracle certainly could play there and lock in the database, but-- >> Here's what I would say: they're going after the new-workload world, and a lot of that new workload is gonna involve database, as it always has. The notion that we have somehow solved database, or that database is 90% penetrated for the applications that are gonna matter in 2025, is ridiculous. There's a lot of new database that's gonna be sold. I think you're absolutely right. Rob Hof, what's the general scuttlebutt that you're hearing? You know, you as editor of SiliconANGLE, editor-in-chief of SiliconANGLE. What is the journalist world buzzing about for re:Invent this year? >> Well, I guess my question is, because of the challenges that we're facing, like we just talked about with the difficulty in migrating some of these applications. We also see very fast-growing rivals like Google. Still small, but growing fast. And then there's China. That's a big one, where is there a natural limit there that they're gonna have? So you put these things together, and I guess we see Amazon Web Services still growing at 42% a year or whatever, it's great. But is it gonna start to go down because of all these challenges? >> 'Cause some of the constraints may start to assert themselves. >> Rob: Exactly, exactly. >> So-- >> Rob: That's what I'm looking at. >> Kind of the journalism world is kinda saying, are there some speed bumps up ahead for AWS? >> Exactly, and we saw one just a couple, well, just this week with China, for example. They sold off $300 million worth of data centers, equipment and such, to their partner in China, Beijing Sinnet. And they say this is a way to comply with Chinese law. Now we're going to start expanding, but expanding while you're selling off $300 million worth of equipment, you know, it begs a question. So I'm curious how they're going to get past that. >> That does raise an interesting question, and I think I might go back to the AI on ITOM, AI on IT operations management, point. That is, do you need control of the physical assets in China to nonetheless sell a great service? >> Rob: And that's a big question. >> For accessing assets in China. >> Rob: Right. >> And my guess is that if they're successful with AI for ITOM and some of these other initiatives we're talking about, it in fact may be very possible for them to offer a great service in China, but not actually own the physical assets. And that's, it's an interesting question for some of the Chinese law issues. Dave Vellante, anything you want to jump in on, and add to the conversation? For example, if we look at some of the ecosystem and some of the new technologies, and some of the new investments being made around new technologies, what are some of your thoughts about some of the new stuff that we might hear about at AWS this year? >> Dave: Well, so, a couple things. Just a comment on some of the things you guys were saying about Oracle and migration. To me it comes down to three things: growth, which is clearly there, you've talked about 40% plus growth. Momentum, you know, the flywheel effect that Amazon has been talking about for years. And something that really hasn't been discussed as much, which is economics, and this is something that we've talked about a lot, and Amazon is bringing a software-like marginal economics model to infrastructure services.
And as it potentially slows down its growth, it needs to find new areas, and it will expand its TAM by gobbling up parts of the ecosystem. So, you know, there's so much white space, but partners have got to be careful about where they're adding value, because ultimately Amazon is gonna target those, much in the same way, in my view anyway, that Microsoft and Intel have in the past. And so I think you've got to tread very carefully there, and watch where Amazon is going. And they're going into the big areas of AI, trying to do more stuff with the Edge. And anywhere there's automation, they are going to grab that piece of value in the value chain. >> So we've talked about two main things. We've talked about a lot of investments, a lot of expectations about AI, and how AI is gonna show up in a variety of different ways at re:Invent. And we've talked about how they're likely to make some of these migration initiatives even that much more tangible than they have been, by putting some real operational clarity around how they intend to bring enterprises into AWS. We haven't talked about IoT. Dave just mentioned it. What's happening with the Edge, how is the Edge going to work? Now historically, what we've seen is a lot of promises that the Edge was all going to end up in the cloud from a data standpoint, and that's where everything was gonna be processed. We started seeing the first indications that that's not necessarily how AWS is gonna move last year, with Snowball and server-less computing, and some of those initiatives. We have anticipated a real honest-to-goodness true private cloud, an AWS stack delivered with a partnership. Hasn't happened yet. David Floyer, what are we looking for this year? Are we gonna see that this year, or are we gonna see more kind of circumnavigating the issue and doing the best that they can? >> Yeah, well, my prediction last year was that they would come out with some sort of data service that you could install on your on-premise machine as a starting point for this communication across a multi-cloud environment. I'm still expecting that, whether it happens this year or early next year. I think they have to. The pressure from enterprises, and they are a customer-driven organization, the pressure from enterprises is going to mandate that they have some sort of solution on-premise. It's a requirement in many countries, especially in Europe. They're gonna have to do that, I think, without doubt. So they can do it in multiple ways. They can do it as they've done with the US government, by putting in particular data centers, whole data centers, within the US government. Or they can do it with small services, or they can take the Microsoft approach of having an AWS service on site as well. I think with pressure from Microsoft, the pressure from Europe in particular is going to make this an essential requirement of their whole strategy. >> I remember, going back a couple decades, when Dell made big moves to win the business of a very large manufacturer that had 50,000 workstations, mainly engineers', turning over every year. To get that business, Dell literally put a distribution point right next to that manufacturer. And we expect to see something similar here, I would presume, when we start talking about this. >> Yeah, I mean, I would make a comment on the IoT. First of all, I agree with what David said, and I like his prediction, but I'm kind of taking a contrarian view on this, and I'm watching a few things at Amazon.
Amazon always takes an approach of getting into new markets either with a big idea and small teams to figure it out, or with building blocks, and they listen to the customer. So IoT is interesting, because IoT's hard, it's important, it's really a fundamentally important infrastructure, an architecture that's not going away. I mean, it has to be nailed down, it's obvious. Just like blockchain kinda is obvious when you talk about decentralization. So it'll be interesting to see what Amazon does on those two fronts. But what's interesting to note is Amazon always becomes their own first customer. In their retail business, AWS was powering retail. With Whole Foods, and the stuff they're doing on the physical side, it'll be very interesting to see what their IoT strategy is from a technology standpoint with what they're doing internally. We get food delivered to our house from Amazon Fresh, and they got Whole Foods and all the retail. So it'll be interesting to see that. >> They're buying a lot of real estate. And I thought about this as well, John. They're buying a lot of real estate, and how much processing can they put in there? And the only limit is that I don't think Whole Foods would qualify as particularly secure locations (laughing) when we start talking about this. But I think you're absolutely right. >> That only begs the question, how will they roll out IoT? Because is it, like, okay, roll out an appliance, that's more of an infrastructure thing. Is that their first move? So what I'm looking for is just to kind of read the tea leaves and say, what are they really doing? So they have the tech, and it's gonna be interesting to see. I mean, it's more of a high-level kind of business conversation, but IoT is a really big, challenging area. I mean, we're hearing that all over the place from CIOs, like what's the architecture, what's the playbook? And it's different per company. So it's challenging. >> Although one of the reasons why it looks different per company is because it is so uncertain as to how it's gonna play out. There's not a lot of knowledge to fuse. My guess is that in 10 years, we're gonna look back and see that there was a lot more commonality and patterns of work in IoT than many people expected. So I'll tell you one of the things that I saw last year that particularly impressed me at AWS re:Invent. It was the scale at which the network was being built out. And it raised for me an interesting question. There are multiple challenges that every company faces with IoT. One is latency, one is intellectual property control, one is legal ramifications like GDPR. Which is one of the reasons why the whole Europe play is gonna be so interesting, 'cause GDPR is gonna have a major impact on a global basis, it's not just Europe. Bandwidth, however, is an area that is not necessarily a given; it's partly a function of cost. So what happens if AWS blankets the world with network, and customers, to get access to at least some degree of Edge, no longer have to worry about a telco? What happens to the telco business, at least from a data communication standpoint? Anybody wanna jump in on that one? >> Well, yeah, I mean, I've actually talked to a couple folks like Ericsson, and I think AT&T. And they're actually talking about taking their central offices, and even the base stations, and sort of outfitting them as mini data centers. >> As POPs. >> Yeah.
But I think we've been hearing now for about 12 months that, oh, maybe Edge is going to take over before we actually even finish getting to the cloud. And I think that's about as ill-considered as the notion that PCs were gonna put mainframes out of business. And the reason I use that as an analogy: at one point IBM was going to put all their mainframe-based databases and communication protocols on the PC. That was called OS/2 Extended Edition. And it failed spectacularly because-- >> Peter: For a lot of reasons. >> But the idea is you have a separation of concerns. Presentation on one side, in that case, and data management and communications on the other. Here, in what we're doing now, we're definitely gonna have the low-latency inferencing on the Edge, and then the question is what data goes back up into the cloud for training and retraining, and even simulation. And having talked to Microsoft's Azure CTO this week, you know, they see it the same way. They see the compute-intensive modeling work, and even simulation work, done in the cloud, and the sort of automated decisioning on the Edge. >> Alright, so I'm gonna make one point, and then I want to hit the Action Item round here. The one point I wanna make is, I have a feeling, and I don't know if it's gonna happen at re:Invent this year, but I have a feeling that over the course of the next six to nine months, there's going to be a major initiative on the part of Amazon to start bringing down the cost of data communications, and use their power to start hitting the telcos on a global basis. And what's going to be very, very interesting is whether Amazon starts selling services on its network independent of its other cloud services. Because that could have global implications for who wins and who loses. >> Well, that's a good point, I just wanna add color on that. Just anecdotally, from my perspective, you asked a question, and I went, I haven't talked to anyone, but knowing the telco business, I think they're gonna have that VMware moment. Because they've been struggling with over-the-top for so long, and with the rapid pace of innovation going on, I don't think Amazon is gonna go after the telcos, I think it's just an evolutionary steamroller effect. >> It's an inevitability. >> It's an inevitability that the steamroller's coming. >> So users, don't sign long-term data communications deals right now. >> Why wouldn't you do a deal with Amazon if you're a telco? You get relevance, you have stability, lock in your cash flows, cut your deal, and stay alive. >> You know, it's an interesting thought. Alright, so let's hit the Action Item round here. So really quickly, as a preface for this, the way we wanna do this, guys, is that John Furrier is gonna have a couple-hour one-on-one with Andy Jassy sometime in the next few days. So tell us a little about that first, John. >> Well, we've been doing re:Invent for multiple years, I think it's our sixth year, we do all the events, and we cover it as the media partner, as you know. And I'm gonna have a one-on-one sit-down, every year prior to re:Invent, to get his view, an exclusive interview, for two hours. Talk about the future. We broke the first Amazon story years ago, on the building blocks, and how they overcame, and now they're winning. So it's a time for me to sit down and get his insight, and continue to tell the story, and document the growth of this amazing success story.
And so I'm gonna ask him specific questions, and I would love to know what he's thinking. >> Alright, guys, so I want each of you to pretend that you're representing your community: what's the one question your community would like answered by Andy Jassy? George, let's start with you. >> So my question would be, are you gonna take IT operations management, machine-learning-enable it, and then, as part of offering a hybrid cloud solution, do you extend that capability on-prem, and maybe even to other vendors' clouds? >> Peter: That's a good one. David Floyer. >> I've got two, if I may. >> The more the merrier. >> I'll say them very quickly. The first one, John, is: you've, the you being AWS, developed a great international network with fantastic performance. How is AWS going to avoid conflicts with the EU, China, and Japan, particularly given their resistance to using any US-based nodes, and pressure from in-country telecommunication vendors? So that's my first, and the second is, again on AI: what's going to be the focus of AWS in applying the value of AI? Where are you gonna focus first to give value to your customers? >> Rob Hof, do you wanna ask a question? >> Yeah, I'd like to. One thing I didn't raise in terms of the challenges is, Amazon overall is expanding so fast into all kinds of areas. Whole Foods, we saw this. I'd ask Jassy, how do you contend with the reality that a lot of these companies that you're now bumping up against as an overall company don't necessarily want to depend on AWS for their critical infrastructure, because they're competitors? How do you deal with that? >> Great question. David Vellante. >> David: Yeah, my question would be, as an ecosystem partner, what advice would you give? 'Cause I'm really nervous that as you grow, and you use the mantra of, well, we do what customers want, you are gonna eat into my innovation. So what advice would you give to your ecosystem partners about places that they can play, and a framework that they should think about for where they should invest and add value without the fear of you consuming their value proposition? >> So it's kind of the ecosystem analog to the customer question that Rob asked. So the one that I would have for you, John, is: the promise is all about scale, and they've talked a lot about how software at scale has to turn into hardware. What will Amazon be in five years? Are they gonna be a hardware player on a global basis? Following his China question, are they gonna be a software management player on a global basis, and not gonna worry as much about who owns the underlying hardware? Because that opens up a lot of questions: maybe there is going to be a true private cloud option, and AWS will just try to run on everything, and really be the multi-cloud administrator across the board. The Cisco, as opposed to the IBM, of the internet transformation. Alright, so let me summarize very quickly. Thank you very much, all of you guys, once again for joining us in our Action Item. So this week we talked about AWS re:Invent. We've done this for a couple of years now. theCUBE has gone up and done 30, 35, 40 interviews. We're really expanding our presence at AWS re:Invent this year. So our expectation is that Amazon has been a major player in the industry for quite some time. They have spearheaded the whole concept of infrastructure as a service in a way that, in many respects, nobody ever expected.
And they've done it so well and so successfully that they are having an enormous impact way beyond just infrastructure in the marketplace today. Our expectation is that this year at AWS re:Invent, we're gonna hear a lot about three things. Here's what we're looking for. First is AWS as a provider of advanced artificial intelligence technologies that then get rendered in services for application developers, but also for infrastructure managers. AI for ITOM being, for example, a very practical way of envisioning how AI gets instantiated within the enterprise. The second one is, AWS has had a significant migration-as-a-service initiative underway for quite some time. But as we've argued in Wikibon research, that's very nice, but the reality is nobody wants to bond the database manager. They don't want to promise that the database manager's gonna come over. It's interesting to conceive of AWS starting to work with application players as a way of facilitating the process of bringing database interfaces over to AWS more successfully, as an onboarding roadmap for enterprises that want to move some of their enterprise applications into the AWS domain. And we mentioned one in particular, SAP, that has interesting potential here. The final one is, we don't expect to see the kind of comprehensive Edge answers at this year's re:Invent. Instead, our expectation is that we're gonna continue to see AWS provide services and capabilities, through server-less, through other partnerships, that allow AWS, or the cloud, to extend out to the Edge without necessarily putting out that comprehensive software stack as an appliance being moved through some technology suppliers. But certainly Greengrass, certainly server-less, Lambda, and other technologies are gonna continue to be important. If we finalize overall what we think one of the biggest plays is: we are especially intrigued by Amazon's continuing build-out of what appears to be one of the world's fastest, most comprehensive networks, and their commitment to continue to do that. We think this is gonna have implications far beyond just how AWS addresses the Edge, to overall how the industry ends up getting organized. So with that, once again, thank you very much for enjoying Action Item and participating, and we'll talk next week as we review some of the things that we heard at AWS. And we look forward to those further conversations with you. So from Peter Burris, the Wikibon team, SiliconANGLE, thank you very much, and this has been Action Item. (funky electronic music)
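[Editor's note: George's description earlier in this segment of DataRobot-style tooling, where you feed in training data, fit a couple dozen candidate models, and surface the best fits along with the most impactful features, reduces to a pattern that is easy to sketch. The following is a minimal illustration of that pattern using scikit-learn; it is not DataRobot's product or API, the dataset is synthetic, and the three candidate models stand in for the couple dozen a real tool would try.]

```python
# Minimal sketch of "train many candidate models, rank the fits, and expose
# which features mattered." Illustrative only -- not DataRobot's API.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.inspection import permutation_importance

# Synthetic stand-in for "a whole bunch of training data."
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# Three candidates keep the sketch short; a real tool would try dozens.
candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}

# Rank candidates by cross-validated accuracy: the "best fits" leaderboard.
scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
leaderboard = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
for name, score in leaderboard:
    print(f"{name}: {score:.3f}")

# Transparency: which input features drive the winning model's predictions.
best = candidates[leaderboard[0][0]].fit(X, y)
result = permutation_importance(best, X, y, n_repeats=10, random_state=0)
top_features = result.importances_mean.argsort()[::-1][:3]
print("most impactful features:", top_features.tolist())
```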
ENTITIES
Entity | Category | Confidence |
---|---|---|
David | PERSON | 0.99+ |
David Floyer | PERSON | 0.99+ |
Neil Raden | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Peter Burris | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
John | PERSON | 0.99+ |
George Gilbert | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
China | LOCATION | 0.99+ |
ORGANIZATION | 0.99+ | |
Oracle | ORGANIZATION | 0.99+ |
Andy Jassy | PERSON | 0.99+ |
George | PERSON | 0.99+ |
David Vellante | PERSON | 0.99+ |
Peter | PERSON | 0.99+ |
Domino Data Labs | ORGANIZATION | 0.99+ |
Rob Hof | PERSON | 0.99+ |
Whole Foods | ORGANIZATION | 0.99+ |
90% | QUANTITY | 0.99+ |
Paul | PERSON | 0.99+ |
AT&T. | ORGANIZATION | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
Amazon Web Services | ORGANIZATION | 0.99+ |
Dave | PERSON | 0.99+ |
John Foyer | PERSON | 0.99+ |
Rob | PERSON | 0.99+ |
Wikibon | ORGANIZATION | 0.99+ |
2025 | DATE | 0.99+ |
Europe | LOCATION | 0.99+ |
Action Item | 2018 Predictions
>> Hi, welcome once again to Action Item. (funky electronic music) I'm Peter Burris, and this is Wikibon's weekly research meeting, where we bring together some of the best minds in Silicon Valley to talk about some of the trends that are most important. We're broadcasting from here in the Cube studios in beautiful Palo Alto, California. And in the studio, I'm being joined by George Gilbert and David Floyer, and on the phone we have Neil Raden, Jim Kobielus, and Dave Vellante. Team, thanks very much for being part of this conversation today. What we're going to do today is bring forward some of Wikibon's predictions for 2018. In a previous show, we discussed what we learned in 2017, some of the trends and some of the expectations that didn't play out as expected. This year we're going to dig a little bit deeper into what we think is going to happen in 2018, and it all starts with a proposition that even as we go through significant industry change, we're not necessarily going to see the economics of the industry change as fast, which leads to prediction number one. David Floyer, what is it? >> So, my prediction is that volume is going to take a key role in the evolution of disruptive technologies. So for example, in AI and IoT and in true private cloud, volume is going to be the key determinant of when it starts to take off, when it starts to hockey-stick. >> So this has been something that's been featured in the industry for a while, Dave, but give us an example. What's the relationship between volume and AI? >> So if we take the relationship between AI and volume, AI is going sideways, and I would predict that it's going to go sideways in 2018, because every implementation is a snowflake until there are solutions out there which can be delivered in volume by vendors. Then that will be the point at which things will take off. So an example: automated cars. They are AI, and when they start to come out in volume, there'll be volume manufacturers, volumes of the sensors, volumes of the processors, the on-board processors, volumes of everything, which will drive down costs and make those implementations happen quickly. >> And it's still software, so we're still worried about support and service on a very, very broad scale. >> David: Yeah. >> So that leads to our second quick prediction. Dave Vellante, build on this notion of volume. What's going to be the impact on a lot of the innovative smaller companies in 2018? >> Dave: So Peter, my prediction is: gotta go scale or go home, AKA go out of business. So we expect massive industry consolidation is going to take place in the next two years, certainly through 2019, as the business models of VC-backed tech startups are getting smashed by cloud and, to a great extent, open source. In a turnabout from the historical norms, innovations and cost reductions from the largest cloud players are moving at a pace that's faster than many, if not most, startups are able to deliver. So finding white space is much, much harder. We see private equity as playing a key role here, providing capital for M and A, and doing roll-ups that are going to create scale and large portfolios that can compete. >> So Neil Raden, as we think about what Dave just said, one of the key things that's happening is a lot of money's being put into some of the new technologies that are intended to provide more intelligence in a lot of different places. One of the large company leaders indicating, or describing, how this was going to play out was IBM with its Watson story.
What's been going on with Watson? What's our prediction for how that's playing out, and what's a likely 2018 scenario for IBM and Watson? >> Neil: Well, not to sugarcoat it, but Watson's been a dismal failure, and I think that IBM is going to reassess their whole approach to cognitive computing in 2018. Numbers don't lie. Let me give you some numbers from 2016. They obviously don't have '17 yet. But these are reliable numbers from some institutional clients of mine. Their goal for 2016 was over 8,000 clients. They achieved 500. Their goal for business partners was over 4,000, and they achieved 329. So, you know, the numbers speak for themselves, but Watson hasn't caught on. It's a solution in search of a problem. It was a marketing stunt, really, that someone thought could be turned into a 20-billion-dollar-per-year business. It's not even a product, really. It's dozens of subsystems that are linked with APIs. Some of them are interesting, but most are already available in the open source world. >> Well, one of the things we talked about last week, Neil, was the idea that we're going to see more buy, as opposed to build, and we talked about the volume play there, and then we asked the question, is there going to be more software or is there going to be more services? It sounds like IBM's play to be a dominant player in AI-related services has not gone as well as expected. Is that kind of where we are right now? >> Neil: Well, yeah. If you look at one of the more public failures of Watson, which was MD Anderson Cancer Center, they pulled the plug on the project after 62 million dollars, but IBM only got about 20 million dollars of that; the rest of it went to PWC. So how they intend to split that business between global services and their partners, I really don't know. And the failure of Watson at MD Anderson wasn't entirely IBM's fault. A lot of it had to do with PWC's project management, and a lot of it had to do with the people at Anderson, who basically started the project by looking at a very well-understood type of leukemia that had a well-understood etiology and treatment options. So when the auditors looked at it, they said, we haven't learned anything for 62 million dollars, and that's been repeated at other projects. >> So it sounds like this is, again, tied back to the idea of scale, volume, and related issues. But it also sounds like there are a lot of questions, ultimately, about what is AI? What isn't AI? What role is Watson going to play? Is it going to be private data? Is it going to be public data? A lot of questions are going to emerge over the course of next year. But there are domains where AI, ML, DL are likely to have some important success. And George, we've got a prediction about where they're likely to be successful in 2018. What are we thinking, what's one domain where we think at least machine learning is going to have a significant impact in 2018? >> Well, keying off David's point about volume economics, we think that IT operations management is going to be one of the first horizontal applications that embeds machine learning. It's not about presenting modeling tools to developers; it's just part of the application. The reason it's important, there are really two key reasons. We're building out shared, ephemeral infrastructure, which is very different from the dedicated silos that we had for mission-critical applications.
And this infrastructure, and the application landscape on top of it, is extremely hard to manage, and machine learning can help greatly. And I think investment in that will be driven also by a realization that this is training wheels for IoT, in the sense that you're monitoring machines through the data telemetry that they throw off, and you're using models to figure out how they should be operating versus how they are operating. [A rough sketch of that telemetry-modeling idea appears at the end of this segment.] >> So this has significant applications across IoT, ML, and how we get to volume, because it's a controlled and pretty well-defined space. By that I mean that bespoke applications, whether they're from AI or whatnot, are going to create new needs for new types of monitoring, but the classification of the tools and the classifications of the devices that will be monitored are pretty well-understood, and they're controlled by the IT industry, so they ought to have pretty good definitions. Is that what we're thinking here, George? >> Yes, precisely, and the bespoke pieces can be modeled because they fall within a well-known domain. But I just want to add, on the go-to-market side, something that keys off of what Dave Vellante said, which is that these IT operations management applications can come from cloud vendors, they can come from enterprise software vendors, but especially the ones that are going to be hybrid cloud are going to need enterprise sales forces to get them to market. You hear millions of, virtually millions of, startups say our go-to-market strategy is land and expand. That doesn't get you enterprise-wide, and for that you need an enterprise sales force, the most expensive migratory workforce in the world, and startups don't have them. And that's why, one of the reasons, we will see roll-ups for scale. >> So we've talked about the need for scale, the impact on startups, the impact on big companies like IBM. One of the domains we think this is going to play out most successfully in is ITOM, IT operations management, for some of these new technologies. But underneath all of this is a lot of new complexity, because of distribution of function, distribution of data, distribution of applications, and there needs to be a new technology concept that allows for that distribution to take place under control. And we talked about this a few weeks ago, but Jim Kobielus, what's our prediction for the role that blockchain or blockchain-like technologies are going to take in facilitating this new distribution of capability around digital business? >> Jim: Yeah, blockchain, we're predicting, will be as fundamental to the growth of the worldwide digital infrastructure and digital markets as TCP/IP was, 30 to 40 years ago, to the growth of what became the web and the internet. And why is that? Well, you know, when you look at the basic principles for development of any infrastructure, where there's an innovation on the infrastructure side that is shared or standardized, robust, meaning secure, and distributed, it quickly becomes a common bond enabling growth of sharing and teaming and markets and so forth. So really, it's a layering process, where we have TCP/IP and, you know, DNS and URLs providing this shared address space. Layered on top of that was public key infrastructure, which is the foundation of the security that makes blockchain so strong. You know, PKI and SSL and all that is an enabler; that's another robust, shared common infrastructure.
And then on top of that, what we see is the distributed, robust, shared record of transactions. That's blockchain, and really blockchain is an enabler for the new generation of digital cryptocurrencies, such as bitcoin, enabling a shared, robust, and distributed currency, or means of payment, across the worldwide economy. So, in many ways, blockchain is an enabler for this new generation of truly robust and shared currency and transactions, with an immutable, secured, shared record. [A toy sketch of that tamper-evident record appears at the end of this segment.] It's just going to be a growth accelerator for the world economy in the 21st century going forward. >> So in many respects, technology takes off when network formation occurs. TCP/IP was a foundation for network formation for distributed computing. What we're basically saying is a blockchain becomes a crucial feature of how application networks get constructed over the course of the next 10 years. Have I got that right, David Floyer? >> Absolutely, that's the key. The guy who sold the first telephone was a genius, the second was easier, and it gets easier and easier as that network grows. Blockchain is a key contributor to the development of those networks, and to the many, many one-to-one relationships that can occur from that, away from centralization and toward a much more distributed environment. >> So I think we've got time for one more prediction really quickly, and I'll bring it up, and then I want to open it up for conversation, because this is an interesting one. We come back to this notion of global network formation, blockchain, or blockchain-like technologies, being what we think is a crucial element of that. But let's talk about how the relationship between technology, the cloud, and global economies is likely to evolve. For the most part, when people think about the cloud today, we think about US-based companies: Amazon, Microsoft, Google, Facebook, IBM also in there. But there are some other companies that are going to have a say in how the cloud industry evolves over the course of the next five years: Alibaba, Tencent, Baidu. So our prediction is that in 2018, we're going to see a lot more conversation about the role that China plays in establishing some of the new rules for how cloud, application networks, and security play on a global basis, and that's going to facilitate the emergence of Alibaba, Tencent, and Baidu on the global stage as cloud-computing companies. What are you guys' thoughts? Dave Vellante, let me start with you. >> Dave: Well, I think we're going to see the emergence of, we've seen the emergence of, the China cloud, and we're going to see that seep through other parts of Asia Pacific. As we discussed earlier as a team in our private meeting, Europe is going to be a very interesting pivot point, because if China can control at least portions of Europe and use that as a lure, that's going to give them a leg up on global cloud. >> So that leads ultimately to a series of questions about what will be the relationship between the formation of cloud industries, the evolution of the cloud industries, and geopolitical concerns. And I think what we need to do, guys, is dedicate an entire research meeting to that question, because it's going to be one of the most important dictators of how the industry evolves over the next few years, and ultimately how businesses and enterprises need to start establishing crucial partnerships with their key and strategic suppliers. So look, in the last couple minutes, we want to do our Action Item round.
Now, what we do here at the Action Item show is we start off having a conversation, and then we go into the Action Item: what are you going to do differently Monday as a consequence of the information we're talking about? So let's do that now: hit some Action Items on what you heard from the five, six predictions that we talked about. David Floyer, what's your Action Item? >> So my Action Item, for CIOs and CTOs, is to take a pause on IoT and look for vendors that have solutions which can be put in easily and quickly, and which span OT and IT in the IoT space. >> Neil Raden, what's your Action Item? >> Neil: Well, I think there's a lot of activity around AI, and there's going to be an explosion of it in 2018, but most of it's not really going to be AI, it's going to be machine learning, and machine learning is really just math and floating points. AI is different. AI is neuroscience, it's neurology, it's biology and physics and sociology, it's more science. I think that some machine learning is there on the event horizon of AI, but it's not AI. So we need to make sure we're clear about which announcements and which technologies are machine learning versus artificial intelligence. >> Jim Kobielus, what's your Action Item? >> Jim: I think my Action Item is to revisit IBM's prospects in the AI market, and in deep learning, going forward. And revisit on a positive note, actually, because IBM officially turned around their cognitive strategy in the last year to focus on the PowerAI platform, which is really framework-agnostic and so forth. And really, the AI space that's actually shaping up is different from the one that IBM and others envisioned at the start of this decade, and so in 2018 we're going to see IBM come out strong, I believe, as one of the providers of the core framework-agnostic deep learning development platforms in the industry. That's my prediction. >> David Vellante, what's your Action Item? >> Dave: I think if you're a startup, you really have to take a hard look at your business and the value that you're bringing to market, and be honest: if you're not delivering something that the cloud guys can't deliver or don't want to deliver, then I think you've really got to think about pivoting or exiting the business that you're in. And as part of that, I think you've got to find, to George's point, distribution channels and distribution partners that can help you with go-to-market at scale, or you're in big trouble. >> George Gilbert, Action Item. >> We've been talking about sort of the cloud wars, and my recommendation to CIOs and senior IT leaders would be that if you want to hedge your bets, you don't want to be all in on one cloud, it's not about dividing a workload across different clouds. Pick a cloud for a workload or for an application, because portability is sort of more of a dream than a reality. It's not about moving containers around; you're in an API ecosystem, you're subject to data gravity. So it's almost like, if you're going to do the equivalent of distributed computing, you're going to put some part of the application on one cloud and some part in another cloud.
So, quickly the findings are these. The technology industry made a major mistake with the dot com boom, and the mistake was a presumption that technology change necessarily meant economic change. That is a false assumption. The economics of technology have been pretty well understood for quite some time and they're going to assert themselves even as we go through this significant transformative period in the technology industry. And the economics of volume are going to continue to be important. And we expect that those economics, coupled with the three factors of what's driving cloud architecture decisions, the realities of physics, geopolitical concerns, and literature property concerns, are going to lead to some significant changes in 2018 that we've only just conceived of. One, we expect that we're going to see an emergence of true private cloud that will continue to be crucial to how businesses think about their information technology overall infrastructure and plant, and that's going to have an impact ultimately on where AI gets developed, more from software vendors based on volume. Two, we expect to see a significant impact on, ultimately, what happens in the VC fronted world as startups, which have historically just presumed that there was no need for go to market, that everything was going to be try and buy and then we'd scale from there, start to hit the business realities of the consistency of the economics of volume. Three, IBM we think is repositioning, and somewhat paradoxically is likely to become more successful as a consequence, as a provider of the technologies that make possible some of these new comprehensive, complex AI and related oriented technologies, and not just as a service provider. Very importantly, ITOM is going to become increasingly important and we'll see AI, machine learning be an essential feature of that, in fact, one of the places where we learn how to do it right. And the final one is lots going on with blockchain, but we expect greater distribution of applications, greater distribution of data, and the security technologies and the technologies for bringing that together and supporting the network formation of data and applications must be in place, and that's going to be a major area of technology and innovation in 2018. Alright, so this closes out our Action Item for this week. Once again, I'm Peter Burris. I'd like to, as always, thank the Wikibon team for participating with me today and we look forward to once again visiting with you from the Cube studios here in Palo Alto, California on the next Action Item. Thank you very much. (funky electronic music)
ENTITIES
Entity | Category | Confidence |
---|---|---|
David | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
David Floyer | PERSON | 0.99+ |
Neil Raden | PERSON | 0.99+ |
Jim Kobielus | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
Dave Vellante | PERSON | 0.99+ |
PWC | ORGANIZATION | 0.99+ |
George Gilbert | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
David Vellante | PERSON | 0.99+ |
George | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
2018 | DATE | 0.99+ |
Alibaba | ORGANIZATION | 0.99+ |
Dave | PERSON | 0.99+ |
Peter Burris | PERSON | 0.99+ |
Tencent | ORGANIZATION | 0.99+ |
Neil | PERSON | 0.99+ |
Peter | PERSON | 0.99+ |
2017 | DATE | 0.99+ |
Baidu | ORGANIZATION | 0.99+ |
five | QUANTITY | 0.99+ |
2016 | DATE | 0.99+ |
Jim | PERSON | 0.99+ |
December | DATE | 0.99+ |
2019 | DATE | 0.99+ |
Silicon Valley | LOCATION | 0.99+ |
November | DATE | 0.99+ |
21st century | DATE | 0.99+ |
Monday | DATE | 0.99+ |
millions | QUANTITY | 0.99+ |
last week | DATE | 0.99+ |
next year | DATE | 0.99+ |
Asia Pacific | LOCATION | 0.99+ |
62 million dollars | QUANTITY | 0.99+ |
Palo Alto, California | LOCATION | 0.99+ |
over 4,000 | QUANTITY | 0.99+ |
last year | DATE | 0.99+ |
MD Anderson Cancer Center | ORGANIZATION | 0.99+ |
over 8,000 clients | QUANTITY | 0.99+ |
US | LOCATION | 0.99+ |
329 | QUANTITY | 0.99+ |
Wikibon | ORGANIZATION | 0.99+ |
Wikibon Analyst Meeting | Lessons & Predictions
>> Hi, welcome once again to Wikibon's weekly research meeting from The Cube's Palo Alto studios. (upbeat electronic music) >> I'm Peter Burris, and we're being joined as always by Wikibon's team of analysts, including George Gilbert here in the studio with me. On the phone we have David Floyer, Neil Raden, and Jim Kobielus. And today, what we're going to do is we're going to talk about some of the lessons that we learned in 2017. Over the course of the next month, Wikibon is going to put a fair amount of research into making our annual predictions, and this is the first step. What lessons did we learn? What is working? What isn't working? As a consequence of some of the things that were tried, predictions that were made, and initiatives that haven't necessarily panned out. Now the reason we want to do this is not just to talk about technology, but we're trying to bring the idea to those users out there who are in the midst of budgeting, about where they should continue to place bets, and where they might want to start thinking about rationing down things that don't seem to be panning out. So there's a lot of ground to cover, and let's get started. And I want to start with you, David Floyer. So the first thing I think we've learned in 2017 is that the cloud is not going to be homogenous. Do you agree? >> David: Absolutely, it's becoming very, very heterogeneous. We brought into play the concept of true private cloud, and we're seeing that develop very strongly, and we're predicting that again. In the future we'll see services delivered in a completely different way, with storage that's coming from the cloud: from the hyperscale cloud, into the private cloud, and for general purpose. And we're seeing really some very big changes in how systems will begin to be developed. >> So with that as a basis for some of the kind of macro trends, the idea that business is not going to move to the cloud, as we like to say; the cloud is going to move to business. There are a number of applications that are driving some of these changes. Neil, I want to start with you. One of them is big data, or perhaps we should finally start calling it analytics. What is it about analytics that is starting to catalyze a rethinking of the overall architecture that we're going to use to sustain some of these digital business changes, that all companies, all institutions face? >> I don't know how it started, Peter. People have been doing analytics for decades while corporate IT was more or less obsessed with operations. But over the last five to ten years, analytics has just become the most important thing, and it's not a flip flop. The problem is the approach to analytics has jumped from one thing to another so quickly. I don't think that anyone has had a chance to really perfect their approach. We went from predictive analytics, and then we went to data science and big data. And now everything is machine learning and artificial intelligence. If I were inside an organization right now, my head would be spinning. So we'd have to elucidate some clear directions for people about what works and what doesn't, and what level of effort has to be expended to get things done. >> So is it safe to say at this point that the kind of general purpose notion of big data, where you throw everything into a single store, like a data lake, and then you have everybody run around looking for data, is starting to break down and become increasingly specialized?
Is that kind of what we learned in 2017? >> I think it's safe to say that big data never really crossed the chasm. Its closest application to something that would be appealing to mainstream customers was taking ETL and offloading it in front of very expensive data warehouses. But the way the open source ecosystem, principally with the (mumbles) distributions that curated all these open source components, the way it tried to attack that problem was so complicated in terms of the administrative demands that most customers choked on it. >> Peter: So we're seeing increasing specialization, in part because of the nature of the problems that people are trying to solve, but also the complexity of the underlying solution. So that leads to a third question, and the third question is, we talked about cloud not being homogenous. We talked about big data becoming more specialized and solution oriented, outcome oriented. One of the other big drivers in all this, David Floyer, is IOT. We'll talk in a second about how IOT and analytics are going to come together. But what are we learning from IOT in 2017? >> What we're learning is that the edge is, again, not homogenous. And it's much better to break out the edge, break out the IOT at the edge, into a primary layer and a secondary layer. The primary layer is the layer that is a solution, which takes the sensors, takes equipment, takes AI technologies, and brings them all together as a solution to a business problem. And we believe that that is a much lower cost, and a volume approach to the problem, than everybody, every IT organization, making their own equipment in their own factories or enterprises. So the primary is where most of the data is going to be generated, and also where most of the generated data is going to be compressed down, maybe as much as a million to one, into the secondary layer. And that's the interface between the primary layer and the cloud computing, whether it be (mumbles) by the cloud, or public cloud, or any combinations of those. That's the tertiary layer, and the secondary level would be that interface at the edge between the primary devices and the cloud computing that the rest of the enterprise is dependent upon. >> So Jim Kobielus, we've got three lessons learned on the table. Clouds are not homogenous; analytics are increasingly going to be a feature of applications, but that's going to require retooling; IOT is not going to be homogenous, and it's going to drive new data sources and new opportunities to create value in (mumbles). Where are developers in all this? What are we learning as the developer community starts to try to participate more in the process of creating new levels of digitally based value in business?
Do you update your tools, and somehow make do with the DevOps tools you have? I would bring more of the, for example, model governance over algorithms and deep learning and (mumbles) models into the core governance structure you have. Can you do a data lake? Do you have data lakes that are architected to handle machine data in great volume, like (mumbles) and exabytes of machine data generated by all these end points? Okay, there's all these decisions that need to be made, and there's money that needs to be spent to invest in this entire development infrastructure ecosystem, to really prepare yourself to build these disruptive applications that might take your industry by storm. None of this comes cheap. >> Peter: So it seems, guys, like we're in a situation where the technology in many respects is available to undertake and build, and deploy, and generate value out of some of these new classes of applications. But skills are very, very unevenly distributed. Neil Raden, let's talk a little bit about that. What is the core skills challenge that businesses face today as they attempt to explore new ways of solving problems with digitally related technologies? >> I think that software vendors are going to provide a tiered capability, just like we've seen in other kinds of analytical tools, where you have a small number of people at the top of the tier who have the background and the skill to understand whether this model was the appropriate model, or whether we found a correlation that was spurious because they were all time series, or something like that. And then you have a larger group of people who use these tools to drive a machine learning algorithm, or like DataRobot, where it just runs 10 or 12 different algorithms, and it helps you find the best one and so forth. But that doesn't mean that it's correct, and that doesn't mean that those people understand the statistics that are generated by the model. That requires governance by the people at the top of that tier. And then of course, there's the lower tier, which is how they communicate to these people what you've done with these techniques. >> Peter: So this is a broad problem, it sounds like. It sounds like we've got a skills deficit problem that's going to have far reaching impacts. We'll talk more about this during predictions, but I think there's one that's on everybody's mind right now. Are we going to see specialist software and solution vendors emerge out of this to start the process of at least solving some of these problems, and showing the industry how to go about it? Or is this something that all large enterprises, and mid-size enterprises, are going to have to do on their own, and they've got to start throwing an enormous amount of money at these issues? David Floyer, give our CIOs a kind of a vision of where they should be thinking right now about how to address the challenges of skills. >> David: Well, the big decision to make for all enterprises, or most enterprises, is the degree to which they should invest in their own solutions, their own AI solution, or should they wait until those solutions are included in (mumbles) the packages, and general purpose packages, and packages they get from (mumbles), then the (mumbles). And if you're a very large enterprise, and you can see a clear business differentiation, then clearly that investment can be justified. But I think for many enterprise CIOs, they will sit back and wait, and see the degree to which they need to invest.
That's not to say that they shouldn't be actively seeing what is available in the marketplace, but they should probably be spending more time reaching out to potential vendors with a solution, who can generate volume, rather than trying to create snowflakes on their own. >> Peter: So, before we get to the action item round, Jim, I want to build on that very quickly. So Dave's arguing essentially that we're moving into a buy vs. build as we go through this transformation. I think we all agree, that's where we are today. Next question though: is it going to be buying software, or is it going to be buying services? Or some combination of the two? What did we learn in 2017 about how the availability of increasingly advanced services, especially in the AI realm from some of the big cloud suppliers, is changing or altering the way businesses think about how they're going to generate value out of these technologies? >> Jim: Yeah, I think right now what we're seeing is the swing is towards buying services. Buying cloud services that have machine learning, deep learning, AI, (mumbles) again from the usual suspects: AWS, Microsoft, (mumbles) has been Google and IBM, and so forth. What we see right now in the whole developer war, to win the hearts and minds of AI developers, is it's coming down to whose cloud are you going to put your data in, where you can do your model training and development and deployment. Whose framework are you going to use: AWS's MXNet? Microsoft's CNTK? Google's TensorFlow? The vendors, the solution providers behind those frameworks, provide pre-trained models and a lot of other capabilities to build out not only the models, but to provide a full DevOps pipeline for the data (mumbles), so that you tend to be standardized on one solution provider a lot more than others. >> Peter: George, George. Hey Jim, let me bring George in. George, what do you have to say about (crosstalk). >> I think we've seen this. We've seen this movie before, when enterprises started to build out their applications. At one point they were thinking, the large enterprises, of custom data modeling how their entire enterprise worked, and realized they didn't have the skills to do that. They bought (mumbles). So I don't think the choice is binary between buying services or buying apps. I think there's also: are we going to wait for the installed base of apps, the big vendors who've installed the large horizontal apps, to add machine learning capabilities to those applications, or do we start to surround those legacy apps with more niche packaged solutions? And then the third one is, will we see vendors like IBM, and maybe Accenture, which have a mix of services and some repeatable IP. >> Great, so the one I'll add to this before we do the Action Item, guys, is I think one of the more important things that we're facing in the industry right now is, as it becomes evident, per David's earlier point, that the cloud is not going to be homogenous, are we moving into another round of platform wars, where users have to be very, very smart about what platform they choose? Yes, but increasingly having the options to do the appropriate level of integration across whatever arrangement of cloud services, on premise, true private cloud, etc. Probably something. A lesson that we've learned, and one that our clients will increasingly tell us that we have to focus on. Okay, Action Item round, guys. David Floyer, I want to talk with you. David Floyer, action item. >> The action item for me is actually in infrastructure.
There is a tremendous opportunity evolving to develop, to be able to put applications with far more data onto their systems. And those are based on a change in architecture, which we're calling (mumbles), which is stripping away the storage and the networking completely from the processors, being able to assemble systems which do things which are just unimaginable, just by the (crosstalk). >> George Gilbert, action item. >> I'd go back to picking how you're going to divide your efforts among extending your existing packaged apps with machine learning capabilities, and finding where the highest ROI areas for those are. Look at the emerging, sort of, I don't want to say startups, but younger companies that are adding these complementary capabilities. >> Peter: Okay, good. Next, Jim Kobielus, action item. >> Yeah, well, my action item is explore the new generation of high-level development abstraction frameworks for AI and deep learning, like the new Gluon framework that Microsoft and (mumbles) released a couple of weeks ago. That will enable the rest of us developers to be able to do deep learning AI development using code and visual paradigms that they have grown to love and use in their core development initiatives. >> Peter: Neil Raden, action item. >> I like machine learning, even though it has a lofty title that maybe it doesn't deserve. It's not that complicated. But more importantly, it creates opportunities for organizations to do things that really can help them. I think we spend too much time talking about AI, and I think the average organization needs a computer that thinks like a human being about as much as we need airplanes that flap their wings. But there's too much time on AI, which is a very esoteric area: facial recognition and all that other stuff. That's going to be packaged with things if you need it, but companies don't need to worry about finding people who can develop that. >> No need to anthropomorphize what doesn't need to be anthropomorphized. Okay, so here's our overall action item. 2017 has been a year of significant success in the computing industry, as businesses increasingly woke up to the idea that the transformation to digital business is not just about taking costs out of IT. It's about doing things differently, and specifically doing more with data. We've seen a lot of leaders in this realm. Companies that have been called digital natives have paved the way, but a lot of other industries are now recognizing that the role of data as an asset is crucial to their future. And they want to find ways of appropriating that. In particular, we think that there are three lessons that have been learned at the technology level. Lesson number one: the cloud is not going to be homogenous. The cloud is going to be a combination of technologies, each optimized to handle data as it pertains to particular uses, application forms, and workloads, in the natural and appropriate way. Data will drive workload, which will drive cloud implementation. Number two is that one of the key issues, or one of the key areas of change, is the transformation from big data concepts to analytic practicalities. We've got years of working with analytics. The technology is improving, the hardware is improving, and now we can apply it in new and interesting ways. And very importantly, that includes applying it to existing legacy applications to extend their useful life as well.
A lot is going to go into this, but the good news, ultimately, is that technology is becoming increasingly usable and increasingly useful to business. Third, the IOT, or internet of things, is going to have an enormous consequence in how we consider the arrangement of IT assets, IT investments, and IT personnel. And our expectation, ultimately, is that that will continue to be a crucial determinant of the decisions that ultimately get made, if success is a criterion. Because our observation is, yes, software is going to eat the world, but it's going to eat it at the edge. The last point that we want to make here, ultimately, is that a lot of IT organizations have to fess up to the reality that they're not skilled to do a lot of these things. They're not skilled to fully support the business's needs in these transformations. We are no longer in control of the speed of transformation in our industries; that's being set by our competitors, who may be better or worse than us at introducing some of these new technologies, taking advantage of them, and introducing new business model and customer experience capabilities. As a consequence, there's going to be a new round of value being created by solution providers, utilizing different cloud options, different IOT options, and different AI options, in response to expertise about how those solutions need to be deployed. And IT has to accept that, sooner rather than later, and start the process of establishing the frameworks for strategic management of those suppliers, so they can appropriately weave them into
ENTITIES
Entity | Category | Confidence |
---|---|---|
David | PERSON | 0.99+ |
George Gilbert | PERSON | 0.99+ |
David Floyer | PERSON | 0.99+ |
Neil Raden | PERSON | 0.99+ |
Jim Kobielus | PERSON | 0.99+ |
David Floye | PERSON | 0.99+ |
2017 | DATE | 0.99+ |
James Kebyouis | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Peter Burris | PERSON | 0.99+ |
Jim | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
2008 | DATE | 0.99+ |
Peter | PERSON | 0.99+ |
third question | QUANTITY | 0.99+ |
10 | QUANTITY | 0.99+ |
2018 | DATE | 0.99+ |
Neil | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
Wikibon | ORGANIZATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
George | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
one | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
three lessons | QUANTITY | 0.99+ |
Accenture | ORGANIZATION | 0.99+ |
One | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
first step | QUANTITY | 0.98+ |
single store | QUANTITY | 0.98+ |
third one | QUANTITY | 0.98+ |
Palo Alto | LOCATION | 0.98+ |
next month | DATE | 0.97+ |
three lessons | QUANTITY | 0.97+ |
each | QUANTITY | 0.97+ |
Third | QUANTITY | 0.97+ |
first thing | QUANTITY | 0.96+ |
ten years | QUANTITY | 0.95+ |
one thing | QUANTITY | 0.94+ |
mumbles | ORGANIZATION | 0.93+ |
12 different algorithms | QUANTITY | 0.91+ |
one point | QUANTITY | 0.91+ |
one solution | QUANTITY | 0.9+ |
decades | QUANTITY | 0.89+ |
The Cube | ORGANIZATION | 0.88+ |
couple of weeks ago | DATE | 0.86+ |
five | QUANTITY | 0.76+ |
Number two | QUANTITY | 0.74+ |
DevOps | TITLE | 0.69+ |
a million | QUANTITY | 0.66+ |
second | QUANTITY | 0.65+ |
number one | QUANTITY | 0.6+ |
last | QUANTITY | 0.6+ |
much | QUANTITY | 0.57+ |
tertiary | QUANTITY | 0.53+ |
Wikibon Research Meeting | October 20, 2017
(electronic music) >> Hi, I'm Peter Burris and welcome once again to Wikibon's weekly research meeting from the CUBE studios in Palo Alto, California. This week we're going to build upon a conversation we had last week about the idea of different data shapes or data tiers. For those of you who watched last week's meeting, we discussed the idea that data across very complex distributed systems, featuring significant amounts of work associated with the edge, is going to fall into three classifications or tiers. At the primary tier, the sensor data that's providing direct and specific experience about the things that the sensors are indicating will then signal work or expectations or decisions to a secondary tier that aggregates it. So what is the sensor saying? And then the gateways will provide a modeling capacity, a decision making capacity, but also a signal to tertiary tiers that increasingly look across a system wide perspective on how the overall aggregate system's performing. So: very, very local to the edge; gateway at the level of multiple edge devices inside a single business event; and then up to a system wide perspective on how all those business events aggregate and come together. Now what we want to do this week is translate that into what it means for some of the new technologies, new analytics technologies, that are going to provide much of the intelligence against each of these tiers of data. As you can imagine, the characteristics of the data are going to have an impact on the characteristics of the machine intelligence that we can expect to employ. So that's what we want to talk about this week. So Jim Kobielus, with that as a backdrop, why don't you start us off? What are we actually thinking about when we think about machine intelligence at the edge? >> Yeah, Peter. At the edge, the edge device being in the primary tier that acquires fresh environmental data through its sensors, what happens at the edge? In the extreme model, we think about autonomous vehicles, let me just go there very briefly. Basically, it's a number of workloads that take place at the edge, the data workloads. The data is (mumbles) or ingested, it may be persisted locally, and that data then drives local inferences that might be using deep learning chipsets that are embedded in that device. It might also trigger various actions, called actuations. Things, actions are taken at the edge. If it's the self-driving vehicle, for example, an action may be to steer the car or brake the car or turn on the air conditioning or whatever it might be. And then, last but not least, there might be some degree of adaptive learning or training of those algorithms at the edge, or the training might be handled more often up at the secondary or tertiary tier. The tertiary tier at the cloud level, which has visibility usually across a broad range of edge devices, is ingesting data that originated from all of the many different edge devices and is the focus of modeling, of training, of the whole DevOps process, where teams of skilled professionals make sure that the models are trained to a point where they are highly effective for their intended purposes. Then those models are sent right back down to the secondary and the primary tiers, where inferences are made, you know, 24 by seven, based on those latest and greatest models. That's the broad framework in terms of the workloads that take place in this fabric.
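Jim's list of edge workloads (ingest, local inference, actuation, and a hand-off upstream so the secondary and tertiary tiers can retrain models) can be sketched in a few lines of code. This is a minimal illustration only; the sensor read, the threshold rule standing in for an embedded deep learning model, and the gateway call are all hypothetical stand-ins, not any real device API.

```python
import time

def read_sensor() -> dict:
    """Hypothetical stand-in for acquiring fresh environmental data."""
    return {"ts": time.time(), "temp_c": 21.7, "vibration": 0.02}

def infer(sample: dict) -> str:
    """Local, low-latency inference; in practice this would run on an
    embedded deep learning chipset rather than a hand-written rule."""
    return "brake" if sample["vibration"] > 0.5 else "cruise"

def actuate(decision: str) -> None:
    """Take the action at the edge (steer, brake, adjust HVAC...)."""
    print(f"actuating: {decision}")

def forward_to_gateway(sample: dict, decision: str) -> None:
    """Ship the observation upstream so the secondary/tertiary tiers
    can aggregate, retrain, and push improved models back down."""
    pass  # e.g., publish over MQTT in a real deployment

def edge_loop(iterations: int = 3) -> None:
    for _ in range(iterations):
        sample = read_sensor()                # ingest (optionally persist)
        decision = infer(sample)              # local inference
        actuate(decision)                     # act at the edge
        forward_to_gateway(sample, decision)  # feed the training loop

edge_loop()
```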
>> So Neil, let me talk to you, because we want to make sure that we don't confuse the nature of the data and the nature of the devices, which may be driven by economics or physics or even preferences inside a business. There is a distinction that we have to always keep track of: some of this may go up to the cloud, some of it may stay local. What are some of the elements that are going to indicate what types of actual physical architectures or physical infrastructures will be built out as we start to find ways to take advantage of this very worthwhile and valuable data that's going to be created across all of these different tiers? >> Well, first of all, we have a long way to go with sensor technology and capability. So when we talk about sensors, we really have to define classes of sensors and what they do. However, I really believe that we'll begin to think in a way that approximates human intelligence about the same time as airplanes start to flap their wings. (Peter laughs) So, I think, let's have our expectations and our models reflect that, so that they're useful, instead of being, you know, hypothetical. >> That's a great point, Neil. In fact, I'm glad you said that, because I strongly agree with you. But having said that, the sensors are going to go a long ways, when we... but there is a distinction that needs to be made. I mean, it may be that at some point in time, a lot of data moves up to a gateway, or a lot of data moves up to the cloud. It may be that a given application demands it. It may be that the data that's being generated at the edge may have a lot of other useful applications we haven't anticipated. So we don't want to presume that there's going to be some hard wiring of infrastructure today. We do want to presume that we better understand the characteristics of the data that's being created and operated on, today. Does that make sense to you? >> Well, there's a lot of data, and we're just going to have to find a way to not touch it or handle it any more times than we have to. We can't be shifting it around from place to place, because it's too much. But I think the market is going to define a lot of that for us. >> So George, if we think about the natural place where the data may reside, where the processes may reside, give us a sense of what kinds of machine learning technologies or machine intelligence technologies are likely to be especially attractive at the edge, dealing with this primary information. >> Okay, I think that's actually a softball, which is: we've talked before about bandwidth and latency limitations, meaning we're going to have to do automated decisioning at the edge, because it's got to be fast, low latency. We can't move all the data up to the cloud, because of bandwidth limitations. So that's data intensive and it's fast. But up in the cloud, where we enhance our models, either continual learning of the existing ones or rethinking them entirely, that's actually augmented decisions, and augmented means it's augmenting a human in the process, where, most likely, a human is adding additional contextual data, performing simulations, and optimizing the model for different outcomes or enriching the model. >> It may in fact be a crucial feature of the training, in fact: validating that the action taken by the system was appropriate.
>> Yes, and I would add to that, actually; you used an analogy. People are going between two extremes, where some people say, "Okay, so all the analytics has to be done in the cloud," and Wikibon and David Floyer and Jim Kobielus have been pioneering the notion that we have to do a lot more at the client. But you might look back at client server computing, where the client was focused on presentation and the server was focused on data integrity. Similarly, here, the edge or client is going to be focused on fast inferencing, and the server is going to do many of the things that were associated with a DBMS and data integrity, in terms of reproducibility of decisions in the model for auditing, security, versioning, orchestration in terms of distributing updated models. So we're going to see the roles of the edge and the cloud rhyme with what we saw in client server. Neither one goes away; they augment each other. >> So, Jim Kobielus, one of the key issues there is going to be the gateway, and the role that the gateway plays, and specifically here, we talked about the nature of, again, the machine intelligence that's going to be operating more on the gateway. What are some of the characteristics of the work that's going to be performed at the gateway, that kind of has oversight of groupings or collections of sensor and actuator devices? >> Right, good question. So the perfect example that everybody's familiar with now of a gateway in this environment is a smart home hub. A smart home hub, just for the sake of discussion, has visibility across two or more edge devices. It could be a smart speaker, it could be the HVAC system that is sensor equipped, and so forth. What it does, the role it performs, a smart hub of any sort, is that it acquires data from the edge devices. The edge devices might report all of their data directly to the hub, or the sensor devices might also do inferences and then pass on the results of the inferences they've generated to the hub. Regardless, what the hub does is, A, it aggregates the data across those different edge devices over which it has visibility and control; B, it may perform its own inferences based on models that look out across an entire home in terms of patterns of activity. Then the hub might take various actions autonomously, by itself, without consulting an end user or anything else. It might take action in terms of beefing up the security, adjusting the HVAC, adjusting the lights in the house, or whatever it might be, based on all that information streaming in in real time. Possibly, its algorithms will allow it to determine what of that data shows an anomalous condition that deviates from historical patterns. Those kinds of determinations, whether it's anomalous or a usual pattern, are often made at the hub level, 'cause it's maintaining sort of a homeostatic environment, as it were, within its own domain. And that hub might also communicate upstream to a tertiary tier that has oversight, let's say, of a smart city environment, where everybody in that city, or whatever, might have a connection into some broader system that, say, regulates utility usage across the entire region to avoid brownouts and that kind of thing. So that gives you an idea of what the role of a hub is in this kind of environment. It's really a controller.
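A minimal sketch of the hub role Jim just described: aggregate readings across the edge devices it oversees, compare them against historical baselines, act autonomously when something looks anomalous, and, in a real system, report upstream to the tertiary tier. The device names, baselines, and 20% threshold are illustrative assumptions.

```python
class SmartHomeHub:
    """Secondary-tier controller with visibility across several edge devices."""

    def __init__(self, baselines: dict):
        self.baselines = baselines  # expected reading per device
        self.readings = {}

    def ingest(self, device: str, value: float) -> None:
        """Devices may report raw data or the results of their own inferences."""
        self.readings[device] = value

    def evaluate(self) -> list:
        """Flag devices deviating more than 20% from baseline; the hub could
        then actuate locally (adjust HVAC, lights) and notify the tertiary tier."""
        actions = []
        for device, value in self.readings.items():
            base = self.baselines.get(device)
            if base and abs(value - base) / base > 0.20:
                actions.append(f"adjust {device} (reading {value}, expected ~{base})")
        return actions

hub = SmartHomeHub(baselines={"hvac_temp_c": 21.0, "power_kw": 1.2})
hub.ingest("hvac_temp_c", 27.5)
hub.ingest("power_kw", 1.15)
for action in hub.evaluate():
    print(action)  # in a real system: actuate, then report upstream
```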
>> So, Neil, if we think about some of the issues that people really have to consider as they start to architect what some of these systems are going to look like, we need to factor both what the data is doing now, but also ensure that we build into the entire system enough of a buffer so that we can anticipate and take advantage of future ways of using that data. Where do we draw that fine line between "we only need this data for this purpose now" and "geez, let's ensure that we keep our options open so that we can use as much data as we want at some point in time in the future"? >> Well, that's a hard question, Peter, but I would say that it may turn out that for this detailed data coming from sensors, the historical aspect of it isn't really that important. If the things you might be using that data for are more current, then you probably don't need to capture all that. On the other hand, there have been many, many occasions historically where data has been used for something other than its original purpose. My favorite example was scanners in grocery stores, where it was meant to improve the checkout process, not have to put price stickers on everything, manage inventory, and so forth. It turned out that some smart people like IRI and some other companies said, "We'll buy that data from you, and we're going to sell it to advertisers," and all sorts of things. We don't know the value of this data yet; it's too new. So I would err on the side of being conservative and capturing and saving as much as I could. >> So what we need to do is an optimization of some form: how much is it going to cost to transmit the data, versus what kind of future value or what kinds of options on future value might there be in that data? That is, as you said, a hard problem, but we can start to conceive of an approach to characterizing that ratio, can't we? >> I hope so. I know that, personally, when I download 10 gigabytes of data, I pay for 10 gigabytes of data, and it doesn't matter if it came from a mile away or 10,000 miles away. So there have to be adjustments for that. There are also ways of compressing data, because this sensor data, I'm sure, is going to be fairly sparse; it can be compressed, it's redundant, you can do things like run-length encoding, which takes all the zeroes out, and that sort of thing. There are going to be a million practices that we'll figure out. >> So as we imagine ourselves in this schemata of edge, hub, tertiary, or primary, secondary, and tertiary data, and we start to envision the role that data's going to play and how we build these architectures and these infrastructures, it does raise an interesting question, and that is, from an economic standpoint, what do we anticipate are going to be the classes of devices that are going to exploit this data? David Floyer, who's not here today, hope you're feeling better David, has argued pretty forcibly that over the next few years we'll see a lot of advances made in microprocessor technology. Jim, I know you've been thinking about this a fair amount. What types of function >> Jim: Right. >> might we actually see being embedded in some of these chips that software developers are going to utilize to actually build some of these more complex and interesting systems?
Yeah, first of all, one of the trends we're seeing in the chipset market for deep learning, just to stay there for a moment, is that, traditionally, and when I say traditionally I mean the last several years, the market has been dominated by GPUs, graphics processing units. Nvidia, of course, is the primary provider of those. Of course, Nvidia has been around for a long time as a gaming solution provider. Now, what's happening with GPU technology, in fact, the latest generation of Nvidia's architecture shows where it's going: building more deep-learning-optimized capabilities into the chipset itself. They're called tensor cores, and I don't want to bore you with all the technical details, but the whole notion of-- >> Peter: Oh, no, Jim, do bore us. What is it? (Jim laughs) >> Basically, deep learning is based on doing high speed, fast matrix math. So fundamentally, tensor cores do high velocity, fast matrix math, and the industry as a whole is moving toward embedding more tensor cores directly into the chipset, a higher density of tensor cores. Nvidia in its latest generation of chip has done that. They haven't totally taken out the gaming oriented GPU capabilities, but there are competitors, and they have a growing list, more than a dozen competitors on the chipset side now. We're all going down a road of embedding far more tensor processing units into every chip. Google is well known for something called the TPU, tensor processing units, their chip architecture. But they're one of many vendors that are going down that road. The bottom line is that the chipset itself is being rearchitected and optimized for the core function that CPUs, and really GPU technology, and even ASICs and FPGAs, were not traditionally geared to do, which is just deep learning at high speed, with many cores, to do things like face recognition and video and voice recognition freakishly fast. And really, that's where the market is going in terms of enabling underlying chipset technology. What we're seeing is, what's likely to happen in the chipsets of the year 2020 and beyond is that they'll be predominantly tensor core processing units, but they'll be systems on a chip. And I'm just talking about the future, not saying it's here now: systems on a chip that include a CPU to manage a real time OS, like a real time Linux or whatnot, along with highly dense tensor core processing units. And in this capability, these'll be low power chips, and low cost commodity chips, that'll be embedded in everything: everything from your smart phone, to your smart appliances in your home, to your smart cars, and so forth. Everything will have these commodity chips, 'cause suddenly every edge device, everything, will be an edge device, and will be able to provide more than augmentation, automation, all these things we've been talking about, in ways that are not necessarily autonomous, but can operate with a great degree of autonomy to help us human beings live our lives in an environmentally contextual way at all points in time. >> Alright, Jim, let me cut you off there, because you said something interesting, a lot more autonomy. George, what does it mean that we're going to dramatically expand the number of devices that we're using, but not expand the number of people that are going to be in place to manage those devices? When we think about applying software technologies to these different classes of data, we also have to figure out how we're going to manage those devices and that data.
What are we looking at from an overall IT operations management approach to handling a geometrically greater increase in the number of devices and the amount of data that's being generated? (Jim starts speaking) >> Peter: Hold on, hold on. George? >> There's a couple of dimensions to that. Let me start on the modeling side, which is: we need to make data scientists more productive, or rather, we need to democratize the ability to build models. And again, going back to the notion of simulation, there's this merging of machine learning and simulation, where machine learning tells you correlations in factors that influence an answer, whereas the simulation actually lets you play around with those correlations to find the causations. And by merging them, we make it much, much more productive to find the models that are both accurate and optimized for different outcomes. >> So that's the modeling issue. >> Yes. >> Which is great. Now as we think about some of the data management elements, what are we looking at from a data management standpoint? >> Well, and this is something Jim has talked about, but, you know, we had DevOps for joining, essentially merging, the skills of the developers with the operations folks, so that there's joint responsibility of keeping stuff live. >> Well, what about things like digital twins, automated processes; we've talked a little bit about breadth versus depth ITOM. What do you think? Are we going to build out, are all these devices going to reveal themselves, or are we going to have to put in place a capacity for handling all of these things in some consistent, coherent way? >> Oh, okay, in terms of managing. >> In terms of managing. >> Okay. So, digital twins were interesting because they pioneered, or they made well known, a concept called, essentially, a semantic network, or a knowledge graph, which is just a way of abstracting a whole bunch of data models and machine learning models that represent the structure and behavior of a device. In IIoT terminology, it was an industrial device, like a jet engine. But that same construct, the knowledge graph and the digital twin, can be used to describe the application software and the infrastructure, both middleware and hardware, that makes up this increasingly sophisticated network of learning and inferencing applications. And the reason this is important, it sounds arcane, the reason it's important is that we're building now vastly more sophisticated applications over great distances, and the only way we can manage them is to make the administrators far more productive. The state of the art today is alerts on the performance of the applications, and alerts on, essentially, the resource intensity of the infrastructure. By combining that type of monitoring with the digital twin, we can get an essentially much higher fidelity reading on when something goes wrong. We don't get false positives. In other words, if something goes wrong, it's like the fairy tale of the pea underneath the mattress: all the way up, 10 mattresses, you know it's uncomfortable. Here, it'll pinpoint exactly what goes wrong, rather than cascading all sorts of alerts, and that is the key to productivity in managing this new infrastructure. >> Alright guys, so let's go into the action item round.
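Before the action items, a minimal sketch of the digital twin idea George describes above: a graph tying application, middleware, and hardware components together so that a fault can be traced to the component that actually failed, instead of cascading alerts up the stack. The component names and health checks here are illustrative assumptions, not any vendor's twin model.

```python
class TwinNode:
    """One component in a digital twin graph: an app, middleware, or device."""

    def __init__(self, name: str, healthy=lambda: True):
        self.name = name
        self.healthy = healthy  # stand-in for real telemetry checks
        self.depends_on = []    # edges in the knowledge graph

    def root_causes(self) -> list:
        """Walk the dependency graph; report the deepest unhealthy nodes
        instead of alerting on every layer above them."""
        if self.healthy():
            return []
        causes = [c for dep in self.depends_on for c in dep.root_causes()]
        return causes or [self.name]  # no sick dependency means I'm the cause

# Hypothetical twin of a small learning/inferencing application stack.
disk = TwinNode("gateway-disk", healthy=lambda: False)  # simulated fault
db = TwinNode("model-store")
db.depends_on = [disk]
db.healthy = lambda: False    # unhealthy, but only because of the disk
app = TwinNode("inference-app")
app.depends_on = [db]
app.healthy = lambda: False   # unhealthy, but only because of the db

print(app.root_causes())  # ['gateway-disk']: pinpointed, not a cascade
```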
What I'd like to do now is ask each of you for the action item that you think users are going to have to apply or employ to actually get some value, and start down this path of utilizing machine intelligence across these different tiers of data to build more complex, manageable application infrastructures. So, Jim, I'd like to start with you. What's your action item? >> My action item is related to what George just said: model centrally, deploy in a decentralized fashion, and use digital twin technology to do your modeling against device classes in a more coherent way. There's not one model that will fit all of the devices. Use digital twin technology to structure the modeling process, to be able to tune a model to each class of device out there. >> George, action item. >> Okay, recognize that there's a big difference between edge and cloud, as Jim said. But I would elaborate: edge is automated, low latency decision making, extremely data intensive. Recognize that the cloud is not just where you trickle up a little bit of data; this is where you're going to use simulations, with a human in the loop, to augment-- >> System wide, system wide. >> System wide, with a human in the loop, to augment how you evaluate new models. >> Excellent. Neil, action item. >> I would have people start on the right side of the diagram and start to think about what their strategy is and where they fit into these technologies. Be realistic about what they think they can accomplish, and do the homework. >> Alright, great. So let me summarize our meeting this week. This week we talked about the role that the three tiers of data we've described will play in the use of machine intelligence technologies as we build increasingly complex and sophisticated applications. We've talked about the difference between primary, secondary, and tertiary data. Primary data being the immediate experience of sensors, analog being translated into digital, about a particular thing or set of things. Secondary being the data that is then aggregated off of those sensors for business event purposes, so that we can make a business decision, often automatically, down at an edge scenario, as a consequence of signals that we're getting from multiple sensors. And then finally, tertiary data, that looks at a range of gateways and a range of systems, and is considering things at a system wide level, for modeling, simulation, and integration purposes. Now, what's important about this is that it's not just about better understanding the data, and not just understanding the classes of technologies that are used, though that will remain important. For example, we'll see increasingly powerful, low cost, device specific, ARM-like processors pushed into the edge, and a lot of competition at the gateway, or at the secondary data tier. It's also important, however, to think about the nature of the allocations and where the work is going to be performed across those different classifications, especially as we think about machine learning, machine intelligence, and deep learning. Our expectation is that we will see machine learning being used on all three levels, where machine intelligence is being used against all forms of data to perform a variety of different work, but the work that will be performed will be naturally associated and related to the characteristics of the data that's being aggregated at that point. In other words, we won't see simulations, which are characteristic of tertiary data, George, at the edge itself.
We will, however, see edge devices often reduce significant amounts of data from perhaps a video camera or something else to make relatively simple decisions, that may involve complex technologies, to allow a person into a building, for example. So our expectation is that over the next five years we're going to see significant new approaches to applying increasingly complex machine intelligence technologies across all different classes of data, but we're going to see them applied in ways that fit the patterns associated with that data, because it's the patterns that drive the applications. So our overall action item: it's absolutely essential that businesses continue considering and conceptualizing what machine intelligence can do, but be careful about drawing huge generalizations about what the future of machine intelligence is. The first step is to parse out the characteristics of the data, driven by the devices that are going to generate it and the applications that are going to use it, and understand the relationship between the characteristics of that data and the types of machine intelligence work that can be performed. What is likely is that an impedance mismatch between data and expectations of machine intelligence will generate a significant number of failures, that often will put businesses back years in taking full advantage of some of these rich technologies. So, once again, we want to thank you this week for joining us here on the Wikibon weekly research meeting. I want to thank George Gilbert, who is here in the CUBE studio in Palo Alto, and Jim Kobielus and Neil Raden, who were both on the phone. And we want to thank you very much for joining us here today, and we look forward to talking to you again in the future. So this is Peter Burris, from the CUBE's Palo Alto studio. Thanks again for watching Wikibon's weekly research meeting. (electronic music)
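Jim's earlier point about tensor cores comes down to a concrete observation: deep learning inference is dominated by matrix math, which is exactly the operation these chipsets accelerate. A minimal sketch of one dense layer as a matrix multiply, written in plain Python so the arithmetic is visible; the weights are arbitrary illustrative numbers, not a trained model.

```python
def matmul(a, b):
    """Naive matrix multiply: the core operation tensor cores accelerate."""
    rows, inner, cols = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(inner))
             for j in range(cols)] for i in range(rows)]

def relu(m):
    """Typical nonlinearity applied between layers."""
    return [[max(0.0, x) for x in row] for row in m]

# One dense layer: activations (1x3) times weights (3x2), then ReLU.
activations = [[0.5, -1.2, 3.0]]
weights = [[0.2, -0.4],
           [0.7, 0.1],
           [-0.3, 0.9]]
print(relu(matmul(activations, weights)))  # [[0.0, 2.38]]
```

A production framework does exactly this shape of work, millions of times per inference, which is why dense matrix-math hardware matters so much at the edge.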
ENTITIES
Entity | Category | Confidence |
---|---|---|
Jim | PERSON | 0.99+ |
George Gilbert | PERSON | 0.99+ |
Jim Kobielus | PERSON | 0.99+ |
Peter Burris | PERSON | 0.99+ |
Neil | PERSON | 0.99+ |
George | PERSON | 0.99+ |
Neil Raden | PERSON | 0.99+ |
Peter | PERSON | 0.99+ |
David Floyer | PERSON | 0.99+ |
David | PERSON | 0.99+ |
Jim Kovielus | PERSON | 0.99+ |
David Foyer | PERSON | 0.99+ |
October 20, 2017 | DATE | 0.99+ |
10 gigabytes | QUANTITY | 0.99+ |
last week | DATE | 0.99+ |
10 mattresses | QUANTITY | 0.99+ |
10,000 miles | QUANTITY | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
CUBE | ORGANIZATION | 0.99+ |
This week | DATE | 0.99+ |
Invidia | ORGANIZATION | 0.99+ |
Wikibon | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
Palo Alto, California | LOCATION | 0.99+ |
second | QUANTITY | 0.99+ |
two extremes | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
two | QUANTITY | 0.99+ |
Linux | TITLE | 0.99+ |
this week | DATE | 0.99+ |
first step | QUANTITY | 0.99+ |
both | QUANTITY | 0.98+ |
one model | QUANTITY | 0.98+ |
each class | QUANTITY | 0.98+ |
three tiers | QUANTITY | 0.98+ |
each | QUANTITY | 0.98+ |
24 | QUANTITY | 0.98+ |
one | QUANTITY | 0.96+ |
a mile | QUANTITY | 0.96+ |
more than a dozen competitors | QUANTITY | 0.95+ |
IRI | ORGANIZATION | 0.95+ |
Wikibon | PERSON | 0.94+ |
seven | QUANTITY | 0.94+ |
first | QUANTITY | 0.92+ |
CUBE Studio | ORGANIZATION | 0.86+ |
2020 | DATE | 0.85+ |
couple dimensions | QUANTITY | 0.79+ |
Palo Alto Studio | LOCATION | 0.78+ |
single business event | QUANTITY | 0.75+ |
tertiary tier | QUANTITY | 0.74+ |
last several years | DATE | 0.71+ |
years | DATE | 0.7+ |
twin | QUANTITY | 0.64+ |
Wikibon Research Meeting | Systems at the Edge
>> Hi, I'm Peter Burris, and welcome once again to Wikibon's weekly research meeting on theCUBE. (funky electronic music) This week we're going to discuss something that we actually believe is extremely important. And if you listened to the recent press announcements this week from Dell EMC, it's something the industry increasingly is starting to believe is important. And that is: how are we going to build systems that are dependent upon what happens at the edge? The past 10 years have been dominated by the cloud. How are we going to build things in the cloud? How are we going to get data to the cloud? How are we going to integrate things in the cloud? While all those questions remain very relevant, increasingly the technology's becoming available, the systems and the design elements are becoming available, and the expertise is now more easily brought together, so that we can start attacking some extremely complex problems at the edge. A great example of that is the popular notion of what's happening with automated driving. That is a clear example of huge design requirements at the edge. Now, to understand these issues, we have to be able to generalize certain attributes of the differences in the resources, whether they be hardware or software, but increasingly, especially from a digital business transformation standpoint, the differences in the characteristics of the data. And that's what we're going to talk about this week. How are different types of data, data that's generated at the edge, data that's generated elsewhere, going to inform decisions about the classes of infrastructure that we're going to have to build and support as we move forward with this transformation that's taking place in the industry? So to kick it off, Neil Raden, I want to turn to you. What are some of those key data differences, and what, taxonomically, do we regard as what we call primary, secondary, and tertiary data? Neil. >> Well, primary data comes in from sensors. It's a little bit different than anything we've ever seen in terms of doing analytics. Now, I know that operational systems do pick up primary data, credit card transactions, something like that. But scanner data, not scanner data, I mean sensor data, is really designed for analysis. It's not designed for record keeping. And because it's designed for analysis, we have to have a different way of treating it than we do other things. If you think about a data lake, everything that falls into that data lake has come from somewhere else; it's been used for something else. But this data is fresh, and that requires that we really have to treat it carefully. Now, the retention and stewardship of that requires a lot of thought. And I don't think industry has really thought that through a great deal. But look, sensor data is not new; it's been around for a long time. But what's different now is the volume and the lack of latency in it. But any organization that wants to get involved in it really needs to be thinking about what's the business purpose of it. If you're just going into IOT, as we call it generically, to save a few bucks, you might as well not bother. It really is something that will change your organization. Now, what to do with this data is a real problem, because for the most part these sensors are going to be remote, and that means they're going to generate a lot of data. So what do we do with it? Do we reduce it at the site? That's been one suggestion.
There's an issue that any model for reduction could conceivably lose data that may be important somewhere down the line. Can the data be reconstituted through metadata or some sort of reverse algorithms? You know, perhaps. Those are the things we really need to think about. My humble opinion is the software and the devices need to be a single unit. And for the most part, they need to be designed by vendors, not by individual IT shops. >> So David Floyer, let's pick up on that. Software and devices as a single unit, designed more by vendors who have specific domain expertise, turned into solutions and presented to business. What do you think? >> Absolutely, I completely concur with that. The initial attempts at using sensors and connecting to sensors were very simple things, like for example the Nest thermostats. And that's worked very well. But if you look at it over time, the processing for that has gone into the home, into your Apple TV device or your Alexa or whatever it is. So that's coming down, and now it's getting even closer to the edge. In the future, our proposition is that it will get even closer, and then vendors will put together solutions, all types of solutions that are appropriate to the edge, that will be taking not just one sensor but multiple sensors, collecting that data together, just like in the autonomous car, for example, where you take the lidars and the radars and the cameras etcetera. We'll be taking that data, we'll be analyzing it, and we'll be making decisions based on that data at the edge. And vendors are going to play a crucial role in providing these solutions to IT and to the OT and to many other parts. And a large part of the value will be in the expertise that they develop in this area. >> So as a rule of thumb, when I was growing up and learned to drive, I was told always keep five car lengths between you and whatever's in front of you at whatever speed you're traveling. What you just described, David, is that there will be sensors and there will be processing that takes place in that automated car that isn't using that type of rule of thumb but knows something about tire temperature, and therefore the coefficient of friction of the tires, knows something about the brakes, knows what the stopping power needs to be at that speed and therefore what buffer needs to be between it and whatever else is around it. >> Absolutely. >> This is no longer a rule of thumb; this is physics and deep understanding of what it's going to require to stop that car. >> And on top of that, what you'll also want to know, outside from your car, is what type of car is in front of you. Is that an autonomous car, or is that somebody being driven by Peter? In which case, you keep 10 lengths behind. >> But that's not going to be primary data. Is that what we mean by secondary data? >> No, that's still primary, because you're going to set up a connection between you and that other car. That car is going to tell you, I'm primary to you; that's primary data. >> Here's what I mean. Correct, it's primary data, but from a design standpoint what's interesting is that the car in that case is emitting a signal, right? So even though to your car it's primary data, that car is now transmitting a digital signal about its state that's relevant to you, so that you can combine that >> Correct. inside, effectively, a gateway inside your car. >> Yes.
>> So there's external information that is in fact digital coming in, combining with the sensors about what's happening in your car. Have I got that right? >> Absolutely. That to me is a sort of secondary one, and then you've got the tertiary data, which is the big picture about the traffic conditions >> Routes. and the weather and the routes and that sort of thing, which is at that much higher cloud level, yes. So David Vellante, we always have to make sure, as we have these conversations, that we keep the business in view. We've talked a bit about this data, we've talked a little bit about the classes of work that's going to be performed at the different levels. How do we ensure that we sustain the business problem in this conversation? >> So, I mean, I think Wikibon's done some really good work on describing what this sort of data model looks like, from edge devices where you have primary data, to the gateways where you're doing aggregation, to the cloud where maybe the serious modeling occurs. And my assertion would be that the technology to support that elongating and increasingly distributed data model has been maturing for a decade, and the real customer challenge is not just technical; it's really understanding a number of factors, and I'll name some. Where in the distributed data value chain are you going to differentiate? And how does the data that you're capturing in that data pipeline contribute to monetization? What are the data sources, who has access to that data, how do you trust that data, and interpret it, and act on it with confidence? There are significant IP ownership and data protection issues. Who owns the data? Is it the device manufacturer, is it the factory, etcetera. What's the business model that's going to allow you to succeed? What skill sets are required to win? And really importantly, what's the shape of the ecosystem that needs to form to go to market and succeed? These are the things that I think the customers I talk to are really struggling with. >> Now, the one thing I'd add to that, and I want to come back to it, is the question of who is ultimately bonding the solution, because this is going to end up in a court of law. But let's come to this IP issue, George. Let's talk about how local data is going to enter into the flow of analytics, and that question of who owns data, because that's important, and then take up the question of some of the ramifications and liabilities associated with this. >> Okay, well, just on the IP protection and the idea that a vendor has to take sort of whole-product responsibility for the solution: that vendor is probably going to be dealing with multiple competitors when they're enabling, say, self-driving cars or other, you know, edge or smaller devices. The key thing is that a vendor will say, you know, the customer keeps their data and the customer gets the insights from that data. But that data is informing, in the middle, a black box, an analytic black box. It's flowing through it; that's where the insights come out, on the other side. But the data changes that black box as it flows through it. So that is something where, you know, when the vendor provides a whole solution to Mercedes, that solution will be better when they come around to BMW. And the customers should make sure that what BMW gets the benefit of goes back to Mercedes. That's on the IP thing. I want to add one more thing on the tertiary side, which is, when you're close to the edge, it's much more data intensive.
When we've talked about the reduction in data and the real-time analytics, at the tertiary level it's going to be more where time is a bigger factor and you're essentially running a simulation; it's more compute intensive. And so you're doing optimizations of the model, and those flow back as context to inform both the gateway and the edge. >> David Floyer, I want to turn it to you. So we've talked a little bit about the characteristics of the data, a great list from Dave Vellante about some of the business considerations, and we will get very quickly, in a second, to some of the liability issues, 'cause that's going to be important. But take us through what George just said about the tertiary elements. Now we've got all the data laid out; how is that going to map to the classes of devices? And we'll then talk a bit about some of the impacts on the industry. What's it going to look like? >> So if we take the primary edge first, and you take that as a unit, you'll have a number of sensors within that. >> So just to be clear, this is data about the real world that's coming into the system to be processed? >> Yes. So it'll have, for example, cameras. If we take a simple example of making sure that bad people don't get into your site, you'll have a camera there which will do facial recognition. They'll have a badge of some sort, so you'll read that badge; you may want to take their weight; you may want to have an infrared sensor on them so that you can tell their exact distance. So, a whole set of sensors that the vendor will put together for the job of ensuring you don't get bad guys in there. And what you're ensuring is that bad guys don't get in there, that's obviously one, very important, and also that you don't go and- >> Stop good guys from going in. >> Stop good guys from going in there. So those are the two characteristics >> The false-positive problem. >> The false-positives. Those are the two things you're trying to design for- >> At the primary edge. >> At the primary edge.
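A hedged sketch of that design problem follows; the sensor mix, the thresholds, and the fusion rule are all hypothetical, but they show how several sensor inputs collapse into one admit-or-deny decision, with only a tiny event record surviving to travel upstream:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class GateReading:
    face_match_score: float   # facial-recognition camera, 0..1
    badge_id: Optional[str]   # None if no badge was read
    weight_kg: float          # floor scale under the entryway
    ir_distance_m: float      # infrared range sensor

def admit(reading: GateReading, enrolled_weights: dict) -> tuple:
    """Fuse the sensor inputs into a single admit/deny decision.

    The thresholds are illustrative: tightening them lowers the odds
    of letting a bad actor in, loosening them lowers the false
    positives that lock out legitimate staff -- the two failure
    modes being traded off at the primary edge.
    """
    expected = enrolled_weights.get(reading.badge_id)
    ok = (
        reading.badge_id is not None
        and reading.face_match_score >= 0.90
        and expected is not None
        and abs(reading.weight_kg - expected) <= 8.0
        and reading.ir_distance_m <= 1.5   # person actually at the gate
    )
    # Only this small event record goes up to the secondary level;
    # the camera frames and raw sensor traces stay at the edge.
    event = {"badge_id": reading.badge_id, "admitted": ok}
    return ok, event
```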
>> Gateways that are capable of taking that information and start to synthesize it for the business, for other business types of things, and then tertiary systems, true private cloud for example, although we may have very sizable things at the gateway as well, >> There will be true private clouds. that are capable of integrating data in a more broad way. What's the impact in the industry? Are we going to see IT firms roll in and control this sweeping, (man chuckles) as Neil said, trillions of new devices. Is this all going to be intel? Is it all going to be, you know, looking like clients and PCs? >> My strong advice is, that the devices themselves will be done by extreme specialists in those areas that they will need a set of very deep technology understanding of the devices themselves, the senses themselves, the AI software relevant to that. Those are the people that are going to make money in that area. And you're much better off partnering with those people and letting them solve the problems, and you solve, as Dave said earlier, the ones that can differentiate you within your processes, within your business. So yes, leave that to other people is my strong advice. And from an IT's point of view, just don't do it yourself. >> Well the gateway's, sound like you're suggesting, the gateway is where that boundary's going to be. >> Yes. That's where the boundary is. >> And the IT technologies may increasingly go down to the edge, but it's not clear that the IT vendor expertise goes down to the edge >> Correct. at the same degree. >> Correct. >> So, Neil let's come back to you. When we think about this arrangement of data, you know, how the use cases are going to play out, and where the vendors are, we still have to address this fundamental challenge that Dave Vellante bought up. Who's going to end up being responsible for this? Now you've worked in insurance, what does that mean from an overall business standpoint? What kinds of failure weights are we going to accommodate? How is this going to play out? What do you think? >> Well, I'd like to point out that I worked in insurance 30 years ago. (men chuckling) >> Male Voice: I didn't want to date ya Neil. (men chuckling) >> Yeah the old reliable life insurance company. Anyway, one of the things David was just discussing sounded a lot to me like complex event processing. And I'm wondering where the logical location event needs to be, because it needs some prior data to do CEP, you have to have something to compare it against. But if you're pushing it all back to the tertiary level, there's going to be a lot of latency. And the whole idea was CEP was, you know, right now. So, that I'm a little curious about. But I'm sorry, what was your question? >> Well no, let's address that. So CEP David, I agree. But I don't want to turn this into a general discussion and CEP. It's got its own set of issues. >> It's clear there have got to be complex models created. And those are going to be created in a large environment, almost certainly in a tertiary type environment. And those are going to be created by the vendors of those particular problem solvers at the primary edge. To a large extent, they're going to provide solutions in that area. And they're going to have to update those. And so, they are going to have to have lots and lots of test data for themselves and maybe some companies will provide test data if it's convenient for those, for a fee or whatever it is, to those vendors. 
But the primary model itself is going to be in the tertiary level, and that's going to be pushed down to the primary level itself. >> I'm going to make an assertion here that the, the way I think about this Neil is that the data coming off at the primary level is going to be the sensor data, the sensor said it was good. Then that is recorded as an event, we let somebody in the building. And that's going to be a key feature of what happens at the secondary level. I think a lot of complex processing is likely to end up at that secondary level. >> Absolutely. >> Then the data gets pushed up to the tertiary level and it becomes part of an overall social understanding of the business, it's behavior data. So increasingly, what did we do as a consequence of letting this person in the building? Oh we tried to stop him. That's going to be more of the behavioral data that ends up at the tertiary level, will still do complex event processing there. It's going to be interesting to see whether or not we end up with CEP directly in the sensor tower. Might under certain circumstances, that's a cost question though. So let me now turn it in the last few minutes here Neil back to you. At the end of the day, we've seen for years the question of how much security is enough security? And businesses said, "Oh I want to be 100% secure." And sometimes see-so said "We got that. You gave me the money, we've now made you 100% secure." But we know it's not true. Same thing is going to exist here. How much fidelity is enough fidelity down at the edge? How do we ensure that business decisions can be translated into design decisions that lead to an appropriate and optimized overall approach to the way the system operates? From a business standpoint back, what types of conversations are going to take place in the boardroom that the rest of the organization's going to have to translate into design decisions? >> You know, boy, bad actors are going to be bad actors. I don't think you can do anything to eliminate it. The best you can do is use the best processes and the best techniques to keep it from happening and hope for the best. I'm sorry, that's all I can really say about it. >> There's quite a lot of work going on at the moment from Arm, in particular. They've got a security device image ability. So, there's a lot of work going on in that very space. It's obviously interesting from an IT perspective is how do you link the different security systems, both from an Arm point of view and then from a X86 as you go further up the chain. How are they going to be controlled and how's that going to be managed? That's going to be a big IT issue. >> Yeah, I think the transmission is the weak point. >> Male Voice: What do you mean by that Neil? >> Well the data has to flow across networks, that would be the easiest place for someone to intercept it and, you know, and do something nefarious. >> Right yeah, so that's purely in a security thing. I was trying to use that as an analogy. So, at the end of the day, the business is going to have to decide how much data do we have to capture off the edge to ensure that we have the kinds of models we want, so that we can realize the specificity of actions and behaviors that we want in our business? That's partly a technology question, partly a cost question. Different sensors are able to operate at different speeds for example. But ultimately, we have to be able to bring those, that list of decisions or business issues that Dave Vellante raised, down to some of the design questions. 
But it's not going to be throw a $300 micro processor everything. There's going to be very, very concrete decisions that have to take place. So, George do you agree with that? >> Yes, two issues though. One, there's the existing devices that can't get re-instrumented, that they already have their software, hardware stack. >> There's a legacy in place? >> Yes. But there's another thing which is, some of the most advanced research that's been going on that produced much of today's distributed computing and big data infrastructure, like the Berkeley Analytics lab, and say their contributions spark in related technologies. They're saying we have to throw everything out and start over for secure real-time systems. That you have to build from hardware all the way up. In other words, you're starting from the sand to re-think something that's secure and real-time that you can't layer it on. >> So very quickly David, that's a great point George. Building on what George has said very quickly, the primary responsibility for bonding the behavior or the attributes of these devices are going to be with the vendor. >> Of creating the solution? >> Correct. >> That's going to be the primary responsibility. But obviously from an IT point of view, you need to make sure that that device is doing the job that's important for your business, not too much, not too little, is doing that job, and that you are able to collect the necessary data from it that is going to be of value to you. So that's a question of qualification of the devices themselves. >> Alright so, David Vellante, Neil Raden, David Floyer, George Gilbert, action item round. I want one action item from you guys from this conversation. Keep it quick, keep it short, keep it to the point. David Floyer, what's your action item? >> So my action item is don't go into areas that you don't need to. You do not need to become experts, IT in general does not need to become experts at the edge itself. Rely on partners, rely on vendors to do that unless of course you're one of those vendors. In which case, you'll need very, very deep knowledge. >> Or you choose that that's where you're value stream your differentiations is going to be which means you just became one of those values. >> Yes, exactly. >> George Gilbert. >> I would build on that and I would say that if you look at the skills required to build these full stack solutions, there's data science, there's application development, there's the analytics. Very few of those solutions are going to have skills all in one company. So the go-to market model for building these is going to be something that, at least at this point in time, we're going to have to look to like combinations like IBM working with sort of supply chain masters. >> Good. Neil Raden, action item. >> The question is not necessarily one of technology because that's going to evolve. But I think as an organization, you need to look at it from this end which is, would employing this create a new business opportunity for us? Something we're not already doing. Or number two, change our operations in some significant way. Or number three, you know, the old red queen thing. We have to do it to keep up with the competition. >> Male Voice: David Vellante, action item. >> Okay well look, at the risk of sounding trite, you got to start the planning process from the customer on in, and so often people don't. 
You got to understand where you're going to add value for customers and constructing and external and internal ecosystem that can really juice that value creation. >> Alright, fantastic guys. So let me quickly summarize. This week on the Wikibon Friday research meeting in the cube, we discussed a new way of thinking about data characteristics that will inform system design and a business value that's created. We observe that data is not all the same when we think about these very complex, highly distributed, and decentralized systems that we're going to build. That there's a difference between primary data, secondary data, and tertiary data. Primary data is data that is generated from real world events or measurements and then turned into signals that can be acted upon very proximate to that real world set of conditions. A lot of sensors will be there, a lot of processing will be moved down there, and a lot of actuators and actions will take place without referencing other locations within the cloud. However, we will see circumstances where the events that are taken, or the decisions that are taken on those vents, will be captured in some sort of secondary tier that will then record something about the characteristics of the actions and events that were taken, and then summarized and then pushed up to a tertiary tier where that data can then be further integrated in other attributes and elements of the business. The technology to do this is broadly available but not universally successfully applied. We expect to see a lot of new combinations of edge-related device to work with primary data. That is going to be a combination of currently successful firms in the OT or operational technology world, most likely in partnership with a lot of other vendors that have demonstrated significant expertise and understanding the problems, especially the business problems, associated with the fidelity of what happens at the edge. The IT industry is going to approach very aggressively and very close to this at that secondary level, through gateways and other types of technologies. And even though we'll see IT technology continue to move down to the primary level, it's not clear exactly how vendors will be able to follow that. More likely, we'll see the adoption of IT approaches to doing things at the primary level by vendors that have the main expertise in how that level works. We will however see significantly interesting true private cloud and public cloud data end up from the tertiary level end up with a whole new sets of systems that are going to be very important from an administration and management standpoint because they have to work within the context of the fidelity of this overall system together. The final point we want to make is that these are not technology problems by themselves. While significant technology problems are on the horizon about how we think about handling this distribution of data, managing it appropriately, our ability, ultimately, to present the appropriate authority at different levels within that distributive fabric to ensure the proper working condition in a way that nonetheless we can recreate if we need to. But these are, at bottom, fundamentally business problems. They're business problems related to who owns the intellectual property that's being created, they're business problem related to what level in that stack do I want to show my differentiation to my customers and they're business problems from a liability and legal standpoint as well. 
The action item is, all firms will in one form or another be impacted by the emergence of the edge as a dominate design as consideration for their infrastructure but also for their business. Three ways, or a taxonomy that looks at three classes of data, primary, secondary, and tertiary, will help businesses sort out who's responsible, what partnerships I need to put in place, what technologies and I going to employ, and very importantly, what overall business exposure I'm going to accommodate as I think ultimately about the nature of the processing and business promises that I'm making to my marketplace. Once again, this has been the Wikibon Friday research meeting here on theCUBE. I want to thank all the analysts who were here today, but especially thank you for paying attention and working with us. And by all means, let's hear those comments back about how we're doing and what you think about this important question of different classes of data driven by different needs of the edge. (funky electronic music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave | PERSON | 0.99+ |
David Floyer | PERSON | 0.99+ |
Neil | PERSON | 0.99+ |
Neil Raden | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
David Vellante | PERSON | 0.99+ |
David | PERSON | 0.99+ |
George | PERSON | 0.99+ |
George Gilbert | PERSON | 0.99+ |
Peter Burris | PERSON | 0.99+ |
Mercedes | ORGANIZATION | 0.99+ |
BMW | ORGANIZATION | 0.99+ |
100% | QUANTITY | 0.99+ |
$300 | QUANTITY | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
10 lengths | QUANTITY | 0.99+ |
two characteristics | QUANTITY | 0.99+ |
Berkeley Analytics | ORGANIZATION | 0.99+ |
next week | DATE | 0.99+ |
4,000 people | QUANTITY | 0.99+ |
two issues | QUANTITY | 0.99+ |
Peter | PERSON | 0.99+ |
today | DATE | 0.99+ |
each level | QUANTITY | 0.99+ |
One | QUANTITY | 0.99+ |
one suggestion | QUANTITY | 0.98+ |
Three ways | QUANTITY | 0.98+ |
five car | QUANTITY | 0.98+ |
both | QUANTITY | 0.98+ |
This week | DATE | 0.97+ |
two things | QUANTITY | 0.97+ |
this week | DATE | 0.97+ |
30 years ago | DATE | 0.97+ |
one | QUANTITY | 0.97+ |
Wikibon | ORGANIZATION | 0.97+ |
Wikibons | ORGANIZATION | 0.97+ |
trillions of new devices | QUANTITY | 0.97+ |
single unit | QUANTITY | 0.97+ |
one sensor | QUANTITY | 0.96+ |
one form | QUANTITY | 0.96+ |
first | QUANTITY | 0.94+ |
one company | QUANTITY | 0.94+ |
Apple TV | COMMERCIAL_ITEM | 0.92+ |
one action item | QUANTITY | 0.92+ |
three classes | QUANTITY | 0.91+ |
intel | ORGANIZATION | 0.89+ |
Wikibon | EVENT | 0.86+ |
one more | QUANTITY | 0.79+ |
second | QUANTITY | 0.76+ |
past 10 years | DATE | 0.75+ |
CEP | ORGANIZATION | 0.75+ |
Dell EMC | ORGANIZATION | 0.73+ |
CEP | TITLE | 0.68+ |
Arm | ORGANIZATION | 0.65+ |
Wikibon Friday | EVENT | 0.64+ |
Alexa | TITLE | 0.64+ |
years | QUANTITY | 0.62+ |
few bucks | QUANTITY | 0.6+ |
Wikibon Analyst Meeting | Dell EMC Analyst Summit
>> Welcome to another edition of Wikibon's Weekly Research Meeting on theCUBE. (techno music) I'm Peter Burris, and once again I'm joined in studio by George Gilbert and David Floyer. On the phone we have Dave Vellante, Stu Miniman, Ralph Finos, and Neil Raden. And this week we're going to be visiting Dell EMC's Analyst Summit. And we thought we'd take some time today to go deeper into the transition that Dell and EMC have been on in the past few years, touching upon some of the value that they've been creating for customers and addressing some of the things that we think they're going to have to do to continue on the path that they're on and continue to deliver value to the marketplace. Now, to look back over the course of the past year, it was about a year ago that the transaction actually closed. And in the ensuing year, there's been a fair amount of change. We've seen some interesting moves by Dell to bring the companies together, and a fair amount of conversation about how bigger is better. And at the most recent VMworld, we saw a lot of great news out of VMworld, VMware in particular working more closely with Amazon, or AWS, and others. So we've seen some very positive things happen in the course of the past year. But there are still some crucial questions to be addressed. And to kick us off, Dave Vellante, where are we one year in, and what are we expecting to hear this week?
The other point I want to make is that without VMware, in my view anyway, the combination of these companies would not be nearly as interesting. In fact, it would be quite boring. So the core of these companies, you know, have faced a lot of challenges. But they do have VMware to leverage. And I think the challenge that customers really need to think about is how does this company continue to innovate now that they can't really do M and A? If you look at EMC, for years, they would spend money on R and D and make incremental improvements to its product lines and then fill the gaps with M and A. And there're many, many examples of that, Isilon, Data Domain, XtremIO, and dozens of others. That kept EMC competitive. So how does Dell continue that strength? It spends about four and a half billion a year on R and D, and according to Wikibon's figures, that's about 6% of revenue. If you compare that with other companies, Oracle, Amazon, they're into the 12%. Google's mid-teens. Microsoft, obviously to 12, 13%. Cisco's up there. EMC itself was spending 12% on R and D. So IBM's only about 6%, but remember IBM, about two thirds of the company is services. It's not R and D heavy. So Dell has got to cut costs. It's a must. And what implications does that have on the service levels that customers have grown to expect, and what's the implications on Dell's roadmap? I think we would posit that a lot of the cash cows are going to get funded in a way that allows them to have a managed decline in that business. And it's likely that customers are going to see reduced roadmap functions going forward. So a key challenge that I see for Dell EMC is growth. The strength is really VMware, and the leverage of the VMware and their own install base I think gives Dell EMC the ability to keep pace with its competitors because it's got kind of the inside baseball there. It's got a little bit of supply chain leverage, and of course its sales force and its channels are a definite advantage for this company. But it's got a lot of weaknesses and challenges. Complexity of the portfolio, it's got a big debt load that hamstrings its ability to do M and A. I think services is actually a big opportunity for this company. Servicing its large install base. And I think the key threat is cloud and China. I think China, with its low-cost structure, made a deal like this inevitable. So I come back to the point of Michael Dell's got to cut in order to stay competitive. >> Peter: Alright, so one of the, sorry- >> Dave: Next week, hear a lot about sort of innovation strategies, which are going to relate to the edge. Dell EMC has not announced an edge strategy. It needs to. It's behind HPE in that regard, one its major competitors. And it's got to get into the game. And it's going to be really interesting to see how they are leveraging data to participate in that IOT business. >> Great summary, Dave. So you mentioned that one of the key challenges that virtually every company faces is how do they reposition themselves in a world in which the infrastructure platform, foundation, is going to be more cloud-oriented. Stu Miniman, why don't you take us through, very quickly, where Dell EMC is relative to the cloud? >> Stu: Yeah, great question, Peter. And just to set that up, it's important to talk about one of the key initiatives from Dell and EMC coming together, one of the synergies that Michael Dell has highlighted is really around the move from converged infrastructure to hyper converged infrastructure. 
And this is also the foundational layer that Dell EMC uses today for a lot of their cloud solutions. So EMC did a great job with the first wave of converged infrastructure through partnering with Cisco. They created the Vblock, which is now VxBlock, which is now a multi-billion dollar revenue stream. And Dell did a really good job of jumping on early with the hyperconverged infrastructure trend. So I'd written research years ago that, not only through partnerships but through OEM deals, if you looked at most of the solutions that were being sold on the market, the underlying server for them was Dell. And that was even before the EMC acquisition. Once they acquired EMC, they really got kind of control, if you will, of the VMware VSAN business, which is a very significant player. They have an OEM relationship with Nutanix, who's doing quite well in the space, and they put together their own full-stack solution, which takes Dell's hardware, VMware VSAN, and the go-to-market processes of what used to be VCE; that's VxRail, which is doing quite well from a revenue and a growth standpoint. And the reason I set this all up to talk about cloud is that if you look at Dell's positioning, a lot of their cloud starts at that foundational infrastructure level. They have all of these enterprise hybrid clouds and different solutions that they've been offering for a few years. And underneath those, really, it is a simplified infrastructure hardware offering. So whether that is the traditional VCE converged infrastructure solutions or the newer hyperconverged infrastructure solutions, that's the base level, and then there's software that wraps on top of it. So they've done a decent amount of revenue. The concern I have is that, Peter, as you laid out, it's very much a software world. We've been talking a lot at Wikibon about the multi-cloud nature of what's going on. And while Dell and the Dell family have a very strong position in the on-premises market, that's really their center of strength: hardware and the enterprise's data center. And the threat is public cloud and multi-cloud. And if it centers around hardware, especially when you dig down and say, "okay, I want to sell more servers," which is one of the primary drivers that Michael wants to have with his whole family of solutions, how much can you really live across these various environments? Of course, they have partnerships with Microsoft. There's the VMware partnership with Amazon, which is interesting, and how they partner with the likes of Google and others can be looked at. But that center of strength is on premises, and therefore they're not really living heavily in the public and multi-cloud world, unless you look at Pivotal. So Pivotal's the software play, and that's where they're going to say the big push is, but there are these massive shifts of the large install bases of EMC, Dell, and VMware, compared to the public clouds that are doing the land grabs. So this is where it's really interesting to look at. And the announcement that we're interested to look at is how IoT and edge fit into all of this. So David Floyer and you, Peter, have research about how- >> Peter: Yeah, well, we'll get to that. >> Stu: There's a lot of nuance there. >> We'll get to that in a second, Stu. But one of the things I wanted to mention to David Floyer is that certainly in the case of Dell, they have been a major player in the Intel ecosystem.
And as we think about what's going to happen over the course of the next couple of years, what's going to happen with Intel? Is it going to continue to dominate? And what's that going to mean for Dell? >> Sure. Dell's success, I mean, what Stu has been talking about is the importance of volume for Dell, being a volume player. And obviously, when they're looking at Intel, the PC is a declining market, and ARM is doing incredibly well in mobile and other marketplaces. And Dell's success is essentially tied to Intel. So the question to ask is, if Intel starts to lose market share to ARM and maybe even IBM, what is the impact of that on Dell? And in particular, what is the impact on the edge? So if you look at the edge, there are two primary parts. We put forward that there are two parts of the edge. There's the primary data, which is coming from the sensors themselves, from the cameras and other things like that. So there's the primary edge, and there's the secondary edge, which is after that data has been processed. And if you think about the primary edge, AI and DL go to the primary edge because that's where the data is coming in, and you want the highest fidelity of data. So you want to do the processing as close as possible to that. You're seeing examples in autonomous cars, you're seeing it in security cameras: all of that processing is going to much cheaper chips, very, very close to the data itself. What that means, or could mean, is that most of that IoT could go to vendors other than Intel, to the ARM vendors. And if you look at that market, it's going to be very specialized by the particular industry and the particular problem it's trying to solve. So it's likely that non-IT vendors are going to be in that business, and you're likely to be selling to OT and not to IT. So all of those are challenges to Dell in attacking the edge. They can win the secondary edge, which is the compressed data, initially compressing it 1,000 to one, probably going to a million to one, turning the data coming from the sensors into much higher value data but much, much smaller amounts, both on the compute side and on the storage side. So if that bifurcation happens at the edge, the size of the marketplace is going to be very considerably reduced for Intel. And Dell has, in my view, a strategic decision to make about whether they get into being part of that ARM ecosystem for the edge. There's a strong argument saying that they would need to do that.
They're going to provide cues about where the industry's going to go and who's going to get a chance to provide the tooling for them. So what's our take right now, where Dell is, Dell EMC is relative to some of these technologies? >> Okay, so that was a good lead in for my take on all the great research David Floyer's done, which is when we go through big advances in hardware, typically relative price performance changes between CPU, memory, storage, networking. When we see big relative changes between those, then there's an opportunity for the software to be re-architected significantly. So in this case, what we call unigrid, what David's called unigrid previously is the ability to build scale-out, extremely high-performance clusters to the point where we don't have to bottleneck on shared storage like a SAN anymore. In other words, we can treat the private memory for each node as if it were storage, direct-attached storage, but it is now so fast in getting between nodes and to the memory in a node that for all intents and purposes, it can perform as if you had a shared storage small cluster before. Only now this can scale out to hundreds, perhaps thousands, of nodes. The significance of that is we are in an era of big data and big analytics. And so the issue here is can Dell sort of work with the most advanced software vendors who are trying to push the envelope to build much larger-scale data management software than they've been able to. Now, Dell has an upward, sort of an uphill climb to master the cloud vendors. They build their own infrastructure hardware. But they've done pools of GPUs, for instance, to accelerate machine learning training. Dell could work with these data management vendors to get pools of this scale-out hardware in the clouds to take advantage of the NoSQL databases, the NewSQL databases. There's an opportunity to leapfrog. What we found out at Oracle, at their user conference this week was even though they're building similar hardware, their database is not yet ready to take advantage of it. So there is an opportunity for Dell to start making inroads in the cloud where their generic infrastructure wouldn't. Now, one more comment on the edge, I know David was saying on the sort of edge device, that's looking more and more like it doesn't have to be Intel-compatible. But if you go to the edge gateway, the thing that bridges OT and IT, that's probably going to be their best opportunity on the edge. The challenge, though, is it's not clear how easy it will be in a low-touch sort of go-to-market model that Dell is accustomed to because like they discovered in the late 90s, it cost $6,000 per year per PC to support. And no one believed that number until Intel did a study on itself and verified it. The protocols from all the sensors on the OT side are so horribly complex and legacy-oriented that even the big auto manufacturers keep track of the different ones on a spreadsheet. So mapping the IT gateway server to all the OT edge devices may turn out to be horribly complex for a few years. >> Oh, it's not a question of may. It is going to be horribly complex for the next few years. (laughing) I don't think there's any question about that. But look, here's what I want to do. I want to ask one more question. And I'm going to go do a round table and ask everybody to give me what the opportunity is and what the threat is. 
But before I do that, the one thing we haven't discussed, and Dave Vellante, I'm going to throw it over to you, is we've looked at the past of Dell talks a lot about the advantages of its size and the economies of scale that it gets. And Dell's not in the semiconductor business or at least not in a big way. And that's one place where you absolutely do get economies of scale. They got VMware in the system software business, which is an important point. So there may be some economies there. But in manufacturing and assembly, as you said earlier, Dave, that is all under consideration when we think about where the real cost efficiencies are going to be. One of the key places may be in the overall engagement model. The ability to bring a broad portfolio, package it up, and make it available to a customer with the appropriate set of services, and I think this is why you said services is still an opportunity. But what does it mean to get to the Dell EMC overall engagement model as Dell finds or looks to find ways to cut costs, to continue to pay down its debt and show a better income statement? >> Dave: So let me take the customer view. I mean, I think you're right. This whole end to end narrative that you hear from Dell, for years you heard it from HP, I don't think it really makes that much of a difference. There is some supply chain leverage, no question. So you can get somewhat cheaper components, you could probably get supplies, which are very tight right now. So there are definitely some tactical advantages for customers, but I think your point is right on. The real leverage is the engagement model. And the interesting thing from I think our standpoint is that you've got a very high-touch EMC direct sales force, and that's got to expand into the channel. Now, EMC's done a pretty good job with the channel over the last, you know, half a decade. Dell doesn't have as good a reputation there. Its channel partners are many more but perhaps not as sophisticated. So I think one of the things to watch is the channel transformation and then how Dell EMC brings its services and its packages to the market. I think that's very, very important for customers in terms of reducing a lot of the complexity in the Dell EMC portfolio, which just doubled in complexity. So I think that is something that is going to be a critical indicator. It's an opportunity, and at the same time, if they blow it, it's a big threat to this organization. I think it's one of the most important things, especially, as you pointed out, in the context of cost cutting. If they lose sight of the importance of the customer, they could hit some bumps in the road and open it up for competition to come in and swoop some of their business. I don't think they will. I think Michael Dell is very focused on the customer, and EMC's culture has always been that way. So I would bet on them succeeding there, but it's not a trivial task. >> Yeah, I would agree with you. In fact, one of the statements that we heard from Michael Dell and other executives at Dell EMC at VMworld, over and over and over again, on theCUBE and elsewhere, was this notion of open with an opinion. And in many respects, the opinion is not just something that they say. It's something that they do through their packaging and how they put their technologies into the marketplace. Okay, guys, rapid fire, really, really, really short answers. Let's start with the threats. And then we'll close with the positive note on the strengths. 
David Floyer, really quick, biggest threat that we're looking at next week? >> The biggest threat is the evolution of ARM processes, and if they keep to an Intel-only strategy, that to me is their biggest threat. Those could offer a competition in both mobile, increasing percentages of mobile, and also also in the IOT and other processor areas. >> Alright, George Gilbert, biggest threat? >> Okay, two, summarizing the comments I made before, one, they may not be able to get the cloud vendors to adopt pools of their scale-out infrastructure because the software companies may not be ready to take advantage of it yet. So that's cloud side. >> No, you just get one. Dave Vellante. >> Dave: Interest rates. (laughing) >> Peter: Excellent. Stu Miniman. >> Stu: Software. >> Peter: Okay, come on Stu. Give me an area. >> Stu: Dell's a hardware company! Everything George said, there's no way the cloud guys are going to adopt Dell EMC's infrastructure gear. This is a software play. Dell's been cutting their software assets, and I'm really worried that I'm going to see an edge box, you know, that doesn't have the intelligence that they need to put the intelligence that they say that they're going to put in. >> So, specifically, it's software that's capable of running the edge centers, so to speak. Ralph Finos. >> Ralph: Yeah, I think the hardware race to the bottom. That's a big part of their business, and I think that's a challenge when you're looking at going head on head, with HPE especially. >> Peter: Neil Raden, Neil Raden. >> Neil: Private managed cloud. >> Or what we call true private cloud, which goes back to what Stu said, related to the software and whether or not it ends up being manageable. Okay, threats. David Floyer. >> You mean? >> Or I mean opportunities, strengths. >> Opportunities, yes. The opportunity is being by far the biggest IT place out there, and the opportunity to suck up other customers inside that. So that's a big opportunity to me. They can continue to grow by acquisition. Even companies the size of IBM might be future opportunities. >> George Gilbert. >> On the opposite side of what I said earlier, they really could work with the data management vendors because we really do need scale-out infrastructure. And the cloud vendors so far have not spec'd any or built any. And at the same time, they could- >> Just one, George. (laughing) Stu Miniman. >> Dave: Muted. >> Peter: Dave Vellante. >> Dave: I would say one of the biggest opportunities is 500,000 VMware customers. They've got the server piece, the networking piece kind of, and storage. And combine that with their services prowess, I think it's a huge opportunity for them. >> Peter: Stu, you there? Ralph Finos. >> Stu: Sorry. >> Peter: Okay, there you go. >> Stu: Dave stole mine, but it's not the VMware install base, it's really the Dell EMC install base, and those customers that they can continue moving along that journey. >> Peter: Ralph Finos. >> Ralph: Yeah, highly successful software platform that's going to be great. >> Peter: Neil Raden. >> Neil: Too big to fail. >> Alright, I'm going to give you my bottom lines here, then. So this week we discussed Dell EMC and our expectations for the Analyst Summit and our observations on what Dell has to say. But very quickly, we observed that Dell EMC is a financial play that's likely to make a number of people a lot of money, which by the way has cultural implications because that has to be spread around Dell EMC to the employee base. 
Otherwise some of the challenges associated with cost cutting on the horizon may be something of an issue. So the whole cultural challenges faced by this merger are not insignificant, even as the financial engineering that's going on seems to be going quite well. Our observation is that the cloud world ultimately is being driven by software and the ability to do software, with the other observation that the traditional hardware plays tied back to Intel will by themselves not be enough to guarantee success in the multitude of different cloud options that will become available, or opportunities that will become available to a wide array of companies. We do believe the true private cloud will remain crucially important, and we expect that Dell EMC will be a major player there. But we are concerned about how Dell is going to evolve as a, or Dell EMC is going to evolve as a player at the edge and the degree to which they will be able to enhance their strategy by extending relationships to other sources of hardware and components and technology, including, crucially, the technologies associated with analytics. We went through a range of different threats. If we identify two that are especially interesting, one, interest rates. If the interest rates go up, making Dell's debt more expensive, that's going to lead to some strategic changes. The second one, software. This is a software play. Dell has to demonstrate that it can, through its 6% of R and D, generate a platform that's capable of fully automating or increasing the degree to which Dell EMC technologies can be automated. In many conversations we've had with CIOs, they've been very clear. One of the key criteria for the future choices of suppliers will be the degree to which that supplier fits into their automation strategy. Dell's got a lot of work to do there. On the big opportunities side, the number one from most of us has been VMware and the VMware install base. Huge opportunity that presents a pathway for a lot of customers to get to the cloud that cannot be discounted. The second opportunity that we think is very important that I'll put out there is that Dell EMC still has a lot of customers with a lot of questions about how digital transformation's going to work. And if Dell EMC can establish itself as a thought leader in the relationship between business, digital business, and technology and bring the right technology set, including software but also packaging of other technologies, to those customers in a true private cloud format, then Dell has the potential to bias the marketplace to their platform even as the marketplace chooses in an increasingly rich set of mainly SaaS but public cloud options. Thanks very much, and we look forward to speaking with you next week on the Wikibon Weekly Research Meeting here on theCUBE. (techno music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
David | PERSON | 0.99+ |
David Floyer | PERSON | 0.99+ |
George Gilbert | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
George | PERSON | 0.99+ |
Neil Raden | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
EMC | ORGANIZATION | 0.99+ |
Dave | PERSON | 0.99+ |
Michael | PERSON | 0.99+ |
Peter Burris | PERSON | 0.99+ |
Ralph Finos | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Cisco | ORGANIZATION | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
Ralph | PERSON | 0.99+ |
Peter | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Neil | PERSON | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Michael Dell | PERSON | 0.99+ |
David Foyer | PERSON | 0.99+ |
Nutanix | ORGANIZATION | 0.99+ |
Monday | DATE | 0.99+ |
12% | QUANTITY | 0.99+ |
HP | ORGANIZATION | 0.99+ |
hundreds | QUANTITY | 0.99+ |
next week | DATE | 0.99+ |
Wikibon Presents: Software is Eating the Edge | The Entangling of Big Data and IIoT
>> So as folks make their way over from Javits I'm going to give you the least interesting part of the evening and that's my segment in which I welcome you here, introduce myself, lay out what we're going to do for the next couple of hours. So first off, thank you very much for coming. As all of you know, Wikibon is a part of SiliconANGLE, which also includes theCUBE, so if you look around, this is what we have been doing for the past couple of days here on theCUBE. We've been inviting some significant thought leaders from over at the show and, in incredibly expensive limousines, driving them up the street to come on to theCUBE and spend time with us and talk about some of the things that are happening in the industry today that are especially important. We tore it down, and we're having this party tonight. So we want to thank you very much for coming and look forward to having more conversations with all of you. Now what are we going to talk about? Well Wikibon is the research arm of SiliconANGLE. So we take data that comes out of theCUBE and other places and we incorporate it into our research. And we work very closely with large end users and large technology companies regarding how to make better decisions in this incredibly complex, incredibly important transformative world of digital business. What we're going to talk about tonight, and I've got a couple of my analysts assembled, and we're also going to have a panel, is this notion of software is eating the Edge. Now most of you have probably heard Marc Andreessen, the venture capitalist and developer, original developer of Netscape many years ago, talk about how software's eating the world. Well, if software is truly going to eat the world, it's going to take the big chunks, the big bites, at the Edge. That's where the actual action's going to be. And what we want to talk about specifically is the entangling of the internet of things, or the industrial internet of things, IIoT, with analytics. So that's what we're going to talk about over the course of the next couple of hours. To do that we're going to, I've already blown the schedule, that's on me. But to do that I'm going to spend a couple minutes talking about what we regard as the essential digital business capabilities, which include analytics and Big Data, and include IIoT, and we'll explain, at least from our position, why those two things come together the way that they do. But I'm going to ask the august and revered Neil Raden, Wikibon analyst, to come on up and talk about harvesting value at the Edge. 'Cause there are some, not now Neil, when we're done, when I'm done. So I'm going to ask Neil to come on up and we'll talk, he's going to talk about harvesting value at the Edge. And then Jim Kobielus will follow up with him, another Wikibon analyst, and he'll talk specifically about how we're going to take that combination of analytics and Edge and turn it into the new types of systems and software that are going to sustain this significant transformation that's going on. And then after that, I'm going to ask Neil and Jim to come back up, invite some other folks up, and we're going to run a panel to talk about some of these issues and do a real question and answer. So the goal here, before we break for drinks, is to create a community feeling within the room. That includes smart people up here and smart people in the audience having a conversation, ultimately, about some of these significant changes, so please participate, and we look forward to talking about the rest of it. 
All right, let's get going! What is digital business? One of the nice things about being an analyst is that you can reach back on people who were significantly smarter than you and build your points of view on the shoulders of those giants including Peter Drucker. Many years ago Peter Drucker made the observation that the purpose of business is to create and keep a customer. Not better shareholder value, not anything else. It is about creating and keeping your customer. Now you can argue with that, at the end of the day, if you don't have customers, you don't have a business. Now the observation that we've made, what we've added to that is that we've made the observation that the difference between business and digital business essentially is one thing. That's data. A digital business uses data to differentially create and keep customers. That's the only difference. If you think about the difference between taxi cab companies here in New York City, every cab that I've been in in the last three days has bothered me about Uber. The reason, the difference between Uber and a taxi cab company is data. That's the primary difference. Uber uses data as an asset. And we think this is the fundamental feature of digital business that everybody has to pay attention to. How is a business going to use data as an asset? Is the business using data as an asset? Is a business driving its engagement with customers, the role of its product et cetera using data? And if they are, they are becoming a more digital business. Now when you think about that, what we're really talking about is how are they going to put data to work? How are they going to take their customer data and their operational data and their financial data and any other kind of data and ultimately turn that into superior engagement or improved customer experience or more agile operations or increased automation? Those are the kinds of outcomes that we're talking about. But it is about putting data to work. That's fundamentally what we're trying to do within a digital business. Now that leads to an observation about the crucial strategic business capabilities that every business that aspires to be more digital or to be digital has to put in place. And I want to be clear. When I say strategic capabilities I mean something specific. When you talk about, for example technology architecture or information architecture there is this notion of what capabilities does your business need? Your business needs capabilities to pursue and achieve its mission. And in the digital business these are the capabilities that are now additive to this core question, ultimately of whether or not the company is a digital business. What are the three capabilities? One, you have to capture data. Not just do a good job of it, but better than your competition. You have to capture data better than your competition. In a way that is ultimately less intrusive on your markets and on your customers. That's in many respects, one of the first priorities of the internet of things and people. The idea of using sensors and related technologies to capture more data. Once you capture that data you have to turn it into value. You have to do something with it that creates business value so you can do a better job of engaging your markets and serving your customers. And that essentially is what we regard as the basis of Big Data. 
Including operations, including financial performance and everything else, but ultimately it's taking the data that's being captured and turning it into value within the business. The last point here is that once you have generated a model, or an insight or some other resource that you can act upon, you then have to act upon it in the real world. We call that systems of agency, the ability to enact based on data. Now I want to spend just a second talking about systems of agency 'cause we think it's an interesting concept and it's something Jim Kobielus is going to talk about a little bit later. When we say systems of agency, what we're saying is increasingly machines are acting on behalf of a brand. Or systems, combinations of machines and people, are acting on behalf of the brand. And this whole notion of agency is the idea that ultimately these systems are now acting as the business's agent. They are at the front line of engaging customers. It's an extremely rich proposition that has subtle but crucial implications. For example I was talking to a senior decision maker at a business today and they made a quick observation: on their way here to New York City they had followed a woman who was going through security, opened up her suitcase and took out a bird. And then went through security with the bird. And the reason why I bring this up now is, as TSA was trying to figure out how exactly to deal with this, the bird started talking and repeating things that the woman had said, and many of those things, in fact, might have put her in jail. Now in this case the bird is not an agent of that woman. You can't put the woman in jail because of what the bird said. But increasingly we have to ask ourselves, as we ask machines to do more on our behalf, digital instrumentation and elements to do more on our behalf, it's going to have blowback and an impact on our brand if we don't do it well. I want to draw that forward a little bit because I suggest there's going to be a new lifecycle for data. And the way that we think about it is we have the internet or the Edge, which is comprised of things and, crucially, people, using sensors, whether they be smaller processors in control towers or whether they be phones that are tracking where we go, and this crucial element here is something that we call information transducers. Now a transducer in a traditional sense is something that takes energy from one form to another so that it can perform new types of work. By information transducer I essentially mean it takes information from one form to another so it can perform another type of work. This is a crucial feature of data. One of the beauties of data is that it can be used in multiple places at multiple times and not engender significant net new costs. It's one of the few assets you can say that about. So the concept of an information transducer's really important because it's the basis for a lot of transformations of data as data flies through organizations. So we end up with the transducers storing data in the form of analytics, machine learning, business operations, other types of things, and then it goes back and it's transduced, back into the real world, as we program the real world and turn it into these systems of agency. So that's the new lifecycle. And increasingly, that's how we have to think about data flows. Capturing it, turning it into value and having it act on our behalf in front of markets. 
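(Editor's aside: to make the capture, transduce, act lifecycle concrete, here is a minimal sketch in Python. Everything in it, the sensor, the threshold, the maintenance dispatch, is a hypothetical stand-in of our own, not anything specified in the talk.)

```python
import random
from statistics import mean

ALERT_THRESHOLD = 75.0   # hypothetical limit above which the agent acts
WINDOW = 10              # raw readings aggregated per decision

def capture():
    """Capture: stand-in for reading a physical sensor at the Edge."""
    return random.uniform(60.0, 90.0)

def transduce(readings):
    """Information transducer: change the data's form (raw samples to a
    rolling mean) so the same data can perform a new type of work."""
    return mean(readings)

def act(signal):
    """System of agency: the machine acts on the business's behalf."""
    print(f"dispatching maintenance, signal={signal:.1f}")

readings = [capture() for _ in range(WINDOW)]   # capture the data
signal = transduce(readings)                    # turn it into value
if signal > ALERT_THRESHOLD:                    # enact based on data
    act(signal)
```

The same reading can feed many such transducers at once, which is the "multiple places at multiple times" property of data noted above.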
That could have enormous implications for how ultimately money is spent over the next few years. So Wikibon does a significant amount of market research in addition to advising our large user customers. And that includes doing studies on cloud, public cloud, but also studies on what's happening within the analytics world. And if you take a look at it, what we basically see happening over the course of the next few years is significant investments in software and also services to get the word out. But we also expect there's going to be a lot of hardware. A significant amount of hardware that's ultimately sold within this space. And that's because of something that we call true private cloud. This concept of ultimately a business increasingly being designed and architected around the idea of data assets means that the reality, the physical realities of how data operates, how much it costs to store it or move it, the issues of latency, the issues of intellectual property protection as well as things like the regulatory regimes that are being put in place to govern how data gets used in between locations. All of those factors are going to drive increased utilization of what we call true private cloud. On-premise technologies that provide the cloud experience but act where the data naturally needs to be processed. I'll come back to that a little more in a second. So we think that it's going to be a relatively balanced market, a lot of stuff is going to end up in the cloud, but as Neil and Jim will talk about, there's going to be an enormous amount of analytics that pulls an enormous amount of data out to the Edge 'cause that's where the action's going to be. Now one of the things I want to also reveal to you is we've done a fair amount of data, we've done a fair amount of research around this question of where or how will data guide decisions about infrastructure? And in particular the Edge is driving these conversations. So here is a piece of research that one of our cohorts at Wikibon did, David Floyer. Taking a look at IoT Edge cost comparisons over a three year period. And it showed on the left hand side, an example where the sensor towers and other types of devices were streaming data back into a central location in a stylized wind farm example. Very very expensive. Significant amounts of money end up being consumed, significant resources end up being consumed by the cost of moving the data from one place to another. Now this is even assuming that latency does not become a problem. The second example that we looked at is if we kept more of that data at the Edge and processed it at the Edge. And literally it is an 85-plus percent cost reduction to keep more of the data at the Edge. Now that has enormous implications for how we think about big data, how we think about next generation architectures, et cetera. But it's these costs that are going to be so crucial to shaping the decisions that we make over the next two years about where we put hardware, where we put resources, what type of automation is possible, and what types of technology management has to be put in place. Ultimately we think it's going to lead to a structure, an architecture in the infrastructure as well as applications that is informed more by moving cloud to the data than moving the data to the cloud. That's kind of our fundamental proposition: the norm in the industry has been to think about moving all data up to the cloud because who wants to do IT? It's so much cheaper, look what Amazon can do. 
Or what AWS can do. All true statements. Very very important in many respects. But most businesses today are starting to rethink that simple proposition and asking themselves do we have to move our business to the cloud, or can we move the cloud to the business? And increasingly what we see happening as we talk to our large customers about this, is that the cloud is being extended out to the Edge, we're moving the cloud and cloud services out to the business. Because of economic reasons, intellectual property control reasons, regulatory reasons, security reasons, any number of other reasons. It's just a more natural way to deal with it. And of course, the most important reason is latency. So with that as a quick backdrop, if I may quickly summarize, we believe fundamentally that the difference today is that businesses are trying to understand how to use data as an asset. And that requires an investment in new sets of technology capabilities that are not cheap, not simple, and require significant thought, a lot of planning, a lot of change within IT and business organizations. How we capture data, how we turn it into value, and how we translate that into real world action through software. That's going to lead to a rethinking, ultimately, based on cost and other factors, about how we deploy infrastructure. How we use the cloud so that the data guides the activity, and the choice of cloud supplier doesn't determine or limit what we can do with our data. And that's going to lead to this notion of true private cloud and elevate the role the Edge plays in analytics and all other architectures. So I hope that was perfectly clear. And now what I want to do is I want to bring up Neil Raden. Yes, now's the time Neil! So let me invite Neil up to spend some time talking about harvesting value at the Edge. Can you see his, all right. Got it. >> Oh boy. Hi everybody. Yeah, this is a really, this is a really big and complicated topic so I decided to just concentrate on something fairly simple, but I know that Peter mentioned customers. And he also had a picture of Peter Drucker. I had the pleasure in 1998 of interviewing Peter and photographing him. Peter Drucker, not this Peter. Because I'd started a magazine called Hired Brains. It was for consultants. And Peter said, Peter said a number of really interesting things to me, but one of them was his definition of a customer was someone who wrote you a check that didn't bounce. He was kind of a wag. He was! So anyway, he had to leave to do a video conference with Jack Welch and so I said to him, how do you charge Jack Welch to spend an hour on a video conference? And he said, you know I have this theory that you should always charge your client enough that it hurts a little bit or they don't take you seriously. Well, I had the chance to talk to Jack's wife, Suzy Welch, recently and I told her that story and she said, "Oh he's full of it, Jack never paid a dime for those conferences!" (laughs) So anyway, all right, so let's talk about this. To me, things about, engineered things like the hardware and network and all these other standards and so forth, we haven't fully developed those yet, but they're coming. As far as I'm concerned, they're not the most interesting thing. The most interesting thing to me in Edge Analytics is what you're going to get out of it, what the result is going to be. Making sense of this data that's coming. 
And while we're on data, something I've been thinking about a lot lately, because everybody I've talked to for the last three days just keeps talking to me about data. I have this feeling that data isn't actually quite real. That any data that we deal with is the result of some process that's captured it from something else that's actually real. In other words it's a proxy. So it's not exactly perfect. And that's why we've always had these problems about customer A, customer A, customer A, what's their definition? What's the definition of this, that and the other thing? And with sensor data, I really have the feeling, when companies get, not you know, not companies, organizations get instrumented and start dealing with this kind of data, what they're going to find is that this is the first time, and I've been involved in analytics, I don't want to date myself, 'cause I know I look young, but I've been dealing with analytics since 1975. And everything we've ever done in analytics has involved pulling data from some other system that was not designed for analytics. But if you think about sensor data, this is data that we're actually going to catch the first time. It's going to be ours! We're not going to get it from some other source. It's going to be the real deal, to the extent that it's the real deal. Now you may say, ya know Neil, a sensor that's sending us information about oil pressure or temperature or something like that, how can you quarrel with that? Well, I can quarrel with it because I don't know if the sensor's doing it right. So we still don't know, even with that data, if it's right, but that's what we have to work with. Now, what does that really mean? It means that we have to be really careful with this data. It's ours, we have to take care of it. We don't get to reload it from source some other day. If we munge it up it's gone forever. So that has, that has very serious implications, but let me, let me roll you back a little bit. The way I look at analytics is it's come in three different eras. And we're entering into the third now. The first era was business intelligence. It was basically built and governed by IT, it was system of record kind of reporting. And as far as I can recall, it probably started around 1988, or at least that's the year that Howard Dresner claims to have invented the term. I'm not sure it's true. And things happened before 1988 that were sort of like BI, but 88 was when they really started coming out, that's when we saw BusinessObjects and Cognos and MicroStrategy and those kinds of things. The second generation just popped out on everybody else. We're all looking around at BI and we were saying why isn't this working? Why are only five people in the organization using this? Why are we not getting value out of this massive license we bought? And along comes companies like Tableau doing data discovery, visualization, data prep, and Line of Business people are using this now. But it's still the same kind of data sources. It's moved out a little bit, but it still hasn't really hit the Big Data thing. Now we're in the third generation, so we not only had Big Data, which has come and hit us like a tsunami, but we're looking at smart discovery, we're looking at machine learning. We're looking at AI induced analytics workflows. And then all the natural language cousins. You know, natural language processing, natural language, what else? NLQ, natural language query. Natural language generation. Anybody here know what natural language generation is? 
Yeah, so what you see now is you do some sort of analysis and that tool comes up and says this chart is about the following and it used the following data, and it's blah blah blah blah blah. I think it's kind of wordy and it's going to get refined some, but it's an interesting, it's an interesting thing to do. Now, the problem I see with Edge Analytics and IoT in general is that most of the canonical examples we talk about are pretty thin. I know we talk about autonomous cars, I hope to God we never have them, 'cause I'm a car guy. Fleet Management, I think Qualcomm started Fleet Management in 1988, that is not a new application. Industrial controls. I seem to remember, I seem to remember Honeywell doing industrial controls at least in the 70s and before that, I wasn't, I don't want to talk about what I was doing, but I definitely wasn't in this industry. So my feeling is we all need to sit down and think about this and get creative. Because the real value in Edge Analytics or IoT, whatever you want to call it, the real value is going to be figuring out something that's new or different. Creating a brand new business. Changing the way an operation happens in a company, right? And I think there are a lot of smart people out there and I think there's a million apps that we haven't even talked about so, if you as a vendor come to me and tell me how great your product is, please don't talk to me about autonomous cars or Fleet Management, 'cause I've heard about that, okay? Now, hardware and architecture are really not the most interesting thing. We fell into that trap with data warehousing. We've fallen into that trap with Big Data. We talk about speeds and feeds. Somebody said to me the other day, what's the narrative of this company? This is a technology provider. And I said as far as I can tell, they don't have a narrative, they have some products and they compete in a space. And when they go to clients and the clients say, what's the value of your product? They don't have an answer for that. So we don't want to fall into this trap, okay? Because IoT is going to inform you in ways you've never even dreamed about. Unfortunately some of them are going to be really stinky, you know, they're going to be really bad. You're going to lose more of your privacy, it's going to get harder to get, I dunno, a mortgage for example, I dunno, maybe it'll be easier, but in any case, it's not going to all be good. So let's really think about what you want to do with this technology to do something that's really valuable. Cost takeout is not the place to justify an IoT project. Because number one, it's very expensive, and number two, it's a waste of the technology because you should be looking at, you know the old numerator denominator thing? You should be looking at the numerators and forget about the denominators because that's not what you do with IoT. And the other thing is you don't want to get overconfident. Actually this is good advice about anything, right? But in this case, I love this quote by Derek Sivers. He's a pretty funny guy. He said, "If more information was the answer, then we'd all be billionaires with perfect abs." I'm not sure what's on his wishlist, but you know, I would, those aren't necessarily the two things I would think of, okay. Now, what I said about the data, I want to explain some more. Big Data Analytics, if you look at this graphic, it depicts it perfectly. It's a bunch of different stuff falling into the funnel. All right? It comes from other places, it's not original material. 
And when it comes in, it's always used as secondhand data. Now what does that mean? That means that you have to figure out the semantics of this information and you have to find a way to put it together in a way that's useful to you, okay. That's Big Data. That's where we are. How is that different from IoT data? It's like I said, IoT is original. You can put it together any way you want because no one else has ever done that before. It's yours to construct, okay. You don't even have to transform it into a schema because you're creating the new application. But the most important thing is you have to take care of it 'cause if you lose it, it's gone. It's the original data. It's the same way, in operational systems for a long long time we've always been concerned about backup and security and everything else. You better believe this is a problem. I know a lot of people think about streaming data, that we're going to look at it for a minute, and we're going to throw most of it away. Personally I don't think that's going to happen. I think it's all going to be saved, at least for a while. Now, the governance and security, oh, by the way, I don't know where you're going to find a presentation where somebody uses a newspaper clipping about Vladimir Lenin, but here it is, enjoy yourselves. I believe that when people think about governance and security today they're still thinking along the same grids that we thought about it all along. But this is very very different and again, I'm sorry I keep thrashing this around, but this is treasured data that has to be carefully taken care of. Now when I say governance, my experience has been over the years that governance is something that IT does to make everybody's lives miserable. But that's not what I mean by governance today. It means a comprehensive program to really secure the value of the data as an asset. And you need to think about this differently. Now the other thing is you may not get to think about it differently, because some of the stuff may end up being subject to regulation. And if the regulators start regulating some of this, then that'll take some of the degrees of freedom away from you in how you put this together, but you know, that's the way it works. Now, machine learning, I think I told somebody the other day that claims about machine learning in software products are as common as twisters in trailer parks. And a lot of it is not really what I'd call machine learning. But there's a lot of it around. And I think all of the open source machine learning and artificial intelligence that's popped up, it's great because all those math PhDs who work at Home Depot now have something to do when they go home at night and they construct this stuff. But if you're going to have machine learning at the Edge, here's the question, what kind of machine learning would you have at the Edge? As opposed to developing your models back at say, the cloud, when you transmit the data there. The devices at the Edge are not very powerful. And they don't have a lot of memory. So you're only going to be able to do things that have been modeled or constructed somewhere else. But that's okay. Because machine learning algorithm development is actually slow and painful. So you really want the people who know how to do this working with gobs of data creating models and testing them offline. And when you have something that works, you can put it there (a minimal sketch of that pattern follows below). Now there's one thing I want to talk about before I finish, and I think I'm almost finished. 
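(Editor's aside: a minimal sketch of the train-offline, infer-at-the-Edge pattern Neil just described, assuming scikit-learn for the offline step. The synthetic data and feature names are illustrative assumptions; the point is that only the finished coefficients travel to the low-power device, which then needs no ML library at all.)

```python
import math
import numpy as np
from sklearn.linear_model import LogisticRegression

# Offline, with gobs of data: build and test the model where compute is cheap.
rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 3))                      # e.g. vibration, temp, load
y = (X @ np.array([1.5, -2.0, 0.7]) > 0).astype(int)  # synthetic labels
model = LogisticRegression().fit(X, y)

# Ship only the finished coefficients down to the Edge device.
weights = model.coef_[0].tolist()
bias = float(model.intercept_[0])

def edge_infer(features):
    """Tiny, dependency-free scorer fit for a constrained Edge device."""
    z = bias + sum(w * f for w, f in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))                 # probability of an event

print(edge_infer([0.2, -1.1, 0.5]))
```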
I wrote a book about 10 years ago about automated decision making and the conclusion that I came up with was that little decisions add up, and that's good. But it also means you don't have to get them all right. But you don't want computers or software making decisions unattended if it involves human life, or frankly any life. Or the environment. So when you think about the applications that you can build using this architecture and this technology, think about the fact that you're not going to be doing air traffic control, you're not going to be monitoring crossing guards at the elementary school. You're going to be doing things that may seem fairly mundane. Managing machinery on the factory floor, I mean that may sound great, but really isn't that interesting. Managing well heads, drilling for oil, well I mean, it's great to the extent that it doesn't cause wells to explode, but they don't usually explode. What it's usually used for is to drive the cost out of preventative maintenance. Not very interesting. So use your heads. Come up with really cool stuff. And any of you who are involved in Edge Analytics, the next time I talk to you I don't want to hear about the same five applications that everybody talks about. Let's hear about some new ones. So, in conclusion, I don't really have anything in conclusion except that Peter mentioned something about limousines bringing people up here. On Monday I was slogging up and down Park Avenue and Madison Avenue with my client and we were visiting all the hedge funds there because we were doing a project with them. And in the miserable weather I looked at him and I said, for God's sake, Paul, where's the black car? And he said, that was the 90s. (laughs) Thank you. So, Jim, up to you. (audience applauding) This is terrible, go that way, this was terrible coming that way. >> Woo, don't want to trip! And let's move to, there we go. Hi everybody, how ya doing? Thanks Neil, thanks Peter, those were great discussions. So I'm the third leg in this relay race here, talking about of course how software is eating the world. And focusing on the value of Edge Analytics in a lot of real world scenarios. Programming the real world to make the world a better place. So I will talk, I'll break it out analytically in terms of the research that Wikibon is doing in the area of the IoT, but specifically how AI intelligence is being embedded really into all material reality, potentially, at the Edge. In mobile applications and industrial IoT and smart appliances and self driving vehicles. I will break it out in terms of a reference architecture for understanding what functions are being pushed to the Edge, to hardware, to our phones and so forth, to drive various scenarios in terms of real world results. So I'll move apace here. So basically AI software or AI microservices are being infused into Edge hardware as we speak. What we see is more vendors of smart phones and other real world appliances, and things like self driving vehicles. What they're doing is they're instrumenting their products with computer vision and natural language processing, environmental awareness based on sensing and actuation, and the inferences that these devices make, both to provide support for human users of these devices as well as to enable varying degrees of autonomous operation. So what I'll be talking about is how AI is a foundation for data driven systems of agency of the sort that Peter is talking about. 
Infusing data driven intelligence into everything or potentially so. As more of this capability, all these algorithms for things like, ya know, doing real time predictions and classifications, anomaly detection and so forth, as this functionality gets diffused widely and becomes more commoditized, you'll see it burned into an ever-wider variety of hardware architectures, neurosynaptic chips, GPUs and so forth. So what I've got here in front of you is a sort of a high level reference architecture that we're building up in our research at Wikibon. So AI, artificial intelligence, is a big term, a big paradigm, I'm not going to unpack it completely. Of course we don't have oodles of time so I'm going to take you fairly quickly through the high points. It's a driver for systems of agency. Programming the real world. Transducing digital inputs, the data, to analog real world results. Through the embedding of this capability in the IoT, but pushing more and more of it out to the Edge with points of decision and action in real time. And there are four AI-enabling capabilities that we're seeing that are absolutely critical to software being pushed to the Edge: sensing, actuation, inference, and learning. Sensing and actuation, like Peter was describing, it's about capturing data from the environment within which a device or user is operating or moving. And then actuation is the fancy term for doing stuff, ya know, like industrial IoT, it's obviously machine controlled, but clearly, you know, with self driving vehicles it's steering a vehicle and avoiding crashing and so forth. Inference is the meat and potatoes as it were of AI. Analytics does inferences. It infers from the data, the logic of the application. Predictive logic, correlations, classification, abstractions, differentiation, anomaly detection, recognizing faces and voices. We see that now with Apple: the latest version of the iPhone is embedding face recognition as the core multifactor authentication technique. Clearly that's a harbinger of what's going to be universal fairly soon, and that depends on AI. That depends on convolutional neural networks, that is some heavy hitting processing power that's necessary, and it's processing the data that's coming from your face. So that's critically important. So what we're looking at then is that AI software is taking root in hardware to power continuous agency. Getting stuff done. Powering decision support for human beings who have to take varying degrees of action in various environments. We don't necessarily want to let the car steer itself in all scenarios, we want some degree of override, for lots of good reasons. People want to protect life and limb including their own. And just more data driven automation across the internet of things in the broadest sense. So unpacking this reference framework, what's happening is that AI driven intelligence is powering real time decisioning at the Edge. Real time local sensing from the data that it's capturing there, it's ingesting the data. Some, not all of that data, may be persistent at the Edge. Some, perhaps most of it, will be pushed into the cloud for other processing. When you have these highly complex algorithms that are doing AI deep learning, multilayer, to do a variety of anti-fraud and higher level like narrative, auto-narrative roll-ups from various scenes that are unfolding. 
A lot of this processing is going to begin to happen in the cloud, but a fair amount of the more narrowly scoped inferences that drive real time decision support at the point of action will be done on the device itself. Contextual actuation: the sensor data that's captured by the device, along with other data that may be coming down in real time streams through the cloud, will provide the broader contextual envelope of data needed to drive actuation, to drive various models and rules and so forth that are making stuff happen at the point of action, at the Edge. Continuous inference. What it all comes down to is that inference is what's going on inside the chips at the Edge device. And what we're seeing is a growing range of hardware architectures, GPUs, CPUs, FPGAs, ASICs, neurosynaptic chips of all sorts, playing in various combinations that are automating more and more very complex inference scenarios at the Edge. And not just individual devices, swarms of devices, like drones and so forth, are essentially an Edge unto themselves. You'll see these tiered hierarchies of Edge swarms that are playing and doing inferences of an ever more complex dynamic nature. And much of this will be, this capability, the fundamental capabilities that are powering them all, will be burned into the hardware that powers them. And then adaptive learning. Now I use the term learning rather than training here, training is at the core of it. Training means everything in terms of the predictive fitness, or the fitness of your AI services for whatever task, predictions, classifications, face recognition, that you've built them for. But I use the term learning in a broader sense. What makes your inferences get better and better, more accurate over time, is that you're training them with fresh data in a supervised learning environment. But you can have reinforcement learning if you're doing, like say, robotics and you don't have ground truth against which to train the data set. You know, there's maximizing a reward function versus minimizing a loss function, the latter being the standard approach for supervised learning. There's also, of course, the approach of unsupervised learning, with cluster analysis critically important in a lot of real world scenarios. So Edge AI algorithms, clearly, deep learning, which is multilayered machine learning models that can do abstractions at higher and higher levels. Face recognition is a high level abstraction. Faces in a social environment is an even higher level of abstraction in terms of groups. Faces over time and bodies and gestures, doing various things in various environments, is an even higher level abstraction in terms of narratives that can be rolled up, are being rolled up, by deep learning capabilities of great sophistication. Convolutional neural networks for processing images, recurrent neural networks for processing time series. Generative adversarial networks for doing essentially what's called generative applications of all sorts, composing music, and a lot of it's being used for auto programming. These are all deep learning. There's a variety of other algorithm approaches I'm not going to bore you with here. Deep learning is essentially the enabler of the five senses of the IoT. Your phone's going to have, has a camera, it has a microphone, it has the ability to of course, has geolocation and navigation capabilities. It's environmentally aware, it's got an accelerometer and so forth embedded therein. 
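(Editor's aside: in standard textbook notation, the training distinction Jim just drew looks like the following; the symbols are the usual ones, not anything specific to the talk. Jim picks the thread back up right after.)

```latex
% Supervised learning: minimize an empirical loss over labeled examples
\min_{\theta}\; \frac{1}{N}\sum_{i=1}^{N} \ell\bigl(f_\theta(x_i),\, y_i\bigr)

% Reinforcement learning: maximize expected cumulative discounted reward
\max_{\theta}\; \mathbb{E}_{\pi_\theta}\!\left[\sum_{t=0}^{T} \gamma^{t}\, r_t\right]
```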
The reason that your phone and all of the devices are getting scary sentient is that they have the sensory modalities and the AI, the deep learning, that enables them to make environmentally correct decisions in a wider range of scenarios. So machine learning is the foundation of all of this, and for deep learning, artificial neural networks are the foundation of that. But there are other approaches for machine learning I want to make you aware of, because support vector machines and these other established approaches for machine learning are not going away, but really what's driving the show now is deep learning, because it's scary effective. And so that's where most of the investment in AI is going these days, into deep learning. AI Edge platforms, tools and frameworks are just coming along like gangbusters. Much development of AI, of deep learning, happens in the context of your data lake. This is where you're storing your training data. This is the data that you use to build, test, and validate your models. So we're seeing a deepening stack of Hadoop and there's Kafka, and Spark and so forth that are driving the training (coughs) excuse me, of AI models that power all these Edge Analytic applications, so that lake will continue to broaden and deepen in terms of the scope and range of data sets and the range of AI modeling it supports. Data science is critically important in this scenario because the data scientist, the data science teams, the tools and techniques and flows of data science are the fundamental development paradigm or discipline or capability that's being leveraged to build and to train and to deploy and iterate all this AI that's being pushed to the Edge. So clearly data science is at the center; data scientists of an increasingly specialized nature are necessary to the realization of this value at the Edge. AI frameworks are coming along like, you know, a mile a minute. TensorFlow, which is open source, most of these are open source, has achieved sort of almost a de facto standard status, and I'm using the word de facto in air quotes. There's Theano and Keras and MXNet and CNTK and a variety of other ones. We're seeing a range of AI frameworks come to market, most open source. Most are supported by most of the major tool vendors as well. So at Wikibon we're definitely tracking that, we plan to go deeper in our coverage of that space. And then next best action powers recommendation engines. I mean next best action decision automation, of the sort of thing Neil's covered in a variety of contexts in his career, is fundamentally important to Edge Analytics and to systems of agency 'cause it's driving the process automation, decision automation, sort of the targeted recommendations that are made at the Edge to individual users as well as to process that automation. That's absolutely necessary for self driving vehicles to do their jobs and industrial IoT. So what we're seeing is more and more recommendation engine or recommender capabilities powered by ML and DL are going to the Edge, are already at the Edge, for a variety of applications. Edge AI capabilities, like I said, there's sensing. And sensing at the Edge is becoming ever more rich, mixed reality Edge modalities of all sorts, for augmented reality and so forth. We're just seeing a growth in the range of sensory modalities that are enabled or filtered and analyzed through AI that are being pushed to the Edge, into the chip sets. 
Actuation, that's where robotics comes in. Robotics is coming into all aspects of our lives. And you know, it's brainless without AI, without deep learning and these capabilities. Inference, autonomous edge decisioning. Like I said, there's a growing range of inferences being done at the Edge. And that's where it has to happen 'cause that's the point of decision. Learning, training, much training, most training will continue to be done in the cloud because it's very data intensive. It's a grind to train and optimize an AI algorithm to do its job. It's not something that you necessarily want to do or can do at the Edge, at Edge devices, so the models that are built and trained in the cloud are pushed down through a dev ops process to the Edge, and that's the way it will work pretty much in most AI environments, Edge analytics environments. You centralize the modeling, you decentralize the execution of the inference models. The training engines will be in the cloud. Edge AI applications. I'll just run you through sort of a core list of the ones that are coming into, or have already come into, the mainstream at the Edge. Multifactor authentication, clearly the Apple announcement of face recognition is just a harbinger of the fact that that's coming to every device. Computer vision, speech recognition, NLP, digital assistants and chat bots powered by natural language processing and understanding, it's all AI powered. And it's becoming very mainstream. Emotion detection, face recognition, you know I could go on and on, but these are like the core things that everybody has access to, or will by 2020, on core devices, mass market devices. Developers, designers and hardware engineers are coming together to pool their expertise to build and train not just the AI, but also the entire package of hardware and UX and the orchestration of real world business scenarios or life scenarios that all this intelligence, the embedded intelligence, enables, and much of what they build in terms of AI will be containerized as microservices through Docker and orchestrated through Kubernetes as full cloud services in an increasingly distributed fabric. That's coming along very rapidly. We can see a fair amount of that already on display at Strata in terms of what the vendors are doing or announcing or who they're working with. The hardware itself, the Edge, you know at the Edge, some data will be persistent, needs to be persistent to drive inference. That's, and you know, to drive a variety of different application scenarios that need some degree of historical data related to what that device in question happens to be sensing or has sensed in the immediate past or, you know, whatever. The hardware itself is geared towards both sensing and, increasingly, persistence and Edge driven actuation of real world results. The whole notion of drones and robotics being embedded into everything that we do. That's where that comes in. That has to be powered by low cost, low power commodity chip sets of various sorts. What we see right now in terms of chip sets is GPUs; Nvidia has gone really far, and GPUs have come along very fast in terms of powering inference engines, you know, like the Tesla cars and so forth. But GPUs are in many ways the core hardware substrate for inference engines in DL so far. But to become a mass market phenomenon, it's got to get cheaper and lower powered and more commoditized, and so we see a fair number of CPUs being used as the hardware for Edge Analytic applications. 
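(Editor's aside: Jim's "centralize the modeling, decentralize the execution" point can be sketched with TensorFlow Lite, one real toolchain for exactly this hand-off, though not one the talk itself names. The toy model and random training data are illustrative assumptions; what matters is that the compact converted artifact, not the training data, is what ships to the device. Jim's chip set survey continues below.)

```python
import numpy as np
import tensorflow as tf

# Cloud side: train a (toy) model where data and compute are plentiful.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
X = np.random.rand(256, 4).astype("float32")
y = (X.sum(axis=1) > 2.0).astype("float32")
model.fit(X, y, epochs=2, verbose=0)

# Convert to a compact flat buffer an Edge runtime can execute directly.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
with open("edge_model.tflite", "wb") as f:
    f.write(converter.convert())
```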
Some vendors are fairly big on FPGAs, I believe Microsoft has gone fairly far with FPGAs inside its DL strategy. ASICs, I mean, there are neurosynaptic chips, like IBM's got one. There are at least a few dozen vendors of neurosynaptic chips on the market, so at Wikibon we're going to track that market as it develops. And what we're seeing is a fair number of scenarios where it's a mixed environment, where you use one chip set architecture at the inference side of the Edge, and other chip set architectures that are driving the DL as processed in the cloud, playing together within a common architecture. And we see some, a fair number of DL environments where the actual training is done in the cloud on Spark using CPUs and parallelized in memory, but pushing TensorFlow models that might be trained through Spark down to the Edge where the inferences are done in FPGAs and GPUs. Those kinds of mixed hardware scenarios are very, very likely to be standard going forward in lots of areas. So analytics at the Edge powering continuous results is what it's all about. The whole point is really not moving the data, it's putting the inference at the Edge and working from the data that's already captured and persistent there for the duration of whatever action or decision or result needs to be powered from the Edge. Like Neil said, cost takeout alone is not worth doing. Cost takeout alone is not the rationale for putting AI at the Edge. It's getting new stuff done, new kinds of things done, in an automated, consistent, intelligent, contextualized way to make our lives better and more productive. Security and governance are becoming more important. Governance of the models, governance of the data, governance in a dev ops context in terms of version controls over all those DL models that are built, that are trained, that are containerized and deployed. Continuous iteration and improvement of those to help them learn to make our lives better and easier. With that said, I'm going to hand it over now. It's five minutes after the hour. We're going to get going with the Influencer Panel so what we'd like to do is I call Peter, and Peter's going to call our influencers. >> All right, am I live yet? Can you hear me? All right so, we've got, let me jump back in control here. We've got, again, the objective here is to have community take on some things. And so what we want to do is I want to invite five other people up, Neil why don't you come on up as well. Start with Neil. You can sit here. On the far right hand side, Judith, Judith Hurwitz. >> Neil: I'm glad I'm on the left side. >> From the Hurwitz Group. >> From the Hurwitz Group. Jennifer Shin who's affiliated with UC Berkeley. Jennifer are you here? >> She's here, Jennifer where are you? >> She was here a second ago. >> Neil: I saw her walk out she may have, >> Peter: All right, she'll be back in a second. >> Here's Jennifer! >> Here's Jennifer! >> Neil: With 8 Path Solutions, right? >> Yep. >> Yeah 8 Path Solutions. >> Just get my mic. >> Take your time Jen. >> Peter: All right, Stephanie McReynolds. Far left. And finally Joe Caserta, Joe come on up. >> Stephie's with Alation >> And to the left. So what I want to do is I want to start by having everybody just go around introduce yourself quickly. Judith, why don't we start there. >> I'm Judith Hurwitz, I'm president of Hurwitz and Associates. We're an analyst research and thought leadership firm. I'm the co-author of eight books. Most recent is Cognitive Computing and Big Data Analytics. 
I've been in the market for a couple years now. >> Jennifer. >> Hi, my name's Jennifer Shin. I'm the founder and Chief Data Scientist of 8 Path Solutions LLC. We do data science, analytics and technology. We're actually about to do a big launch next month, with Box actually. >> We're apparently, are we having a, sorry Jennifer, are we having a problem with Jennifer's microphone? >> Man: Just turn it back on? >> Oh you have to turn it back on. >> It was on, oh sorry, can you hear me now? >> Yes! We can hear you now. >> Okay, I don't know how that turned back off, but okay. >> So you got to redo all that Jen. >> Okay, so my name's Jennifer Shin, I'm founder of 8 Path Solutions LLC, it's a data science, analytics and technology company. I founded it about six years ago. So we've been developing some really cool technology that we're going to be launching with Box next month. It's really exciting. And I have, I've been developing a lot of patents and some technology as well as teaching at UC Berkeley as a lecturer in data science. >> You know Jim, you know Neil, Joe, you ready to go? >> Joe: Just broke my microphone. >> Joe's microphone is broken. >> Joe: Now it should be all right. >> Jim: Speak into Neil's. >> Joe: Hello, hello? >> I just feel not worthy in the presence of Joe Caserta. (several laughing) >> That's right, master of mics. If you can hear me, Joe Caserta, so yeah, I've been doing data technology solutions since 1986, almost as old as Neil here, but I've been doing specifically BI, data warehousing, business intelligence type of work since 1996. And I've been wholly dedicated to Big Data solutions and modern data engineering since 2009. Where should I be looking? >> Yeah, I don't know, where is the camera? >> Yeah, and that's basically it. So my company was formed in 2001, it's called Caserta Concepts. We recently rebranded to only Caserta 'cause what we do is way more than just concepts. So we conceptualize the stuff, we envision what the future brings and we actually build it. And we help clients large and small who just want to be leaders in innovation using data specifically to advance their business. >> Peter: And finally Stephanie McReynolds. >> I'm Stephanie McReynolds, I head product marketing as well as corporate marketing for a company called Alation. And we are a data catalog, so we help bring together not only a technical understanding of your data, but we curate that data with human knowledge and use automated intelligence internally within the system to make recommendations about what data to use for decision making. And some of our customers like City of San Diego, a large automotive manufacturer working on self driving cars and General Electric use Alation to help power their solutions for IoT at the Edge. >> All right so let's jump right into it. And again if you have a question, raise your hand, and we'll do our best to get it to the floor. But what I want to do is I want to get seven questions in front of this group and have you guys discuss, slog, disagree, agree. Let's start here. What is the relationship between Big Data, AI and IoT? Now Wikibon's put forward its observation that data's being generated at the Edge, that action is being taken at the Edge, and then increasingly the software and other infrastructure architectures need to accommodate the realities of how data is going to work in these very complex systems. That's our perspective. Anybody, Judith, you want to start? 
>> Yeah, so I think that if you look at AI, machine learning, all these different areas, you have to be able to have the data to learn from. Now when it comes to IoT, I think one of the issues we have to be careful about is not all data will be at the Edge. Not all data needs to be analyzed at the Edge. For example if the light is green and that's good and it's supposed to be green, do you really have to constantly analyze the fact that the light is green? You actually only really want to be able to analyze and take action when there's an anomaly. Well if it goes purple, that's actually a sign that something might explode, so that's where you want to make sure that you have the analytics at the edge. Not for everything, but for the things where there is an anomaly and a change. >> Joe, how about from your perspective? >> For me I think the evolution of data is really becoming, I mean eventually data's going to be the oxygen we breathe. It used to be very very reactive and there used to be like a latency. You do something, there's a behavior, there's an event, there's a transaction, and then you go record it and then you collect it, and then you can analyze it. And it was very very waterfallish, right? And then eventually we figured out how to put it back into the system. Or at least human beings interpret it to try to make the system better, and that is really completely turned on its head, we don't do that anymore. Right now it's very, very synchronous, where, as we're actually making these transactions, the machines act; we don't really need, I mean human beings are involved a bit, but less and less and less. And it's just a reality, it may not be politically correct to say, but it's a reality that my phone in my pocket is following my behavior, and it knows without telling a human being what I'm doing. And it can actually help me do things like get to where I want to go faster depending on my preference, if I want to save money or save time or visit things along the way. And I think that's all integration of big data, streaming data, artificial intelligence, and I think the next thing that we're going to start seeing is the culmination of all of that. I actually, hopefully it'll be published soon, I just wrote an article for Forbes with the term ARBI, and ARBI is the integration of Augmented Reality and Business Intelligence. Where I think essentially we're going to see, you know, hold your phone up to Jim's face and it's going to recognize-- >> Peter: It's going to break. >> And it's going to say, exactly, you know, what are the key metrics that we want to know about Jim. If he works on my sales force, what's his attainment of goal, what is-- >> Jim: Can it read my mind? >> Potentially based on behavior patterns. >> Now I'm scared. >> I don't think Jim's buying it. >> It will, without a doubt, be able to predict that what you've done in the past you may, with a certain level of confidence, do again in the future, right? And is that mind reading? It's pretty close, right? >> Well, sometimes, I mean, mind reading is in the eye of the individual who wants to know. And if the machine appears to approximate what's going on in the person's head, sometimes you can't tell. So I guess, I guess we could call that the Turing machine test of the paranormal. >> Well, face recognition, micro gesture recognition, I mean facial gestures, people can do it. 
Maybe not better than a coin toss, but if it can be seen visually and captured and analyzed, conceivably some degree of mind reading can be built in. I can see when somebody's angry looking at me so, that's a possibility. That's kind of a scary possibility in a surveillance society, potentially. >> Neil: Right, absolutely. >> Peter: Stephanie, what do you think? >> Well, I hear a world of it's the bots versus the humans being painted here and I think that, you know at Alation we have a very strong perspective on this and that is that the greatest impact, or the greatest results, are going to come when humans figure out how to collaborate with the machines. And so yes, you want to get to the location more quickly, but the machine as in the bot isn't able to tell you exactly what to do and you're just going to blindly follow it. You need to train that machine, you need to have a partnership with that machine. So, a lot of the power, and I think this goes back to Judith's story, is then what is the human decision making that can be augmented with data from the machine, but then the humans are actually handling the training side and driving machines in the right direction. I think that's when we get true power out of some of these solutions so it's not just all about the technology. It's not all about the data or the AI, or the IoT, it's about how that empowers human systems to become smarter and more effective and more efficient. And I think we're playing that out in our technology in a certain way and I think organizations that are thinking along those lines with IoT are seeing more benefits immediately from those projects. >> So I think we have general agreement on some of the things you talked about: IoT crucial for capturing information and then having action being taken, AI being crucial to defining and refining the nature of the actions that are being taken, Big Data ultimately powering how a lot of that changes. Let's go to the next one. >> So actually I have something to add to that. So I think it makes sense, right, with IoT, why we have Big Data associated with it. If you think about what data is collected by IoT. We're talking about serial information, right? It's over time, it's going to grow exponentially just by definition, right, so every minute you collect a piece of information that means over time, it's going to keep growing, growing, growing as it accumulates. So that's one of the reasons why the IoT is so strongly associated with Big Data. And also why you need AI to be able to differentiate between one minute versus the next minute, right? Trying to find a better way rather than looking at all that information and manually picking out patterns. To have some automated process for being able to filter through that much data that's being collected. >> I want to point out though based on what you just said Jennifer, I want to bring Neil in at this point, that this question of IoT now generating unprecedented levels of data does introduce this idea of the primary source. Historically what we've done within technology, or within IT certainly, is we've taken stylized data. There is no such thing as a real world accounting thing. It is a human contrivance. And we stylize data and therefore it's relatively easy to be very precise on it. But when we start, as you noted, when we start measuring things with a tolerance down to thousandths of a millimeter, whatever that is, metric system, now we're still sometimes dealing with errors that we have to attend to.
So, the reality is we're not just dealing with stylized data, we're dealing with real data, and it's more, more frequent, but it also has special cases that we have to attend to in terms of how we use it. What do you think Neil? >> Well, I mean, I agree with that, I think I already said that, right. >> Yes you did, okay let's move on to the next one. >> Well it's a doppelganger, the digital twin doppelganger that's automatically created by the very fact that you're living and interacting and so forth and so on. It's going to accumulate regardless. Now that doppelganger may not be your agent, or might not be the foundation for your agent unless there's some other piece of logic like an interest graph that you build, a human being saying this is my broad set of interests, and so all of my agents out there in the IoT, you all need to be aware that when you make a decision on my behalf as my agent, this is what Jim would do. You know I mean there needs to be that kind of logic somewhere in this fabric to enable true agency. >> All right, so I'm going to start with you. Oh go ahead. >> I have a real short answer to this though. I think that Big Data provides the data and compute platform to make AI possible. For those of us who dipped our toes in the water in the 80s, we got clobbered because we didn't have the, we didn't have the facilities, we didn't have the resources to really do AI, we just kind of played around with it. And I think that the other thing about it is if you combine Big Data and AI and IoT, what you're going to see is people, a lot of the applications we develop now are very inward looking, we look at our organization, we look at our customers. We try to figure out how to sell more shoes to fashionable ladies, right? But with this technology, I think people can really expand what they're thinking about and what they model and come up with applications that are much more external. >> Actually what I would add to that is also it actually introduces being able to use engineering, right? Having engineers interested in the data. Because it's actually technical data that's collected, not just say preferences or information about people, but actual measurements that are being collected with IoT. So it's really interesting in the engineering space because it opens up a whole new world for the engineers to actually look at data and to actually combine both that hardware side as well as the data that's being collected from it. >> Well, Neil, you and I have talked about something, 'cause it's not just engineers. We have in the healthcare industry for example, which you know a fair amount about, there's this notion of empirical based management. And the idea that increasingly we have to be driven by data as a way of improving the way that managers do things, the way the managers collect or collaborate and ultimately collectively how they take action. So it's not just engineers, it's supposed to also inform business, what's actually happening in the healthcare world when we start thinking about some of this empirical based management, is it working? What are some of the barriers? >> It's not a function of technology. What happens in medicine and healthcare research is, I guess you can say it borders on fraud. (people chuckling) No, I'm not kidding. I know the New England Journal of Medicine a couple of years ago released a study and said that at least half their articles that they published turned out to be written, ghost written by pharmaceutical companies.
(man chuckling) Right, so I think the problem is that when you do a clinical study, the one that really killed me about 10 years ago was the Women's Health Initiative. They spent $700 million gathering this data over 20 years. And when they released it they looked at all the wrong things deliberately, right? So I think that's a systemic-- >> I think you're bringing up a really important point that we haven't brought up yet, and that is, can you use Big Data and machine learning to begin to take the biases out? So if you let the, if you divorce your preconceived notions and your biases from the data and let the data lead you to the logic, you start to, I think, get better over time, but it's going to take a while to get there because we do tend to gravitate towards our biases. >> I will share an anecdote. So I had some arm pain, and I had numbness in my thumb and pointer finger and I went to, excruciating pain, went to the hospital. So the doctor examined me, and he said you probably have a pinched nerve, he said, but I'm not exactly sure which nerve it would be, I'll be right back. And I kid you not, he went to a computer and he Googled it. (Neil laughs) And he came back because this little bit of information was something that could easily be looked up, right? Every nerve in your spine is connected to your different fingers so the pointer and the thumb just happens to be your C6, so he came back and said, it's your C6. (Neil mumbles) >> You know an interesting, I mean that's a good example. One of the issues with healthcare data is that the data set is not always shared across the entire research community, so by making Big Data accessible to everyone, you actually start a more rational conversation or debate on well what are the true insights-- >> If that conversation includes what Judith talked about, the actual model that you use to set priorities and make decisions about what's actually important. So it's not just about improving, this is the test. It's not just about improving your understanding of the wrong thing, it's also testing whether it's the right or wrong thing as well. >> That's right, to be able to test that you need to have humans in dialog with one another bringing different biases to the table to work through okay is there truth in this data? >> It's context and it's correlation and you can have a great correlation that's garbage. You know if you don't have the right context. >> Peter: So I want to, hold on Jim, I want to, >> It's exploratory. >> Hold on Jim, I want to take it to the next question 'cause I want to build off of what you talked about Stephanie and that is that this says something about what is the Edge. And our perspective is that the Edge is not just devices. That when we talk about the Edge, we're talking about human beings and the role that human beings are going to play both as sensors or carrying things with them, but also as actuators, actually taking action which is not a simple thing. So what do you guys think? What does the Edge mean to you? Joe, why don't you start? >> Well, I think it could be a combination of the two. And specifically when we talk about healthcare. So I believe in 2017 when we eat we don't know why we're eating, like I think we should absolutely by now be able to know exactly what is my protein level, what is my calcium level, what is my potassium level? And then find the foods to meet that.
What have I depleted versus what I should have, and eat very very purposely and not by taste-- >> And it's amazing that red wine is always the answer. >> It is. (people laughing) And tequila, that helps too. >> Jim: You're a precision foodie is what you are. (several chuckle) >> There's no reason why we should not be able to know that right now, right? And when it comes to healthcare is, the biggest problem or challenge with healthcare is no matter how great of a technology you have, you can't, you can't, you can't manage what you can't measure. And you're really not allowed to use a lot of this data so you can't measure it, right? You can't do things very very scientifically, right, in the healthcare world and I think regulation in the healthcare world is really burdening advancement in science. >> Peter: Any thoughts Jennifer? >> Yes, I teach statistics for data scientists, right, so you know we talk about a lot of these concepts. I think what makes these questions so difficult is you have to find a balance, right, a middle ground. For instance, in the case of are you being too biased through data, well you could say like we want to look at data only objectively, but then there are certain relationships that your data models might show that aren't actually a causal relationship. For instance, if there's an alien that came from space and saw Earth, saw the people, everyone's carrying umbrellas right, and then it started to rain. That alien might think well, it's because they're carrying umbrellas that it's raining. Now we know from the real world that that's actually not the way these things work. So if you look only at the data, that's the potential risk. That you'll start making associations or saying something's causal when it's actually not, right? So that's one of the, one of the I think big challenges. I think when it comes to looking also at things like healthcare data, right? Do you collect data about anything and everything? Does it mean that A, we need to collect all that data for the question we're looking at? Or that it's actually the best, more optimal way to be able to get to the answer? Meaning sometimes you can take some shortcuts in terms of what data you collect and still get the right answer and not have maybe that level of specificity that's going to cost you millions extra to be able to get. >> So Jennifer, as a data scientist, I want to build upon what you just said. And that is, are we going to start to see methods and models emerge for how we actually solve some of these problems? So for example, we know how to build a system for a stylized process like accounting or some elements of accounting. We have methods and models that lead to technology and actions and whatnot all the way down to where that system can be generated. We don't have the same notion to the same degree when we start talking about AI and some of this Big Data. We have algorithms, we have technology. But are we going to start seeing, as a data scientist, repeatability and learning and how to think the problems through that's going to lead us to a more likely best or at least good result? >> So I think that's a bit of a tough question, right? Because part of it is, it's going to depend on how many of these researchers actually get exposed to real world scenarios, right? Research looks into all these papers, and you come up with all these models, but if it's never tested in a real world scenario, well, I mean we really can't validate that it works, right?
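Jennifer's umbrella example earlier in this exchange can be made concrete in a few lines: two variables driven by a common cause will correlate strongly even though neither causes the other. A toy sketch with made-up probabilities:

```python
import random

random.seed(42)

# Rain is the hidden common cause; umbrellas and wet streets both follow it.
rain = [random.random() < 0.3 for _ in range(10_000)]
umbrellas = [r and random.random() < 0.9 for r in rain]
wet_streets = [r and random.random() < 0.95 for r in rain]

def correlation(xs, ys):
    """Pearson correlation; booleans act as 0/1 here."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

# High (roughly 0.9) even though umbrellas don't wet the streets.
print(correlation(umbrellas, wet_streets))
```

A model trained only on the umbrella and street columns would "learn" a strong association; only the hidden rain variable explains it, which is exactly the look-only-at-the-data risk she describes.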
So I think it is dependent on how much of this integration there's going to be between the research community and industry and how much investment there is. Funding is going to matter in this case. If there's no funding on the research side, then you'll see a lot of industry folk who feel very confident about their models, but again on the other side of course, if researchers don't validate those models then you really can't say for sure that it's actually more accurate, or it's more efficient. >> It's the issue of real world testing and experimentation, A/B testing, that's standard practice in many operationalized ML and AI implementations in the business world, but real world experimentation in the Edge analytics, what you're actually transducing are touching people's actual lives. Problem there is, like in healthcare and so forth, when you're experimenting with people's lives, somebody's going to die. I mean, in other words, that's a critical, in terms of causal analysis, you've got to tread lightly on operationalizing that kind of testing in the IoT when people's lives and health are at stake. >> We still give 'em placebos. So we still test 'em. All right so let's go to the next question. What are the hottest innovations in AI? Stephanie I want to start with you, someone at a company that's got kind of an interesting little thing happening. We start thinking about how do we better catalog data and represent it to a large number of people. What are some of the hottest innovations in AI as you see it? >> I think it's a little counter intuitive about what the hottest innovations are in AI, because we're at a spot in the industry where the most successful companies that are working with AI are actually incorporating them into solutions. So the best AI solutions are actually the products that you don't know there's AI operating underneath. But they're having a significant impact on business decision making or bringing a different type of application to the market and you know, I think there's a lot of investment that's going into AI tooling and tool sets for data scientists or researchers, but the more innovative companies are thinking through how do we really take AI and make it have an impact on business decision making and that means kind of hiding the AI to the business user. Because if you think a bot is making a decision instead of you, you're not going to partner with that bot very easily or very readily. I worked at, way at the start of my career, I worked in CRM when recommendation engines were all the rage online and also in call centers. And the hardest thing was to get a call center agent to actually read the script that the algorithm was presenting to them, that algorithm was 99% correct most of the time, but there was this human resistance to letting a computer tell you what to tell that customer on the other side even if it was more successful in the end. And so I think that the innovation in AI that's really going to push us forward is when humans feel like they can partner with these bots and they don't think of it as a bot, but they think about it as assisting their work and getting to a better result-- >> Hence the augmentation point you made earlier. >> Absolutely, absolutely. >> Joe how 'about you? What do you look at? What are you excited about? >> I think the coolest thing at the moment right now is chat bots. Like to be able, like to have voice be able to speak with you in natural language, to do that, I think that's pretty innovative, right?
And I do think that eventually, for the average user, not for techies like me, but for the average user, I think keyboards are going to be a thing of the past. I think we're going to communicate with computers through voice and I think this is the very very beginning of that and it's an incredible innovation. >> Neil? >> Well, I think we all have myopia here. We're all thinking about commercial applications. Big, big things are happening with AI in the intelligence community, in military, the defense industry, in all sorts of things. Meteorology. And that's where, well, hopefully not on an every day basis with military, you really see the effect of this. But I was involved in a project a couple of years ago where we were developing AI software to detect artillery pieces in terrain from satellite imagery. I don't have to tell you what country that was. I think you can probably figure that one out, right? But there are legions of people in many many companies that are involved in that industry. So if you're talking about the dollars spent on AI, I think the stuff that we do in our industries is probably fairly small. >> Well it reminds me of an application I actually thought was interesting about AI related to that, AI being applied to removing mines from war zones. >> Why not? >> Which is not a bad thing for a whole lot of people. Judith what do you look at? >> So I'm looking at things like being able to have pre-trained data sets in specific solution areas. I think that that's something that's coming. Also the ability to really be able to have a machine assist you in selecting the right algorithms based on what your data looks like and the problems you're trying to solve. Some of the things that data scientists still spend a lot of their time on, but can be augmented with some, basically we have to move to levels of abstraction before this becomes truly ubiquitous across many different areas. >> Peter: Jennifer? >> So I'm going to say computer vision. >> Computer vision? >> Computer vision. So computer vision ranges from image recognition to be able to say what content is in the image. Is it a dog, is it a cat, is it a blueberry muffin? Like a sort of popular post out there where it's like a blueberry muffin versus like I think a chihuahua and then it compares the two. And can the AI really actually detect the difference, right? So I think that's really where a lot of people who are in this space of being in both the AI space as well as data science are looking to for the new innovations. I think, for instance, Cloud Vision, I think that's what Google still calls it. The Vision API they've released in beta allows you to actually use an API to send your image and then have it be recognized, right, by their API. There's another startup in New York called Clarifai that also does a similar thing, as well as, you know, Amazon has their Rekognition platform as well. So I think in a, from images being able to detect what's in the content as well as from videos, being able to say things like how many people are entering a frame? How many people enter the store? Not having to actually go look at it and count it, but having a computer actually tally that information for you, right? >> There's actually an extra piece to that. So if I have a picture of a stop sign, and I'm an automated car, and is it a picture on the back of a bus of a stop sign, or is it a real stop sign? So that's going to be one of the complications. >> Doesn't matter to a New York City cab driver. How 'about you Jim?
>> Probably not. (laughs) >> Hottest thing in AI is Generative Adversarial Networks, GANs. What's hot about that, well, I'll be very quick, most AI, most deep learning, machine learning is analytical, it's distilling or inferring insights from the data. Generative takes that same algorithmic basis but to build stuff. In other words, to create realistic looking photographs, to compose music, to build CAD CAM models essentially that can be constructed on 3D printers. So GANs, a huge research focus all around the world, are often, increasingly, used for natural language generation. In other words it's institutionalizing or having a foundation for nailing the Turing test every single time, building something with machines that looks like it was constructed by a human and doing it over and over again to fool humans. I mean you can imagine the fraud potential. But you can also imagine just the sheer, like it's going to shape the world, GANs. >> All right so I'm going to say one thing, and then we're going to ask if anybody in the audience has an idea. So the thing that I find interesting is traditional programs, or when you tell a machine to do something you don't need incentives. When you tell a human being something, you have to provide incentives. Like how do you get someone to actually read the text. And this whole question of elements within AI that incorporate incentives as a way of trying to guide human behavior is absolutely fascinating to me. Whether it's gamification, or even some things we're thinking about with blockchain and bitcoins and related types of stuff. To my mind that's going to have an enormous impact, some good, some bad. Anybody in the audience? I don't want to lose everybody here. What do you think sir? And I'll try to do my best to repeat it. Oh we have a mic. >> So my question's about, okay, so the question's pretty much about what Stephanie's talking about which is human-in-the-loop training, right? I come from a computer vision background. That's the problem, we need millions of images trained, we need humans to do that. And that's like you know, the workforce is essentially people that aren't necessarily part of the AI community, they're people that are just able to use that data and analyze the data and label that data. That's something that I think is a big problem everyone in the computer vision industry at least faces. I was wondering-- >> So again, but the problem is that, is the difficulty of methodologically bringing together people who understand it and people who, people who have domain expertise, people who have algorithm expertise and working together? >> I think the expertise issue comes in healthcare, right? In healthcare you need experts to be labeling your images. With contextual information, where essentially augmented reality applications coming in, you have ARKit and everything coming out, but there is a lack of context based intelligence. And all of that comes through training images, and all of that requires people to do it. And that's kind of like the foundational basis of AI coming forward is not necessarily an algorithm, right? It's how well the data is labeled. Who's doing the labeling and how do we ensure that it happens? >> Great question. So for the panel. So if you think about it, a consultant talks about being on the bench. How much time are they going to have to spend on trying to develop additional business? How much time should we set aside for executives to help train some of the assistants?
>> I think the key is to think of the problem a different way: you could have people manually label data and that's one way to solve the problem. But you can also look at what is the natural workflow of that executive, or that individual? And is there a way to gather that context automatically using AI, right? And if you can do that, it's similar to what we do in our product, we observe how someone is analyzing the data and from those observations we can actually create the metadata that then trains the system in a particular direction. But you have to think about solving the problem differently, of finding the workflow that then you can feed into, to make this labeling easy without the human really realizing that they're labeling the data. >> Peter: Anybody else? >> I'll just add to what Stephanie said, so in the IoT applications, all those sensory modalities, the computer vision, the speech recognition, all that, that's all potential training data. So it cross checks against all the other models that are processing all the other data coming from that device. So that the natural language understanding can be reality checked against the images that the person happens to be commenting upon, or the scene in which they're embedded, so yeah, the data's embedded-- >> I don't think we're, we're not at the stage yet where this is easy. It's going to take time before we do start doing the pre-training of some of these details so that it goes faster, but right now, there're not that many shortcuts. >> Go ahead Joe. >> Sorry, so a couple things. So one is like, I was just caught up on your incentivizing programs to be more efficient, like humans. You know Ethereum, which is a blockchain, has this notion, this concept of gas. Where like as the process becomes more efficient it costs less to actually run, right? It costs less ether, right? So it actually is kind of, the machine is actually incentivized and you don't really know what it's going to cost until the machine processes it, right? So there is like some notion of that there. But as far as like vision, like training the machine for computer vision, I think it's through adoption and crowdsourcing, so as people start using it more they're going to be adding more pictures. Very very organically. And then the machines will be trained and right now it's a very small handful doing it, and it's very proactive by the Googles and the Facebooks and all of that. But as we start using it, as they start looking at my images and Jim's and Jen's images, it's going to keep getting smarter and smarter through adoption and through a very organic process. >> So Neil, let me ask you a question. Who owns the value that's generated as a consequence of all these people ultimately contributing their insight and intelligence into these systems? >> Well, to a certain extent the people who are contributing the insight own nothing because the systems collect their actions and the things they do and then that data doesn't belong to them, it belongs to whoever collected it or whoever's going to do something with it. But the other thing, getting back to the medical stuff. It's not enough to say that the systems, people will do the right thing, because a lot of them are not motivated to do the right thing. The whole grant thing, the whole oh my god I'm not going to go against the senior professor.
A lot of these, I knew a guy who was a doctor at the University of Pittsburgh and they were doing a clinical study on the tubes that they put in little kids' ears who have ear infections, right? And-- >> Google it! Who helps out? >> Anyway, I forget the exact thing, but he came out and said that the principal investigator lied when he made the presentation, that it should be this, I forget which way it went. He was fired from his position at Pittsburgh and he has never worked as a doctor again. 'Cause he went against the senior line of authority. He was-- >> Another question back here? >> Man: Yes, Mark Turner has a question. >> Not a question, just want to piggyback on what you're saying about the transformation, maybe in healthcare, of black and white images into color images in the case of sonograms and ultrasound and mammograms, you see that happening using AI? You see that being, I mean it's already happening, do you see it moving forward in that kind of way? I mean, talk more about that, about you know, AI and black and white images being used and they can be transformed, they can be made into color images so you can see things better, doctors can perform better operations. >> So I'm sorry, but could you summarize it down? What's the question? >> I had a lot of students, they're interested in the cross-pollination between AI and say the medical community as far as things like ultrasound and sonograms and mammograms and how you can literally take a black and white image and it can, using algorithms and stuff, be made into color images that can help doctors better do the work that they've already been doing, just do it better. You touched on it for like 30 seconds. >> So how AI can be used to actually add information in a way that's not necessarily invasive but ultimately improves how someone might respond to it or use it, yes? Related? I've also got something to say about medical images in a second, any of you guys want to, go ahead Jennifer. >> Yeah, so for one thing, you know and it kind of goes back to what we were talking about before. When we look at for instance scans, like at some point I was looking at CT scans, right, for lung cancer nodules. In order for me, who doesn't have a medical background, to identify where the nodule is, of course, a doctor actually had to go in and specify which slice of the scan had the nodule and where exactly it is, so it's on both the slice level as well as, within that 2D image, where it's located and the size of it. So the beauty of things like AI is that ultimately right now a radiologist has to look at every slice and actually identify this manually, right? The goal of course would be that one day we wouldn't have to have someone look at every one of the, like, 300 slices usually, and be able to identify it in a much more automated way. And I think the reality is we're not going to get something where it's going to be 100%. And with anything we do in the real world it's always like a 95% chance of it being accurate. So I think it's finding that in-between of where, what's the threshold that we want to use to be able to say that this is, definitively say, a lung cancer nodule or not. I think the other thing to think about is in terms of how they're using other information. What they might use is, for instance, to say like you know, based on other characteristics of the person's health, they might use that as sort of a grading, right? So you know, how dark or how light something is, identify maybe in that region, the prevalence of that specific variable.
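A hedged sketch of the slice-triage idea Jennifer describes: score all ~300 slices of a scan automatically so a radiologist starts with the few flagged ones. Everything here is illustrative, and the brightness heuristic is a deliberately naive stand-in for the score a trained model would produce:

```python
import numpy as np

def flag_slices(scan: np.ndarray, threshold: float = 0.8) -> list[int]:
    """Return indices of slices whose peak intensity suggests a candidate
    region, so a reviewer sees a handful of slices instead of ~300.
    A real system would swap this heuristic for a trained CNN's score."""
    scores = scan.reshape(scan.shape[0], -1).max(axis=1)
    return [i for i, s in enumerate(scores) if s > threshold]

# Toy volume: 300 slices of 64x64 normalized intensities.
rng = np.random.default_rng(0)
scan = rng.random((300, 64, 64)) * 0.7
scan[142] += 0.3  # plant one bright region to stand in for a nodule
print(flag_slices(scan))  # -> [142]
```

The threshold is exactly the trade-off she raises: set it low and you approach 100% recall but flag too many slices; set it high and you miss real nodules, which is why the 95%-style operating point has to be chosen deliberately.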
So that's usually how they integrate that information into something that's already existing in the computer vision sense. I think that's, the difficulty with this of course, is being able to identify which variables were introduced into data that does exist. >> So I'll make two quick observations on this then I'll go to the next question. One is radiologists have historically been some of the highest paid physicians within the medical community partly because they don't have to be particularly clinical. They don't have to spend a lot of time with patients. They tend to spend time with doctors which means they can do a lot of work in a little bit of time, and charge a fair amount of money. As we start to introduce some of these technologies that allow us to from a machine standpoint actually make diagnoses based on those images, I find it fascinating that you now see television ads promoting the role that the radiologist plays in clinical medicine. It's kind of an interesting response. >> It's also disruptive as I'm seeing more and more studies showing that deep learning models processing images, ultrasounds and so forth are getting as accurate as many of the best radiologists. >> That's the point! >> Detecting cancer. >> Now radiologists are saying oh look, we do this great thing in terms of interacting with the patients, which they never have, because they're being disintermediated. The second thing that I'll note is one of my favorite examples of that, if I got it right, is looking at the images, the deep space images that come out of Hubble. Where they're taking data from thousands, maybe even millions of images and combining it together in interesting ways you can actually see depth. You can actually move through to a very very small scale a system that's 150, well maybe that, can't be that much, maybe six billion light years away. Fascinating stuff. All right so let me go to the last question here, and then I'm going to close it down, then we can have something to drink. What are the hottest, oh I'm sorry, question? >> Yes, hi, my name's George, I'm with Blue Talon. You asked earlier the question, what's the hottest thing in the Edge and AI? I would say that it's security. It seems to me that before you can empower agency you need to be able to authorize what they can act on, how they can act on, who they can act on. So it seems if you're going to move from very distributed data at the Edge and analytics at the Edge, there has to be security similarly done at the Edge. And I saw (speaking faintly) slides that called out security as a key prerequisite and maybe Judith can comment, but I'm curious how security's going to evolve to meet this analytics at the Edge. >> Well, let me do that and I'll ask Jen to comment. The notion of agency is crucially important, slightly different from security, just so we're clear. And the basic idea here is historically folks have thought about moving data or they thought about moving application function, now we are thinking about moving authority. So as you said. That's not necessarily, that's not really a security question, but this has been a problem that's been of concern in a number of different domains. How do we move authority with the resources? And that's really what informs the whole agency process. But with that said, Jim. >> Yeah actually I'll, yeah, thank you for bringing up security, so identity is the foundation of security. Strong identity, multifactor, face recognition, biometrics and so forth.
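A minimal sketch of what "moving authority" with an agent might look like: a tamper-evident, expiring assertion in which a user vouches for an agent. The field names and the symmetric key are hypothetical; a real system would use public-key signatures, PKI, or SAML-style federation of the kind Jim goes on to describe, so relying parties can verify without sharing secrets:

```python
import hashlib, hmac, json, time

SECRET = b"users-device-private-secret"  # stand-in for a real private key

def issue_delegation(user: str, agent: str, scopes: list, ttl_s: int = 3600):
    """User vouches for an agent: returns an expiring, signed assertion."""
    claims = {"iss": user, "sub": agent, "scopes": scopes,
              "exp": int(time.time()) + ttl_s}
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}

def verify(assertion) -> bool:
    """Any change to the claims, or expiry, invalidates the assertion."""
    payload = json.dumps(assertion["claims"], sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(assertion["sig"], expected)
            and assertion["claims"]["exp"] > time.time())

token = issue_delegation("jim", "shopping-agent-7", ["purchase:books"])
print(verify(token))  # True until it expires or is tampered with
```

The scopes field is where the later point about authorization and context comes in: being the right person yesterday doesn't authorize the agent in a different context today.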
Clearly AI, machine learning, deep learning are powering a new era of biometrics and you know it's behavioral metrics and so forth that's organic to people's use of devices and so forth. You know getting to the point that Peter was raising is important, agency! Systems of agency. Your agent, you have to, you as a human being should be vouching in a secure, tamper proof way, your identity should be vouching for the identity of some agent, physical or virtual, that does stuff on your behalf. How can that, how should that be managed within this increasingly distributed IoT fabric? Well a lot of that's been worked out. It all runs through webs of trust, public key infrastructure, formats and you know SAML for single sign-on and so forth. It's all about assertion, strong assertions and vouching. I mean there's the whole workflows of things. Back in the ancient days when I was actually a PKI analyst three analyst firms ago, I got deep into all the guts of all those federation agreements, something like that has to be IoT scalable to enable systems of agency to be truly fluid. So we can vouch for our agents wherever they happen to be. We're going to keep on having as human beings agents all over creation, we're not even going to be aware of everywhere that our agents are, but our identity-- >> It's not just-- >> Our identity has to follow. >> But it's not just identity, it's also authorization and context. >> Permissioning, of course. >> So I may be the right person to do something yesterday, but I'm not authorized to do it in another context in another application. >> Role based permissioning, yeah. Or persona based. >> That's right. >> I agree. >> And obviously it's going to be interesting to see the role that blockchain or its follow-on technology is going to play here. Okay so let me throw one more question out. What are the hottest applications of AI at the Edge? We've talked about a number of them, does anybody want to add something that hasn't been talked about? Or do you want to get a beer? (people laughing) Stephanie, you raised your hand first. >> I was going to go, I bring something mundane to the table actually because I think one of the most exciting innovations with IoT and AI are actually simple things, like the City of San Diego is rolling out 3200 automated street lights that will actually help you find a parking space, reduce the amount of emissions into the atmosphere, so it has some environmental change, positive environmental change impact. I mean, it's street lights, it's not like a, it's not the medical industry, it doesn't look like a life changing innovation, and yet if we automate streetlights and we manage our energy better, and maybe they can flicker on and off if there's a parking space there for you, that's a significant impact on everyone's life. >> And dramatically suppress the impact of backseat driving! >> (laughs) Exactly. >> Joe what were you saying? >> I was just going to say you know there's already the technology out there where you can put a camera on a drone with machine learning within an artificial intelligence within it, and it can look at buildings and determine whether there's rusty pipes and cracks in cement and leaky roofs and all of those things. And that's all based on artificial intelligence. And I think if you can do that, to be able to look at an x-ray and determine if there's a tumor there is not out of the realm of possibility, right? >> Neil? >> I agree with both of them, that's what I meant about external kind of applications.
Instead of figuring out what to sell our customers. Which is most of what we hear. I just, I think all of those things are eminently doable. And boy, street lights that help you find a parking place, that's brilliant, right? >> Simple! >> It improves your life more than, I dunno, something I used on the internet recently, but I think it's great! That's, I'd like to see a thousand things like that. >> Peter: Jim? >> Yeah, building on what Stephanie and Neil were saying, it's ambient intelligence built into everything to enable fine-grained microclimate awareness of all of us as human beings moving through the world. And enable reading of every microclimate in buildings. In other words, you know you have sensors on your body that are always detecting the heat, the humidity, the level of pollution or whatever in every environment that you're in or that you might be likely to move into fairly soon and either it can help give you guidance in real time about where to avoid, or give that environment guidance about how to adjust itself to your, like the lighting or whatever it might be to your specific requirements. And you know when you have a room like this, full of other human beings, there has to be some negotiated settlement. Some will find it too hot, some will find it too cold or whatever but I think that is fundamental in terms of reshaping the sheer quality of experience of most of our lived habitats on the planet potentially. That's really the Edge analytics application that depends on everybody having, being fully equipped with a personal area network of sensors that's communicating into the cloud. >> Jennifer? >> So I think, what's really interesting about it is being able to utilize the technology we do have, it's a lot cheaper now to have a lot of these ways of measuring that we didn't have before. And whether or not engineers can then leverage what we have as ways to measure things and then of course then you need people like data scientists to build the right model. So you can collect all this data, but if you don't build the right model that identifies these patterns then all that data's just collected and it just sits in a repository. So without having the models that support patterns that are actually in the data, you're not going to find a better way of being able to find insights in the data itself. So I think what will be really interesting is to see how existing technology is leveraged, to collect data and then how that's actually modeled, as well as to be able to see how technology's going to now develop from where it is now, to being able to either collect things more sensitively or in the case of say for instance if you're dealing with like how people move, whether we can build things that we can then use to measure how we move, right? Like how we move every day and then being able to model that in a way that is actually going to give us better insights in things like healthcare and just maybe even just our behaviors. >> Peter: Judith? >> So, I think we also have to look at it from a peer-to-peer perspective. So I may be able to get some data from one thing at the Edge, but then all those Edge devices, sensors or whatever, they all have to interact with each other because we don't live, we may, in our business lives, act in silos, but in the real world when you look at things like sensors and devices it's how they react with each other on a peer-to-peer basis. >> All right, before I invite John up, I want to say, I'll say what my thing is, and it's not the hottest.
It's the one I hate the most. I hate AI generated music. (people laughing) Hate it. All right, I want to thank all the panelists, every single person, some great commentary, great observations. I want to thank you very much. I want to thank everybody that joined. John, in a second you'll kind of announce who's the big winner. But the one thing I want to do is, as I was listening, I learned a lot from everybody, but I want to call out the one comment that I think we all need to remember, and I'm going to give you the award Stephanie. And that is, increasingly we have to remember that the best AI is probably AI that we don't even know is working on our behalf. The flip side of that is all of us have to be very cognizant of the idea that AI is acting on our behalf and we may not know it. So, John why don't you come on up. Who won the, whatever it's called, the raffle? >> You won. >> Thank you! >> How 'about a round of applause for the great panel. (audience applauding) Okay, we had you put the business cards in the basket, we're going to have that brought up. We're going to have two raffle gifts, some nice Bose headsets and a speaker, a Bluetooth speaker. Got to wait for that. I just want to say thank you for coming and for the folks watching, this is our fifth year doing our own event called Big Data NYC which is really an extension of the landscape beyond the Big Data world, that's Cloud and AI and IoT and other great things happening, and great experts and influencers and analysts here. Thanks for sharing your opinion. Really appreciate you taking the time to come out and share your data and your knowledge, appreciate it. Thank you. Where's the? >> Sam's right in front of you. >> There's the thing, okay. Got to be present to win. We saw some people sneaking out the back door to go to a dinner. >> First prize first. >> Okay first prize is the Bose headset. >> Bluetooth and noise canceling. >> I won't look, Sam you got to hold it down, I can see the cards. >> All right. >> Stephanie you won! (Stephanie laughing) Okay, Sawny Cox, Sawny Allie Cox? (audience applauding) Yay look at that! He's here! The bar's open so help yourself, but we got one more. >> Congratulations. Picture right here. >> Hold that, I saw you. Wake up a little bit. Okay, all right. Next one is, my kids love this. This is great, great for the beach, great for everything, a portable speaker, great gift. >> What is it? >> Portable speaker. >> It is a portable speaker, it's pretty awesome. >> Oh you grabbed mine. >> Oh that's one of our guys. >> (laughing) But who was it? >> Can't be related! Ava, Ava, Ava. Okay Gene Penesko (audience applauding) Hey! He came in! All right look at that, the timing's great. >> Another one? (people laughing) >> Hey thanks everybody, enjoy the night, thank Peter Burris, head of research for SiliconANGLE, Wikibon, and the great guests and influencers and friends. And you guys in the community for coming. Thanks for watching and thanks for coming. Enjoy the party and some drinks, and that's it for the influencer panel and analyst discussion. Thank you. (logo music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Judith | PERSON | 0.99+ |
Jennifer | PERSON | 0.99+ |
Jim | PERSON | 0.99+ |
Neil | PERSON | 0.99+ |
Stephanie McReynolds | PERSON | 0.99+ |
Jack | PERSON | 0.99+ |
2001 | DATE | 0.99+ |
Marc Andreessen | PERSON | 0.99+ |
Jim Kobielus | PERSON | 0.99+ |
Jennifer Shin | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Joe Caserta | PERSON | 0.99+ |
Suzie Welch | PERSON | 0.99+ |
Joe | PERSON | 0.99+ |
David Floyer | PERSON | 0.99+ |
Peter | PERSON | 0.99+ |
Stephanie | PERSON | 0.99+ |
Jen | PERSON | 0.99+ |
Neil Raden | PERSON | 0.99+ |
Mark Turner | PERSON | 0.99+ |
Judith Hurwitz | PERSON | 0.99+ |
John | PERSON | 0.99+ |
Elysian | ORGANIZATION | 0.99+ |
Uber | ORGANIZATION | 0.99+ |
Qualcomm | ORGANIZATION | 0.99+ |
Peter Burris | PERSON | 0.99+ |
2017 | DATE | 0.99+ |
Honeywell | ORGANIZATION | 0.99+ |
Apple | ORGANIZATION | 0.99+ |
Derek Sivers | PERSON | 0.99+ |
New York | LOCATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
New York City | LOCATION | 0.99+ |
1998 | DATE | 0.99+ |
Day Two Wrap | Veritas Vision 2017
>> Narrator: Live from Las Vegas it's the Cube. Covering Veritas Vision 2017. Brought to you by Veritas. >> Welcome back to Las Vegas everybody. This is the wrap for Veritas Vision 2017. This is the Cube, the leader in live tech coverage, I'm Dave Vellante with Stu Miniman. And Stu, two days where we're witnessing the evolution, transformation of Veritas. Veritas used to be the gold standard for what wasn't known at the time as software-defined, but just software function to deliver storage capabilities, no hardware agenda, and now you're seeing investment under the leadership of new management. Some innovation, a cycle that's quite rapid. It's hard to tell how much of that is really taking shape in the customer base. Seems like the channel, partners are picking up on it. Customers are still sort of trying to figure out how to move beyond their existing legacy situation; it's like Keith Townsend says, the vendor community tends to move at the speed of the CIO. It's a great quote. But overall, I think very good show. Some surprises here in terms of specifically the breadth of the Veritas portfolio, not just a backup company. Really focused on data management, focused on information management which obviously is relevant in the digital economy. What were your takeaways? >> So Dave, the big strategy is the 360 data management. And I think one of the things we teased out here is first of all, nobody thinks the cloud is simple. Multicloud, where customers are, and when you dig into it, and what Veritas has learned in the last year, is that there's a lot of work to be done. Where are the deeper integrations that they need to have? There's different requirements from the different partners here. You see Microsoft, the top-level sponsor. Russinovich up on stage, giving kind of his usual hybrid cloud with a lot of open source pitch there, but seems a good fit from the customers and partners that we talked to here to say Microsoft aligns well with what Veritas is doing. Amazon, big player here. A lot of integration is happening behind the scenes to make sure that Veritas can work there. And then you've got Google of course, big focus around data, good to see where Veritas is going. We had a nice conversation with Google. Google seems very open on a lot of these, not as much focus on some of the functionality that Veritas has, so it's a good natural fit, and then IBM and Oracle kind of rounding out the big players here. The thing I've come in, I think every show I've gone to this year Dave, is where do companies that have been around for more than a couple of years fit in this multicloud world, and absolutely that's where the puck's going, as Bill Coleman said, that's where they're betting the company and putting it forward, and we wondered coming in, would it be like ah, yeah, this is NetBackup and the Veritas Foundation Suite with a new coat of paint on it? And no, I mean they really brought in a lot of new management team, sure there's engineers here with a lot of expertise and experience to build on, to know how to do this, but I was pretty impressed with what I saw this week Dave. >> So no hardware agenda is evolving to no cloud agenda. That's one of the things we learned here and we had a good discussion. Got a little bit awkward at times but good discussion about why Veritas relative to the other players here.
And what the answer we got back, which we had to tease out a little bit, was essentially the upstart guys, the Rubriks, the Cohesitys, to a certain extent Zerto, I think they tried to put Veeam in that category, we'll come back to Veeam, it's kind of interesting. Maybe not big enough to deliver on that multicloud vision. And they're really not even trying. Cohesity and Rubrik, I don't know. >> They've added a lot of cloud recently, actually Rubrik's been doing it for a while, Cohesity definitely seen there. They understand that cloud, but I think what maybe I'd say Dave, they tend to start from an on premises piece as opposed to, as you say, this Veritas strategy is it doesn't matter, and with many of the players, right, where is their natural gravity? Is it on premises or is it in the public cloud? And Nutanix, they partner with Google, they're doing the cloud. But absolutely, most of their >> Dave: They make more money. >> Stu: Most of their revenue is, you know, is found there. >> So the upstarts, I kind of buy the Veritas argument that they maybe don't have the gravitas and the heft to attack that multicloud other than pick at it and grow, and they'll do hundreds of millions of dollars in revenue and maybe get to a billion and have a great exit. I think that'll happen. And then the other guys, the big guys, HPE, Dell EMC, IBM, they certainly have the capabilities to do that. But is it going to be the main focus of those companies? HPE maybe. We'll see. HPE and Veeam are an interesting partnership. My information suggests that Veeam is driving many tens of millions of dollars through Hewlett Packard Enterprise now that the Micro Focus deal has been done and they got rid of Data Protector. IBM, they're kind of re-invigorating the storage business, data protection is part of that. Dell EMC is I think challenged to invest. They can't invest as much as they used to, certainly not in acquisitions. The acquisition pipeline is basically dried up. >> Stu: Dave, Dave, look at, Data Domain was a great acquisition by EMC at the time, now under Dell EMC. I mean, you're probably closer to it than me. I don't hear a strong cloud message coming out of that group when we talk about backup and the like. Dell corporate, of course they've got Microsoft partnerships, Veeam has Amazon partnerships, but it very much is tied to appliances or arrays or servers as the main piece, it's not a software message, which is where Veritas is. >> Dave: If you look at Dell EMC's acquisitions recently, Isilon a couple billion, two and a half billion I think, Data Domain two and a half billion, DSSD a billion, which really hasn't turned into much at this point in time anyway. XtremIO, not sure what they paid, but you know you're hearing ebbs and flows on that, but my point is that is how, under Joe Tucci, EMC innovated. They would incrementally add on to their existing platforms. You were there. You saw it. And then they would invest in what Joe Tucci used to call tuck-in acquisitions. And all that was well and good and they were able to sort of keep, not sort of, they were able to keep pace with the industry. That's basically stopped. That strategy. We've seen cuts and layoffs but still a financial windfall I think is coming for Dell. And VMware is the secret sauce there, so we don't have to dig into that too much, but my point is that services is going to be the lynchpin for that company in terms of attacking multicloud, services and VMware. So now you >> Stu: And Pivotal of course too. >> Dave: And Pivotal as well, that's right.
Great point. Now you come back to Veritas. Focused on that strategy of information management. Investing apparently in RND. Seemingly patient capital with Carlyle so you know me, I like to unpack the numbers. From what I can tell, my sources and got to do some more digging on this but when Veritas was acquired by Carlyle it was about 2.3 billion dollar company, wouldn't surprise me if on an income statement basis it's actually shrunk. It wouldn't surprise me at all. In fact, Bill Coleman kind of hinted to that. And especially if you start looking at rateable revenue models, maybe bookings could be up and I've heard numbers as high as 2.6, 2.7 billion but who knows. I've also heard now, the evaluation at the time of the acquisition was 7 billion and change. I've heard numbers as high as 14, 15 billion now, maybe a little inflated but I think easily over ten. And I think this company has an opportunity to get to three billion, get the evaluation up to 15, maybe even 20 billion. Big win for the private equity investors and the key to that, I think, is going to be a continuous investment. Go to market that aligns to those new areas that they're talking about and very importantly the ecosystem. I want to see this thing start exploding. The big highlights here were the cloud guys. What else would you highlight? You know, you walk around the shows a lot of smaller partners here Really would like to see that ecosystem grow. That's something that we're going to watch. And the audience grow. I think this show is up from last year next year I believe it's in Las Vegas again moving to the Cosmopolitan little bit better venue, bigger venue we'll see if they can get up to where the big boys go over time but overall I'd say pretty good second year for Veritas Vision. >> Yeah, you know Dave, when you look at the different areas Veritas has a full suite of software to find storage. The analogy I've used all the time storage industry is a knife fight in a dark alley. So you've got some big players out there that all have their software defined storage messaging out there of course Veritas would say they all have the hardware agenda. There's some truth to that but Veritas also has to partner with a bunch of these players to get there so where did they get the reach, how does the channel help them punch above their white, the differences there a two and a half, 2.6 billion dollar run rate company, revenue company that is private. So you know, they're trusted because they have history. They're not a small startup can this innovation and all the new team members come in and definitely the cloud piece is pretty interesting, Dave we see, we'll be back at Reinvent with the Cube and Veritas will have a presence there. Amazon, huge ecosystem, where do they play where do they show up, data, we've said so many times on here it becomes repetitive data is the new oil and customers need to take advantage of them. Can Veritas' message get them at the table and in a conversation where so much, it's about infrastructure and I love the message here at the show. It's not infrastructure technology it's information technology and we want to put a highlight on that so like the message, like where it's going, here are the customers but can they get at the table when there's so many different there's the startups, there's the big players everybody pulling at where the customers are and the GDPR was an interesting angle 'cause it was the crispest, the most crisp conversation I've heard on GDPR. 
I know you've been talking about it for at least the last six months on some Cube interviews, and I've done a number of interviews, but it really crystallized for me this week at the show. >> I'm glad you mentioned that, because I've done a couple of shows where GDPR has come up and I was like, okay, yeah, we get it. It's coming. It's nasty. How are you going to help me again? And I think Veritas did a really good job this week of saying, look, we are here to help. We're going to start with discovery, and they sort of laid out the journey, and I think they made a good case for their portfolio aligning well with solving that problem. So this could be a nice little kicker there. One of the things I wanted to sort of riff on a little bit was the TAM, the data protection space. It reminds me of when ServiceNow went public. I know there was a story about a Gartner analyst who was very negative on it, saying help desk is a dead business, and then Frank Slootman did a masterful job of expanding the TAM, explaining that TAM, and guiding the company to a massive opportunity. And I see a similar dynamic here. On the one hand I say, wow, there are a lot of companies in this data protection space, even though it's exploding, with a lot of VC money coming in. You're seeing new entrants: Datrium now gets in the space, even though they're not just backup, that's not their primary business, and you certainly saw SimpliVity with what's kind of their specialty. But with guys like Datos IO and some of these new guys coming in, like we talked about, Rubrik, etc., there are a lot of players here. Is the market big enough to support those? Part of me says, ehh, I don't know, but then I think back to that ServiceNow example. I think the TAM is going to explode, because it's not about backup. And it's not even just about data protection. It is about information management, and I think Veritas got that right. What I like about their chances is they're big. They've got a big install base, I think their vision is right, and they don't have that cloud agenda. They're a pure software company, even though they do sell some appliances sometimes. And they've got what seemingly is good management. I'd like to see them attract even more management talent as they grow and as they start executing this, and as I say, the ecosystem has got to grow. >> Yeah, so Dave, IT has to deal with information governance. That's the defense they need to play. There's going to be money thrown at that. From some of the conversations we had this week, IT operations becomes one of those tailwinds that should lift companies like Veritas, letting them have further discussions and grow those budgets to become a much more important piece. >> Alright, good, Stu. Thank you. Good working with you again. It's been a long few weeks here, but we're at it again next week. The Cube is at Big Data NYC, which is done in conjunction with Strata in New York City. We've got a big party on Wednesday night. Actually, we've got a presentation: Peter Burris, Neil Raden, Jim Kobielus, and we've got a panel, talking about software eating the edge. That's on Wednesday at 37 Pillars. Tweet me at @dvellante; if you don't have an invitation I'll get you one, although I heard there was a waitlist last week, but we'll get you in, don't worry. And then we're also at Splunk next week. I'm going to be at .conf in DC. We've done .conf since, I think, 2011 was the first year we did it. >> And I'll be keeping a big eye on Microsoft Ignite next week, while we don't have the Cube there.
Obviously, pretty important things like Azure Stack are expected to roll out, and we've got so many shows, Dave. >> So the Cube, we love digital content, creating content and sharing it with you, our community. Follow @thecube, that handle, for the Cube gems; you'll see a bunch of videos. Go to thecube.net, that's where we host all the videos from all of our shows. And then siliconangle.com is where we write up our news and analysis of these events and news of the day, and of course wikibon.com is our research site. A lot of really good deep work going on there. So thanks for watching everybody. This is Dave Vellante with Stu Miniman. We're out from Veritas Vision 2017. We'll see you next time. (music)
SUMMARY :
Dave Vellante and Stu Miniman wrap up two days of Cube coverage at Veritas Vision 2017. They discuss Veritas' transition under new management to a 360 data management, multicloud strategy; its partnerships with Microsoft, Amazon, Google, IBM, and Oracle; how it stacks up against upstarts like Rubrik, Cohesity, and Veeam as well as the big infrastructure vendors; its financial trajectory under Carlyle ownership; and why GDPR and an expanding information management TAM could be tailwinds for the company.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
IBM | ORGANIZATION | 0.99+ |
Dave | PERSON | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Frank Slootman | PERSON | 0.99+ |
Veritas | ORGANIZATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
Neil Raden | PERSON | 0.99+ |
HPE | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
Bill Coleman | PERSON | 0.99+ |
EMC | ORGANIZATION | 0.99+ |
Jim Kobielus | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
three billion | QUANTITY | 0.99+ |
New York City | LOCATION | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
2.6, 2.7 billion | QUANTITY | 0.99+ |
Veritas' | ORGANIZATION | 0.99+ |
Wednesday night | DATE | 0.99+ |
Nutanix | ORGANIZATION | 0.99+ |
Peter Burris | PERSON | 0.99+ |
7 billion | QUANTITY | 0.99+ |
Stu | PERSON | 0.99+ |
Veeam | ORGANIZATION | 0.99+ |
last year | DATE | 0.99+ |
last week | DATE | 0.99+ |
next week | DATE | 0.99+ |
Wednesday | DATE | 0.99+ |
14, 15 billion | QUANTITY | 0.99+ |
Hewlett Packard Enterprise | ORGANIZATION | 0.99+ |
Dell EMC | ORGANIZATION | 0.99+ |
Keith Townsend | PERSON | 0.99+ |
20 billion | QUANTITY | 0.99+ |
Russinovich | PERSON | 0.99+ |
VMware | ORGANIZATION | 0.99+ |
thecube.net | OTHER | 0.99+ |
two and a half billion | QUANTITY | 0.99+ |
DC | LOCATION | 0.99+ |
Carlyle | ORGANIZATION | 0.99+ |
second year | QUANTITY | 0.99+ |
two days | QUANTITY | 0.99+ |
GDPR | TITLE | 0.99+ |
Joe Tucci | PERSON | 0.99+ |
Gartner | ORGANIZATION | 0.99+ |
two and a half billion | QUANTITY | 0.98+ |
Dell EMC | ORGANIZATION | 0.98+ |
Cube | COMMERCIAL_ITEM | 0.98+ |