Action Item, Graph Databases | April 13, 2018


 

>> Hi, I'm Peter Burris. Welcome to Wikibon's Action Item. (electronic music) Once again, we're broadcasting from our beautiful theCUBE Studios in Palo Alto, California. Here in the studio with me, George Gilbert, and remote, we have Neil Raden, Jim Kobielus, and David Floyer. Welcome, guys! >> Hey. >> Hi, there. >> We've got a really interesting topic today. We're going to be talking about graph databases, which probably just immediately turned off everybody. But we're actually not going to talk so much about it from a technology standpoint. We're really going to spend most of our time talking about it from the standpoint of the business problems that IT and technology are being asked to address, and the degree to which graph databases, in fact, can help us address those problems, and what do we need to do to actually address them. Human beings tend to think in terms of relationships of things to each other. So what the graph community talks about is graph-shaped problems. And by graph-shaped problem we might mean that someone owns something and someone owns something else, or someone shares an asset, or it could be any number of different things. But we tend to think in terms of things and the relationship that those things have to other things. Now, the relational model has been an extremely successful way of representing data for a lot of different applications over the course of the last 30 years, and it's not likely to go away. But the question is, do these graph-shaped problems actually lend themselves to a new technology that can work with relational technology to accelerate the rate at which we can address new problems, accelerate the performance of those new problems, and ensure the flexibility and plasticity that we need within the application set, so that we can consistently use this as a basis for going out and extending the quality of our applications as we take on even more complex problems in the future? So let's start here.
Jim Kobielus, when we think about graph databases, give us a little hint on the technology and where we are today. >> Yeah, well, graph databases have been around for quite a while in various forms, addressing various core-use cases such as social network analysis, recommendation engines, fraud detection, semantic search, and so on. The graph database technology is essentially very closely related to relational, but it's specialized to, when you think about it, Peter, the very heart of a graph-shaped business problem, the entity relationship diagram. And anybody who's studied databases has mastered, at least at a high level, entity relationship diagrams. The more complex these relationships grow among a growing range of entities, the more complex sort of the network structure becomes, in terms of linking them together at a logical level. So graph database technology was developed a while back to be able to support very complex graphs of entities and relationships; a lot of it's analytic. A lot of it's very focused on fast query, they call query traversal, among very large graphs, to find quick answers to questions that might involve who owns which products that they bought at which stores in which cities and are serviced by which support contractors and have which connections or interrelationships with other products they may have bought from us and our partners, so forth and so on. When you have very complex questions of this sort, they lend themselves to graph modeling. And to some degree, to the extent that you need to perform very complex queries of this sort very rapidly, graph databases, and there's a wide range of those on the market, have been optimized for that. But we also have graph abstraction layers over RDBMSes and multi-model databases. You'll find them running in IBM's databases, or Microsoft Cosmos DB, and so forth. You don't need graph-specialized databases in order to do graph queries, in order to manipulate graphs.
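Jim's "query traversal" point can be made concrete with a minimal sketch in plain Python. This is not how a graph database is implemented internally, just an illustration of the multi-hop questions he describes, over a hypothetical customer/product dataset (all names are invented):

```python
from collections import deque

# A toy property graph as adjacency lists: node -> list of (relationship, node).
# Nodes and relationship names here are hypothetical, purely illustrative.
graph = {
    "alice":        [("BOUGHT", "phone"), ("FRIEND_OF", "bob")],
    "bob":          [("BOUGHT", "tablet")],
    "phone":        [("SOLD_AT", "store_sf"), ("SERVICED_BY", "acme_support")],
    "tablet":       [("SOLD_AT", "store_sf")],
    "store_sf":     [("LOCATED_IN", "san_francisco")],
    "acme_support": [],
    "san_francisco": [],
}

def traverse(start, max_hops):
    """Breadth-first traversal: every (hops, relationship, node) reachable
    from `start` within `max_hops` edges."""
    seen = {start}
    frontier = deque([(start, 0)])
    reached = []
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_hops:
            continue
        for rel, nxt in graph.get(node, []):
            reached.append((depth + 1, rel, nxt))
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))
    return reached

# "Which stores, products and support contractors is alice connected to
# within two hops?"
two_hops = traverse("alice", 2)
print(sorted(n for _, _, n in two_hops))
# → ['acme_support', 'bob', 'phone', 'store_sf', 'tablet']
```

A graph database answers this kind of question with an index-free hop per edge; doing the same in SQL typically means one self-join per hop, which is where the performance argument for graph-optimized engines comes from.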
That's the issue here. When does a specialized graph database serve your needs better than a non-graph-optimized but nonetheless graph-enabling database? That's the core question. >> So, Neil Raden, let's talk a little bit about the classes of business problems that could in fact be served by representing data utilizing a graph model. So these graph-shaped problems, independent of the underlying technology. Let's start there. What kinds of problems can business people start thinking about solving by thinking in terms of graphs of things and relationships amongst things? >> It all comes down to connectedness. That's the basis of a graph database, is how things are connected, either weakly or strongly. And these connected relationships can be very complicated. They can be based on very complex properties. A relational database is not based on connectedness at all. I'd like to say it's based on un-connectedness. And the whole idea in a relational database is that the intelligence about connectedness is buried in the predicate of a query. It's not in the database itself. So I don't know how overlaying graph abstractions on top of a relational database is a good idea. On the other hand, I don't know how stitching a graph database into your existing operation is going to work, either. We're going to have to see. But I can tell you that a major part of data science, machine learning, and AI is going to need to address the issue of causality, not just what's related to each other. And there's a lot of science behind using graphs to get at the causality problem. >> And we've seen, well, let's come back to that. I want to come back to that. But George Gilbert, we've kind of experienced a similar type of thing back in the '90s with the whole concept of object-oriented databases. They were represented as a way of re-conceiving data.
The problem was that they had to go from the concept all the way down to the physical thing, and they didn't seem to work. What happened? >> Well it turns out, the big argument was, with object-oriented databases, we can model anything that's so much richer, especially since we're programming with objects. And it turns out, though, that theoretically, especially at that time, you could model anything down at the physical level or even the logical level in a relational database, and so those code bases were able to handle sort of similar, both ends of the use cases, both ends of the spectrum. But now that we have such extreme demands on our data management, rather than look at a whole application or multiple applications even sharing a single relational database, like some of the big enterprise apps, we have workloads within apps like recommendation engines, or a knowledge graph, which explains the relationship between people, places, and things. Or digital twins, or mapping your IT infrastructure and applications, and how they all hold together. You could do that in a relational database, but in a graph database, you can organize it so that you can have really fast analysis of these structures. But, the trade-off is, you're going to be much more restricted in how you can update the stuff. >> Alright, so think about what happened, then, with some of the object-oriented technology. In the original object database world, the database was bound to the application, and the developer used the database to tell the application where to go find the data. >> George: Right. >> Relational technology allowed us not to tell the applications where to find things, but rather how to find things, and that was persisted, and was very successful for a long time. Object-oriented technologies, in many respects, went back to the idea that the developer had to be very concrete about telling the application where the data was, but we didn't want to do things that way.
Now, something's happened, David Floyer. One of the reasons why we had this challenge of representing data in a more abstract way across a lot of different forms without having it also being represented physically, and therefore a lot of different copies and a lot of different representations of the data which broke systems of record and everything else, was that the underlying technology was focused on just persisting data and not necessarily delivering it into these new types of databases, data models, et cetera. But Flash changes that, doesn't it? Can't we imagine a world in which we can have our data in Flash, which is a technology that's more focused on delivering data, and then having that data be delivered to a lot of different representations, including things like graph databases, graph models. Is that accurate? >> Absolutely. In a moment I'll take it even further. I think the first point is that when we were designing real-time applications, transactional applications, we were very constrained, indeed, by the amount of data that we could get to. So, as a database administrator, I used to have a rule that database developers could not issue more than 100 database calls. And the reason was that they could always do more than that, but the applications became very unstable and they became very difficult to maintain. The cost of maintenance went up a lot. The whole area of Flash allows us to do a number of things, and the area of UniGrid enables us to do a number of things very differently. So that we can, for example, share data and have many different views of it. We can use UniGrid to be able to bring far greater amounts of power, compute power, GPUs, et cetera, to bear on specific workloads.
I think the most useful thing to think about this is this type of architecture can be used to create systems of intelligence, where you have the traditional relational databases dealing with systems of record, and then you can have the AI systems, graph systems, all the other components there looking at the best way of providing data and making decisions in real time that can be fed back into the systems of record. >> Alright, alright. So let's-- >> George: I want to add to something on this. >> So, Neil, let me come back to you very quickly, sorry, George. Let me come back to Neil. I want to spend, go back to this question of what does a graph-shaped problem look like? Let's kind of run down it. We talked about AI, what about IoT, guys? Is IoT going to help us, is IoT going to drive this notion of looking at the world in terms of graphs more or less? What do you think, Neil? >> I don't know. I hadn't really thought about it, Peter, to tell you the truth. I think that one thing we leave out when we talk about graphs is we talk about, you know, nodes and edges and relationships and so forth, but you can also build a graph with very rich properties. And one thing you can get from a graph query that you can't get from a relational query, unless you write a careful predicate, is it can actually do some thinking for you. It can tell you something you don't know. And I think that's important. So, without being too specific about IoT, I have to say that, you know, streaming data and trying to relate it to other data, getting down to, very quickly, what's going on, root-cause analysis, I think graph would be very helpful. >> Great, and, Jim Kobielus, how about you? >> I think, yeah I think that IoT is tailor-made for, or I should say, graph modeling and graph databases are tailor-made for the IoT. Let me explain. I think the IoT, the graph is very much a metadata technology, it's expressing context in a connected universe.
Where the IoT is concerned it's all about connectivity, and so graphs, increasingly complex graphs of, say, individuals and the devices and the apps they use and locations and various contexts and so forth, these are increasingly graph-based. They're hierarchical and shifting and changing, and so in order to contextualize and personalize experience in a graph, in an IoT world, I think graph databases will be embedded in the very fabric of these environments. Microsoft has a strategy they announced about a year ago to build more of an intelligent edge around a distributed graph across all their offerings. So I think graphs will become more important in this era, undoubtedly. >> George, what do you think? Business problems? >> Business problems on IoT. The knowledge graph, the digital twin, both of these lend themselves to graph modeling, but to use the object-oriented databases as an example, where object modeling took off was in the applications server, where you had the ability to program, in object-oriented language, and that mapped to a relational database. And that is an option, not the only one, but it's an option for handling graph-model data like a digital twin or IT operations. >> Well that suggests that what we're thinking about here, if we talk about graph as a metadata, and I think, Neil, this partly answers the question that you had about why would anybody want to do this, that we're representing the output of a relational query as a node in a network of data types or data forms so that the data itself may still be relationally structured, but from an application standpoint, the output of that query is, itself, a thing that is then used within the application.
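Neil's earlier remark that a graph query "can tell you something you don't know," and his mention of root-cause analysis, are essentially about transitive inference: facts that are never stored explicitly fall out of traversal. A minimal sketch, with a hypothetical IT dependency graph (all component names are invented):

```python
# Transitive inference over a dependency graph: deriving facts that are not
# stored explicitly anywhere in the data. Component names are hypothetical.
depends_on = {
    "web_app": ["api_gateway"],
    "api_gateway": ["auth_service", "orders_service"],
    "orders_service": ["orders_db"],
}

def root_causes(failed, graph):
    """Everything `failed` transitively depends on: candidate root causes."""
    causes = set()
    stack = list(graph.get(failed, []))
    while stack:
        node = stack.pop()
        if node not in causes:
            causes.add(node)
            stack.extend(graph.get(node, []))
    return causes

# No record says "web_app can be broken by orders_db"; that fact is inferred
# by following the edges.
print(sorted(root_causes("web_app", depends_on)))
# → ['api_gateway', 'auth_service', 'orders_db', 'orders_service']
```

In SQL the same answer requires a recursive common table expression or repeated self-joins; in a graph model it is a single variable-length traversal, which is the "thinking for you" Neil is pointing at.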
>> But to expand on that, if you store it underneath, as fully normalized, in relational language, laid out so that there's no duplicates and things like that, it gives you much faster update performance, but the really complex queries, typical of graph data models, would be very, very slow. So, once we have, say, more in memory technology, or we can manage under the covers the sort of multiple representations of the data-- >> Well that's what Flash is going to allow us to do. >> Okay. >> What David Floyer just talked about. >> George: Okay. >> So we can have a single, persistent, physical storage >> Yeah. >> but it can be represented in a lot of different ways so that we avoid some of the problems that you're starting to raise. If we had to copy the data and have physical copies of the data on disc in a lot of different places then we would run into all kinds of consistency and update problems. It would probably break the model. We'd probably come back to the notion of a single data store. >> George: (mumbles) >> I want to move on here, guys. One really quick thing, David Floyer, I want to ask you. You mentioned that when you were a database administrator you put restrictions on how many database actions an application or transaction was allowed to generate. When we think about what a business is going to have to do to take advantage of this, are there any particular, like one thing that we need to think about? What's going to change within an IT organization to take advantage of graph databases? And we'll do the action items. >> Right. So the key here is the number of database calls can grow by a factor of probably a thousand times what it is now with what we can see coming as technologies over the next couple of years. >> So let me put that in context, David. That's a single transaction now generating a hundred thousand, >> Correct. >> a hundred thousand database calls. >> Well, access calls to data. >> Right. >> Whatever type of database.
And the important thing here is that a lot of that is going to move out, with the discussion of IoT, to where the data is coming in. Because the quicker you can do that, the earlier you can analyze that data, and you talked about IoT with possible different sources coming in, a simple one like traffic lights, for example. The traffic lights are being affected by the traffic lights around them within the city. Those sort of problems are ideal for this sort of graph database. And having all of that data locally and being processed locally in memory very, very close to where those sensors are, is going to be the key to developing solutions in this area. >> So, Neil, I've got one question from you, or one question for you. I'm going to put you on the spot. I just had a thought. And here's the thought. We talk a lot about, in some of the new technologies that could in fact be employed here, whether it be blockchain or even going back to SOA, but when we talk about what a system is going to have the authority to do about the idea of writing contracts that describe very, very discretely, what a system is or is not going to do. I have a feeling those contracts are not going to be written in relational terms. I have a feeling that, like most legal documents, they will be written in what looks more like graph terms. I'm extending that a little bit, but this has rights to do this at this point in time. Is that also, this notion of incorporating more contracts directly to how systems work, to assure that we have the appropriate authorities laid out. What do you think? Is that going to be easier or harder as a consequence of thinking in terms of these graph-shaped models? >> Boy, I don't know. Again, another thing I hadn't really thought about. But I do see some real gaps in thinking. Let me give you an analogy. OLAP databases came on the scene back in the '90s whatever. People in finance departments and whatever they loved OLAP. What they hated was the lack of scalability. 
And what we see now is scalability isn't a problem and OLAP solutions are suddenly bursting out all over the place. So I think there's a role for a mental model of how you model your data and how you use it that's different from the relational model. I think the relational model has prominence and has that advantage of, what's it called? Incumbency or something. But I think that the graph is going to show some real capabilities that people are lacking right now. I think some of them are at the very high end, things, like I said, getting to causality. But I think that graph theory itself is so much richer than the simple concept of graphs that's implemented in graph databases today. >> Yeah, I agree with that totally. Okay, let's do the action item round. Jim Kobielus, I want to start with you. Jim, action item. >> Yeah, for data professionals and analytic professionals, focus on what graphs can't do, cannot do, because you hear a lot of hyperbole. They're not useful for unstructured data or for machine learning in-database. They're not as useful for schema on read. What they are useful for is the same core thing that relational is useful for which is schema on write applied to structured data. Number one. Number two, and I'll be quick on this, focus on the core use cases that are already proven out for graph databases. We've already ticked them off here, social network analysis, recommendation engines, influencer analysis, semantic web. There's a rich range of mature use cases for which semantic techniques are suited. And then finally, and I'll be very quick here, bear in mind that relational databases have been supporting graph modeling, graph traversal and so forth, for quite some time, including pretty much all the core mature enterprise databases. If you're using those databases already, and they can perform graph traversals and so forth reasonably well for your intended application, stick with that.
No need to investigate the pure-play, graph-optimized databases on the market. However, that said, there's plenty of good ones, including Neptune, which AWS is coming out with. Please explore the other alternatives, but don't feel like you have to go to a graph database first and foremost. >> Alright. David Floyer, action item. >> Action item. You are going to need to move your data center and your applications from the traditional way of thinking about it, of handling things, which is sequential copies going around, usually taking two or three weeks. You're going to have to move towards a shared data model where the same set of data can have multiple views of it and multiple uses for multiple different types of databases. >> George Gilbert, action item. >> Okay, so when you're looking at, you have a graph-oriented problem, in other words the data is shaped like a graph, question is what type of database do you use? If you have really complex query and analysis use cases, probably best to use a graph database. If you have really complex update requirements, best to use a combination, perhaps of relational and graph or something like multi-model. We can learn from Facebook where, for years, they've built their source of truth for the social graph on a bunch of sharded MySQL databases with some layers on top. That's for analyzing the graph and doing graph searches. I'm sorry, for updating the graph and maintaining it and its integrity. But for reading the graph, they have an entirely different layer for comprehensive queries and manipulating and traversing all those relationships. So, you don't get a free lunch either way. You have to choose your sweet spots and the trade-offs associated with them. >> Alright, Neil Raden, action item. >> Well, first of all, I don't think the graph databases are subject to a lot of hype. I think it's just the opposite. I think they haven't gotten much hype at all. And maybe we're going to see that.
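George's Facebook point is a write-path versus read-path split: a normalized, duplicate-free store for updates, and a separate traversal-friendly structure for reads. The toy sketch below illustrates that trade-off only; it is not a description of Facebook's actual architecture, and all names are invented:

```python
# Toy write/read split for a social graph. Writes go to a normalized edge
# set (cheap, duplicate-free updates); reads come from a derived adjacency
# index rebuilt from it. Illustrative only, not any real system's design.
edges = set()  # normalized source of truth: (src, relation, dst) triples

def write_edge(src, rel, dst):
    # Fast, integrity-friendly update path: adding the same edge twice is a no-op.
    edges.add((src, rel, dst))

def build_read_index():
    # Traversal-friendly read path, derived from the source of truth.
    # Fast to query, but must be refreshed when the edge set changes.
    index = {}
    for src, rel, dst in edges:
        index.setdefault(src, []).append((rel, dst))
    return index

write_edge("alice", "FRIEND", "bob")
write_edge("bob", "FRIEND", "carol")
index = build_read_index()
print(index["alice"])
# → [('FRIEND', 'bob')]
```

The "no free lunch" George mentions shows up directly here: the normalized edge set is easy to keep consistent, while the adjacency index is fast to traverse but is a second representation that must be kept fresh.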
But another thing is, a fundamental difference when you're looking at a graph and a graph query, it uses something called open world reasoning. A relational database uses closed world reasoning. I'll give you an example. Country has capital city. Now you have in your graph that China has capital city Beijing, and China has capital city Peking. That doesn't violate the graph. The graph simply understands and intuits that they're different names for the same thing. Now, if you love to write correlated sub-queries for many, many different relationships, I'd say stick to your relational database. I see unique capabilities in a graph that would be difficult to implement in a relational database. >> Alright. Thank you very much, guys. Let's talk about what the action item is for all of us. This week we talked about graph databases. We do believe that they have enormous potential, but we first off have to draw a distinction between graph theory, which is a way of looking at the world and envisioning and conceptualizing solutions to problems, and graph database technology, which has the advantage, for certain classes of data models, of being able to very quickly both write and read data that is based on relationships and hierarchies and network structures that are difficult to represent in a normalized relational database manager. Ultimately, our expectation is that over the next few years, we're going to see an explosion in the class of business problems that lend themselves to a graph-modeling orientation. IoT is an example, very complex analytics systems will be an example, but it is not the only approach or the only way of doing things. But what is interesting, what is especially interesting, is over the last few years, a change in the underlying hardware technology is allowing us to utilize and expand the range of tools that we might use to support these new classes of applications.
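Circling back to Neil's capital-city example: in open-world reasoning, two assertions using different names for the same entity are not a contradiction. One way to sketch that is a tiny `sameAs` merge via union-find; the data and the one-capital check are purely illustrative:

```python
# Open-world aliasing: "China has capital Beijing" and "China has capital
# Peking" don't conflict if Beijing and Peking denote the same entity.
# Minimal union-find over sameAs assertions; data is illustrative only.
parent = {}

def find(x):
    """Return the canonical representative for entity x."""
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving keeps chains short
        x = parent[x]
    return x

def same_as(a, b):
    """Assert that a and b are two names for the same entity."""
    parent[find(a)] = find(b)

facts = {("China", "capital", "Beijing"), ("China", "capital", "Peking")}
same_as("Beijing", "Peking")

# Canonicalize objects before checking how many distinct capitals we have.
capitals = {find(o) for s, p, o in facts if (s, p) == ("China", "capital")}
print(len(capitals))
# → 1  (the two assertions collapse to a single entity)
```

A closed-world relational schema with a uniqueness constraint on capital would have to reject one of the two rows; the open-world view instead infers the alias, which is the distinction Neil is drawing.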
Specifically, the move to Flash allows us to sustain a single physical copy of data and then have that be represented in a lot of different ways to support a lot of different model forms and a lot of different application types, without undermining the fundamental consistency and integrity of the data itself. So that is going to allow us to utilize new types of technologies in ways that we haven't utilized before, because before, whether it was object-oriented technology or OLAP technology, there was always this problem of having to create new physical copies of data which led to enormous data administrative nightmares. So looking forward, the ability to use Flash as a basis for physically storing the data and delivering it out to a lot of different model and tool forms creates an opportunity for us to use technologies that, in fact, may more naturally map to the way that human beings think about things. Now, where is this likely to really play? We talked about IoT, we talked about other types of technologies. Where it's really likely to play is when the domain expertise of a business person is really pushing the envelope on the nature of the business problem. Historically, applications like accounting or whatnot, were very focused on highly stylized data models, things that didn't necessarily exist in the real world. You don't have double-entry bookkeeping running in the wild. You do have it in the legal code, but for some of the things that we want to build in the future, people, the devices they own, where they are, how they're doing things, that lends itself to a real-world experience and human beings tend to look at those using a graph orientation. 
And the expectations over the next few years, because of the changes in the physical technology, how we can store data, we will be able to utilize a new set of tools that are going to allow us to more quickly bring up applications, more naturally manage data associated with those applications, and, very important, utilize targeted technology in a broader set of complex application portfolios that are appropriate to solve that particular part of the problem, whether it's a recommendation engine or something else. Alright, so, once again, I want to thank the remote guys, Jim Kobielus, Neil Raden, and David Floyer. Thank you very much for being here. George Gilbert, you're in the studio with me. And, once again, I'm Peter Burris and you've been listening to Action Item. Thank you for joining us and we'll talk to you again soon. (electronic music)


Wikibon Predictions Webinar with Slides


 

(upbeat music) >> Hi, welcome to this year's Annual Wikibon Predictions. This is our 2018 version. Last year, we had a very successful webinar describing what we thought was going to happen in 2017 and beyond and we've assembled a team to do the same thing again this year. I'm very excited to be joined by the folks listed here on the screen. My name is Peter Burris. But with me is David Floyer, Jim Kobielus is remote. George Gilbert's here in our Palo Alto studio with me. Neil Raden is remote. David Vellante is here in the studio with me. And Stuart Miniman is back in our Marlboro office. So thank you analysts for attending and we look forward to a great teleconference today. Now what we're going to do over the course of the next 45 minutes or so is we're going to hit about 13 of the 22 predictions that we have for the coming year. So if you have additional questions, I want to reinforce this, if you have additional questions or things that don't get answered, if you're a client, give us a call. Reach out to us. We'll leave you with the contact information at the end of the session. But to start things off we just want to make sure that everybody understands where we're coming from. And let you know who is Wikibon. So Wikibon is a company that starts with the idea of what's important to research communities. Communities are where the action is. Community is where the change is happening. And community is where the trends are being established. And so we use digital technologies like theCUBE, CrowdChat and others to really ensure that we are surfacing the best ideas that are in a community and making them available to our clients so that they can be more successful in their endeavors. When we do that, our focus has always been on a very simple premise. And that is that we're moving to an era of digital business. For many people, digital business can mean virtually anything. For us it means something very specific.
To us, the difference between business and digital business is data. A digital business uses data to differentially create and keep a customer. So borrowing from what Peter Drucker said if the goal of business is to create customers and keep and sustain customers, the goal of digital business is to use data to do that. And that's going to inform an enormous number of conversations and an enormous number of decisions and strategies over the next few years. We specifically believe that all businesses are going to have establish what we regard as the five core digital business capabilities. First, they're going to have to put in place concrete approaches to turning more data into work. It's not enough to just accrete data, to capture data or to move data around. You have to be very purposeful and planful in how you establish the means by which you turn that data into work so that you can create and keep more customers. Secondly, it's absolutely essential that we build kind of the three core technology issues here, technology capabilities of effectively doing a better job of capturing data and IoT and people, or internet of things and people, mobile computing for example, is going to be a crucial feature of that. You have to then once you capture that data, turn it into value. And we think this is the essence of what big data and in many respects AI is going to be all about. And then once you have the possibility, kind of the potential energy of that data in place, then you have to turn it into kinetic energy and generate work in your business through what we call systems of agency. Now, all of this is made possible by this significant transformation that happens to be conterminous with this transition to digital business. And that is the emergence of the cloud. The technology industry has always been defined by the problems it was able to solve, catalyzed by the characteristics of the technology that made it possible to solve them. 
And cloud is crucial to almost all of the new types of problems that we're going to solve. So these are the five digital business capabilities around which we're going to have our predictions. Let's start first and foremost with this notion of turning more data into work. Our first prediction relates to how data governance is likely to change on a global basis. If we believe that we need to turn more data into work, well, businesses haven't generally adopted many of the principles associated with those practices. They haven't optimized to do that better. They haven't elevated those concepts within the business as broadly and successfully as they should. We think that's going to change, in part because of the emergence of GDPR, or the General Data Protection Regulation. It goes into full effect in May 2018. A lot has been written about it. A lot has been talked about. But our core issue ultimately is that the dictates associated with GDPR are going to elevate the conversation on a global basis. And it mandates something that's now called the data protection officer. We're going to talk about that in a second, David Vellante. But it is going to have real teeth. We were talking with one chief privacy officer not too long ago who suggested that had the Equifax breach occurred under the rules of GDPR, the actual fines that would have been levied would have been in excess of 160 billion dollars, which is a little bit more than the zero dollars that have been levied thus far. Now we've seen new bills introduced in Congress, but ultimately our observation from conversations with a lot of chief privacy officers and data protection officers is that in the B2B world, GDPR is going to strongly influence businesses' behavior regarding data not just in Europe but on a global basis.
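For scale: GDPR's headline penalty (Article 83(5)) is up to 20 million euros or 4% of worldwide annual turnover, whichever is greater. A quick sketch of that upper bound:

```python
def max_gdpr_fine(annual_turnover_eur: float) -> float:
    """Upper bound on a GDPR Article 83(5) fine: the greater of
    EUR 20 million or 4% of worldwide annual turnover."""
    return max(20_000_000.0, 0.04 * annual_turnover_eur)

# A firm with EUR 3 billion in annual turnover faces up to EUR 120 million
print(max_gdpr_fine(3_000_000_000))  # 120000000.0
```

Which is why even a business with modest revenue cannot treat a breach as a rounding error under the new regime.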
Now that has an enormous implication, David Vellante, because it certainly suggests this notion of a data protection officer means we've got another potential chief here. How do we think that's going to organize itself over the course of the next few years? >> Well thank you Peter. There are a lot of chiefs (laughs) in the house and sometimes it gets confusing: there's the CIO, there's the CDO — and that's either chief digital officer or chief data officer. There's the CSO — that could be strategy, sometimes that could be security. There's the CPO — is that privacy or product? As I say, it gets confusing sometimes. On theCUBE we talk to all of these roles, so we wanted to try to add some clarity to that. First thing we want to say is that the CIO, the chief information officer — that role is not going away. A lot of people predict that; we think that's nonsense. They will continue to have a critical role. Digital transformations are the priority in organizations, and so the chief digital officer is evolving from just a strategy role to much more of an operational role. Generally speaking, these chiefs tend, in our observation, to report to the chief operating officer or president/COO. And we see the chief digital officer aligning with the COO and getting incremental responsibility that's more operational in nature. So the prediction really is that the chief digital officer is going to emerge as a charismatic leader amongst these chiefs, and by 2022, nearly 50% of organizations will position the chief digital officer in a more prominent role than the CIO, the CISO, the CDO and the CPO. Those will still be critical roles. The CIO will be an enabler. The chief information security officer obviously has a huge role to play, especially in terms of making security a team sport and not just falling on IT's shoulders or the security team's shoulders.
The chief data officer, who really emerged from a records and data management role in many cases, particularly within regulated industries, will still be responsible for data architecture and data access, working very closely with the emerging chief privacy officer and maybe even the chief data protection officer. Those roles will be pretty closely aligned. So again, these roles remain critical, but the chief digital officer we see as increasing in prominence. >> Great, thank you very much David. So when we think about these two activities, what we're really describing is that over the course of the next few years, we strongly believe data will be regarded more as an asset within business, and we'll see resources devoted to it and certainly management devoted to it. Now, that leads to the next set of questions: as data becomes an asset, the pressure to acquire data becomes that much more acute. We believe strongly that IoT has an enormous implication longer term as a basis for thinking about how data gets acquired. Now, operational technology has been in place for a long time. We're not limiting ourselves just to operational technology when we talk about this. We're really talking about the full range of devices that are going to provide and extend information and digital services out to consumers, out to the Edge, out to a number of other places. So let's start here. Over the course of the next few years, Edge analytics are going to be an increasingly important feature of how technology decisions get made, how digital business gets conceived and even ultimately how business gets defined. Now David Floyer's done a significant amount of work in this domain and we've provided that key finding on the right hand side.
And what it shows is that if you take a stylized Edge-based application and you presume that all the data moves back to a centralized cloud, you're going to increase your costs dramatically over a three year period. Now that motivates the need ultimately for an approach that brings greater autonomy, greater intelligence down to the Edge itself, and we think that ultimately IoT and Edge analytics become increasingly synonymous. The challenge though is that as we evolve, while this is a pressure to keep more of the data at the Edge, a lot of the data exhaust can someday become regarded as valuable data. And so as a consequence of that, there's still a countervailing pressure to try to move all data — not at the moment of automation, but for modeling and integration purposes — back to some other location. The thing that's going to determine that is the rate at which the costs of moving the data around go down. And our expectation over the next few years, when we think about some of the big cloud suppliers — Amazon, Google, others — that are building out significant networks to facilitate their business services, is that they may in fact have a greater impact on the common carriers, or as great an impact on the common carriers, as they have had on any server or other infrastructure company. So our prediction over the next few years is: watch what Amazon and Google do as they try to drive costs down inside their networks, because that will have an impact on how much data moves from the Edge back to the cloud. It won't necessarily have an impact on the need for automation at the Edge, because latency doesn't change, but it will have a cost impact. Now that leads to a second consideration, and the second consideration is ultimately that when we talk about greater autonomy at the Edge, we need to think about how that's going to play out. Jim Kobielus.
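David's key-finding chart reduces to simple cost arithmetic: moving every byte to a central cloud incurs recurring transport and compute charges that an Edge-first design largely avoids. A toy version of the comparison (all figures are illustrative assumptions, not Wikibon's actual model):

```python
# Toy three-year cost comparison for a stylized Edge application.
# Every number here is an illustrative assumption, not a measured figure.
sensors_gb_per_day = 500        # raw data produced at the Edge site
wan_cost_per_gb = 0.05          # $ to move a GB back to a central cloud
cloud_compute_per_day = 40.0    # $ to process it centrally
edge_box_capex = 30_000.0       # one-time cost of Edge compute
edge_opex_per_day = 15.0        # power, space, admin at the Edge
days = 3 * 365

centralized = days * (sensors_gb_per_day * wan_cost_per_gb + cloud_compute_per_day)
edge = edge_box_capex + days * edge_opex_per_day

print(f"all-to-cloud: ${centralized:,.0f}  edge-first: ${edge:,.0f}")
```

The transport line item recurs every day, while the Edge box is paid for once, which is why the gap widens the longer the horizon.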
>> Jim: Hey thanks a lot Peter. Yeah, so what we're seeing at Wikibon is that more and more application development involves AI, and more and more of that AI involves deployment of models — deep learning, machine learning and so forth — to the Edges of the internet of things and people. And much of that AI will be operating autonomously with little or no round-tripping back to the cloud. In fact, we're seeing really about a quarter of the AI development projects (static interference with web-conference) as Edge deployments. What that involves is that more and more of those AI applications will be bespoke. They'll be one of a kind, or unique, or unprecedented applications, and what that means is that, you know, there are a lot of different deployment scenarios within which organizations will need to use new forms of learning to ready those AI applications to do their jobs effectively — be it real-time predictions, guiding an autonomous vehicle and so forth. Reinforcement learning is at the core of many of these kinds of projects, especially those that involve robotics. So really, software is eating the world and, you know, the biggest bites are being taken at the Edge. Much of that is AI, much of that autonomous, where there is less need for round-tripping and more need for adaptive components — AI-infused components that can learn by doing. From environmental variables, they can adapt their own algorithms to take the right actions. So they'll have far-reaching impacts on application development in 2018. For the developer, the new developer really is a data scientist at heart. They're going to have to tap into a new range of sources of data, especially Edge-sourced data from the sensors on those devices.
They're going to need to do training and testing, especially reinforcement learning, which doesn't involve training data so much as being able to build an algorithm that can learn to maximize what's called a cumulative reward function, doing the training adaptively in real time at the Edge and so forth and so on. So really, much of this will be bespoke in the sense that every Edge device increasingly will have its own set of parameters and its own set of objective functions which will need to be optimized. So that's one of the leading edge forces, trends, in development that we see in the coming year. Back to you Peter. >> Excellent Jim, thank you very much. The next question here: how are you going to create value from data? We've gone through a couple trends — and we have multiple others — about what's going to happen at the Edge. But as we think about how we're going to create value from data: Neil Raden. >> Neil: You know, the problem is that data science emerged rapidly out of sort of a perfect storm of big data and cloud computing and so forth. And people who had been involved in quantitative methods rapidly glommed onto the title because, let's face it, it was very glamorous and paid very well. But there weren't really good best practices. So what we have in data science is a pretty wide field of things that are called data science. My opinion is that the true data scientists are people who are scientists and are involved in developing new or improving algorithms, as opposed to prepping data and applying models. So the whole field really generated very quickly, in just a few years. What I call generation zero is more like data prep and model management, all done manually. And it wasn't really sustainable in most organizations, for obvious reasons.
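Jim's point above about reinforcement learning — an algorithm that learns by doing, maximizing a cumulative reward function rather than fitting labeled training data — can be sketched as a toy two-action agent. The actions, payoffs and parameters here are made up purely for illustration:

```python
import random

random.seed(0)
# Two actions with unknown payoff probabilities; the agent learns by acting,
# accumulating reward -- no labeled training data involved.
true_reward = {"a": 0.2, "b": 0.8}   # hidden from the agent
value = {"a": 0.0, "b": 0.0}         # the agent's running estimates
count = {"a": 0, "b": 0}
total = 0.0

for step in range(1000):
    # epsilon-greedy: mostly exploit the best-known action, sometimes explore
    if random.random() < 0.1:
        action = random.choice(list(value))
    else:
        action = max(value, key=value.get)
    reward = 1.0 if random.random() < true_reward[action] else 0.0
    count[action] += 1
    value[action] += (reward - value[action]) / count[action]  # running mean
    total += reward

print(value, total)  # the estimate for "b" ends up well above "a"
```

The same exploit-versus-explore loop, with far richer state and action spaces, is what sits underneath the robotics and autonomous-vehicle projects Jim mentions.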
So in generation one, some vendors stepped up with tool kits or workbenches or whatever for data scientists, and made it a little better. And generation two is what we're going to see in 2018: the need for data scientists to no longer prep data, or at least not spend very much time on it, and not to do model management, because the software will not only manage the progression of the models but even recommend them and generate them and select the data and so forth. So the field is in for a very big change, and I think what you're going to see is that the ranks of data scientists are going to bifurcate into the old style — let me sit down and write some spaghetti code in R or Java or something — and those that use these advanced tool kits to really get the work done. >> That's great Neil. And of course, when we start talking about getting the work done, we are becoming increasingly dependent upon tools, aren't we George? But the tool marketplace for data science, for big data, has been somewhat fragmented and fractured, and hasn't necessarily focused on solving the problems of the data scientists, but in many respects on the problems that the tools themselves have. Given Neil's prescription, as the tools improve, what's going to happen to the tools in the coming year? >> Okay so, the big thing that we see supporting what Neil's talking about — what Neil was talking about is partly a symptom of a product issue and a go-to-market issue, where the product issue was that we had a lot of best-of-breed products that weren't all designed to fit together. In the broader big data space, that's the same issue that we faced more narrowly with Hadoop, where we were trying to fit together a bunch of open source packages, which had an admin and developer burden.
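Neil's "generation two" idea — software that generates, compares and selects models rather than a human hand-tuning them — looks like this in miniature. A deliberately tiny, dependency-free sketch; real tool kits automate the same select-by-validation-error loop over far larger model spaces:

```python
# Toy "generation two" flow: the software, not the analyst, selects the model.
# Candidate models are plain functions; selection is by validation error.
data = [(x, 2 * x + 1) for x in range(10)]          # ground truth: y = 2x + 1
train, valid = data[:7], data[7:]

def fit_mean(pts):                                   # naive baseline model
    m = sum(y for _, y in pts) / len(pts)
    return lambda x: m

def fit_linear(pts):                                 # least-squares line
    n = len(pts)
    sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return lambda x: a + b * x

def valid_error(model):
    return sum((model(x) - y) ** 2 for x, y in valid)

# Automated selection: try every candidate, keep the one that validates best.
best = min((fit_mean(train), fit_linear(train)), key=valid_error)
print(valid_error(best))  # 0.0 -- the linear model wins on held-out data
```

The analyst never chose the model; the loop did, which is exactly the manual work Neil expects to disappear.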
More broadly, what Neil is talking about is richer end-to-end tools that handle everything from ingest all the way to the operationalization and feedback of the models. But part of what has to go on here is that with these open source tools, the price points and the functional footprints that many of the vendors are supporting right now can't feed an enterprise sales force. Everyone talks, with their open source business models, about land-and-expand and inside sales. But the problem is, once you want to go to wide deployment in an enterprise, you still need someone negotiating commercial terms at a senior level. You still need the technical people fitting the tools into a broader architecture. And most of the open source vendors we have today don't have either the product breadth or the deal size to support a traditional enterprise software model — an account team would typically carry a million and a half to two million dollar quota every year. So we see consolidation, driven again by the need for simplicity for the admins and the developers, and for business model reasons, to support an enterprise sales force. >> All right, so what we're going to see happen in the course of the coming year is a lot of specialization and recognition of what data science is, what the practices are, how it's going to work, supported by an increasing quality of tools — and a lot of tool vendors are going to be left behind. Now the third notion here, for those core technology capabilities, is that we still have to act based on data. The good news is that big data is starting to show some returns, in part because of some of the things that AI and other technologies are capable of doing. But we have to move beyond just creating the potential; we have to turn that into work, and that's what we mean ultimately by this notion of systems of agency.
The idea is that data-driven applications will increasingly act on behalf of a brand, on behalf of a company, and building those systems out is going to be crucial. It's going to require a whole new set of disciplines and expertise. So when we think about what's going to be required, it always starts with this notion of AI. A lot of folks are presuming, however, that AI is going to be relatively easy to build or relatively easy to put together. We have a different opinion, George. What do we think is going to happen as these next few years unfold, related to AI adoption in large enterprises? >> Okay so, let's go back to the lessons we learned from sort of the big data era — the raw, you know, let's-put-a-data-lake-in-place approach, which was at the top of everyone's agenda for several years. The expectation was it was going to cure cancer, taste like chocolate and cost a dollar. And uh. (laughing) It didn't quite work out that way. Partly because we had a burden on the administrator, again, of so many tools that weren't all designed to fit together, even though they were distributed together. And then the data scientists were the ones who had to take all this data that wasn't carefully curated yet and turn it into advanced analytics and machine learning models. We have many of the same problems now with tool sets that are becoming more integrated, but at lower levels. This is partly what Neil Raden was just talking about. What we have to recognize is something that we've seen all along, I mean since the beginning of (laughs) corporate computing: we have different levels of abstraction. And you know, at the very bottom, when you're dealing with things like TensorFlow or MXNet, that's not for mainstream enterprises. That's for, you know, the big sophisticated tech companies who are building new algorithms on those frameworks. There's a level above that where you're using, say, a Spark cluster and the machine learning built into that.
That's slightly more accessible, but when we talk about mainstream enterprises taking advantage of AI, the low hanging fruit is for them to use the pre-trained models that the public cloud vendors have created with all the consumer data on speech, image recognition, natural language processing. And then some of those capabilities can be further combined into applications like managing a contact center, and we'll see more, from the likes of Amazon, such as recommendation engines, fulfillment optimization, pricing optimization. >> So our expectation ultimately, George, is that we're going to see a lot of AI adoption happen through existing applications, because the vendors that are capable of acquiring talent, experimenting and creating value — software vendors — are going to be where a lot of the talent ends up. So Neil, we have an example of that. Give us an example of what we think is going to happen in 2018 when we start thinking about exploiting AI in applications. >> Neil: I think that it's fairly clearly the application of what's called advanced analytics and data science and even machine learning. But really, it's rapidly becoming commonplace in organizations, not just at the bottom of the triangle here. But I like the example of Salesforce.com. What they've done with Einstein is they've made machine learning — and I guess you can say AI — applications available to their customer base. And why is that a good thing? Because their customer base already has a giant database of clean data that they can use. So you're going to see a huge number of applications being built with Einstein against Salesforce.com data. But there's another thing to consider, and that is that a long time ago Salesforce.com built connectors to a zillion types of external data. So if you're a Salesforce.com customer using Einstein, you're going to be able to use those advanced tools without knowing anything about how to train a machine learning model, and start to build those things.
And I think that they're going to lead the industry in that sense. That's going to push their revenue next year to, I don't know, 11 billion or 12 billion dollars. >> Great, thanks Neil. All right, so when we think about further evidence of this and further impacts, we ultimately have to consider some of the challenges associated with how we're going to continually create application value from these tools. And that leads to the idea that one of the cobbler's children that's going to benefit from AI will in fact be the developer organization. Jim, what's our prediction for how auto-programming impacts development? >> Jim: Thank you very much Peter. Yeah, automation, wow. Auto-programming, like I said, is the epitome of enterprise application development for us going forward. People know it as code generation, but that really understates the potential of auto-programming as it's evolving. In 2018, what we're going to see is machine-learning-driven code generation approaches coming to the forefront of innovation. We're seeing a lot of activity in the industry in which applications use ML to drive the productivity of developers for all kinds of applications. We're also seeing a fair amount of what's called RPA, robotic process automation. And really, how they differ is that ML will drive code generation from what I call the inside out, meaning creating reams of code that are geared to optimize a particular application scenario. RPA really takes the outside-in approach: it's essentially the evolution of screen scraping, in that it's able to infer the underlying code needed for applications of various sorts from the external artifacts — the screens, and sort of the flow of interactions and clicks and so forth for a given application. We're going to see that ML and RPA will complement each other in the next generation of auto-programming capabilities.
And so, you know, application development tedium is really one of the enemies of productivity (static interference with web-conference). This is a lot of work — very detailed, painstaking work. And what developers need are better, more nuanced and more adaptive auto-programming tools to be able to build code at the pace that's absolutely necessary for this new environment of cloud computing. So really, AI-related technologies can be applied, and are being applied, to application development productivity challenges of all sorts. AI is fundamental to RPA as well. We're seeing a fair number of the vendors in that space incorporate ML-driven OCR and natural language processing and screen scraping and so forth into their core tools, to be able to quickly build up the logic to drive the outside-in automation of fairly complex orchestration scenarios. In 2018, we'll see more of these technologies come together. But you know, they're not a silver bullet. 'Cause fundamentally, organizations that are considering going deeply into auto-programming are going to have to factor AI into their overall plans. They need to get knowledgeable about AI. They're going to need to bring more AI specialists into their core development teams, to be able to select from the growing range of tools that are out there — RPA and ML-driven auto-programming. Overall, really what we're seeing is that the data scientists, who have been the fundamental developers of AI, are coming into the core of development tools and skills in organizations. And they're going to be fundamental to this whole trend in 2018 and beyond. If AI gets proven out in auto-programming, these developers will then be able to evangelize the core utility of this technology, AI, in a variety of other back-end but critically important investments that organizations will be making in 2018 and beyond.
Especially in IT operations and management — AI is big in that area as well. Back to you there, Peter. >> Yeah, we'll come to that a little bit later in the presentation Jim; that's a crucial point. But the other thing we want to note here, regarding how folks will create value out of these technologies, is to consider the simple question of: okay, how much will developers need to know about infrastructure? And one of the big things we see happening is this notion of serverless. And here we've called it serverless, developer more. Jim, why don't you take us through why we think serverless is going to have a significant impact on the industry, at least certainly from a developer perspective and developer productivity perspective. >> Jim: Yeah, thanks. Serverless is really having an impact already, and has for the last several years now. Many in the developer world are familiar with AWS Lambda, which is really the groundbreaking public cloud service that incorporates the serverless capabilities — essentially an abstraction layer that enables developers to build stateless code that executes in a cloud environment, to build microservices without having to worry about the underlying management of containers and virtual machines and so forth. So in many ways, you know, serverless is a simplification strategy for developers. They don't have to worry about the underlying plumbing. They need to worry about the code, of course — what are called Lambda functions, or functional methods, and so forth. Now, functional programming has been around for quite a while, but it's coming to the fore in this new era of serverless environments. What we're predicting for 2018 is that more than 50% of lean microservices deployments in the public cloud will be in serverless environments. There's AWS, and Microsoft has Azure Functions. IBM has their own. Google has their own.
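A Lambda function of the sort Jim describes is, at bottom, just a stateless handler: event in, result out, no server to manage. A minimal sketch in the shape AWS Lambda expects for Python — the greeting logic is purely illustrative:

```python
def handler(event, context):
    """Stateless, AWS-Lambda-style function: all inputs arrive in the
    event dict; any persistent state would live in an external service
    (a database, an object store), never in the function itself."""
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"hello, {name}"}

# Locally, invoking it is just a function call (context is unused here)
print(handler({"name": "serverless"}, None))
```

Because the function holds no state, the cloud provider is free to spin up, replicate, or tear down instances per invocation, which is exactly what removes the container and VM management burden from the developer.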
There's a variety of serverless code bases for private deployment of serverless environments that we're seeing evolve and begin to be deployed in 2018. They all involve functional programming, which really, when coupled with serverless clouds, enables greater scale and speed in terms of development. And it's very agile-friendly in the sense that you can quickly stand up a functionally programmed serverless microservice in a hurry without having to manage state and so forth. It's very DevOps-friendly. In a very real sense it's a lot faster than having to build and manage and tune, you know, containers and VMs and so forth. So it can enable a more real-time, rapid and iterative development pipeline going forward in cloud computing. And really, fundamentally, what serverless is doing is pushing more of these Lambda functions to the Edge, to the Edges. If you were at an AWS event last week or the week before, you noticed AWS is putting a big push on putting Lambda functions at the Edge and in devices for the IoT. As we're going to see in 2018, across pretty much the entire cloud arena, everybody will push more of the serverless, functional programming to the Edge devices. It's a simplification strategy, and that actually is a powerful tool for speeding up some of the development metabolism. >> All right, so Jim, let me jump in here and say that we've now introduced some of these benefits and really highlighted the role that the cloud is going to play. So let's turn our attention to this question of cloud optimization. And Stu, I'm going to ask you to start us off by talking about what we mean by true private cloud, and ultimately our prediction for private cloud. Why don't you take us through what we think is going to happen in this world of true private cloud? >> Stuart: Sure Peter, thanks a lot.
So when Wikibon launched the true private cloud terminology — which was about two years ago next week — it was in some ways a coming together of a lot of trends similar to the things that, you know, George, Neil and Jim have been talking about. So it is nothing new to say that we needed to simplify the IT stack. We all know, you know, the tried and true discussion of how way too much of the budget is spent kind of keeping the lights on — what we'd like to say is kind of running the business. If you squint through this beautiful chart that we have on here, a big piece of this is operational staffing, which is where we need to be able to make a significant change. And what we've been really excited about, what led us to this initial market segment, and what we're continuing to see good growth on, is the move from traditional, really siloed infrastructure to infrastructure that is software based. You want IT to really be able to focus on the application services that they're running. And our focus for 2018 is of course the central point: it's the data that matters here. The whole reason we have infrastructure is to be able to run applications, and one of the key determiners as to where and what I use is the data — how can I not only store that data but actually gain value from that data? That's something we've talked about time and again, and it's a major determining factor as to: am I building this in a public cloud, or am I doing it in, you know, my core? Is it something that is going to live on the Edge? So what we're saying here with the true private cloud is not only are we going to simplify our environment, but it's really the operational model that we talked about. So we often say the line: cloud is not a destination. It's an operational model.
So a true private cloud gives me some of the, you know, feel and management type of capability that I had in the public cloud. It's, as I said, not just virtualization; it's much more than that. But how can I start getting services? And one of the extensions is that true private cloud does not live in isolation. When we have kind of a core public cloud and Edge deployments, I need to think about the operational models: where data lives, what processing happens in each environment, and what data will need to move between them — and of course there are fundamental laws of physics that we need to consider in that. So the prediction of course is that we know how much gear and focus has been on the traditional data center, and true private cloud helps that transformation to modernization. The big focus is that many of these applications we've been talking about, and uses of data sets, are starting to come into these true private cloud environments. So, you know, we've had discussions — there's Spark, there's modern databases. For many of these, there are going to be many reasons why they might live in the private cloud environment. And therefore that's something where we're going to see tremendous growth and a lot of focus. And we're seeing a new wave of companies focusing on this, to deliver solutions that will do more than just a step function for infrastructure or get us outside of our silos, but really help us deliver on those cloud native applications, where we pull in things like what Jim was talking about with serverless and the like. >> All right, so Stu, what that suggests ultimately is that data is going to dictate that everything's not going to end up in the private cloud or in centralized public clouds, because of latency, costs, data governance and IP protection reasons — and there will be some others. At bare minimum, that means that most large enterprises are going to have at least a couple of clouds.
Talk to us about what this impact of multi-cloud is going to look like over the course of the next few years. >> Stuart: Yeah, critical point there Peter. Because, right, unfortunately, we don't have one solution. There's nobody that we run into that says, oh, you know, I just do a single environment. You know, it would be great if we only had one application to worry about. But as you've shown in this lovely diagram here, we all use lots of SaaS, and increasingly, you know, Oracle, Microsoft, Salesforce are all pushing everybody to multiple SaaS environments, and that has major impacts on my security and where my data lives. Public cloud, no doubt, is growing by leaps and bounds, and many customers are choosing applications to live in different places. So just as in data centers, I would kind of look at it from an application standpoint and build up what I need. Often, there's, you know, Amazon doing phenomenally, but maybe there are things that I'm doing with Azure, maybe there are things that I'm doing with Google or others, as well as my service providers — for locality, for specialized services — there are reasons why people are doing it. And what customers would love is an operational model that can actually span between those. So we are very early in trying to attack this multi-cloud environment. There's everything from licensing to security to, you know, just operationally how do I manage those. And the piece that we're touching on in this year's prediction is that Kubernetes actually can be a key enabler for that cloud native environment. As Jim talked about with serverless, what we'd really like is for our developer to be able to focus on building their application and not think as much about the underlying infrastructure, whether that be, you know, racks of servers that I built myself or public cloud infrastructure. So we really want to think more at the data and application level.
It's SaaS and PaaS as the model, and Kubernetes holds the promise to solve a piece of this puzzle. Now, Kubernetes is by no means a silver bullet for everything that we need. But it absolutely is doing very well. Our team was at the Linux Foundation's CNCF show, KubeCon, last week, and there is, you know, broad adoption from over 40 of the leading providers, including Amazon, which is now a piece. Even Salesforce signed up to the CNCF. So Kubernetes is allowing me to be able to manage multi cloud workflows, and therefore the prediction we have here, Peter, is that 50% of developer teams will be building and sustaining multi cloud, with Kubernetes as a foundational component of that. >> That's excellent, Stu. But when we think about it, the hardware technologies, especially because of the opportunities associated with true private cloud, are also going to evolve. There will be enough money here to sustain that investment. David Floyer, we do see another architecture on the horizon where, for certain classes of workloads, we will be able to collapse and replicate many of these things in an economical, practical way on premise. We call that UniGrid. NVMe over fabrics is a crucial feature of UniGrid. >> Absolutely. So, NVMe over fabrics, or NVMe-oF, takes NVMe, which is out there as storage, and turns it into a system framework. It's a major change in system architecture. We call this UniGrid. And it's going to be a focus of our research in 2018. Vendors are already out there. This is the fastest movement from early standards into products themselves. You can see on the chart that IBM has come out with NVMe over fabrics with the FlashSystem 900 storage connected to the Power9 systems. NetApp has the EF570. A lot of other companies are there. Mellanox is out there with the high speed networks. Excelero has a major part of the storage software. So it's going to be used in particular with things like AI.
So what are the drivers and benefits of this architecture? The key is that data is the bottleneck for applications. We've talked about data. The amount of data is key to making applications more effective and higher value. So NVMe and NVMe over fabrics allow data to be accessed in microseconds as opposed to milliseconds. And they allow gigabytes of data per second as opposed to megabytes of data per second. And they also allow thousands of processes to access all of the data at very, very low latencies. And that gives us amazing parallelism. So what this is about is disaggregation of storage and network and processors. There are some huge benefits from that. Not least of which is that you get back about 50% of the processor, because you don't have to do storage and networking on it. And you save from stranded storage. You save from stranded processor and networking capabilities. So overall, it's going to be cheaper. But more importantly, it makes a basis for delivering systems of intelligence. And systems of intelligence are bringing together systems of record, the traditional systems, not rewriting them but attaching them to real time analytics, real time AI, and being able to blend those two systems together, because you've got all of that additional data you can bring to bear on a particular problem. So systems themselves have pretty well reached the limit of human management. So, one of the great benefits of UniGrid is to have a single metadata layer across all of that data, all of those processes. >> Peter: All those infrastructure elements. >> All those infrastructure elements. >> Peter: And applications. >> And applications themselves. So what that leads to is a huge potential to improve automation of the data center and the application of AI to operations, operational AI. >> So George, it sounds like it's going to be one of the key potential areas where we'll see AI be practically adopted within business.
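A back-of-envelope check on the orders of magnitude David cites; the specific latency and bandwidth figures below are illustrative placeholders consistent with "microseconds versus milliseconds" and "gigabytes versus megabytes", not vendor benchmarks:

```python
# Disk-era access versus NVMe over fabrics, using round illustrative numbers.
disk_latency_s = 5e-3       # legacy storage access: milliseconds
nvmeof_latency_s = 100e-6   # NVMe-oF access: on the order of 100 microseconds

latency_gain = disk_latency_s / nvmeof_latency_s   # 50x more accesses per unit time

disk_bandwidth = 200e6   # ~200 megabytes per second per legacy device
nvme_bandwidth = 3e9     # gigabytes per second per NVMe device

bandwidth_gain = nvme_bandwidth / disk_bandwidth   # 15x more data per second

print(f"latency improvement:   {latency_gain:.0f}x")
print(f"bandwidth improvement: {bandwidth_gain:.0f}x")
```

Even with these rough numbers, the combination of far lower latency and far higher bandwidth is what lets thousands of processes share all of the data in parallel.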
What do we think is going to happen here as we think about the role that AI is going to play in IT operations management? >> Well, if we go back to the analogy with big data, which we thought was going to, you know, cure cancer, taste like chocolate, and cost a dollar, it turned out that the most widespread application of big data was to offload ETL from expensive data warehouses. And what we expect is the first widespread application of AI embedded in applications for horizontal use, where Neil mentioned Salesforce and the ability to use Einstein with Salesforce data and connected data. Now, because the applications we're building are so complex, and as Stu mentioned, you know, we have this operational model with a true private cloud, it's actually not just the legacy stuff that's sucking up all the admin overhead. It's the complexity of the new applications and the stringency of the SLAs. That means we would have to turn millions of people into admins, the old, you know, when the telephone networks started, everyone was going to have to be an operator. The only way we can get past this is if we apply machine learning to IT Ops and application performance management. The key here is that the models can learn how the infrastructure is laid out and how it operates. And they can also learn how all the application services and middleware work, behaving independently and with each other, and how they tie to the infrastructure. The reason that's important is because all of a sudden you can get very high fidelity root cause analysis. With the old management technology, if you had an underlying problem, you'd have a whole storm of alerts, because there was no reliable way to really triangulate on, or triage, the root cause.
Now, what's critical is if you have high fidelity root cause analysis, you can have really precise recommendations for remediation, or automated remediation, which is something that people will get comfortable with over time; that's not going to happen right away. But this is critical. And this is also the first large scale application of not just machine learning but machine data, and so this topology of collecting widely disparate machine data, then applying models, and then reconfiguring the software is training wheels for IoT apps, where you're going to have it far more distributed and actuating devices instead of software. >> That's great, George. So let me sum up and then we'll take some questions. So very quickly, the action items that we have out of this overall session, and again, we have another 15 or so predictions that we didn't get to today. But one is, as we said, digital business is the use of data assets to compete. And so ultimately, this notion is starting to diffuse rapidly. We're seeing it on theCUBE. We're seeing it on the CrowdChats. We're seeing it in the increase of our customers. Ultimately, we believe that users need to start preparing for even more business scrutiny over their technology management. For example, something very simple, and David Floyer, you and I have talked about this extensively in our weekly Action Item research meeting: the idea of backing up and restoring a system is no longer enough in a digital business world. It's not just backing up and restoring a system or an application; we're talking about restoring the entire business. That's going to require greater business scrutiny over technology management. It's going to lead to new organizational structures, new challenges of adopting systems, et cetera.
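George's point about topology-aware root cause analysis can be sketched with a toy model. The service names and the "deepest alerting dependency" heuristic here are purely illustrative, not a real APM algorithm, but they show how knowing the dependency layout collapses an alert storm into one candidate cause:

```python
# Each service lists the services it depends on (the learned topology).
depends_on = {
    "checkout-ui": ["orders-api"],
    "orders-api": ["postgres", "cache"],
    "reports":    ["postgres"],
    "postgres":   [],
    "cache":      [],
}

# An alert storm: four services are alerting at once.
alerts = {"checkout-ui", "orders-api", "reports", "postgres"}

def root_causes(alerts, depends_on):
    """Keep only alerting services none of whose own dependencies are
    also alerting -- without the topology, all four alerts look equal."""
    return sorted(
        svc for svc in alerts
        if not any(dep in alerts for dep in depends_on.get(svc, []))
    )

print(root_causes(alerts, depends_on))  # ['postgres'] -- one root cause, not four alerts
```

With one high-confidence root cause instead of a storm, a precise remediation recommendation (or an automated one) becomes plausible.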
But, ultimately, our observation is that data is going to indicate technology directions across the board, whether we talk about how businesses evolve, or the roles that technology takes in business, or the key digital business capabilities of capturing data, turning it into value and then turning it into work, or whether we talk about how we think about cloud architecture and which organization of cloud resources we're going to utilize. It all comes back to the role that data's going to play in helping us drive decisions. The last action item we want to put here before we get to the questions is: clients, if we don't get to your question right now, contact us. Send us an inquiry: support@siliconangle.freshdesk.com. And we'll respond to you as fast as we can over the course of the next day, two days, to try to answer your question. All right, David Vellante, you've been collecting some questions here. Why don't we see if we can take a couple of them before we close out. >> Yeah, we've got about five or six minutes. In the chat room, Jim Kobielus has been awesome helping out, and so there are a lot of detailed answers there. First, there are some questions and comments. The first one was, are there too many chiefs? And I guess, yeah, there's some title inflation. My comment there would be that titles are cheap, results aren't. So if you're creating chief X officers just to check a box, you're probably wasting money. So you've got to give them clear roles. But I think each of these chiefs has clear roles to the extent that they are, you know, empowered. Another comment came up, which is, we don't want, you know, Hadoop spaghetti soup all over again. Well, true that. Are we at risk of having Hadoop spaghetti soup as the centricity of big data moves from Hadoop to AI and ML and deep learning? >> Well, my answer is we are at risk of that, but there's customer pressure and vendor economic pressure to start consolidating.
And we'll also see, what we didn't see in the on-prem big data era: with cloud vendors, they're just going to start making it easier to use some of the key services together. That's just natural. >> And I'll speak for Neil on this one too, very quickly. The idea ultimately is that as the discipline starts to mature, we won't have people who probably aren't really capable of doing some of this data science stuff running around and buying a tool to try to supplement their knowledge and their experience. So that's going to be another factor that I think ultimately leads to clarity in how we utilize these tools as we move into an AI oriented world. >> Okay, Jim is on mute, so if you wouldn't mind unmuting him. There was a question: is ML a more informative way of describing AI? Jim, when you and I were in our Boston studio, I sort of asked a similar question. AI is sort of the uber category. Machine learning is math. Deep learning is more sophisticated math. You have a detailed answer in the chat, but maybe you can give a brief summary. >> Jim: Sure, sure. I don't want to be too pedantic here, but deep learning is essentially a lot more hierarchical: deeper stacks of neural network layers that are able to infer higher level abstractions from data, you know, face recognition, sentiment analysis and so forth. Machine learning is the broader phenomenon. That simply comprises various approaches for distilling patterns, correlations and algorithms from the data itself. What we've seen in the last, let's say, five, six, ten years is that the neural network approaches for AI have come to the forefront and are, in fact, the core of the marketplace and the state of the art. AI is an ancient paradigm that's older than probably you or me, and that began, and for the longest time was, rules based systems, expert systems. Those haven't gone away.
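Jim's distinction, that "deep" learning simply means deeper stacks of neural network layers, can be illustrated with a minimal, dependency-free sketch; the weights are random and purely illustrative, so this shows the layering structure, not a trained model:

```python
import math
import random

random.seed(0)  # deterministic illustrative weights

def layer(inputs, weights):
    """One fully connected layer with a tanh non-linearity."""
    return [math.tanh(sum(w * x for w, x in zip(row, inputs))) for row in weights]

def deep_forward(x, stack):
    """'Deep' just means stacked: each layer's output feeds the next,
    letting later layers form higher-level abstractions of the input."""
    for weights in stack:
        x = layer(x, weights)
    return x

# Three stacked 4x4 layers of random weights -- a tiny "deep" network.
stack = [[[random.uniform(-1, 1) for _ in range(4)] for _ in range(4)]
         for _ in range(3)]
activations = deep_forward([0.5, -0.2, 0.1, 0.9], stack)
print(len(activations))  # 4 activations out of the final layer
```

Broader machine learning includes many non-neural approaches (trees, regression, clustering); the stacking of layers above is specifically what the "deep" in deep learning refers to.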
The new era of AI we see as a combination of both statistical approaches as well as rules based approaches, and possibly even orchestration based approaches like graph models, for building broader context for AI, for a variety of applications, especially distributed Edge applications. >> Okay, thank you. And then another question slash comment: AI, like graphics in 1985, will move from a separate category to a core part of all apps, AI infused apps. Again, Jim, you have a very detailed answer in the chat room, but maybe you can give the summary version. >> Jim: Well, quickly now, the most disruptive applications we see across the world, enterprise, consumer and so forth, involve AI. You know, at the heart of it is machine learning, and that's neural networking. I wouldn't say that every single application is doing AI. But the ones that are really blazing the trail, in terms of changing the fabric of our lives, most of them have AI at their heart. That will continue as the state of the art of AI continues to advance. So really, one of the things we've been saying in our research at Wikibon is that the data scientists, or those skills and tools, are the nucleus of the next generation application developer, really in every sphere of our lives. >> Great. A quick comment: we will be sending out these slides to all participants. We'll be posting these slides. So thank you, Kip, for that question. >> And very importantly, Dave, over the course of the next few days, most of our predictions docs will be posted up on Wikibon, and we'll do a summary of everything that we've talked about here. >> So now the questions are coming through fast and furious. But let me just try to rapid fire here, 'cause we've only got about a minute left. True private cloud definition: just say this, we have a detailed definition that we can share, but essentially it's substantially mimicking the public cloud experience on prem.
The way we like to say it is, bringing the cloud operating model to your data, versus trying to force fit your business into the cloud. So we've got detailed definitions there that, frankly, are evolving. There's a question about PaaS. I think we have a prediction in one of our, you know, appendix predictions, but maybe a quick word on PaaS. >> Yeah, a very quick word on PaaS is that there's been an enormous amount of effort put on the idea of the PaaS marketplace. Cloud Foundry and others suggested that a PaaS market would evolve, because you want to be able to effectively have mobility and migration and portability for these large cloud applications. We're not seeing that happen necessarily, but what we are seeing is that developers are increasingly becoming a force in dictating and driving cloud decision making, and developers will start biasing their choices to the platforms that demonstrate that they have the best developer experience. So whether we call it PaaS, or whether we call it something else, providing the best developer experience is going to be really important to the future of the cloud marketplace. >> Okay, great. And then George, George O, George Gilbert, you'll follow up with George O with that other question we need some clarification on. There's a question, really David, I think it's for you: will persistent DIMMs emerge first on public clouds? >> Almost certainly. Public clouds are where everything is going first. And when we talked about UniGrid, that's where it's going first. The NVMe over fabrics architecture is going to be in public clouds, and it has the same sort of benefits there. And NVDIMMs will again develop pretty rapidly as a part of NVMe over fabrics. >> Okay, we're out of time. We'll look through the chat and follow up with any other questions. Peter, back to you. >> Great, thanks very much, Dave. So once again, we want to thank everybody here who has participated in the webinar today.
I apologize for, I feel like Han Solo in saying it wasn't my fault. But having said that, nonetheless, I apologize to Neil Raden and everybody who had to deal with us finding and unmuting people, but we hope you got a lot out of today's conversation. Look for those additional pieces of research on Wikibon that pertain to the specific predictions on each of these different things that we're talking about. And by all means, support@siliconangle.freshdesk.com if you have an additional question, and we will follow up with as many as we can from the significant list that's starting to queue up. So thank you very much. This closes out our webinar. We appreciate your time. We look forward to working with you more in 2018. (upbeat music)

Published Date : Dec 16 2017



Dustin Kirkland, Canonical | KubeCon 2017


 

>> Announcer: Live from Austin, Texas, it's theCUBE. Covering KubeCon and CloudNativeCon 2017. Brought to you by Red Hat, the Linux Foundation, and theCUBE's ecosystem partners. >> Hey, welcome back everyone. We're live here in Austin, Texas. This is theCUBE's exclusive coverage of CloudNativeCon and KubeCon, the Kubernetes conference, for the Linux Foundation. This is theCUBE. I'm John Furrier, the co-founder of SiliconANGLE Media. My co-host, Stu Miniman. Our next guest is Dustin Kirkland, Vice President of Product at Canonical, the Ubuntu company. Welcome to theCUBE. >> Thank you, John. >> So you're the product guy. You get the keys to the kingdom, as they would say in product circles. Man, what a time to be-- >> Dustin: They always say that. I don't think I've heard that one. >> Well, the product guys are, well, all the action's happening on the product side. >> Dustin: We're right in the middle of it. >> 'Cause you've got to have a road map. You've got to have a 20 mile stare on the next horizon while you go up into the pasture and deliver value, but you've always got to be watching for it, always making decisions on what to do, when to ship product. Now you've got the Cloud; things are happening at a very accelerated rate. And then you've got to bring it out to the customers. >> That's right. >> You're livin' on both sides of the world. You've got to look inside, you've got to look outside. >> All three. There's the marketing angle too, which is what we're doing here right now. So there's engineering, sales, and this is the marketing. >> Alright, so where are we with this? Because now, you guys have always been on the front lines of open source. Great track record. Everyone knows the history there. What are the new things? What's the big aha moment at this event, the largest they've ever had, and they're not even three years old. Why is this happening? >> I love seeing these events in my hometown Austin, Texas. So I hope we keep coming back.
The aha moment is how application development is fundamentally changing. Cloud Native is the title of the Cloud Native Computing Foundation and CloudNativeCon here. What does Cloud Native mean? It's a different form of writing applications. Just before, we were talking about systems programming, right? That's not exactly Cloud Native. Cloud Native programming is writing to APIs that are Cloud-exposed APIs, integrating with software as a service. Creating applications that have no intelligence whatsoever about what's underneath them, right? But taking advantage of that in all the ways that you would want and expect in a modern application. Fault tolerance, automatic updates, hyper security. Just security, security, security. That is the aha moment. The way applications are being developed is fundamentally changing. >> Interesting perspective we had on earlier. Lew Tucker from Cisco, (mumbles) in the (mumbles) History Museum, CTO at Cisco, and we have Kelsey Hightower, co-chair for this conference and also very active in the community. Yet, their perspective, and I'll oversimplify and generalize it, was basically: hey, that's been going on for 30 years, it's just different now. Tell us the old way and new way. Because the old way, you were kind of describing it: you're going to build your own stuff, full stack, building all parts of the stack, and do a lot of stuff that you didn't want to do. And now you have more, especially time, on your hands as DevOps and infrastructure as code start to happen. But it doesn't mean that networking goes away, doesn't mean storage goes away; some new lines are forming. Describe that dynamic of what's new, and in the new way, what changes from the old way? >> Virtualization has brought about a different way of thinking about resources. Be those compute resources, chopping CPUs up into virtual CPUs, that's KVM, VMware. You mentioned network and storage.
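Dustin's "no intelligence about what's underneath" point is essentially externalized configuration: the application consumes everything environment-specific through injected settings rather than hard-coding it. A minimal sketch, where the environment variable names and database details are hypothetical:

```python
import os

def database_url():
    """Assemble a connection string entirely from injected configuration.

    The app never knows (or cares) whether it is running on bare metal,
    OpenStack, or a public cloud; the orchestrator injects the values
    (for Kubernetes, typically via ConfigMaps/Secrets exposed as env vars)."""
    host = os.environ.get("DB_HOST", "localhost")
    port = os.environ.get("DB_PORT", "5432")
    name = os.environ.get("DB_NAME", "app")
    return f"postgresql://{host}:{port}/{name}"

# Simulate what the platform would inject in one particular environment.
os.environ["DB_HOST"] = "db.internal"
print(database_url())  # postgresql://db.internal:5432/app
```

The same container image then runs unchanged in every environment; only the injected configuration differs, which is what makes the application "Cloud Native" rather than tied to one stack.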
Now we've virtualized both of those into software defined storage and software defined networking, right? We have things like OpenStack that bring that all together from an infrastructure perspective, and we now have Kubernetes that brings that to bear from an application perspective. Kubernetes helps you think about applications in a different way. I said that paradigm has changed. It's Kubernetes that helps implement that paradigm, so that developers can write an application to a container orchestrator like Kubernetes and take advantage of many of the advances we've made below that layer, in the operating system and in the Cloud itself. So from that perspective the game has changed, and the way you write your application is not the same as the monolithic app we might have written on an IBM or a traditional system. >> Dustin, you say monolithic app versus, oh my gosh, the multi layered cake that we have today. We were talking about the keynote this morning, where the CNCF went from four projects to 14 projects. You've got Kubernetes, you've got things like Istio on top. Help us tease that apart a little bit. What are the ones that, where's Canonical engaged? What are you hearing from customers? What are they excited about? What are they still looking for? >> In a somewhat self-serving way, I'll use this opportunity to explain exactly what we do in helping build that layered cake. It starts with the OS. We provide a great operating system, Ubuntu, that every developer would certainly know and understand and appreciate. That's the kernel, that's the systemd, that's the hypervisor, that's all the storage and drivers that make an operating system work well on hardware. Lots of hardware: IBM, Dell, HP, Intel, all the rest. As well as in virtual machines and the public Clouds: Microsoft, Amazon, Google, VMware and others. So, we take care of that operating system perspective. Within the CNCF and within the Kubernetes ecosystem, it really starts with the Kubernetes distribution.
So we provide a Kubernetes distribution, we call it Canonicals Distribution of Kubernetes, CDK. Which is open source Kubernetes with security patches applied. That's it. No special sauce, no extra proprietary extensions. It is open source Kubernetes. The reference platform for open source Kubernetes 100% conformed. Now, once you have Kubernetes as you say, "What are you hearing from customers?" We hear a lot of customers who want a Kubernetes. Once they have a Kubernetes, the next question is: "Now what do I do with it?" If they have applications that their developers have been writing to Google's Kubernetes Engine GKE, or Amazon's Kubernetes Engine, the new one announced last week at re:Invent, AKS. Or Microsoft's Kubernetes Engine, Microsoft-- >> Microsoft's AKS, Amazons EKS. A lot of TLA's out there, always. >> Thank you for the TLA dissection. If you've written the applications already having your own Kubernetes is great, because then your applications simply port and run on that. And we help customers get there. However, if you haven't written your first application, that's where actually, most of the industry is today. They want a Kubernetes, but they're not sure why. So, to that end, we're helping bring some of the interesting workloads that exists, open source workloads and putting those on top of Canonical Kubernetes. Yesterday, we press released a new product from Canonical, launched in conjunction with our partners at Rancher Labs, Which is the Cloud Native platform. The Cloud Native platform is Ubuntu plus Kubernetes plus Rancher. That combination, we've heard from customers and from users of Ubuntu inside and out. Everyone's interested in a developer work flow that includes open-source Ubuntu, open-source Kubernetes and open-source Rancher, Which really accelerates the velocity of development. And that end solution provides exactly that and it helps populate, that Kubernetes with really interesting workloads. 
>> Dustin, so we know Sheng, Shannon and the team, they know a thing or two about building stacks with open source. We've talked with you many times, OpenStack. Give us a little bit of compare and contrast, what we've been doing with OpenStack with Canonical, very heavily involved, doing great there versus the Cloud Native stacking. >> If you know Shannon and Sheng, I think you can understand and appreciate why Mark, myself and the rest of the Canonical team are really excited about this partnership. We really see eye-to-eye on open source principles First. Deliver great open source experiences first. And then taking that to market with a product that revolves around support. Ultimately, developer option up front is what's important, and some of those developer applications will make its way into production in a mission critical sense. Which open up support opportunities for both of us. And we certainly see eye-to-eye from that perspective. What we bring to bare is Ubuntu ecosystem of developers. The Ubuntu OpenStack infrastructure is a service where we've seen many of the world's largest organizations deploying their OpenStacks. Doing so on Ubuntu and with Ubuntu OpenStacks. With the launch of Kubernetes and Canonical Kubernetes, many of those same organizations are running their own Kubernetes along side OpenStack. Or, in some cases, on top of OpenStack. In a very few cases, instead of Openstack, in very special cases, often at the Edge or in certain tiny Cloud or micro Cloud scenarios. In all of these we see Rancher as a really, really good partner in helping to accelerate that developer work flow. Enabling developers to write code, commit code to GitHub repository, with full GitHub integration. Authenticate against an active directory with full RBAC controls. Everything that you would need in an enterprise to bring that application to bare from concept, to development, to test into production, and then the life cycle, once it gains its own life in production. 
>> What about the impact of customers? So, I'm an IT guy or I'm an architect and man, all this new stuff's comin' at me. I love my open source, I'm happy with space. I don't want to touch it, don't want to break it, but I want to innovate. This whole world can be a little bit noisy and new to them. How do you have that conversation with that potential customer or customer where you say, Look, we can get there. Use your app team here's what you want to shape up to be, here's service meshes and plugable, Whoa plugable (mumbles)! So, again, how do you simplify that when you have conversations? What's the narrative? What's the conversation like? >> Usually our introduction into the organization of a Fortune 500 company is by the developers inside of that company who already know Ubuntu. Who already have some experience with Kubernetes or have some experience with Rancher or any of those other-- >> So it's a bottoms up? >> Yeah, it's bottoms up. Absolutely, absolutely. The developer network around Ubuntu is far bigger than the organization that is Canonical. So that helps us with the intro. Once we're in there, and the developers write those first few apps, we do get the introductions to their IT director who then wants that comfy blanket. Customer support, maybe 24 by seven-- >> What's the experience like? Is it like going to the airport, go through TSA, and you got to take your shoes off, take your belt off. What kind of inspection, what is kind of is the culture because they want to move fast, but they got to be sure. There's always been the challenge when you have the internal advocate saying, "Look, if we want to go this way "this is going to be more the reality for companies." Developers are now major influencers. Not just some, here's the product we made a decision and they ship it to 'em, it's shifted. 
>> If there's one thing that I've learned in this sort of product management assignment, I'm a engineer by trade, but as a product manager now for almost five years, is that you really have to look at the different verticals and some verticals move at vastly different paces than other verticals. When we are in the tele close phase, We're in RFI's, requests for a quote or a request for information that may last months, nine months. And then go through entering into a procurement process that may last another nine months. And we're talking about 18 months in an industry here that is spinning up, we're talking about how fast this goes, which is vastly different than the work we do in Silicon Valley, right? With some of the largest dot-coms in the world that are built on Ubuntu, maybe an AWS or else where. Their adoption curve is significantly different and the procurement angle is really different. What they're looking to buy often on the US West Coast is not so much support, but they're looking to guide your roadmap. We offer for customers of that size and scale a different set of products something we call feature sponsorships, where those customers are less interested in 24 by seven telephone support and far more interested in sponsoring certain features into Ubuntu itself and helping drive the Ubuntu roadmap. We offer both of those a products and different verticals buy in different ways. We talked to media and entertainment, and the conversation's completely different. Oil and gas, conversation's completely different. >> So what are you doing here? What's the big effort at CloudNativeCon? >> So we've got a great booth and we're talking about Ubuntu as a pretty universal platform for almost anything you're doing in the Cloud. Whether that's on frame infrastructure as a service, OpenStack. People can coo coo OpenStack and point OpenStack versus Kubernetes against one another. 
We cannot see it more differently-- >> Well no, I think it's more that it's got clarity on where the community's lines are, because apps guys are moving off OpenStack, that's natural. It's really found the home, OpenStack, very relevant, huge production flow. I talk to Jonathan Bryce about this all the time. There's no coo-cooing OpenStack. It's not like it's hurting. Just to clarify, OpenStack is not going anywhere, it's just that there's been some comments about OpenStack refugees going to (mumbles), but they're going there anyway! Do you agree? >> Yeah, I agree, and that choice is there on Ubuntu. So infrastructure as a service, OpenStack's a fantastic platform; platform as a service, or Cloud Native development, Kubernetes is an excellent platform. We see those running side by side. Two-rack systems or a single rack, half of those machines are OpenStack, half of those are Kubernetes, and the same IT department manages both. We see IT departments that are all in on OpenStack. Their entire data center is OpenStack. And we see Kubernetes as one workload inside of that OpenStack. >> How do you see Kubernetes' impact on containers? A lot of people are coo-cooing containers. But they're not going anywhere either. >> It's fundamental. >> The ecosystem's changing, certainly the roles of each part (mumbles) is exploding. How do you talk about that? What's your opinion on how containers are evolving? >> Containers are evolving, but they've been around for a very long time as well. Kubernetes has helped make containers consumable, and Docker did to an extent. Before that, the work we've done around Linux containers, LXD and LXC, as well. All of those technologies are fundamental to it, and it takes tight integration with the OS. >> Dustin, so I'm curious. One of the big challenges I hear you face is the proliferation of deployments for customers. It's not just data center or even Cloud. Edge is now a very big piece of it.
How do you think containers help enable a little bit of that Cloud Native to go there, and what kind of stresses does that put on your product organization? >> Containers are adding fuel to the fire on both the Edge and the back end Cloud. What's exciting to me about the Edge is that every Edge device, every connected device, is connected to something. What's it connected to? A Cloud somewhere. And that can be an OpenStack Cloud or a Kubernetes Cloud, that can be a public Cloud, that could be a private implementation of that Cloud. But every connected device, whether it's a car or a plane or a train or a printer or a drone, it's connected to something, it's connected to a bunch of services. We see containers being deployed on Ubuntu on those Edge devices, as the packaging format, as the application format, as the multi-tenancy layer that keeps one application from DoS-ing or attacking or being protected from another application on that Edge device. We also see containers running the microservices in the Cloud on Ubuntu there as well. The Edge to me is extremely interesting in how it ties back to the Cloud, and to be transparent here, Canonical's strategy and Canonical's play is actually quite strong here, with Ubuntu providing quite a bit of consistency across those two layers. So developers working on those applications on those devices are often sitting right next to the developers working on those applications in the Cloud, and both of them are seeing Ubuntu helping them go faster. >> Bottom line, where do you see the industry going and how do you guys fit into the next three years, what's your prediction? >> I'm going to go right back to what I was saying right there. That the connection between the Edge and the Cloud is our angle right there, and there is nothing that's stopping that right now. >> We were just talking with Joe Beda, and our view is if it's a ubiquitous computing world, everything's an Edge. >> Yeah, that's right. That's exactly right.
>> (mumbles) is an Edge. A light in a house is an Edge with a processor in it. >> So I think the data centers are getting smarter. You wanted a prediction for next year: the data center is getting smarter. We're seeing autonomous data centers. We see data centers using Metal as a Service, MAAS, to automatically provision those systems and manage those systems in a way that makes hardware look like a Cloud. >> AI and IoT, certainly two topics that are really hot trends, that are very relevant as they're changing storage and networking; those industries have to transform. Amazon's tele (mumbles), everything like Lambda and serverless, you're starting to see infrastructure as code take shape. >> And that's what sits on top of Kubernetes. That's what's driving Kubernetes adoption, those AI, machine learning, artificial intelligence workloads. A lot of media and transcoding workloads are taking advantage of Kubernetes every day. >> Bottom line, that's software. Good software, smart software. Dustin, thanks so much for coming on theCube. We really appreciate it. Congratulations. Continued developer success. Good to have a great ecosystem. You guys have been successful for a very long time. As the world continues to be democratized with software, as it gets smarter, more pervasive, and Cloud computing, grid computing, Unigrid, whatever it's called, it is all done by software and the Cloud. Thanks for coming on. It's theCube live coverage from Austin, Texas, here at KubeCon and CloudNativeCon 2017. I'm John Furrier, with Stu Miniman. We'll be back with more after this short break. (lively music)

Published Date : Dec 7 2017



CUBEConversation with John Furrier & Peter Burris


 

(upbeat music) >> Hello everyone, welcome to a special CUBE Conversation here at the SiliconANGLE Media, CUBE and Wikibon studio in Palo Alto. I'm John Furrier, co-founder of SiliconANGLE Media, Inc. I'm here with Peter Burris, head of research, for a special Amazon Web Services re:Invent preview. We just had a great session with Peter's weekly Action Item roundtable meeting with analysts surrounding the trend. So that'll be up on YouTube, check that out. Really in-depth conversation around what to expect at Amazon Web Service's re:Invent coming up in about a week and a half, and great content in there. But I want to go here, Peter, have a conversation with you back and forth, 'cause we've been having a debate, ping-ponging back and forth around what we think might happen. We certainly have some visibility in some of the news that might be happening at re:Invent. But you guys have been doing a great job with the research. I want to get your thoughts and I want to just have a conversation around Amazon Web Services. Continuing to kick ass, they've had a run on their own for many, many years now. But they got competition. The visibility in Wall Street is clear. They know the profitability. The numbers are all taking shape. Microsoft stock's up from 26 to wherever it is now. It's clear the cloud is the game. That's what's going on, and you have, again, the top three: Amazon, Azure, Google. And then, you can argue four through seven, including Alibaba and others, big game going on. This is causing a lot of opportunities, but disruption to business models, technology architectures, and ultimately how customers are going to deploy their IT and/or their digital business. Your thoughts? >> I think one of the most interesting things about this, John, is that in the first 10 years of the cloud, it was implied that it was a cost play. Don't do IT anymore, it's blah, blah, blah, blah, blah, do the cloud, do AWS. 
And I think that because the competition is so real now, and a lot of businesses are starting to realize what actually could be done if you're able to use your data in new and different ways, and dramatically accelerate and transform your businesses, that all this has become a value play. And the minute that it becomes a value play, in other words, new types of work, new types of capabilities, then for Amazon, for AWS, it becomes an ecosystem play. So I think one of the things that's most interesting about this re:Invent is, in my opinion, it's going to be the first one where it's truly a strong ecosystem story. It's about how Amazon is providing services that the rest of the world's going to be able to consume and create new types of value through the Amazon ecosystem. >> Great point, I want to bring up a topic that we've been talking about on theCUBE in some of my other CUBE Conversations, as it relates to the ecosystem. In all these major waves, and we've seen many, you've covered many waves as an analyst over the years, there's always been a gestation period between a disruptive enabler, you could talk about TCP/IP, you could talk about HTTP, there's always a period of gestation. Sometimes it's accelerated now more than ever, but you start to see the impact of that disruptive enabler. Certainly cloud, and what Amazon has done, has been a disruptive enabler. Value's been created, more value's being created, more and more every day we're seeing it. You're starting to see new things pop up from this gestation period, problems that become opportunities. And competitors that are now partners, partners that are now competitors. So a full changeover is happening in the landscape because of it. So the question for you is, what are you seeing, given your experience in seeing other waves before? What is starting to become clear, in terms of visibility, as known points, obvious trends that are happening with this cloud enablement?
>> Well, let's talk about perhaps one of the biggest differences between traditional IT and cloud-oriented IT. And to kind of tell that story, I'll do something that a lot of people don't think about when they think about innovation. But if you really think about innovation, you got to break it down into two distinct acts. There's the act of inventing, which is an engineering act. It's, how do I take knowledge of physics, or knowledge of sociology, or knowledge of something, and invent something new that reflects my understanding of the problem and creating a solution? And then there's an innovation act, which is always a social act. It's getting people to change the way they do things. Businesses to change the way they do things. That's a social act. And one of the interesting things about the transition, this transition, this cloud-based transition, is we're moving into a world where the social acts are much more synonymous with the actual engineering act. And by that I mean, when something is positioned as a service, that the customer gets and just acts on it because they're now renting a service, that is truly an innovation process. You are adopting it as a service and embedding it more quickly. What we're seeing now in many respects, going back to your core point, is everything being done as a service, it means that the binding of the inventing and the innovating is much more strong, and much more immediate. And AWS re:Invent's been a forum where we see this. It's not just inventing or putting forward a new product that may get out to market in six months or nine months. It is, here is a service, people are consuming it, we're embedding it in our other AWS stuff. We're putting this AI into how folks are going to manage AWS, and the invention innovation process collapses very quickly. >> That's a good point. I would just give you some validation on that by seeing other trend points that talk about that social piece. 
You hear about social engineering in cybersecurity, that's now a big part of how hackers are getting in, through social engineering. Open-source software is a social engineering act, 'cause it's got a community dynamic. Blockchains, huge social engineering around how these companies are forming. So I would 100% agree, that's a great, great point. The other thing I'd ask you to elaborate on is something that is a trend that's obvious, 'cause everyone talks about the old way, new way. Legacy is being disrupted. New players like Amazon are disrupting the people like Oracle. And Oracle thinks they're winning, Amazon thinks they're winning. The scoreboards aren't the same, but here's the question. Technology used to be built to solve technology problems. You build a box, you ship it, and it works. Software, craft it, ship it. It either works or it doesn't work. Now software and technology can be used to solve non-technology problems. This brings it to a whole nother level when you take your social comment on invention. This is now a new dynamic that tended to be, I don't want to say minimized in the old days, but the old way was, load some boxes, rack it up, and you got a PC on your desk. We could work effectively on a network. Now it's completely going after non-technology problems, healthcare, verticals. >> Here's the way we look at it, John. >> John: What's your thoughts on that? >> Our simple bromide is that we are in the midst of the transition in computing. And by that I mean, for the first 50 years we talked about known process, unknown technology. By that I mean, for example, have you ever seen a GAAP accounting convention wandering out in the wild? No, it doesn't exist, it's manmade, it's artifice. There's nothing wrong with it. We all agree what an accounting thing is, but it's all highly stylized and extremely well-defined. It's a known process.
And the first 50 years were about taking those known processes in accounting, and in HR, and a lot of other domains, and then saying, okay, what's the right technology to automate as much of this as possible? And we did a phenomenal job of it. It started with mainframes, then client/server. And was it this server, or that server? Unix or something else? TCP/IP or some other network? But that was the first 50 years of computing. Now we've got a lot of those things out. In fact, cloud kind of summarizes and puts forward a common set of experiences, still a lot of technology questions are going to be important. I don't want to suggest that that's not important. But increasingly it's, okay, what are the processes that we're going to try to automate? So we're now in a world where the technology's much more known, but the processes are incredibly unknown. So we went from a known-- >> So what is that impact to the cloud players, like Amazon? Because what I'm trying to figure out is, what will be the posture on the keynotes? Is it going to be a speeds and feeds show? Or is it going to be much more holistic, business impact, or societal impact? >> The obvious one is that Amazon increasingly has to be able to render these common building blocks for infrastructure up through to developers, and a new way of thinking about how do you solve problems. And so a lot more of what we're likely to see this year is Amazon continuing to move up the stack and say, here's how you're going to look at a problem, here's how you're going to solve the problem, here's the tooling, and here's the ecosystem that we're going to bring along with us. So it's much more problem-solving at the value level, going back to what we talked about earlier, problem solving that creates new types of business value, as opposed to problem solving to reduce the costs of existing infrastructure. >> Now we have a VIP chat on crowdchat.net/awsreinvent. If you want to participate, we're going to open it. 
We're going to keep it open for a long time, weigh in on that. We just had a great research meeting that you do weekly called Action Item, which is a format that's designed to flush out the latest and greatest research that's tied to current events or trends. And then unpack the action item for buyers and customers, large businesses in the industry. What's the summary for the meeting we just had here? A lot of stuff being talked about, Unigrid, we're talking about under the hood with data, a lot of good stuff. What's the bottom line? How do you up-level it for the CIO or CXO that's watching or listening, doesn't have time to get in the weeds? >> Well, I think the three fundamental conclusions that we reached this year is that we expect AWS to spend a lot of time talking about AI, both as a generalized way of moving up the stack, as we talked about. Here's the services the developers are going to work with. Here's the tool kits that they're going to utilize, et cetera, to solve more general problems. But also AI being embedded more deeply within AWS and how it runs as a service, and how it integrates and works with other clouds. So AI machine learning for IT operations management through AWS. So AI's going to be a major feature. The second one we think that we're going to hear a lot about is, Amazon's been putting forward this notion that they were going to facilitate migration of Legacy applications into AWS. That's been a slog, but we expect to see a more focused effort by going after specific big software houses, that have large installed bases of on-premise stuff, and see if they can't, with the software house, bring more of that infrastructure, or more of those installations, into AWS. Now, I don't want to call VMware an application house, but not unlike what they did with VMware about a year and a half ago. The last one is that we don't think that Amazon is going to put forward a general purpose IoT Edge solution this year. 
We think that they're going to reveal further what their approach to those problems is, which is, bigger networks, more PoPs. >> More scale. >> More scale, a lot of additional services for building applications that operate within that framework, but not that kind of, here's what the hybrid cloud by Amazon is going to look like. >> Let's talk about competition in China. Obviously, they kind of go hand in hand. Obviously, Andy Jassy and the Amazon Web Services team are seeing, for the first time, massive competition. Obviously Microsoft's stock, as I might have mentioned earlier. So you're starting to see the competition wheels cranking. Oracle's certainly been all over Amazon, we know that. Microsoft's just upping their game, trying to catch up, and their numbers are looking good. You got SAP playing the multicloud game. You got Google differentiating on things like TensorFlow and other AI and developer tools. This is interesting. This is the first time Amazon's really had some competition, I won't say nipping at its heels, but putting pressure. It's not the one game in town. People are talking multicloud, kind of talking about lock-in. And then you got the China situation. You got Alibaba, technically the number four cloud by some standards. Some will argue that position. The point is, it's massive. >> Yeah, I think it's by any reasonable standard. They are a big cloud player. >> So let's go through that. China, let's start with China. Amazon just announced, and the news was broken by the Wall Street Journal, who actually got it wrong and didn't correct their story for almost 24 hours. Really kind of screwed up the market, everyone thought that they were selling AWS to China. It was a unique deal. Rob Hof and the team reported and corrected, >> Peter: At SiliconANGLE. >> At siliconangle.com, we got it right, and that is that it was a $300 million data center deal, not intellectual property, but this is the China playbook. >> They sold their physical assets.
They didn't sell their IP. They didn't sell the services or the ability to provide the services. >> Based upon my reporting, and this is again still, the facts on the ground are loose, 'cause China, it's hard to get the data. But from what I can gather, they were already doing business in China. Apple went through this, even though they're hardware, they still have software. Everyone has that standoff, but ultimately getting into China requires a government-owned partner, or a Chinese company. Government-owned is quasi, you could argue that. And then they expand from there. Apple now has, I think, six stores or more in Shanghai and all over China. So this is a growth opportunity for Amazon if they play it right. Thoughts on that? I mean, obviously we cover a lot of the Chinese companies here. >> Well, I don't want to present myself as an expert on this, John. I've been watching, the Silicon Valley ANGLE reporting has been my primary information source. But I think that it's interesting. We talk about hard assets and soft assets. Hard assets are buildings, machines, and in the IT world, it's the hardware, it's the building, et cetera. And when China talks about ownership, they talk about ownership of those assets. And it sounds to me anyway, like AWS has done a very interesting thing, where they said, okay, fine, you want 51% of the hard assets? Have 51% of the hard, have 100% of the hard assets. But we are going to decide what those assets look like, and we are going to continue to own and operate the software that runs on those assets. So it sounds like, through that, they're going to provide a service into China, whatever the underlying hardware assets are running on. Interesting play. >> Well, we get the story right, and the story is, they're going into China, and they had to cut a deal. (laughs) That's the story. >> But for the hard assets. >> For the hard assets, they didn't get intellectual property. I think it's a good deal for Amazon. 
We'll see, we're going to watch that closely. I'm going to ask Andy Jassy that specific question. Now on the competition. The FUD is off the charts, fear, uncertainty and doubt. You see that in competitive markets, the competition throwing FUD. Sometimes it's really blatantly obvious FUD, sometimes it's just how they report numbers. I've been, not critical, but pointing out that Azure includes Office 365. Well, when you start getting down that road, do you bundle in Salesforce as a cloud player? So all these things start to-- >> Peter: Yeah. >> Of course, so what is true cloud? Are people parsing the categories too narrowly, in your opinion? What's the opinion from the research team on what is cloud? >> Well, what is cloud? We like to talk about the cloud experience where the data demands, for your business. So the cloud experience is basically, it's self-provisioning, it's a service, it is continuous, and it allows you a range of different options about what assets you do or do not want to own, according to the physical realities, the legal realities, and intellectual property realities of the data that runs your business. So that's kind of what we mean by cloud. So let's talk about a couple of these. First-- >> Hold on, before you get to those, Andy Jassy said a couple years ago, he believes all enterprises will move to the cloud. (laughs) I mean, he was kind of, of course, he's buying 100% Amazon, and Amazon's defined as cloud. But he's kind of referring to the enterprise on-premise current business model, and the associated technology, moving to cloud. Now, I'm not sure he would agree that the true private cloud is the same as Amazon. But if he cuts a deal with VMware like he did, is that AWS? So will his prediction come true? Ultimately, everyone's saying that will never be full cloud.
Our advice to customers is don't think about moving your enterprise to the cloud, think about moving the cloud to your enterprise. And I think that's the whole basis for the hybrid cloud conversation that we're having. And the reason why we say the cloud experience where your data demands, is that there are physical realities that every enterprise is going to have to deal with, latency, bandwidth. There are legal realities that every enterprise is going to have to deal with. GDPR, what it means to handle privacy and handle data. And then there's finally intellectual property realities that every enterprise is going to have to deal with. Amazon not wanting to sell its IP to a Chinese partner, to comply with Chinese laws. Every business faces these issues. And they're not going to go away. And that's what's going to shape every businesses configuration of how they're using the cloud. >> And by the way, when I did ask him that question, it might have been three years ago. I can't actually remember, I'm losing my mind here. But at that time, cloud was not yet endorsed as the viable way. So he might have been referring to, again, I'm going to ask him this when I see him in my one on one. He might have been referring to old enterprise ways. So I mean-- >> Let's be honest. Amazon has done such an incredible job of making this a real thing. And our opinion is that they're going to continue to grow as fast as the cloud industry, however we define it. What we tend to define, we think that SaaS is going to be a big player, and it's going to be the biggest part of the player. We think Infrastructure as a Service is going to continue to be critically important. We think that the competition for developers is going to heat up in a big way. AI, machine learning, deep learning, all of those things are going to be part of that competition. In our view of things, we're going to see SaaS be much bigger in a few years. 
We're going to see this notion of true private cloud, which is a cloud experience on-premise with your assets, because you need to control your data in different ways, is going to be bigger than IaaS, but it's all going to be cloud. >> I mean, in all poise, my opinion and what I'm looking for this year, Peter, just to kind of wrap up the segment is, I think, and if you look at Amazon's new ad campaign, the builders, that's a topic that we talked about last year. >> Peter: Developers. >> Developers. We are living in a world where DevOps is now going mainstream. And there are still cultural issues around, what does that actually mean for a business? The personnel, how they operate, and some of the things you guys point out in your true private cloud report, illuminates those things. And that is, whoever can automate and create great tooling for the DevOps culture going forward, whatever that's called, new developers, new normal? Whatever it is, that to me is going to be the competitive landscape. >> Let me parse that slightly, or put it slightly differently. I think everybody put forward this concept of DevOps as, hey, business, redefine yourself around DevOps. And it hasn't gone as well as a lot of people thought it would. I think what's really going to happen, I don't think you're disagreeing with me, John, is that we need to bring more developers into cloud building that cloud experience, building more of the application value, building more of the enterprise value, in cloud. Now that's happening, and they are going to start snapping this DevOps concept into place. But I think it really is going to boil down to, how are developers going to fully embrace the cloud? What's it going to look like? It's going to be multicloud. Let's go back to the competition. Microsoft, you're right, but they're a big SaaS player. Companies are building enormous relations, big contracts, with Microsoft. They're going to be there. 
Google, last year they couldn't get out of their own way. Diane Greene comes in, we see a much more focused effort. There's some real engineering that's going on for Google Cloud Services, or Platform, that wasn't there before. Google is emerging as a big player. We're having a lot of conversations with users, where they're taking Google very seriously. IBM is still out there, still got some things going on. You've already mentioned Alibaba, Tencent, a whole bunch of other players around the globe. This is going to be a market that's going to be very, very contentious, but Amazon's going to get the first share. >> And I think we pointed out years ago, that DevOps will merge into cloud developers. You nailed it, I think you just said it. Okay, Peter Burris, here for the Amazon Web Services re:Invent preview. Of course theCUBE will be there with two sets. We're going to have over 75 interviews over the course of 3 days. In the hall, look for theCUBE, if you've watched this video and you want to come by. If you don't have a ticket, it's sold out. But come by if you have a ticket. We'll be there, in Las Vegas, for Amazon Web Services re:Invent. I'm John Furrier, thanks for watching this CUBE Conversation from Palo Alto. (upbeat techno music)

Published Date : Nov 17 2017


SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Peter BurrisPERSON

0.99+

AmazonORGANIZATION

0.99+

JohnPERSON

0.99+

IBMORGANIZATION

0.99+

AppleORGANIZATION

0.99+

ShanghaiLOCATION

0.99+

ChinaLOCATION

0.99+

Andy JassyPERSON

0.99+

Diane GreenePERSON

0.99+

PeterPERSON

0.99+

OracleORGANIZATION

0.99+

AlibabaORGANIZATION

0.99+

GoogleORGANIZATION

0.99+

AWSORGANIZATION

0.99+

Rob HofPERSON

0.99+

John FurrierPERSON

0.99+

MicrosoftORGANIZATION

0.99+

Amazon Web ServicesORGANIZATION

0.99+

SiliconANGLE Media, Inc.ORGANIZATION

0.99+

TencentORGANIZATION

0.99+

$300 millionQUANTITY

0.99+

100%QUANTITY

0.99+

Las VegasLOCATION

0.99+

Palo AltoLOCATION

0.99+

51%QUANTITY

0.99+

last yearDATE

0.99+

nine monthsQUANTITY

0.99+

3 daysQUANTITY

0.99+

six storesQUANTITY

0.99+

Bruce Arthur, Entrepreneur, VP Engineering, Banter.ai | CUBE Conversation with John Furrier


 

(bright orchestral music) >> Hello everyone, and welcome to theCUBE Conversations here in Palo Alto Studios. For theCUBE, I'm John Furrier, the co-founder of SiliconANGLE Media inc. My next guest is Bruce Arthur, who's the Vice President of engineering at Banter.ai. Good friend, we've known each other for years, VP of engineering, developer, formerly at Apple. >> Yes. >> Worked on all the big products; the iPad-- had the tin foil on your windows back in the day during Steve Jobs' awesome run there. Welcome to theCUBE. >> Thank you, it's good to be here. >> Yeah, great, you've got a ton of experience and I want to get your perspective as a developer, VP of engineering, entrepreneur, you're doing a startup around AI. Let's have a little banter. >> Sure. >> Banter.ai is a little bit of a chat bot, but the rage is DevOps. Software delivery models change, infrastructure as code, cloud computing. Really a renaissance of software development going on right now. >> It is, it's changing a lot. >> What's your view on this? >> Well, so, years and years ago you would work really hard on your software. You would package it up in a box and you'd send it over the wall and hope it works. And that seems very quaint now, because now you write your software, you deploy it the first day, you change it six times that day, you're A/B-testing it, you're driving it forward; it's so much more interactive. It does require a different skillset. It also, how do I say this carefully? It used to be very easy to have high craft and make a very polished product, but you didn't know if it was going to work. Today you know if it's going to work, but you often don't get to making sure it's high quality, high craft, high value.
>> John: So, the iteration >> Exactly, the iteration runs so fast, which is highly valuable, but you sort of, just a little bit, miss the "is this really something I am proud of and can really work with?" because, you know, now the product definition can change so quickly, which is awesome, but it is a big change. >> And that artisan crafting thing is interesting, but now some are saying that the UX side is interesting because, if you get the back end working, and you're iterating, you can still bring that artisan flavor back. We heard that cloud computing vendors like Amazon, and I was just in China for Alibaba, they're trying to bring this whole design artisan culture back. Your thoughts on the whole artisan craft in software, because now you have two stages: you have deploy, iterate, and then ultimately polish. >> Right, so, I think it's interesting. It used to be, engineering is so expensive and time-consuming, you have to design it upfront, you make one version of it, and you're done. That has changed now that engineering has gotten easier. You have better tools, we have better things, you can make six versions. And that used to be, so back in the day at Apple, you would make six versions, five of which Steve would hate and throw out, and eventually they would get better and better and then you would have something you're proud of. Now those are just exposed. Now everybody sees those; it's a very different process. So, I think, the idea is that engineering used to be this scarce resource. It's becoming easier now to have many versions and have more engineers working on stuff, so now it is much more: can I have three design teams, can they compete, can they all make good ideas, and then who's going to be the editor? Who evaluates them and decides, I like this from this one, I like that, and now let's put this together to make the right product.
>> It's publicly out there that he would really look at the design side. Was it Waterfall-based, was it Agile, Scrum, did you guys, was it like, do you lay it all out in front of him and he points at it? What were some of the workflows like with Steve Jobs? >> So, when he was really excited about something he would want to meet with them every week. He'd want to see progress every week. He'd give lots of feedback every week, there'd be new ideas. It was very Steve-focused. I think the more constructive side of it was the design teams were always thinking about what can we build, how do we put it in front of him, and I remember there was a great quote from a designer that said, it's not that Steve designs great things, it's that you show him three things, and if you show him three bad things, he'll pick the least bad. If you show him three great things, he'll pick the most great. But it was more about, you've got to iterate in the process, you've got to try ideas, you take ideas from different people, and some of them, like, they sound like a great idea. When we talk, it sounds really good. You build it, and you're like, that's just not right. So, you want, how do I say this? You don't want to lock yourself in up front. You want to imagine them, you want to build them, you want to try 'em.
Now it's applied to software, and he was a marketing genius about sort of knowing what people were going to go for, but there was a little bit of a myth to it, that there's one man designing everything. That is a very saleable marketing story. >> The mythical man. (laughs) >> Well, it's powerful, but no, there's a lot of people, and it's about getting the best work of all those people. >> I mean, he said on some of the great videos I've watched on YouTube over the years: hire the best people, only work with the best, and they'll bring good stuff to the table. Now, I want to bring that kind of metaphor one step further for this great learning lesson; again, it's all well-documented on YouTube, plenty of Steve videos there. But now when you go to DevOps, you mention the whole quality thing and you've got to ship fast, iterate. You know, there's a lot of "move fast and break stuff," as Zuckerberg of Facebook would say, although he's edited his tune to say move fast and be reliable. (laughing) Welcome to the enterprise, welcome to software and operations. This is now a scale game on the enterprise side, 'cause, you know, you start seeing open source software grow so much now, where a lot of the intellectual property might be only 10% of software. >> Right. >> You might be using other pieces. You're packaging it so that when you get it to the market, how do you bring that culture? How do you get that innovation of, okay, I'm iterating fast, how do I maintain the quality? What are some of your thoughts on that? Because you've got machine learning out there, you've got these cool things happening. >> Yup. So, you want, how do I say this? You just, you really need to leave time to schedule it. It needs to be in your list. There's a lot of figuring out what are we going to build, and you have to try things, iterate things, see if they resonate with consumers. See if they resonate with people who want to pay. See if they resonate with investors.
You have to figure that out fast, but then you have to know that, okay, this is a good prototype. Now I have to make it work better because the first version wouldn't scale well; now it has to scale, now it has to work right for people, now you have to have a review of: here's the bugs, here's the things that are not working. Why does this chatbot stop responding sometimes? What is causing that? Now, the great story is, with good DevOps, you actually have a system that's very good at finding and tracking those problems. In the old world, the old world with the shrink-wrap software, you'd throw it over the fence. If it misbehaves, you will never know. Today you know. You've got alerts, you've got pagers going off, you've got logs. >> It's instrumented big-time. >> Yeah, exactly, you can find that stuff. So, you can make very high-quality software because you have so much more data about what's going on with it; it's nice. And actually, chatbot software has this fascinating little side effect: because it's all chats and it's all text, there are no irreproducible bugs. You can go back and look at exactly what happened. I have a recording, I know exactly what happened, I know exactly what came in, I know what came out, and then I know that this failure happened. So, it's very reproducible. Sort of, it's nice, it doesn't always work this way, but it's very easy to track down problems. >> It's event-based, it's really easy to manage. >> Exactly, and it's just text. You can just read it. It's not like I have to debug hacks; it's just, these things were said and this thing died. >> No core dumps. (laughs) >> No, there's nothing that requires sophisticated analysis, well, the code is one thing, but, like, the sequence of events is very human-readable, very understandable. >> Alright, so let's talk about the younger generation. So, we've been around the block, you and I.
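The replay idea Bruce describes, where every chat is just a text event stream and any failure can be reproduced by re-reading the log, can be sketched in a few lines of Python. This is a hypothetical `ChatLog`, a minimal sketch of the pattern, not Banter.ai's actual implementation:

```python
import json
from datetime import datetime, timezone

class ChatLog:
    """Append-only log of chat events; a bug report is just a slice of it."""

    def __init__(self):
        self.events = []

    def record(self, direction, text):
        # direction is "in" (user message) or "out" (bot reply)
        self.events.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "direction": direction,
            "text": text,
        })

    def replay_inputs(self):
        # Reproducing a failure means re-feeding the same "in" events
        # through the bot -- no core dumps, no irreproducible state.
        return [e["text"] for e in self.events if e["direction"] == "in"]

    def dump(self):
        # Human-readable: you can literally read what was said and where it died.
        return json.dumps(self.events, indent=2)

log = ChatLog()
log.record("in", "hi there")
log.record("out", "Hello! How can I help?")
log.record("in", "book a table for two")
print(log.replay_inputs())  # ['hi there', 'book a table for two']
```

Because the log is plain text, the "sequence of events" stays human-readable, and any reported failure comes with the exact inputs that produced it.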
We've talked, certainly many times around town, about the shifts, and we love these new waves. A lot of great waves coming in; we've seen many waves. What's going on, in your mind, with the younger generation? Because there's some exciting things happening. Decentralized internet. >> Bruce: Yup. >> There's blockchain, getting all the attention. Outside of the hype, Alpha VCs, Alpha engineers, Alpha entrepreneurs are really honing in on blockchain because they see the potential. >> Sure. >> Early people are seeing it. Then you've got cloud, obviously unlimited compute potentially, the new, you know, kind of agile market. All these young guys, they never shipped, actually never loaded Linux on a server. (laughing) So, like, what are you seeing for the younger guys? And what do you see as someone who's experienced, looking down at the next, you know, 20 year run we see? >> So, I think what I see that's most exciting is that we now have people solving very non-technical problems with technology. I think it used to be, you could build a computer, you could write code, but then, like, your space was limited to the computer in front of you. Like, I can do input and outputs. I can put things on the screen, I can make a video game, but it's in this box. Now everyone's thinking much bigger, solving bigger problems. >> John: Yeah, healthcare, we're seeing verticals. >> Yeah, healthcare's a massive one. You can, operational things, shipping products. I mean, who would've thought Amazon was going to be delivering things, basically. I mean, they're using technology to solve the physical delivery of objects. The space of what people are tackling is massive. It's no longer just about silicon and programming; it's sort of, any problem out there, there's someone trying to apply technology, which is awesome, and I think that's because these people, these youngsters, they're digital natives. >> Yeah.
>> They've come to expect that, of course video conferencing works, of course all these other items work. That I just need to figure out how to solve problems with them, and I'm hopeful we're going to see more human-sized problems solved. I think, you know, we have, technology has maybe exacerbated a few things and dislocated, cost a lot of people jobs. Disconnected some people from other sort of stabilizing forces, >> Fake news. (laughs) >> Fake news, you know, we need-- >> John: It's consequences, side effects. >> I hope we get people solving those problems because fake news should now be hard to solve. They'll figure it out, I think, but, like, the idea is, we need to, technology does have a bit of a responsibility to solve, fix some of the crap that it broke. Actually, there's things that need, old structures, journalism is an old profession. >> Yeah. >> And it used to actually have all these wonderful benefits, but when the classified business went down the tubes, it took all that stuff down. >> Yeah. >> And there needs to be a venue for that. There needs to be new outlets for people to sort of do research, look things up, and hold people to account. >> Yeah, and hopefully some of our tools we'll be >> I hope so. >> pulling out at Silicon Angle you'll be seeing some new stuff. Let's talk about, like just in general, some of the fashionable coolness around engineering. Machine learning, AI obviously tops the list. Something that's not as sexy, or as innovative things. >> Sure. >> Because you have machines and industrial manufacturing plant equipment to people's devices. Obviously you worked at Apple, so you understand that piece, with the watch and everything. >> Yup, >> So you've got, that's an internet, we're things, people are things too. So, machines and people are at the edge of the network. So, you've got this new kind of concept. What gets you excited? Talk about how you feel about those trends. >> So, there's a ton going on there. 
I think what's amazing is the idea that all these sensors and switches and all the remote pieces can start to have smarts on them. I think the downside of that is some of the early IoT stuff, you know, has a whole open SSL stack in it. And, you know, that can be out of date, and when you have security problems with that now your light switch has access to your tax returns and that's not really what you want. So, I think there's definitely, there's a world coming, I think, at a technical level, we need to make operating systems and tools and networking protocols that aren't general purpose because general purpose tools are hackable. >> John: Yeah. >> I need to have a sensor and a switch that know how to talk to each other, and that's it. They can't rewrite code, they can't rewrite their firmware, they can't, like, I want to be able to know that, you have a nice office here, if somebody came in and tried to hack your switches, would you ever know? And the answer's like, you'd have no idea, but when you have things that are on your network and that serve you, if they're a general, if they're a little general purpose computing device, they're a mess. Like, you know, a switch is simple. A microphone, a microphone is simple. There's an output from it, it needs, I think we, >> So differentiated software for device. >> Well, let's get back to old school. You studied operating systems back in the day. >> Yeah. >> A process can do whatever the hell it wants. It can read from memory, it can write to disk, it can talk to all these buses. It's a very, it can do, it's very general purpose. I don't want that in my switch. I want my switch to be sort of, much more of these old little micro-controller. >> Bounded. >> Yeah, it's in a little box. I mean, so the phone and the Mac have something called Sandbox, which sort of says, you get a smaller view of the world. 
You get a little piece of the disk, you can't see everything else, and those are parts of it, but I think you need even more. You need, sort of, this really, I don't want a general purpose thing, I want a very specific thing that says I'm allowed to do this and I'm allowed to talk to that server; I don't have access to the internet. I've got access to that server. >> You mentioned operating systems. I mean, obviously I grew up in the computer science genre of the '80s and you did as well. That was a revolution around Unix. >> Yes. >> And then Berkeley, BSD, and all that stuff that happened around the systems world, operating systems, was really the pioneers in computing at that time. It's interesting with cloud, it's almost a throwback now to systems thinking. >> Bruce: It's true, yeah. >> You know, people looking at, and you're discussing it. >> Bruce: Yeah, Yeah. >> It's a systems problem. >> Yeah, it is. >> It's just not in a box. >> Right, and I think we witnessed the, let's get everyone a general purpose computer and see what they can do. And that was amazing, but now you're like I don't want everything to be a general I want very specific, I want very little thing, dedicated things that do this really well. I don't want my thermostat actually tracking when I'm in the house. You know, I want it to know, eh, maybe there's someone in the house, but I don't want it to know it's me. I don't want it reporting to Google what's going on. I want it to track my temperature and manage that. >> Our Wikibon team calls the term Unigrid, I call it hypergrid because essentially it's grid computer; there's no differentiation between on-premise and cloud. >> Right. >> It's one pool of resource of compute and things processes. >> It is, although I think, and that's interesting, you want that, but again you want it, how do I say this? I get a little nervous when all of my data goes to some cloud that I can't control. Like, I would love if, I'll put it this way. 
If I have a camera in my house, and imagine I put security cameras up, I want that to sort of see what's going on, I don't want it to publish the video to anywhere that's out of my control. If it publishes a summary that says, oh, like, someone came to your door, I'm like, okay, that's a good, reasonable thing to know and I would want to get that. So, Palo Alto recently added, there's traffic cameras that are looking at traffic, and they record video, but everyone's very nervous about that fact. They don't want to be recorded on video. So, the camera, this is actually really good, the camera only reports number of cars, number of bikes, number of pedestrians, just raw numbers. So you're pushing the processing down to the end and you only get these very anonymous statistics out of it and that's the right model. I've got a device, it can do a lot of sophisticated processing, but it gives nice summary data that is very public, I don't think anyone's really >> There's a privacy issue there that they've factored into the design? >> Yes, exactly. It's privacy and it's also the appropriateness of the data, you don't want, yeah, people don't want a camera watching them when they go by, but they're happy and they're like, oh, yeah, that street has a big increase in traffic, And there's a lot of, there were accidents here and there's people running red lights. That's valuable knowledge, not the fact that it's you in your Tesla and you almost hit me. No. (laughs) >> Yeah, or he's speeding, slow down. >> Exactly, yeah, or actually if you recorded speeders the fact that there's a lot of speeding is very interesting. Who's doing it, okay, people get upset if that's recorded. >> Yeah, I'm glad that Palo Alto is solving their traffic problem, Palo Alto problems, as we say. In general, security's been a huge issue. We were talking before we came on, about just the security nightmare. >> Bruce: Yes. >> A lot of companies are out there scratching their heads. 
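The traffic-camera pattern Bruce describes, classifying at the edge and publishing only anonymous counts, never raw video or identities, can be sketched like this. The class and method names are hypothetical; the actual Palo Alto system is not public code:

```python
from collections import Counter

class EdgeTrafficSensor:
    """Hypothetical edge device: detection happens on the device,
    and only aggregate statistics ever leave it."""

    def __init__(self):
        self.counts = Counter()

    def observe(self, label):
        # `label` would come from an on-device classifier
        # (e.g. "car", "bike", "pedestrian"); frames are discarded here.
        self.counts[label] += 1

    def publish_summary(self):
        # Upstream sees raw numbers only -- no plates, no faces, no video.
        summary = dict(self.counts)
        self.counts.clear()
        return summary

sensor = EdgeTrafficSensor()
for label in ["car", "car", "bike", "pedestrian", "car"]:
    sensor.observe(label)
print(sensor.publish_summary())  # {'car': 3, 'bike': 1, 'pedestrian': 1}
```

Pushing the processing down to the endpoint is what makes the reported data both useful and anonymous: the summary answers "how much traffic?" without ever recording "who?".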
There's so much digital transformation happening, that's the buzzword in the industry. What does that mean from your standpoint? Because engineers are now moving to the front lines. Developers, engineering, because now there's a visibility to not just the software, it's an end goal. They call it outcome. Do you talk to customers a lot, through your entrepreneurial venture, around trying to back requirements into product and yet deliver value? Do you get any insight from the field on the kind of problems, you know, businesses are generally trying to solve with tech? >> So, that's interesting. I think when we try to start tech companies, we usually have ideas and then we go test that premise on customers. Perhaps I'm not as adaptable as I should be. We're not actually going to customers and asking them what they want. We're asking them if this is the kind of thing that would solve their problems. And usually they're happy to talk to us. The tough one, then, is are they going to become paying customers; there's talking and there's paying, and they're different lines. >> I mean, paying certainly is validation. >> Exactly, that's when you really know that they care. It is, it's a tough question. I think there's always a category of entrepreneur that's very knowledgeable about a small number of customers and they solve their problems, and those people are successful. They often are more services-based, but they're solving problems because they know people. They know a lot of people, they know what their pain points are. >> Alright, so here's the real question I want to know: have you been back to Apple in the new building? >> Have I been to, I have not been in the spaceship. (laughing) I have not been in the spaceship yet.
I actually understand that in order to have the event there, they actually had to stop work on the rest of the building, because the construction process makes everything so dirty, and they did not want everyone to see dirty windows. So they actually halted the construction, they scrubbed down the trees, they had the event, and now it's, but now it's back. >> Now it's back to, >> So, I'll get there at some point. >> Bruce Arthur is the Vice President of engineering at Banter.ai, entrepreneur, formerly of Apple, good friend. Final question for you: what are you excited about these days, as you look out at the tooling and the computer science and the societal impact that is seen with cloud and all these technologies, and open source? What are you excited about? >> I'm most excited, I think, that we actually have now enough computing resources and enough tools at hand that we can actually go back and tackle some harder computer science problems. I think there's things that used to be so big that you're like, well, that's just not, that's too much data, we could never solve that. That's too much, that would take, you know, a hundred computers a hundred years to figure out. Those are problems now that are becoming very tractable, and I think it's been the rise of, yeah, it starts with Google, but some other companies that sort of really made these very large problems tractable, and they're now solvable. >> And open source, your opinion on open source these days? >> Open source is great. >> Who doesn't love more code? (laughs) >> Well, I should back this up: open source is the fastest way to share and to make progress. There are times where you need what's called proprietary, but in other words valuable, when you need valuable engineers to work on something, and, you know, not knowing the provenance or where something comes from is a little sticky. I think there's going to be space for both.
I think open source is big, but there's going to be-- >> If you have a core competency, you really want to code it. >> Exactly, you want to write that up and you-- >> You can still participate in the communities. >> Right, and I think open source is also, it's awesome when it's following. If there's something else in front, it follows very fast, it does a very good job. It's very thorough, sometimes it doesn't know where to go and it sort of meanders, and that's when other people have advantages. >> Collective intelligence. >> Exactly. >> Bruce, thanks for coming on. I really appreciate it, good to see you. This is a Cube Conversation here in the Palo Alto studio, I'm John Furrier, thanks for watching. (light electronic music)

Published Date : Nov 17 2017


Wikibon Analyst Meeting | Dell EMC Analyst Summit


 

>> Welcome to another edition of Wikibon's Weekly Research Meeting on theCUBE. (techno music) I'm Peter Burris, and once again I'm joined by, in studio, George Gilbert, David Floyer. On the phone we have Dave Vellante, Stu Miniman, Ralph Finos, and Neil Raden. And this week we're going to be visiting Dell EMC's Analyst Summit. And we thought we'd take some time today to go deeper into the transition that Dell and EMC have been on in the past few years, touching upon some of the value that they've been creating for customers and addressing some of the things that we think they're going to have to do to continue on the path that they're on and continue to deliver value to the marketplace. Now, to look back over the course of the past year, it was about a year ago that the transaction actually closed. And in the ensuing year, there's been a fair amount of change. We've seen some interesting moves by Dell to bring the companies together, a fair amount of conversation about how bigger is better. And at the most recent VMworld, we saw a lot of great news of VMworld, VMware in particular working more closely with Amazon and others, or AWS and others. So we've seen some very positive things happen in the course of the past year. But there are still some crucial questions that are addressed. And to kick us off, Dave Vellante, where are we one year in and what are we expecting to hear this week? >> Dave: And foremost, Michael Dell was trying to transform his company. It wasn't happening fast enough. He had to go private. He wanted to be an enterprise player, and amazingly, he and Silver Lake came up with four billion dollars in cash. And they may very well pull off one of the greatest wealth creation trades in the history of the computer industry because for four billion dollars, they're getting an asset that's worth somewhere north of 50 billion, and they're paying down the debt that they used to lever that acquisition through cash flow. 
So like I say, for a pittance (laughs) of four billion dollars, they're going to turn that into a lot of dough, tens and tens of billions. If you look at EMC pre the M and A, I'm sorry, if you look at Dell pre M and A, pre-merger, their transformation was largely failing. The company was making a lot of acquisitions but it wasn't able to reshape itself fast enough. If you look at EMC pre-merger, it was a powerhouse, but it was suffering from this decade-long collapse of infrastructure hardware and software pricing, which was very much a drag on growth and cash flow. So the company was forced to find a white knight, which came in the form of Michael Dell. So you had this low gross margin company, Dell's public gross margin before it went private were in the teens. EMC was in the roughly 60%. Merge those together and you get a roughly 30% plus gross margin entity. I don't think they're there yet. I think they got a lot of work to do. So a lot of talk about integration. And there's some familiarity with these two companies because they had a fairly large OEM deal for the better part of a decade in the 90s. But culturally, it's quite different. Dell's a very metrics-driven culture with a lot of financial discipline. EMC's kind of a take the hill, do whatever it takes culture. And they're in the process of bringing those together, and a lot of cuts are taking place. So we want to understand what impacts those will have to customers. The other point I want to make is that without VMware, in my view anyway, the combination of these companies would not be nearly as interesting. In fact, it would be quite boring. So the core of these companies, you know, have faced a lot of challenges. But they do have VMware to leverage. And I think the challenge that customers really need to think about is how does this company continue to innovate now that they can't really do M and A? 
If you look at EMC, for years, they would spend money on R and D and make incremental improvements to their product lines and then fill the gaps with M and A. And there are many, many examples of that, Isilon, Data Domain, XtremIO, and dozens of others. That kept EMC competitive. So how does Dell continue that strength? It spends about four and a half billion a year on R and D, and according to Wikibon's figures, that's about 6% of revenue. If you compare that with other companies, Oracle and Amazon are in the 12% range. Google's mid-teens. Microsoft, obviously, is at 12, 13%. Cisco's up there. EMC itself was spending 12% on R and D. IBM's only about 6%, but remember, about two thirds of IBM is services. It's not R and D heavy. So Dell has got to cut costs. It's a must. And what implications does that have on the service levels that customers have grown to expect, and what are the implications on Dell's roadmap? I think we would posit that a lot of the cash cows are going to get funded in a way that allows them to have a managed decline in that business. And it's likely that customers are going to see reduced roadmap functions going forward. So a key challenge that I see for Dell EMC is growth. The strength is really VMware, and the leverage of the VMware and their own install bases I think gives Dell EMC the ability to keep pace with its competitors because it's got kind of the inside baseball there. It's got a little bit of supply chain leverage, and of course its sales force and its channels are a definite advantage for this company. But it's got a lot of weaknesses and challenges. Complexity of the portfolio, and it's got a big debt load that hamstrings its ability to do M and A. I think services is actually a big opportunity for this company. Servicing its large install base. And I think the key threat is cloud and China. I think China, with its low-cost structure, made a deal like this inevitable.
So I come back to the point of Michael Dell's got to cut in order to stay competitive. >> Peter: Alright, so one of the, sorry- >> Dave: Next week, we'll hear a lot about sort of innovation strategies, which are going to relate to the edge. Dell EMC has not announced an edge strategy. It needs to. It's behind HPE, one of its major competitors, in that regard. And it's got to get into the game. And it's going to be really interesting to see how they are leveraging data to participate in that IOT business. >> Great summary, Dave. So you mentioned that one of the key challenges that virtually every company faces is how do they reposition themselves in a world in which the infrastructure platform, the foundation, is going to be more cloud-oriented. Stu Miniman, why don't you take us through, very quickly, where Dell EMC is relative to the cloud? >> Stu: Yeah, great question, Peter. And just to set that up, it's important to talk about one of the key initiatives from Dell and EMC coming together: one of the synergies that Michael Dell has highlighted is really around the move from converged infrastructure to hyper converged infrastructure. And this is also the foundational layer that Dell EMC uses today for a lot of their cloud solutions. So EMC did a great job with the first wave of converged infrastructure through partnering with Cisco. They created the Vblock, which is now VxBlock, which is now a multi-billion dollar revenue stream. And Dell did a really good job of jumping on early with the hyper converged infrastructure trend. So I'd written research years ago that, not only through partnerships but through OEM deals, if you look at most of the solutions that were being sold on the market, the underlying server for them was Dell. And that was even before the EMC acquisition. Once they acquired EMC, they really got kind of control, if you will, of the VMware VSAN business, which is a very significant player.
They have an OEM relationship with Nutanix, who's doing quite well in the space, and they put together their own full-stack solution, which takes Dell's hardware, the VMware VSAN, and the go-to-market processes of what used to be VCE, and they put together VxRail, which is doing quite well from a revenue and a growth standpoint. And the reason I set this all up to talk about cloud is that if you look at Dell's positioning, a lot of their cloud starts at that foundational infrastructure level. They have all of these enterprise hybrid clouds and different solutions that they've been offering for a few years. And underneath those, really it is a simplified infrastructure hardware offering. So whether that is the traditional VCE converged infrastructure solutions or the newer hyper converged infrastructure solutions, that's the base level. And then there's software that wraps on top of it. So they've done a decent amount of revenue. The concern I have is, you know, Peter, you laid it out, it's very much a software world. We've been talking a lot at Wikibon about the multi-cloud nature of what's going on. And while Dell and the Dell family have a very strong position in the on-premises market, that's really their center of strength: hardware and the enterprise customer's data center. And the threat is public cloud and multi-cloud. It centers around hardware, and especially when you dig down and say, "okay, I want to sell more servers," which is one of the primary drivers that Michael wants to have with his whole family of solutions, how much can you really live across these various environments? Of course, they have partnerships with Microsoft. There are the VMware partnerships with Amazon, which are interesting, and how they might even partner with the likes of Google and others can be looked at. But that center of strength is on premises, and therefore they're not really living heavily in the public and multi-cloud world, unless you look at Pivotal.
So Pivotal's a software play, and that's where they're going to say the big push is, but it's this massive shift of the large install bases of EMC, Dell, and VMware, compared to the public clouds that are doing the land grabs. So this is where it's really interesting to look at. And the announcement that we're interested to look at is how IOT and edge fits into all of this. So David Floyer and you, Peter, have research about how- >> Peter: Yeah, well, we'll get to that. >> Stu: There's a lot of nuance there. >> We'll get to that in a second, Stu. But one of the things I wanted to mention to David Floyer is that certainly in the case of Dell, they have been a major player in the Intel ecosystem. And as we think about what's going to happen over the course of the next couple of years, what's going to happen with Intel? Is it going to continue to dominate? And what's that going to mean for Dell? >> Sure, Dell's success, I mean, what Stu has been talking about is the importance of volume for Dell, being a volume player. And obviously when they're looking at Intel, the PC is a declining market, and ARM is doing incredibly well in the mobile and other marketplaces. And Dell's success is essentially tied to Intel. So the question to ask is if Intel starts to lose market share to ARM and maybe even IBM, what is the impact of that on Dell? And in particular, what is the impact on the edge? And so if you look at the edge, there are two primary parts. We put forward that there are two parts of the edge. There's the primary data, which is coming from the sensors themselves, from the cameras and other things like that. So there's the primary edge, and there's the secondary edge, which is after that data has been processed. And if you think about the primary edge, AI and DL go to the primary edge because that's where the data is coming in, and you want the highest fidelity of data. So you want to do the processing as close as possible to that.
So you're looking at these examples in autonomous cars. You're seeing it in security cameras, that all of that processing is going to much cheaper chips, very, very close to the data itself. What that means, or could mean, is that most of that IOT processing could go to vendors other than Intel, to the ARM vendors. And if you look at that market, it's going to be very specialized in the particular industry and the particular problem it's trying to solve. So it's likely that non-IT vendors are going to be in that business. And you're likely to be selling to OT and not to IT. So all of those are challenges to Dell in attacking the edge. They can win the secondary edge, which is the compressed data, initially compressing it 1,000 to one, probably going to a million to one compression of the data coming from the sensors, into much higher value data in much, much smaller amounts, both on the compute side and on the storage side. So if that bifurcation happens at the edge, the size of the marketplace is going to be very considerably reduced for Intel. And Dell has in my view a strategic decision to make of whether they get into being part of that ARM ecosystem for the edge. There's a strong argument that says they would need to do that. >> And they will be announcing something on Monday, I believe, or next week. We're going to hear a lot about that. But when we think, ultimately, about the software that Dell and EMC are going to have to think about, they're very strong in VMware, which is important, and there's no question that virtual machines will remain important, not only from an install base standpoint but, in the future, from how the cloud is organized and arranged and managed. Pivotal also is an interesting play, especially as it does a better job of incorporating more of the open source elements that are becoming very attractive to developers.
But George, let me ask you a question, ultimately, about where is Dell in some of these more advanced software worlds? When we think about machine learning, when we think about AI, these are not huge markets right now, but they're leading indicators. They're going to provide cues about where the industry's going to go and who's going to get a chance to provide the tooling for them. So what's our take right now on where Dell EMC is relative to some of these technologies? >> Okay, so that was a good lead-in for my take on all the great research David Floyer's done, which is that when we go through big advances in hardware, typically the relative price performance changes between CPU, memory, storage, and networking. When we see big relative changes between those, then there's an opportunity for the software to be re-architected significantly. So in this case, what we call unigrid, what David's called unigrid previously, is the ability to build scale-out, extremely high-performance clusters to the point where we don't have to bottleneck on shared storage like a SAN anymore. In other words, we can treat the private memory for each node as if it were storage, direct-attached storage, but it is now so fast in getting between nodes and to the memory in a node that, for all intents and purposes, it can perform as a small shared-storage cluster did before. Only now this can scale out to hundreds, perhaps thousands, of nodes. The significance of that is we are in an era of big data and big analytics. And so the issue here is can Dell sort of work with the most advanced software vendors who are trying to push the envelope to build much larger-scale data management software than they've been able to. Now, Dell has a sort of uphill climb to match the cloud vendors, who build their own infrastructure hardware. But the cloud vendors have done pools of GPUs, for instance, to accelerate machine learning training.
Dell could work with these data management vendors to get pools of this scale-out hardware in the clouds to take advantage of the NoSQL databases, the NewSQL databases. There's an opportunity to leapfrog. What we found out at Oracle's user conference this week was that even though they're building similar hardware, their database is not yet ready to take advantage of it. So there is an opportunity for Dell to start making inroads in the cloud where their generic infrastructure otherwise wouldn't. Now, one more comment on the edge. I know David was saying that the edge device is looking more and more like it doesn't have to be Intel-compatible. But if you go to the edge gateway, the thing that bridges OT and IT, that's probably going to be their best opportunity on the edge. The challenge, though, is it's not clear how easy it will be in the low-touch sort of go-to-market model that Dell is accustomed to, because, as they discovered in the late 90s, it cost $6,000 per year per PC to provide support. And no one believed that number until Intel did a study on itself and verified it. The protocols from all the sensors on the OT side are so horribly complex and legacy-oriented that even the big auto manufacturers keep track of the different ones on a spreadsheet. So mapping the IT gateway server to all the OT edge devices may turn out to be horribly complex for a few years. >> Oh, it's not a question of may. It is going to be horribly complex for the next few years. (laughing) I don't think there's any question about that. But look, here's what I want to do. I want to ask one more question. And I'm going to go do a round table and ask everybody to give me what the opportunity is and what the threat is. But before I do that, the one thing we haven't discussed, and Dave Vellante, I'm going to throw it over to you, is that, in the past, Dell has talked a lot about the advantages of its size and the economies of scale that it gets.
And Dell's not in the semiconductor business, or at least not in a big way. And that's one place where you absolutely do get economies of scale. They've got VMware in the system software business, which is an important point. So there may be some economies there. But in manufacturing and assembly, as you said earlier, Dave, that is all under consideration when we think about where the real cost efficiencies are going to be. One of the key places may be in the overall engagement model. The ability to bring a broad portfolio, package it up, and make it available to a customer with the appropriate set of services, and I think this is why you said services is still an opportunity. But what does it mean to get to the Dell EMC overall engagement model as Dell finds or looks to find ways to cut costs, to continue to pay down its debt and show a better income statement? >> Dave: So let me take the customer view. I mean, I think you're right. This whole end to end narrative that you hear from Dell, for years you heard it from HP, I don't think it really makes that much of a difference. There is some supply chain leverage, no question. So you can get somewhat cheaper components, and you can probably secure supply, which is very tight right now. So there are definitely some tactical advantages for customers, but I think your point is right on. The real leverage is the engagement model. And the interesting thing from our standpoint, I think, is that you've got a very high-touch EMC direct sales force, and that's got to expand into the channel. Now, EMC's done a pretty good job with the channel over the last, you know, half a decade. Dell doesn't have as good a reputation there. Its channel partners are more numerous but perhaps not as sophisticated. So I think one of the things to watch is the channel transformation and then how Dell EMC brings its services and its packages to the market.
I think that's very, very important for customers in terms of reducing a lot of the complexity in the Dell EMC portfolio, which just doubled in complexity. So I think that is something that is going to be a critical indicator. It's an opportunity, and at the same time, if they blow it, it's a big threat to this organization. I think it's one of the most important things, especially, as you pointed out, in the context of cost cutting. If they lose sight of the importance of the customer, they could hit some bumps in the road and open it up for competition to come in and swoop in on some of their business. I don't think they will. I think Michael Dell is very focused on the customer, and EMC's culture has always been that way. So I would bet on them succeeding there, but it's not a trivial task. >> Yeah, I would agree with you. In fact, one of the statements that we heard from Michael Dell and other executives at Dell EMC at VMworld, over and over and over again, on theCUBE and elsewhere, was this notion of open with an opinion. And in many respects, the opinion is not just something that they say. It's something that they do through their packaging and how they put their technologies into the marketplace. Okay, guys, rapid fire, really, really, really short answers. Let's start with the threats. And then we'll close on a positive note with the strengths. David Floyer, really quick, biggest threat that we're looking at next week? >> The biggest threat is the evolution of ARM processors; if they keep to an Intel-only strategy, that to me is their biggest threat. Those could offer competition in mobile, in increasing percentages, and also in the IOT and other processor areas. >> Alright, George Gilbert, biggest threat? >> Okay, two, summarizing the comments I made before, one, they may not be able to get the cloud vendors to adopt pools of their scale-out infrastructure because the software companies may not be ready to take advantage of it yet.
So that's the cloud side. >> No, you just get one. Dave Vellante. >> Dave: Interest rates. (laughing) >> Peter: Excellent. Stu Miniman. >> Stu: Software. >> Peter: Okay, come on Stu. Give me an area. >> Stu: Dell's a hardware company! Everything George said, there's no way the cloud guys are going to adopt Dell EMC's infrastructure gear. This is a software play. Dell's been cutting their software assets, and I'm really worried that I'm going to see an edge box, you know, that doesn't have the intelligence that they say they're going to put in it. >> So, specifically, it's software that's capable of running the edge centers, so to speak. Ralph Finos. >> Ralph: Yeah, I think the hardware race to the bottom. That's a big part of their business, and I think that's a challenge when you're looking at going head to head, with HPE especially. >> Peter: Neil Raden, Neil Raden. >> Neil: Private managed cloud. >> Or what we call true private cloud, which goes back to what Stu said, related to the software and whether or not it ends up being manageable. Okay, threats. David Floyer. >> You mean? >> Or I mean opportunities, strengths. >> Opportunities, yes. The opportunity is being by far the biggest IT player out there, and the opportunity to suck up other customers inside that. So that's a big opportunity to me. They can continue to grow by acquisition. Even companies the size of IBM might be future opportunities. >> George Gilbert. >> On the opposite side of what I said earlier, they really could work with the data management vendors because we really do need scale-out infrastructure. And the cloud vendors so far have not spec'd any or built any. And at the same time, they could- >> Just one, George. (laughing) Stu Miniman. >> Dave: Muted. >> Peter: Dave Vellante. >> Dave: I would say one of the biggest opportunities is 500,000 VMware customers. They've got the server piece, the networking piece kind of, and storage.
And combine that with their services prowess, I think it's a huge opportunity for them. >> Peter: Stu, you there? Ralph Finos. >> Stu: Sorry. >> Peter: Okay, there you go. >> Stu: Dave stole mine, but it's not the VMware install base, it's really the Dell EMC install base, and those customers that they can continue moving along that journey. >> Peter: Ralph Finos. >> Ralph: Yeah, highly successful software platform that's going to be great. >> Peter: Neil Raden. >> Neil: Too big to fail. >> Alright, I'm going to give you my bottom lines here, then. So this week we discussed Dell EMC and our expectations for the Analyst Summit and our observations on what Dell has to say. But very quickly, we observed that Dell EMC is a financial play that's likely to make a number of people a lot of money, which by the way has cultural implications because that has to be spread around Dell EMC to the employee base. Otherwise some of the challenges associated with cost cutting on the horizon may be something of an issue. So the whole cultural challenges faced by this merger are not insignificant, even as the financial engineering that's going on seems to be going quite well. Our observation is that the cloud world ultimately is being driven by software and the ability to do software, with the other observation that the traditional hardware plays tied back to Intel will by themselves not be enough to guarantee success in the multitude of different cloud options that will become available, or opportunities that will become available to a wide array of companies. We do believe the true private cloud will remain crucially important, and we expect that Dell EMC will be a major player there. 
But we are concerned about how Dell EMC is going to evolve as a player at the edge, and the degree to which they will be able to enhance their strategy by extending relationships to other sources of hardware and components and technology, including, crucially, the technologies associated with analytics. We went through a range of different threats. If we identify two that are especially interesting: one, interest rates. If interest rates go up, making Dell's debt more expensive, that's going to lead to some strategic changes. The second one, software. This is a software play. Dell has to demonstrate that it can, through its 6% of revenue spent on R and D, generate a platform that's capable of fully automating, or increasing the degree to which Dell EMC technologies can be automated. In many conversations we've had with CIOs, they've been very clear. One of the key criteria for the future choices of suppliers will be the degree to which that supplier fits into their automation strategy. Dell's got a lot of work to do there. On the big opportunities side, the number one from most of us has been VMware and the VMware install base. A huge opportunity that presents a pathway for a lot of customers to get to the cloud that cannot be discounted. The second opportunity that we think is very important, and that I'll put out there, is that Dell EMC still has a lot of customers with a lot of questions about how digital transformation's going to work. And if Dell EMC can establish itself as a thought leader in the relationship between business, digital business, and technology, and bring the right technology set, including software but also packaging of other technologies, to those customers in a true private cloud format, then Dell has the potential to bias the marketplace to their platform even as the marketplace chooses among an increasingly rich set of mainly SaaS and other public cloud options.
Thanks very much, and we look forward to speaking with you next week on the Wikibon Weekly Research Meeting here on theCUBE. (techno music)

Published Date: Oct 9 2017

20170908 Wikibon Analyst Meeting Peter Burris


 

(upbeat music) >> Welcome to this week's edition of the Wikibon Research Meeting on theCUBE. This week we're going to talk about a rather important issue that raises a lot of questions about the future of the industry, and that is, how are information technology organizations going to manage the wide array of new applications, new types of users, and new types of business relationships that are going to engender significant complexity in the way applications are organized, architected and run. One of the possibilities is that we'll see an increased use of machine learning, ultimately inside information technology operations management applications, and while this has tremendous potential, it's not without risk and it's not going to be simple. These technologies sound great on paper but they typically engender an enormous amount of work and a lot of complexity themselves to run. Having said that, there are good reasons to suspect that this approach will in fact be crucial to ultimately helping IT achieve the productivity that it needs to support digital business needs. Now, a big challenge here is that the technology, while it looks good, as I said, nonetheless is pretty immature, and in today's world, there's a breadth first and a depth first approach to thinking about this. Breadth first worries about end to end visibility into how applications work across multiple clouds, on premises and in the cloud, across applications, wherever they might be. You get an enormous amount of visibility and alerts, but you also get a lot of false positives, and that creates a challenge because these tools just don't have deep visibility into how the individual components are working or how their relationships are set up; they just look at the broad spectrum of how work is being conducted.
The second class is depth first, which is really based on the digital twin notion that's popular within the IOT world; that is, vendors delivering out of the box models that are capable of doing a great job of creating a digital simulacrum of a particular resource so that it can be modeled and tracked and tested. Now again, a lot of potential, a lot of questions about how machine learning and ITOM are going to come together. George, what is one of the key catalysts here? Somewhere in here there's a question about people. >> Okay, there's a talent question; always with the introduction of new technology, it's people, process, technology. The people end of the equation here is that we've been trying to upskill and create a new class of application developer, as Jim has identified. This new class is the data scientist, and they focus on data intensive applications and machine learning technology. The reason I bring up the technology is that when we have this landscape that you described, that is getting so complex, where we're building on business transaction applications, extending them with systems of engagement, and then the operational infrastructure that supports both of them, we're getting many orders of magnitude more complexity in multiple dimensions and in data, and so we need a major step function in the technology to simplify the management of that, because just the way we choked on the mainstream deployment of big data technology for lack of specialized administrators, we are similarly choking on the deployment of very high value machine learning applications because it takes a while to train a new generation of data scientists. >> So George, we've got a lot of challenges here in trying to train people, but we're also expecting that we're going to get better trained technology with some of these new questions, so Jim, let me throw it to you.
When we think ultimately about this machine learning approach, what are some of the considerations that people have to worry about as they envision the challenges associated with training some of these new systems? >> Yeah, I think one of the key challenges with training new systems for ITOM is, do you have a reference data set? The predominant approach to machine learning is something called supervised learning, where you're training an algorithm against some data that represents what you're trying to detect or predict or classify. For IT and operations management, you're looking for anomalies, for unprecedented events, black swan events and so forth. Clearly, if they're unprecedented, there's probably not going to be a reference data set that you can use to detect them, hopefully before they happen, and neutralize them. That's an important consideration, and supervised learning breaks down if you can't find a reference data example. Now, there are approaches to machine learning called unsupervised learning; I'm alluding to cluster analysis algorithms, which would be able to look for clusters in the data that might be indicative of correlations that might be useful to drill into, or might be indicative of anomalous events and so forth. What I'm getting at is that when you're then considering ML, machine learning, in the broader perspective of IT and operations management, do you go with supervised learning, do you go with unsupervised learning for the anomalies, and, if you want to remediate so that you have a clear set of steps to follow from precedent, you might also want something called reinforcement learning. What I'm getting at is that you have to think through all the aspects of training the models to acquire the knowledge necessary to manage the IT operations.
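The contrast Jim draws can be made concrete. Below is a minimal, pure-Python stand-in for the unsupervised route he describes: with no labeled reference set, flag the metric samples that sit far from the bulk of normal behavior. The z-score test, the threshold, and the latency figures are illustrative assumptions only; real ITOM products would use richer cluster analysis algorithms.

```python
# Hypothetical sketch of unsupervised anomaly detection on an ops metric:
# no reference data set, no labels, just distance from "normal" behavior.
import statistics

def anomalies(samples, z_threshold=3.0):
    """Return indices of samples that sit far outside the bulk of the data."""
    mean = statistics.fmean(samples)
    stdev = statistics.stdev(samples)
    if stdev == 0:
        return []  # a perfectly steady metric has nothing anomalous
    return [i for i, x in enumerate(samples)
            if abs(x - mean) / stdev > z_threshold]

# Response times in ms: mostly steady, one unprecedented black-swan spike.
latencies = [12, 11, 13, 12, 14, 11, 13, 12, 250, 12, 13, 11]
print(anomalies(latencies))  # -> [8], found with no labeled examples
```

The same idea generalizes to multi-dimensional telemetry by measuring distance to cluster centroids rather than a single mean, which is where the cluster analysis algorithms Jim mentions come in.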
>> Jim, let me interrupt. What we've got here is a lot of new complexity, and we've got a need for more people, and we've got a need for additional understanding of how we're going to train these systems, but this is going to become an increasingly challenging problem. David Floyer, you've done, with the entire team, some really interesting research on what we call unigrid. Unigrid is looking at the likely future of systems as we're capable of putting more data proximate to other data and using that as a basis for dramatically improving our ability to, in a speedy, nearly real-time way, drive automation between many of these new application forms. It seems as though depth first, or what we're calling depth first, is going to be an essential element of how unigrid's going to deploy. Take us through that scenario, and what do you think about how these are going to come together? >> Yes, I agree. In our opinion, the biggest return on investment is going to come from being able to take the big data models, the complex models, and make those simple enough that they can, in real time, help the acceleration, the automation of business processes. That seems to be the biggest return on this, and unigrid is allowing a huge amount more data to be available in near real-time, 100 to 1000 times more data, and that gives us an opportunity for business analytics, which includes of course AI and machine learning and basic models, etc., to be used to take that data and apply it to the particular business problem, whether it be fraud control, whether it be any other business processing. The point I'm making here is that coding techniques are going to be very, very stretched. Coding techniques for an edge application in the enterprise itself, and also of course coding techniques for pushing down stuff to the IOT and to the other agents. Those coding techniques are going to focus on performance first to begin with.
At the same time, a lot of that coding will come from ISVs into existing applications, and with it, the ISVs have the problem of ensuring that this type of system can be managed. >> So George, I'm going to throw it back to you at this point. Based on what David has just said, that there's new technology on the horizon with the potential to drive the business need for this type of technology, and we'll get to that in a bit more detail in a second, is it possible that at least the depth-first side of these ML and ITOM applications could become the first successful packaged apps that use machine learning in a featured way? >> That's my belief, and the reason is that even though there's going to be great business value in linking, say, big data apps, systems of record, and web mobile apps, say for fraud prevention or detection applications where you really want low-latency integration, most of the big data applications today are high-latency integration, where you're doing training and inferencing more in batch mode and connecting them with high latency to the systems of record or web and mobile apps. When you have that looser, high-latency connection, it's possible to focus just on the domain, the depth first. Because it's depth first, the models have much more knowledge built in about the topology and operation of that single domain, and that knowledge is what allows them to deliver very precise and very low-latency remediation, either recommendations or automated actions. >> But the challenge with looking at it just from a depth-first standpoint is that the relationships amongst technologies and tooling inside an infrastructure and application portfolio are not revealed, and that information becomes ever more crucial to the operation of the overall system. Now we've got to look a little bit at this notion of breadth first, the idea of tooling that supports end to end. 
That's a little bit more problematic. There are a lot of tools trying to do that today, a lot of services trying to do that today, but one of the things that's clearly missing is an overall good understanding of the dependency that these tools have on machine learning. Jim, what can you tell us about how some of these breadth-first products seem to be dependent, or not, on some of these technologies? >> Yeah, first of all, with breadth-first products, what's neat is that above everything sits an overall layer of graph analysis, graph modeling, to be able to follow the interactions of transactions and business flows across your distributed IT infrastructure, and to build that entire narrative of what's causing a problem or might be causing a problem. That's critically important. But as you go back and forth between breadth first and depth first, the digital twin is a fundamental concept and a fundamentally important infrastructure for depth first, because the digital twin infrastructure maintains the data that can be used as training data for supervised machine learning looking into issues from individual entities. If you can combine overall graph modeling at the breadth-first level for ITOM with supervised learning based on digital twins for depth first, that makes for a powerful combination. I'm talking in a speculative way, George has been doing the research, but I'm seeing a lot of uptake of graph modeling technology in this sphere. Maybe George could tell us otherwise, but I think that's what needs to happen. >> I think conceptually the technology is capable of providing this, George, but I think it's going to take some time to see it fully exploited. What do you have to say about that? 
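The breadth-first graph modeling Jim describes can be sketched in a few lines: model services as nodes and dependencies as edges, then walk the graph from a failing component to build the narrative of what it might be affecting. The service names and edges below are invented for illustration; real products infer this topology automatically:

```python
from collections import deque

# Hypothetical service dependency graph: edges point from a service
# to the services that depend on it.
DEPENDENTS = {
    "database": ["auth-service", "order-service"],
    "auth-service": ["web-frontend"],
    "order-service": ["web-frontend", "billing"],
    "web-frontend": [],
    "billing": [],
}

def impact_of(failed, graph):
    """Breadth-first walk: everything downstream of a failing node."""
    seen, queue = set(), deque([failed])
    while queue:
        node = queue.popleft()
        for dep in graph.get(node, []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return sorted(seen)

print(impact_of("database", DEPENDENTS))
# → ['auth-service', 'billing', 'order-service', 'web-frontend']
```

Run in reverse (edges from a service to what it depends on), the same traversal produces root-cause candidates instead of impact sets, which is the narrative-building Jim refers to.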
>> I do want to address your comments about training, Jim. The graph you're referring to is precisely what I mean when I use the word topology, figuring that more people will understand that. In the depth-first products, the models have been pre-trained, supervised and trained by the vendor, so they come baked in knowing how to figure out the customer's topology and build what you call the graph. Technically, that's the more correct way of describing it. And those models, pre-trained and supervised, have enough knowledge also to figure out the behavior, which I call the operations, of those applications. It's when you get into breadth first that it's harder, because you have no bounds to make assumptions about; it's harder to figure out that topology and operational behavior. >> But coming back to the question I asked: as depth-first products accrete capabilities and demonstrate success, and let's presume that they are because there is evidence that they are, that will increase the likelihood that they are generating data that can then be used by breadth-first products. But that raises an interesting question, one that certainly I've thought about as well. Neil, ultimately, where is the clearinghouse for ascertaining the claims that these technologies will and will not work together? Have you seen examples in the past of standards at this level of complexity coming together that can ensure that these technologies can in fact increasingly work together? Have we seen other places where this has happened? >> Good question. My answer is that I don't know. >> Well, but there have been standards bodies, for example, that did some extremely complex stuff in I/O. 
Where we saw an explosion in the number of storage and printer and other devices, and we saw separation of function between CPUs and channels, standards around SCSI and whatnot in fact were relatively successful. But those were specific engineering tests at the electricity and physics level, and it's going to be interesting to see whether those types of tests emerge here in the software world. All right, I want to segue from this directly into business impacts, because ultimately there's a major question for every user that's listening to this, and that is: this is new technology, and we know the business is going to demand it in a lot of ways, the machine learning in business activities, as David Floyer talked about, in business processes. But the big question is, how is this going to end up in the IT organization? In fact, is it going to turn into a crucial resource that makes IT more or less successful? Neil Raden, we've got examples of this happening in the past, where significant technology discontinuities just hit both the business and IT at the same time. What happened? >> Well, in a lot of cases it was a disaster. In many more cases, it was a financial disaster. We had companies spending hundreds of billions of dollars implementing ERP systems, and at the end, they still didn't have what they wanted. Look, people, not just in IT, not just in business, not just in technology, consistently take complex problems and try to reduce them to something simple so they can understand them. Nowhere is that more common than in medical research, where they point at a surrogate endpoint and they try to prove the surrogate endpoint, but they end up proving nothing about the disease they're trying to cure. I think this problem has now gone beyond an inventory of applications; organizations are far too complex for people to really grasp all at once. 
Rather than come up with a simplified solution, I think we can be looking to software vendors to come up with packages to do this. But it's not going to be a black box. It's going to require a great deal of configuration and tuning within each company, because everyone's a little different. That's what I think is going to happen. And the other thing is, I think we're going to have AI on AI. You're going to have a data scientist workbench where the workbench recommends which models to try, runs the replicates, crunches the numbers, generates the reports, and keeps track of what's happening, so you can go back and see what happened. Because five years ago, data scientists were basically doing everything in R and Java and Python, and there's a mountain of terrible code out there that's unmaintainable because they're not professional programmers, so we have to fix that. >> George? >> Neil, I would agree with you for the breadth-first products, where the customer has to do a lot of the training on the job with their product. But the depth-first products actually build in such richly trained models that, in the case of some of the examples we've researched, they don't even have facilities for customers to add, say, complex event processing analytics for new rules. In other words, they're trained to look at the configuration settings, the environment variables, the setup across services, the topology. It's like Steve Jobs said: it just works, on a predefined depth-first domain like a big data stack. >> So we're likely to see this happen in depth first, and then ultimately see what happens in breadth first. But at the end of the day, it still has to continue to attract capital to make these technologies work, make them evolve, and make the business cases possible. 
David, again, you have spent a lot of time looking at this notion of the business case, and we can see that there's key value in using machine learning in, say, fraud detection, but putting shoes on the cobbler's children of IT has been a problem for years. What do you think? Are we going to see IT get the resources it needs, starting with depth first, so that it can build out a breadth-oriented solution? >> My view, for what it's worth, is that IT is going to focus on bringing in applications which use these technologies, and they will go into the places in that business where they make the most sense. If you're an insurance company, you can make hundreds of millions of dollars with fraud detection. If you are in other businesses, you want to focus on security or potential security. The applications that come in with huge amounts more data and more complexity within them will, initially in my view, be managed as specific applications, and the AI requirements to manage them will be focused on those particular applications, often by the ISVs themselves. Then from that, they'll learn how to do it, and from that will come broader types of solutions. >> That's further evidence that we're going to see a fair amount of initial success more on the depth-first side, with application-specific management. But there's going to be a lot of effort over the next few years for breadth-first companies to grow, because there are potentially significant increasing returns from being the first vendor out there that can build the ecosystem tying all of these depth-first products together. Neil, I want to leave you with a last thought here. You mentioned it earlier, and you've done a lot of work on this over the years: you assert that at the end of the day, a lot of these new technologies, similar to what David just said, are going to come in through applications, by application providers themselves. 
Just give us a quick sense of what that scenario's going to look like. >> I think that the technology sector runs on two different concepts. One is, I have a great idea, maybe I can sell it. Did you hear that? I just got a message that my connection was down there. Technology vendors will say that I have a... >> All right, we're actually losing you, so Dave Vellante, let me give you the last word. When you think about some of the organizational implications of doing this, what do we see as some of the biggest near-term issues that IT's going to have to focus on to move from being purely reactive to actually getting out in front, and perhaps even helping to lead the business in adopting these technologies? >> Well, I think it's instructive to review the problem that's out there, the business impact that it'll have, and what many of the vendors have proposed through software, but I think there are also some practical things that IT organizations can do before they start throwing technology at the problem. We all know that IT has generally been reactive to operations issues, and it's affected a laundry list of things in the business: not only productivity, but availability of critical systems, data quality, application performance, and on and on. The bottom line is it increases business risk and cost, and so the organizations that I talk to obviously want to be proactive. Vendors are promising that they have tools to allow them to be more proactive, but they really want to reduce the false positives. They don't want to chase down trivial events, and of course cloud complicates all this. What the vendor community has done is promise end-to-end visibility on infrastructure platforms, including clouds, and the ability to discover and manage events and identify anomalies in a proactive manner. 
Maybe even automate remediation steps, all important things. I would suggest that these need to map to critical business processes, or organizations are not going to understand the business impact, and it's got to extend to cloud. Now, are AI and ML the answer? Maybe, but before going there, I would suggest that organizations look at three things they can do. The first is that most outages on infrastructure come from failed or poorly applied changes, so start with good change management and you'll attack probably 70% of the problem, in our estimation. The second thing I think we would point users to is that they should narrow down their promises and get their SLAs firmed up, so they can meet and exceed them, and build up credibility within the organization before taking on wider responsibilities and increasing project scope. And the third thing is, start acting like a cloud provider. You've got to be clear about the services that you offer, you want to clearly communicate the SLAs associated with those services, and charge for them appropriately so that you can fund your business. Do these three things before you start throwing technology at the problem. >> That's a great wrap. The one thing I'd add to that, Dave, before we actually get to the wrap itself, is that I find it intriguing that the process of thinking through the skills we need, the training of people we're going to have to do, and the increasing training, whether supervised, unsupervised, or reinforced, of some of these systems, will help us think through exactly the type of prescriptions that you just put forward. All right, let's wrap. This has been a great research meeting. This week, we talked about the emergence of machine learning technologies inside IT operations management solutions. 
The observation we make is that increasingly, businesses are becoming dependent on multicloud, including a lot of SaaS technologies and application forms, and using that as a basis for extending their reach into markets and providing increasingly specialized services to customers. This is putting enormous pressure on the relationship between brand, customer experience, and technology management. As customers demand to be treated more uniquely, the technology has to respond, but as we increase the specificity of technology, we increase the complexity associated with actually managing that technology. We believe there will be an opportunity for IT organizations to utilize machine learning and related AI-type and big data technologies inside their ITOM capabilities, but the journey to get there is not going to be simple. It's not going to be easy, and it's going to require an enormous amount of change. The first thing we observe is this idea of what we call breadth-first technology, or breadth-first machine learning in ITOM, which is really looking end to end. The problem is that without concrete deep models, we look at individual resources or resource pools, end up with a lot of false positives, and lose a lot of the opportunity to talk about how different components work together. Depth first, which is probably the first place machine learning is going to show up in a lot of these ITOM technologies, provides an out-of-the-box digital twin from the vendor, which typically involves a lot of testing of whether that twin is in fact representative, an accurate simulacrum of the resource that's under management. Our expectation is that we will see greater utilization of depth-first tooling and activity, even as users continue to experiment with breadth-first options. As we look at the technology horizon, there will be another forcing function here, and that is the emergence of what we call unigrid. 
The idea is that increasingly, you can envision systems that bring storage, network, and compute under a single management framework at enormous scale, putting data very close to other data so that we can run dramatically new forms of automation within a business, and that is absolutely going to require a combination of depth-first as well as breadth-first technology to evolve. A lot of need, a lot of change in how the IT organization works, a lot of understanding of how this training is going to work. The last point we'll make here is that this is not something that's going to work if IT pursues it in isolation. This is not your old IT, where we advocated for some new technology, bought it in, paid for it, created a solution, and looked around for a problem to work with. In fact, the way this is likely to happen, and it further reinforces the depth-first approach being successful here, is that we'll likely see the business demand certain classes of applications that can in fact be made more functional, faster, more reliable, and more integratable through some of these machine learning-like technologies, to provide a superior business outcome. That will require significant depth-first capabilities in how we use machine learning to manage those applications: speed them up, make them more complex, make them more integrated. We're going to need a lot of help to ensure that we're capable of improving the productivity of IT organizations and the related partnerships that actually sustain a business's digital business capabilities. What's the bottom line? What's the action item? The action item here is that user organizations need to start exploring these new technologies, but do so in a way that has proximate, near-term implications for how the organization works. For example, remember that most outages are in fact created not by technology but by human error. 
Button up how you think about utilizing some of these technologies to better capture, report, and alert the remainder of the organization to human error. The second thing to note, very importantly, is that the promises of technology are not to be depended upon as we work with the business to establish SLAs. Get your SLAs in place so the business can in fact have visibility into some of the changes that you're making, through superior SLAs, because that will help you with the overall business case. Now, very importantly, cloud suppliers are succeeding as new business entities because they're doing a phenomenal job of introducing this and related technologies into their operations. The cloud business is not just a new procurement model; it's a new operating model. So start to think about how your overall operating plans, practices, and commitments are or are not ready to fully incorporate a lot of these new technologies. Be more of a cloud supplier yourselves. All right, that closes this week's Friday research meeting from Wikibon on theCUBE. We're going to be here next week, talk to you soon. (upbeat music)

Published Date : Sep 11 2017


Wrap - Pure Accelerate 2017 - #PureAccelerate #theCUBE


 

>> Announcer: LIVE from San Francisco, it's theCUBE, covering Pure Accelerate 2017. Brought to you by Pure Storage. >> Welcome back to San Francisco everybody, this is Dave Vellante with David Floyer, and this is theCUBE, the leader in live tech coverage. We go out to the events, we Extract the Signal from the Noise, and this is Pure Accelerate 2017. This is the second year of Pure Accelerate. Last year was a little north of here, right outside AT&T Park. Pure, it's pretty funny, Pure chose this venue; it's like this old, rusted-out steel warehouse where they used to make battleships, and they're going to tear it down after the show, so of course the metaphor is spinning rust, old legacy systems that Pure is essentially replacing. This is like a swan song: goodbye to the old days, welcome in the new. So very clever marketing by Pure. I mean, they did a great job setting up this rusty old building-- >> It's a nice building. >> Hopefully it doesn't fall down on our heads, so, but let's get to the event. The messaging was very strong here. I mean, they pull no punches. >> You know, legacy, slow, expensive, not agile; we're fast and simple, come with us. Of course the narrative from the big guys is, "Oh Pure, they're small, they're losing money, you know, they're in a little niche." But you see this company, as I said earlier when Matt Kixmoeller was on: they've hit escape velocity. >> Absolutely. >> They're not going out of business-- >> Nope. >> Okay, there's a lot of companies you see them-- >> And they're making a profit. >> Yeah, you read their financials and you say, uh oh, this company's in deep you know what. No, they're not making a profit yet, Pure. >> They are projecting to make a profit in the next six months. >> But they've basically got, you know, five hundred and, what, twenty-five million dollars on the balance sheet, and their negative free cash flow gets them through, by my calculation, the next nine or 10 years, because they have zero debt. 
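As an aside, the "nine or 10 years" of runway quoted here is simple arithmetic over the figures mentioned. The annual burn rate below is an assumed number chosen to be consistent with the quoted cash balance and conclusion; it is not a figure reported in the conversation:

```python
# Back-of-the-envelope runway math implied by the transcript's figures.
cash_on_hand = 525_000_000   # "$525 million on the balance sheet"
annual_burn = 55_000_000     # assumption: ~$55M negative free cash flow per year

runway_years = cash_on_hand / annual_burn
print(round(runway_years, 1))  # → 9.5
```

Any burn rate in the roughly $50M-$58M range yields the nine-to-ten-year runway the panel describes, which is the shape of the claim rather than its exact inputs.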
They could easily take out debt if they wanted to, growing at 30% a year. They'll do a billion dollars this year, with a 2.4 billion dollar market cap. They didn't have a big brain drain six months after the IPO, which was really important; it was, you know, business as usual. They've maintained the core management team. I know Jonathan Martin's, you know, moving on, but they're bringing in Todd Forsythe, a very seasoned marketing executive, to run marketing. So, you know, things are really pretty interesting. The fact is, we haven't seen a billion-dollar storage company that's independent since NetApp, and there's only one left, NetApp. EMC is now Dell EMC. 3PAR never made it even close to a billion outside of HPE. Isilon couldn't make it, Compellent couldn't make it, Data Domain, you know, couldn't make it as a billion-dollar company. None of those guys could ever reach that level of escape velocity that it appears Pure and Nutanix are both on. Your thoughts, David Floyer. >> I couldn't agree more. They have made simplicity their whole mantra. They've really brought in the same sort of simplicity as Nutanix is doing. Those are the companies that seem to be really making it, because the fundamental value proposition to their customers is, "You don't need to put in lots of people to manage this, it'll manage itself." And they've stuck to that, and they've been very successful with that simple message. Obviously, taking a flash product and replacing old rust with it makes things much simpler, so they're starting off from a very good starting point. But they've extended that right the way up to a whole lot of Cloud services with Pure. They've extended it in the whole philosophy of how they put data services together. I'm very impressed with that. It reminds me, actually, of the early days of-- >> Of NetApp. >> No, of NetApp and also of 3PAR. >> Oh, yeah, yeah, absolutely, simplicity, great storage services, Tier 1. 
When I say NetApp, I'm thinking, you know, simplicity in storage services as well. But you know, this is the joke I've been making all week: you talk to a practitioner and you say, "What's your storage strategy?" Oh, I buy EMC for block, I buy NetApp for file. Pure is sort of not only challenging that convention, but trying to move the market to big data and analytics, and they also have a unique perspective on converged and hyper-converged. They kind of de-position hyper-converged as, you know, okay for certain use cases, but not really scalable, not really applicable to a lot of the things we're doing. You know, Nutanix might even reach a billion dollars before Pure, so it's going to be interesting. >> Well, I think they have a second strategy there, which is to be an OEM supplier. Their work with Cisco, for example. They're an OEM supplier there. They are bending to the requirements of being an OEM supplier, and I think that's their way into the hyper-converged market: working with certain vendors, in certain areas, providing the storage in the way that that integrator wants, and acting in that way. I think that's a smart strategy. I think that's the way they're going to survive in the traditional market. But what's interesting to me, anyway, is that they are really starting to break out into different markets: into the AI market, into flash for big data, into that type of market, and with a very interesting approach, which is, you can't afford to take all the data from the edge to the center, so you need us, and you need to process that data using us, because it's in real time these days. You need that speed, and then you want to minimize the amount of data that you move up the stack to the center. I think it's a very interesting strategy. 
I mean, look at, compare this with, you know, EMC's portfolio, now Dell EMC's portfolio. It's never been more complicated, right? But they've got one of everything. They've got a massive distribution channel. They can solve a lot of problems. HPE, a little bit more focused than Dell EMC, really going hard after the edge. So they bring some interesting competition there. >> And they bring their service side, which is-- >> As does Dell. So they've got servers, right? Which is something that Pure has to partner on. And then IBM, it's like, you know, they've kind of still got their toe in infrastructure, but you know, Ginni Rometty's heart is not in it, you know? But they have it, they can make money at it, and you know, they're making it software-defined, but... And then you've got a lot of little guys kind of bubbling up. Well, Nimble got taken out; SimpliVity, which of course was converged, hyper-converged. A lot of new emerging guys: you've got, you know, guys like Datrium out there, Iguazio. Infinidat is another one, much, much smaller, growing pretty rapidly. You know, what are your thoughts? Can any of these guys become a billion-dollar company? I mean, we've talked for years, David, about... Remember we wrote a piece? Can EMC remain independent? Well, the answer was no, right? Can Pure remain independent, in your view? >> I don't believe it could do it as just purely storage, except by taking the OEM route. But I think if they go after it as a data company, as an information company, an information processing company, and focus on the software that's required to do that, along with the processes, I think they can, yes. I think there's room for somebody-- >> Well, you heard what Kix said. Matt Kixmoeller said, "We might have to take storage out of the name." >> Out of the name, that's right. >> Maybe, right? >> Yes, I think they will, yeah. 
>> So they're playing in a big (mumbles), and the (mumbles) is enormous, so let's talk about some of the stuff we've been working on. The True Private Cloud report is hot, and I think it's very relevant here. On-prem customers want to substantially mimic the Public Cloud: not just virtualization, but management, orchestration, simplified provisioning, and a business model that provides elasticity, including pricing elasticity. HPE actually had some interesting commentary there, on their on-demand pricing. Not just the rental model, so they're doing some interesting things, and I think you'll see others follow suit there. I find Pure to be very Cloud-like in that regard, in terms of Evergreen; I mean, they essentially have a SaaS subscription model for their appliance. >> And they're going after the stacked vendors as well, in this OEM mode. >> Yeah, they're going after, what, a thousand Cloud vendors. So, your True Private Cloud report: what was significant about that, to me anyway, was that approximately a hundred and fifty billion dollars is going to exit the market in terms of IT labor that today does non-differentiated heavy lifting: patching, server provisioning, (mumbles) provisioning, storage management, performance management, tuning, all the stuff that adds no value to the business, it just keeps the lights on. That's going to go away, and it's going to shift into Public Cloud and what we call True Private Cloud. Now, True Private Cloud is going, in our view, to be larger than infrastructure as a service in the Public Cloud, not as large as SaaS, and it's the fastest growing part of the market today, from a smaller base. >> And it will also deal with the edge. It will go down to the edge. >> So, to punctuate that, also down to the edge. So what's driving that True Private Cloud market? >> What's driving it is (mumbles), to a large extent, because you need stuff to be low latency, and you therefore need Private Clouds at the edge and in the center. 
Data has a high degree of gravity; it's difficult to move out. So you want to move the application to where that data is. So if data starts in the Cloud, it should stay in the Cloud; if it starts at the edge, you want to keep it there and let most of it die there; and if it starts in headquarters, again, there's no point in moving it just for the sake of moving it. So where possible, Private Cloud is going to be the better way of dealing with data at the edge, and data in headquarters, which is a lot of data. >> Okay, so a lot of announcements here today: NVMe and NVMe over Fabrics, you know, pushing hard into file and object, which really, they're the only ones with all-flash doing that. I think again, I think others will follow suit, once they start having some success there. What are some of the things that you are working on with the Wikibon team these days? >> Well, the next thing we're doing is the update of the, well, two things. We're doing a piece on what we call Unigrid, which is this new NVMe over Fabrics architecture, which we think is going to be very, very important to all enterprise computing. The ability to merge the traditional stateful applications, applications of record, with the large AI and other big data applications. >> Relevant to what we've been talking about here. >> Very relevant indeed, and that's the architecture that we believe will bring that together. And then after that we're doing our Server SAN and converged infrastructure report, showing how the two of those are merging. >> Great, that's a report that's always been very highly anticipated. I think this is our third or fourth time doing that, right? >> Fourth year. >> Right, fourth year, so great, looking forward to that. Well David, thanks very much for co-hosting with me-- >> You're very welcome. >> And it's been a pleasure working with you. Okay, that's it, we're one day here at Pure Accelerate.
Tomorrow we're at Hortonworks' DataWorks Summit, we were there today actually as well, and Cloud Foundry Summit. Of course we're also at the AWS Public Sector event; John Furrier is down there. So yeah, theCUBE is crazy busy. Next week we're in Munich; IBM has an event, the Data Summit, and then the week after that we're at Nutanix .NEXT. There's a lot going on with theCUBE, so check out SiliconANGLE.tv to find out where we're going to be next. Go to Wikibon.com for all the research, and SiliconANGLE.com for all the news. Thank you guys, great job, thanks to Pure, we're out, this is theCUBE. See you next time. (retro music)

Published Date : Jun 14 2017

