AI and Hybrid Cloud Storage | Wikibon Action Item | May 2019
Hi, I'm Peter Burris, and this is Wikibon's Action Item. We're joined here in the studio by David Floyer. Hi David. >> Hi there. >> And remote, we've got Jim Kobielus. Hi, Jim. >> Hi everybody. >> Now, Jim, you probably can't see this, but for those who are watching, when we do show the broad set, notice that David Floyer's got his Game of Thrones coffee cup with him. Now that has nothing to do with the topic. David and Jim, we're going to be talking about this challenge that businesses have, that enterprises have, as they think about making practical use of AI. The presumption for many years was that we were going to move all the data up into the Cloud in a central location, and all workloads were going to be run there. As we've gained experience, it's very clear that we're actually going to see a greater distribution of function, partly in response to a greater distribution of data. But what does that tell us about the relationship between AI, AI workloads, storage, and hybrid Cloud? David, why don't you give us a little clue as to where we're going to go from here. >> Well, I think the first thing we have to do is separate out the two types of workload. There's the development of the AI solution, the inference code, et cetera, and the dealing with all of the data required for that. And then there is the execution of that code, which is the inference code itself. And the two are very different in characteristics. For the development, you've got a lot of data. It's very likely to be data-bound. And storage is a very important component of that, as well as compute and the GPUs. For the inference, that's much more compute-bound. Again, compute, neural networks, GPUs are very, very relevant to that portion. Storage is much more ephemeral, in the sense that the data will come in and you will need to execute on it. But the compute will be part of that sensor, and you will want the storage to be actually in the DIMM itself, or non-volatile DIMM, right up as part of the processing. And you'll want to share that data only locally, in real time, through some sort of mesh computing. So, very different compute requirements, storage requirements, and architectural requirements. >> Yeah, let's go back to that notion of the different storage types in a second, but Jim, David described how the workloads are going to play out. Give us a sense of what the pipelines are going to look like, because that's what people are building right now, the pipelines for actually executing these workloads. How do they differ in the different locations? >> Yeah, so the entire DataOps pipeline for data science, data analytics, AI in other words. And so what you're looking at here is all the processes from discovering and ingesting the data, to transforming and preparing and correcting it, cleansing it, to modeling and training the AI models, to serving them out for inferencing along the lines of what David's describing. So, there's different types of AI models that one builds from different data to do different types of inferencing. And each of these different pipelines might be highly, often is highly, specific to a particular use case. You know, AI for robotics, that's a very different use case from AI for natural language processing, embedded for example in an e-commerce portal environment. So, what you're looking at here is different pipelines that all share a common sort of flow of activities and phases.
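To make that shared flow concrete, here is a minimal, hypothetical sketch of the stages Jim lists: discover and ingest, transform and cleanse, model and train, then serve for inferencing. The stage functions and data are invented for illustration and are not any particular product's API.

```python
# Hypothetical sketch of the pipeline stages described above: discover/ingest,
# transform/cleanse, model/train, then serve for inferencing. The stage functions
# and the data are placeholders, not any particular product's API.

from typing import Callable

def discover_and_ingest(source: str) -> list:
    """Pull raw records from a data source (placeholder)."""
    return [{"source": source, "value": float(i)} for i in range(100)]

def transform_and_cleanse(records: list) -> list:
    """Prepare, correct, and cleanse the data before modeling (placeholder)."""
    return [r for r in records if r["value"] is not None]

def train_model(records: list) -> dict:
    """'Train' a trivial model on the prepared data (placeholder)."""
    mean_value = sum(r["value"] for r in records) / max(len(records), 1)
    return {"scale": mean_value}

def serve_for_inference(model: dict) -> Callable[[dict], float]:
    """Package the trained model so consuming devices or applications can call it."""
    return lambda record: record["value"] * model["scale"]

def run_pipeline(source: str) -> Callable[[dict], float]:
    records = discover_and_ingest(source)
    prepared = transform_and_cleanse(records)
    model = train_model(prepared)
    return serve_for_inference(model)

if __name__ == "__main__":
    predict = run_pipeline("sensor-feed")
    print(predict({"value": 3.0}))
```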
And you need a data scientist to build and test, train and evaluate, and serve out the various models to the consuming end devices or applications. >> So, David, we've got 50 or so years of computing where the primary role of storage was to persist a transaction and the data associated with that transaction that has occurred. And that's, you know, disk, and then you have all the way out to tape if we're talking about archive. Flash changes that equation. >> Absolutely changes it. >> AI absolutely demands a different way of thinking. Here we're not talking about persisting our data, we're talking about delivering data, really fast. As you said, sometimes very ephemeral. And so, it requires a different set of technologies. What are some of the limitations that historically storage has been putting on some of these workloads? And how are we breaching those limitations to make them possible? >> Well, if we take only 10 years ago, the start of big data was Hadoop. And that was spreading the data over very cheap disks, hard disks, with the compute there, and you spread that data and you did it all in parallel on very cheap nodes. So, that was the initial approach, but that is a very expensive way of doing it now, because you're tying the data to that set of nodes. They're all connected together. So, a more modern way of doing it is to use Flash, to use multiple copies of that data, but logical copies or snapshots of that Flash, and to be able to apply as many processes, as many nodes, as is appropriate for that particular workload. And that is a far more efficient and faster way of processing that or getting through that sort of workload. And it really does make a difference of tenfold in terms of elapsed time and the ability to get through that. And the overall cost is very similar. >> So that's true in the inferencing or, I'm sorry, in the modeling. What about in the inferencing side of things? >> Well, the inferencing side is again very different, because you are dealing with the data coming in from the sensors, or coming in from other sensors or smart sensors. So, what you want to do there is process that data with the inference code as quickly as you can, in real time. Most of the time in real time. So, when you're doing that, you're holding the current data actually in memory, or maybe in what's called non-volatile DIMM, or NVDIMM, which gives you a larger amount. But you almost certainly don't have the time to go and store that data, and you certainly don't want to store it if you can avoid it, because it is a large amount of data and... >> Has limited derivative use. >> Exactly. >> Yeah. >> So you want to get all, or quickly get all, the value out of that data. Compact it right down using whatever techniques you can, and then take just the results of that inference up to other nodes. Now at the beginning of the cycle, you may need more, but at the end of the cycle, you'll need very little. >> So Jim, the AI world has built algorithms over many, many, many years. Many of which still persist today, but they were building these algorithms with the idea that they were going to use kind of slower technologies. How is the AI world rethinking algorithms, architectures, pipelines, use cases, as a consequence of these new storage capabilities that David's describing? >> Well yeah, AI has become widely distributed in terms of its architecture, increasingly, and often it's running over containerized, Kubernetes-orchestrated fabrics.
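As a rough illustration of the inference-side pattern David describes (hold the incoming data in memory, score it immediately, compact it, and send only the small result upstream), here is a hedged, self-contained sketch. The sensor read, the anomaly test, and the upstream transport are all invented placeholders.

```python
# Hedged sketch of the inference-side pattern: hold incoming sensor data in memory
# (or NVDIMM), score it immediately, compact it, and forward only the small result
# upstream. The sensor read, anomaly test, and transport are invented placeholders.

import random
from collections import deque

def read_sensor() -> float:
    """Stand-in for a real sensor reading."""
    return random.gauss(20.0, 2.0)

def infer(window: deque) -> dict:
    """Toy 'inference': flag the latest reading if it drifts far from the window mean."""
    mean = sum(window) / len(window)
    latest = window[-1]
    return {"anomalous": abs(latest - mean) > 3.0, "window_mean": round(mean, 2)}

def send_upstream(summary: dict) -> None:
    """Only the compacted result leaves the node (placeholder transport)."""
    print("upstream:", summary)

def edge_loop(iterations: int = 1000, window_size: int = 64) -> None:
    window = deque(maxlen=window_size)   # ephemeral, memory-resident working set
    for _ in range(iterations):
        window.append(read_sensor())
        result = infer(window)
        if result["anomalous"]:          # raw readings are never persisted here
            send_upstream(result)

if __name__ == "__main__":
    edge_loop()
```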
And a lot of this is going on in the area of training of models, and distributing pieces of those models out to various nodes within an edge architecture. It may not be edge in the internet of things sense, but widely distributed, highly parallel environments, as a way of speeding up the training and speeding up the modeling, and really speeding up the evaluation of many models running in parallel in an approach called ensemble modeling, to be able to converge on a predictive solution more rapidly. So, that's very much what David's describing: it's leveraging the fact that memory is far faster than any storage technology we have out there. And so, being able to distribute pieces of the overall modeling and training and even data prep workloads is able to speed up the deployment of highly optimized and highly sophisticated AI models for the cutting-edge, you know, challenges we face, like the Event Horizon Telescope, for example, that we're all aware of, when they were able to essentially make a visualization of a black hole. That relied on a form of highly distributed AI called grid computing. Challenges like that demand a highly distributed, memory-centric, orchestrated approach to tackle. >> So, you're essentially moving the code to the data as opposed to moving all of the data all the way out to the one central point. >> Well, so if we think about that notion of moving code to the data, and I started off by suggesting that, in many respects the Cloud is an architectural approach to how you distribute your workloads, as opposed to an approach to centralizing everything in some public Cloud. I think increasingly, application architects and IT organizations and service providers are all seeing things in that way. This is a way of more broadly distributing workloads. Now, we talked briefly about the relationship between storage and AI workloads, but we don't want to leave anyone with the impression that we're at a device level. We're really talking about a network of data that has to be associated with a network of storage. >> Yes. >> Now that suggests a different way of thinking about data and data administration and storage. We're not thinking about devices, we're really trying to move that conversation up into data services. What kind of data services are especially crucial to supporting some of these distributed AI workloads? >> Yes. So there are the standard ones that you need for all data, which is the backup and safety and encryption, security, control. >> Primary storage allocation. >> All of that, you need that in place. But on top of that, you need other things as well. Because you need to understand the mesh, the distributed hybrid Cloud that you have, and you need to know what the capabilities are of each of those nodes, you need to know the latencies between each of those nodes - >> Let me stop you here for a second. When you say "you need to know," do you mean "I as an individual need to know" or "the system needs to know"? >> It needs to be known, and it's too complex, far too complex, for an individual ever to solve problems like this, so it needs, in fact, its own little AI environment to be able to optimize and check the SLAs, so that particular inference code can be executed in the way that it's set up. >> So it sounds like - >> It's a mesh type of computing.
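For the ensemble modeling approach mentioned above (many models trained and evaluated in parallel, then combined), a minimal sketch might look like the following. It uses scikit-learn and a synthetic data set purely as convenient stand-ins; in the distributed setups described here, each member model's training could run on its own node.

```python
# Minimal sketch of ensemble modeling: several models trained on the same prepared
# data and combined by averaging their predicted probabilities. scikit-learn and a
# synthetic data set are convenient stand-ins, not the setup discussed in the episode.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

members = [
    LogisticRegression(max_iter=1000),
    RandomForestClassifier(n_estimators=200, random_state=0),
    GradientBoostingClassifier(random_state=0),
]

for model in members:
    model.fit(X_train, y_train)   # each fit could run on its own node in a distributed setup

# Combine the members: average class probabilities, then take the most likely class.
avg_proba = np.mean([m.predict_proba(X_test) for m in members], axis=0)
ensemble_prediction = avg_proba.argmax(axis=1)
print("ensemble accuracy:", (ensemble_prediction == y_test).mean())
```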
>> Yeah, so it sounds like one of the first use cases for AI, practical, commercial use cases, will be AI within the data plane itself, because the AI workloads are going to drive such a complex model and utilization of data that, if you don't have that, the whole thing will probably just fold in on itself. Jim, how would you characterize this relationship between AI and the system? How should people think about that, and is that really going to be a practical, near-term commercial application that folks should be paying attention to? >> Well, looking at the Cloud native world, what we need and what we're increasingly seeing out there are solutions, tools, really data planes, that are able to associate a distributed storage infrastructure of a very hybridized nature, in terms of disk and flash and so forth, with a highly distributed containerized application environment. So for example, just last week at Jeredhad, I met with the folks from Robin Systems, and they're one of the solution providers providing those capabilities to associate, like I said, the storage Cloud with the containerized, essentially application, or Cloud applications that are out there. You know, what we need there, like you've indicated, is the ability to use AI to continue to look for patterns of performance issues, bottlenecks, and so forth, and to drive the ongoing placement of data across storage nodes and servers within clusters and so forth, as a way of making sure that storage resources are always used efficiently, that SLAs, as David indicated, are always observed in an automated fashion as the data placement and workload placement decisions are being made, and so ultimately that the AI itself, whatever it's doing, like recognizing faces or recognizing human language, is able to do it as efficiently and really as cheaply as possible. >> Right, so let me summarize what we've got so far. We've got that there is a relationship between storage and AI, that the workload suggests that we're going to have centralized modeling, large volumes of data, and we're going to have distributed inferencing, smaller amounts of data, more complex computing. Flash is crucial, mesh is crucial, and increasingly, because of the distributed nature of these applications, there's going to have to be very specific and specialized AI in the infrastructure, in that mesh itself, to administer a lot of these data resources. >> Absolutely. >> So, but we want to be careful here, right David? Just as we don't want to suggest the notion that everything goes into a centralized Cloud under a central administrative effort, we also don't want to suggest this notion that there's this broad, heterogeneous, common, democratized, every-service-available-everywhere environment. Let's bring hybrid Cloud into this. >> Right. >> How will hybrid Cloud ultimately evolve to ensure that we get common services where we need them, and know where we don't have common services so that we can factor in those constraints? >> So it's useful to think about the hybrid Cloud from the point of view of the development, which will be fairly normal types of computing and be in really large centers, and the edges themselves, which will be what we call autonomous Clouds. Those are the ones at the edge which need to be self-sufficient. So if you have an autonomous car, you can't guarantee that you will have communication to it. And a lot of IoT is in distant places, on ships or in distant places, where you can't guarantee communication. So they have to be able to run much more by themselves.
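A hedged sketch of what that kind of AI-assisted data plane loop might look like: watch per-node metrics and, when a workload's latency SLA is violated, move its data to a better-placed node. The node list, metrics, and the move operation are hypothetical placeholders; a real system would work through the orchestrator's and storage platform's own APIs.

```python
# Hedged sketch of automation in the data plane: a control loop that watches per-node
# latency and moves a workload's data when its SLA is violated. Nodes, metrics, and
# the 'move' itself are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class Node:
    name: str
    observed_latency_ms: float
    free_capacity_gb: float

def pick_best_node(nodes: list, required_gb: float) -> Node:
    """Choose the lowest-latency node that still has room for the data set."""
    candidates = [n for n in nodes if n.free_capacity_gb >= required_gb]
    return min(candidates, key=lambda n: n.observed_latency_ms)

def rebalance(workload: dict, nodes: list, sla_ms: float = 5.0) -> None:
    current = workload["node"]
    if current.observed_latency_ms <= sla_ms:
        return                                    # SLA met, nothing to move
    target = pick_best_node(nodes, workload["data_gb"])
    if target is not current:
        print(f"moving {workload['name']} data: {current.name} -> {target.name}")
        workload["node"] = target                 # stand-in for the actual data move

if __name__ == "__main__":
    mesh = [Node("edge-a", 12.0, 500), Node("edge-b", 3.5, 800), Node("core-1", 9.0, 4000)]
    workload = {"name": "face-recognition", "node": mesh[0], "data_gb": 200}
    rebalance(workload, mesh)
```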
So that's one important characteristic: that autonomous cloud needs to be self-sufficient itself and have within it all the capabilities of running that particular code, and then passing up data when it can. >> Now you gave examples where it's physically required to do that, but there are also OT examples. >> Exactly. >> Operational technologies, where you need to have that air gap to ensure that bad guys can't get into your data. >> Yes, absolutely. I mean, if you think about a boat, a ship, it has multiple very clear air gaps, and a nuclear power station has a total air gap around it. You must have those sorts of air gaps. So it's a different architecture for different uses, for different areas. But of course, data is going to come up from those autonomous Clouds, upwards, but it will be a very small amount of the data that's actually being processed. And there'll be requests down to those autonomous Clouds for additional processing of one sort or another. So there still will be a discussion, communication, between them, to ensure that the final outcome, the business outcome, is met. >> All right, so I'm going to ask each of you guys to give me a quick prediction. David, I'm going to ask you about storage, and then Jim, I'm going to ask you about AI in light of David's prediction about storage. So David, as we think about where these AI workloads seem to be going, how is storage technology going to evolve to make AI applications easier to deal with, easier to run, cheaper to run, more secure? >> Well, the fundamental move is towards larger amounts of Flash. And the new thing is larger amounts of non-volatile DIMM, the memory in the computer itself. Those are going to get much, much bigger, those are going to help with the execution of these real-time applications, and there's going to be high-speed communication over short distances between the different nodes in this mesh architecture. So that's on the inference side; there's a big change happening in that space. On the development side, the storage will move towards sharing data. So having a copy of the data which is available to everybody, and that data will be distributed. So sharing that data, having that data distributed, will then enable the sorts of ways of using that data which will retain context, which is incredibly important, and avoid the cost and the loss of value because of the time taken moving that data from A to B. >> All right, so to summarize, we've got a new level in the storage hierarchy that sits between Flash and memory to really accelerate things, and then secondly, we've got this notion that increasingly we have to provide a way of handling time and context so that we sustain fidelity, especially in more real-time applications. Jim, given that this is where storage is going to go, what does that say about AI? >> What it says about AI is that, first of all, we're talking about, like David said, meshes of meshes. Every edge node is increasingly becoming a mesh in its own right, with disparate CPUs and GPUs and whatever, doing different inferencing on each device. But every one of these, like a smart car, will have plenty of embedded storage to process a lot of data locally, data that may need to be kept locally for lots of very good reasons, like a black box in case of an accident, but also in terms of e-discovery of the data and the models that might have led up to an accident that might have caused fatalities and whatnot.
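To illustrate the autonomous-cloud behavior described above, where the node keeps running and processing locally and forwards only small summaries upstream when a connection happens to be available, here is a rough, self-contained sketch; the connectivity check, threshold, and transport are placeholders rather than a real edge framework.

```python
# Rough sketch of an 'autonomous cloud' at the edge: process everything locally,
# queue tiny summaries, and flush them upstream only when connectivity exists.
# The connectivity check, threshold, and transport are placeholders.

import json
import random

class AutonomousEdgeNode:
    def __init__(self):
        self.pending = []                 # summaries waiting for connectivity

    def uplink_available(self) -> bool:
        """Stand-in for a real connectivity check; an edge node cannot assume one."""
        return random.random() < 0.2

    def process_locally(self, reading: float) -> None:
        """All raw processing stays on the node; only a tiny summary is queued."""
        if reading > 25.0:
            self.pending.append({"event": "threshold_exceeded", "value": round(reading, 2)})

    def flush_upstream(self) -> None:
        if self.pending and self.uplink_available():
            print("sending", len(self.pending), "summaries:", json.dumps(self.pending))
            self.pending.clear()

if __name__ == "__main__":
    node = AutonomousEdgeNode()
    for _ in range(200):
        node.process_locally(random.gauss(20.0, 4.0))
        node.flush_upstream()
```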
So when we look at where AI is going, AI is going into meshes of meshes, where there's AI running in each of the nodes within the meshes, and the meshes themselves will operate as autonomous decisioning nodes within a broader environment. Now in terms of the context, the context that increasingly surrounds all of the AI within these distributed architectures will be in the form of graphs, and graphs are something distinct from the statistical algorithms that we built AI out of. We're talking about knowledge graphs, we're talking about social graphs, we're talking about behavioral graphs, so graph technology is just getting going. For example, Microsoft recently made a big, continued push into threading graph - contextual graph technology - into everything they do. So that's where I see AI going: up from statistical models to graph models as the broader metadata framework for binding everything together. >> Excellent. All right guys, so Jim, I think another topic for another time might be the mesh mess. (laughs) But we won't do that now. All right, let's summarize really quickly. We've talked about how the relationships between AI, storage and hybrid Clouds are going to evolve. Number one, AI workloads are at least differentiated by where we handle modeling: large amounts of data still need a lot of compute, but we're really focused on large amounts of data and moving that data around very, very quickly, and therefore proximate to where the workload resides. Great, great application for Clouds, large, public as well as private. On the other side, where the inferencing work is done, that's going to be very compute-bound, smaller data volumes, but very, very fast data. Lots of flash everywhere. The second thing we observed is that these new AI applications are going to be used and applied in a lot of different domains, both within human interaction as well as real-time domains within IOT, et cetera, but that as we evolve, we're going to see a greater relationship between the nature of the workload and the class of the storage, and that is going to be a crucial feature for storage administrators and storage vendors over the next few years: to ensure that that specialization is reflected in what's needed. Now the last point that we'll make very quickly is that as we look forward, the whole concept of hybrid Cloud, where we can have greater predictability into the nature of data-oriented services that are available for different workloads, is going to be really, really important. We're not going to have all data services common in all places. But we do want to make sure, whether it's a container-based application or some other structure, that we can ensure that the data that is required will be there in the context, form and metadata structures that are required. Ultimately, as we look forward, we see new classes of storage evolving that bring data even closer to the compute side, and we see new data models emerging, such as graph models, that are a better overall reflection of how this distributed data is going to evolve within hybrid Cloud environments. David Floyer, Jim Kobielus, Wikibon analysts, I'm Peter Burris, once again, this has been Action Item.
Wikibon Action Item | Wikibon Conversation, February 2019
(electronic music) >> Hi, I'm Peter Burris. Welcome to Wikibon Action Item from theCUBE Studios in Palo Alto, California. So today we've got a great conversation, and what we're going to be talking about is hybrid cloud. Hybrid cloud's been in the news a lot lately, with larger consequences from changes made by AWS, as they announced Outposts and acknowledged for the first time that there's going to be a greater distribution of data and a greater distribution of function as enterprises move to the cloud. We've been on top of this for quite some time, and have actually coined what we call true hybrid cloud, which is the idea that increasingly we're going to see a need for a common set of capabilities and services in multiple locations, so that the cloud can move to the data, and not the data automatically being presumed to move to the cloud. Now, to have that conversation and to reveal some new research on what the cost and value propositions of the different options available today are, we've got David Floyer. David, welcome to theCUBE. >> Thank you. >> So David, let's start. When we talk about hybrid cloud, we are seeing a continuum of different options starting to emerge. What are the defining characteristics? >> So, yes, we're seeing a continuum emerging. We have what we call standalone, of course, at one end of the spectrum, and then we have multi cloud, and then we have loosely and tightly coupled, and then we have true, as you go up the spectrum. And the dependence upon data, the dependence on the data plane, the dependence upon low latency, the dependence on writing to systems of record - all of those increase as we're going from high latency and high bandwidth all the way up to low latency. >> So let me see if I got that right. So true hybrid cloud is at one end. >> Yes. >> And true hybrid cloud is low latency, write-oriented workloads, simplest possible administration. That means we are typically going to have a common stack in all locations. >> Yes. >> Next to that is this notion of tightly coupled hybrid cloud, which could be higher latency, write oriented, probably has a common set of software on all nodes that handles state. And then kind of this notion of loosely coupled multi or hybrid cloud, which is high latency, read oriented, which may have just API-level coordination and commonality on all nodes. >> Yep, that's right, and then you go down even further to just multi cloud, where you're just connecting things and each of them is independent of each other. >> So if I'm a CIO and I'm looking at a move to a cloud, I have to think about greenfield applications and the natural distribution of data for those greenfield applications, and that's going to help me choose which class of hybrid cloud I'm going to use. But let's talk about the more challenging set of scenarios for most CIOs, which is the existing legacy applications. >> The systems of record. >> Yeah, the systems of record. As I try to bring that cloud-like experience to those applications, how am I going through that thought process? >> So, we have some choices. The choices are, I could lift and shift, up to one of the clouds, one of the large clouds, many of them are around. And if I do that, what I need to be looking at is, what is the cost of moving that data, and what is the cost of pushing that up into the cloud, and what's the conversion cost if I need to move to another database.
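As one rough way to make that continuum concrete, the sketch below maps a workload's characteristics onto the tiers David describes. The attributes and thresholds are illustrative assumptions, not part of the research.

```python
# Illustrative sketch only: mapping a workload's characteristics onto the hybrid
# cloud continuum described above. Attributes and thresholds are invented.

from dataclasses import dataclass

@dataclass
class Workload:
    latency_target_ms: float        # how quickly responses must come back
    writes_systems_of_record: bool  # write-oriented, transactional work?
    needs_shared_data_plane: bool   # common data plane across locations?

def choose_tier(w: Workload) -> str:
    if w.latency_target_ms < 10 and w.writes_systems_of_record and w.needs_shared_data_plane:
        return "true hybrid cloud (common stack in all locations)"
    if w.writes_systems_of_record and w.needs_shared_data_plane:
        return "tightly coupled hybrid (common state-handling software on all nodes)"
    if w.needs_shared_data_plane:
        return "loosely coupled hybrid (API-level coordination, read-oriented)"
    return "multi cloud (independent clouds, simply connected)"

print(choose_tier(Workload(5.0, True, True)))
print(choose_tier(Workload(200.0, False, False)))
```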
And I think that's the biggest one. So it's not just the cost of moving the data, which is just an ingress cost; it's a cost of format changes. >> Absolutely. >> You know, migration and all the other elements, conversion changes, et cetera. >> Right, so what I did in my research was focus on systems of record, the highly expensive, very, very important systems of record, which obviously are fed by a lot of other things, you know, systems of engagement, analytics, et cetera. But those systems of record have to work. You need to know if you've taken an order. You need to have consistency about that order. You need to know always that you can recover any data you need in your financials, et cetera. All of that is mission-critical systems of record, and that's the piece that I focused on here. >> So again, these are low latency. >> Very low latency, yes. >> Write oriented. >> Very write-oriented types of applications. And I focused on Oracle, because the majority of systems of record run on Oracle databases, the large-scale ones at least. So, that's what we are focusing on here. So I was looking at the different options for a CIO, of how they would go, and there are three main options open at the moment. There's Oracle Cloud at Customer, which gives the cloud experience. There is Microsoft Azure Stack, which has an Oracle database version of it. And Outposts, but we eliminated Outposts, not because it's not going to be any good, but because it's not there yet. >> You can't do research on it if it doesn't exist yet. >> (laughs) That's right. So, we focused on Oracle and Azure, and we focused on what was the benefit of moving from a traditional environment, where you've got best of breed essentially on site, to this cloud environment. >> So, if we think about it, the normal way of thinking about this kind of research is that people talk about R.O.I., and historically that's been done by keeping the amount of work that's performed as a given, constant, and then looking at how the different technology components compare from a cost standpoint. But the promise of a move to Cloud is not predicated on lowering costs per se. You may have other financial considerations, of course, but it's really predicated on the notion of the cloud experience, which is intended to improve business results. So if we think about R.O.I. as being a numerator question, where the value is the amount of work you do, versus a denominator question, which is what resources are consumed to perform that work, it's not just the denominator side; we really need to think about the numerator side as well. >> The value you are creating, yes. >> So, what kind of things are we focused on when we think about that value created as a consequence of the possibilities and options of the Cloud? >> Right, so both are important. So obviously, when you move to a cloud environment, you can simplify operations in particular, you can simplify recovery, you can simplify a whole number of things within the IT shop, and those give you extra resources. And then the question is, do you just cash in on those resources and say, okay, I've made some changes, or do you use those resources to improve the ability of your systems to work? One important characteristic of IT, all IT and systems of record in particular, is that you get depreciation of that asset. Over time it becomes less fitted to the environment that it started with, so you have to do maintenance on it.
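One way to read that framing is as a simple ratio: the value of the work delivered over the resources consumed to deliver it. The tiny sketch below only makes the numerator and denominator language concrete; the numbers are placeholders, not figures from the research.

```python
# Placeholder arithmetic only, to make the numerator/denominator framing concrete:
# ROI here is the value of the work delivered divided by the resources consumed, so
# a cloud move can pay off by raising the numerator even if costs barely change.

def roi(value_of_work: float, resources_consumed: float) -> float:
    return value_of_work / resources_consumed

baseline = roi(value_of_work=1_000.0, resources_consumed=400.0)   # traditional estate
cloud = roi(value_of_work=1_150.0, resources_consumed=390.0)      # freed effort adds value

print(f"baseline ROI: {baseline:.2f}, cloud ROI: {cloud:.2f}")
```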
You have to do maintenance and work, and as you know, most work done in an IT shop is on the maintenance side. >> Meaning it's an enhancement. >> It's maintenance and enhancement, yes. So making more resources available, making it easier to do that maintenance, and making fewer things that are going to interfere with that - faster time to maintenance, faster time to new applications or improvements - is really fundamental to systems of record. So that is the value that you can bring to it, and you also bring value with better availability, higher availability, as well. So those are the things we have put into the model, to see how the different approaches compare, and we were looking at really a total, one supplier being responsible for everything, which was the Oracle environment, Oracle Cloud at Customer, versus a more hybrid environment, where you had - >> Or mixed, or mixed. >> Mixed environment, yes. Where you had the equipment coming from different places. >> One vendor. >> The service, the Azure service, coming from Microsoft, and of course the database coming then from Oracle itself. And we found tremendous improvement in the value that you could get because of the single source. We found that a better model. >> So, the common source led to efficiencies that then allowed a business to generate new classes of value. >> Correct. >> Because, as you said, you know, 70 plus percent of what an IT organization or business spends on technology is associated with maintaining what's there, enhancing what's there, and a very limited amount is focused on new greenfield, and new types of applications. So if you can reduce the amount of time and energy that goes into that heritage set of applications, those systems of record, then that opens up, that frees up resources to do some other things. >> And, having the flexibility now, with things like Azure Stack and in the future AWS, of putting that resource either on premise or in the cloud, means that you can make decisions about where you process these things, about where the data is, about where the data needs to be, the best placement for the data for what you're trying to do. >> That decision is predicated on things like latency, but also regulatory environment, intellectual property control. >> And the cost of moving data up and down. So the three laws of the cloud. So having that flexibility of keeping it where you want to is a tremendous value, again, in terms of the speed of deployment and the speed of improvement. >> So we'll get to the issues surrounding the denominator side of this; I want to come back to that numerator side. So the denominator again is the resources consumed to deliver the work to the business. But when we talk about that denominator side, perhaps opening up additional monies to do new types of development, new types of work - take us through some of the issues, like what is the cloud experience associated with a single vendor, faster development. Give us some of the issues that are really driving the value proposition above the line. >> The whole issue about Cloud is that you take away all of the requirements to deal with the hardware, deal with the orchestration of the storage, deal with all of these things, so instead of taking weeks, months to put in extra resources, you say, I want them, and it's there. >> So you're taking administrative tasks out of the flow. >> Out of the flow, yes.
>> And as a consequence, things happen faster. So time to value is one of the first ones; give us another one. >> So, obviously, the ability to... it's a cloud environment, so if you're a vendor of that cloud, what you want to be able to do is to make incremental changes quickly, as opposed to waiting for a new release and working on a release basis. So that fundamental speed to change, speed to improve, bring in new features, bring in new services - a cloud-first type model - that is a very powerful way for the vendor to push out new things, and for the consumer to absorb them. >> Right, so the first one is time to value, but also it's lower cost of innovation. >> Yes, faster innovation, ability to innovate. And then the third most important part is, if you re-invest those resources that you have saved into new services, new capabilities - to me the most important thing, long term, for systems of record is to be able to make them go faster, and use that extra latency time there to bring in systems of analytics, AI systems, other systems, and provide automation of individual business processes, increased automation. That is going to happen over time; it's a slow adding to it, but it means you can use those cloud mechanisms, those additional resources, wherever they are. You can use those to provide a clear path to improving the current systems of record. And that is a faster and more cost-effective way than going in for a conversion, or moving the data up to the cloud, or lift and shift, for these types of applications. >> So these are all kind of related. I get superior innovation speeds, because I'm taking on new technology faster; I get faster time to value, because I'm not having to perform a bunch of tasks; and I can imbue additional types of work in support of automation without dramatically expanding the transactional latency and arrival rate of transactions within the system of record. Okay so, how did Oracle, and Azure with Oracle, stack up in your analysis? >> So first of all, what's important is both are viable solutions; they both would work. Okay, but the impact in terms of the total business value, including obviously any savings on people and things like that, was 290, nearly $300 million, additional. This was for a >> For how big a company? >> For a Fortune 2000 customer, so it was around two billion dollars a year in revenue, so a lot of money over five years, a lot of money. Either way you would save: 200 million if you were with the Azure option, but 300 with the Oracle. So that to me is far, far higher than the costs of IT for that particular company. It's a strategic decision, to be able to get more value out quicker, and for this class of workload, on Oracle, then Oracle Cloud at Customer was the best decision. To be absolutely fair, if you were on Microsoft's database, and you wanted to go to Microsoft Azure, that would be the better bet. You would get back a lot of those benefits. >> So stay within the stack if you can. >> Correct. >> Alright, so, two billion dollars a year, five years, $10 billion revenue, roughly. Between 200 million in savings for one, Microsoft Azure plus Oracle, and 300 million - so a 1% swing. Talk to us about speed, value - what happens in the numerator side of that equation? >> So, it is lower in cost, but you have a higher... the cost of the actual cloud is a little higher, so overall the pure hardware, equipment class, is a wash; it's not going to change much. >> Got it. >> It might be a little bit more expensive.
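To make the arithmetic behind those figures explicit, here is a quick back-of-the-envelope check using the numbers quoted in the conversation: roughly two billion dollars a year in revenue, about 200 million dollars of five-year benefit for the mixed option versus about 300 million for the single-vendor option.

```python
# Back-of-the-envelope check of the figures quoted above: roughly $2B a year in
# revenue ($10B over five years), about $200M of five-year benefit for the mixed
# Azure-plus-Oracle option versus about $300M for the single-vendor Oracle option.

five_year_revenue = 2_000_000_000 * 5        # ~$10B
benefit_mixed = 200_000_000                  # Azure Stack plus Oracle database
benefit_single_vendor = 300_000_000          # Oracle Cloud at Customer

swing = benefit_single_vendor - benefit_mixed
print(f"benefit swing: ${swing / 1e6:.0f}M, "
      f"or {swing / five_year_revenue:.1%} of five-year revenue")
```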
You make the savings as well because of the people - fewer operators, a simpler environment. Those are the savings you're going to make, and then you are going to push those back into the organization as increased value that can be given to the line of business. >> So the conclusion of the research is that if you are a CIO, you look at your legacy applications, they're going to be difficult to move, and you go with the stack that's best for those legacy applications. >> Correct. >> So if the vast majority of systems of record are running on Oracle... >> Large scale. >> Large scale, then that means Oracle Cloud at Customer is the superior fit for most circumstances. >> For a lot of those. >> If you're not there, though, then look at other options. >> Absolutely. >> Alright, David Floyer. >> Thank you. >> Thanks very much for being on theCUBE today. And you've been watching another Wikibon Action Item from theCUBE Studios in Palo Alto, California. I'm Peter Burris, thanks very much for watching. (electronic music)
Wikibon Action Item, Cloud-first Options | Wikibon Conversation, February 2019
>> Hi, I'm Peter Burroughs Wellcome to wicked bon action >> item from the Cube Studios in Palo Alto, California So today we've got a great conversation, and what we're going to be talking about is hybrid cloud hybrid. Claude's been in the news a lot lately. Largest consequences from changes made by a Ws is they announced Outpost and acknowledged for the first time that there's going to be a greater distribution of data on a greater distribution of function as enterprise has moved to the cloud. We've been on top of this for quite some time, and it actually coined what we call true hybrid cloud, which is the idea that increasingly, we're going to see a need for a common set of capabilities and services in multiple locations so that the cloud could move to the data and not the data automatically being presumed to move to the cloud. >> Now to have that >> conversation and to reveal some new research on what the cost in value propositions of the different options are available. Today. We've >> got David Foyer. David. Welcome to the Cube. Thank you. So, David, let's start. When we talk about Hybrid Cloud, we're seeing a continuum of different options start to emerge. What are the defining characteristics? >> Yes, we're seeing it could continue him emerging. We have what we've called standalone off course. That one is end of the spectrum on DH. There we have multi cloud, and then we have loosely and tightly coupled, and then we have true and as you go up the spectrum. So the dependence upon data depends on the data plane dependence upon low latent see dependance on writing does a systems of record records. All of those increase as we going from from lonely for High Leighton Sea and High Band with all way up to low late. >> So let me see if I got this right. It's true. I've a cloud is at one end and true. Either cloud is low late and see right on into workloads simplest possible administration. That means we're typically goingto have a common stack in all locations. Next to that is this notion of tightly coupled hybrid cloud, which could be higher late. And see, right oriented could probably has a common set of software on all no common mental state. And then, kind of this. This notion of loosely coupled right multi or hybrid cloud, which is low, high late and see, write or read oriented, which may have just a P I level coordination and commonality and all >> that's right. And then you go down even further to just multi cloud, where you're just connecting things, and each of them is independent off each other. >> So if I'm a CEO and I'm looking at a move to a cloud, I have to think about Greenfield applications and the natural distribution of data for those Greenfield applications. And that's going to help me choose which class of hybrid clawed him and he used. But let's talk about the more challenging from a set of scenarios for most CEOs, which is the existing legacy applications as I cry that Rangel yeah, systems of record. As I try to bring those those cloud like experience to those applications, how am I going through that thought process? >> So we have some choices. The choices are I could move it up to lift and shift up to on a one of the clouds by the large clouds, many of them around. And what if I if I do that what I'm need to be looking at is, what is the cost of moving that data? And what is the cost of pushing that up into the cloud and lost the conversion cast if I need to move to another database, >> and I think that's the biggest one. 
So it just costs of moving the data, which is just uninterested. It's a cost of format changes at our migration and all the other out conversion changes. >> So what I did in my research was focus on systems of record, the the highly expensive, very, very important systems of record, which obviously are fed by a lot of other things their systems, the engagement analytics, etcetera. But those systems of record have to work. They you need to know if you've taken on order, you need to have consistency about that order. You need to know always that you can recover any data you need in your financials, etcetera. All of that is mission critical systems of record. Andi, that's the piece that I focused on here, and I focused on >> sort of. These are loaded and she >> low, very low, latent, right oriented, very right orientated types of applications. And I focused on the oracle because the majority ofthe systems of record run on Oracle databases on the large scale ones, at least so that's what we're we're focusing on here. So I looking at the different options for a C I O off. How they would go on DH. There are three main options open at the moment. There's there's Arkalyk Cloud Cloud, a customer, which gives thie the cloud experience. There is Microsoft as your stack, which has a a Oracle database version of it on DH outposts. But we eliminated outposts not because it's not going to be any good, but because it's not there yet, is >> you get your Razor John thing. >> That's right. So we focused on Oracle on DH as you and we focused on what was the benefit of moving from a traditional environment where you've got best of breed essentially on site to this cloud environment. >> So if we think about it, the normal way of thinking about this kind of a research is that people talk about R. A Y and historically that's been done by looking by keeping the amount of work that's performed has given constant and then looking at how the different technology components compare from a call standpoint. But a move to cloud the promise of a move to cloud is not predicated on lowering costs per se, but may have other financial considerations, of course, but it's really predicated on the notion of the cod experience, which is intended to improve business results. So we think about our lives being a numerator question. Value is the amount of work you do versus the denominator question, which is what resources are consumed to perform that work. It's not just the denominator side we really need to think about. The numerator side is well, you create. So what? What kind of things are we focused >> on? What we think about that value created his consequence of possibilities and options of the cloud. >> Right? So both are important. So so Obviously, when you move to a cloud environment, you can simplify operations. In particular, you can simplify recovery. You, Khun simplify a whole number of things within the shop and those give you extra resources on. Then the question is, Do you just cash in on those resources and say OK, I've made some changes, Or do you use those resources to improve the ability of your systems to work and one important characteristic off it alight and systems of record in particular is that you get depreciation of that asset. Over time, it becomes less fitted to the environment it has started with, so you have to do maintenance on it. You have to do maintenance and work, and as you know most means most work done in my tea shop is on the maintenance side minutes. An enhancement. It's maintenance. 
An enhancement, yes. So making more resources available on making it easier to do that maintenance are making less, less things that are going to interfere with that faster time to to to maintenance faster time. Two new applications or improvements is really fundamental to systems of record, so that is the value that you can bring to it. And you also bring value with lower of better availability, higher availability as well. So those are the things that we put into the model to see how the different approaches. And we were looking at really a total one. One supplier being responsible for everything, which was the Oracle environment of Oracle clouded customer to a sort of hybrid invite more hybrid environment where you had the the the work environment where you had the equipment coming from different place vendor that the service, the oracle, the as your service coming from Microsoft and, of course, the database coming then from Arkham itself. And we found from tremendous improvement in the value that you could get because of this single source. We found that a better model. >> So the common source led to efficiencies that then allowed a business to generate new classes of value. Because, as you said, you know, seventy plus percent of a night organ orb business is spending. Biology is associate with maintaining which they're enhancing. What's there in a very limited amount is focused on new greenfield or new types of applications. So if you can reduce the amount of time energy that goes into that heritage set of applications those systems of record, the not opens up that frees up resources to do some of the things >> on DH Having inflexibility now with things like As your stack conned in the future E. W. S off. Putting that resource either on premise or in the cloud, means that you can make decisions about where you process things things about where the data is about, where the data needs to be, the best placement of the data for what you're trying to do >> and that that decision is predicated on things like late in sea, but also regulatory, environment and intellectual property, controlling >> the custom moving data up and down. So the three laws of off off the cloud so having that flexibility of moving, keeping it where you want to, is a tremendous value in again in terms ofthe the speed of deployment on the speed of improved. >> So we'll get to the issues surrounding the denominator side of this. I want to come back to that numerator sites that the denominator again is the resources consumed to deliver the work to the business. But when we talk about that denominator side, know you perhaps opening up additional monies to do new types of development new times of work. But take us through some of the issues like you know what is a cloud experience associated with single vendor Faster development. Give us some of the issues that are really driving the value proposition. Look above the line. >> I mean, the whole issue about cloud is that you go on, take away all of the requirements to deal with the hardware deal with the orchestration off the storage deal with all of these things. So instead of taking weeks, months to put in extra resources, you say I want them on is there. >> So you're taking out administrate your taking administrative tasks out of the flow out of the flow, and as a consequence, things happen. Faster is the time of values. One of the first one. Give us another one. >> So obviously the ability to no I have it's a cloud environment. 
So if you're a vendor of that cloud, what you want to be able to do is to make incremental changes quickly, as opposed to awaiting for a new release and work on a release basis. So that fundamental speed to change speed to improve, bring in new features. Bringing new services a cloud first type model that is a very powerful way for the vendor to push out new things. And for the consumer, too, has absorbed them. >> Right? So the first one is time to value, but also it's lower cost to innovation. >> Yes, faster innovation ability to innovate. And then the third. The third most important part is if you if you re invest those resources that you've saved into new services new capabilities of doing that. To me, the most important thing long term for systems of record is to be able to make them go faster and use that extra Leighton see time there to bring in systems off systems of analytics A. I systems other systems on provide automation of individual business processes, increased automation that is gonna happen over time. That's that's a slow adding to it. But it means you can use those cloud mechanisms, those additional resources, wherever they are. You can use those to provide a clear path to improving the current systems of record. And that is a much faster and more cost effective way than going in for a conversion or moving the data upto the cloud or shifting lift and shift. For these types of acts, >> what kind of they're all kind of related? So I get, I get. I get superior innovation speeds because I'm taking new technology and faster. I get faster time to value because I'm not having to perform much of tasks, and I could get future could imbue additional types of work in support of automation without dramatically expanding the transactional wait and see on arrival rate of turns actions within the system of record. Okay, So how did Oracle and Azure with Oracle stack up in your analysis? >> So first of all, important is both a viable solutions. They both would work okay, but the impact in terms of the total business value, including obviously any savings on people and things like that, was two hundred nineteen eighty three hundred million dollars additional. This was for Robert to come in for a a Fortune two thousand customer, so it was around two billion dollars. So a lot of money over five years, a lot of money. Either way, you would save two hundred million if you were with with the zero but three hundred with the oracle, so that that to me, is far, far higher than the costs of I T. For that particular company, it's It is a strategic decision to be able to get more value out quicker. And for this class of workload on Oracle than Arkalyk, Cloud was the best decision to be absolutely fair If you were on Microsoft's database. And you wanted to go to Microsoft as you. That would be the better bet you would. You would get back a lot of those benefits, >> so stay with him. The stack, if you can't. Correct. All right, So So two billion dollars a year. Five years, ten billion dollars in revenue, roughly between two hundred million and saving for one Congress all around three. Treasure Quest. Oracle three hundred millions were one percent swing. Talk to us about speed value. What >> happens in the numerator side of that equation >> S Oh, so it is lower in caste, but you have a higher. The cast of the actual cloud is a little a little higher. So overall, the pure hardware equipment Cass is is awash is not going to change much. It might be a little bit more expensive. You make the savings a cz? 
Well, because of the people you less less operators, simpler environment. Those are the savings you're going to make. And then you're going to push those back into into the organization a cz increased value that could be given to the line of business. >> So the closure of the researchers If your CEO, you look at your legacy application going to be difficult to move, and you go with stack. That's best for those legacy applications. And since the vast majority of systems of record or running all scale large scale, then that means work. A cloud of customers is a superior fit for most from a lot of chances. So if you're not there, though, when you look at other options, all right, David Floy er thank you. Thanks very much for being on the Cube today, and you've been watching other wicked bon action >> item from the Cube Studios and Power Rialto, California on Peter Burke's Thanks very much for watching.
Action Item | Blockchain & GDPR, May 4, 2018
Hi, I'm Peter Burris, and welcome to this week's Action Item. Once again we're broadcasting from our beautiful theCUBE Studios in Palo Alto, California. The Wikibon team is a little bit smaller this week for a variety of reasons; I'm being joined remotely by Neil Raden and Jim Kobielus. How are you doing, guys? >> We're doing great, Peter. >> Good, thank you. >> All right, and it's actually a good team for what we're going to talk about. We're going to be talking specifically about some interesting developments. In 14 days or so GDPR is going to kick in, and people who are behind will find themselves potentially subject to significant fines. We were actually talking to a chief privacy officer here in the US who told us that had the Equifax breach occurred in Europe after May 25th, 2018, it would have cost Equifax over 160 billion dollars. So these are very, very real amounts of money that we're talking about. But as we started thinking about some of the implications of GDPR, when it's going to happen, the circumstances of its success or failure, and what it's going to mean commercially to businesses, we also started trying to fold in a second trend, and that second trend is the role Bitcoin is going to play. Bitcoin has a number of different benefits, and we'll get into some of that in a bit, but one of them is that the data is immutable, and GDPR has certain expectations regarding a firm's flexibility in how it can manage and handle data. Blockchain may not line up with some of those issues as well as a lot of the blockchain advocates might think. Jim, what are some of the specifics?
>> Well, Peter, blockchain is the underlying distributed hyperledger, or trusted database, underlying Bitcoin and many other things. One of the core things about blockchain that makes it distinctive is that you can create records and append them to the blockchain, and you can read from them, but you can't delete them or update them. It's not a CRUD database. So for you to be able to go in and erase a personally identifiable information record on an EU subject, an EU citizen, in a blockchain: it's not possible if you stored it there. In other words, blockchain, at the very start, because it's an immutable database, would not allow you to comply with GDPR's requirement that people be given a right to be forgotten, as it's called. That is a huge issue that might put the kibosh on implementation of blockchain, not just for PII in the EU, but really for multinational businesses, anybody who does business in Europe and its core, disregarding Brexit for now, like Germany and France and Italy. You've got to be conformant completely, worldwide essentially, in your PII management capabilities in order to pass muster with the regulators in the EU and avoid these massive fines, and blockchain seems like it would be incompatible with that compliance. So where does the blockchain industry go? Does it go anywhere? Will it shrink, will the mania die because of the GDPR slap in the face? Probably not.
>> There is a second issue as well, Jim, at least I think there is, and that is that blockchain allows for anonymity, which means that everybody effectively has a copy of the ledger anywhere in the world. So if you've got personally identifiable information coming out of the EU, and you're a member or a part of that blockchain network living in California, you get a copy of the ledger. Now, you may not be able to read the details, and maybe that protects folks who might implement applications in blockchain, but it's the combination of both the fact that the ledger is fully distributed and the fact that you can't go in and make adjustments so that people can be forgotten based on EU laws. Have I got that right?
>> That's right. And then there's a gray area. You can encrypt any and every record in a blockchain and conceal it from the prying eyes of people in California or in Thailand or wherever, but that doesn't delete it. That's not the same as erasing or deleting. So there's a gray issue, and there's no clarity from the EU regulators on this. What if you used secret keys to encrypt individual records, PII, on a blockchain, and then lost the keys or deleted the keys? Would that effectively be the same as erasing the record, even though those bits would still be there, just unreadable? None of this has really been addressed in practice, so it's all a gray area, and it's a huge risk factor for companies that are considering or exploring uses of blockchain for managing identity, security, and all that other good stuff related to the records of people living in EU member countries.
>> So it seems as though we have two things that are likely to happen. First off, it's very clear that a lot of the GDPR-related regulations were written in advance of comprehending what blockchain might be, and since GDPR typically doesn't dictate implementation styles, it may have to be amended to accommodate some of the blockchain implementation styles. But it also suggests that, from a design standpoint, we're increasingly going to hear about the breaking up of data associated with a transaction, so that some of the metadata associated with that transaction may end up in the blockchain, but some of the actual PII-related data that is more sensitive from a GDPR or other standpoint might remain outside of the blockchain. So the blockchain effectively becomes a distributed, secure network for managing metadata in certain types of complex transactions. Is that in scope of what we're talking about, Jim?
>> Yeah, you've raised and alluded to a big issue for implementers. There will be on-chain implementations of particular data applications and off-chain implementations. Off-chain, off the blockchain, will probably be all the PII, in databases, relational and so forth, that allow you to do deletes and updates to comply with GDPR and similar mandates elsewhere; GDPR is not the only privacy mandate on earth. And then there are the on-chain applications. Which data sets will you store in a blockchain? You mentioned metadata. Now, metadata I'm not sure about, because metadata quite often is updated for lots of operational reasons. But really, fundamentally, if we look at what a blockchain is, it's an audit log. It's an archive, potentially, of plain old historical data that never changes, and ideally you don't want it to change. Take an audit log in the Internet of Things: an autonomous vehicle crashes, and the data on how it operated should be stored either in a black box on the device, on the car itself, and possibly also backed up to a distributed blockchain, where there is a trusted, persistent, resilient record of what went on. That would be a perfect use of blockchains, for storing perhaps trusted, timestamped, maybe encrypted records on things like that, because ultimately the regulators and the courts and the lawyers and everybody else will want to come back and subpoena and use those records to analyze what went on. That's an area where something like a blockchain might be employed that doesn't necessarily have to involve PII, unless of course it's an individual person's car. So there are all those gray areas for those kinds of applications. Right now it's looking kind of fuzzy for blockchain in lots of applications where you can easily infer the identity of individuals from data that may not, on the face of it, look like PII.
>> So, Neil, I want to come back to you, because of this notion of being able to infer. One of the things that's been going on in the industry for the past, well, 60 years is the dream of being able to create a transaction and persist that data, but then generate derivative value out of that data through things like analytics, data sharing, et cetera. Blockchain, because it basically locks that data away from prying eyes, kind of suggests that we want to be careful about utilizing blockchain for applications where the data could generate significant derivative use. What do you think?
>> Well, we've known for a long, long time that if you have anonymized data in a data set, you can merge that data with data from another data set and relatively easily find out who the individuals are. You add DNA data, EHR records, surveys, things from social media, and you know everything about people. And that's dangerous, because we used to think that losing our privacy just meant they were going to keep giving us recommendations to buy handbags and shoes. It's much more sinister than that: you can be discriminated against in employment, in insurance, in your credit rating, and all sorts of things. So I think it's a really burning issue. But what does it have to do with blockchain and GDPR? That's an important question. I think blockchain is a really emergent technology right now, and like all emergent technologies, it's either going to evolve very quickly or it's going to wither and die. I'm not going to speculate which one it's going to be. But this issue of how you can use it, how you can monetize data, and things that are immutable, I think those are all unanswered questions for the wider role of applications. To me it seems like you can get away from the immutable part by taking previous information and simply locking it away with encryption or something else and adding new information. The problem becomes what happens to that data once someone uses it for some purpose other than putting it in a ledger. And the other question I have about GDPR and blockchain is, who's enforcing this? What army of people is sifting through all the stated uses and violations? Does it take a breach before they act, or is there something else going on? Is the act of participating in a blockchain equivalent to owning, or having some visibility into, a system? GDPR doesn't seem to have answers to those questions.
>> Jim, what were you going to say?
>> Yeah, the EU and its member nations have not worked out those issues, in terms of how they will monitor and enforce GDPR in practical terms. Clearly it's going to require, on the part of Germany and France and the others, and maybe out of Brussels, some major directorate for GDPR monitoring and oversight, in terms of both companies operating in those nations and overseas companies dealing with European citizens. None of that's been worked out by those nations. Clearly, just like the implementation issues such as blockchain or not blockchain, we're moving toward the end of the month with not only those issues not worked out; many companies, many enterprises, both in Europe and elsewhere, are not GDPR-ready. Some of them may make a good boast that they are, but nobody really knows what it means to be ready at this point. This came to me very clearly when I asked Bernard Marr, the well-known author and influencer in the big data space, in Berlin a few weeks ago at the DataWorks Summit: I said, Bernard, you consult all over with big companies; what percentage of your clients, without giving names, do you think are really, truly GDPR-ready? And he said very few, because they're not sure what it means either. Everybody's groping their way toward some kind of, hopefully, risk mitigation strategy for addressing this issue.
>> Well, the technology certainly is moving faster than the law, and I'd argue even faster than the ethics. It's going to be very interesting to see how things play out. For anybody that's interested, we are actually in the midst right now of doing a nice piece of research on blockchain patterns for applications. What we're talking about, essentially, is the idea that blockchain will be applicable to certain classes of applications, but there's a whole bunch of other applications it will not be applicable to. So it's another example of a technology where initially people go, oh wow, that's the technology that's going to solve all problems, all data is going to move into the cloud. Jim, you like to point out Hadoop: all data and all applications were going to migrate to Hadoop, and clearly that's not going to happen. Neil, the way I would answer the question is that blockchain reduces the opportunity for multiple parties to engage in opportunism, so you can use a blockchain as a basis for assuring certain classes of behaviors as a group, as a community, and have that be relatively auditable and understandable. So it can reduce the opportunity for opportunism. Companies like IBM are probably right that a supply-chain-oriented blockchain is capable of assuring that all parties, when they are working together, are not exploiting holes in the contracts, that they're actually complying and getting equal value out of whatever that blockchain system is, and are not gaming it, while they can still go off and use their own data to do other things if they want. That's kind of the in-chain and out-of-chain notion. So it's going to be very interesting to see what happens over the course of the next few years, but clearly, even in the example I described, the whole question of GDPR compliance doesn't go away. All right, so let's get to some action items here. Neil, what's your action item?
>> I suppose, when it comes to GDPR and blockchain, I just have a huge number of questions about how they're actually going to be able to enforce it when it comes to personal information. Back in the Middle Ages, when you went to the market to buy a baby pig, they put it in a bag and tied it, because they didn't want the piglet to run away; it would take too much trouble to find it. But when you got home, sometimes they hadn't actually given you a pig, they'd given you a cat, and when you opened the bag the cat was out of the bag. That's where the phrase comes from. So I'm just waiting for the cat to come out of the bag. I think this sounds like a real fad that was built around Bitcoin, and we're trying to find some other way to use it, but I just don't know what it is. I'm not convinced.
>> Jim, Action Item.
>> Yeah, my advice for data managers is to start to segment your data sets into those that are forgettable under GDPR and those that are unforgettable. The forgettable ones are anything that has personally identifiable information, or that can be easily aggregated into identifying specific attributes of specific people; whether they're in Europe or elsewhere is a secondary issue. The unforgettables are the stuff that has to remain inviolate and persistent and can't be deleted. All the unforgettables are suited to writing to one or more blockchains, but they are not kosher with GDPR and other privacy mandates. Focus on the unforgettable data, whatever that might be, and then conceivably investigate using blockchain for distributed access and so forth. But bear in mind, blockchain is just one database technology among many in a very hybrid data architecture. There's more than one way to skin the cat: HDFS versus blockchain versus NoSQL variants. Don't imagine, because blockchain is the flavor of mania of the day, that you've got to go there. There are lots and lots of alternatives.
>> All right, so here's our overall action item. This week we discussed the coming confrontation between GDPR, which has been in effect for a while but whose fines will actually start being levied after May 25th, and blockchain. GDPR prescribes relatively strict rules regarding a firm's control over personally identifiable information: you have to have it stored within the bounds of the EU if it derives from an EU source, and it also has to be forgettable at that source. If someone chooses to be forgotten, the firm that owns, administers, and stewards that data has to be able to get rid of it. This is in conflict with blockchain, which says that the ledgers associated with a blockchain will be, first of all, fully distributed, and second of all, immutable. That provides some very powerful application opportunities, but it's not GDPR-compliant on the face of it. Over the course of the next few years, no doubt we will see the EU and other bodies try to bring blockchain and blockchain-related technologies into a regulatory regime that actually is administrable as well as auditable and enforceable, but it's not there yet. Does that mean that folks in the EU should not be thinking about blockchains? We don't know. It means it introduces a risk that has to be accommodated. But we at least think that what has to happen is that data managers on a global basis need to start adopting this concept of forgettable data and unforgettable data to ensure they can remain in compliance. The final thing we'll say is that ultimately blockchain is another one of those technologies that has great science-fiction qualities to it, but when you actually start thinking about how you're going to deploy it, there are very practical realities associated with what it means to build an application on top of a blockchain datastore. Ultimately, our expectation is that blockchain will be an important technology, but it's going to take a number of years for knowledge to diffuse about what blockchain actually is suitable for and what it's not suitable for, and this question of GDPR and blockchain interactions is going to be an important catalyst to having some of those conversations. Once again, Neil, Jim, thank you very much for participating in Action Item today. >> My pleasure. >> I'm Peter Burris, and you've been once again listening to a Wikibon Action Item. Until we talk again.
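Jim's "delete the keys" question, whether destroying an encryption key can stand in for erasure, is usually called crypto-shredding. Below is a minimal, illustrative Python sketch of the on-chain/off-chain split the panel describes; it is not any particular blockchain's API, and every class, store, and function name here is invented for illustration. The pattern: an append-only, hash-chained ledger holds only a salted digest and non-personal metadata, while the PII and the per-subject secrets live off-chain where they can be deleted.

```python
import hashlib
import json
import os

class AppendOnlyLedger:
    """Toy hash-chained ledger: records can be appended and read, never updated or deleted."""
    def __init__(self):
        self.blocks = []

    def append(self, payload: dict) -> int:
        prev_hash = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        body = json.dumps(payload, sort_keys=True)
        block_hash = hashlib.sha256((prev_hash + body).encode()).hexdigest()
        self.blocks.append({"payload": payload, "prev": prev_hash, "hash": block_hash})
        return len(self.blocks) - 1

# Off-chain stores: PII sits in an ordinary, deletable store; secrets sit in a key store.
pii_store = {}   # subject_id -> raw PII (can be deleted to honor a GDPR request)
key_store = {}   # subject_id -> per-subject secret used to salt the on-chain digest

ledger = AppendOnlyLedger()

def record_transaction(subject_id: str, pii: dict, metadata: dict) -> int:
    """Keep PII off-chain; anchor only a salted digest plus non-personal metadata on-chain."""
    pii_store[subject_id] = pii
    key = key_store.setdefault(subject_id, os.urandom(32))
    digest = hashlib.sha256(key + json.dumps(pii, sort_keys=True).encode()).hexdigest()
    return ledger.append({"subject_ref": digest, "meta": metadata})

def forget_subject(subject_id: str) -> None:
    """'Right to be forgotten': delete the off-chain PII and the per-subject secret.
    The immutable on-chain digest remains, but it can no longer be linked back."""
    pii_store.pop(subject_id, None)
    key_store.pop(subject_id, None)   # crypto-shredding of the linkage

idx = record_transaction("subject-42", {"name": "Jane Doe"}, {"event": "purchase", "sku": "A-100"})
forget_subject("subject-42")
print(ledger.blocks[idx]["payload"])  # only a digest and non-personal metadata remain on-chain
```

Whether regulators would accept key destruction or digest-only anchoring as erasure is exactly the gray area Jim flags; the sketch shows the design pattern the panel describes, not a compliance guarantee.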
Wikibon Action Item, Quick Take | Neil Raden, 5/4/2018
Hi, I'm Peter Burris. Welcome to a Wikibon Action Item Quick Take. Neil Raden, Teradata announced earnings this week. What does it tell us about Teradata and the overall market for analytics? >> Well, Teradata announced their first-quarter earnings, and they beat estimates for both earnings and revenues, but they announced lower guidance for the fiscal year, which I guess failed to impress Wall Street. Recurring Q1 revenue was up 11% year on year to three hundred and two million dollars, but perpetual revenue was down 23% from Q1 of '17, and consulting was up to 135 million for the quarter. Not altogether shabby for a company in transition, but I think what it shows is that Teradata is executing its transition program. There are some pluses and minuses, and the jury's out, but overall I'd consider it a good quarter. >> What does it tell us about the market? Is there anything we can glean from Teradata's results about the market overall, Neil? >> It's hard to say. At the ATW conference last week I listened to the keynote from Mike Ferguson. I've known Mike for years, and I always think Mike's the real deal, because he spends all of his time doing consulting, and when he speaks he's there to tell us what's happening. He gave a great presentation about data warehouse versus data lake, and if he's correct, there is still a market for a company like Teradata. So we'll just have to see. >> Excellent. Neil Raden, thanks very much. This has been a Wikibon Action Item Quick Take. Talk to you again.
Wikibon Action Item | The Roadmap to Automation | April 27, 2018
>> Hi, I'm Peter Burris and welcome to another Wikibon Action Item. (upbeat digital music) >> Cameraman: Three, two, one. >> Hi. Once again, we're broadcasting from our beautiful Palo Alto studios, theCUBE studios, and this week we've got another great group. David Floyer in the studio with me along with George Gilbert. And on the phone we've got Jim Kobielus and Ralph Finos. Hey, guys. >> Hi there. >> So we're going to talk about something that's going to become a big issue. It's only now starting to emerge. And that is, what will be the roadmap to automation? Automation is going to be absolutely crucial for the success of IT in the future and the success of any digital business. At its core, many people have presumed that automation was about reducing labor. So introducing software and other technologies, we would effectively be able to substitute for administrative, operator, and related labor. And while that is absolutely a feature of what we're talking about, the bigger issue is ultimately is that we cannot conceive of more complex workloads that are capable of providing better customer experience, superior operations, all the other things a digital business ultimately wants to achieve. If we don't have a capability for simplifying how those underlying resources get put together, configured, or organized, orchestrated, and ultimately sustained delivery of. So the other part of automation is to allow for much more work that can be performed on the same resources much faster. It's a basis for how we think about plasticity and the ability to reconfigure resources very quickly. Now, the challenge is this industry, the IT industry has always used standards as a weapon. We use standards as a basis of creating eco systems or scale, or mass for even something as, like mainframes. Where there weren't hundreds of millions of potential users. But IBM was successful at using that as a basis for driving their costs down and approving a superior product. That's clearly what Microsoft and Intel did many years ago, was achieve that kind of scale through the driving more, and more, and more, ultimately, volume of the technology, and they won. But along the way though, each time, each generation has featured a significant amount of competition at how those interfaces came together and how they worked. And this is going to be the mother of all standard-oriented competition. How does one automation framework and another automation framework fit together? One being able to create value in a way that serves another automation framework, but ultimately as a, for many companies, a way of creating more scale onto their platform. More volume onto that platform. So this notion of how automation is going to evolve is going to be crucially important. David Floyer, are APIs going to be enough to solve this problem? >> No. That's a short answer to that. This is a very complex problem, and I think it's worthwhile spending a minute just on what are the component parts that need to be brought together. We're going to have a multi-cloud environment. Multiple private clouds, multiple public clouds, and they've got to work together in some way. And the automation is about, and you've got the Edge as well. So you've got a huge amount of data all across all of these different areas. And automation and orchestration across that, are as you said, not just about efficiency, they're about making it work. Making it able to be, to work and to be available. 
So all of the issues of availability, of security, of compliance, all of these difficult issues are a subject to getting this whole environment to be able to work together through a set of APIs, yes, but a lot lot more than that. And in particular, when you think about it, to me, volume of data is critical. Is who has access to that data. >> Peter: Now, why is that? >> Because if you're dealing with AI and you're dealing with any form of automation like this, the more data you have, the better your models are. And if you can increase that amount of data, as Google show every day, you will maintain that handle on all that control over that area. >> So you said something really important, because the implied assumption, and obviously, it's a major feature of what's going on, is that we've been talking about doing more automation for a long time. But what's different this time is the availability of AI and machine learning, for example, >> Right. as a basis for recognizing patterns, taking remedial action or taking predictive action to avoid the need for remedial action. And it's the availability of that data that's going to improve the quality of those models. >> Yes. Now, George, you've done a lot of work around this a whole notion of ML for ITOM. What are the kind of different approaches? If there's two ways that we're looking at it right now, what are the two ways? >> So there are two ends of the extreme. One is I want to see end to end what's going on across my private cloud or clouds. As well as if I have different applications in different public clouds. But that's very difficult. You get end-to-end visibility but you have to relax a lot of assumptions about what's where. >> And that's called the-- >> Breadth first. So the pro is end-to-end visibility. Con is you don't know how all the pieces fit together quite as well, so you get less fidelity in terms of diagnosing root causes. >> So you're trying to optimize at a macro level while recognizing that you can't optimize at a micro level. >> Right. Now the other approach, the other end of the spectrum, is depth first. Where you constrain the set of workloads and services that you're building and that you know about, and how they fit together. And then the models, based on the data you collect there, can become so rich that you have very very high fidelity root cause determination which allows you to do very precise recommendations or even automated remediation. What we haven't figured out hot to do yet is marry the depth first with the breadth first. So that you have multiple focus depth first. That's very tricky. >> Now, if you think about how the industry has evolved, we wrote some stuff about what we call, what I call the iron triangle. Which is basically a very tight relationship between specialists in technology. So the people who were responsible for a particular asset, be it storage, or the system, or the network. The vendors, who provided a lot of the knowledge about how that worked, and therefore made that specialist more or less successful and competent. And then the automation technology that that vendor ultimately provided. Now, that was not automation technology that was associated with AI or anything along those lines. It was kind of out of the box, buy our tool, and this is how you're going to automate various workflows or scripts, or whatever else it might be. And every effort to try to break that has been met with screaming because, well, you're now breaking my automation routines. 
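As a rough illustration of the trade-off George describes, here is a small, hypothetical Python sketch; the service names, metrics, and thresholds are invented, and it is not any vendor's product. A breadth-first check applies one generic rule across every service and flags everything that looks unusual, while a depth-first check uses a per-service baseline plus a known dependency topology to point at a likely root cause.

```python
from statistics import mean, stdev

# Hypothetical latency samples (ms) per service, plus a known dependency topology.
metrics = {
    "web":      [110, 120, 115, 118, 400],   # symptom visible at the edge
    "checkout": [90, 95, 92, 96, 300],
    "database": [20, 21, 19, 22, 250],       # actual culprit
}
depends_on = {"web": ["checkout"], "checkout": ["database"], "database": []}

def zscore(series):
    # How far the latest sample sits from the service's own baseline.
    return (series[-1] - mean(series[:-1])) / stdev(series[:-1])

# Breadth-first: one generic rule over everything -> flags every anomalous service,
# with no sense of how the pieces fit together.
breadth_alerts = [svc for svc, series in metrics.items() if zscore(series) > 3]

# Depth-first: walk the modeled topology from the symptom to the deepest anomalous
# dependency, i.e. a likely root cause, enabling precise remediation.
def root_cause(svc):
    for dep in depends_on[svc]:
        if zscore(metrics[dep]) > 3:
            return root_cause(dep)
    return svc

print("breadth-first alerts:", breadth_alerts)        # ['web', 'checkout', 'database']
print("depth-first root cause:", root_cause("web"))   # 'database'
```

The point is not the toy statistics but the shape of the trade-off: the generic rule sees everything and explains little, while the constrained model explains a great deal about the few services it actually knows.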
So the depth-first approach, even without ML, has been the way that we've done it historically. But, David, you're talking about something different. It's the availability of the data that starts to change that. >> Yeah. >> So are we going to start seeing new compacts put in place between users and vendors and OEMs and a lot of these other folks? And it sounds like it's going to be about access to the data. >> Absolutely. So you're going to start. let's start at the bottom. You've got people who have a particular component, whatever that component is. It might be storage. It might be networking. Whatever that component is. They have products in that area which will be collecting data. And they will need for their particular area to provide a degree of automation. A degree of capability. And they need to do two things. They need to do that optimization and also provide data to other people. So they have to have an OEM agreement not just for the equipment that they provide, but for the data that they're going to give and the data they're going to give back. The automatization of the data, for example, going up and the availability of data to help themselves. >> So contracts effectively mean that you're going to have to negotiate value capture on the data side as well as the revenue side. >> Absolutely. >> The ability to do contracting historically has been around individual products. And so we're pretty good at that. So we can say, you will buy this product. I'm delivering you the value. And then the utility of that product is up to you. When we start going to service contracts, we get a little bit different kind of an arrangement. Now, it's an ongoing continuous delivery. But for the most part, a lot of those service contracts have been predicated to known in advance classes of functions, like Salesforce, for example. Or the SASS business where you're able to write a contract that says over time you will have access to this service. When we start talking about some of this automation though, now we're talking about ongoing, but highly bespoke, and potentially highly divergent, over a relatively short period of time, that you have a hard time writing contracts that will prescribe the range of behaviors and the promise about how those behaviors are actually going to perform. I don't think we're there yet. What do you guys think? >> Well, >> No, no way. I mean, >> Especially when you think about realtime. (laughing) >> Yeah. It has to be realtime to get to the end point of automating the actual reply than the actual action that you take. That's where you have to get to. You can't, It won't be sufficient in realtime. I think it's a very interesting area, this contracts area. If you think about solutions for it, I would be going straight towards blockchain type architectures and dynamic blockchain contracts that would have to be put in place. >> Peter: But they're not realtime. >> The contracts aren't realtime. The contracts will never be realtime, but the >> Accessed? access to the data and the understanding of what data is required. Those will be realtime. >> Well, we'll see. I mean, the theorem's what? Every 12 seconds? >> Well. That's >> Everything gets updated? >> That's To me, that's good enough. >> Okay. >> That's realtime enough. It's not going to solve the problem of somebody >> Peter: It's not going to solve the problem at the edge. >> At the very edge, but it's certainly sufficient to solve the problem of contracts. >> Okay. 
>> But, and I would add to that and say, in addition to having all this data available. Let's go back like 10, 20 years and look at Cisco. A lot of their differentiation and what entrenched them was sort of universal familiarity with their admin interfaces and they might not expose APIs in a way that would make it common across their competitors. But if you had data from them and a constrained number of other providers for around which you would build let's say, these modern big data applications. It's if you constrain the problem, you can get to the depth first. >> Yeah, but Cisco is a great example of it's an archetype for what I said earlier, that notion of an iron triangle. You had Cisco admins >> Yeah. that were certified to run Cisco gear and therefore had a strong incentive to ensure that more Cisco gear was purchased utilizing a Cisco command line interface that did incorporate a fair amount of automation for that Cisco gear and it was almost impossible for a lot of companies to penetrate that tight arrangement between the Cisco admin that was certified, the Cisco gear, and the COI. >> And the exact same thing happened with Oracle. The Oracle admin skillset was pervasive within large >> Peter: Happened with everybody. >> Yes, absolutely >> But, >> Peter: The only reason it didn't happen in the IBM mainframe, David, was because of a >> It did happen, yeah, >> Well, but it did happen, but governments stepped in and said, this violates antitrust. And IBM was forced by law, by court decree, to open up those interfaces. >> Yes. That's true. >> But are we going to see the same type of thing >> I think it's very interesting to see the shape of this market. When we look a little bit ahead. People like Amazon are going to have IAS, they're going to be running applications. They are going to go for the depth way of doing things across, or what which way around is it? >> Peter: The breadth. They're going to be end to end. >> But they will go depth in individual-- >> Components. Or show of, but they will put together their own type of things for their services. >> Right. >> Equally, other players like Dell, for example, have a lot of different products. A lot of different components in a lot of different areas. They have to go piece by piece and put together a consortium of suppliers to them. Storage suppliers, chip suppliers, and put together that outside and it's going to have to be a different type of solution that they put together. HP will have the same issue there. And as of people like CA, for example, who we'll see an opportunity for them to be come in again with great products and overlooking the whole of all of this data coming in. >> Peter: Oh, sure. Absolutely. >> So there's a lot of players who could be in this area. Microsoft, I missed out, of course they will have the two ends that they can combine together. >> Well, they may have an advantage that nobody else has-- >> Exactly. Yeah. because they're strong in both places. But I have Jim Kobielus. Let me check, are you there now? Do we got Jim back? >> Can you hear me? >> Peter: I can barely hear you, Jim. Could we bring Jim's volume up a little bit? So, Jim, I asked the question earlier, about we have the tooling for AI. We know how to get data. How to build models and how to apply the models in a broad brush way. And we're certainly starting to see that happen within the IT operations management world. 
The ITOM world, but we don't yet know how we're going to write these contracts that are capable of better anticipating, putting in place a regime that really describes how the, what are the limits of data sharing? What are the limits of derivative use? Et cetera. I argued, and here in the studio we generally agreed, that's we still haven't figured that out and that this is going to be one of the places where the tension between, at least in the B2B world, data availability and derivative use and where you capture value and where those profitables go, is going to be significant. But I want to get your take. Has the AI community >> Yeah. started figuring out how we're going to contractually handle obligations around data, data use, data sharing, data derivative use. >> The short answer is, no they have not. The longer answer is, that can you hear me, first of all? >> Peter: Barely. >> Okay. Should I keep talking? >> Yeah. Go ahead. >> Okay. The short answer is, no that the AI community has not addressed those, those IP protection issues. But there is a growing push in the AI community to leverage blockchain for such requirements in terms of block chains to store smart contracts where related to downstream utilization of data and derivative models. But that's extraordinarily early on in its development in terms of insight in the AI community and in the blockchain community as well. In other words, in fact, in one of the posts that I'm working on right now, is looking at a company called 8base that's actually using blockchain to store all of those assets, those artifacts for the development and lifecycle along with the smart contracts to drive those downstream uses. So what I'm saying is that there's lots of smart people like yourselves are thinking about these problems, but there's no consensus, definitely, in the AI community for how to manage all those rights downstream. >> All right. So very quickly, Ralph Finos, if you're there. I want to get your perspective >> Yeah. on what this means from markets, market leadership. What do you think? How's this going to impact who are the leaders, who's likely to continue to grow and gain even more strength? What're your thoughts on this? >> Yeah. I think, my perspective on this thing in the near term is to focus on simplification. And to focus on depth, because you can get return, you can get payback for that kind of work and it simplifies the overall picture so when you're going broad, you've got less of a problem to deal with. To link all these things together. So I'm going to go with the Shaker kind of perspective on the world is to make things simple. And to focus there. And I think the complexity of what we're talking about for breadth is too difficult to handle at this point in time. I don't see it happening any time in the near future. >> Although there are some companies, like Splunk, for example, that are doing a decent job of presenting a more of a breadth approach, but they're not going deep into the various elements. So, George, really quick. Let's talk to you. >> I beg to disagree on that one. >> Peter: Oh! >> They're actually, they built a platform, originally that was breadth first. They built all these, essentially, forwarders which could understand the formats of the output of all sorts of different devices and services. But then they started building what they called curated experiences which is the equivalent of what we call depth first. They're doing it for IT service management. They're doing it for what's called user behavior. 
Analytics, which is it's a way of tracking bad actors or bad devices on a network. And they're going to be pumping out more of those. What's not clear yet, is how they're going to integrate those so that IT service management understands security and vice versa. >> And I think that's one of the key things, George, is that ultimately, the real question will be or not the real question, but when we think about the roadmap, it's probably that security is going to be early on one of the things that gets addressed here. And again, it's not just security from a perimeter standpoint. Some people are calling it a software-based perimeter. Our perspective is the data's going to go everywhere and ultimately how do you sustain a zero trust world where you know your data is going to be out in the clear so what are you going to do about it? All right. So look. Let's wrap this one up. Jim Kobielus, let's give you the first Action Item. Jim, Action Item. >> Action Item. Wow. Action Item Automation is just to follow the stack of assets that drive automation and figure out your overall sharing architecture for sharing out these assets. I think the core asset will remain orchestration models. I don't think predictive models in AI are a huge piece of the overall automation pie in terms of the logic. So just focus on building out and protecting and sharing and reusing your orchestration models. Those are critically important. In any domain. End to end or in specific automation domains. >> Peter: David Floyer, Action Item. >> So my Action Item is to acknowledge that the world of building your own automation yourself around a whole lot of piece parts that you put together are over. You won't have access to a sufficient data. So enterprises must take a broad view of getting data, of getting components that have data be giving them data. Make contracts with people to give them data, masking or whatever it is and become part of a broader scheme that will allow them to meet the automation requirements of the 21st century. >> Ralph Finos, Action Item. >> Yeah. Again, I would reiterate the importance of keeping it simple. Taking care of the depth questions and moving forward from there. The complexity is enormous, and-- >> Peter: George Gilbert, Action Item. >> I say, start with what customers always start with with a new technology, which is a constrained environment like a pilot and there's two areas that are potentially high return. One is big data, where it's been a multi vendor or multi-vendor component mix, and a mess. And so you take that and you constrain that and make that a depth-first approach in the cloud where there is data to manage that. And the second one is security, where we have now a more and more trained applications just for that. I say, don't start with a platform. Start with those solutions and then start adding more solutions around that. >> All right. Great. So here's our overall Action Item. The question of automation or roadmap to automation is crucial for multiple reasons. But one of the most important ones is it's inconceivable to us to envision how a business can institute even more complex applications if we don't have a way of improving the degree of automation on the underlying infrastructure. How this is going to play out, we're not exactly sure. But we do think that there are a few principals that are going to be important that users have to focus on. Number one is data. 
Be very clear that there is value in your data, both to you as well as to your suppliers and as you think about writing contracts, don't write contracts that are focused on a product now. Focus on even that product as a service over time where you are sharing data back and forth in addition to getting some return out of whatever assets you've put in place. And make sure that the negotiations specifically acknowledge the value of that data to your suppliers as well. Number two, that there is certainly going to be a scale here. There's certainly going to be a volume question here. And as we think about where a lot of the new approaches to doing these or this notion of automation, is going to come out of the cloud vendors. Once again, the cloud vendors are articulating what the overall model is going to look like. What that cloud experience is going to look like. And it's going to be a challenge to other suppliers who are providing an on-premises true private cloud and Edge orientation where the data must live sometimes it is not something that they just want to do because they want to do it. Because that data requires it to be able to reflect that cloud operating model. And expect, ultimately, that your suppliers also are going to have to have very clear contractual relationships with the cloud players and each other for how that data gets shared. Ultimately, however, we think it's crucially important that any CIO recognized that the existing environment that they have right now is not converged. The existing environment today remains operators, suppliers of technology, and suppliers of automation capabilities and breaking that up is going to be crucial. Not only to achieving automation objectives, but to achieve a converged infrastructure, hyper converged infrastructure, multi-cloud arrangements, including private cloud, true private cloud, and the cloud itself. And this is going to be a management challenge, goes way beyond just products and technology, to actually incorporating how you think about your shopping, organized, how you institutionalize the work that the business requires, and therefore what you identify as a tasks that will be first to be automated. Our expectation, security's going to be early on. Why? Because your CEO and your board of directors are going to demand it. So think about how automation can be improved and enhanced through a security lens, but do so in a way that ensures that over time you can bring new capabilities on with a depth-first approach at least, to the breadth that you need within your shop and within your business, your digital business, to achieve the success and the results that you want. Okay. Once again, I want to thank David Floyer and George Gilbert here in the studio with us. On the phone, Ralph Finos and Jim Kobielus. Couldn't get Neil Raiden in today, sorry Neil. And I am Peter Burris, and this has been an Action Item. Talk to you again soon. (upbeat digital music)
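Jim's closing point, that orchestration models are the core reusable asset, can be made concrete with a small, hypothetical sketch. The schema below is made up for illustration and is not any orchestration tool's actual format; the idea is simply that the remediation logic is expressed as declarative data, so it can be versioned, shared across teams and suppliers, and executed by a thin runner rather than buried in ad hoc scripts.

```python
# A made-up declarative schema for a reusable remediation playbook.
RESTART_UNHEALTHY_SERVICE = {
    "name": "restart-unhealthy-service",
    "version": "1.2.0",
    "trigger": {"metric": "http_5xx_rate", "threshold": 0.05},
    "steps": [
        {"action": "snapshot_logs", "params": {"window_minutes": 15}},
        {"action": "restart", "params": {"max_attempts": 2}},
        {"action": "notify", "params": {"channel": "ops-oncall"}},
    ],
}

# A thin runner: the valuable, shareable asset is the model above, not this code.
ACTIONS = {
    "snapshot_logs": lambda svc, p: print(f"[{svc}] snapshot last {p['window_minutes']}m of logs"),
    "restart":       lambda svc, p: print(f"[{svc}] restart (<= {p['max_attempts']} attempts)"),
    "notify":        lambda svc, p: print(f"[{svc}] page {p['channel']}"),
}

def run_playbook(playbook, service, observed_value):
    if observed_value < playbook["trigger"]["threshold"]:
        return
    for step in playbook["steps"]:
        ACTIONS[step["action"]](service, step["params"])

run_playbook(RESTART_UNHEALTHY_SERVICE, "checkout-api", observed_value=0.12)
```

Treating the playbook as data is what makes it shareable, auditable, and versionable in the way Jim suggests; the runner itself is disposable.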
Action Item, Graph DataBases | April 13, 2018
>> Hi, I'm Peter Burris. Welcome to Wikibon's Action Item. (electronic music) Once again, we're broadcasting from our beautiful theCUBE Studios in Palo Alto, California. Here in the studio with me, George Gilbert, and remote, we have Neil Raden, Jim Kobielus, and David Floyer. Welcome, guys! >> Hey. >> Hi, there. >> We've got a really interesting topic today. We're going to be talking about graph databases, which probably just immediately turned off everybody. But we're actually not going to talk so much about it from a technology standpoint. We're really going to spend most of our time talking about it from the standpoint of the business problems that IT and technology are being asked to address, and the degree to which graph databases, in fact, can help us address those problems, and what do we need to do to actually address them. Human beings tend to think in terms of relationships of things to each other. So what the graph community talks about is graphed-shaped problems. And by graph-shaped problem we might mean that someone owns something and someone owns something else, or someone shares an asset, or it could be any number of different things. But we tend to think in terms of things and the relationship that those things have to other things. Now, the relational model has been an extremely successful way of representing data for a lot of different applications over the course of the last 30 years, and it's not likely to go away. But the question is, do these graph-shaped problems actually lend themselves to a new technology that can work with relational technology to accelerate the rate at which we can address new problems, accelerate the performance of those new problems, and ensure the flexibility and plasticity that we need within the application set, so that we can consistently use this as a basis for going out and extending the quality of our applications as we take on even more complex problems in the future. So let's start here. Jim Kobielus, when we think about graph databases, give us a little hint on the technology and where we are today. >> Yeah, well, graph databases have been around for quite a while in various forms, addressing various core-use cases such as social network analysis, recommendation engines, fraud detection, semantic search, and so on. The graph database technology is essentially very closely related to relational, but it's specialized to, when you think about it, Peter, the very heart of a graph-shaped business problem, the entity relationship polygram. And anybody who's studied databases has mastered, at least at a high level, entity relationship diagrams. The more complex these relationships grow among a growing range of entities, the more complex sort of the network structure becomes, in terms of linking them together at a logical level. So graph database technology was developed a while back to be able to support very complex graphs of entities, and relationships, in order to do, a lot of it's analytic. A lot of it's very focused on fast query, they call query traversal, among very large graphs, to find quick answers to questions that might involve who owns which products that they bought at which stores in which cities and are serviced by which support contractors and have which connections or interrelationships with other products they may have bought from us and our partners, so forth and so on. When you have very complex questions of this sort, they lend themselves to graph modeling. 
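Jim's example question, who owns which products, bought at which stores, serviced by which contractors, is what graph people call a traversal. Here is a minimal, illustrative sketch using the networkx Python library as a stand-in for a real graph database; the entities and relationship names are invented, and the point is only to show how a multi-hop, relationship-first question is expressed.

```python
import networkx as nx

# Entities are nodes; relationships are typed, directed edges.
g = nx.DiGraph()
g.add_edge("Alice", "Laptop-123", rel="OWNS")
g.add_edge("Alice", "Phone-456", rel="OWNS")
g.add_edge("Laptop-123", "Store-SF", rel="BOUGHT_AT")
g.add_edge("Phone-456", "Store-NY", rel="BOUGHT_AT")
g.add_edge("Store-SF", "AcmeSupport", rel="SERVICED_BY")
g.add_edge("Store-NY", "AcmeSupport", rel="SERVICED_BY")

def neighbors_by_rel(node, rel):
    return [dst for _, dst, data in g.out_edges(node, data=True) if data["rel"] == rel]

# "Which support contractors touch products that Alice owns?" -- a three-hop traversal
# that follows relationships directly rather than joining tables on foreign keys.
contractors = {
    contractor
    for product in neighbors_by_rel("Alice", "OWNS")
    for store in neighbors_by_rel(product, "BOUGHT_AT")
    for contractor in neighbors_by_rel(store, "SERVICED_BY")
}
print(contractors)  # {'AcmeSupport'}
```

In a dedicated graph store this whole question would typically be a single declarative pattern (for example, a Cypher-style MATCH), which is the fast "query traversal" Jim refers to.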
And to some degree, to the extent that you need to perform very complex queries of this sort very rapidly, graph databases, and there's a wide range of those on the market, have been optimized for that. But we also have graph abstraction layers over RDBMSes and multi-model databases. You'll find them running in IBM's databases, or Microsoft Cosmos DB, and so forth. You don't need graph-specialized databases in order to do graph queries, in order to manipulate graphs. That's the issue here. When does a specialized graph database serve your needs better than a non-graph-optimized but nonetheless graph-enabling database? That's the core question. >> So, Neil Raden, let's talk a little bit about the classes of business problems that could in fact be served by representing data utilizing a graph model. So these graph-shaped problems, independent of the underlying technology. Let's start there. What kinds of problems can business people start thinking about solving by thinking in terms of graphs of things and relationships amongst things? >> It all comes down to connectedness. That's the basis of a graph database, is how things are connected, either weakly or strongly. And these connected relationships can be very complicated. They can be based on very complex properties. A relational database is not based on connectedness at all. I'd like to say it's based on un-connectedness. And the whole idea in a relational database is that the intelligence about connectedness is buried in the predicate of a query. It's not in the database itself. So I don't know that overlaying graph abstractions on top of a relational database is a good idea. On the other hand, I don't know how stitching a graph database into your existing operation is going to work, either. We're going to have to see. But I can tell you that a major part of data science, machine learning, and AI is going to need to address the issue of causality, not just what's related to each other. And there's a lot of science behind using graphs to get at the causality problem. >> And we've seen, well, let's come back to that. I want to come back to that. But George Gilbert, we've kind of experienced a similar type of thing back in the '90s with the whole concept of object-oriented databases. They were represented as a way of re-conceiving data. The problem was that they had to go from the concept all the way down to the physical thing, and they didn't seem to work. What happened? >> Well it turns out, the big argument was, with object-oriented databases, we can model anything that's so much richer, especially since we're programming with objects. And it turns out, though, that theoretically, especially at that time, you could model anything down at the physical level or even the logical level in a relational database, and so those code bases were able to handle sort of similar, both ends of the use cases, both ends of the spectrum. But now that we have such extreme demands on our data management, rather than look at a whole application or multiple applications even sharing a single relational database, like some of the big enterprise apps, we have workloads within apps like recommendation engines, or a knowledge graph, which explains the relationship between people, places, and things. Or digital twins, or mapping your IT infrastructure and applications, and how they all hold together.
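Neil's point that, in a relational system, the connectedness lives in the query predicate rather than in the data, and Jim's point that a graph layer can sit on top of a non-graph store, can both be sketched in a few lines. The tables, the adjacency index, and the co-ownership question below are invented; this illustrates the trade-off, not any particular vendor's graph layer.

```python
# Row-oriented tables, the way a relational store would hold the data.
people   = [{"id": 1, "name": "Ann"}, {"id": 2, "name": "Bo"}, {"id": 3, "name": "Cy"}]
accounts = [{"id": 10, "branch": "Palo Alto"}, {"id": 11, "branch": "Austin"}]
owns     = [{"person": 1, "account": 10}, {"person": 2, "account": 10},
            {"person": 3, "account": 11}]

# Relational style: the connectedness is expressed in the join predicate
# of each individual query.
def co_owners_sql_style(person_id):
    shared = {o["account"] for o in owns if o["person"] == person_id}
    return {o["person"] for o in owns
            if o["account"] in shared and o["person"] != person_id}

# Graph-layer style: build an adjacency index once, then questions become
# neighborhood walks instead of hand-written joins.
from collections import defaultdict

adjacency = defaultdict(set)
for o in owns:
    adjacency[("person", o["person"])].add(("account", o["account"]))
    adjacency[("account", o["account"])].add(("person", o["person"]))

def co_owners_graph_style(person_id):
    out = set()
    for acct in adjacency[("person", person_id)]:
        out |= {pid for (kind, pid) in adjacency[acct] if pid != person_id}
    return out

print(co_owners_sql_style(1))    # {2}
print(co_owners_graph_style(1))  # {2}
```

Both forms answer the same question; the difference is whether each new question needs another hand-written join or can reuse the neighborhood structure, which is roughly the line between graph-enabling and graph-optimized databases.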
You could do that in a relational database, but in a graph database, you can organize it so that you can have really fast analysis of these structures. But, the trade-off is, you're going to be much more restricted in how you can update the stuff. >> Alright, so we think about what happened, then, with some of the object-oriented technology. In the original object database world, the database was bound to the application, and the developer used the database to tell the application where to go find the data. >> George: Right. >> Relational technology allowed us not to tell the applications where to find things, but rather how to find things, and that was persisted, and was very successful for a long time. Object-oriented technologies, in many respects, went back to the idea that the developer had to be very concrete about telling the application where the data was, but we didn't want to do things that way. Now, something's happened, David Floyer. One of the reasons why we had this challenge of representing data in a more abstract way across a lot of different forms without having it also being represented physically, and therefore a lot of different copies and a lot of different representations of the data which broke systems of record and everything else, was that the underlying technology was focused on just persisting data and not necessarily delivering it into these new types of data stores, databases, data models, et cetera. But Flash changes that, doesn't it? Can't we imagine a world in which we can have our data in Flash, which is a technology that's more focused on delivering data, and then having that data be delivered to a lot of different representations, including things like graph databases, graph models. Is that accurate? >> Absolutely. In a moment I'll take it even further. I think the first point is that when we were designing real-time applications, transactional applications, we were very constrained, indeed, by the amount of data that we could get to. So, as a database administrator, I used to have a rule that database developers could not issue more than 100 database calls. And the reason was that they could always do more than that, but the applications became very unstable and they became very difficult to maintain. The cost of maintenance went up a lot. The whole area of Flash allows us to do a number of things, and the area of UniGrid enables us to do a number of things very differently. So that we can, for example, share data and have many different views of it. We can use UniGrid to be able to bring far greater amounts of power, compute power, GPUs, et cetera, to bear on specific workloads. I think the most useful thing to think about this is this type of architecture can be used to create systems of intelligence, where you have the traditional relational databases dealing with systems of record, and then you can have the AI systems, graph systems, all the other components there looking at the best way of providing data and making decisions in real time that can be fed back into the systems of record. >> Alright, alright. So let's-- >> George: I want to add something on this. >> So, Neil, let me come back to you very quickly, sorry, George. Let me come back to Neil. I want to go back to this question of what does a graph-shaped problem look like. Let's kind of run down it. We talked about AI, what about IoT, guys? Is IoT going to help us, is IoT going to drive this notion of looking at the world in terms of graphs more or less?
What do you think, Neil? >> I don't know. I hadn't really thought about it, Peter, to tell you the truth. I think that one thing we leave out when we talk about graphs is we talk about, you know, nodes and edges and relationships and so forth, but you can also build a graph with very rich properties. And one thing you can get from a graph query that you can't get from a relational query, unless you write careful predicate, is it can actually do some thinking for you. It can tell you something you don't know. And I think that's important. So, without being too specific about IoT, I have to say that, you know, streaming data and trying to relate it to other data, getting down to, very quickly, what's going on, root-cause analysis, I think graph would be very helpful. >> Great, and, Jim Kobielus, how about you? >> I think, yeah I think that IoT is tailor-made for, or I should say, graph modeling and graph databases are tailor-made for the IoT. Let me explain. I think the IoT, the graph is very much a metadata technology, it's expressing context in a connected universe. Where the IoT is concerned it's all about connectivity, and so graphs, increasingly complex graphs of, say, individuals and the devices and the apps they use and locations and various contexts and so forth, these are increasingly graph-based. They're hierarchical and shifting and changing, and so in order to contextualize and personalize experience in a graph, in an IoT world, I think graph databases will be embedded in the very fabric of these environments. Microsoft has a strategy they announced about a year ago to build more of an intelligent edge around, a distributed graph across all their offerings. So I think graphs will become more important in this era, undoubtedly. >> George, what do you think? Business problems? >> Business problems on IoT. The knowledge graph that holds together digital twin, both of these lend themselves to graph modeling, but to use the object-oriented databases as an example, where object modeling took off was in the applications server, where you had the ability to program, in object-oriented language, and that mapped to a relational database. And that is an option, not the only one, but it's an option for handling graph-model data like a digital twin or IT operations. >> Well that suggests that what we're thinking about here, if we talk about graph as a metadata, and I think, Neil, this partly answers the question that you had about why would anybody want to do this, that we're representing the output of a relational data as a node in a network of data types or data forms so that the data itself may still be relationally structured, but from an application standpoint, the output of that query is, itself, a thing that is then used within the application. >> But to expand on that, if you store it underneath, as fully normalized, in relational language, laid out so that there's no duplicates and things like that, it gives you much faster update performance, but the really complex queries, typical of graph data models, would be very, very slow. So, once we have, say, more in memory technology, or we can manage under the covers the sort of multiple representations of the data-- >> Well that's what Flash is going to allow us to do. >> Okay. >> What David Floyer just talked about. >> George: Okay. >> So we can have a single, persistent, physical storage >> Yeah. >> but it can be represented in a lot of different ways so that we avoid some of the problems that you're starting to raise. 
If we had to copy the data and have physical, physical copies of the data on disc in a lot of different places then we would run into all kinds of consistency and update. It would probably break the model. We'd probably come back to the notion of a single data store. >> George: (mumbles) >> I want to move on here, guys. One really quick thing, David Floyer, I want to ask you. If there's, you mentioned when you were database administrator and you put restrictions on how many database actions an application or transaction was allowed to generate. When we think about what a business is going to have to do to take advantage of this, are there any particular, like one thing that we need to think about? What's going to change within an IT organization to take advantage of graph database? And we'll do the action items. >> Right. So the key here is the number of database calls can grow by a factor of probably a thousand times what it is now with what we can see is coming as technologies over the next couple of years. >> So let me put that in context, David. That's a single transaction now generating a hundred thousand, >> Correct. >> a hundred thousand database calls. >> Well, access calls to data. >> Right. >> Whatever type of database. And the important thing here is that a lot of that is going to move out, with the discussion of IoT, to where the data is coming in. Because the quicker you can do that, the earlier you can analyze that data, and you talked about IoT with possible different sources coming in, a simple one like traffic lights, for example. The traffic lights are being affected by the traffic lights around them within the city. Those sort of problems are ideal for this sort of graph database. And having all of that data locally and being processed locally in memory very, very close to where those sensors are, is going to be the key to developing solutions in this area. >> So, Neil, I've got one question from you, or one question for you. I'm going to put you on the spot. I just had a thought. And here's the thought. We talk a lot about, in some of the new technologies that could in fact be employed here, whether it be blockchain or even going back to SOA, but when we talk about what a system is going to have the authority to do about the idea of writing contracts that describe very, very discretely, what a system is or is not going to do. I have a feeling those contracts are not going to be written in relational terms. I have a feeling that, like most legal documents, they will be written in what looks more like graph terms. I'm extending that a little bit, but this has rights to do this at this point in time. Is that also, this notion of incorporating more contracts directly to how systems work, to assure that we have the appropriate authorities laid out. What do you think? Is that going to be easier or harder as a consequence of thinking in terms of these graph-shaped models? >> Boy, I don't know. Again, another thing I hadn't really thought about. But I do see some real gaps in thinking. Let me give you an analogy. OLAP databases came on the scene back in the '90s whatever. People in finance departments and whatever they loved OLAP. What they hated was the lack of scalability. And now what we see now is scalability isn't a problem and OLAP solutions are suddenly bursting out all over the place. So I think there's a role for a mental model of how you model your data and how you use it that's different from the relational model. 
I think the relational model has prominence and has that advantage of, what's it called? Occupancy or something. But I think that the graph is going to show some real capabilities that people are lacking right now. I think some of them are at the very high end, things, like I said, getting to causality. But I think that graph theory itself is so much richer than the simple concept of graphs that's implemented in graph databases today. >> Yeah, I agree with that totally. Okay, let's do the action item round. Jim Kobielus, I want to start with you. Jim, action item. >> Yeah, for data professionals and analytic professionals, focus on what graphs can't do, cannot do, because you hear a lot of hyperbolic, they're not useful for unstructured data or for machine learning in database. They're not as useful for schema on read. What they are useful for is the same core thing that relational is useful for which is schema on write applied to structured data. Number one. Number two, and I'll be quick on this, focus on the core use cases that are already proven out for graph databases. We've already ticked them off here, social network analysis, recommendation engines, influencer analysis, semantic web. There's a rich range of mature use cases for which semantic techniques are suited. And then finally, and I'll be very quick here, bear in mind that relational databases have been supporting graph modeling, graph traversal and so forth, for quite some time, including pretty much all the core mature enterprise databases. If you're using those databases already, and they can perform graph traversals and so forth reasonably well for your intended application, stick with that. No need to investigate the pure play, graph-optimized databases on the market. However, that said, there's plenty of good ones, including AWS is coming out with Neptune. Please explore the other alternatives, but don't feel like you have to go to a graph database first and foremost. >> Alright. David Floyer, action item. >> Action item. You are going to need to move your data center and your applications from the traditional way of thinking about it, of handling things, which is sequential copies going around, usually taking it two or three weeks. You're going to have to move towards a shared data model where the same set of data can have multiple views of it and multiple uses for multiple different types of databases. >> George Gilbert, action item. >> Okay, so when you're looking at, you have a graph-oriented problem, in other words the data is shaped like a graph, question is what type of database do you use? If you have really complex query and analysis use cases, probably best to use a graph database. If you have really complex update requirements, best to use a combination, perhaps of relational and graph or something like multi-model. We can learn from Facebook where, for years, they've built their source of truth for the social graph on a bunch of sharded MySQL databases with some layers on top. That's for analyzing the graph and doing graph searches. I'm sorry, for updating the graph and maintaining it and its integrity. But for reading the graph, they have an entirely different layer for comprehensive queries and manipulating and traversing all those relationships. So, you don't get a free lunch either way. You have to choose your sweet spots and the trade-offs associated with them. >> Alright, Neil Raden, action item. >> Well, first of all, I don't think the graph databases are subject to a lot of hype. 
I think it's just the opposite. I think they haven't gotten much hype at all. And maybe we're going to see that. But another thing is, a fundamental difference when you're looking at a graph and a graph query, it uses something called open world reasoning. A relational database uses closed world reasoning. I'll give you an example. Country has capital city. Now you have in your graph that China has capital city Beijing, and China has capital city Peking. That doesn't violate the graph. The graph simply understands and intuits that they're different names for the same thing. Now, if you love to write correlated sub-queries for many, many different relationships, I'd say stick to your relational database. I see unique capabilities in a graph that would be difficult to implement in a relational database. >> Alright. Thank you very much, guys. Let's talk about what the action item is for all of us. This week we talked about graph databases. We do believe that they have enormous potential, but we first off have to draw a distinction between graph theory, which is a way of looking at the world and envisioning and conceptualizing solutions to problems, and graph database technology, which has the advantage, for certain classes of data models, of being able to very quickly both write and read data that is based on relationships and hierarchies and network structures that are difficult to represent in a normalized relational database manager. Ultimately, our expectation is that over the next few years, we're going to see an explosion in the class of business problems that lend themselves to a graph-modeling orientation. IoT is an example, very complex analytics systems will be an example, but it is not the only approach or the only way of doing things. But what is interesting, what is especially interesting, is over the last few years, a change in the underlying hardware technology is allowing us to utilize and expand the range of tools that we might use to support these new classes of applications. Specifically, the move to Flash allows us to sustain a single physical copy of data and then have that be represented in a lot of different ways to support a lot of different model forms and a lot of different application types, without undermining the fundamental consistency and integrity of the data itself. So that is going to allow us to utilize new types of technologies in ways that we haven't utilized before, because before, whether it was object-oriented technology or OLAP technology, there was always this problem of having to create new physical copies of data which led to enormous data administrative nightmares. So looking forward, the ability to use Flash as a basis for physically storing the data and delivering it out to a lot of different model and tool forms creates an opportunity for us to use technologies that, in fact, may more naturally map to the way that human beings think about things. Now, where is this likely to really play? We talked about IoT, we talked about other types of technologies. Where it's really likely to play is when the domain expertise of a business person is really pushing the envelope on the nature of the business problem. Historically, applications like accounting or whatnot were very focused on highly stylized data models, things that didn't necessarily exist in the real world. You don't have double-entry bookkeeping running in the wild.
You do have it in the legal code, but for some of the things that we want to build in the future, people, the devices they own, where they are, how they're doing things, that lends itself to a real-world experience and human beings tend to look at those using a graph orientation. And the expectations over the next few years, because of the changes in the physical technology, how we can store data, we will be able to utilize a new set of tools that are going to allow us to more quickly bring up applications, more naturally manage data associated with those applications, and, very important, utilize targeted technology in a broader set of complex application portfolios that are appropriate to solve that particular part of the problem, whether it's a recommendation engine or something else. Alright, so, once again, I want to thank the remote guys, Jim Kobielus, Neil Raden, and David Floyer. Thank you very much for being here. George Gilbert, you're in the studio with me. And, once again, I'm Peter Burris and you've been listening to Action Item. Thank you for joining us and we'll talk to you again soon. (electronic music)
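One technical point from this discussion that rewards a closer look is Neil's open world example. Here is a toy rendering of it, assuming RDF-style triples and an explicit sameAs assertion; real semantic-web stacks express this with OWL reasoning, so the helper below is only a sketch of the idea, not how any graph engine implements it.

```python
# Two capital-city assertions that would look like a contradiction or a
# duplicate under a closed-world, unique-name reading coexist happily once
# the graph knows the two names refer to the same thing.
triples = {
    ("China",   "hasCapital", "Beijing"),
    ("China",   "hasCapital", "Peking"),
    ("Beijing", "sameAs",     "Peking"),
}

def canonical(name):
    """Collapse sameAs aliases onto one canonical label."""
    aliases = {}
    for s, p, o in triples:
        if p == "sameAs":
            aliases[o] = s
    while name in aliases:
        name = aliases[name]
    return name

capitals = {canonical(o) for s, p, o in triples
            if s == "China" and p == "hasCapital"}

print(capitals)            # {'Beijing'}  one city, two names, no violation
assert len(capitals) == 1
```

The graph answers "what is the capital of China" with a single node even though two differently named facts were loaded, which is the behavior Neil contrasts with a closed world relational query.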
Action Item | How to get more value out of your data, April 06, 2018
>> Hi I'm Peter Burris and welcome to another Wikibon Action Item. (electronic music) One of the most pressing strategic issues that businesses face is how to get more value out of their data, In our opinion that's the essence of a digital business transformation, is the using of data as an asset to improve your operations and take better advantage of market opportunities. The problem of data though, it's shareable, it's copyable, it's reusable. It's easy to create derivative value out of it. One of the biggest misnomers in the digital business world is the notion that data is the new fuel or the new oil. It's not, You can only use oil once. You can apply it to a purpose and not multiple purposes. Data you can apply to a lot of purposes, which is why you are able to get such interesting and increasing returns to that asset if you use it appropriately. Now, this becomes especially important for technology companies that are attempting to provide digital business technologies or services or other capabilities to their customers. In the consumer world, it started to reach a head. Questions about Facebook's reuse of a person's data through an ad based business model is now starting to lead people to question the degree to which the information asymmetry about what I'm giving and how they're using it is really worth the value that I get out of Facebook, is something that consumers and certainly governments are starting to talk about. it's also one of the bases for GDPR, which is going to start enforcing significant fines in the next month or so. In the B2B world that question is going to become especially acute. Why? Because as we try to add intelligence to the services and the products that we are utilizing within digital business, some of that requires a degree of, or some sort of relationship where some amount of data is passed to improve the models and machine learning and AI that are associated with that intelligence. Now, some companies have come out and said flat out they're not going to reuse a customer's data. IBM being a good example of that. When Ginni Rometty at IBM Think said, we're not going to reuse our customer's data. The question for the panel here is, is that going to be a part of a differentiating value proposition in the marketplace? Are we going to see circumstances in which companies keep products and services low by reusing a client's data and others sustaining their experience and sustaining a trust model say they won't. How is that going to play out in front of customers? So joining me today here in the studio, David Floyer. >> Hi there. >> And on the remote lines we have Neil Raden, Jim Kobielus, George Gilbert, and Ralph Finos. Hey, guys. >> All: Hey. >> All right so... Neil, let me start with you. You've been in the BI world as a user, as a consultant, for many, many number of years. Help us understand the relationship between data, assets, ownership, and strategy. >> Oh, God. Well, I don't know that I've been in the BI world. Anyway, as a consultant when we would do a project for a company, there were very clear lines of what belong to us and what belong to the client. They were paying us generously. They would allow us to come in to their company and do things that they needed and in return we treated them with respect. We wouldn't take their data. We wouldn't take their data models that we built, for example, and sell them to another company. That's just, as far as I'm concerned, that's just theft. 
So if I'm housing another company's data because I'm a cloud provider or some sort of application provider and I say well, you know, I can use this data too. To me the analogy is, I'm a warehousing company and independently I go into the warehouse and I say, you know, these guys aren't moving their inventory fast enough, I think I'll sell some of it. It just isn't right. >> I think it's a great point. Jim Kobielus. As we think about the role that data, machine learning play, training models, delivering new classes of services, we don't have a clean answer right now. So what's your thought on how this is likely to play out? >> I agree totally with Neil, first of all. If it's somebody else's data, you don't own it, therefore you can't sell and you can't monetize it, clearly. But where you have derivative assets, like machine learning models that are derivative from data, it's the same phenomena, it's the same issue at a higher level. You can build and train, or should, your machine learning models only from data that you have legal access to. You own or you have license and so forth. So as you're building these derivative assets, first and foremost, make sure as you're populating your data lake, to build and to do the training, that you have clear ownership over the data. So with GDPR and so forth, we have to be doubly triply vigilant to make sure that we're not using data that we don't have authorized ownership or access to. That is critically important. And so, I get kind of queasy when I hear some people say we use blockchain to make... the sharing of training data more distributed and federated or whatever. It's like wait a second. That doesn't solve the issues of ownership. That makes it even more problematic. If you get this massive blockchain of data coming from hither and yon, who owns what? How do you know? Do you dare build any models whatsoever from any of that data? That's a huge gray area that nobody's really addressed yet. >> Yeah well, it might mean that the blockchain has been poorly designed. I think that we talked in one of the previous Action Items about the role that blockchain design's going to play. But moving aside from the blockchain, so it seems as though we generally agree that data is owned by somebody typically and that the ownership of it, as Neil said, means that you can't intercept it at some point in time just because it is easily copied and then generate rents on it yourself. David Floyer, what does that mean from a ongoing systems design and development standpoint? How are we going to assure, as Jim said, not only that we know what data is ours but make sure that we have the right protection strategies, in a sense, in place to make sure that the data as it moves, we have some influence and control over it. >> Well, my starting point is that AI and AI infused products are fueled by data. You need that data, and Jim and Neil have already talked about that. In my opinion, the most effective way of improving a company's products, whatever the products are, from manufacturing, agriculture, financial services, is to use AI infused capabilities. That is likely to give you the best return on your money and businesses need to focus on their own products. That's the first place you are trying to protect from anybody coming in. Businesses own that data. They own the data about your products, in use by your customers, use that data to improve your products with AI infused function and use it before your competition eats your lunch. >> But let's build on that. 
So we're not saying that, for example, if you're a storage system supplier, since that's a relatively easy one. You've got very, very fast SSDs. Very, very fast NVMe over Fabric. Great technology. You can collect data about how that system is working but that doesn't give you rights to then also collect data about how the customer's using the system. >> There is a line which you need to make sure that you are covering. For example, Call Home on a product, any product, whose data is that? You need to make sure that you can use that data. You have some sort of agreement with the customer and that's a win-win because you're using that data to improve the product, prove things about it. But that's very, very clear that you should have a contractual relationship, as Jim and Neil were pointing out. You need the right to use that data. It can't come beyond the hand. But you must get it because if you don't get it, you won't be able to improve your products. >> Now, we're talking here about technology products which have often very concrete and obvious ownership and people who are specifically responsible for administering them. But when we start getting into the IoT domain or in other places where the device is infused with intelligence and it might be collecting data that's not directly associated with its purpose, just by virtue of the nature of sensors that are out there and the whole concept of digital twin introduces some tension in all this. George Gilbert. Take us through what's been happening with the overall suppliers of technology that are related to digital twin building, designing, etc. How are they securing or making promises committing to their customers that they will not cross this data boundary as they improve the quality of their twins? >> Well, as you quoted Ginni Rometty starting out, she's saying IBM, unlike its competitors, will not take advantage and leverage and monetize your data. But it's a little more subtle than that and digital twins are just sort of another manifestation of industry-specific sort of solution development that we've done for decades. The differences, as Jim and David have pointed out, that with machine learning, it's not so much code that's at the heart of these digital twins, it's the machine learning models and the data is what informs those models. Now... So you don't want all your secret sauce to go from Mercedes Benz to BMW but at the same time the economics of industry solutions means that you do want some of the repeatability that we've always gotten from industry solutions. You might have parts that are just company specific. And so in IBM's case, if you really parse what they're saying, they take what they learn in terms of the models from the data when they're working with BMW, and some of that is going to go into the industry specific models that they're going to use when they're working with Mercedes-Benz. If you really, really sort of peel the onion back and ask them, it's not the models, it's not the features of the models, but it's the coefficients that weight the features or variables in the models that they will keep segregated by customer. So in other words, you get some of the benefits, the economic benefits of reuse across customers with similar expertise but you don't actually get all of the secret sauce. >> Now, Ralph Finos-- >> And I agree with George here. I think that's an interesting topic. That's one of the important points. 
It's not kosher to monetize data that you don't own but conceivably if you can abstract from that data at some higher level, like George's describing, in terms of weights and coefficients and so forth, in a neural network that's derivative from the model. At some point in the abstraction, you should be able to monetize. I mean, it's like a paraphrase of some copyrighted material. A paraphrase, I'm not a lawyer, but you can, you can sell a paraphrase because it's your own original work that's based obviously on your reading of Moby Dick or whatever it is you're paraphrasing. >> Yeah, I think-- >> Jim I-- >> Peter: Go ahead, Neil. >> I agree with that but there's a line. There was a guy who worked at Capital One, this was about ten years ago, and he was their chief statistician or whatever. This was before we had words like machine learning and data science, it was called statistics and predictive analytics. He left the company and formed his own company and rewrote and recoded all of the algorithms he had for about 20 different predictive models. Formed a company and then licensed that stuff to Sybase and Teradata and whatnot. Now, the question I have is, did that cross the line or didn't it? These were algorithms actually developed inside Capital One. Did he have the right to use those, even if he wrote new computer code to make them run in databases? So it's more than just data, I think. It's a, well, it's a marketplace and I think that if you own something someone should not be able to take it and make money on it. But that doesn't mean you can't make an agreement with them to do that, and I think we're going to see a lot of that. IMSN gets data on prescription drugs and IRI and Nielsen gets scanner data and they pay for it and then they add value to it and they resell it. So I think that's really the issue is the use has to be understood by all the parties and the compensation has to be appropriate to the use. >> All right, so Ralph Finos. As a guy who looks at market models and handles a lot of the fundamentals for how we do our forecasting, look at this from the standpoint of how people are going to make money because clearly what we're talking about sounds like is the idea that any derivative use is embedded in algorithms. Seeing how those contracts get set up and I got a comment on that in a second, but the promise, a number of years ago, is that people are going to start selling data willy-nilly as a basis for their economic, a way of capturing value out of their economic activities or their business activities, hasn't matured yet generally. Do we see like this brand new data economy, where everybody's selling data to each other, being the way that this all plays out? >> Yeah, I'm having a hard time imagining this as a marketplace. I think we pointed at the manufacturing industries, technology industries, where some of this makes some sense. But I think from a practitioner perspective, you're looking for variables that are meaningful that are in a form you can actually use to make prediction. That you understand what the the history and the validity of that of that data is. And in a lot of cases there's a lot of garbage out there that you can't use. And the notion of paying for something that ultimately you look at and say, oh crap, it's not, this isn't really helping me, is going to be... maybe not an insurmountable barrier but it's going to create some obstacles in the market for adoption of this kind of thought process. 
We have to think about the utility of the data that feeds your models. >> Yeah, I think there's going to be a lot, like there's going to be a lot of legal questions raised and I recommend that people go look at a recent SiliconANGLE article written by Mike Wheatley and edited by our Editor In Chief Robert Hof about Microsoft letting technology partners own right to joint innovations. This is a quite a difference. This is quite a change for Microsoft who used to send you, if you sent an email with an idea to them, you'd often get an email back saying oh, just to let you know any correspondence we have here is the property of Microsoft. So there clearly is tension in the model about how we're going to utilize data and enable derivative use and how we're going to share, how we're going to appropriate value and share in the returns of that. I think this is going to be an absolutely central feature of business models, certainly in the digital business world for quite some time. The last thing I'll note and then I'll get to the Action Items, the last thing I'll mention here is that one of the biggest challenges in whenever we start talking about how we set up businesses and institutionalize the work that's done, is to look at the nature of the assets and the scope of the assets and in circumstances where the asset is used by two parties and it's generating a high degree of value, as measured by the transactions against those assets, there's always going to be a tendency for one party to try to take ownership of it. One party that's able to generate greater returns than the other, almost always makes move to try to take more control out of that asset and that's the basis of governance. And so everybody talks about data governance as though it's like something that you worry about with your backup and restore. Well, that's important but this notion of data governance increasingly is going to become a feature of strategy and boardroom conversations about what it really means to create data assets, sustain those data assets, get value out of them, and how we determine whether or not the right balance is being struck between the value that we're getting out of our data and third parties are getting out of our data, including customers. So with that, let's do a quick Action Item. David Floyer, I'm looking at you. Why don't we start here. David Floyer, Action Item. >> So my Action Item is for businesses, you should focus. Focus on data about your products in use by your customers, to improve, help improve the quality of your products and fuse AI into those products as one of the most efficient ways of adding value to it. And do that before your competition has a chance to come in and get data that will stop you from doing that. >> George Gilbert, Action Item. >> I guess mine would be that... in most cases you you want to embrace some amount of reuse because of the economics involved from your joint development with a solution provider. But if others are going to get some benefit from sort of reusing some of the intellectual property that informs models that you build, make sure you negotiate with your vendor that any upgrades to those models, whether they're digital twins or in other forms, that there's a canonical version that can come back and be an upgraded path for you as well. >> Jim Kobielus, Action Item. >> My Action Item is for businesses to regard your data as a product that you monetize yourself. 
Or if you are unable to monetize it yourself, if there is a partner, like a supplier or a customer who can monetize that data, then negotiate the terms of that monetization in your your relationship and be vigilant on that so you get a piece of that stream. Even if the bulk of the work is done by your partner. >> Neil Raden, Action Item. >> It's all based on transparency. Your data is your data. No one else can take it without your consent. That doesn't mean that you can't get involved in relationships where there's an agreement to do that. But the problem is most agreements, especially when you look at a business consumer, are so onerous that nobody reads them and nobody understands them. So the person providing the data has to have an unequivocal right to sell it to you and the person buying it has to really understand what the limits are that they can do with it. >> Ralph Finos, Action Item. You're muted Ralph. But it was brilliant, whatever it was. >> Well it was and I really can't say much more than that. (Peter laughs) But I think from a practitioner perspective and I understand that from a manufacturing perspective how the value could be there. But as a practitioner if you're fishing for data out there that someone has that might look like something you can use, chances are it's not. And you need to be real careful about spending money to get data that you're not really clear is going to help you. >> Great. All right, thanks very much team. So here's our Action Item conclusion for today. The whole concept of digital business is predicated in the idea of using data assets in a differential way to better serve your markets and improve your operations. It's your data. Increasingly, that is going to be the base for differentiation. And any weak undertaking to allow that data to get out has the potential that someone else can, through their data science and their capabilities, re-engineer much of what you regard as your differentiation. We've had conversations with leading data scientists who say that if someone were to sell customer data into a open marketplace, that it would take about four days for a great data scientist to re-engineer almost everything about your customer base. So as a consequence, we have to tread lightly here as we think about what it means to release data into the wild. Ultimately, the challenge there for any business will be: how do I establish the appropriate governance and protections, not just looking at the technology but rather looking at the overall notion of the data assets. If you don't understand how to monetize your data and nonetheless enter into a partnership with somebody else, by definition that partner is going to generate greater value out of your data than you are. There's significant information asymmetries here. So it's something that, every company must undertake an understanding of how to generate value out of their data. We don't think that there's going to be a general-purpose marketplace for sharing data in a lot of ways. 
This is going to be a heavily contracted arrangement but it doesn't mean that we should not take great steps or important steps right now to start doing a better job of instrumenting our products and services so that we can start collecting data about our products and services because the path forward is going to demonstrate that we're going to be able to improve, dramatically improve the quality of the goods and services we sell by reducing the assets specificities for our customers by making them more intelligent and more programmable. Finally, is this going to be a feature of a differentiated business relationship through trust? We're open to that. Personally, I'll speak for myself, I think it will. I think that there is going to be an important element, ultimately, of being able to demonstrate to a customer base, to a marketplace, that you take privacy, data ownership, and intellectual property control of data assets seriously and that you are very, very specific, very transparent, in how you're going to use those in derivative business transactions. All right. So once again, David Floyer, thank you very much here in the studio. On the phone: Neil Raden, Ralph Finos, Jim Kobielus, and George Gilbert. This has been another Wikibon Action Item. (electronic music)
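George's earlier point about reusing the industry template while keeping each customer's fitted coefficients segregated can be sketched as follows. The feature names, the stand-in telemetry, and the use of a simple least squares fit are assumptions made for illustration, not a description of how IBM or anyone else actually implements this.

```python
# Shared, reusable asset: the feature definitions and the fitting code.
# Customer-private asset: the fitted weights derived from that customer's data.
import numpy as np

SHARED_FEATURES = ["vibration_rms", "bearing_temp_c", "hours_since_service"]

def fit_shared_template(X, y):
    """Ordinary least squares over the shared feature set, plus an intercept."""
    X1 = np.hstack([X, np.ones((len(X), 1))])
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return coef                                   # never pooled across customers

customer_coefficients = {}                        # segregated per engagement
rng = np.random.default_rng(0)
for customer in ("oem_a", "oem_b"):
    X = rng.normal(size=(60, len(SHARED_FEATURES)))        # stand-in telemetry
    true_w = rng.normal(size=len(SHARED_FEATURES))
    y = X @ true_w + rng.normal(scale=0.1, size=60)
    customer_coefficients[customer] = fit_shared_template(X, y)

for name, coef in customer_coefficients.items():
    print(name, np.round(coef, 2))                # different weights, same template
```

The reusable asset here is the feature list and the fitting code; the per-customer weight vectors are the derivative assets that, per the discussion, should not cross customer boundaries without an explicit agreement.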
Action Item Quick Take | David Floyer | Flash and SSD, April 2018
>> Hi, I'm Peter Burris with another Wikibon Action Item Quick Take. David Floyer, you've been at the vanguard of talking about the role that Flash, SSDs, and other technologies are going to have in the technology industry, predicting early on that Flash was going to eclipse HDD, even though you got a lot of blowback along the lines of "it's going to remain expensive and small." That's changed. What's going on? >> Well, I've got a prediction that we'll have petabyte drives, SSD drives, within five years. Let me tell you a little bit why. So there's this new type of SSD that's coming into town. It's the mega SSD, and Nimbus Data has just announced this mega SSD. It's a hundred terabyte drive. It's very high density, obviously. It has fewer IOPS and less bandwidth than a standard SSD. The access density is much better than HDD, but still obviously lower than high-performance SSD. Much, much lower space and power than either SSD or HDD in terms of environmentals. It's three and a half inch. That's compatible with HDD, so it's obviously looking to go into the same slots. A hundred terabytes today, two hundred terabytes to come, around 10x the capacity of the HAMR drives that are coming in from HDDs in 2019, 2020, and the delta will increase over time. It's still more expensive than HDD per bit, and it's not a direct replacement, but it has a much greater ability to integrate with data services and other things like that. So the prediction, then, is get ready for mega SSDs. It's going to carve out a space at the low end of SSDs and into the HDDs, and we're going to have one petabyte, or more, drives within five years. >> Big stuff from small things. David Floyer, thank you very much. And, once again, this has been a Wikibon Action Item Quick Take. (chill techno music)
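David's access density point is easy to check with back-of-envelope arithmetic. The capacities and IOPS figures below are illustrative assumptions chosen for the comparison, not vendor specifications.

```python
# Access density = random read IOPS per usable terabyte. All numbers are
# assumed for illustration.
drives = {
    "7.2K RPM capacity HDD":   {"tb": 14,  "random_read_iops": 200},
    "mega SSD (100 TB class)": {"tb": 100, "random_read_iops": 100_000},
    "performance NVMe SSD":    {"tb": 15,  "random_read_iops": 700_000},
}

for name, d in drives.items():
    density = d["random_read_iops"] / d["tb"]
    print(f"{name:>24}: {density:>9,.0f} IOPS per TB")

# With these assumptions the mega SSD has roughly 70x the access density of
# the capacity HDD it displaces, while sitting well below a performance NVMe
# drive, which matches the framing of it as a capacity tier rather than a
# performance tier.
```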
Action Item | March 30, 2018
>> Hi, I'm Peter Burris and welcome to another Wikibon Action Item. (electronic music) Once again, we're broadcasting from theCUBE studios in beautiful Palo Alto. Here in the studio with me are George Gilbert and David Floyer. And remote, we have Neil Raden and Jim Kobielus. Welcome everybody. >> David: Thank you. >> So this is kind of an interesting topic that we're going to talk about this week. And it really is how are we going to find new ways to generate derivative use out of many of the applications, especially web-based applications that are have been built over the last 20 years. A basic premise of digital business is that the difference between business and digital business is the data and how you craft data as an asset. Well, as we all know in any universal Turing machine, data is the basis for representing both the things that you're acting upon but also the algorithms, the software itself. Software is data and the basic principles of how we capture software oriented data assets or software assets and then turn them into derivative sources of value and then reapply them to new types of problems is going to become an increasingly important issue as we think about the world of digital business is going to play over the course of the next few years. Now, there are a lot of different domains where this might work but one in particular that's especially as important is in the web application world where we've had a lot of application developers and a lot of tools be a little bit more focused on how we use web based services to manipulate things and get software to do the things we want to do and also it's a source of a lot of the data that's been streaming into big data applications. And so it's a natural place to think about how we're going to be able to create derivative use or derivative value out of crucial software assets. How are we going to capture those assets, turn them into something that has a different role for the business, performs different types of work, and then reapply them. So to start the conversation, Jim Kobielus. Why don't you take us through what some of these tools start to look like. >> Hello, Peter. Yes, so really what we're looking at here, in order to capture these assets, the web applications, we first have to generate those applications and the bulk of that worker course is and remains manual. And in fact, there is a proliferation of web application development frameworks on the market and the range of them continues to grow. Everything from React to Angular to Ember and Node.js and so forth. So one of the core issues that we're seeing out there in the development world is... are there too many of these. Is there any prospect for simplification and consolidation and convergence on web application development framework to make the front-end choices for developers a bit easier and straightforward in terms of the front-end development of JavaScript and HTML as well as the back-end development of the logic to handle the interactions; not only with the front-end on the UI side but also with the infrastructure web services and so forth. Once you've developed the applications, you, a professional programmer, then and only then can we consider the derivative uses you're describing such as incorporation or orchestration of web apps through robotic process automation and so forth. So the issue is how can we simplify or is there a trend toward simplification or will there soon be a trend towards simplification of a front-end manual development. 
And right now, I'm not seeing a whole lot of action in this direction of a simplification on the front-end development. It's just a fact. >> So we're not seeing a lot of simplification and convergence on the actual frameworks for creating software or creating these types of applications. But we're starting to see some interesting trends for stuff that's already been created. How can we generate derivative use out of it? And also per some of our augmented programming research, new ways of envisioning the role that artificial intelligence machine learning, etc, can play in identifying patterns of utilization so that we are better able to target those types of things that could be used for derivative or could be applied to derivative use. Have I got that right, Jim? >> Yeah, exactly. AI within robotic process automation, anything that could has already been built can be captured through natural language processing, through a computer image recognition, OCR, and so forth. And then trans, in that way, it's an asset that can be repurposed in countless ways and that's the beauty RPA or where it's going. So the issue is then not so much capture of existing assets but how can we speed up and really automate the original development of all that UI logic? I think RPA is part of the solution but not the entire solution, meaning RPA provides visual front-end tools for the rest of us to orchestrate more of the front-end development of the application UI and interaction logic. >> And it's also popping up-- >> That's part of broader low-code-- >> Yeah, it's also popping up at a lot of the interviews that we're doing with CIOs about related types of things but I want to scope this appropriately. So we're not talking about how we're going to take those transaction processing applications, David Floyer, and envelope them and containerize them and segment them and apply a new software. That's not what we're talking about, nor are we talking about the machine to machine world. Robot process automation really is a tool for creating robots out of human time interfaces that can scale the amount of work and recombine it in different ways. But we're not really talking about the two extremes. The hardcore IoT or the hardcore systems of record. Right? >> Absolutely. But one question I have for Jim and yourself is the philosophy for most people developing these days is mobile first. The days of having an HTML layout on a screen have gone. If you aren't mobile first, that's going to be pretty well a disaster for any particular development. So Jim, how does RPA and how does your discussion fit in with mobile and all of the complexity that mobile brings? All of the alternative ways that you can do things with mobile. >> Yeah. Well David, of course, low-code tools, there are many. There are dozens out there. There are many of those that are geared towards primarily supporting of fast automated development of mobile applications to run on a variety of devices and you know, mobile UIs. That's part of the solution as it were but also in the standard web application development world. know there's these frameworks that I've described. Everything from React to Angular to Vue to Ember, everything else, are moving towards a concept, more than concept, it's a framework or paradigm called progressive web apps. 
And what progressive web apps are all about, and that's really the mainstream of web application development now, is blurring the distinction between mobile and web and desktop applications, because you build applications, JavaScript applications, for browsers. The apps look and behave as if they were real-time, interactive, in-memory mobile apps. What that means is that they download fresh content throughout a browsing session, progressively. I'm putting that term in air quotes because that's where the 'progressive' in progressive web app comes in. And they don't require the end-user to visit an app store or download software. They don't require any special capabilities in terms of synchronizing data from servers to run in memory natively inside of web-accessible containers that are local to the browser. They just feel mobile even though they, excuse me, they may be running on a standard desktop with narrowband connectivity and so forth. So they scream, and they scream in the context of a standard JavaScript, Ajax browser session. >> So when we think about this, jeez Jim, it almost sounds like client-side Java, but I think we're talking about something, as you said, that evolves as the customer uses it, and there's a lot of techniques and approaches that we've been using to do some of those things. But George Gilbert, the reason I bring up the notion of client-side Java is because we've seen other initiatives over the years try to do this. Now, partly they failed because, David Floyer, they focused on too much and tried to standardize or presume that everything required a common approach, and we know that that's always going to fail. But what are some of the other things that we need to think about as we think about ways of creating derivative use out of software or digital assets? >> Okay, so. I come at it from two angles. And as Jim pointed out, there's been a Cambrian explosion of creativity and innovation on, frankly, client-side development and server-side development. But if you look at how we're going to recombine our application assets, we tried 20 years ago with EAI, which was sort of like MuleSoft but only for on-prem apps. And it didn't work because every app was bespoke essentially-- >> Well, it worked for point-to-point classes of applications. >> Yeah, but it required bespoke development for every-- >> Peter: Correct. >> Every instance, because the apps were so customized. >> Peter: And the interfaces were so customized. >> Yes. At the same time we were trying to build higher-level application development capabilities on desktop productivity tools with macros and then scripting languages, cross application, and visual development, or using applications as visual development building blocks. Now, you put those two things together and you have the ability to work with applications that have user interfaces, and you have the functionality that's in the richer enterprise applications, and now we have the technology to say let's program by example on essentially a concrete use case and a concrete workflow. And then you go back in and you progressively generalize it so it can handle more exception conditions and edge conditions. In other words, you start with... it's like you start with the concrete and you get progressively more abstract. >> Peter: You start with the work that the application performs. >> Yeah. >> And not knowledge of the application itself. >> Yes.
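To ground Jim's description of progressive web apps, here is a minimal sketch of the service-worker pattern behind them. It is an illustration only, assuming hypothetical file names and cached assets; a production PWA would also ship a web app manifest and handle cache versioning and non-GET requests.

```javascript
// sw.js -- a minimal service worker: pre-cache the app shell, then serve
// cached content first and refresh it in the background, so the app feels
// like an installed mobile app even on flaky connectivity.
const CACHE = 'app-shell-v1';
const SHELL = ['/', '/index.html', '/app.js', '/styles.css'];

self.addEventListener('install', (event) => {
  // Download the shell once, at install time -- no app store involved.
  event.waitUntil(caches.open(CACHE).then((cache) => cache.addAll(SHELL)));
});

self.addEventListener('fetch', (event) => {
  // Cache-first with a background refresh ("stale-while-revalidate").
  event.respondWith(
    caches.match(event.request).then((cached) => {
      const fresh = fetch(event.request).then((response) => {
        caches.open(CACHE).then((cache) => cache.put(event.request, response.clone()));
        return response;
      });
      return cached || fresh;
    })
  );
});
```

```javascript
// In the page itself, registering the worker is the only "install" step the user ever sees.
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js');
}
```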
But the key thing is, as you said, recombining assets, because we're sort of marrying the best of the EAI world with the best of the visual client-side development world. Where, as Jim points out, machine learning is making it easier for the tools to stay up to date as the user interfaces change across releases. This means that, I wouldn't say this is as easy as spreadsheet development, it's just not. >> It's not like building spreadsheet macros but it's more along those lines. >> Yeah, but it's not as low-level as just building raw JavaScript because, and Jim gave a great example of JavaScript client-side frameworks, look at the Gmail inbox application that millions of people use. That just downloads a new version whenever they want to drop it, and they're just shipping JavaScript over to us. But the key thing, and this is, Peter, your point about digital business: by combining user interfaces, we can bridge applications that were silos, then we can automate the work the humans were doing to bridge those silos, and then we can reconstitute workflows in much more efficient-- >> Around the digital assets, which is kind of how business ultimately evolves. And that's a crucial element of this whole thing. So let's change direction a little bit, because we're talking about, as Jim said, we've been talking about the fact that there are all these frameworks out there. There may be some consolidation on the horizon, we're researching that right now. Although there's not a lot of evidence that it's happening, there clearly is an enormous number of digital assets that are in place inside these web-based applications, whether it be relative to mobile or something else. And we want to find derivative use of them, or we want to create derivative use out of them, and there are some new tools that allow us to do that in a relatively simple, straightforward way, like RPA, and there are certainly others. But that's not where this ends up. We know that this is increasingly going to be a target for AI, what we've been calling augmented programming, and the ability to use machine learning and related types of technologies to be able to reveal, make transparent, gain visibility into, patterns within applications and within the use of data, and then have that become a crucial feature of the development process. And increasingly, even potentially to start actually creating code automatically based on very clear guidance about what work needs to be performed. Jim, what's happening in that world right now? >> Oh, let's see. So basically, I think what's going to happen over time is that more of the development cycle for web applications will incorporate not just the derivative assets, the AI being able to decompose existing UI elements and recombine them to enable flexible and automated recombination in various ways, but will also enable greater tuning of the UI in an automated fashion through A/B testing that's in line to the development cycle, based on metrics that AI is able to sift through. In terms of... different UI designs can be put out into production applications in real time and then really tested with different categories of users, and then the best suited or best fit design selected based on, like, reducing user abandonment rates and speeding up access to commonly required capabilities and so forth. The metrics can be rolled in line to the automation process to automatically select the best fit UI design that had been developed through automated means.
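As a rough sketch of the in-line, metrics-driven selection Jim is describing, the snippet below picks a UI variant with a simple epsilon-greedy policy based on abandonment rates. The variant names, counts, and the policy itself are hypothetical stand-ins for whatever a real pipeline would log and optimize.

```javascript
// Hypothetical UI variants with metrics gathered from production traffic.
const variants = [
  { id: 'checkout-a', served: 1200, abandoned: 420 },
  { id: 'checkout-b', served: 1180, abandoned: 310 },
  { id: 'checkout-c', served: 990, abandoned: 505 },
];

// Mostly serve the design with the lowest abandonment rate, but keep
// exploring the others so the metrics stay current (epsilon-greedy).
function pickVariant(epsilon = 0.1) {
  if (Math.random() < epsilon) {
    return variants[Math.floor(Math.random() * variants.length)];
  }
  return variants.reduce((best, v) =>
    v.abandoned / v.served < best.abandoned / best.served ? v : best);
}

// Feed each session's outcome back in, so future picks reflect real behavior.
function recordOutcome(variantId, abandoned) {
  const v = variants.find((x) => x.id === variantId);
  v.served += 1;
  if (abandoned) v.abandoned += 1;
}

const chosen = pickVariant();
console.log(`Serving UI variant: ${chosen.id}`);
```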
In other words, this real-world experimentation of the UI has been going on for quite some time in many enterprises, and it often, increasingly, involves data scientists who are managing the predictive models to sort of very much drive the whole promotion process of promoting the best fit design to production status. I think this will accelerate. We'll take more of these in-line metrics on UI and bring them, I believe, into more RPA-style environments, so the rest of us building out these front ends are automating more of our transactions, and many more of the UIs can take advantage of the fact that we'll let the infrastructure choose the best fit of the designs for us, without us having to worry about doing A/B testing and all that stuff. The cloud will handle it. >> So it's a big vision. This notion of eventually, through more concrete, standard, well-understood processes, applying some of these AI/ML technologies to being able to choose options for the developer and even automate some elements of those options based on policy and rules. Neil Raden, again, we've been looking at similar types of things for years. How's that worked in the past, and let's talk a bit about what needs to happen now to make sure that if it's going to work, it's going to work this time. >> Well, it really hasn't worked very well. And the reason it hasn't worked very well is because no one has figured out a representational framework to really capture all the important information about these objects. It's just too hard to find them. Everybody knows that when you develop software, 80% of it is grunt work. It's just junk. You know, it's taking out the trash and it's setting things up and whatever. And the real creative stuff is a very small part of it. So if you could relieve the developer from having to do all that junk by just picking up pieces of code that have already been written and tested, that would be big. But the idea of this has been overwhelmed by the scale and the complexity. And people have tried to create libraries like JavaBeans and object-oriented programming and that sort of thing. They've tried to create catalogs of these things. They've used relational databases; it doesn't work. My feeling, and I hate to use the word because it always puts people to sleep, is that it takes some kind of ontology that's deep enough and rich enough to really do this. >> Oh, hold on Neil, I'm feeling... (laughs) >> Yeah. Well, I mean, what good is it? I mean, go to Git, right. You can find a thousand things, but you don't know which one is really going to work for you because it's not rich enough, it doesn't have enough information. It needs to have quality metrics. It needs to have reviews by people who have used it, and so forth. So that's where I think we run into trouble. >> Yeah, I know. >> As far as robots, yeah? >> Go ahead. >> As far as robots writing code, you're going to have the same problem. >> No, well here's where I think it's different this time, and I want to throw it out to you guys and see if it's accurate, and we'll get to the action items. Here's where I think it's different. In the past, partly perhaps because it's where developers were most fascinated, we tried to create object-oriented databases and object-oriented representations of data, using object-oriented models as a way of thinking about it. And object-oriented code and object-oriented this, and a lot of it was relatively low in the stack.
And we tried to create everything from scratch, and it turned out that whenever we did that, it was almost like CASE from many years ago. You create it in the tool and then you maintain it out of the tool, and you lose all organization of how it worked. What we're talking about here, and the reason why I think this is different, I think Neil is absolutely right, is because we're focusing our attention on the assets within an application that create the actual business value. What does the application do? And we try to encapsulate those actions and render them as things that are reusable, without necessarily doing an enormous amount of work on the back-end. Now, we have to be worried about the back-end. It's not going to do any good to do a whole bunch of RPA or related types of stuff on the front-end that kicks off an enormous number of transactions that go after a little server that's 15 years old, that's historically only handled a few transactions a minute. So we have to be very careful about how we do this. But nonetheless, by focusing more attention on what is generating value in the business, namely the actions that the application delivers, as opposed to knowledge of the application itself, namely how it does it, then I think that we're constraining the problem pretty dramatically, subject to the realities of what it means to actually be able to maintain and scale applications that may be asked to do more work. What do you guys think about that? >> Now Peter, let me say one more thing about this, about robots. I think you're all a lot more sanguine about AI and robots doing these kinds of things. I'm not. Let me read to you three pickup lines that a deep neural network developed after being trained to do pickup lines. You must be a tringle? 'Cause you're the only thing here. Hey baby, you're to be a key? Because I can bear your toot? Now, what kind of code would-- >> Well look, the problem is, look, we go back 50 years to ELIZA and the whole notion of, whatever it was, the interactive psychology. Look, let's be honest about this. Neil, you're making a great point. I don't know that any of us are more or less sanguine, and that probably is a good topic for a future action item: what are the practical limits of AI and how that's going to change over time. But let's be relatively simple here. The good news about applying AI inside IT problems is that you're starting with engineered systems, with engineered data forms, and engineered data types, and you're working with engineers, and a lot of that stuff is relatively well structured. Certainly more structured than the outside world, and it starts with digital assets. That's why AI for IT operations management is more likely. That's why AI for application programming is more likely to work, as opposed to AI to do pickup lines, which, as you said, is semantically all over the place. There are very, very few people that are going to conform to a set of conventions for... Well, I want to move away from the concept of pickup lines and set conventions for other social interactions that are very, very complex. We don't look at a face and get excited or not in a way that corresponds to an obvious, well-understood semantic problem.
It simply has to be proven out in the application's engagement, through people or not through people, with the real-world outcome. And some outcomes, like the ones that Neil read off there, in terms of those ridiculous pickup lines, most of those kinds of automated solutions won't make a freaking bit of sense, because you need humans with their brains. >> Yeah, you need human engagement. So coming back to this key point, the constraint that we're putting on this right now is the reason why, certainly, perhaps I'm a little bit more ebullient than you might be, Neil. But I want to be careful about this, because I also have some pretty strong feelings about what the limits of AI are, regardless of what Elon Musk says. At the end of the day, we're talking about digital objects, not real objects, that are engineered, that haven't evolved over a few billion years, to deliver certain outputs, and data that's been tested and relatively well verified. As opposed to having an unlimited, at least from a human experience standpoint, potential set of outcomes. So in that small world, and certainly the infrastructure universe is part of that, and what we're saying is increasingly the application development universe is going to be part of that as part of the digital business transformation, I think it's fair to say that we're going to start seeing AI, machine learning, and some of these other things being applied to that realm with some degree of success. But, something to watch for. All right, so let's do action item. David Floyer, why don't we start with you. Action item. >> In addressing this, I think that the key in terms of business focus is first of all mobile; you have to design things for mobile. So any use of any particular platform or particular set of tools has to lead to mobile being first. And the mobiles are changing rapidly, with the amount of data that's being generated on the mobile itself, around the mobile. So that's the first point I would make from a business perspective. And the second is that, from a business perspective, one of the key things is that you can reduce cost. Automation must be a key element of this, and therefore designing things that will take out tasks and remove tasks, make things more efficient, is going to be an incredibly important part of this. >> And reduce errors. >> And reduce errors, absolutely. Probably most important is reduce errors. It's to take those out of the chain, and where you can, speed things up by removing human intervention and human tasks and raising what humans are doing to a higher level. >> Other things. George Gilbert, action item. >> Okay, so. Really quickly on David's point that we have many more application forms and expressions that we have to present, like mobile first. And going back to using RPA as an example. The UiPath product that we've been working with, the core of its capability is to be able to identify specific UI elements in a very complex presentation, whether it's on a web browser or whether it's on a native app on your desktop or whether it's mobile. I don't know how complete they are on mobile, because I'm not sure if they did that first, but that core capability to identify, in a complex presentation, essentially a collection and hierarchy of UI elements, that's what makes it powerful. Now on the AI part, I don't think it's as easy as pointing it at one app and then another and saying go make them talk.
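As a loose analogy for the UI-element identification George describes, the sketch below drives an existing web UI with Puppeteer, a general-purpose browser automation library. It is not UiPath's API, and the URL and selectors are hypothetical; the point is only that a "software robot" works the interface the way a person would and hands the result to another system.

```javascript
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();

  // Open the legacy application's UI, just as a human operator would.
  await page.goto('https://legacy-app.example.com/orders');

  // Identify specific UI elements and act on them.
  await page.waitForSelector('#order-search');
  await page.type('#order-search', 'PO-10442');
  await page.click('button.search-submit');

  // Read the result off the screen so it can be passed to another application.
  await page.waitForSelector('.order-status');
  const status = await page.$eval('.order-status', (el) => el.textContent.trim());
  console.log(`Order status: ${status}`);

  await browser.close();
})();
```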
It's more like helping you on the parts where they might be a little ambiguous, like if pieces move around from release to release, things like that. So my action item is to say start prototyping with the RPA tools, because they're probably robust enough to start integrating your enterprise apps. And the only big new wrinkle that's come out in the last several weeks, that is now in everyone's consciousness, is the MuleSoft acquisition by Salesforce, because that's going back to the EAI model. And we will see more app-to-app integration at the cloud level that's now possible. >> Neil Raden, action item. >> Well, you know, Mark Twain said there are only two kinds of people in the world: the kind who think there are only two kinds of people in the world, and the ones who know better. I'm going to deviate from that a little and say that there are really two kinds of software developers in the world. There are the true computer scientists who want to write great code. It's elegant, it's maintainable, it adheres to all the rules, it's creative. And then there's an army of people who are just trying to get something done. So the boss comes to you and says we've got to get a new website up apologizing for selling the data of 50 million of our customers, and you need to do it in three days. Now, those are the kind of people who need access to things that can be reused. And I think there's a huge market for that, as well as all these other software development robots, so to speak. >> Jim Kobielus, action item. >> Yeah, for simplifying web application development, I think that developers need to distinguish between back-end and front-end frameworks. There's a lot of convergence around the back-end framework, specifically Node.js. So you can basically decouple the decision in terms of front-end frameworks from that, and you need to, right upfront, make sure that you have a back-end that supports many front ends, because there are many front ends in the world. Secondly, the front ends themselves seem to be moving towards React and Angular and Vue as being the predominant ones. You'll find more programmers who are familiar with those. And then thirdly, as you move towards consolidation onto fewer frameworks on the front-end, move towards low-code tools that allow you, just with the push of a button, you know, visual development, to deploy the built-out UI to a full range of mobile devices and web applications. And to close my action item... I'll second what David said. Move toward a mobile first development approach for web applications, with a focus on progressive web applications that can run on mobiles and others, where they give a mobile experience: with intermittent connectivity, with push notifications, with a real-time, in-memory, fast experience. Move towards a mobile first development paradigm for all of your browser-facing applications, and that really is the simplification strategy you can and should pursue right now on the development side, because web apps are so important, you need a strategy. >> Yeah, so mobile irrespective of the underlying biology, or what have you, of the user. All right, so here's our action item. Our view on digital business is that a digital business uses data differently than a normal business. And a digital business transformation ultimately is about how do we increase our visibility into our data assets and find new ways of creating new types of value so that we can better compete in markets.
Now, that includes data, but it also includes application elements, which also are data. And we think increasingly enterprises must take a more planful and purposeful approach to identifying new ways of deriving additional streams of value out of application assets, especially web application assets. Now, this is a dream that's been put forward for a number of years, and sometimes it's worked better than others. But in today's world we see a number of technologies emerging that are likely, at least in this more constrained world, to present a significant new set of avenues for creating new types of digital value. Specifically tools like RPA, robotic process automation, that are looking at the outcomes of an application and allow programmers to use a by-example approach to start identifying what are the UI elements, what those UI elements do, how they could be combined, so that they can be composed into new things and thereby provide a new application approach, a new application integration approach, which is not at the data and not at the code but more at the work that a human being would naturally do. These allow for greater scale and greater automation and a number of other benefits. The reality, though, is that you also have to be very cognizant as you do this; even though you can find these assets, find a new derivative form, and apply them very quickly to new potential business opportunities, you have to know what's happening at the back-end as well. Whether it's how you go about creating the assets with some of the front-end tooling, and being very cognizant of which front ends are going to be better able, or not, at creating these more reusable assets. Or whether you're talking about still relatively mundane things, like how a database serializes access to data and will fall over because you've created an automated front-end that's just throwing a lot of transactions at it. The reality is there's always going to be complexity. We're not going to see all the problems being solved, but some of the new tools allow us to focus more attention on where the real business value is created by apps, find ways to reuse that, and apply it, and bring it into a digital business transformation approach. All right. Once again. George Gilbert, David Floyer, here in the studio. Neil Raden, Jim Kobielus, remote. You've been watching Wikibon Action Item. Until next time, thanks for joining us. (electronic music)
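Picking up Jim's action item about decoupling front ends from the back end, here is a minimal sketch of a Node.js back end, built with Express, that any front end could call, whether it is React, Angular, Vue, a progressive web app, or a native mobile shell. The routes and payloads are hypothetical.

```javascript
const express = require('express');
const app = express();

app.use(express.json());

// The same JSON API serves every front end; the back end does not care
// which framework, or which device, is on the other side.
app.get('/api/orders/:id', (req, res) => {
  res.json({ id: req.params.id, status: 'shipped', updated: new Date().toISOString() });
});

app.post('/api/orders', (req, res) => {
  // Validation and persistence would live here in a real service.
  res.status(201).json({ id: 'order-123', received: req.body });
});

app.listen(3000, () => console.log('API listening on port 3000'));
```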
David Floyer | Action Item Quick Take - March 30, 2018
>> Hi, this is Peter Burris with another Wikibon Action Item Quick Take. David Floyer, big news from Redmond, what's going on? >> Well, big Microsoft announcement. If we go back a few years before Nadella took over, Ballmer was a great believer in one Microsoft. They bought Nokia, they were looking at putting Windows into everything; it was a Windows-led, one-Microsoft organization. And a lot of ambitious ideas were cut off because they didn't get the sign-off by, for example, the Windows group. Nadella's first action, and I actually was there, was to announce Office on the iPhone. A major, major thing that had been proposed for a long time was being held up internally. And now he's gone even further. The focus, the clear focus, of Microsoft is on the cloud, you know, 50%-plus CAGR on the cloud, Office 365 CAGR of 41%, and AI, focusing on AI and obviously the intelligent edge as well. So Windows 10: Myerson, the leader there, is out. 2% CAGR, he missed his one billion Windows target by a long way, something like 50%. Windows functionality is being distributed, essentially, across the whole of Microsoft. So hardware is taking the Xbox and the Surface. Windows Server itself is going to the cloud. So, big change from the historical look of Microsoft, but a trimming down of the organization and a much clearer focus on the key things driving Microsoft's fantastic increase in net worth. >> So Microsoft retooling to take advantage and be more relevant, sustain its relevance, in the new era of computing. Once again, this has been a Wikibon Action Item Quick Take. (soft electronic music)
Jim Kobielus | Action Item Quick Take - March 30, 2018
>> Hi, I'm Peter Burris, and welcome to a Wikibon Action Item Quick Take. Jim Kobielus, lots going on in the world of AI and storage. If we think about what happened in storage over the years, it used to be, for disk-based storage, get data into a persistent state, and for some of the flash-based storage, it's get data out faster. What happened this week between Pure and NVIDIA to make it easier to get data out faster, especially for AI applications? >> Yeah Peter, this week at NVIDIA's annual conference, the GPU Technology Conference, they announced a partnership with Pure Storage. In fact they released a jointly developed product called AIRI, A-I-R-I, standing for AI Ready Infrastructure. What's significant about AIRI is that it is a... Well, I'll tell you, years ago, I'm showing my age, there was this concept of the data warehousing appliance, a pre-bundled, pre-integrated assembly of storage and compute and software for specific workloads. Though I wouldn't use the term appliance here, it's a similar concept. In the AI space, there's a need for pre-integrated storage and compute devices, racks, for training workloads and other core, very compute- and very data-intensive workloads for AI. And that's what the Pure Storage NVIDIA AIRI is all about. It includes Pure Storage's FlashBlade storage technology, plus four NVIDIA DGX supercomputers that are running the latest GPUs, the Tesla V100, as well as providing a fast interconnect from NVIDIA. Plus, also bundling software: NVIDIA's AI frameworks for modeling, and there's a management tool from Pure Storage. What this is, this is a harbinger of what we expect, and Wikibon expects, a broader range from these vendors and others of pre-built, optimized AI storage products for premises-based deployment, for hyperquads, really for complex AI pipelines involving data scientists, data engineers and others. We're very excited about this particular product, we think it has great potential, and we believe there's a lot of pent-up demand for these kinds of pre-built hardware products. And that, in many ways, was by far the most significant story in the AI space this week. >> All right, so this has been... thanks very much for that Jim. So, more to come, moving more compute closer to the data. Part of a bigger trend. This has been a Wikibon Action Item Quick Take. (smooth techno music)
Wikibon Action Item | March 23rd, 2018
>> Hi, I'm Peter Burris, and welcome to another Wikibon Action Item. (funky electronic music) This was a very interesting week in the tech industry, specifically because IBM's Think Conference aggregated a large number of people. Now, theCUBE was there. Dave Vellante, John Furrier, and myself all participated in somewhere in the vicinity of 60 or 70 interviews with thought leaders in the industry, including a number of very senior IBM executives. The reason why this becomes so important is because IBM made a proposal to the industry about how some of the digital disruption that the market faces is likely to unfold. The normal approach, or the normal mindset that people have used, is that startups, digital native companies, were going to change the way that everything was going to operate, and the dinosaurs were going to go by the wayside. IBM's interesting proposal is that the dinosaurs actually are going to learn to dance, utilizing or playing on a book title from a number of years ago. And the specific argument was laid out by Ginni Rometty in her keynote, when she said that there are a number of factors that are especially important here. Factor number one is that increasingly, businesses are going to recognize that the role that their data plays in competition is ascending. It's getting more important. Now, this is something that Wikibon's been arguing for quite some time. In fact, we have said that the whole key to digital disruption and digital business is to acknowledge that the difference between business and digital business is the role that data and data assets play in your business. So we have strong agreement there. But on top of that, Ginni Rometty made the observation that 80% of the data that could be accessed and put to work in business has not yet been made available to the new activities, the new processes that are essential to changing the way customers are engaged, businesses operate, and overall change and disruption occurs. So her suggestion is that that 80%, that vast amount of data that could be applied that's not being tapped, is embedded deep within the incumbents. And so the core argument from IBM is that the incumbent companies, not the digital natives, not the startups, but the incumbent companies, are poised to have a significant role in disrupting how markets operate, because of the value of their data that hasn't currently been put to work and made available to new types of work. That was the thesis that we heard this week, and that's what we're going to talk about today. Are the incumbents really going to strike back? So Dave Vellante, let me start with you. You were at Think, you heard the same type of argument. What did you walk away with? >> So when I first heard the term incumbent disruptors, I was very skeptical, and I still am. But I like the concept and I like it a lot. So let me explain why I like it and why I think there are some real challenges. If I'm a large incumbent Global 2000, I'm not going to just roll over because the world is changing and software is eating my world. Rather, what I'm going to do is use my considerable assets to compete, and so that includes my customers, my employees, my ecosystem, the partnerships that I have there, et cetera. The reason why I'm skeptical is because incumbents aren't organized around their data assets. Their data assets are stovepiped, they're all over the place.
And the skills to leverage that data value, monetize that data, understand the contribution that data makes toward monetization, those skills are limited. They're bespoke and they're very narrow. They're within lines of business or divisions. So there's a huge AI gap between the true digital business and an incumbent business. Now, I don't think all is lost. I think a lot of strategies can work, from M&A to transformation projects, joint ventures, spin-offs. Yeah, IBM gave some examples. They put up Verizon and American Airlines. I don't see them yet as the incumbent disruptors. But then there was another example of IBM Maersk doing some very interesting and disrupting things, Royal Bank of Canada doing some pretty interesting things. >> But in a joint venture forum, Dave, to your point, they specifically set up a joint venture that would be organized around this data, didn't they? >> Yes, and that's really the point I'm trying to make. All is not lost. There are certain things that you can do, many things that you can do as an incumbent. And it's really game on for the next wave of innovation. >> So we agree as a general principle that data is really important, David Floyer. And that's been our thesis for quite some time. But Ginni put something out there, Ginni Rometty put something out there. My good friend, Ginni Rometty, put something out there that 80% of the data that could be applied to disruption, better customer engagement, better operations, new markets, is not being utilized. What do we think about that? Is that number real? >> If you look at the data inside any organization, there's a lot of structured data. And that has better ability to move through an organization. Equally, there's a huge amount of unstructured data that goes in emails. It goes in voicemails, it goes in shared documents. It goes in diagrams, PowerPoints, et cetera, that also is data which is very much locked up in the way that Dave Vellante was talking about, locked up in a particular process or in a particular area. So is there a large amount of data that could be used inside an organization? Is it private, is it theirs? Yes, there is. The question is, how do you tap that data? How do you organize around that data to release it? >> So this is kind of a chicken and egg kind of a problem. Neil Raden, I'm going to turn to you. When we think about this chicken and egg problem, the question is do we organize in anticipation of creating these assets? Do we establish new processes in anticipation of creating these data assets? Or do we create the data assets first and then re-institutionalize the work? And the reason why it's a chicken and egg kind of problem is because it takes an enormous amount of leadership will to affect the way a business works before the asset's in place. But it's unclear that we're going to get the asset that we want unless we affect the reorganization, institutionalization. Neil, is it going to be a chicken? Is it going to be the egg? Or is this one of the biggest problems that these guys are going to have? >> Well, I'm a little skeptical about this 80% number. I need some convincing before I comment on that. But I would rather see, when David mentioned the PowerPoint slides or email or that sort of thing, I would rather see that information curated by the application itself, rather than dragged out in broad data and reinterpreted in something. I think that's very dangerous. I think we saw that in data warehousing. 
(mumbling) But when you look at building data lakes, you throw all this stuff into a data lake. And then after the fact, somebody has to say, "Well, what does this data mean?" So I find it kind of a problem. >> So Jim Kobielus, a couple weeks ago Microsoft actually introduced a technology or a toolkit that could in fact be applied to move this kind of advance processing for dragging value out of a PowerPoint or a Word document or something else, close and proximate to the application. Is that, I mean, what Neil just suggested I think is a very, very good point. Are we going to see these kinds of new technologies directly embedded within applications to help users narrowly, but businesses more broadly, lift that information out of these applications so it can be freed up for other uses? >> I think yeah, on some level, Peter, this is a topic called dark data. It's been discussed in data management circles for a long time. The vast majority, I think 75 to 80% is the number that I see in the research, is locked up in terms of it's not searchable, it's not easily discoverable. It's not mashupable, I'm making up a word. But the term mashup hasn't been used in years, but I think it's a good one. What it's all about is if we want to make the most out of our incumbent's data, then we need to give the business, the business people, the tools to find the data where it is, to mash it up into new forms and analytics and so forth, in order to monetize it and sell it, make money off of it. So there are a wide range of data discovery and other tools that support a fairly self-service combination and composition of composite data object. I don't know that, however, that the culture of monetizing existing dataset and pulling dark data into productized forms, I don't think that's taken root in any organization anywhere. I think that's just something that consultants talk about as something that gee, should be done, but I don't think it's happening in the real world. >> And I think you're probably correct about that, but I still think Neil raised a great point. And I would expect, and I think we all believe, that increasingly this is not going to come as a result of massive changes in adoption of new data science, like practices everywhere, but an embedding of these technologies. Machine learning algorithms, approaches to finding patterns within application data, in the applications themselves, which is exactly what Neil was saying. So I think that what we're going to see, and I wanted some validation from you guys about this, is increasingly tools being used by application providers to reveal data that's in applications, and not open source, independent tool chains that then ex-post-facto get applied to all kinds of different data sources in an attempt for the organization to pull the stuff out. David Floyer, what do you think? >> I agree with you. I think there's a great opportunity for the IT industry in this area to put together solutions which can go and fit in. On the basis of existing applications, there's a huge amount of potential, for example, of ERP systems to link in with IOT systems, for example, and provide a data across an organization. Rather than designing your own IOT system, I think people are going to buy-in pre-made ones. They're going to put the devices in, the data's going to come in, and the AI work will be done as part of that, as part of implementing that. 
And right across the board, there is tremendous opportunity to improve the applications that currently exist, or put in new versions of applications to address this question of data sharing across an organization. >> Yeah, I think that's going to be a big piece of what happens. And it also says, Neil Raden, something about whether or not enormous machine learning deities in the sky, some of which might start with the letter W, are going to be the best and only way to unlock this data. Is this going to be something that, we're suggesting now that it's something that's going to be increasingly-distributed closer to applications, less invasive and disruptive to people, more invasive and disruptive to the applications and the systems that are in place. And what do you think, Neil? Is that a better way of thinking about this? >> Yeah, let me give you an example. Data science the way it's been practiced is a mess. You have one person who's trying to find the data, trying to understand the data, complete your selection, designing experiments, doing runs, and so forth, coming up with formulas and then putting them in the cluster with funny names so they can try to remember which one was which. And now what you have are a number of software companies who've come up with brilliant ways of managing that process, of really helping the data science to create a work process in curating the data and so forth. So if you want to know something about this particular model, you don't have to go to the person and say, "Why did you do that model? "What exactly were you thinking?" That information would be available right there in the workbench. And I think that's a good model for, frankly, everything. >> So let's-- >> Development pipeline toolkits. That's a hot theme. >> Yeah, it's a very hot theme. But Jim, I don't think you think but I'm going to test it. I don't think we're going to see AI pipeline toolkits be immediately or be accessed by your average end user who's putting together a contract, so that that toolkit or so that data is automatically munched and ingested or ingested and munched by some AI pipeline. This is going to happen in an application. So the person's going to continue to do their work, and then the tooling will or will not grab that information and then combine it with other things through the application itself into the pipeline. We got that right? >> Yeah, but I think this is all being, everything you described is being embedded in applications that are making calls to backend cloud services that have themselves been built by data scientists and exposed through rest APIs. Steve, Peter, everything you're describing is coming to applications fairly rapidly. >> I think that's a good point, but I want to test it. I want to test that. So Ralph Finos, you've been paying a lot of attention during reporting season to what some of the big guys are saying on some of their calls and in some of their public statements. One company in particular, Oracle, has been finessing a transformation, shall we say? What are they saying about how this is going as we think about their customer base, the transformation of their customer base, and the degree to which applications are or are not playing a role in those transformations? >> Yeah, I think in their last earnings call a couple days ago that the point that they were making around the decline and the-- >> Again, this is Oracle. So in Oracle's last earnings call, yeah. >> Yeah, I'm sorry, yeah. 
And the decline and the revenue growth rate in the public cloud, the SAS end of their business, was a function really of a slowdown of the original acquisitions they made to kind of show up as being a transformative cloud vendor, and that are basically beginning to run out of gas. And I think if you're looking at marketing applications and sales-related applications and content-type of applications, those are kind of hitting a natural high of growth. And I think what they were saying is that from a migration perspective on ERP, that that's going to take a while to get done. They were saying something like 10 or 15% of their customer base had just begun doing some sort of migration. And that's a data around ERP and those kinds of applications. So it's a long slog ahead of them, but I'd rather be in their shoes, I think, for the long run than trying to kind of jazz up in the near-term some kind of pseudo-SAS cloud growth based on acquisition and low-lying fruit. >> Yeah, because they have a public cloud, right? I mean, at least they're in the game. >> Yeah, and they have to show they're in the game. >> Yeah, and specifically they're talking about their applications as clouds themselves. So they're not just saying here's a set of resources that you can build, too. They're saying here's a set of SAS-based applications that you can build around. >> Dave: Right. Go ahead, Ralph, sorry. >> Yeah, yeah. And I think the notion there is the migration to their ERP and their systems of record applications that they're saying, this is going to take a long time for people to do that migration because of complexity in process. >> So the last point, or Dave Vellante, did you have a point you want to make before I jump into a new thought here? >> I just compare and contrast IBM and Oracle. They have public clouds, they have SAS. Many others don't. I think this is a major different point of differentiation. >> Alright, so we've talked about whether or not this notion of data as a source of value's important, and we agree it is. We still don't know whether or not 80% is the right number, but it is some large number that's currently not being utilized and applied to work differently than the data currently is. And that likely creates some significant opportunities for transformation. Do we ultimately think that the incumbents, again, I mention the chicken and the egg problem. Do we ultimately think that the incumbents are... Is this going to be a test of whether or not the incumbents are going to be around in 10 years? The degree to which they enact the types of transformation we thought about. Dave Vellante, you said you were skeptical. You heard the story. We've had the conversation. Will incumbents who do this in fact be in a better position? >> Well, incumbents that do take action absolutely will be in a better position. But I think that's the real question. I personally believe that every industry is going to get disrupted by digital, and I think a lot of companies are not prepared for this and are going to be in deep trouble. >> Alright, so one more thought, because we're talking about industries overall. There's so many elements we haven't gotten to, but there's one absolute thing I want to talk about. Specifically the difference between B2C and B2B companies. Clearly the B2C industries have been disrupted, many of them pretty significantly, over the last few years. 
Not too long ago, I have multiple not-necessarily-good memories of running the aisles of Toys R Us sometime after 10 o'clock at night, right around December 24th. I can't do that anymore, and it's not because my kids are grown. Or I won't be able to do that soon anymore. So B2C industries seem to have been moved faster, because the digital natives are able to take advantage of the fact that a lot of these B2C industries did not have direct and strong relationships with those customers. I would posit that a lot of the B2B industries are really where the action's going to take. And the kind of way I would think about it, and David Floyer, I'll turn to you first. The way I would think about it is that in the B2C world, it's new markets and new ways of doing things, which is where the disruption's going to take place. So more of a substitution as opposed to a churn. But in the B2B markets, it's disrupting greater efficiencies, greater automation, greater engagement with existing customers, as well as finding new businesses and opportunities. What do you think about that? >> I think the B2B market is much more stable. Relationships, business relationships, very, very important. They take a long time to change. >> Peter: But much of that isn't digital. >> A lot of that is not digital. I agree with that. However, I think that the underlying change that's happening is one of automation. B2B are struggling to put into place automation with robots, automation everywhere. What you see, for example, in Amazon is a dedication to automation, to making things more efficient. And I think that's, to me, the biggest challenges, owning up to the fact that they have to change their automation, get themselves far more efficient. And if they don't succeed in doing that, then their ability to survive or their likelihood of being taken over with a reverse takeover becomes higher and higher and higher. So how do you go about that level, huge increase in automation that is needed to survive, I think is the biggest question for B2B players. >> And when we think about automation, David Floyer, we're not talking about the manufacturing arms or only talking about the manufacturing arms. We're talking about a lot of new software automation. Dave Vellante, Jim Kobielus, RPA is kind of a new thing. Dave, we saw some interesting things at Think. Bring us up to speed quickly on what the community at Think was talking about with RPA. >> Well, I tell you. There were a lot of people in financial services, which is IBM's stronghold. And they're using software robots to automate a lot of the backend stuff that humans were doing. That's a major, major use case. I would say 25 to 30% of the financial services organizations that I talked to had active RPA projects ongoing at the moment. I don't know. Jim, what are your thoughts? >> Yeah, I think backend automation is where B2B disruption is happening. As the organizations are able to automate more of their backend, digitize more of their backend functions and accelerate them and improve the throughput of transactions, are those that will clean up. I think for the B2C space, it's the frontend automation of the digitalization of the engagement channels. 
But RPA is essentially a key that's unlocking backend automation for everybody, because it allows more of the frontend business analysts and those who are not traditionally BPM, or business process re-engineering professionals, to begin to take standard administrative processes and begin to automate them from, as it were, the outside-in in a greater way. So I think RPA is a secret key for that. I think we'll see some of the more disruptive organizations, businesses, take RPA and use it to essentially just reverse-engineer, as it were, existing processes, but in an automated fashion, and drive that improvement but in the backend by AI. >> I just love the term software robots. I just think that that's, I think that so strongly evokes what's going to happen here. >> If I could add, I think there's a huge need to simplify that space. The other thing I witnessed at IBM Think is it's still pretty complicated. It's still a heavy lift. There's a lot of big services component to this, which is probably why IBM loves it. But there's a massive market, I think, to simplify the adoption or RPA. >> I completely agree. We have to open the aperture as well. Again, the goal is not to train people new things, new data science, new automation stuff, but to provide tools and increasingly embed those tools into stuff that people are already using, so that the disruption and the changes happen more as a consequence of continuing to do what the people do. Alright, so let's hit the action item we're on, guys. It's been a great conversation. Again, we haven't talked about GDPR. We haven't talked about a wide array of different factors that are going to be an issue. I think this is something we're going to talk about. But on the narrow issue of can the disruptors strike back? Neil Raden, let's start with you. Neil Raden, action item. >> I've been saying since 1975 that I should be hanging around with a better class of people, but I do spend a lot of time in the insurance industry. And I have been getting a consensus that in the next five to 10 years, there will no longer be underwriters for claims adjustments. That business is ready for massive, massive change. >> And those are disruptors, largely. Jim Kobielus, action item. >> Action item. In terms of business disruption, is just not to imagine that because you were the incumbent in the past era in some solution category that's declining, that that automatically guarantees you, that makes your data fit for seizing opportunities in the future. As we've learned from Blockbuster Video, the fact that they had all this customer data didn't give them any defenses against Netflix coming along and cleaning their coffin, putting them out of business. So the next generation of disruptor will not have any legacy data to work from, and they'll be able to work miracles because they made a strategic bet on some frontend digital channel that made all the difference. >> Ralph Finos, action item. >> Yeah, I think there's a notion here of siege mentality. And I think the incumbents are in the castle walls, and the disruptors are outside the castle walls. And sometimes the disruptors, you know, scale the walls. Sometimes they don't. But I think being inside the walls is a long-run tougher thing to be at. >> Dave Vellante, action item. >> I want to pick up on something Neil said. 
I think it's alluring for some of these industries, like insurance and financial services and healthcare, even parts of government, that really haven't been disrupted in a huge way yet to say, "Well, I'll wait and I'll see what happens." I think that's a huge mistake. I think you have to start immediately thinking about strategies, particularly around your data, as we talked about earlier. Maybe it's M&A, maybe it's joint ventures, maybe it's spinning out new companies. But the time is past where you should be acting. >> David Floyer, action item. >> I think that it's easier to focus on something that you can actually do. So my action item is that the focus of most B2B companies should be looking at all of their processes and incrementally automating them, taking out the people cost, taking out the cost, other costs, automating those processes as much as possible. That, in my opinion, is the most likely path to being in the position that you can continue to be competitive. Without that focus, it's likely that you're going to be disrupted. >> Alright. So the one thing I'll say about that, David, is when I think you say people cost I think you mean the administrative cost associated with people. >> And people doing things, automating jobs. >> Alright, so we have been talking here in today's Wikibon Action Item about the question, will the incumbents be able to strike back? The argument we heard at IBM Think this past week, and this is the third week of March, was that data is an asset that can be applied to significantly disrupt industries, and that incumbents have a lot of data that hasn't been bought into play in the disruptive flow. And IBM's argument is that we're going to see a lot of incumbents start putting their data into play, more of their data assets into play. And that's going to have a significant impact ultimately on industry structure, customer engagement, the nature of the products and services that are available over the course of the next decade. We agree. We generally agree. We might nitpick about whether it's 80%, whether it's 60%. But in general, the observation is an enormous amount of data that exists within a large company, that's related to how they conduct business, is siloed and locked away and is used once and is not made available, is dark and is not made available for derivative uses. That could, in fact, lead to significant consequential improvements in how a business's transaction costs are ultimately distributed. Automation's going to be a big deal. David Floyer's mentioned this in the past. I'm also of the opinion that there's going to be a lot of new opportunities for revenue enhancement and products. I think that's going to be as big, but it's very clear that to start it makes an enormous amount of sense to take a look at where your existing transaction costs are, where existing information asymmetries exist, and see what you can do to unlock that data, make it available to other processes, and start to do a better job of automating local and specific to those activities. And we generally ask our clients to take a look at what is your value proposition? What are the outcomes that are necessary for that value proposition? What activities are most important to creating those outcomes? And then find those that, by doing a better job of unlocking new data, you can better automate those activities. In general, our belief is that there's a significant difference between B2C and B2B businesses. Why? 
Because a lot of B2C businesses never really had that direct connection, and therefore never really had as much of the market and customer data about what was going on. A lot of point-of-sale data perhaps, but not a lot of other types of data. And then the disruptors stepped in, created direct relationships, gathered that data and were able to rapidly innovate products and services that served consumers differently. Where a lot of that new opportunity exists is in the B2B world. And here's where the real incumbents are going to start flexing their muscles over the course of the next decade, as they find those opportunities to engage differently, to automate existing practices and activities, change their cost model, and introduce new approaches to operating that are cloud-based, blockchain-based, data-based, and find new ways to utilize their people. If there's one big caution we have about this, it's this. Ultimately, the tooling is not broadly mature. The people necessary to build a lot of these tools are increasingly moving into the traditional disruptors, the legacy disruptors if you will: AWS, Netflix, Microsoft, companies more along those lines. That talent is still very dear in the industry, and it's going to require an enormous effort to bring in those new types of technologies that can in fact liberate some of this data. We look at things like RPA, robotic process automation. We look to the big application providers to increasingly imbue their products and services with some of these new technologies. And ultimately, paradoxically perhaps, we look for the incumbent disruptors to find ways to disrupt without disrupting their own employees and customers. That means embedding more of these new technologies, in an ethical way, directly into the systems and applications that serve people, so that people face minimal change and don't have to learn a lot of new tricks, because the systems themselves have gotten much more automated and are able to learn and evolve and adjust much more rapidly, in a way that still corresponds to the way people do work. So our action item. Any company in the B2B space that is waiting for data to emerge as an asset in their business, so that they can then do all the re-institutionalizing of work, the reorganizing of work, and the new types of investment, is not going to be in business in 10 years. Or it's going to have a very tough time with it. The big challenge for the board and the CIO, and it's not successfully been done in the past, at least not too often, is to start the process today, without necessarily having access to the data, of starting to think about how the work's going to change, and to think about the way their organization's going to have to be set up. This is not business process re-engineering. This is organizing around the future value of data, the options that data can create, and employing that approach to start doing local automation, serve customers, and change the way partnerships work, and ultimately to plan out, for an extended period of time, how their digital business is going to evolve. Once again, I want to thank David Floyer here in the studio with me. Neil Raden, Dave Vellante, Ralph Finos, Jim Kobielus remote. Thanks very much guys. For all of our clients, once again this has been a Wikibon Action Item. We'll talk to you again. Thanks for watching. (funky electronic music)
Wikibon Action Item Quick Take | Infinidat Event Coverage, March 2018
>> Hi I'm Peter Burris, and welcome to another Wikibon Action Item Quick Take. Dave Vellante, interesting community event next week. What's going on? >> So Infinidat is a company that was started by Moshe Yanai. He invented Symmetrix, probably the single most important storage product of all time. At any rate, he started this new company Infinidat. They tend to do things differently. They're a one-product company, but on Tuesday March 27th, they're extending their portfolio pretty dramatically. We're going to be covering that. We have a crowd chat going on. It's, again, Tuesday March 27th, 10:30 Eastern time, at Crowdchat.net/infinichat. Check it out. >> Great. That's been our Wikibon Action Item Quick Take. Talk to you soon. (upbeat music)
Wikibon Action Item Quick Take | The Role of Digital Disruption, March 2018
>> Hi this is Peter Burris with the Wikibon Action Item Quick Take. Wikibon is investing in a significant research project right now to take a look at the role that digital disruption is playing as it pertains to data protection. In fact, we think this is so important that we're actually starting to coin the term digital business protection. We're looking for practitioners, senior people who are concerned about how they're going to adopt the crucial technologies that are going to make it possible for digital businesses to protect themselves, both from a storage availability standpoint, backup and restore, security protection, and the role that AI is going to play in identifying patterns and doing a better job of staging data around the organization. We're looking at doing this important crowd chat in the first couple of weeks of April. So if you want to participate, and we want to get as many people as possible, it's a great way to get your ideas in about digital business protection and what kind of software is going to be required to do it. But very importantly, what kind of journey businesses are going to go on to move their organizations through this crucial new technology capability. @PLBurris, @ P L B U R R I S. Hit me up, let's start talking about digital business protection. Talk to you soon. (upbeat music)
Wikibon Action Item Quick Take | OCP Summit, March 2018
>> Hi, I'm Peter Burris and welcome to another Wikibon Action Item Quick Take. David Floyer, you were at OCP, the Open Compute Project summit, this past week and saw some really interesting things. A couple companies stood out for you, including- >> Liqid. They are a very, very interesting company. They went GA with a PCIe switch. That's the very high speed switch that all the systems work off. And what this does is essentially enable virtualization of CPU, storage, GPU, and system networks, anything that can connect to a PCIe bus, without the software overhead, without the VMware overhead, without the KVM overhead. This is very exciting: you can have bare-metal virtualization of the product and put together your own architecture of systems. And one particular example struck me as being very useful. If you're doing benchmarks and you're trying to do benchmarks with one GPU, two GPUs, or more storage, or whatever it is you want, this seems to be an ideal way of being able to do that very quickly. I think this is a very exciting product. It's a competitor to Intel's RSD, Rack Scale Design. That's obviously another thing they seem to have beaten Intel to: GA of something that works. There's a 30,000- >> 30,000 dollar. >> 30,000 dollar developer kit. I would recommend that as a best buy for enterprise and cloud provider data centers. >> Excellent. >> So, David Floyer at OCP this week, Liqid's new bus technology. Check it out. This has been a Wikibon Action Item Quick Take with Peter Burris and David Floyer. (techno music)
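To make the benchmarking use case David describes a bit more concrete, here is a rough sketch of how a benchmark sweep might drive a composable PCIe fabric. The ComposableFabric class and its methods are entirely hypothetical stand-ins, not Liqid's actual API, and the benchmark launch is just a placeholder.

```python
# Hypothetical sketch of sweeping a benchmark across composed GPU counts on a
# PCIe fabric. ComposableFabric is a made-up stand-in, not a vendor API.
import time


class ComposableFabric:
    """Toy stand-in for a PCIe-fabric composition controller."""

    def compose_node(self, cpus, gpus, nvme_drives):
        # A real fabric would bind physical devices to a bare-metal host over
        # the PCIe switch; here we simply describe the requested node.
        return {"cpus": cpus, "gpus": gpus, "nvme": nvme_drives}

    def release_node(self, node):
        # Return the devices to the free pool for the next configuration.
        pass


def run_benchmark(node):
    """Placeholder for launching the real benchmark on the composed node."""
    start = time.time()
    # e.g. launch a training job sized to node["gpus"] and wait for it here
    return time.time() - start


fabric = ComposableFabric()
for gpu_count in (1, 2, 4, 8):
    node = fabric.compose_node(cpus=16, gpus=gpu_count, nvme_drives=4)
    elapsed = run_benchmark(node)
    print(f"{gpu_count} GPU(s): {elapsed:.2f}s")
    fabric.release_node(node)
```

The point of the pattern is that each configuration is composed, measured, and released in minutes, without physically recabling servers.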
Wikibon Action Item Quick Take | David Floyer | OCP Summit, March 2018
>> Hi I'm Peter Burris, and welcome once again to another Wikibon Action Item Quick Take. David Floyer, you were at OCP, the Open Compute Project summit, this week, wandered the floor, talked to a lot of people, and one company in particular stood out: Nimbus Data. What'd you hear? >> Well, they had a very interesting announcement of their 100 terabyte, three-and-a-half-inch SSD, called the ExaDrive. That's a lot of storage in a very small space. These high-capacity SSDs, in my opinion, are going to be very important. They are denser, much less power, much less space, not as much performance, but they fit in very nicely between the lowest level of disk, hard disk storage, and the upper level. So they are going to be very useful in lower tier two applications. Very low friction for adoption there. They're going to be useful in tier three, but they're not a direct replacement for disk. They work in a slightly different way. So the friction is going to be a little bit higher there. And then in tier four, there's again a very interesting case for putting all of the metadata about large amounts of data on high-capacity SSD, to enable much faster access at a tier four level. So the action item for me is: have a look at my research, and have a look at the general pricing. It's about half of what a standard SSD is. >> Excellent. So this is once again a Wikibon Action Item Quick Take, David Floyer talking about Nimbus Data and their new high capacity, slightly lower performance, cost effective SSD. (upbeat music)
Wikibon Action Item Quick Take | Dave Vellante | Overall Digital Business Protection, March 2018
>> Hi, I'm Peter Burris and welcome to another Wikibon Action Item Quick Take. Dave Vellante, data protection, cloud orchestration, overall digital business protection, a pretty critical issue. We've got some interesting things going on. What's happening? >> As organizations go digital, I see the confluence of privacy, security, data protection, and business continuity coming together, and I really would like to talk to CSOs in our community about how they look at protecting their business in a digital world. So, @dvellante, love to just do a crowd chat on this. Love your opinions as to what you think is changing in digital business data protection. >> Great, so that's @dvellante. Reach out to Dave, let's get some folks together for this crucially important conversation. We'll be doing it in a couple of weeks. Thanks very much. This has been another Wikibon Action Item Quick Take. (upbeat music)
Wikibon Action Item Quick Take | Microsoft AI Platform for Windows, March 2018
>> Hi I'm Peter Burris and welcome to another Wikibon Action Item Quick Take. Jim Kobielus, Microsoft seems to be getting ready to do a makeover of application development. What's going on? >> Yeah, that's pretty exciting, Peter. So, last week, on the 7th, Microsoft announced at one of their Developer Days something called the AI Platform for Windows, and let me explain why that's important. Because that is going to bring machine learning down to desktop applications, anything that's written to run on Windows 10. And why that's important is that, starting with Visual Studio 15.7, there'll be an ability for developers who don't know anything about machine learning to, in a very visual way, create machine learning models that they can then have trained in the cloud, and then deployed to their Windows applications, whatever they might be, and to do real-time, local inferencing in those applications, without the need for round-tripping back to the cloud. So, what we're looking at now is they're going to bring this capability into the core of Visual Studio, and then they're going to be backwards compatible with previous versions of Visual Studio. What that means is, I can just imagine, over the next couple of years, most Windows applications will be heavily ML-enabled, so that more and more of the application logic at the desktop in Windows will be driven by ML. There'll be less need for apps as we've known them historically, pre-packaged bundles of code. It'll be dynamic logic. It'll be ML. So, I think this is really marking the beginning of the end of the app era at the device level. So, I'm really excited and we're looking forward to hearing more about Microsoft and where they're going with the AI Platform for Windows, but I think that's a landmark announcement we'll stay tuned for. >> Excellent. Jim Kobielus, thank you very much. This has been another Wikibon Action Item Quick Take. (soft digital music)
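The pattern Jim describes, train a model in the cloud and then run inference locally on the device, can be sketched in a few lines. The example below is illustrative only: it uses the cross-platform onnxruntime package and a placeholder model file rather than the Windows AI Platform APIs themselves, though ONNX is the model format Windows ML consumes.

```python
# A minimal sketch of local inferencing against a model that was trained and
# exported elsewhere. "classifier.onnx" and the (1, 4) input shape are
# placeholders, not part of any announced Microsoft API.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("classifier.onnx")
input_name = session.get_inputs()[0].name


def infer_locally(features: np.ndarray):
    """Run inference on-device, with no round trip to a cloud service."""
    outputs = session.run(None, {input_name: features.astype(np.float32)})
    return outputs[0]


print(infer_locally(np.random.rand(1, 4)))
```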
Wikibon Action Item | De-risking Digital Business | March 2018
>> Hi I'm Peter Burris. Welcome to another Wikibon Action Item. (upbeat music) We're once again broadcasting from theCUBE's beautiful Palo Alto, California studio. I'm joined here in the studio by George Gilbert and David Floyer. And then remotely, we have Jim Kobielus, David Vellante, Neil Raden and Ralph Finos. Hi guys. >> Hey. >> Hi. >> How you all doing? >> This is a great, great group of people to talk about the topic we're going to talk about, guys. We're going to talk about the notion of de-risking digital business. Now, the reason why this becomes interesting is, the Wikibon perspective for quite some time has been that the difference between business and digital business is the role that data assets play in a digital business. Now, think about what that means. Every business institutionalizes its work around what it regards as its most important assets. A bottling company, for example, organizes around the bottling plant. A financial services company organizes around the regulatory impacts or limitations on how it shares information and what is regarded as fair use of data and other resources and assets. The same thing exists in a digital business. There's a difference between, say, Sears and Walmart. Walmart made use of data differently than Sears, and the specific assets that were employed had a significant impact on how the retail business was structured. Along comes Amazon, which is even deeper in the use of data as a basis for how it conducts its business, and Amazon is institutionalizing work in quite different ways and has been incredibly successful. We could go on and on and on with a number of different examples of this, and we'll get into that. But what it means ultimately is that the tie between data and what is regarded as valuable in the business is becoming increasingly clear, even if it's not perfect. And so traditional approaches to de-risking data, through backup and restore, now need to be re-thought, so that it's not just de-risking the data, it's de-risking the data assets. And, since those data assets are so central to the business operations of many of these digital businesses, it's really de-risking the whole business. So, David Vellante, give us a starting point. How should folks think about this different approach to envisioning business? And digital business, and the notion of risk? >> Okay thanks Peter, I mean I agree with a lot of what you just said and I want to pick up on that. I see the future of digital business as really built around data, sort of agreeing with you and building on what you just said. Really where organizations are putting data at the core, and increasingly I believe that organizations that have traditionally relied on human expertise as the primary differentiator will be disrupted by companies where data is the fundamental value driver, and I think there are some examples of that and I'm sure we'll talk about it. And in this new world, humans have expertise that leverages the organization's data model and creates value from that data with augmented machine intelligence. I'm not crazy about the term artificial intelligence. And you hear a lot about data-driven companies, and I think such companies are going to have a technology foundation that is increasingly described as autonomous, aware, anticipatory, and importantly in the context of today's discussion, self-healing. So able to withstand failures and recover very quickly.
So de-risking a digital business is going to require new ways of thinking about data protection and security and privacy. Specifically as it relates to data protection, I think it's going to be a fundamental component of the so-called data-driven company's technology fabric. This can be designed into applications, into data stores, into file systems, into middleware, and into infrastructure, as code. And many technology companies are going to try to attack this problem from a lot of different angles, trying to infuse machine intelligence into the hardware, software and automated processes. And the premise is that many companies will architect their technology foundations, not as a set of remote cloud services that they're calling, but rather as a ubiquitous set of functional capabilities that largely mimic a range of human activities, including storing, backing up, and virtually instantaneous recovery from failure. >> So let me build on that. So what you're kind of saying, if I can summarize, and we'll get into whether or not it's human expertise or some other approach or notion of business. But you're saying that increasingly patterns in the data are going to have absolutely consequential impacts on how a business ultimately behaves. We got that right? >> Yeah absolutely. And how you construct that data model, and provide access to the data model, is going to be a fundamental determinant of success. >> Neil Raden, does that mean that people are no longer important? >> Well no, no I wouldn't say that at all. I was talking with the head of a medical school a couple of weeks ago, and he said something that really resonated. He said that there are as many doctors who graduated at the bottom of their class as the top of their class. And I think that's true of organizations too. You know what, 20 years ago I had the privilege of interviewing Peter Drucker for an hour, and he foresaw this. 20 years ago, he said that people who run companies have traditionally had IT departments that provided operational data, but they needed to start to figure out how to get value from that data, and not only get value from that data but get value from data outside the company, not just internal data. So he kind of saw this big data thing happening 20 years ago. Unfortunately, he had a prejudice for senior executives. You know, he never really thought about any other people in an organization except the highest people. And I think what we're talking about here is really the whole organization. I think that, I have some concerns about the ability of organizations to really implement this without a lot of fumbles. I mean it's fine to talk about the five digital giants, but there's a lot of companies out there where, you know, the bar isn't really that high for them to stay in business. And they just seem to get along. And I think if we're going to de-risk, we really need to help companies understand the whole process of transformation, not just the technology. >> Well, take us through it. What is this process of transformation? That includes the role of technology but is bigger than the role of technology. >> Well, it's like anything else, right. There has to be communication, there has to be some element of control, there has to be a lot of flexibility, and most importantly I think there has to be acceptability, by the people who are going to be affected by it, that this is the right thing to do.
And I would say you start with assumptions, I call it assumption analysis, in other words let's all get together and figure out what our assumptions are, and see if we can't line em up. Typically IT is not good at this. So I think it's going to require the help of a lot of practitioners who can guide them. >> So Dave Vellante, reconcile one point that you made I want to come back to this notion of how we're moving from businesses built on expertise and people to businesses built on expertise resident as patterns in the data, or data models. Why is it that the most valuable companies in the world seem to be the ones that have the most real hardcore data scientists. Isn't that expertise and people? >> Yeah it is, and I think it's worth pointing out. Look, the stock market is volatile, but right now the top-five companies: Apple, Amazon, Google, Facebook and Microsoft, in terms of market cap, account for about $3.5 trillion and there's a big distance between them, and they've clearly surpassed the big banks and the oil companies. Now again, that could change, but I believe that it's because they are data-driven. So called data-driven. Does that mean they don't need humans? No, but human expertise surrounds the data as opposed to most companies, human expertise is at the center and the data lives in silos and I think it's very hard to protect data, and leverage data, that lives in silos. >> Yes, so here's where I'll take exception to that, Dave. And I want to get everybody to build on top of this just very quickly. I think that human expertise has surrounded, in other businesses, the buildings. Or, the bottling plant. Or, the wealth management. Or, the platoon. So I think that the organization of assets has always been the determining factor of how a business behaves and we institutionalized work, in other words where we put people, based on the business' understanding of assets. Do you disagree with that? Is that, are we wrong in that regard? I think data scientists are an example of reinstitutionalizing work around a very core asset in this case, data. >> Yeah, you're saying that the most valuable asset is shifting from some of those physical assets, the bottling plant et cetera, to data. >> Yeah we are, we are. Absolutely. Alright, David Foyer. >> Neil: I'd like to come in. >> Panelist: I agree with that too. >> Okay, go ahead Neil. >> I'd like to give an example from the news. Cigna's acquisition of Express Scripts for $67 billion. Who the hell is Cigna, right? Connecticut General is just a sleepy life insurance company and INA was a second-tier property and casualty company. They merged a long time ago, they got into health insurance and suddenly, who's Express Scripts? I mean that's a company that nobody ever even heard of. They're a pharmacy benefit manager, what is that? They're an information management company, period. That's all they do. >> David Foyer, what does this mean from a technology standpoint? >> So I wanted to to emphasize one thing that evolution has always taught us. That you have to be able to come from where you are. You have to be able to evolve from where you are and take the assets that you have. And the assets that people have are their current systems of records, other things like that. They must be able to evolve into the future to better utilize what those systems are. And the other thing I would like to say-- >> Let me give you an example just to interrupt you, because this is a very important point. 
One of the primary reasons why the telecommunications companies, whom so many people believed, analysts believed, had this fundamental advantage, because so much information's flowing through them is when you're writing assets off for 30 years, that kind of locks you into an operational mode, doesn't it? >> Exactly. And the other thing I want to emphasize is that the most important thing is sources of data not the data itself. So for example, real-time data is very very important. So what is your source of your real-time data? If you've given that away to Google or your IOT vendor you have made a fundamental strategic mistake. So understanding the sources of data, making sure that you have access to that data, is going to enable you to be able to build the sort of processes and data digitalization. >> So let's turn that concept into kind of a Geoffrey Moore kind of strategy bromide. At the end of the day you look at your value proposition and then what activities are central to that value proposition and what data is thrown off by those activities and what data's required by those activities. >> Right, both internal-- >> We got that right? >> Yeah. Both internal and external data. What are those sources that you require? Yes, that's exactly right. And then you need to put together a plan which takes you from where you are, as the sources of data and then focuses on how you can use that data to either improve revenue or to reduce costs, or a combination of those two things, as a series of specific exercises. And in particular, using that data to automate in real-time as much as possible. That to me is the fundamental requirement to actually be able to do this and make money from it. If you look at every example, it's all real-time. It's real-time bidding at Google, it's real-time allocation of resources by Uber. That is where people need to focus on. So it's those steps, practical steps, that organizations need to take that I think we should be giving a lot of focus on. >> You mention Uber. David Vellante, we're just not talking about the, once again, talking about the Uberization of things, are we? Or is that what we mean here? So, what we'll do is we'll turn the conversation very quickly over to you George. And there are existing today a number of different domains where we're starting to see a new emphasis on how we start pricing some of this risk. Because when we think about de-risking as it relates to data give us an example of one. >> Well we were talking earlier, in financial services risk itself is priced just the way time is priced in terms of what premium you'll pay in terms of interest rates. But there's also something that's softer that's come into much more widely-held consciousness recently which is reputational risk. Which is different from operational risk. Reputational risk is about, are you a trusted steward for data? Some of that could be personal information and a use case that's very prominent now with the European GDPR regulation is, you know, if I ask you as a consumer or an individual to erase my data, can you say with extreme confidence that you have? That's just one example. >> Well I'll give you a specific number on that. We've mentioned it here on Action Item before. I had a conversation with a Chief Privacy Officer a few months ago who told me that they had priced out what the fines to Equifax would have been had the problem occurred after GDPR fines were enacted. It was $160 billion, was the estimate. 
There's not a lot of companies on the planet that could deal with a $160 billion liability. Like that. >> Okay, so we have a price now that might have been kind of, sort of mushy before. And the notion of trust hasn't really changed over time; what's changed is the technical implementations that support it. And in the old world with systems of record, we basically collected as much data as we could from our operational applications, and put it in the data warehouse and its data mart satellites. And we tried to govern it within that perimeter. But now we know that data basically originates and goes just about anywhere. There's no well-defined perimeter. It's much more porous, far more distributed. You might think of it as a distributed data fabric, and the only way you can be a trusted steward of that is if, across the silos, without trying to centralize all the data that's in silos or across them, you can enforce who's allowed to access it, what they're allowed to do, and audit who's done what to what type of data, when and where. And then there's a variety of approaches. Just to pick two: one is discovery-oriented, to figure out what's going on with the data estate using machine learning; Alation is an example. And then there's another, which is where you try and get everyone to plug into what's essentially a new system catalog, something that acts like the fabric for your data fabric. >> That's an example of one of the ways of coming at this. But when we think, Dave Vellante, coming back to you for a second. When we think about the conversation, there's been a lot of presumption or a lot of bromides. Analysts like to talk about, don't get Uberized. We're not just talking about getting Uberized. We're talking about something a little bit different, aren't we? >> Well yeah, absolutely. I think Uber's going to get Uberized, personally. But I think there's a lot of evidence, I mentioned the big five, but if you look at Spotify, Waze, Airbnb, yes Uber, yes Twitter, Netflix, Bitcoin is an example, 23andMe. These are all examples of companies that, I'll go back to what I said before, are putting data at the core and building human expertise around that core to leverage that expertise. And I think it's easy for some companies to sit back and say, "Well I'm going to wait and see what happens." But to me anyway, there's a big gap between kind of the haves and the have-nots. And I think that gap is around applying machine intelligence to data and applying cloud economics: zero marginal cost economics, the API economy, an always-on sort of mentality, et cetera, et cetera. And that's what the economy, in my view anyway, is going to look like in the future. >> So let me put out a challenge. Jim, I'm going to come to you in a second, very quickly, on some of the things that start looking like data assets. But today, when we talk about data protection, we're talking about simply a whole bunch of applications and a whole bunch of devices, just spinning that data off so we have it at a third site, and then, if there's a catastrophe, you know, large or small, being able to restore it, often in hours or days. So we're talking about an improvement on RPO and RTO. But when we talk about data assets, and I'm going to come to you in a second with that, David Floyer, but when we talk about data assets, we're talking about not only the data, the bits.
We're talking about the relationships and the organization, and the metadata, as being a key element of that. So David, I'm sorry, Jim Kobielus, just really quickly, thirty seconds. Models, what do they look like? What does the new nature of some of these assets look like? >> Well, the new nature of these assets is the machine learning models that are driving so many business processes right now. And so really the core assets there are the data, obviously, from which they are developed and on which they are trained. But also very much the knowledge of the data scientists and engineers who build and tune this stuff. And so really, what you need to do is, you need to protect that knowledge and grow that knowledge base of data science professionals in your organization, in a way that builds on it. And hopefully you keep the smartest people in house. And they can encode more of their knowledge in automated programs to manage the entire pipeline of development. >> We're not talking about files. We're not even talking about databases, are we David Floyer? We're talking about something different: algorithms and models. Is today's technology really set up to do a good job of protecting the full organization of those data assets? >> I would say that they're not even being thought about yet. And going back to what Jim was saying, those data scientists are the only people who understand that, in the same way as, in the year 2000, the COBOL programmers were the only people who understood what was going on inside those applications. And we as an industry have to allow organizations to be able to protect the assets inside their applications, and use AI, if you like, to actually understand what is in those applications and how they are working. And I think an incredibly important part of de-risking is ensuring that you're not dependent on a few experts who could leave at any moment, in the same way as the COBOL programmers could have left. >> But it's not just the data, and it's not just the metadata, it really is the data structure. >> It is the model. Just the whole way that this has been put together, and the reason why. And the ability to continue to upgrade that and change that over time. So those assets are incredibly important, but at the moment there is no way that you can, there isn't technology available for you to actually protect those assets. >> So if I combine what you just said with what Neil Raden was talking about, David Vellante's put forward a good vision of what's required. Neil Raden's made the observation that this is going to be much more than technology. There's a lot of change, not change management at a low level inside of IT, but business change, and the technology companies also have to step up and be able to support this. We're seeing this, we're seeing a number of different vendor types start to enter into this space. Certainly storage guys, Dylon Sears, talking about doing a better job of data protection; we're seeing middleware companies, TIBCO and DISCO, talk about doing this differently. We're seeing file systems, Scality, WekaIO, talk about doing this differently. Backup and restore companies, Veeam, Veritas. I mean, everybody's looking at this and they're all coming at it. Just really quickly David, where's the inside track at this point? >> For me it is so much whitespace as to be unbelievable. >> So nobody has an inside track yet. >> Nobody has an inside track. Just to start with a few things. It's clear that you should keep data where it is.
The cost of moving data around an organization from inside to out, is crazy. >> So companies that keep data in place, or technologies to keep data in place, are going to have an advantage. >> Much, much, much greater advantage. Sure, there must be backups somewhere. But you need to keep the working copies of data where they are because it's the real-time access, usually that's important. So if it originates in the cloud, keep it in the cloud. If it originates in a data-provider, on another cloud, that's where you should keep it. If it originates on your premise, keep it where it originated. >> Unless you need to combine it. But that's a new origination point. >> Then you're taking subsets of that data and then combining that up for itself. So that would be my first point. So organizations are going to need to put together what George was talking about, this metadata of all the data, how it interconnects, how it's being used. The flow of data through the organization, it's amazing to me that when you go to an IT shop they cannot define for you how the data flows through that data center or that organization. That's the requirement that you have to have and AI is going to be part of that solution, of looking at all of the applications and the data and telling you where it's going and how it's working together. >> So the second thing would be companies that are able to build or conceive of networks as data. Will also have an advantage. And I think I'd add a third one. Companies that demonstrate perennial observations, a real understanding of the unbelievable change that's required you can't just say, oh Facebook wants this therefore everybody's going to want it. There's going to be a lot of push marketing that goes on at the technology side. Alright so let's get to some Action Items. David Vellante, I'll start with you. Action Item. >> Well the future's going to be one where systems see, they talk, they sense, they recognize, they control, they optimize. It may be tempting to say, you know what I'm going to wait, I'm going to sit back and wait to figure out how I'm going to close that machine intelligence gap. I think that's a mistake. I think you have to start now, and you have to start with your data model. >> George Gilbert, Action Item. >> I think you have to keep in mind the guardrails related to governance, and trust, when you're building applications on the new data fabric. And you can take the approach of a platform-oriented one where you're plugging into an API, like Apache Atlas, that Hortonworks is driving, or a discovery-oriented one as David was talking about which would be something like Alation, using machine learning. But if, let's say the use case starts out as an IOT, edge analytics and cloud inferencing, that data science pipeline itself has to now be part of this fabric. Including the output of the design time. Meaning the models themselves, so they can be managed. >> Excellent. Jim Kobielus, you've been pretty quiet but I know you've got a lot to offer. Action Item, Jim. >> I'll be very brief. What you need to do is protect your data science knowledge base. That's the way to de-risk this entire process. And that involves more than just a data catalog. You need a data science expertise registry within your distributed value chain. And you need to manage that as a very human asset that needs to grow. That is your number one asset going forward. >> Ralph Finos, you've also been pretty quiet. Action Item, Ralph. 
>> Yeah, I think you've got to be careful about what you're trying to get done. Whether it's, it depends on your industry, whether it's finance or whether it's the entertainment business, there are different requirements about data in those different environments. And you need to be cautious about that and you need leadership on the executive business side of things. The last thing in the world you want to do is depend on data scientists to figure this stuff out. >> And I'll give you the second to last answer or Action Item. Neil Raden, Action Item. >> I think there's been a lot of progress lately in creating tools for data scientists to be more efficient and they need to be, because the big digital giants are draining them from other companies. So that's very encouraging. But in general I think becoming a data-driven, a digital transformation company for most companies, is a big job and I think they need to it in piece parts because if they try to do it all at once they're going to be in trouble. >> Alright, so that's great conversation guys. Oh, David Floyer, Action Item. David's looking at me saying, ah what about me? David Floyer, Action Item. >> (laughing) So my Action Item comes from an Irish proverb. Which if you ask for directions they will always answer you, "I wouldn't start from here." So the Action Item that I have is, if somebody is coming in saying you have to re-do all of your applications and re-write them from scratch, and start in a completely different direction, that is going to be a 20-year job and you're not going to ever get it done. So you have to start from what you have. The digital assets that you have, and you have to focus on improving those with additional applications, additional data using that as the foundation for how you build that business with a clear long-term view. And if you look at some of the examples that were given early, particularly in the insurance industries, that's what they did. >> Thank you very much guys. So, let's do an overall Action Item. We've been talking today about the challenges of de-risking digital business which ties directly to the overall understanding of the role of data assets play in businesses and the technology's ability to move from just protecting data, restoring data, to actually restoring the relationships in the data, the structures of the data and very importantly the models that are resident in the data. This is going to be a significant journey. There's clear evidence that this is driving a new valuation within the business. Folks talk about data as the new oil. We don't necessarily see things that way because data, quite frankly, is a very very different kind of asset. The cost could be shared because it doesn't suffer the same limits on scarcity. So as a consequence, what has to happen is, you have to start with where you are. What is your current value proposition? And what data do you have in support of that value proposition? And then whiteboard it, clean slate it and say, what data would we like to have in support of the activities that we perform? Figure out what those gaps are. Find ways to get access to that data through piecemeal, piece-part investments. That provide a roadmap of priorities looking forward. Out of that will come a better understanding of the fundamental data assets that are being created. New models of how you engage customers. New models of how operations works in the shop floor. New models of how financial services are being employed and utilized. 
And use that as a basis for then starting to put forward plans for bringing technologies in, that are capable of not just supporting the data and protecting the data but protecting the overall organization of data in the form of these models, in the form of these relationships, so that the business can, as it creates these, as it throws off these new assets, treat them as the special resource that the business requires. Once that is in place, we'll start seeing businesses more successfully reorganize, reinstitutionalize the work around data, and it won't just be the big technology companies who have, who people call digital native, that are well down this path. I want to thank George Gilbert, David Floyer here in the studio with me. David Vellante, Ralph Finos, Neil Raden and Jim Kobelius on the phone. Thanks very much guys. Great conversation. And that's been another Wikibon Action Item. (upbeat music)
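One small illustration of the data-flow visibility discussed above, knowing where data originates and every downstream system that holds copies derived from it, is a catalog of flows between systems. This is a minimal sketch in plain Python; the system names are invented, and a production catalog would obviously carry far more metadata.

```python
# A minimal data-flow catalog: record which systems feed which, then answer
# "where did data from this source end up?" System names are made up.
from collections import defaultdict

flows = defaultdict(set)  # source system -> systems it feeds directly


def record_flow(source, destination):
    flows[source].add(destination)


def downstream_of(system):
    """Every system holding data derived from `system`, transitively."""
    seen, stack = set(), [system]
    while stack:
        for nxt in flows[stack.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen


record_flow("crm", "warehouse")
record_flow("warehouse", "marketing_mart")
record_flow("crm", "support_portal")
print(downstream_of("crm"))  # {'warehouse', 'marketing_mart', 'support_portal'}
```

The same traversal that answers an audit question also tells you which systems a GDPR-style erasure request has to reach.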
Action Item Quick Take | Jim Kobielus - Mar 2018
(Upbeat music) (Coughs) >> Hi, I'm Peter Burris with another Wikibon Action Item Quick Take. Jim Kobielus, IBM's up to some good with new tooling for managing data. What's going on? >> Yes Peter, it's not brand new tooling, but it's important because it's actually a foreshadowing of what's going to be universal. I think it's a capability for programming the UniGrid, as we've been discussing. Essentially, this week at the IBM Signature event, Sam Whitestone of IBM discussed with Dave Vellante a product they have called Queryplex, which has been on the market for a year, maybe more. Essentially it's a data virtualization environment for distributed query processing in a mesh fabric. And what's important to understand about Queryplex, in a UniGrid context, is that it enables late-binding, distributed computation to find the lowest latency path across fairly complex edge clouds, to speed up queries no matter where the data may reside, in a fairly real-time, dynamic fashion. So I think the important things to know about Queryplex are, first, that it prioritizes connections with the lowest latency, based on ongoing computations that are performed, and that it is able to distribute this computation to find the lowest-latency path across the network, to prevent the computation controller from being a bottleneck. I think that's a fundamental architectural capability we're going to see more of with the advent and growth of the UniGrid as a broad concept for building up a distributed cloud computing environment. >> And very importantly, there are still a lot of applications that run the businesses on top of IBM machines. Jim Kobielus, thanks very much for talking about IBM Queryplex and some of the next steps coming. This is Peter Burris with another Wikibon Action Item Quick Take. (upbeat music)
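As an illustration of the routing idea Jim describes, prioritizing whichever connection currently shows the lowest latency, here is a toy sketch in Python. It is not Queryplex's interface or algorithm, just the general pattern of continuously probing sources and sending the next query to the fastest one.

```python
# Toy latency-aware routing: keep a smoothed latency estimate per source and
# route each query to the current minimum. Not IBM Queryplex's actual API.
import random
import time


class Source:
    def __init__(self, name):
        self.name = name
        self.latency = None  # smoothed latency estimate in seconds

    def probe(self):
        start = time.time()
        time.sleep(random.uniform(0.001, 0.01))  # stand-in for a real ping
        sample = time.time() - start
        self.latency = sample if self.latency is None else 0.8 * self.latency + 0.2 * sample


def route(sources):
    """Send the next query to whichever source currently looks fastest."""
    for s in sources:
        s.probe()
    return min(sources, key=lambda s: s.latency)


sources = [Source("on-prem"), Source("cloud-east"), Source("edge-gateway")]
print("route query to:", route(sources).name)
```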
Action Item Quick Take | Neil Raden - Mar 2018
(upbeat music) >> Hi, I'm Peter Burris with another Wikibon Action Item Quick Take. Neil Raden. What's going on with Tableau? >> Well, you know, Tableau software has been a huge success story over the years. Ten years or more. But in the last couple of years they've really exploded. What they did is they allowed end users to take data, analytical data, build some models, and generate all sorts of beautiful visualizations from it. Problem was, the people who use Tableau had no tools to work with to prep the data, and that was causing the problem. They work with partners and so forth. But that's all changing. Last year they announced Project Maestro, which is their own data prep product. It's built on an in-memory, column-oriented database called Hyper that they bought, and my information, coming from developers who are using the beta, is that Maestro is going to be a huge success for them. >> Excellent. >> And one other thing, I think it points out that a pure-play visualization vendor can't survive. They have to expand horizontally. And it remains to be seen what Tableau will do after this. This is clearly not its last act. >> Great. Neil Raden talking about Tableau and Project Maestro and expectations for it. This is Peter Burris. Thanks again for watching another Wikibon Action Item Quick Take. (upbeat music)
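As a rough illustration of the gap Neil describes, the sketch below shows, in generic Python/pandas, the kind of cleansing and reshaping an analyst would otherwise have to script by hand before a dataset is ready to visualize; a visual prep tool like Maestro aims to replace exactly this sort of work. This is not Maestro or Hyper code, and the table, column names, and values are made up for the example.

```python
import pandas as pd

# Tiny made-up extract standing in for a real export.
raw = pd.DataFrame({
    "Order_ID":   [1, 2, 3, None, 5],
    "Region":     ["West", "West", "East", "East", "East"],
    "Order_Date": ["2018-01-03", "2018-01-17", "2018-02-02", "2018-02-09", "bad-date"],
    "Amount":     [120.0, -40.0, 75.0, 50.0, 60.0],
})

prepped = (
    raw
    .rename(columns=str.lower)                                    # normalize header case
    .dropna(subset=["order_id"])                                  # drop rows missing a key
    .assign(order_date=lambda d: pd.to_datetime(d["order_date"], errors="coerce"))
    .dropna(subset=["order_date"])                                # drop unparseable dates
    .query("amount > 0")                                          # remove refunds and bad rows
    .groupby(["region", pd.Grouper(key="order_date", freq="M")])["amount"]
    .sum()
    .reset_index()                                                # monthly totals per region
)

print(prepped)  # the tidy result is what gets handed to the visualization tool
```

The point is not these particular steps but that every visualization project carries a prep stage like this, and until a tool like Maestro it had to happen somewhere outside Tableau.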
SUMMARY :
What's going on with Tableau? and that was causing the problem. They have to expand horizontally. and Project Maestro and expectations for it.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Peter Burris | PERSON | 0.99+ |
Neil Raden | PERSON | 0.99+ |
Last year | DATE | 0.99+ |
Ten years | QUANTITY | 0.99+ |
Mar 2018 | DATE | 0.99+ |
Maestro | ORGANIZATION | 0.98+ |
one | QUANTITY | 0.94+ |
last couple of years | DATE | 0.88+ |
Tableau | TITLE | 0.81+ |
Wikibon | ORGANIZATION | 0.77+ |
Hyper | TITLE | 0.75+ |
Project Maestro | TITLE | 0.69+ |
Tableau | ORGANIZATION | 0.62+ |
Action Item Quick Take | Jim Kobielus - Feb 2018
(upbeat music) >> Hi, this is Peter Burris with another Wikibon Action Item Quick Take. Jim Kobielus, where are we with the next step in AI and deep learning? >> Yeah, there's a big to-do going on, a big buzz around reinforcement learning. I think that will become an ever higher priority for working data scientists building applications for robotics and analytics. And so what I'm encouraged by is the fact that there are open frameworks coming into being, development frameworks for reinforcement learning, that are integrated to some degree with the investments companies are making in deep learning. In particular, a fair number of frameworks now support TensorFlow or integration with TensorFlow. So I advise our listeners, especially the developers, to look at and evaluate frameworks like TensorFlow Agents, Ray RLlib, ML-Agents, and Coach. These are not well-known yet, but I think these will become, at least one of these will become, a standard component of the data scientist workbench for building the next generation of robotics and other applications that are used for adaptive control. >> Excellent, Jim. This is Peter Burris with another Wikibon Action Item Quick Take. (upbeat music)
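For context on what these frameworks wrap up, here is a minimal tabular Q-learning loop in plain Python. It is deliberately framework-free; the toy corridor environment and every name in it are invented for illustration, and libraries of the kind Jim mentions add deep-network function approximation, distributed rollouts, and TensorFlow integration on top of exactly this agent-environment loop.

```python
import random

# Toy corridor: states 0..4, reach state 4 to earn reward 1 and end the episode.
N_STATES, ACTIONS = 5, (-1, +1)   # actions: move left or right

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1   # next state, reward, done

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2        # learning rate, discount, exploration

for episode in range(200):
    s, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit the best known action, sometimes explore
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r, done = step(s, a)
        # Q-learning update: nudge Q(s,a) toward reward plus discounted best next value
        best_next = max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# After training, the learned policy is simply the best action in each state.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})
```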
SUMMARY :
Hi, this is Peter Burris with another Wikibon with the investments companies are making This is Peter Burris with another
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Jim Kobielus | PERSON | 0.99+ |
Peter Burris | PERSON | 0.99+ |
Feb 2018 | DATE | 0.99+ |
Jim | PERSON | 0.99+ |
LLB Global School | ORGANIZATION | 0.99+ |
TensorFlow | TITLE | 0.98+ |
one | QUANTITY | 0.83+ |
Wikibon | ORGANIZATION | 0.8+ |
Machine | ORGANIZATION | 0.63+ |
Action Item Quick Take | Neil Raden - Feb 2018
(upbeat electronic music) >> Hi, I'm Peter Burris with another Wikibon Action Item Quick Take. Neil Raden, you've been out visiting clients this week. What's the buzz about data and big data and related stuff? >> Well, the first thing about big data is the product development cadence is so fast now that organizations can't absorb it. Every week something new comes out, and their decision process is longer than that. Not one person decides to bring in Plume. It's a committee decision. So that's part of the problem. The other part of the problem is they still run on their legacy systems and are having a hard time figuring out how to make the two work together. The third thing, though, is I want to disagree with something Dave Vellante said about the insurance industry. Insurance tech is exploding. That industry is in the midst of a huge digital transformation, and perhaps Dave and I could work together on that and do some research and show some of the very, very interesting things that are happening there. But oh, GDPR. I'm sorry, GDPR is like a runaway train. It reminds me of Y2K without the lead time. Everybody is freaked out about it because it infests every system they have, and they don't even know where to start. So we'll need to keep an eye on that. >> Alright, this is Peter Burris, Neil Raden, another Wikibon Action Item Quick Take. (upbeat electronic music)
SUMMARY :
What's the buzz about data and big data and related stuff? The other part of the problem is they still run (upbeat electronic music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave | PERSON | 0.99+ |
Neil Raden | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Peter Burress | PERSON | 0.99+ |
Feb 2018 | DATE | 0.99+ |
two | QUANTITY | 0.99+ |
third thing | QUANTITY | 0.99+ |
GDPR | TITLE | 0.99+ |
this week | DATE | 0.97+ |
one person | QUANTITY | 0.96+ |
first thing | QUANTITY | 0.93+ |
Wikibon | ORGANIZATION | 0.92+ |
Y2K | ORGANIZATION | 0.91+ |
Plume | ORGANIZATION | 0.33+ |
Action Item | Big Data SV Preview Show - Feb 2018
>> Hi, I'm Peter Burris and once again, welcome to a Wikibon Action Item. (lively electronic music) We are again broadcasting from the beautiful theCUBE Studios here in Palo Alto, California, and we're joined today by a relatively larger group. So, let me take everybody through who's here in the studio with us. David Floyer, George Gilbert, once again, we've been joined by John Furrier, who's one of the key CUBE hosts, and on the remote system is Jim Kobielus, Neil Raden, and another CUBE host, Dave Vellante. Hey guys. >> Hi there. >> Good to be here. >> Hey. >> So, one of the things we're, one of the reasons why we have a little bit larger group here is because we're going to be talking about a community gathering that's taking place in the big data universe in a couple of weeks. Large numbers of big data professionals are going to be descending upon Strata for the purposes of better understanding what's going on within the big data universe. Now we have run a CUBE show next to that event, in which we get the best thought leaders that are possible at Strata, bring them in onto theCUBE, and really to help separate the signal from the noise that Strata has historically represented. We want to use this show to preview what we think that signal's going to be, so that we can help the community better understand what to look for, where to go, what kinds of things to be talking about with each other so that it can get more out of that important event. Now, George, with that in mind, what are kind of the top level thing? If it was one thing that we'd identify as something that was different two years ago or a year ago, and it's going to be different from this show, what would we say it would be? >> Well, I think the big realization that's here is that we're starting with the end in mind. We know the modern operational analytic applications that we want to build, that anticipate or influence a user interaction or inform or automate a business transaction. And for several years, we were experimenting with big data infrastructure, but that was, it wasn't solution-centric, it was technology-centric. And we kind of realized that the do it yourself, assemble your own kit, opensource big data infrastructure created too big a burden on admins. Now we're at the point where we're beginning to see a more converged set of offerings take place. And by converged, I mean an end to end analytic pipeline that is uniform for developers, uniform for admins, and because it's pre-integrated, is lower latency. It helps you put more data through one single analytic latency budget. That's what we think people should look for. Right now, though, the hottest new tech-centric activity is around Machine Learning, and I think the big thing we have to do is recognize that we're sort of at the same maturity level as we were with big data several years ago. And people should, if they're going to work with it, start with the knowledge, for the most part, that they're going to be experimenting, 'cause the tooling isn't quite mature enough, we don't have enough data scientists for people to be building all these pipelines bespoke. And the third-party applications, we don't have a high volume of them where this is embedded yet. 
>> So if I can kind of summarize what you're saying, we're seeing bifurcation occur within the ecosystem associated with big data that's driving toward simplification on the infrastructure side, which increasingly is being associated with the term big data, and new technologies that can apply that infrastructure and that data to new applications, including things like AI, ML, DL, where we think about modeling and services, and a new way of building value. Now that suggests that one or the other is more or less hot, but Neil Raden, I think the practical reality is that here in Silicon Valley, we've got to be careful about getting too far out in front of our skis. At the end of the day, there's still a lot of work to be done in how you simply do things like move data from one place to the other in a lot of big enterprises. Would you agree with that? >> Oh absolutely. I've been talking to a lot of clients this week and, you know, we don't talk about the fact that they're still running their business on what we would call legacy systems, and they don't know how to, you know, get out of them or transform from them. So they're still starting to plan for this, but the problem is, you know, it's like talking about the 27 rocket engines on the whatever-it-was that launched a Tesla into space. But you can talk about the engineering of those engines and that's great, but what about all the other things you're going to have to do to get that (laughs) car into space? And it's the same thing. A year ago, we were talking about Hadoop and big data and, to a certain extent, Machine Learning, maybe more data science. But now people are really starting to say, How do we actually do this, how do we secure it, how do we govern it, how do we get some sort of metadata or semantics on the data we're working with, so people know what they're using? I think that's where we are in a lot of companies. >> Great, so that's great feedback, Neil. So as we look forward, Jim Kobielus, the challenges associated with what it means to better improve the facilities of your infrastructure, but also use that as a basis for increasing your capability on some of the new application services, what are we looking for, what should folks be looking for as they explore the show in the next couple of weeks on the ML side? What new technologies, what new approaches? Going back to what George said, we're in experimentation mode. What are going to be the experiments that are going to generate the greatest results over the course of the next year? >> Yeah, for the data scientists, who flock to Strata and similar conferences, automation of the Machine Learning pipeline is super hot in terms of investments by the solution providers. Everybody from Google to IBM to AWS, and others, are investing very heavily in automation of not just the data engine, that problem's been handled a long time ago. It's automation of more of the feature engineering and the training. These very manual, often labor-intensive, jobs have to be sped up and automated to a great degree to enable the magic of productivity by the data scientists and the new generation of app developers. So look for automation of Machine Learning to be a super hot focus. Related to that, look for a new generation of development suites that focus on DevOps, speeding Machine Learning, DL, and AI from modeling through training and evaluation, deployment, and iteration.
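Since "automation of the Machine Learning pipeline" can sound abstract, here is a small, self-contained sketch of its most basic form: an automated search over model settings with cross-validation picking the winner, which is the kind of grunt work the suites Jim describes take off the data scientist's plate (at far larger scale, and extending to feature engineering, deployment, and iteration). It uses scikit-learn's bundled digits dataset so it runs as-is; it illustrates the idea and is not drawn from any particular vendor's product.

```python
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

# Small bundled dataset keeps the example self-contained.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# The "automation": instead of a data scientist hand-tuning each setting,
# a search object tries the combinations and cross-validates every one of them.
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={
        "n_estimators": [50, 100, 200],
        "max_depth": [None, 10, 20],
    },
    cv=3,
    n_jobs=-1,
)
search.fit(X_train, y_train)

print("best settings:", search.best_params_)
print("cross-validated accuracy:", round(search.best_score_, 3))
print("held-out test accuracy:", round(search.score(X_test, y_test), 3))
```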
We've seen a fair upswing in the number of such toolkits on the market from a variety of startup vendors, like the DataRobots of the world. But also coming to, say, AWS with SageMaker, for example; that's hot. Also, look for development toolkits that automate more of the code generation, you know, low-code tools, but the new generation of low-code tools, as highlighted in a recent Wikibon study, use ML to drive more of the actual production of fairly decent, good-enough code, as a first rough prototype for a broad range of applications. And finally, we're seeing a fair amount of ML-driven code generation inside of things like robotic process automation, RPA, which I believe will probably be a super hot theme at Strata and other shows this year going forward. >> So, there's a... you mentioned the idea of better tooling for DevOps and the relationship between big data and ML, and what not, and DevOps. One of the key things that we've been seeing over the course of the last few years, and it's consistent with the trends that we're talking about, is increasing specialization in a lot of the perspectives associated with changes within this marketplace, so we've seen other shows that have emerged that have been very, very important, that we, for example, are participating in. Places like Splunk, for example, that is the vanguard, in many respects, of a lot of these trends in big data and how big data can be applied to business problems. Dave Vellante, I know you've been associated with, and participating in, a number of these shows; how does this notion of specialization inform what's going to happen in San Jose, and what kind of advice and counsel should we tell people to continue to explore beyond just what's going to happen in San Jose in a couple weeks? >> Well, you mentioned Splunk as an example, a very sort of narrow and specialized company that solves a particular problem and has a very enthusiastic ecosystem and customer base around that problem. Log files to solve security problems, for example. I would say Tableau is another example, you know, heavily focused on Viz. So what you're seeing is these specialized skillsets that go deep within a particular domain. I think the thing to think about, especially when we're in San Jose next week, is as we talk about digital disruption, what are the skillsets required beyond just the domain expertise. So you're sort of seeing these bifurcated skillsets really coming into vogue, where somebody understands, for example, traditional marketing, but they also need to understand digital marketing in great depth, and the skills that go around it, so there's sort of a two-tool player. We talk about the five-tool player in baseball. At least a multidimensional skillset in digital. >> And that's likely to occur not just in a place like marketing, but across the board. David Floyer, as folks go to the show and start to look more specifically at this notion of convergence, are there particular things that they should think about, to come back to the notion of, well, you know, hardware is going to make things more or less difficult for what the software can do, and software is going to be created that will fill up the capabilities of hardware. What are some of the underlying hardware realities that folks going to the show need to keep in mind as they evaluate, especially on the infrastructure side, these different infrastructure technologies that are getting more specialized?
>> Well, if we look historically at the big data area, the solution has been to put in very low cost equipment as nodes, lots of different nodes, and move the data to those nodes so that you get a parallelization of the, of the data handling. That is not the only way of doing it. There are good ways now where you can, in fact, have a single version of that data in one place in very high speed storage, on flash storage, for example, and where you can allow very fast communication from all of the nodes directly to that data. And that makes things a lot simpler from an operational point of view. So using current Batch Automation techniques that are in existence, and looking at those from a new perspective, which is I do IUs apply these to big data, how do I automate these things, can make a huge difference in just the practicality in the elapsed time for some of these large training things, for example. >> Yeah, I was going to say that to many respects, what you're talking about is bringing things like training under a more traditional >> David: Operational, yeah. >> approach and operational set of disciplines. >> David: Yes, that's right. >> Very, very important. So John Furrier, I want to come back to you, or I want to come to you, and say that there are some other technologies that, while they're the bright shiny objects and people think that they're going to be the new kind of Harry Potter technologies of magic everywhere, Blockchain is certainly going to become folded into this big data concept, because Blockchain describes how contracts, ownership, authority ultimately get distributed. What should folks look for as the, as Blockchain starts to become part of these conversations? >> That's a good point, Peter. My summary of the preview for BigData SV Silicon Valley, which includes the Strata show, is two things: Blockchain points to the future and GDPR points to the present. GDPR is probably the most, one of the most fundamental impacts to the big data market in a long time. People have been working on it for a year. It is a nightmare. The technical underpinnings of what companies have to do to comply with GDPR is a moving train, and it's complete BS. There's no real solutions out there, so if I was going to tell everyone to think about that and what to look for: What is happening with GDPR, what's the impact of the databases, what's the impact of the architectures? Everyone is faking it 'til they make it. No one really has anything, in my opinion from what I can see, so it's a technical nightmare. Where was that database? So it's going to impact how you store the data, and the sovereignty issue is another issue. So the Blockchain then points to the sovereignty issue of the data, both in terms of the company, the country, and the user. These things are going to impact software development, application development, and, ultimately, cloud choice and the IoT. So to me, GDPR is not just a one and done thing and Blockchain is kind of a future thing to look at. So I would look out of those two lenses and say, Do you have a direction or a narrative that supports me today with what GDPR will impact throughout the organization. And then, what's going on with this new decentralized infrastructure and the role of data, and the sovereignty of that data, with respect to company, country, and user. So to me, that's the big issue. 
>> So George Gilbert, if we think about this question of these fundamental technologies that are going to become increasingly important here, database managers are not dead as a technology. We've seen a relative explosion over the last few years in at least invention, even if it hasn't been followed with, as Neil talked about, very practical ways of bringing new types of disciplines into a lot of enterprises. What's going to happen with the database world, and what should people be looking for in a couple of weeks to better understand how some of these data management technologies are going to converge and, or involve? >> It's a topic that will be of intense interest and relevance to IT professionals, because it's become the common foundation of all modern apps. But I think what we can do is we can see, for instance, a leading indicator of what's going to happen with the legacy vendors, where we have in-memory technologies from both transaction processing and analytics, and we have more advanced analytics embedded in the database engine, including Machine Learning, the model training, as well as model serving. But the, what happened in the big data community is that we disassembled the DBMS into the data manipulation language, which is an analytic language, like, could be Spark, could be Flink, even Hive. We had the Catalog, which I think Jim has talked about or will be talking about, where we're not looking, it's not just a dictionary of what's in one DBMS, but it's a whole way of tracking and governing data across many stores. And then there's the Storage Manager, could be the file system, an object store, could be just something like Kudu, which is a MPP way of, in parallel, performing a bunch of operations on data that's stored. The reason I bring all this up is, following on David's comment about the evolution of hardware, databases are fundamentally meant to expose capabilities in the hardware and to mediate access to data, using these hardware capabilities. And now that we have this, what's emerging as this unigrid, with memory-intensive architectures and super low latency to get from any point or node on that cluster to any other node, like with only a five microsecond lag, relative to previous architectures. We can now build databases that scale up with the same knowledge base that we built databases... I'm sorry, that scale out, that we used to build databases that scale up. In other words, it democratizes the ability to build databases of enormous scale, and that means that we can have analytics and the transactions working together at very low latency. >> Without binding them. Alright, so I think it's time for the action items. We got a lot to do, so guys, keep it really tight, really simple. David Floyer, let me start with you. Action item. >> So action item on big data should be focus on technologies that are going to reduce the elapse time of solutions in the data center, and those are many and many of them, but it's a production problem, it's becoming a production problem, treat it as a production problem, and put it in the fundamental procedures and technologies to succeed. >> And look for vendors >> Who can do that, yes. >> that do that. George Gilbert, action item. >> So I talked about convergence before. 
The converged platform now is shifting, it's center of gravity is shifting to continuous processing, where the data lake is a reference data repository that helps inform the creation of models, but then you run the models against the streaming continuous data for the freshest insights-- >> Okay, Jim Kobielus, action item. >> Yeah, focus on developer productivity in this new era of big data analytics. Specifically focus on the next generation of developers, who are data scientists, and specifically focus on automating most of what they do, so they can focus on solving problems and sifting through data. Put all the grunt work or training, and all that stuff, take and carry it by the infrastructure, the tooling. >> Peter: Neil Raden, action item. >> Well, one thing I learned this week is that everything we're talking about is about the analytical problem, which is how do you make better decisions and take action? But companies still run on transactions, and it seems like we're running on two different tracks and no one's talking about the transactions anymore. We're like the tail wagging the dog. >> Okay, John Furrier, action item. >> Action item is dig into GDPR. It is a really big issue. If you're not proactive, it could be a nightmare. It's going to have implications that are going to be far-reaching in the technical infrastructure, and it's the Sarbanes-Oxley, what they did for public companies, this is going to be a nightmare. And evaluate the impact of Blockchains. Two things. >> David Vellante, action item. >> So we often say that digital is data, and just because your industry hasn't been upended by digital transformations, don't think it's not coming. So it's maybe comfortable to sit back and say, Well, we're going to wait and see. Don't sit back and wait and see. All industries are susceptible to digital transformation. >> Alright, so I'll give the action item for the team. We've talked a lot about what to look for in the community gathering that's taking place next week in Silicon Valley around strata. Our observations as the community, it descends upon us, and what to look for is, number one, we're seeing a bifurcation in the marketplace, in the thought leadership, and in the tooling. One set of group, one group is going more after the infrastructure, where it's focused more on simplification, convergence; another group is going more after the developer, AI, ML, where it's focused more on how to create models, training those models, and building applications with the services associated with those models. Look for that. Don't, you know, be careful about vendors who say that they do it all. Be careful about vendors that say that they don't have to participate in a converged approach to doing this. The second thing I think we need to look for, very importantly, is that the role of data is evolving, and data is becoming an asset. And the tooling for driving velocity of data through systems and applications is going to become increasingly important, and the discipline that is necessary to ensure that the business can successfully do that with a high degree of predictability, bringing new production systems are also very important. A third area that we take a look at is that, ultimately, the impact of this notion of data as an asset is going to really come home to roost in 2018 through things like GDPR. As you scan the show, ask a simple question: Who here is going to help me get up to compliance and sustain compliance, as the understanding of privacy, ownership, etc. 
of data, in a big data context, starts to evolve, because there's going to be a lot of specialization over the next few years. And there's a final one that we might add: When you go to the show, do not just focus on your favorite brands. There's a lot of new technology out there, including things like Blockchain. They're going to have an enormous impact, ultimately, on how this marketplace unfolds. The kind of miasma that's occurred in big data is starting to specialize, it's starting to break down, and that's creating new niches and new opportunities for new sources of technology, while at the same time, reducing the focus that we currently have on things like Hadoop as a centerpiece. A lot of convergence is going to create a lot of new niches, and that's going to require new partnerships, new practices, new business models. Once again, guys, I want to thank you very much for joining me on Action Item today. This is Peter Burris from our beautiful Palo Alto theCUBE Studio. This has been Action Item. (lively electronic music)
SUMMARY :
We are again broadcasting from the beautiful and it's going to be different from this show, And the third-party applications, we don't have Now that suggests that one or the other is more or less hot, but the problem is, you know, it's like talking about the What are going to be the experiments that are going to in a lot of the perspectives associated with I think the thing to think about, that folks going to the show need to keep in mind and move the data to those nodes and people think that they're going to be So the Blockchain then points to the sovereignty issue What's going to happen with the database world, in the hardware and to mediate access to data, We got a lot to do, so guys, focus on technologies that are going to that do that. that helps inform the creation of models, Specifically focus on the next generation of developers, and no one's talking about the transactions anymore. and it's the Sarbanes-Oxley, So it's maybe comfortable to sit back and say, and sustain compliance, as the understanding of privacy,
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
David | PERSON | 0.99+ |
Jim Kobielus | PERSON | 0.99+ |
George | PERSON | 0.99+ |
David Floyer | PERSON | 0.99+ |
George Gilbert | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Neil Raden | PERSON | 0.99+ |
Neil | PERSON | 0.99+ |
Peter Burris | PERSON | 0.99+ |
David Vellante | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
San Jose | LOCATION | 0.99+ |
John Furrier | PERSON | 0.99+ |
Peter | PERSON | 0.99+ |
Feb 2018 | DATE | 0.99+ |
Silicon Valley | LOCATION | 0.99+ |
Jim | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
2018 | DATE | 0.99+ |
ORGANIZATION | 0.99+ | |
GDPR | TITLE | 0.99+ |
next week | DATE | 0.99+ |
two things | QUANTITY | 0.99+ |
Palo Alto, California | LOCATION | 0.99+ |
Splunk | ORGANIZATION | 0.99+ |
both | QUANTITY | 0.99+ |
A year ago | DATE | 0.99+ |
two lenses | QUANTITY | 0.99+ |
a year ago | DATE | 0.99+ |
two years ago | DATE | 0.99+ |
this week | DATE | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
first | QUANTITY | 0.99+ |
third area | QUANTITY | 0.98+ |
CUBE | ORGANIZATION | 0.98+ |
one group | QUANTITY | 0.98+ |
second thing | QUANTITY | 0.98+ |
27 rocket | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
next year | DATE | 0.98+ |
Two things | QUANTITY | 0.97+ |
theCUBE Studios | ORGANIZATION | 0.97+ |
two-tool player | QUANTITY | 0.97+ |
five microsecond | QUANTITY | 0.96+ |
One set | QUANTITY | 0.96+ |
Tableau | ORGANIZATION | 0.94+ |
a year | QUANTITY | 0.94+ |
single version | QUANTITY | 0.94+ |
one | QUANTITY | 0.94+ |
Wikibons | ORGANIZATION | 0.91+ |
Wikibon | ORGANIZATION | 0.91+ |
two different tracks | QUANTITY | 0.91+ |
five-tool player | QUANTITY | 0.9+ |
several years ago | DATE | 0.9+ |
this year | DATE | 0.9+ |
Strata | TITLE | 0.87+ |
Harry Potter | PERSON | 0.85+ |
one thing | QUANTITY | 0.84+ |
years | DATE | 0.83+ |
one place | QUANTITY | 0.82+ |