Marc Staimer, Dragon Slayer Consulting & David Floyer, Wikibon | December 2020
>> Announcer: From theCUBE studios in Palo Alto and in Boston, connecting with thought leaders all around the world, this is a CUBE conversation. >> Hi everyone, this is Dave Vellante and welcome to this CUBE conversation where we're going to dig into the area of cloud databases. Gartner just published a series of research in this space. And it's really a growing market, rapidly growing, a lot of new players, obviously the big three cloud players. And with me are three experts in the field, two long time industry analysts. Marc Staimer is the founder, president, and key principal at Dragon Slayer Consulting. And he's joined by David Floyer, the CTO of Wikibon. Gentlemen, great to see you. Thanks for coming on theCUBE. >> Good to be here. >> Great to see you too Dave. >> Marc, coming from the great Northwest, I think first time on theCUBE, and so it's really great to have you. So let me set this up. As I said, you know, Gartner published these, you know, three giant tomes. These are, you know, publicly available documents on the web. I know you guys have been through them, you know, several hours of reading. And so... (Dave chuckles) Good nighttime reading. The three documents identify critical capabilities for cloud database management systems. And the first one we're going to talk about is operational use cases. So we're talking about, you know, transaction oriented workloads, ERP, financials. The second one was analytical use cases, sort of an emerging space, you know, the data warehouse space and the like. And, of course, the third is the famous Gartner Magic Quadrant, which we're going to talk about. So, Marc, let me start with you, you've dug into this research. Just at a high level, you know, what did you take away from it? >> Generally, if you look at all the players in the space, they all have some basic good capabilities. What I mean by that is, ultimately, when you have a transactional or an analytical database in the cloud, the goal is not to have to manage the database. Now they have different levels of where that goes, as to how much you have to manage or what you have to manage. But ultimately, they all manage the basic administrative, or the pedantic, tasks that DBAs have to do: the patching, the tuning, the upgrading. All of that is done by the service provider. So that's the number one thing they all aim at. From that point on, every database has different capabilities, and some will automate a whole bunch more than others, and will have different primary focuses. So it comes down to what you're looking for or what you need. And ultimately what I've learned from end users is that what they think they need upfront is not what they end up needing as they implement. >> David, anything you'd add to that, based on your reading of the Gartner work? >> Yes. It's a thorough piece of work. It's taking on a huge number of different types of uses and sizes of companies. And I think those are two parameters which really change how companies would look at it. If you're a Fortune 500 or Fortune 2000 type company, you're going to need a broader range of features, and you will need to deal with size and complexity in a much greater sense, and probably a lot of higher levels of availability, and reliability, and recoverability. Again, on the workload side, there are different types of workload.
As well as the two transactional and analytic workloads, I think there's an emerging type of workload which is going to be very important for future applications, where you want to combine transactional with analytic in real time, in order to automate business processes at a higher level, to make the business processes synchronous as opposed to asynchronous. And that degree of granularity, I think, is missed in a broader view of these companies and what they offer. It's, in my view, trying in some ways to not compare like with like from a customer point of view. >> So the very nuanced view that you talked about, let's get into it, maybe that'll become clear to the audience. So like I said, these are very detailed research notes. There were several, I'll say, analyst cooks in the kitchen, including Henry Cook, whom I don't know, but four other contributing analysts, two of whom are CUBE alums, Don Feinberg and Merv Adrian, both really, you know, awesome researchers. And Rick Greenwald, along with Adam Ronthal. And these are public documents, you can go on the web and search for these. So I wonder if we could just look at some of the data and bring up... Guys, bring up slide one here. And so we'll first look at the operational side, and they broke it into four use cases: the traditional transaction use cases, the augmented transaction processing, stream/event processing and operational intelligence. And so we're going to show you there's a lot of data here. So what Gartner did is they essentially evaluated critical capabilities, or think of features and functions, and gave them a weighting and then a rating; it was a weighting and rating methodology. The rating was on a scale of one to five, and then they weighted the importance of the features based on their assessment and on the many customers they talk to. So you can see here on the first chart, we're showing both the traditional transactions and the augmented transactions, and, you know, the first thing that jumps out at you guys is that, you know, Oracle with Autonomous is off the charts, far ahead of anybody else on this. And actually guys, if you just bring up slide number two, we'll take a look at the stream/event processing and operational intelligence use cases. And you can see, again, you know, Oracle has a big lead. And I don't want to necessarily go through every vendor here, but guys, if you don't mind going back to the first slide, 'cause I think this is really, you know, the core of transaction processing. So let's look at this: you've got Oracle, you've got SAP HANA. You know, right there, interestingly, Amazon Web Services with Aurora, you know, IBM Db2, which, you know, goes back to the good old days, you know, down the list. But so, let me again start with Marc. So why is that? I mean, I guess this is no surprise, Oracle still owns Mission-Critical for the database space. They earned that years ago, won that, you know, over the likes of Db2 and, you know, Informix and Sybase, and, you know, they emerged as number one there. But what do you make of this data, Marc? >> If you look at this data in a vacuum, you're looking at specific functionality; I think you need to look at all the slides in total. And the reason I bring that up is because I agree with what David said earlier, in that the use case that's becoming more prevalent is the integration of transaction and analytics.
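As a quick aside on the methodology just described: a weighted critical-capabilities score is simply a weighted sum of per-feature ratings. The sketch below, in Python, shows the arithmetic; the capability names, weights, and ratings are illustrative placeholders, not Gartner's actual figures.

```python
# Illustrative weighted critical-capabilities score: each capability is
# rated on a 1-5 scale, the weights sum to 1.0, and the composite score
# is the weighted sum. All numbers here are made up for illustration.
weights = {
    "performance": 0.30,
    "availability": 0.25,
    "automation": 0.25,
    "security": 0.20,
}

ratings = {
    "Vendor A": {"performance": 4.6, "availability": 4.8,
                 "automation": 4.5, "security": 4.4},
    "Vendor B": {"performance": 3.9, "availability": 3.6,
                 "automation": 4.1, "security": 4.0},
}

def weighted_score(vendor_ratings: dict, caps: dict) -> float:
    """Composite score: sum of rating * weight over all capabilities."""
    return sum(vendor_ratings[cap] * w for cap, w in caps.items())

for vendor, r in ratings.items():
    print(f"{vendor}: {weighted_score(r, weights):.2f}")
```

Changing the weights (say, for a stream-processing use case versus a traditional transaction one) reorders vendors without touching the underlying ratings, which is why the same vendors score differently across the four use cases.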
And more importantly, it's not just your traditional data warehouse, but it's AI analytics. It's big data analytics. Users are finding that they need more than just simple reporting. They need more in-depth analytics so that they can get more actionable insights into their data, where they can react in real time. And so if you look at it just as a transaction, that's great. If you look at it just as a data warehouse, that's great, or analytics, that's fine. If you have a very narrow use case, yes. But I think today what we're looking at is... It's not so narrow. It's sort of like, if you bought a streaming device and it only streams Netflix, and then you need to get another streaming device 'cause you want to watch Amazon Prime. You're not going to do that, you want one that does all of it, and that's kind of what's missing from this data. So I agree that the data is good, but I don't think it's looking at it in a total encompassing manner. >> Well, so before we get off the horses on the track, 'cause I love to do that (Dave chuckles), let's talk about that. So Marc, you're putting forth the... You guys seem to agree on that premise, that the database that can do more than just one thing is of appeal to customers. I suppose that certainly makes sense from a cost standpoint. But, you know, guys, feel free to flip back and forth between slides one and two. But you can see SAP HANA, and I'm not sure what cloud that's running on, it's probably running on a combination of clouds, but, you know, scoring very strongly. I thought, you know, Aurora, you know, given AWS says it's one of the fastest growing services in history, and they've got it ahead of Db2 just on functionality, which is pretty impressive. I love Google Spanner, you know, love what they're trying to accomplish there. You know, you go down to Microsoft; they're always a good enough database and that's how they succeed, et cetera, et cetera. But David, it sounds like you agree with Marc. I would think, though, Amazon kind of doesn't agree, 'cause they're like horses for courses. >> I agree. >> Yeah, yeah. >> So I wonder if you could comment on that. >> Well, I want to comment on two vectors. The first vector is the size of customer, you know, a mid-sized customer versus a Global 2000 or Global 500 customer. For the smaller customer, that's the heart of AWS, and they are taking their applications and putting pretty well everything into their cloud, the one cloud, and Aurora is a good choice. But when you start to get to the requirements, as you do in larger companies, of very high levels of availability, the functionality is not there. You're not comparing apples with apples, it's two very different things. So from a tier one functionality point of view, IBM Db2 and Oracle have far greater capability for recovery and all the features that they've built in over the years. >> Because of their... You mean 'cause of the maturity, right? Maturity and... >> Because of their focus on transaction and recovery, et cetera. >> So SAP though, HANA, I mean, that's, you know... (David talks indistinctly) And then... >> Yeah, yeah. >> And then I wanted your comments on that, either of you or both of you. I mean, SAP, I think, has a stated goal of basically getting its customers off Oracle by 2024. That's, you know, there's always this sparring >> Yes, yes. >> between the two companies. Larry has said that ain't going to happen.
You know, Amazon, we know, still runs on Oracle. It's very hard to migrate Mission-Critical; David, you and I know this well, Marc, you as well. So, you know, people often say, well, everybody wants to get off Oracle, it's too expensive, blah, blah, blah. But we talk to a lot of Oracle customers, and they're very happy with the reliability, availability, recoverability feature set. I mean, the core of Oracle seems pretty stable. >> Yes. >> But I wonder if you guys could comment on that, maybe Marc you go first. >> Sure. I've recently done some in-depth comparisons of Oracle and Aurora, and all their other RDS services, and Snowflake and Google and a variety of them. And ultimately what surprised me is, you made a statement that it costs too much; it actually comes in at half of Aurora in most cases. And it comes in at less than half of Snowflake in most cases, which surprised me. But no matter how you configure it, ultimately it comes down to a couple of things: each vendor is focused on different aspects of what they do. Take Snowflake, for example; they're on the analytical side, they don't do any transaction processing. But... >> Yeah, so if I can... Sorry to interrupt. Guys, if you could bring up the next slide, that would be great. So that would be slide three, because now we get into the analytical piece, Marc, that you're talking about; that's what Snowflake's specialty is. So please carry on. >> Yeah, and what they're focused on is sharing data among customers. So if, for example, you're an automobile manufacturer and you've got a huge supply chain, you can share the data, without copying the data, with any of your suppliers that are on Snowflake. Now, can you do that with the other data warehouses? Yes, you can. But for Snowflake that's the focal point, that's where they're aiming it. Whereas, let's say, the focal point for Oracle is going to be performance. So their performance affects cost, 'cause the higher the performance, the less time you're paying for, because you're paying per second for the CPUs that you're using. Same thing on Snowflake, but the performance is higher, therefore you use less. I mean, there's a whole bunch of things that come into this, but at the end of the day what I've found is Oracle tends to be a lot less expensive than the prevailing wisdom says. >> So let's talk value for a second, because you said something, that yeah, the other databases can do that, what Snowflake is doing there. But my understanding of what Snowflake is doing is they've built this global data mesh across multiple clouds. So not only are they compatible with Google or AWS or Azure, but essentially you sign up for Snowflake and then you can share data with anybody else in the Snowflake cloud; that I think is unique. And I know, >> Marc: Yes. >> Redshift, for instance, just announced, you know, Redshift data sharing, and I believe it's just within, you know, clusters within a customer, as opposed to across an ecosystem. And I think that's where the network effect is pretty compelling for Snowflake. So independent of costs, you and I can debate about costs and, you know, the lack of transparency, because with AWS you don't know what the bill is going to be at the end of the month. And that's the same thing with Snowflake, but I find that... And by the way guys, you can flip through slides three and four, because we've got... Let me just take a quick break: you have data warehouse, logical data warehouse.
And then on the next slide, four, you've got data science, deep learning and operational intelligence use cases. And you can see, you know, Teradata... Teradata came up in the mid 1980s and dominated that space. Oracle does very well there. You can see Snowflake pop up, SAP with the Data Warehouse, Amazon with Redshift. You know, Google with BigQuery gets a lot of high marks from people. You know, Cloudera is in there, you know, so you see some of those names. But so, Marc and David, to me that's a different strategy. They're not trying to be just a better data warehouse, an easier data warehouse. They're trying to create, Snowflake that is, an incremental opportunity, as opposed to necessarily going after, for example, Oracle. David, your thoughts. >> Yeah, I absolutely agree. I mean, ease of use is a primary benefit for Snowflake. It enables you to do stuff very easily. It enables you to take data in without ETL, without any of the complexity. It enables you to share a number of resources across many different users, and to be able to bring in what that particular user wants or part of the company wants. So in terms of where they're focusing, they've got a tremendous ease of use, tremendous focus on what the customer wants. And you pointed out yourself the restrictions there are on doing that both within Oracle and AWS. So yes, they have really focused very, very hard on that. Again, for the future, they are bringing in a lot of additional functions. They're bringing Python into it... not Python, JSON into the database. They can extend the database itself; whether they go the whole hog and put in transactions as well, that's probably something they may be thinking about, but not at the moment. >> Well, but they, you know, they obviously have to have TAM expansion designs, because, Marc, I mean, you know, if they just get 100% of the data warehouse market, they're probably at a third of their stock market valuation. So they had better have, you know, a roadmap and plans to extend there. But I want to come back, Marc, to this notion of, you know, the right tool for the right job, or, you know, best of breed for a specific use, the right horse for the course, versus this kind of notion of all in one. I mean, they're two different ends of the spectrum. You're seeing, you know, Oracle obviously very successful based on these ratings and based on, you know, their track record. And Amazon, I think I lost count of the number of data stores (Dave chuckles), with Redshift and Aurora and Dynamo, and, you know, on and on and on. (Marc talks indistinctly) So they clearly want to have that, you know, primitive, you know, different APIs for each access; completely different philosophies, it's like Democrats or Republicans. Marc, your thoughts as to who ultimately wins in the marketplace. >> Well, it's hard to say who is ultimately going to win, but if I look at Amazon, Amazon is an à la carte type of system. If you need time series, you go with their time series database. If you need a data warehouse, you go with Redshift. If you need transactions, you go with one of the RDS databases. If you need JSON, you go with a different database. Everything is a different, unique database. Moving data between these databases is far from simple. If you need to do analytics on one database from another, you're going to use other services that cost money. So yeah, each one will do what they say it's going to do, but it's going to end up costing you a lot of money when you do any kind of integration.
And you're going to add complexity and you're going to have errors. There are all sorts of issues there. So if you need more than one, probably not your best route to go, but if you need just one, it's fine. And on Snowflake, you raised the issue that they're going to have to add transactions; they're going to have to rewrite their database. They have no indexes whatsoever in Snowflake. I mean, part of the simplicity that David talked about is because they had to cut corners, which makes sense. If you're focused on the data warehouse, you cut out the indexes, great. You don't need them. But if you're going to do transactions, you kind of need them. So you're going to have to do some more work there. So... >> Well... So, you know, I don't know. I have a different take on that, guys. I'm not sure if Snowflake will add transactions. I think maybe, you know, their hope is that the market that they're creating is big enough. I mean, I have a different view of this in that I think the data architecture is going to change over the next 10 years, as opposed to having a monolithic system where everything goes through that big data platform, the data warehouse and the data lake. I actually see what Snowflake is trying to do, and, you know, I'm sure others will join them, is to put data in the hands of product builders, data product builders or data service builders. I think they're betting that that market is incremental, and maybe they don't try to take on... I think it would maybe be a mistake to try to take on Oracle. Oracle is just too strong. I wonder, David, if you could comment. So it's interesting to see how strongly Gartner rated Oracle in cloud database, 'cause you don't... I mean, okay, Oracle has got OCI, but you know, you think cloud, you think Amazon, Microsoft and Google. But if I have a transaction database running on Oracle, it's very risky to move that, right? And so we've seen that, it's interesting. Amazon's a big customer of Oracle, Salesforce is a big customer of Oracle. You know, Larry is very outspoken about those companies. SAP customers are many; most are using Oracle. I don't, you know, it's not likely that they're going anywhere. My question to you, David, is first of all, why do they want to go to the cloud? And if they do go to the cloud, is it logical that the least risky approach is to stay with Oracle, if you're an Oracle customer, or Db2, if you're an IBM customer, and then move those other workloads that can move, whether it's more data warehouse oriented or incremental transaction work that could be done in Aurora? >> I think the first point, why should Oracle go to the cloud? Why has it gone to the cloud? And if there is a... >> More so, why would customers of Oracle... >> Why would customers want to... >> That's really the question. >> Well, Oracle have got Oracle Cloud@Customer, and that is a very powerful way of doing it, where exactly the same Oracle system is running on premise or in the cloud. You can have it where you want, you can have them joined together. That's unique. That's unique in the marketplace. So that gives them a very special place with large customers that have data in many different places. The second point is that moving data is very expensive. Marc was making that point earlier on. Moving data from one place to another place, between two different databases, is a very expensive architecture.
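The no-copy sharing Marc described earlier is one answer to exactly this data-movement cost: on Snowflake, a provider grants a consumer account read access to live tables, and no data is copied or shipped. The sketch below uses the Snowflake Python connector and real share DDL, but the connection parameters, database, and account names are hypothetical.

```python
# Sketch of Snowflake secure data sharing: the provider exposes a table
# to a consumer account with no data copied or moved. Connection
# parameters and object names are hypothetical placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="provider_account",  # hypothetical account identifier
    user="admin",
    password="...",
)
cur = conn.cursor()

# Create a share and expose one database/schema/table through it.
cur.execute("CREATE SHARE supplier_share")
cur.execute("GRANT USAGE ON DATABASE supply_chain TO SHARE supplier_share")
cur.execute("GRANT USAGE ON SCHEMA supply_chain.public TO SHARE supplier_share")
cur.execute("GRANT SELECT ON TABLE supply_chain.public.inventory TO SHARE supplier_share")

# Entitle a consumer account (say, a supplier) to the share. On their
# side they create a read-only database from the share, again without
# copying any data.
cur.execute("ALTER SHARE supplier_share ADD ACCOUNTS = supplier_account")
```

The design point is that the grant is metadata only: the supplier queries the provider's storage directly, which is what makes the cross-account network effect Dave mentions possible.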
Having the data in one place, where you don't have to move it, where you can go directly to it, gives you enormous capabilities for a single database, a single database type. And I'm sure that from an analytic point of view, that's where Snowflake is going, to a large single database. But where Oracle is going is where you combine both the transactional and the analytical. And as you say, the cost of migration of databases is incredibly high, especially transaction databases, especially large, complex transaction databases. >> So... >> And it takes a long time. So at least two years... And it took five years for Amazon to actually succeed in getting a lot of their stuff over. And in those five years they could have been doing an awful lot more with the people that they used to bring it over. So it was a marketing decision as opposed to a rational business decision. >> It's the holy grail of the vendors; they all want your data in their database. That's why Amazon puts so much effort into it. Oracle is, you know, obviously in a very strong position. It's got growth in its new stuff, and its old stuff... The problem Oracle has, like many of the legacy vendors, is that the size of the install base is so large and it's shrinking. The legacy stuff is shrinking; the new stuff is growing very, very fast, but it's not large enough yet to offset that. You see that in all the earnings. So very positive news on, you know, the cloud database, and they've just got to work through that transition. Let's bring up slide number five, because, Marc, this is to me the most interesting. So we've just shown all these detailed analyses from Gartner. And then you look at the Magic Quadrant for cloud databases. And, you know, despite Amazon being behind, you know, Oracle, or Teradata, or whomever in every one of these ratings, they're up to the right. Now, of course, Gartner will caveat this and say it doesn't necessarily mean you're the best, but of course everybody wants to be in the upper right. We all know that, but it doesn't necessarily mean that you should go buy that database; I agree with what Gartner is saying. But look at Amazon, Microsoft and Google; they're like one, two and three. And then of course, you've got Oracle up there and then, you know, the others. So I found that very curious; it's like there was a dissonance between the hardcore ratings and then the positions in the Magic Quadrant. Why do you think that is, Marc? >> You know, it didn't surprise me in the least, because of the way that Gartner does its Magic Quadrants. How high up you go on the vertical is very much tied to the amount of revenue you get in the specific category for which they're doing the Magic Quadrant. It doesn't have to do with any of the revenue from anywhere else, just that specific type of market. So when I look at it, with Oracle, a big chunk of the revenue still comes from on-prem, not the cloud, and you're looking just at the cloud revenue. Now on the right side, moving to the right of the quadrant, that's based on functionality, capabilities, resilience, things other than revenue. So visionary says, hey, how far along are you on the visionary side? Now, how they weight that again comes down to Gartner's experts and how they want to weight it and what makes more sense to them. But from my point of view, the right side is as important as the vertical side, 'cause the vertical side doesn't measure the growth rate either.
And if we look at these, some of these are growing much faster than the others. For example, Snowflake is growing incredibly fast, and that doesn't show up in these numbers, from my perspective. >> Dave: I agree. >> Oracle is growing incredibly fast in the cloud. As David pointed out earlier, it's not just in their cloud where they're growing, but it's Cloud@Customer, which is basically an extension of their cloud. I don't know if that's included in these numbers or not on the revenue side. So there are a number of factors... >> Should it be, in your opinion, Marc? Would you include that in your definition of cloud? >> Yeah. >> The things that are hybrid and on-prem, would that count as cloud... >> Yes. >> Well, especially... Well, again, it depends on the hybrid. For example, if you have your own license, on your own hardware, but it connects to the cloud, no, I wouldn't include that. If you have a subscription license and subscription hardware that you don't own, but it's owned by the cloud provider, and it connects with the cloud as well, that I would. >> Interesting. Well, you know, to your point about growth, you're right. I mean, this is probably looking at revenues backwards; for guys like Snowflake, it will be double, you know, by the next one of these. It's also interesting to me, on the horizontal axis, to see Cloudera and Databricks further to the right than Snowflake, because that's kind of the data lake cloud. >> It is. >> And then of course, you've got, you know, the others... I mean, database used to be boring, so... (David laughs) It's such a hot market space here. (Marc talks indistinctly) David, your final thoughts on all this stuff. What does the customer take away here? What should my cloud database management strategy be? >> Well, I was positive about Oracle; let's take some of the negatives of Oracle. First of all, they don't make it very easy to run on other platforms. So they have put in terms and conditions which make it very difficult to run on AWS, for example; you get double counts on the licenses, et cetera. So they haven't played well... >> Those are negotiable, by the way. If you bring it up, as the customer, you can negotiate that one. >> They can be, yes. If you're big enough, they are negotiable. But Oracle certainly hasn't made it easy to work with other platforms... other clouds. What they did very... >> How about Microsoft? >> Well, no, that is exactly what I was going to say. Oracle has been working very well with Microsoft on adjacent workloads; you can use Microsoft Azure and use an Oracle database adjacent to it in the same data center, integrated very nicely indeed. And I think Oracle has got to do that with AWS, and it's got to do that with Google as well. It's got to provide a service for people to run things where they want to run them, not just on the Oracle cloud. If they did that, that would, in my opinion, be a very strong move and would make the capabilities available in many more places. >> Right. Awesome. Hey, Marc, thanks so much for coming to theCUBE. Thank you, David, as well, and thanks to Gartner for doing all this great research and making it public on the web. If you just search critical capabilities for cloud database management systems for operational use cases, that's a mouthful, and then do the same for analytical use cases, and the Magic Quadrant. There's the third doc for cloud database management systems.
You'll get about two hours of reading and I learned a lot and I learned a lot here too. I appreciate the context guys. Thanks so much. >> My pleasure. All right, thank you for watching everybody. This is Dave Vellante for theCUBE. We'll see you next time. (upbeat music)
AI and Hybrid Cloud Storage | Wikibon Action Item | May 2019
Hi, I'm Peter Burris, and this is Wikibon's Action Item. We're joined here in the studio by David Floyer. Hi David. >> Hi there. >> And remote, we've got Jim Kobielus. Hi, Jim. >> Hi everybody. >> Now, Jim, you probably can't see this, but for those who are watching, when we do see the broad set, notice that David Floyer's got his Game of Thrones coffee cup with us. Now that has nothing to do with the topic. David and Jim, we're going to be talking about this challenge that businesses have, that enterprises have, as they think about making practical use of AI. The presumption for many years was that we were going to move all the data up into the Cloud in a central location, and all workloads were going to be run there. As we've gained experience, it's very clear that we're actually going to see a greater distribution of function, partly in response to a greater distribution of data. But what does that tell us about the relationship between AI, AI workloads, storage, and hybrid Cloud? David, why don't you give us a little clue as to where we're going to go from here. >> Well, I think the first thing we have to do is separate out the two types of workload. There's the development of the AI solution, the inference code, et cetera, and the dealing with all of the data required for that. And then there is the execution of that code, which is the inference code itself. And the two are very different in characteristics. For the development, you've got a lot of data. It's very likely to be data-bound. And storage is a very important component of that, as well as compute and the GPUs. For the inference, that's much more compute-bound. Again, compute, neural networks, GPUs are very, very relevant to that portion. Storage is much more ephemeral, in the sense that the data will come in and you will need to execute on it, but the compute will be part of that sensor, and you will want the storage to be actually in the DIMM itself, or non-volatile DIMM, right up as part of the processing. And you'll want to share that data only locally, in real time, through some sort of mesh computing. So, very different compute requirements, storage requirements, and architectural requirements. >> Yeah, let's go back to that notion of the different storage types in a second, but Jim, David described how the workloads are going to play out. Give a sense of what the pipelines are going to look like, because that's what people are building right now, the pipelines for actually executing these workloads. How will they differ? How do they differ in the different locations? >> Yeah, so the entire DataOps pipeline for data science, data analytics, AI in other words. And so what you're looking at here is all the processes, from discovering and ingesting the data, to transforming and preparing and correcting it, cleansing it, to modeling and training the AI models, to serving them out for inferencing along the lines of what David's describing. So, there are different types of AI models, and one builds from different data to do different types of inferencing. And each of these different pipelines might be, often is, highly specific to a particular use case. You know, AI for robotics, that's a very different use case from AI for natural language processing, embedded for example in an e-commerce portal environment. So, what you're looking at here is different pipelines that all share a common sort of flow of activities and phases.
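That common flow (discover and ingest, transform and cleanse, train, evaluate, serve) can be written down as a chain of stages. The Python sketch below is an illustrative skeleton only, not any particular product's API; real pipelines wrap each stage in orchestration, storage I/O, and monitoring.

```python
# Minimal skeleton of a DataOps pipeline: each stage is a plain
# function, and the pipeline is just their composition. All stage
# bodies are stubs for illustration.
from typing import Any, List

def discover_and_ingest(source: str) -> List[dict]:
    # Locate the data and pull it in (stubbed).
    return [{"raw": f"record from {source}"}]

def transform_and_cleanse(records: List[dict]) -> List[dict]:
    # Prepare and correct the data before modeling (stubbed).
    return [{**r, "clean": True} for r in records]

def train(records: List[dict]) -> Any:
    # Fit a model on the prepared data (stubbed).
    return {"model": "trained", "n_records": len(records)}

def evaluate(model: Any) -> Any:
    # Score the model; serving should be gated on this result (stubbed).
    model["accuracy"] = 0.9
    return model

def serve(model: Any) -> None:
    # Push the model out to the consuming devices or application.
    print(f"serving {model}")

def run_pipeline(source: str) -> None:
    serve(evaluate(train(transform_and_cleanse(discover_and_ingest(source)))))

run_pipeline("sensor-feed")
```

The point of the skeleton is Jim's: the stages are constant across use cases, while the data, models, and serving targets plugged into them are what make each pipeline specific.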
And you need a data scientist to build and test, train and evaluate, and serve out the various models to the consuming end devices or application. >> So, David, we've got 50 or so years of computing where the primary role of storage was to persist a transaction and the data associated with that transaction that has occurred. And that's, you know, disk, and then you have all the way out to tape if we're talking about archive. Flash changes that equation. >> Absolutely changes it. >> AI absolutely demands a different way of thinking. Here we're not talking about persisting our data, we're talking about delivering data, really fast. As you said, sometimes very ephemeral. And so, it requires a different set of technologies. What are some of the limitations that historically storage has been putting on some of these workloads? And how are we breaching those limitations to make them possible? >> Well, if we take only 10 years ago, the start of big data was Hadoop. And that was spreading the data over very cheap hard disks, with the compute there; you spread that data and you did it all in parallel on very cheap nodes. So that was the initial approach, but that is a very expensive way of doing it now, because you're tying the data to that set of nodes. They're all connected together, so a more modern way of doing it is to use Flash, to use multiple copies of that data, but logical copies or snapshots of that Flash, and to be able to apply as many processes or nodes as is appropriate for that particular workload. And that is a far more efficient and faster way of processing, or getting through, that sort of workload. And it really does make a difference of tenfold in terms of elapsed time and the ability to get through that. And the overall cost is very similar. >> So that's true in the inferencing... or, I'm sorry, in the modeling. What about in the inferencing side of things? >> Well, the inferencing side is, again, very different, because you are dealing with the data coming in from the sensors, or coming in from other smart sensors. So, what you want to do there is process that data with the inference code as quickly as you can, in real time; most of the time, in real time. So, when you're doing that, you're holding the current data actually in memory, or maybe in what's called non-volatile DIMM, NVDIMM, which gives you a larger amount. But you almost certainly don't have the time to go and store that data, and you certainly don't want to store it if you can avoid it, because it is a large amount of data, and if I open my... >> Has limited derivative use. >> Exactly. >> Yeah. >> So you want to quickly get all the value out of that data, compact it right down using whatever techniques you can, and then take just the results of that inference up to the other layers. Now at the beginning of the cycle you may need more, but at the end of the cycle you'll need very little. >> So Jim, the AI world has built algorithms over many, many, many years, many of which still persist today, but they were built with the idea that they were going to use kind of slower technologies. How is the AI world rethinking algorithms, architectures, pipelines, use cases, as a consequence of these new storage capabilities that David's describing? >> Well yeah, AI has become widely distributed in terms of its architecture, increasingly and often. Increasingly it's running over containerized, Kubernetes-orchestrated fabrics.
And a lot of this is going on in the area of training of models, and distributing pieces of those models out to various nodes within an edge architecture. It may not be edge in the internet of things sense, but widely distributed, highly parallel environments, as a way of speeding up the training and speeding up the modeling, and really speeding up the evaluation of many models running in parallel in an approach called ensemble modeling, to be able to converge on a predictive solution more rapidly. So, that's very much what David's describing; it's leveraging the fact that memory is far faster than any storage technology we have out there. And so, being able to distribute pieces of the overall modeling and training and even data prep workloads is able to speed up the deployment of highly optimized and highly sophisticated AI models for the cutting edge, you know, challenges we face, like the Event Horizon Telescope, for example, which we're all aware of, when they were able to essentially make a visualization of a black hole. That relied on a form of highly distributed computing called grid computing. I mean, challenges like that demand a highly distributed, memory-centric, orchestrated approach to tackling them. >> So, you're essentially moving the code to the data, as opposed to moving all of the data all the way out to one central point. >> Well, so if we think about that notion of moving code to the data, and I started off by suggesting that, in many respects the Cloud is an architectural approach to how you distribute your workloads, as opposed to an approach to centralizing everything in some public Cloud. I think increasingly application architects and IT organizations and service providers are all seeing things in that way. This is a way of more broadly distributing workloads. Now, we talked briefly about the relationship between storage and AI workloads, but we don't want to leave anyone with the impression that we're at a device level. We're really talking about a network of data that has to be associated with a network of storage. >> Yes. >> Now that suggests a different way of thinking about data and data administration and storage. We're not thinking about devices; we're really trying to move that conversation up into data services. What kind of data services are especially crucial to supporting some of these distributed AI workloads? >> Yes. So there are the standard ones that you need for all data, which is the backup and safety and encryption, security, control. >> Primary storage allocation. >> All of that, you need that in place. But on top of that, you need other things as well, because you need to understand the mesh, the distributed hybrid Cloud that you have; you need to know what the capabilities are of each of those nodes; you need to know the latencies between each of those nodes... >> Let me stop you here for a second. When you say "you need to know," do you mean "I as an individual need to know" or "the system needs to know"? >> It needs to be known, and it's too complex, far too complex, for an individual ever to solve problems like this. So it needs, in fact, its own little AI environment to be able to optimize and check the SLAs, so that particular inference coding can be achieved in the way that it's set up. >> It's a mesh type of computing.
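A quick illustration of the ensemble idea Jim raised: several models fit in parallel and combined into one prediction. The sketch below uses scikit-learn on a synthetic dataset; it is a single-machine, illustrative stand-in for the distributed setup Jim describes, not that setup itself.

```python
# Ensemble modeling in miniature: train several base models and combine
# their votes. n_jobs=-1 asks scikit-learn to fit the base estimators
# in parallel across local cores.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("logreg", LogisticRegression(max_iter=1000)),
        ("forest", RandomForestClassifier(n_estimators=100)),
    ],
    voting="soft",   # average the predicted probabilities
    n_jobs=-1,       # fit the base models in parallel
)
ensemble.fit(X_train, y_train)
print("ensemble accuracy:", ensemble.score(X_test, y_test))
```

The same pattern, with the base models trained on separate nodes and only their outputs combined, is what makes ensembles a natural fit for the distributed, memory-centric environments discussed here.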
>> Yeah, so it sounds like one of the first use cases for AI, practical, commercial use cases, will be AI within the data plane itself, because the AI workloads are going to drive such a complex model and utilization of data that if you don't have that, the whole thing will probably just fold in on itself. Jim, how would you characterize this relationship between AI inside the system, and how should people think about that, and is that really going to be a practical, near-term commercial application that folks should be paying attention to? >> Well, looking at the Cloud native world, what we need, and what we're increasingly seeing out there, are solutions, tools, really data planes, that are able to associate a distributed storage infrastructure of a very hybridized nature, in terms of disk and flash and so forth, with a highly distributed, containerized application environment. So for example, just last week at Jeredhad I met with the folks from Robin Systems, and they're one of the solution providers providing those capabilities to associate, like I said, the storage Cloud with the containerized, essentially, applications or Cloud applications that are out there. You know, what we need there, like you've indicated, is the ability to use AI to continually look for patterns of performance issues, bottlenecks, and so forth, and to drive the ongoing placement of data across storage nodes and servers and clusters and so forth, as a way of making sure that storage resources are always used efficiently, and that SLAs, as David indicated, are always observed in an automated fashion as the data placement and workload placement decisions are being made, so that ultimately the AI itself, whatever it's doing, like recognizing faces or recognizing human language, is able to do it as efficiently and really as cheaply as possible. >> Right, so let me summarize what we've got so far. We've got that there is a relationship between storage and AI: the workloads suggest that we're going to have centralized modeling with large volumes of data, and we're going to have distributed inferencing, with smaller amounts of data and more complex computing. Flash is crucial, mesh is crucial, and increasingly, because of the distributed nature of these applications, there's going to have to be very specific and specialized AI in the infrastructure, in that mesh itself, to administer a lot of these data resources. >> Absolutely. >> So, but we want to be careful here, right, David? Just as we don't want to suggest that everything goes into a centralized Cloud under a central administrative effort, we also don't want to suggest this notion that there's this broad, heterogeneous, common, democratized, every-service-available-everywhere world. Let's bring hybrid Cloud into this. >> Right. >> How will hybrid Cloud ultimately evolve to ensure that we get common services where we need them, and know where we don't have common services so that we can factor those constraints? >> So it's useful to think about the hybrid Cloud from the point of view of the development, which will be fairly normal types of computing and be in really large centers, and the edges themselves, which will be what we call autonomous Clouds. Those are the ones at the edge which need to be self-sufficient. So if you have an autonomous car, you can't guarantee that you will have communication to it. And the same with a lot of IoT in distant places, on ships or in distant places where you can't guarantee communication. So they have to be able to run much more by themselves.
So that's one important characteristic: the autonomous one needs to be self-sufficient in itself and have within it all the capabilities of running that particular code, and then passing up data when it can. >> Now you gave examples where it's physically required to do that, but there are also OT examples. >> Exactly. >> Operational technologies, where you need to have that air gap to ensure that bad guys can't get into your data. >> Yes, absolutely. I mean, if you think about a boat, a ship, it has multiple very clear air gaps, and a nuclear power station has a total air gap around it. You must have those sorts of air gaps. So it's a different architecture for different uses, for different areas. But of course data is going to come up from those autonomous Clouds, upwards, but it will be a very small amount of the data that's actually being processed. And there'll be requests down to those autonomous Clouds for additional processing of one sort or another. So there still will be a discussion, communication, between them, to ensure that the final outcome, the business outcome, is met. >> All right, so I'm going to ask each of you guys to give me a quick prediction. David, I'm going to ask you about storage, and then, Jim, I'm going to ask you about AI in light of David's prediction about storage. So David, as we think about where these AI workloads seem to be going, how is storage technology going to evolve to make AI applications easier to deal with, easier to run, cheaper to run, more secure? >> Well, the fundamental move is towards larger amounts of Flash. And the new thing is larger amounts of non-volatile DIMM, the memory in the computer itself; those are going to get much, much bigger. Those are going to help with the execution of these real-time applications, and there's going to be high-speed communication over short distances between the different nodes in this mesh architecture. So that's on the inference side; there's a big change happening in that space. On the development side, the storage will move towards sharing data. So having a copy of the data which is available to everybody, and that data will be distributed. So sharing that data, having that data distributed, will then enable the sorts of ways of using that data which will retain context, which is incredibly important, and avoid the cost and the loss of value that come from the time taken to move that data from A to B. >> All right, so to summarize, we've got a new level in the storage hierarchy that sits between Flash and memory to really accelerate things, and then secondly, we've got this notion that increasingly we have to provide a way of handling time and context so that we sustain fidelity, especially in more real-time applications. Jim, given that this is where storage is going to go, what does that say about AI? >> What it says about AI is that, first of all, we're talking about, like David said, meshes of meshes. Every edge node is increasingly becoming a mesh in its own right, with disparate CPUs and GPUs and whatever doing different inferencing on each device. But every one of these, like a smart car, will have plenty of embedded storage to process a lot of data locally, data that may need to be kept locally for lots of very good reasons, like a black box in case of an accident, but also in terms of e-discovery of the data and the models that might have led up to an accident that might have caused fatalities and whatnot.
So when we look at where AI is going, AI is going into the meshes of meshes, where there's AI running in each of the nodes within the meshes, and the meshes themselves will operate as autonomous decisioning nodes within a broader environment. Now in terms of the context, the context that increasingly surrounds all of the AI within these distributed architectures will be in the form of graphs, and graphs are something distinct from the statistical algorithms that we build AI out of. We're talking about knowledge graphs, we're talking about social graphs, we're talking about behavioral graphs; graph technology is just getting going. For example, Microsoft recently made a big, continued push into threading graph technology, contextual graph technology, into everything they do. So where I see AI going is up from statistical models to graph models as the broader metadata framework for binding everything together. >> Excellent. All right guys, so Jim, I think another topic for another time might be the mesh mess. (laughs) But we won't do that now. All right, let's summarize really quickly. We've talked about how the relationship between AI, storage and hybrid Clouds is going to evolve. Number one, AI workloads are at least differentiated by where we handle modeling; large amounts of data still need a lot of compute, but we're really focused on large amounts of data and moving that data around very, very quickly, and therefore keeping it proximate to where the workload resides. A great application for Clouds, large public as well as private. On the other side, where the inferencing work is done, that's going to be very compute-bound: smaller data volumes, but very, very fast data, and a lot of flash everywhere. The second thing we observed is that these new AI applications are going to be used and applied in a lot of different domains, both within human interaction as well as real-time domains within IoT, et cetera; but as we evolve, we're going to see a greater relationship between the nature of the workload and the class of the storage, and that is going to be a crucial feature for storage administrators and storage vendors over the next few years, to ensure that that specialization is reflected in what's needed. Now, the last point that we'll make very quickly is that as we look forward, the whole concept of hybrid Cloud, where we can have greater predictability into the nature of data-oriented services that are available for different workloads, is going to be really, really important. We're not going to have all data services common in all places. But we do want to make sure, whether it's a container-based application or some other structure, that we can ensure that the data that is required will be there in the context, form and metadata structures that are required. Ultimately, as we look forward, we see new classes of storage evolving that bring data even closer to the compute side, and we see new data models emerging, such as graph models, that are a better overall reflection of how this distributed data is going to evolve within hybrid Cloud environments. David Floyer, Jim Kobielus, Wikibon analysts; I'm Peter Burris. Once again, this has been Action Item.
Wikibon Action Item | Wikibon Conversation, February 2019
(electronic music) >> Hi, I'm Peter Burris. Welcome to Wikibon Action Item from theCUBE Studios in Palo Alto, California. So today we've got a great conversation, and what we're going to be talking about is hybrid cloud. Hybrid cloud's been in the news a lot lately, largely as a consequence of changes made by AWS, as they announced Outposts and acknowledged for the first time that there's going to be a greater distribution of data and a greater distribution of function as enterprises move to the cloud. We've been on top of this for quite some time, and have actually coined what we call true hybrid cloud, which is the idea that increasingly we're going to see a need for a common set of capabilities and services in multiple locations, so that the cloud can move to the data, and not the data automatically being presumed to move to the cloud. Now, to have that conversation and to reveal some new research on what the cost and value propositions of the different options available today are, we've got David Floyer. David, welcome to theCUBE. >> Thank you. >> So David, let's start: when we talk about hybrid cloud, we are seeing a continuum of different options starting to emerge. What are the defining characteristics? >> So, yes, we're seeing a continuum emerging. We have what we call standalone, of course, at one end of the spectrum, and then we have multi cloud, and then we have loosely and tightly coupled, and then we have true hybrid cloud as you go up the spectrum. So the dependence upon data, dependence on the data plane, dependence upon low latency, dependence on writing to systems of record: all of those increase as we're going from high latency and high bandwidth all the way up to low latency. >> So let me see if I got that right. So true hybrid cloud is at one end. >> Yes. >> And true hybrid cloud is low latency, write-oriented workloads, simple-as-possible administration. That means we are typically going to have a common stack in all locations. >> Yes. >> Next to that is this notion of tightly coupled hybrid cloud, which could be higher latency, write oriented, and probably has a common set of software on all nodes that handles state. And then kind of this notion of loosely coupled hybrid cloud, which is high latency, read oriented, which may have just API-level coordination and commonality on all nodes. >> Yep, that's right, and then you go down even further to just multi cloud, where you're just connecting things and each of them is independent of the others. >> So if I'm a CIO and I'm looking at a move to a cloud, I have to think about greenfield applications and the natural distribution of data for those greenfield applications, and that's going to help me choose which class of hybrid cloud I'm going to use. But let's talk about the more challenging set of scenarios for most CIOs, which is the existing legacy applications. >> The systems of record. >> Yeah, the systems of record. As I try to bring that cloud-like experience to those applications, how am I going through that thought process? >> So, we have some choices. The choices are, I could move it up with lift and shift, up to one of the clouds, one of the large clouds, and many of them are around. And if I do that, what I need to be looking at is: what is the cost of moving that data, what is the cost of pushing that up into the cloud, and what's the conversion cost if I needed to move to another database.
>> And I think that's the biggest one. So it's not just the cost of moving the data, which is just an ingress cost; it's the cost of format changes. >> Absolutely. >> You know, migration and all the other elements, conversion changes, et cetera. >> Right. So what I did in my research was focus on systems of record — the highly expensive, very, very important systems of record, which obviously are fed by a lot of other things: systems of engagement, analytics, et cetera. But those systems of record have to work. You need to know if you've taken an order. You need to have consistency about that order. You need to know always that you can recover any data you need in your financials, et cetera. All of that is mission-critical systems of record, and that's the piece I focused on here. >> So again, these are low latency. >> Very low latency, yes. >> Write-oriented. >> Very write-oriented types of applications. And I focused on Oracle because the majority of systems of record — the large-scale ones, at least — run on Oracle databases. So that's what we are focusing on here. Looking at the different options for a CIO of how they would go, there are three main options open at the moment: there's Oracle Cloud at Customer, which gives the cloud experience on premises; there is Microsoft Azure Stack, which has an Oracle database version of it; and Outposts — but we eliminated Outposts, not because it's not going to be any good, but because it's not there yet. >> You can't do research on it if it doesn't exist yet. >> (laughs) That's right. So we focused on Oracle and Azure, and we focused on what was the benefit of moving from a traditional environment, where you've got best of breed essentially on site, to this cloud environment. >> So if we think about it, the normal way of thinking about this kind of research is that people talk about ROI, and historically that's been done by keeping the amount of work that's performed constant, and then looking at how the different technology components compare from a cost standpoint. But the promise of a move to cloud is not predicated on lowering costs per se. You may have other financial considerations, of course, but it's really predicated on the notion of the cloud experience, which is intended to improve business results. So if we think about ROI as having a numerator — the value of the work you do — and a denominator — the resources consumed to perform that work — it's not just the denominator side; we really need to think about the numerator side as well. >> The value you are creating, yes. >> So what kinds of things are we focused on when we think about that value created as a consequence of the possibilities and options of the cloud? >> Right, so both are important. Obviously, when you move to a cloud environment, you can simplify operations in particular, you can simplify recovery, you can simplify a whole number of things within the IT shop, and those give you extra resources. And then the question is, do you just cash in on those resources and say, okay, I've made some savings? Or do you use those resources to improve the ability of your systems to work? One important characteristic of IT, and of systems of record in particular, is that you get depreciation of that asset. Over time it becomes less fitted to the environment it started with, so you have to do maintenance on it.
You have to do maintenance and work, and as you know, most work done in an IT shop is on the maintenance side. >> Meaning it's enhancement. >> It's maintenance and enhancement, yes. So making more resources available, making it easier to do that maintenance, and having fewer things that are going to interfere with it — faster time to maintenance, faster time to new applications or improvements — is really fundamental to systems of record. So that is the value you can bring to it, and you also bring value with better availability — higher availability as well. So those are the things we put into the model, to see how the different approaches compare. We were looking at, really, a total environment where one supplier is responsible for everything — which was the Oracle environment, Oracle Cloud at Customer — versus a more hybrid environment where you had... >> Or mixed. >> A mixed environment, yes, where you had the equipment coming from different places. >> One vendor. >> The Azure service coming from Microsoft, and of course the database coming from Oracle itself. And we found tremendous improvement in the value that you could get because of the single source. We found that a better model. >> So the common source led to efficiencies that then allowed a business to generate new classes of value. >> Correct. >> Because, as you said, 70-plus percent of what a business spends on technology is associated with maintaining and enhancing what's there, and a very limited amount is focused on new greenfield work and new types of applications. So if you can reduce the amount of time and energy that goes into that heritage set of applications, those systems of record, then that frees up resources to do some other things. >> And having the flexibility now, with things like Azure Stack and in the future AWS Outposts, of putting that resource either on premises or in the cloud, means that you can make decisions about where you process these things, about where the data is, about where the data needs to be — the best placement for the data for what you're trying to do. >> That decision is predicated on things like latency, but also the regulatory environment and intellectual property control. >> And the cost of moving data up and down — the three laws of the cloud. So having that flexibility of keeping it where you want is a tremendous value, again in terms of the speed of deployment and the speed of improvement. >> So we'll get to the issues surrounding the denominator side of this; I want to come back to that numerator side. The denominator, again, is the resources consumed to deliver the work to the business, but shrinking that denominator side perhaps opens up additional monies to do new types of development, new types of work. Take us through some of the issues — like what the cloud experience associated with a single vendor means, faster development — give us some of the issues that are really driving the value proposition above the line. >> The whole issue about cloud is that you take away all of the requirements to deal with the hardware, to deal with the orchestration of the storage, to deal with all of these things. So instead of taking weeks or months to put in extra resources, you say, "I want them," and they're there. >> So you're taking administrative tasks out of the flow. >> Out of the flow, yes.
>> And as a consequence, things happen faster. So time to value is one of the first ones. Give us another one. >> So, obviously, it's a cloud environment, and if you're the vendor of that cloud, what you want to be able to do is make incremental changes quickly, as opposed to waiting for a new release and working on a release basis. That fundamental speed to change, speed to improve, to bring in new features and new services — a cloud-first type of model — is a very powerful way for the vendor to push out new things, and for the consumer to absorb them. >> Right, so the first one is time to value, but also it's a lower cost of innovation. >> Yes, faster innovation, the ability to innovate. And then the third most important part is, if you reinvest those resources that you have saved into new services, new capabilities. To me, the most important thing long term for systems of record is to be able to make them go faster, and use that extra latency headroom to bring in systems of analytics, AI systems, other systems, and provide automation of individual business processes — increased automation. That is going to happen over time; it's a slow addition, but it means you can use those cloud mechanisms, those additional resources, wherever they are, to provide a clear path to improving the current systems of record. And that is a faster and more cost-effective way than going in for a conversion, or moving the data up to the cloud with lift and shift, for these types of applications. >> So these are all kind of related: I get superior innovation speed, because I'm taking on new technology faster; I get faster time to value, because I'm not having to perform a bunch of tasks; and I can imbue additional types of work in support of automation, without dramatically expanding the transactional latency and arrival rate of transactions within the system of record. Okay, so how did Oracle, and Azure with Oracle, stack up in your analysis? >> So first of all, it's important to say that both are viable solutions; they both would work. But the impact in terms of the total business value, including obviously any savings on people and things like that, was $290 million — nearly $300 million — additional. >> For how big a company? >> For a Fortune 2000 customer with around two billion dollars a year in revenue, over five years — so a lot of money. Either way you would save: 200 million if you were with Azure, but 300 with Oracle. So that to me is far, far higher than the cost of IT for that particular company. It's a strategic decision, to be able to get more value out quicker, and for this class of workload, on Oracle, Oracle Cloud at Customer was the best decision. To be absolutely fair, if you were on Microsoft's database and you wanted to go to Microsoft Azure, that would be the better bet — you would get back a lot of those benefits. >> So stay within the stack if you can. >> Correct. >> Alright. So, two billion dollars a year over five years — $10 billion of revenue, roughly — and between $200 million in savings for Microsoft Azure plus Oracle, and $300 million for Oracle Cloud at Customer: a 1% swing. Talk to us about speed and value — what happens on the numerator side of that equation? >> So it is lower in cost overall, but the cost of the actual cloud is a little higher; the pure hardware and equipment cost is a wash, it's not going to change much. >> Got it. >> It might be a little bit more expensive.
You make the savings as well because of the people: fewer operators, a simpler environment. Those are the savings you're going to make, and then you're going to push those back into the organization as increased value that can be given to the line of business. >> So the conclusion of the research is that if you are a CIO, you look at your legacy applications — which are going to be difficult to move — and you go with the stack that's best for those legacy applications. >> Correct. >> And the vast majority of systems of record are running on Oracle. >> Large scale, yes. >> Large scale. Then that means Oracle Cloud at Customer is the superior fit for most circumstances. >> For a lot of those. >> If you're not there, though, then look at other options. >> Absolutely. >> Alright, David Floyer. >> Thank you. >> Thanks very much for being on theCUBE today. And you've been watching another Wikibon Action Item from theCUBE Studios in Palo Alto, California. I'm Peter Burris. Thanks very much for watching. (electronic music)
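To make the ROI framing in that exchange concrete, here is a back-of-the-envelope sketch in Python. The revenue and value figures are the ones quoted in the conversation; the simple ratio arithmetic is our illustration, not Wikibon's published model.

```python
# Back-of-the-envelope sketch of the value comparison discussed above.
# Figures are taken from the conversation; the formulas are illustrative only.

ANNUAL_REVENUE = 2_000_000_000      # Fortune 2000 customer, ~$2B/year
YEARS = 5
five_year_revenue = ANNUAL_REVENUE * YEARS   # ~$10B, as quoted

# Incremental five-year business value of each option (savings + new value)
value_oracle_cloud_at_customer = 300_000_000  # "$290, nearly $300 million"
value_azure_stack_plus_oracle = 200_000_000

swing = value_oracle_cloud_at_customer - value_azure_stack_plus_oracle
swing_pct = 100 * swing / five_year_revenue

print(f"Five-year revenue: ${five_year_revenue:,}")
print(f"Difference between options: ${swing:,} ({swing_pct:.0f}% of revenue)")
# -> Difference between options: $100,000,000 (1% of revenue),
#    which matches the "1% swing" mentioned in the conversation.
```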
Wikibon 2019 Predictions
>> Hi, I'm Peter Burris, Chief Research Officer at Wikibon, and welcome to another special digital community event. Today we are going to be presenting Wikibon's 2019 trends. Now, I'm here in our Palo Alto studios in kind of a low-tech mode, precisely because all our crews are out at all the big shows, bringing you the best of what's going on in the industry and broadcasting it over theCUBE. But that is okay, because I've asked each of our Wikibon analysts to use a similar approach to present their insights into what will be the most impactful trends for 2019. The way we are going to do this is, first, we are going to use this video as the base for getting our insights out, and then at the end we are going to run a CrowdChat to give you an opportunity to present your insights back to the community. So at the end of this video, please stay with us, and share your insights, share your thoughts and your experience, and ask your questions about what you think will be the most impactful trends of 2019 and beyond. >> A number of years ago Wikibon predicted that cloud, while dominating computing, would not feature all data moving to the cloud, but rather the cloud experience and cloud services moving to the data. We call that true private cloud computing, and nothing has occurred in the last couple of years to suggest that we were in any way wrong about this prediction. In fact, if we look at what's going on with the edge, our expectation is that increasingly edge computing and on-premises needs will further accelerate the rate at which cloud experiences end up on premises, end up at the edge, and that that will be the dominant model for how we think about computing over the course of the next few years. That leads to greater distribution of data, and to greater distribution of the places where data actually will be used — all under the aegis of cloud computing, but not utilizing the centralized public cloud model that so many predicted. >> A prediction we'd like to talk about is how multi-cloud and the orchestration of those environments fit together. At Wikibon, we've been looking for many years at how digital businesses are going to leverage cloud, and cloud is not a singular entity; the outcomes that you are looking for often require that you use more than one cloud, especially if you are looking at public clouds. We've been seeing the ascendance of Kubernetes as a fundamental, foundational piece of enabling this multi-cloud environment. Kubernetes is not the sole thing — and of course, you don't want to overemphasize any specific tool — but you are seeing, driven by the CNCF and a broad ecosystem, that Kubernetes is getting into all the platforms, both public and private cloud, and we predict that by 2020, 90% of multi-cloud enterprise applications will use Kubernetes to enable their multi-cloud strategies.
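To make that prediction concrete, here is a minimal sketch of the pattern it implies: one Kubernetes Deployment definition applied, through the same API, to clusters running on different public clouds. This is our illustration, not Wikibon research output; the kubeconfig context names are hypothetical, and it assumes the `kubernetes` Python client plus valid credentials for each cluster.

```python
# Minimal sketch of Kubernetes as the common multi-cloud layer: the same
# Deployment applied to clusters on different clouds. The kubeconfig
# context names below are hypothetical; substitute your own.
from kubernetes import client, config

CONTEXTS = ["aws-prod", "azure-prod", "gcp-prod"]  # hypothetical names

def make_deployment() -> client.V1Deployment:
    """Build one cloud-agnostic Deployment object reused everywhere."""
    container = client.V1Container(name="web", image="nginx:1.25")
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": "web"}),
        spec=client.V1PodSpec(containers=[container]),
    )
    spec = client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=template,
    )
    return client.V1Deployment(metadata=client.V1ObjectMeta(name="web"),
                               spec=spec)

for ctx in CONTEXTS:
    # Load credentials for one cluster at a time from the local kubeconfig,
    # then apply the identical workload definition to it.
    api = client.AppsV1Api(config.new_client_from_config(context=ctx))
    api.create_namespaced_deployment(namespace="default",
                                     body=make_deployment())
    print(f"deployed to {ctx}")
```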
One of the biggest challenges that the industry is going to face over the next few years is how to deal with multi-cloud. We predict, ultimately, that a sizable percentage of the marketplace — as much as 90% — will take a multi-cloud-first approach to how they conceive, build, and operate the high-strategic-value applications that engage customers, engage partners, and drive their businesses forward. However, that creates a pressing need for three new classes of technology: technology that provides multi-cloud internetworking; technology that provides orchestration services across clouds; and finally, technology that ensures data protection across multi-cloud. While each of these domains by itself is relatively small today, we think that over the next decade they will each grow into markets that are tens of billions, if not hundreds of billions, of dollars in size. >> The prediction I'd like to talk about is RPA — Robotic Process Automation. We've observed that there's a widening gap between how many jobs are available worldwide and the number of qualified candidates to fill those jobs. RPA, we believe, is going to become a fundamental approach to closing that gap, and to really operationalizing artificial intelligence. Executives that we talk to on theCUBE realize they just can't keep throwing bodies at the problem, so these so-called "software robots" are going to become increasingly easy to use. And we think that low-code or no-code approaches to automation and automating workflows are going to drive the RPA market from its current position, which is around a billion dollars, to more than ten times that — ten billion dollars plus — by 2023. >> I predict that in 2019 what we are going to see is more containerization of AI and machine learning for deployment to the edge, throughout the multi-cloud. It's a trend that's been going on for some time. In particular, we are going to see an increasing focus on technologies and projects such as Kubeflow, which was established in the year just gone by, to support that approach of containerizing AI out to the edges. In 2019, we are going to see the big guys — Google, and AWS, and Microsoft, and others in the whole AI space — begin to rally around the need for a common framework stack such as Kubeflow, because really that is where many of their customers are going. The data scientists and app developers who are building these applications want to manage them over Kubernetes, using these CNCF stacks of tooling and projects, to enable a degree of supportability, maintainability, and scalability around containerized intelligent applications. >> My prediction is around the move from linear programming and data models to matrix computing. This is a move that's happening very quickly indeed, as new types of workload come on. These workloads include AI, VR, AR, and video gaming, very much at the edge of things. And ARM is the key provider of the types of computing chips and computing models that are enabling this type of programming to happen. So my prediction is that this type of programming is going to start very quickly in 2019. It's going to roll very rapidly — about two years from now, in 2021 — into the enterprise market space, but the preparation for this type of computing, and the movement of work right to the edge, very, very close to the sensors, very, very close to where the users themselves are, is going to accelerate over the next decade. >> The prediction I'd like to make for 2019 is that the CNCF, as the steward of the growing cloud-native stack, will expand its range of projects to include the frontier topics — really the frontier paradigms — in microservices and cloud computing; I'm talking about serverless. My prediction is that Virtual Kubelet will become an incubating project at CNCF, to address the need to provide serverless, event-driven interfaces to containerized, orchestrated microservices.
I'd also like to predict that VM and container coexistence will proceed apace through projects such as, especially, KubeVirt, which I think will also become a CNCF project, and I think it will be adopted fairly widely. And one last prediction in that vein: the recent working group that CNCF has established with Eclipse around IoT, the internet of things — I think that will come to fruition. There is an Eclipse project called Ditto that uses IoT, and AI, and digital twins in a very interesting way for industrial and other applications. I think that will come under the auspices of the CNCF in the coming year. >> Security remains vexing to the cloud industry, and to the IT industry overall. Historically, it's been about restricting access, largely at the perimeter, and once you were through the perimeter, a user would have access to an entire organization's digital resources — whether they be files, or applications, or identities. We think that has to change, largely as a consequence of businesses now being restructured, reorganized, and re-institutionalized around data. What's going to have to happen is that a notion of zero-trust security will be put in place that is fundamentally tied to the notion of sharing data: instead of restricting access at the perimeter, you have to restrict access at the level of data. That is going to have an enormous set of implications, overall, for how the computing industry works. But two key technologies are essential to making zero-trust security work. One is software-defined infrastructure, so that you can make changes to the configuration of your security policies and instances through other software; and, very importantly, high-quality analytics that bring the network and security functions more closely together and, through that shared data, increase the use of AI, the use of machine learning, et cetera, ensuring higher-quality security models across multiple clouds. >> It's always great to hear from the Wikibon analysts about what is happening in the industry and what is likely to happen. But now let's hear from you. So let's jump into the CrowdChat — it's an opportunity for you to present your ideas and your insights, ask your questions, and share your experience. What will be the most important trends and issues in 2019 and beyond, as far as you are concerned? Thank you very much for listening. Now, let's CrowdChat.
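As a rough sketch of what "restrict access at the level of data" can mean in code, here is a toy data-level access check. The policy model, classification labels, and all names are hypothetical, invented for illustration; no specific product works exactly this way.

```python
# Minimal sketch of data-level (zero-trust) access control, as opposed to
# perimeter-level control. All names and labels here are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class DataObject:
    name: str
    classification: str   # e.g. "public", "internal", "restricted"
    owner_team: str

@dataclass(frozen=True)
class Principal:
    user: str
    team: str
    clearance: str        # same hypothetical scale as classifications

# Hypothetical classification levels, ordered least to most sensitive.
LEVELS = ["public", "internal", "restricted"]

def may_read(p: Principal, d: DataObject) -> bool:
    """Every access is checked per data object; there is no trusted 'inside'."""
    cleared = LEVELS.index(p.clearance) >= LEVELS.index(d.classification)
    same_team = p.team == d.owner_team
    # Restricted data additionally requires team ownership, not just clearance.
    return cleared and (d.classification != "restricted" or same_team)

ledger = DataObject("orders-ledger", "restricted", "finance")
alice = Principal("alice", "finance", "restricted")
bob = Principal("bob", "marketing", "restricted")
print(may_read(alice, ledger))  # True  - cleared and on the owning team
print(may_read(bob, ledger))    # False - inside the network, still denied
```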
James Kobielus & David Floyer, Wikibon | VMworld 2018
>> Announcer: From Las Vegas, it's theCUBE, covering VMworld 2018, brought to you by VMware and its ecosystem partners. >> And we're back here at the Mandalay Bay in somewhat beautiful Las Vegas, where we're doing the third day of VMworld on theCUBE. I'm Peter Burris, and I'm joined by my two lead analysts here at Wikibon: Jim Kobielus, who's looking at a lot of the software stuff, and David Floyer, who's helping to drive a lot of our hardware research. Guys, you've spent an enormous amount of time talking to an enormous number of customers and a lot of partners, and we all participated in the Analyst Day on Monday. Let me give you my first impressions, and then I want to ask you guys some questions. This is, I guess, my third VMworld in a row, and my impression is that this has been the most coherent of the VMworlds I've seen. You can tell when a company's going through a transition, because they're reaching to try to bring a story together, and that sets the tone. But this one — Pat Gelsinger did a phenomenal job of setting up the story. It makes sense, it's coherent, possibly because it aligns so well with what we think is going to happen in the industry. So I want to ask you guys, based on three days of walking around and talking to customers: David Floyer, what's been the high point? What have you found is the most interesting thing? >> Well, I think the most interesting thing is the excitement that there is over VMware. If you contrast that with two or three years ago — the degree of commitment of customers to VMware, the degree of integration they're wanting to make, the rate of change and the ideas that have come out of VMware — it's like two different companies, totally different companies. Some of the highlights for me were RDS: the bringing of relational database services — MariaDB and all the other services — from AWS to on-site as well as on the AWS cloud. That's a very exciting thing to me, and a hint to me that AWS is going to have to get serious about being on premises. I think it's a really interesting point: after a lot of conversations with a lot of folks saying AWS believes it's all going to go up to the cloud, and wondering whether that also is a one-way street for VMware customers, now we're seeing it's much more of a bilateral relationship — it's about moving work to the right place. And that's the second thing: the embracing of multi-cloud by everybody. One cloud is not going to do everything. There are going to be SaaS clouds, there are going to be multiple places where people put certain workloads because that's the best strategic fit, and the acceptance in the marketplace that that is where it's going to go, I think, again is a major change — so hybrid cloud and multi-cloud environments. And then the third thing is, I think the richness of the ecosystem is amazing. Going on the floor, the number of people that have come to talk to us with new ideas — really fascinating ideas — is something I haven't seen at all for the last three or four years. >> So I'm going to come back to you on that, but it goes back to the first point you made: there is a palpable excitement here about VMware, where two or three years ago the conversation was how much longer the franchise was going to be around. Now it's clear it's going to be around. Jim, how about you? >> Yeah, actually, like you guys, I'm a newbie to VMworld — this is my very first. Remember, I'm a big data analyst, I'm a data science
and AI guy. But obviously I've been aware of VMware, and I've had many contacts with them over the years. My takeaway — and I like Pat Gelsinger's take; I agree with you, Peter, it's really coherent — and I like that phrase, even though it sounds a bit hokey: they are the dial tone to the multi-cloud. That really gives you a strong sense. Who else can you characterize, in this whole market space of cloud computing, as essentially a multi-cloud provider who provides the unifying virtualization glue — helping customers who are investing in AWS, and maybe adopting a bit of Google and Microsoft Azure and so forth, by providing a virtualization layer above server virtualization: network virtualization, VDI, all the way to the edge? Nobody is putting it all together in quite the way that VMware is. One of my chief takeaways is similar to David's, which is that in terms of the notion of a hybrid cloud, VMware — with what it's doing with RDS, but also projects like Project Dimension, which is a project in progress — is taking essentially the entire VMware virtualization stack and putting it onto an appliance for deployment at the edges, and then managing it; VMware plans it as an end-to-end managed edge cloud service, and so forth. Wow — the blurring of public and private cloud. I don't even think the term hybrid cloud applies; it's just a blurring into the common cloud. The cloud is moving to the workload, the cloud is moving to the data, which is exactly what we say. They are halfway there in terms of that vision — halfway in the sense that RDS on VMware has been announced, and with Project Dimension they're well along. From the briefings in the analyst space, I'm really impressed with how they're architecting this. I think they've got a shot to really dominate. >> Well, I'll tell you, I would agree with you — just to maybe provide a slightly different version of one of the things you said. I definitely agree: I think what VMware hopes to do, and I think they're not alone, is to have AWS look like an appliance to their console, to have Azure look like an appliance to their console. So through VMware you can get access to whatever services you need, including your VMs inside those clouds, and increasingly their goal is to be that control point, that management point, for all of these different resources that are building up. And it is very compelling. But there's one area where I still think we need more — as analysts we've always got to look for what more is required — and I hear what you say about Project Dimension, but I think the edge story still requires a fair amount of work. Yes, there's a project in place, but the edge is going to be an increasingly important locus of how architectures get laid out, how people think about applications in the future, how design happens, how methodologies for building software work. David, what do you think? When you look out, what more is needed, for you? >> So really, I think there are two things that give me a small concern. The edge — that's a long-term view, so they've got time to get it right — but the edge view is very much an IT view, top-down, and they are looking to put in place everything that they think the OT people should fit in with. I think that is personally not going to be a winning strategy. You have to take it from the bottom up. The world is going to go towards
devices — very rich devices and sensors, with lots of software right on the device and the inference work done on those devices — and the job of IT will be to integrate those devices. It won't be those devices taking on the standards of IT; it'll be IT that has to shape itself to look after all those devices. So that's the main viewpoint I think needs adjustment, and it will come, I'm sure, over time. >> But as you said, there's a lot of computer science there, and an enormous number of new partnerships are going to be fabricated exactly to make this happen. Jim, what do you think? >> Yeah, I agree. In terms of partnerships, one big gap for both VMware and Dell Technologies — in partnerships and homegrown technology — is AI. Now they have a project at VMware called Project Magna, which is really AIOps. In fact, I published a Wikibon report this week on AIOps — AI to drive IT service management — and they're doing some stuff there, they're working on that project; it's just in the beginning stages. I think what's going to happen is that VMware and Dell Technologies are going to have to make strategic acquisitions of AI solution providers to build up that capability, because that's going to be fundamental to their ability to manage this complex multi-cloud fabric from end to end, continuously. They need that competency internally; it can't simply be a partner providing it — that's got to be a core competency. >> So, you know, I'm going to push on that; I'll give you the contrarian point of view. We've actually had a lot of conversations with VMware about this. Is that a reflection of David's point about top-down — buying things and pushing them down — as opposed to other conversations we've had about how the edge is going to evolve, where a lot of OT guys are going to combine business expertise and technology expertise to create specialized solutions, and then VMware is going to have to reach out to them and make VMware relevant to them? Do you think it's going to be VMware buying a bunch of stuff and integrating a solution, or is it going to be the solutions coming from elsewhere, and VMware just becoming more relevant to them? Now, you can still buy a bunch of stuff to get that horizontal capability in place, but which way do you think it's going to go? >> I think it's going to be the top-down; they're going to buy stuff. I talked to one of the channel people this morning about this — they've got an IoT connected bundle and so forth that they announced at this show — and I think they'd agree with me that the core AI technology needs to be built into the fundamentals, like the IoT stack bundle that they then provide to the channel partners with channel-specific content that the partners can tweak and customize to their specific needs. But the core requirements for AI are horizontal: it's the ability to run neural networks to do predictive analysis, anomaly detection, and so forth. This is all cross-cutting, across all domains. It has to be in the core application stack; it can't simply be something they source for particular channel opportunities. It has to be leveraged across, you know, the same core TensorFlow models for anomaly detection — for manufacturing, for logistics, for customer relationship management, whatever. >> So are you saying, essentially, that VMware becomes that horizontal play even if the solution providers are increasingly close to the actual action, where the edge is? >> I'm going to disagree, gently,
on that — but we'd still be friends. (laughter) You know, I'm an OT guy at heart, I suppose, and I think that is going to be a stronger force in terms of VMware. There will be some places where it will be top-down, but other places where they're going to need to adjust. But I think there's one other very interesting area I'd like to bring up in terms of this question of acquisition. What we heard about beforehand was excellent results: VMware has been adding, you know, a billion dollars a year in free cash, and they have thirteen billion in short-term cash, and the refinancing from Dell is going to take eleven of that thirteen and put it towards the company — >> Towards Dell Tech. >> Yes, towards Dell as a holding company, and Silver Lake — towards those partners. I personally believe that there is such a lot of opportunity out there. If you take NSX, for example, it has the potential to do things in new areas. They're going to need to provide solutions in those new areas and aggressively go after them, and that's going to mean big investments — and there are many other areas where I think they are going to need acquisitions to strengthen the whole story. They have the whole multi-cloud story, with NSX as a network routing and virtualization backplane — I mean, it needs to go real-time, latency-sensitive, with guaranteed latencies. They need that. Big investments. >> Yeah, they need to go there. >> So we're agreeing on that, and I get concerned that it's not going to be given the right resources to be able to actually go after the opportunities that they have genuinely created. It will be interesting to see how that plays out. What I think you're saying, though, is that there is going to be a set of solution players that VMware is going to have to make significant moves towards, to make itself relevant to them, and then the question is where the value story is, what the value proposition is. It's probably going to be like all partnerships: some are going to claim that they're doing it all, and VMware is going to claim that it does more of it, but at the end of the day VMware has to make itself relevant to the edge, however that happens. I want to pick up on NSX, because I'm a pretty big believer that NSX may be the very special crown jewel in a lot of this stuff. This notion of hybrid cloud — whatever we call it; let's just call it extended cloud for lack of a better word — is predicated on the idea that I also have a network that can naturally and easily not just bridge but truly internetwork with a lot of different cloud sources, and also a lot of different cloud locations. And there are not a lot of technologies out there that are great candidates to do that. I look at NSX and I'm wondering — I don't want to take the metaphor too far, but is that going to be kind of a new TCP/IP for the cloud? In the sense that you're still going to run over TCP/IP, and you're still going to run over the Internet, but now we're going to get greater visibility into jobs, into workloads, into management infrastructures, into data locations and data placement and predictive movement — and NSX is going to be at the vanguard of showing how that's going to work. And the security side of that especially: to be able to know what is connected to what, and what shouldn't be connected to what, and to be able to have that. >> Yeah, they need stateful
structured streaming others Kafka flink whatever they need that to be baked into the whole nsx virtualization layer that much more programmable and that provides that much better a target for applications all right last question then we got a wrap guys David as you walk out the door get in the plane what are you taking away what's your last impression my last impression is one of genuine excitement wanting to work wanting to follow up with so many of the smaller organizations the partners that have been here and who are genuinely providing in this ecosystem a very rich tapestry of of capability that's great Jim my takeaway is I want to see their roadmap for kubernetes and serverless there wasn't a hole last year they made an announcement of a serverless project I forgot what the code name is didn't hear a whole lot about it this year but they're going up the app stack they got a coop you know distribution you know they're if they need a developer story I mean developers are building functional apps and so forth you know you can and they're also containerized they need they need a developer story and they need a server list story and they need to you need to bring us up to speed on where they're going in that regard because AWS their predominant partner I mean they got lambda functions and all that stuff you know that's that's the development platform of the present and future and I'm not hearing an intersection of that story with VMware's a story yeah my last thing that I'll say is that I think that for the next five years VMware is gonna be one of the companies that shapes the future of the cloud and I don't think we would have said that a couple of names no they wouldn't I agree with you so you said yes all right so this has been the wiki bond research leadership team talking about what we've heard at VMware this year VMworld this year a lot of great conversation feel free to reach out to us and if you want to spend more time with rookie bond love to have you once again Peter burrows for David floor and Jim Kabila's thank you very much for watching the cube we'll talk to you again [Music]
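The "stateful structured streaming" point above can be made concrete. Below is a minimal sketch, in plain Python rather than Kafka or Flink, of what per-flow state over a network event stream looks like: each event updates running statistics for its flow, and a violation of a guaranteed latency bound can trigger action in near real time. The event shape, flow names, and the 20 ms bound are illustrative assumptions, not anything NSX actually exposes.

```python
# Toy stateful stream processor: keep running per-flow state and flag
# latency-bound violations as events arrive. Illustrative only.
from collections import defaultdict

LATENCY_BOUND_MS = 20.0  # assumed service-level bound

def process_stream(events):
    """Consume (flow_id, latency_ms) events, keeping running state per flow."""
    state = defaultdict(lambda: {"count": 0, "total_ms": 0.0, "violations": 0})
    for flow_id, latency_ms in events:
        s = state[flow_id]
        s["count"] += 1
        s["total_ms"] += latency_ms
        if latency_ms > LATENCY_BOUND_MS:
            s["violations"] += 1
            # A programmable network layer could reroute or alert here,
            # in near real time, which is the point being argued above.
            print(f"flow {flow_id}: {latency_ms:.1f} ms exceeds bound")
    return state

# A small synthetic stream of flow-latency samples (hypothetical flows).
sample = [("vm1->vm2", 4.2), ("vm1->vm2", 31.0), ("vm3->db", 9.8)]
stats = process_stream(sample)
for flow, s in stats.items():
    print(flow, "avg", round(s["total_ms"] / s["count"], 1),
          "violations", s["violations"])
```

The same structure is what Kafka Streams or Flink provide at scale: partitioned state kept alongside the stream, so decisions do not require a round trip to a separate database.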
**Summary and Sentiment Analysis are not shown because of an improper transcript.**
ENTITIES
Entity | Category | Confidence |
---|---|---|
David | PERSON | 0.99+ |
James Kobielus | PERSON | 0.99+ |
Jim Kobielus | PERSON | 0.99+ |
thirteen billion | QUANTITY | 0.99+ |
David Floyer | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Jim Kobielus | PERSON | 0.99+ |
VMware | ORGANIZATION | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
Jim | PERSON | 0.99+ |
first impressions | QUANTITY | 0.99+ |
three days | QUANTITY | 0.99+ |
two things | QUANTITY | 0.99+ |
thirteen | QUANTITY | 0.99+ |
Peter | PERSON | 0.99+ |
last year | DATE | 0.99+ |
Pat Gelsinger | PERSON | 0.99+ |
Moore | PERSON | 0.99+ |
Mandalay Bay | LOCATION | 0.99+ |
first point | QUANTITY | 0.99+ |
second thing | QUANTITY | 0.98+ |
first | QUANTITY | 0.98+ |
third thing | QUANTITY | 0.97+ |
this year | DATE | 0.97+ |
third | QUANTITY | 0.97+ |
this year | DATE | 0.97+ |
NSX | ORGANIZATION | 0.97+ |
two-three years ago | DATE | 0.97+ |
David Floyer | PERSON | 0.96+ |
VMworld | ORGANIZATION | 0.96+ |
two different companies | QUANTITY | 0.95+ |
both | QUANTITY | 0.95+ |
VMworld 2018 | EVENT | 0.95+ |
MariaDB | TITLE | 0.95+ |
Wikibon | ORGANIZATION | 0.95+ |
Microsoft | ORGANIZATION | 0.95+ |
this week | DATE | 0.94+ |
two lead analysts | QUANTITY | 0.94+ |
David Floyer | PERSON | 0.93+ |
Dell Technologies | ORGANIZATION | 0.93+ |
Monday | DATE | 0.93+ |
third day | QUANTITY | 0.93+ |
two three years ago | DATE | 0.92+ |
one area | QUANTITY | 0.92+ |
this morning | DATE | 0.91+ |
one | QUANTITY | 0.91+ |
Kafka | TITLE | 0.9+ |
Analyst Day | EVENT | 0.89+ |
VMworld | EVENT | 0.89+ |
Khamsin | ORGANIZATION | 0.88+ |
VMware | TITLE | 0.84+ |
Wikibon | ORGANIZATION | 0.84+ |
Wikibon | ORGANIZATION | 0.83+ |
one cloud | QUANTITY | 0.82+ |
lot of partners | QUANTITY | 0.82+ |
eleven | QUANTITY | 0.81+ |
a billion dollars a year | QUANTITY | 0.81+ |
David Floyer, Wikibon | Pure Storage Accelerate 2018
>> Narrator: Live from the Bill Graham Auditorium in San Francisco, it's theCUBE, covering Pure Storage Accelerate, 2018, brought to you by Pure Storage. >> Welcome back to theCUBE's coverage of Pure Storage Accelerate 2018. I'm Lisa Martin. Been here all day with Dave Vellante. We're joined by David Floyer now. Guys, really interesting, very informative day. We got to talk to a lot of Puritans, but also a breadth of customers, from Mercedes Formula One, to Simpson Strong-Tie, to UCLA's School of Medicine. Lot of impact that data is making in a diverse set of industries. Dave, you've been sitting here, with me, all day. What are some of the key takeaways that you have from today? >> Well, Pure's winning in the marketplace. I mean, Pure said, "We're not going to bump along. "We're going to go for it. "We're going to drive growth. "We don't care if we lose money, early on." They bet that the street would reward that model, and it has. Kind of a little mini Amazon, a version of the Amazon model. Grow, grow, grow, worry about profits down the road. They're eking out a slight, little positive free cashflow, on a non-GAAP basis, so that's good. And they were first with All-Flash, really kind of early on. They kind of won that game. You heard David, today. The NVMe, the first with NVMe. No uplifts on pricing for NVMe. So everybody's going to follow that. They can do the Evergreen model. They can do these things and claim these things as we were first. Of course, we know, David Floyer, you were first to make the call, back in 2008, (laughs) on Flash and the All-Flash data center, but Pure was right there with you. So they're winning in that respect. Their ecosystem is growing. But, you know, storage companies never really have this massive ecosystem that follow them. They really have to do integration. So that's, that's a good thing. So, you know, we're watching growth, we're watching continued execution. It seems like they are betting that their product portfolio, their platform, can serve a lot of different workloads. And it's going to be interesting to see if they can get to two billion, the kind of, the next milestone. They hit a billion. Can they get to two billion with the existing sort of product portfolio and roadmap, or do they have to do M&A? >> David: You're right. >> That's one thing to watch. The other is, can Pure remain independent? David, you know well, we used to have this conversation, all the time, with the likes of David Scott, at 3PAR, and the guys at Compellent, Phil Soran and company. They weren't able, Frank Slootman at Data Domain, they weren't able to stay independent. They got taken out. They weren't pricey enough for the market not to buy them. They got bought out. You know, Pure, five billion dollar market cap, that's kind of rich for somebody to absorb. So it was kind of like NetApp. NetApp got too expensive to get acquired. So, can they achieve that next milestone, two billion? Can they get to five billion? The big difference-- >> Or is there any hiccup, on the way, which will-- >> Yeah, right, exactly. Well the other thing, too, is that, you know, NetApp's market was growing, pretty substantially, at the time, even though they got hit in the dot-com boom. The overall market for Pure isn't really growing. So they have to gain share in order to get to that two billion, three billion, five billion dollar mark. >> If you break the market into flash and non-flash, then they're in the much better half of the market. That one is still growing, from that perspective.
>> Well, I kind of like to look at the Server SAN piece of it. I mean, they use this term, by Gartner, today, the something, accelerated, it's a new Gartner term, in 2018-- >> Shared Accelerated Storage. >> Shared Accelerated Storage. Gartner finally came up with a category that we called Server SAN. I've been joking all day, Gartner has a better V.P. of naming than we do. (chuckles) We're lookin' at Server SAN. I mean, I started first talking about it, in 2009, thanks to your guidance. But that chart that you have shows the sort of Server SAN, which is essentially Pure, right? It's the, it's not-- >> Yes. It's a little more software than Pure is. But Pure is an awful lot of software, yes. And showing it growing, at the expense of the other segments, you know. >> David: Particularly sad. >> Particularly sad. Very particularly sad. >> So they're really well positioned, from that standpoint. And, you know, the other thing, Lisa, that was really interesting, we heard from customers today, that they switched for simplicity. Okay, not a surprise. But they were relatively unhappy with some of their existing suppliers. >> Right. >> They got kind of crummy service from some of their existing suppliers. >> Right. >> Now these are, maybe, smaller companies. One customer called out SimpliVity, specifically. He said, "I loved 'em when they were an independent company, "now they're part of HPE, meh, "I don't get service like the way I used to." So, that's a sort of a warning sign and a concern. Maybe, you know, HPE's prioritizing the bigger customers, maybe the more profitable customers, but that can come back to bite you. >> Lisa: Right. >> So Pure, the point is, Pure has the luxury of being able to lose money, service like crazy those customers that might not be as profitable, and grow from its position of a smaller company, on up. >> Yeah, besides the Evergreen model and the simplicity being, resoundingly, drivers and benefits that customers across, you know, from Formula One to medical schools, are having, you're right. The independence that Pure has currently is a selling factor for them. And it's also probably a big factor in retention. I mean, they've got a Net Promoter Score of over 83, which is extremely high. >> It's fantastic, isn't it? I think only VMI, that I know of, has an even higher one, but it's a very, very high score. >> It's very high. They added 300 new customers, last quarter alone, bringing their global customer count to over 4800. And that was a resounding benefit that we were hearing. They, no matter how small, if it's Mercedes Formula One or the Department of Revenue in Mississippi, they all feel important. They feel like they're supported. And that's really key for driving something like a Net Promoter Score. >> Pure has definitely benefited from, it's taken share from EMC. It did early on with VMAX and Symmetrix and VNX. We've seen Dell EMC storage business, you know, decline. It probably has hit bottom, maybe it starts to grow again. When it starts to grow again, I think, even last quarter, its growth, in dollars, was probably the size of Pure. (chuckles) You know, so, but Pure has definitely benefited from stealing share. The flip side of all this is, when you talk to, you know, the CxOs, the big customers, they're doing these big digital transformations. They're not buying products, you know, they're buying transformations. They're buying sets of services.
They're buying relationships, and big companies like Dell and IBM and HPE, who have large services arms, can vie for certain business that Pure, necessarily, can't. So, they've got the advantage of being smaller, nimbler, a best-of-breed product, but they don't have this huge portfolio of capabilities that gives them a seat at the CxO table. And you saw that, today. Charlie Giancarlo, his talk, he's a techie. The guys here, Coz, Hat, they're techies. They're hardcore storage guys. They love storage. It reminds me of the early days of EMC, you know, it's-- >> David: Or NetApp. >> Yeah. Yeah, or NetApp, right. They're really focused on that. So there's plenty of market for them, right now. But I wonder, David, if you could talk about, sort of architecturally, people used to criticize the two-controller, you know, approach. It obviously seems to be doing very well. People take shots at their, the Evergreen model, saying "Oh, we can do that too." But, again, Pure was first. Architecturally, what's your assessment of Pure? >> So, the Evergreen, I think, is excellent. They've gone about that well. I think, from a straightforward architecture, they kept it very simple. They made a couple of slightly odd decisions. They went with their own NAND chips, putting them into their own stuff, which made them much smaller, much more compact, completely in charge of the storage stack. And that was a very important choice they made, and it's come out well for them. I have a feeling, my own view, is that M.2 is actually going to be the form factor of the future, not the SSD. The SSD just fitted into a hard disk slot. That was its only benefit. So, when that comes along, and the NAND vendors want to increase the value that they get from these stacks, etc., I'm a little bit nervous about that. But, having said that, they can convert back. >> Yeah, I mean, that seems like something they could respond to, right? >> Yeah, absolutely. >> I was at the Micron financial analysts' meeting, this week. And a lot of people were expecting that, you know, the memory business has always been very cyclical, it's like the disk drive business. But, it looks like, because of the huge capital expenses required, it looks like supply, looks like they've got a good handle on supply. Micron made a good strong case to the street that, you know, the pricing is probably going to stay pretty favorable for them. So, I don't know what your thoughts are on that, but that could be a little bit of a headwind for some of the systems suppliers. >> I take that with a pinch of salt. They always want to have the market saying it's not going to go down. >> Of course, yeah. And then it crashes. (chuckles) >> The normal marketplace, for any of that, is to go through this series of S-curves: as you reach a certain point of volume, and 3D NAND has reached that point, it will go down, inevitably, and then QLC comes in, and then that will go down, again, through that curve. So, I don't see the marketplace changing. I also think that there's plenty of room in the marketplace for enterprise, because the biggest majority of NAND production is for consumer; 80% goes to consumer. So there's plenty of space, in the marketplace, for enterprise to grow.
>> So, at one point, you had predicted there would be a crossover between the cost per bit of flash and spinning disk. Has that crossover occurred, or-- >> Well, I added in the concept of sharing. >> Raw. >> Yeah, raw. But, added in the cost of sharing, the cost-benefit of sharing, and one of the things that really impresses me is their focus on sharing, which is to be able to share that data, for multiple workloads, in one place. And that's excellent technology, they have. And they're extending that from snapshots to cloud snaps, as well. >> Right. >> And I understand that benefit, but from a pure cost per bit standpoint, the crossover hasn't occurred? >> Oh no. No, they're never going to. I don't think they'll ever get to that. The second that happens, disks will just disappear, completely. >> Gosh, guys, I wish we had more time to wrap things up, but thanks, so much, Dave, for joining me all day-- >> Pleasure, Lisa. >> And sporting The Who to my Prince symbol. >> Awesome. >> David, thanks for joining us in the wrap. We appreciate you watching theCUBE, from Pure Storage Accelerate, 2018. I'm Lisa Martin, for Dave and David, thanks for watching.
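The crossover argument in that exchange is easy to make concrete with arithmetic. The claim is that flash competes on effective cost per gigabyte once data reduction and sharing are factored in, even while its raw cost per bit stays well above disk. The prices and ratios below are illustrative assumptions only, not figures quoted by Pure or anyone else.

```python
# Illustrative-only arithmetic for the raw vs. effective $/GB argument.
raw_flash_per_gb = 0.25   # assumed raw flash $/GB
raw_disk_per_gb = 0.03    # assumed raw disk $/GB
data_reduction = 5.0      # assumed dedupe + compression ratio on flash
sharing_factor = 2.0      # assumed reuse of one copy across workloads

effective_flash_per_gb = raw_flash_per_gb / (data_reduction * sharing_factor)
print(f"raw flash      : ${raw_flash_per_gb:.3f}/GB")
print(f"raw disk       : ${raw_disk_per_gb:.3f}/GB")
print(f"effective flash: ${effective_flash_per_gb:.3f}/GB")
# Under these assumptions effective flash lands at $0.025/GB, roughly at
# parity with raw disk even though raw flash stays several times pricier,
# which is the sense in which the raw crossover never needs to happen.
```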
ENTITIES
Entity | Category | Confidence |
---|---|---|
Lisa | PERSON | 0.99+ |
David | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
David Floyer | PERSON | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Frank Slootman | PERSON | 0.99+ |
2018 | DATE | 0.99+ |
2008 | DATE | 0.99+ |
EMC | ORGANIZATION | 0.99+ |
Dave | PERSON | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
VMAX | ORGANIZATION | 0.99+ |
Charlie Giancarlo | PERSON | 0.99+ |
2009 | DATE | 0.99+ |
Gartner | ORGANIZATION | 0.99+ |
two billion | QUANTITY | 0.99+ |
80% | QUANTITY | 0.99+ |
David Scott | PERSON | 0.99+ |
VNX | ORGANIZATION | 0.99+ |
five billion | QUANTITY | 0.99+ |
HPE | ORGANIZATION | 0.99+ |
three billion | QUANTITY | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Symmetrix | ORGANIZATION | 0.99+ |
Department of Revenue | ORGANIZATION | 0.99+ |
300 new customers | QUANTITY | 0.99+ |
Data Domain | ORGANIZATION | 0.99+ |
3PAR | ORGANIZATION | 0.99+ |
Pure | ORGANIZATION | 0.99+ |
last quarter | DATE | 0.99+ |
Pure Storage | ORGANIZATION | 0.99+ |
Phil Soran | PERSON | 0.99+ |
Mississippi | LOCATION | 0.99+ |
UCLA | ORGANIZATION | 0.99+ |
first | QUANTITY | 0.99+ |
Micron | ORGANIZATION | 0.98+ |
Compellent | ORGANIZATION | 0.98+ |
Evergreen | ORGANIZATION | 0.98+ |
today | DATE | 0.98+ |
One customer | QUANTITY | 0.98+ |
one | QUANTITY | 0.98+ |
a billion | QUANTITY | 0.98+ |
over 4800 | QUANTITY | 0.98+ |
San Francisco | LOCATION | 0.97+ |
theCUBE | ORGANIZATION | 0.97+ |
two controller | QUANTITY | 0.97+ |
over 83 | QUANTITY | 0.96+ |
Dell EMC | ORGANIZATION | 0.96+ |
five billion dollar | QUANTITY | 0.96+ |
one place | QUANTITY | 0.95+ |
NVMe | ORGANIZATION | 0.95+ |
Pure | PERSON | 0.95+ |
Simpson Strong-Tie | ORGANIZATION | 0.94+ |
Wikibon | ORGANIZATION | 0.92+ |
NetApp | TITLE | 0.92+ |
Wikibon Action Item, Quick Take | Neil Raden, 5/4/2018
>> Hi, I'm Peter Burris. Welcome to a Wikibon Action Item Quick Take. Neil Raden, Teradata announced earnings this week. What does it tell us about Teradata and the overall market for analytics? >> Well, Teradata announced their first quarter earnings, and they beat estimates for both earnings and revenues, but they announced lower guidance for the fiscal year, which, I guess, you know, failed to impress Wall Street. But recurring quarter one revenue was up 11% year over year, to three hundred and two million dollars, though perpetual revenue was down 23% from quarter one of '17. Consulting was up to 135 million for the quarter. You know, not altogether shabby for a company in transition. But I think what it shows is that Teradata is executing this transitional program, and there are some pluses and minuses, but they're making progress. The jury's out, but I think overall I'd consider it a good quarter. >> What does it tell us about the market? Anything we can glean from Teradata's results about the market overall, Neil? >> It's hard to say. There's a lot of, you know... At the ATW conference last week, I listened to the keynote from Mike Ferguson. I've known Mike for years, and I always think that Mike's the real deal, because he spends all of his time doing consulting, and when he speaks, he's there to tell us what's happening. He gave a great presentation about data warehouse versus data lake, and if he's correct, there is still a market for a company like Teradata. So, you know, we'll just have to see. >> Excellent. Neil Raden, thanks very much. This has been a Wikibon Action Item Quick Take. Talk to you again.
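A quick sanity check on the figures quoted above, assuming the stated 11% growth in recurring revenue is measured year over year: roughly $302 million today implies a year-ago base of about $272 million.

```python
# Back-of-the-envelope check on the earnings figures quoted above.
recurring_q1 = 302e6   # ~$302M recurring Q1 revenue, as stated
growth = 0.11          # ~11% growth, assumed to be year over year

prior_year = recurring_q1 / (1 + growth)
print(f"implied year-ago recurring revenue: ${prior_year / 1e6:.0f}M")
# -> about $272M, consistent with the stated 11% growth
```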
**Summary and Sentiment Analysis are not shown because of an improper transcript.**
ENTITIES
Entity | Category | Confidence |
---|---|---|
Neil Raden | PERSON | 0.99+ |
Neil Raden | PERSON | 0.99+ |
Mike Ferguson | PERSON | 0.99+ |
Mike | PERSON | 0.99+ |
5/4/2018 | DATE | 0.99+ |
Teradata | ORGANIZATION | 0.99+ |
last week | DATE | 0.99+ |
Peter Burris | PERSON | 0.99+ |
23% | QUANTITY | 0.98+ |
Teradata | ORGANIZATION | 0.97+ |
this week | DATE | 0.97+ |
Neal | PERSON | 0.96+ |
both | QUANTITY | 0.96+ |
up to 135 million | QUANTITY | 0.94+ |
nearly a year | QUANTITY | 0.9+ |
first quarter | DATE | 0.88+ |
Wall Street | ORGANIZATION | 0.87+ |
three hundred and two million dollars | QUANTITY | 0.85+ |
years | QUANTITY | 0.8+ |
11% | QUANTITY | 0.8+ |
ATW conference | EVENT | 0.77+ |
one | QUANTITY | 0.76+ |
seventeen | QUANTITY | 0.75+ |
Teradata | ORGANIZATION | 0.68+ |
Neil Raden | PERSON | 0.67+ |
Wikibon | TITLE | 0.66+ |
Wikibon Action Item | The Roadmap to Automation | April 27, 2018
>> Hi, I'm Peter Burris and welcome to another Wikibon Action Item. (upbeat digital music) >> Cameraman: Three, two, one. >> Hi. Once again, we're broadcasting from our beautiful Palo Alto studios, theCUBE studios, and this week we've got another great group. David Floyer in the studio with me along with George Gilbert. And on the phone we've got Jim Kobielus and Ralph Finos. Hey, guys. >> Hi there. >> So we're going to talk about something that's going to become a big issue. It's only now starting to emerge. And that is, what will be the roadmap to automation? Automation is going to be absolutely crucial for the success of IT in the future and the success of any digital business. At its core, many people have presumed that automation was about reducing labor. So introducing software and other technologies, we would effectively be able to substitute for administrative, operator, and related labor. And while that is absolutely a feature of what we're talking about, the bigger issue ultimately is that we cannot conceive of more complex workloads that are capable of providing better customer experience, superior operations, all the other things a digital business ultimately wants to achieve, if we don't have a capability for simplifying how those underlying resources get put together, configured, or organized, orchestrated, and ultimately sustained in delivery. So the other part of automation is to allow for much more work that can be performed on the same resources much faster. It's a basis for how we think about plasticity and the ability to reconfigure resources very quickly. Now, the challenge is, this industry, the IT industry, has always used standards as a weapon. We use standards as a basis of creating ecosystems, or scale, or mass, for even something like mainframes, where there weren't hundreds of millions of potential users. But IBM was successful at using that as a basis for driving their costs down and providing a superior product. That's clearly what Microsoft and Intel did many years ago: they achieved that kind of scale through driving more, and more, and more volume of the technology, ultimately, and they won. But along the way though, each time, each generation has featured a significant amount of competition at how those interfaces came together and how they worked. And this is going to be the mother of all standards-oriented competition. How does one automation framework and another automation framework fit together? One being able to create value in a way that serves another automation framework, but ultimately, for many companies, as a way of creating more scale onto their platform. More volume onto that platform. So this notion of how automation is going to evolve is going to be crucially important. David Floyer, are APIs going to be enough to solve this problem? >> No. That's the short answer to that. This is a very complex problem, and I think it's worthwhile spending a minute just on what are the component parts that need to be brought together. We're going to have a multi-cloud environment. Multiple private clouds, multiple public clouds, and they've got to work together in some way. And you've got the Edge as well. So you've got a huge amount of data all across all of these different areas. And automation and orchestration across that are, as you said, not just about efficiency, they're about making it work. Making it able to work and to be available.
So all of the issues of availability, of security, of compliance, all of these difficult issues are subject to getting this whole environment to be able to work together through a set of APIs, yes, but a lot, lot more than that. And in particular, when you think about it, to me, volume of data is critical. It's who has access to that data. >> Peter: Now, why is that? >> Because if you're dealing with AI and you're dealing with any form of automation like this, the more data you have, the better your models are. And if you can increase that amount of data, as Google shows every day, you will maintain that handle on all that control over that area. >> So you said something really important, because the implied assumption, and obviously, it's a major feature of what's going on, is that we've been talking about doing more automation for a long time. But what's different this time is the availability of AI and machine learning, for example, >> Right. as a basis for recognizing patterns, taking remedial action or taking predictive action to avoid the need for remedial action. And it's the availability of that data that's going to improve the quality of those models. >> Yes. Now, George, you've done a lot of work around this whole notion of ML for ITOM. What are the kind of different approaches? If there's two ways that we're looking at it right now, what are the two ways? >> So there are two ends of the extreme. One is I want to see end to end what's going on across my private cloud or clouds, as well as if I have different applications in different public clouds. But that's very difficult. You get end-to-end visibility but you have to relax a lot of assumptions about what's where. >> And that's called the-- >> Breadth first. So the pro is end-to-end visibility. Con is you don't know how all the pieces fit together quite as well, so you get less fidelity in terms of diagnosing root causes. >> So you're trying to optimize at a macro level while recognizing that you can't optimize at a micro level. >> Right. Now the other approach, the other end of the spectrum, is depth first, where you constrain the set of workloads and services that you're building and that you know about, and how they fit together. And then the models, based on the data you collect there, can become so rich that you have very, very high fidelity root cause determination, which allows you to do very precise recommendations or even automated remediation. What we haven't figured out how to do yet is marry the depth first with the breadth first, so that you have multiple-focus depth first. That's very tricky. >> Now, if you think about how the industry has evolved, we wrote some stuff about what we call, what I call the iron triangle, which is basically a very tight relationship between specialists in technology. So the people who were responsible for a particular asset, be it storage, or the system, or the network. The vendors, who provided a lot of the knowledge about how that worked, and therefore made that specialist more or less successful and competent. And then the automation technology that that vendor ultimately provided. Now, that was not automation technology that was associated with AI or anything along those lines. It was kind of out of the box: buy our tool, and this is how you're going to automate various workflows or scripts, or whatever else it might be. And every effort to try to break that has been met with screaming because, well, you're now breaking my automation routines.
So the depth-first approach, even without ML, has been the way that we've done it historically. But, David, you're talking about something different. It's the availability of the data that starts to change that. >> Yeah. >> So are we going to start seeing new compacts put in place between users and vendors and OEMs and a lot of these other folks? And it sounds like it's going to be about access to the data. >> Absolutely. So let's start at the bottom. You've got people who have a particular component, whatever that component is. It might be storage. It might be networking. Whatever that component is. They have products in that area which will be collecting data. And they will need, for their particular area, to provide a degree of automation. A degree of capability. And they need to do two things. They need to do that optimization and also provide data to other people. So they have to have an OEM agreement not just for the equipment that they provide, but for the data that they're going to give and the data they're going to get back. The automation of the data, for example, going up, and the availability of data to help themselves. >> So contracts effectively mean that you're going to have to negotiate value capture on the data side as well as the revenue side. >> Absolutely. >> The ability to do contracting historically has been around individual products. And so we're pretty good at that. So we can say, you will buy this product. I'm delivering you the value. And then the utility of that product is up to you. When we start going to service contracts, we get a little bit different kind of an arrangement. Now, it's an ongoing continuous delivery. But for the most part, a lot of those service contracts have been predicated on known-in-advance classes of functions, like Salesforce, for example, or the SaaS business, where you're able to write a contract that says over time you will have access to this service. When we start talking about some of this automation, though, now we're talking about ongoing, but highly bespoke, and potentially highly divergent, over a relatively short period of time, such that you have a hard time writing contracts that will prescribe the range of behaviors and the promise about how those behaviors are actually going to perform. I don't think we're there yet. What do you guys think? >> Well, >> No, no way. I mean, >> Especially when you think about realtime. (laughing) >> Yeah. It has to be realtime to get to the end point of automating the actual reply, the actual action that you take. That's where you have to get to. You can't... it won't be sufficient otherwise. I think it's a very interesting area, this contracts area. If you think about solutions for it, I would be going straight towards blockchain-type architectures and dynamic blockchain contracts that would have to be put in place. >> Peter: But they're not realtime. >> The contracts aren't realtime. The contracts will never be realtime, but the >> Accessed? >> access to the data and the understanding of what data is required. Those will be realtime. >> Well, we'll see. I mean, Ethereum's what? Every 12 seconds? >> Well. That's >> Everything gets updated? >> That's... To me, that's good enough. >> Okay. >> That's realtime enough. It's not going to solve the problem of somebody >> Peter: It's not going to solve the problem at the edge. >> At the very edge, but it's certainly sufficient to solve the problem of contracts. >> Okay.
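The exchange above sketches dynamic blockchain contracts that govern who may access what data, settling at roughly block intervals rather than in hard realtime. As a purely illustrative sketch of the idea, the toy ledger below records data-sharing grants as hash-chained entries so either party can detect after-the-fact tampering. It is not any specific blockchain platform's API, and the parties, dataset, and terms are hypothetical.

```python
# Minimal hash-chained ledger of data-sharing grants. A toy stand-in for
# the "dynamic blockchain contract" idea discussed above: real systems
# would add consensus, signatures, and an actual smart-contract runtime.
import hashlib
import json
import time

class ContractLedger:
    def __init__(self):
        self.chain = []

    def record_grant(self, provider, consumer, dataset, terms):
        prev_hash = self.chain[-1]["hash"] if self.chain else "0" * 64
        entry = {
            "provider": provider,
            "consumer": consumer,
            "dataset": dataset,
            "terms": terms,      # e.g. permitted derivative use
            "ts": time.time(),
            "prev": prev_hash,
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.chain.append(entry)
        return entry["hash"]

    def verify(self):
        """True as long as no recorded grant was altered after the fact."""
        prev = "0" * 64
        for e in self.chain:
            body = {k: v for k, v in e.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True

ledger = ContractLedger()
ledger.record_grant("component-vendor", "oem-partner", "telemetry-v1",
                    terms="model training only, no resale")
print(ledger.verify())  # True until any entry is tampered with
```

On a public chain such as Ethereum, grants like these would live in an actual smart contract and settle at block cadence, which is the "every 12 seconds" point raised above.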
>> But, and I would add to that, in addition to having all this data available: let's go back like 10, 20 years and look at Cisco. A lot of their differentiation, and what entrenched them, was sort of universal familiarity with their admin interfaces, and they might not expose APIs in a way that would make it common across their competitors. But if you had data from them and a constrained number of other providers, around which you would build, let's say, these modern big data applications... If you constrain the problem, you can get to the depth first. >> Yeah, but Cisco is a great example of, it's an archetype for what I said earlier, that notion of an iron triangle. You had Cisco admins >> Yeah. that were certified to run Cisco gear, and therefore had a strong incentive to ensure that more Cisco gear was purchased, utilizing a Cisco command line interface that did incorporate a fair amount of automation for that Cisco gear, and it was almost impossible for a lot of companies to penetrate that tight arrangement between the Cisco admin that was certified, the Cisco gear, and the CLI. >> And the exact same thing happened with Oracle. The Oracle admin skillset was pervasive within large >> Peter: Happened with everybody. >> Yes, absolutely >> But, >> Peter: The only reason it didn't happen in the IBM mainframe, David, was because of a >> It did happen, yeah, >> Well, but it did happen, but governments stepped in and said, this violates antitrust. And IBM was forced by law, by court decree, to open up those interfaces. >> Yes. That's true. >> But are we going to see the same type of thing >> I think it's very interesting to see the shape of this market when we look a little bit ahead. People like Amazon are going to have IaaS, they're going to be running applications. They are going to go for the depth way of doing things across... or which way around is it? >> Peter: The breadth. They're going to be end to end. >> But they will go depth in individual-- >> Components. >> Sort of, but they will put together their own type of things for their services. >> Right. >> Equally, other players like Dell, for example, have a lot of different products, a lot of different components in a lot of different areas. They have to go piece by piece and put together a consortium of suppliers to them. Storage suppliers, chip suppliers, and put that together outside, and it's going to have to be a different type of solution that they put together. HP will have the same issue there. And also people like CA, for example, where we'll see an opportunity for them to come in again with great products, overseeing the whole of all of this data coming in. >> Peter: Oh, sure. Absolutely. >> So there's a lot of players who could be in this area. Microsoft, I missed out, of course; they will have the two ends that they can combine together. >> Well, they may have an advantage that nobody else has-- >> Exactly. Yeah. because they're strong in both places. But I have Jim Kobielus. Let me check, are you there now? Do we have Jim back? >> Can you hear me? >> Peter: I can barely hear you, Jim. Could we bring Jim's volume up a little bit? So, Jim, I asked the question earlier, about we have the tooling for AI. We know how to get data. How to build models and how to apply the models in a broad brush way. And we're certainly starting to see that happen within the IT operations management world.
The ITOM world, but we don't yet know how we're going to write these contracts that are capable of better anticipating, putting in place a regime that really describes, what are the limits of data sharing? What are the limits of derivative use? Et cetera. I argued, and here in the studio we generally agreed, that we still haven't figured that out, and that this is going to be one of the places where the tension between, at least in the B2B world, data availability and derivative use, and where you capture value and where those profits go, is going to be significant. But I want to get your take. Has the AI community >> Yeah. started figuring out how we're going to contractually handle obligations around data, data use, data sharing, data derivative use? >> The short answer is, no, they have not. The longer answer is that... can you hear me, first of all? >> Peter: Barely. >> Okay. Should I keep talking? >> Yeah. Go ahead. >> Okay. The short answer is, no, the AI community has not addressed those, those IP protection issues. But there is a growing push in the AI community to leverage blockchain for such requirements, in terms of blockchains to store smart contracts related to downstream utilization of data and derivative models. But that's extraordinarily early on in its development, in terms of insight in the AI community and in the blockchain community as well. In fact, one of the posts that I'm working on right now is looking at a company called 8base that's actually using blockchain to store all of those assets, those artifacts, for the development and lifecycle, along with the smart contracts to drive those downstream uses. So what I'm saying is that there are lots of smart people like yourselves thinking about these problems, but there's no consensus, definitely, in the AI community for how to manage all those rights downstream.
Analytics, which is a way of tracking bad actors or bad devices on a network. And they're going to be pumping out more of those. What's not clear yet is how they're going to integrate those, so that IT service management understands security and vice versa. >> And I think that's one of the key things, George, is that ultimately, the real question will be, or not the real question, but when we think about the roadmap, it's probably that security is going to be early on one of the things that gets addressed here. And again, it's not just security from a perimeter standpoint. Some people are calling it a software-based perimeter. Our perspective is the data's going to go everywhere, and ultimately, how do you sustain a zero trust world where you know your data is going to be out in the clear, so what are you going to do about it? All right. So look. Let's wrap this one up. Jim Kobielus, let's give you the first Action Item. Jim, Action Item. >> Action Item. Wow. Action Item: automation is just to follow the stack of assets that drive automation and figure out your overall sharing architecture for sharing out these assets. I think the core asset will remain orchestration models. I don't think predictive models in AI are a huge piece of the overall automation pie in terms of the logic. So just focus on building out and protecting and sharing and reusing your orchestration models. Those are critically important. In any domain. End to end or in specific automation domains. >> Peter: David Floyer, Action Item. >> So my Action Item is to acknowledge that the world of building your own automation yourself, around a whole lot of piece parts that you put together, is over. You won't have access to sufficient data. So enterprises must take a broad view of getting data, of getting components that have data to give them data. Make contracts with people to give them data, masking or whatever it is, and become part of a broader scheme that will allow them to meet the automation requirements of the 21st century. >> Ralph Finos, Action Item. >> Yeah. Again, I would reiterate the importance of keeping it simple. Taking care of the depth questions and moving forward from there. The complexity is enormous, and-- >> Peter: George Gilbert, Action Item. >> I say, start with what customers always start with, with a new technology, which is a constrained environment like a pilot, and there are two areas that are potentially high return. One is big data, where it's been a multi-vendor component mix, and a mess. And so you take that and you constrain that and make that a depth-first approach in the cloud, where there is data to manage that. And the second one is security, where we now have more and more trained applications just for that. I say, don't start with a platform. Start with those solutions and then start adding more solutions around that. >> All right. Great. So here's our overall Action Item. The question of automation or the roadmap to automation is crucial for multiple reasons. But one of the most important ones is it's inconceivable to us to envision how a business can institute even more complex applications if we don't have a way of improving the degree of automation on the underlying infrastructure. How this is going to play out, we're not exactly sure. But we do think that there are a few principles that are going to be important that users have to focus on. Number one is data.
Be very clear that there is value in your data, both to you as well as to your suppliers, and as you think about writing contracts, don't write contracts that are focused on a product now. Focus on even that product as a service over time, where you are sharing data back and forth in addition to getting some return out of whatever assets you've put in place. And make sure that the negotiations specifically acknowledge the value of that data to your suppliers as well. Number two, there is certainly going to be a scale here. There's certainly going to be a volume question here. And a lot of the new approaches to doing this, this notion of automation, are going to come out of the cloud vendors. Once again, the cloud vendors are articulating what the overall model is going to look like. What that cloud experience is going to look like. And it's going to be a challenge to other suppliers who are providing an on-premises true private cloud and Edge orientation, where the data must live sometimes, not something that they just want to do because they want to do it, but because that data requires it, to be able to reflect that cloud operating model. And expect, ultimately, that your suppliers also are going to have to have very clear contractual relationships with the cloud players and each other for how that data gets shared. Ultimately, however, we think it's crucially important that any CIO recognize that the existing environment that they have right now is not converged. The existing environment today remains operators, suppliers of technology, and suppliers of automation capabilities, and breaking that up is going to be crucial, not only to achieving automation objectives, but to achieve a converged infrastructure, hyperconverged infrastructure, multi-cloud arrangements, including private cloud, true private cloud, and the cloud itself. And this is going to be a management challenge that goes way beyond just products and technology, to actually incorporating how you think about how your shop is organized, how you institutionalize the work that the business requires, and therefore what you identify as the tasks that will be first to be automated. Our expectation: security's going to be early on. Why? Because your CEO and your board of directors are going to demand it. So think about how automation can be improved and enhanced through a security lens, but do so in a way that ensures that over time you can bring new capabilities on, with a depth-first approach at least, to the breadth that you need within your shop and within your business, your digital business, to achieve the success and the results that you want. Okay. Once again, I want to thank David Floyer and George Gilbert here in the studio with us. On the phone, Ralph Finos and Jim Kobielus. Couldn't get Neil Raden in today, sorry Neil. And I am Peter Burris, and this has been an Action Item. Talk to you again soon. (upbeat digital music)
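The depth-first idea that ran through this discussion is easy to illustrate. The sketch below stands in for "ML for ITOM" with a rolling mean and standard deviation over a single, well-understood service metric. A real system would use far richer models, but the structure is the point: deep state about one constrained workload drives precise calls that could, in principle, trigger automated remediation. The metric values, window size, and threshold are assumptions.

```python
# Toy depth-first ITOM monitor: enough history about ONE service's metric
# to make precise anomaly calls. Rolling mean/stdev stands in for a model.
import statistics
from collections import deque

WINDOW = 20       # samples of history to keep (assumed)
THRESHOLD = 3.0   # z-score beyond which we flag an anomaly (assumed)

def monitor(samples):
    history = deque(maxlen=WINDOW)
    for t, value in enumerate(samples):
        if len(history) >= 5:  # need some history before judging
            mean = statistics.mean(history)
            stdev = statistics.pstdev(history) or 1e-9
            z = (value - mean) / stdev
            if abs(z) > THRESHOLD:
                # Depth-first payoff: high-fidelity root-cause context
                # could drive an automated, targeted fix right here.
                print(f"t={t}: latency {value} ms anomalous (z={z:.1f})")
        history.append(value)

# Steady ~50 ms service latency with one spike (synthetic data).
monitor([50, 52, 49, 51, 50, 48, 51, 50, 240, 52, 49])
```

Breadth-first tooling would watch many such streams end to end under weaker assumptions, which is exactly the fidelity trade-off George describes.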
ENTITIES
Entity | Category | Confidence |
---|---|---|
Jim | PERSON | 0.99+ |
David Floyer | PERSON | 0.99+ |
Jim Kobielus | PERSON | 0.99+ |
David | PERSON | 0.99+ |
George Gilbert | PERSON | 0.99+ |
Peter Burris | PERSON | 0.99+ |
George | PERSON | 0.99+ |
Neil | PERSON | 0.99+ |
Peter | PERSON | 0.99+ |
April 27, 2018 | DATE | 0.99+ |
Ralph Finos | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Neil Raden | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Cisco | ORGANIZATION | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
21st century | DATE | 0.99+ |
two ways | QUANTITY | 0.99+ |
8base | ORGANIZATION | 0.99+ |
10 | QUANTITY | 0.99+ |
hundreds | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
Splunk | ORGANIZATION | 0.99+ |
two areas | QUANTITY | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
One | QUANTITY | 0.99+ |
HP | ORGANIZATION | 0.99+ |
each generation | QUANTITY | 0.99+ |
theCUBE | ORGANIZATION | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
both places | QUANTITY | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
both | QUANTITY | 0.98+ |
two things | QUANTITY | 0.98+ |
Three | QUANTITY | 0.98+ |
two | QUANTITY | 0.98+ |
SaaS | ORGANIZATION | 0.98+ |
this week | DATE | 0.97+ |
each time | QUANTITY | 0.97+ |
two ends | QUANTITY | 0.97+ |
today | DATE | 0.97+ |
first | QUANTITY | 0.96+ |
second one | QUANTITY | 0.94+ |
CA | LOCATION | 0.92+ |
Wikibon Action Item | March 23rd, 2018
>> Hi, I'm Peter Burris, and welcome to another Wikibon Action Item. (funky electronic music) This was a very interesting week in the tech industry, specifically because IBM's Think Conference aggregated a large number of people. Now, theCUBE was there. Dave Vellante, John Furrier, and myself all participated in somewhere in the vicinity of 60 or 70 interviews with thought leaders in the industry, including a number of very senior IBM executives. The reason why this becomes so important is because IBM made a proposal to the industry about how some of the digital disruption that the market faces is likely to unfold. The normal approach, or the normal mindset that people have used, is that startups, digital-native companies, were going to change the way that everything was going to operate, and the dinosaurs were going to go by the wayside. IBM's interesting proposal is that the dinosaurs actually are going to learn to dance, playing on a book title from a number of years ago. And the specific argument was laid out by Ginni Rometty in her keynote, when she said that there are a number of factors that are especially important here. Factor number one is that increasingly, businesses are going to recognize that the role that their data plays in competition is ascending. It's getting more important. Now, this is something that Wikibon's been arguing for quite some time. In fact, we have said that the whole key to digital disruption and digital business is to acknowledge that the difference between business and digital business is the role that data and data assets play in your business. So we have strong agreement there. But on top of that, Ginni Rometty made the observation that 80% of the data that could be accessed and put to work in business has not yet been made available to the new activities, the new processes that are essential to changing the way customers are engaged, businesses operate, and overall change and disruption occurs. So her suggestion is that that 80%, that vast amount of data that could be applied that's not being tapped, is embedded deep within the incumbents. And so the core argument from IBM is that the incumbent companies, not the digital natives, not the startups, but the incumbent companies are poised to have a significant role in disrupting how markets operate, because of the value of their data that hasn't currently been put to work and made available to new types of work. That was the thesis that we heard this week, and that's what we're going to talk about today. Are the incumbents really going to strike back? So Dave Vellante, let me start with you. You were at Think, you heard the same type of argument. What did you walk away with? >> So when I first heard the term incumbent disruptors, I was very skeptical, and I still am. But I like the concept and I like it a lot. So let me explain why I like it and why I think there are some real challenges. If I'm a large incumbent global 2,000, I'm not going to just roll over because the world is changing and software is eating my world. Rather, what I'm going to do is use my considerable assets to compete, and so that includes my customers, my employees, my ecosystem, the partnerships that I have there, et cetera. The reason why I'm skeptical is because incumbents aren't organized around their data assets. Their data assets are stovepiped, they're all over the place.
And the skills to leverage that data value, monetize that data, understand the contribution that data makes toward monetization, those skills are limited. They're bespoke and they're very narrow. They're within lines of business or divisions. So there's a huge AI gap between the true digital business and an incumbent business. Now, I don't think all is lost. I think a lot of strategies can work, from M&A to transformation projects, joint ventures, spin-offs. Yeah, IBM gave some examples. They put up Verizon and American Airlines. I don't see them yet as the incumbent disruptors. But then there was another example of IBM Maersk doing some very interesting and disrupting things, Royal Bank of Canada doing some pretty interesting things. >> But in a joint venture form, Dave, to your point, they specifically set up a joint venture that would be organized around this data, didn't they? >> Yes, and that's really the point I'm trying to make. All is not lost. There are certain things that you can do, many things that you can do as an incumbent. And it's really game on for the next wave of innovation. >> So we agree as a general principle that data is really important, David Floyer. And that's been our thesis for quite some time. But Ginni put something out there, Ginni Rometty put something out there. My good friend, Ginni Rometty, put something out there: that 80% of the data that could be applied to disruption, better customer engagement, better operations, new markets, is not being utilized. What do we think about that? Is that number real? >> If you look at the data inside any organization, there's a lot of structured data. And that has a better ability to move through an organization. Equally, there's a huge amount of unstructured data that goes in emails. It goes in voicemails, it goes in shared documents. It goes in diagrams, PowerPoints, et cetera, that also is data which is very much locked up in the way that Dave Vellante was talking about, locked up in a particular process or in a particular area. So is there a large amount of data that could be used inside an organization? Is it private, is it theirs? Yes, there is. The question is, how do you tap that data? How do you organize around that data to release it? >> So this is kind of a chicken and egg kind of a problem. Neil Raden, I'm going to turn to you. When we think about this chicken and egg problem, the question is, do we organize in anticipation of creating these assets? Do we establish new processes in anticipation of creating these data assets? Or do we create the data assets first and then re-institutionalize the work? And the reason why it's a chicken and egg kind of problem is because it takes an enormous amount of leadership will to affect the way a business works before the asset's in place. But it's unclear that we're going to get the asset that we want unless we affect the reorganization, the institutionalization. Neil, is it going to be the chicken? Is it going to be the egg? Or is this one of the biggest problems that these guys are going to have? >> Well, I'm a little skeptical about this 80% number. I need some convincing before I comment on that. But I would rather see, when David mentioned the PowerPoint slides or email or that sort of thing, I would rather see that information curated by the application itself, rather than dragged out as raw data and reinterpreted into something else. I think that's very dangerous. I think we saw that in data warehousing.
(mumbling) But when you look at building data lakes, you throw all this stuff into a data lake. And then after the fact, somebody has to say, "Well, what does this data mean?" So I find it kind of a problem. >> So Jim Kobielus, a couple weeks ago Microsoft actually introduced a technology or a toolkit that could in fact be applied to move this kind of advanced processing, for dragging value out of a PowerPoint or a Word document or something else, close and proximate to the application. Is that, I mean, what Neil just suggested I think is a very, very good point. Are we going to see these kinds of new technologies directly embedded within applications to help users narrowly, but businesses more broadly, lift that information out of these applications so it can be freed up for other uses? >> I think yeah, on some level, Peter, this is a topic called dark data. It's been discussed in data management circles for a long time. The vast majority, I think 75 to 80% is the number that I see in the research, is locked up in terms of it's not searchable, it's not easily discoverable. It's not mashupable, I'm making up a word. But the term mashup hasn't been used in years, but I think it's a good one. What it's all about is, if we want to make the most out of our incumbents' data, then we need to give the business, the business people, the tools to find the data where it is, to mash it up into new forms and analytics and so forth, in order to monetize it and sell it, make money off of it. So there are a wide range of data discovery and other tools that support a fairly self-service combination and composition of composite data objects. I don't know, however, that the culture of monetizing existing datasets and pulling dark data into productized forms has taken root in any organization anywhere. I think that's just something that consultants talk about as something that, gee, should be done, but I don't think it's happening in the real world.
And right across the board, there is tremendous opportunity to improve the applications that currently exist, or put in new versions of applications to address this question of data sharing across an organization. >> Yeah, I think that's going to be a big piece of what happens. And it also says, Neil Raden, something about whether or not enormous machine learning deities in the sky, some of which might start with the letter W, are going to be the best and only way to unlock this data. Or are we suggesting now that it's something that's going to be increasingly distributed closer to applications, less invasive and disruptive to people, more invasive and disruptive to the applications and the systems that are in place? What do you think, Neil? Is that a better way of thinking about this? >> Yeah, let me give you an example. Data science the way it's been practiced is a mess. You have one person who's trying to find the data, trying to understand the data, completing data selection, designing experiments, doing runs, and so forth, coming up with formulas and then putting them in the cluster with funny names so they can try to remember which one was which. And now what you have are a number of software companies who've come up with brilliant ways of managing that process, of really helping the data scientist to create a work process in curating the data and so forth. So if you want to know something about this particular model, you don't have to go to the person and say, "Why did you do that model? What exactly were you thinking?" That information would be available right there in the workbench. And I think that's a good model for, frankly, everything. >> So let's-- >> Development pipeline toolkits. That's a hot theme. >> Yeah, it's a very hot theme. But Jim, I don't think you think this, but I'm going to test it. I don't think we're going to see AI pipeline toolkits be immediately accessed by your average end user who's putting together a contract, so that that toolkit, or that data, is automatically ingested and munched by some AI pipeline. This is going to happen in an application. So the person's going to continue to do their work, and then the tooling will or will not grab that information and then combine it with other things through the application itself into the pipeline. We got that right? >> Yeah, but I think this is all being, everything you described is being embedded in applications that are making calls to backend cloud services that have themselves been built by data scientists and exposed through REST APIs. Peter, everything you're describing is coming to applications fairly rapidly. >> I think that's a good point, but I want to test it. I want to test that. So Ralph Finos, you've been paying a lot of attention during reporting season to what some of the big guys are saying on some of their calls and in some of their public statements. One company in particular, Oracle, has been finessing a transformation, shall we say? What are they saying about how this is going as we think about their customer base, the transformation of their customer base, and the degree to which applications are or are not playing a role in those transformations? >> Yeah, I think in their last earnings call a couple days ago, the point that they were making around the decline and the-- >> Again, this is Oracle. So in Oracle's last earnings call, yeah. >> Yeah, I'm sorry, yeah.
And the decline in the revenue growth rate in the public cloud, the SaaS end of their business, was really a function of a slowdown of the original acquisitions they made to show up as a transformative cloud vendor, which are basically beginning to run out of gas. And I think if you're looking at marketing applications and sales-related applications and content types of applications, those are kind of hitting a natural high of growth. And I think what they were saying is that from a migration perspective on ERP, that's going to take a while to get done. They were saying something like 10 or 15% of their customer base had just begun doing some sort of migration. And that's the data around ERP and those kinds of applications. So it's a long slog ahead of them, but I'd rather be in their shoes, I think, for the long run than trying to jazz up in the near-term some kind of pseudo-SaaS cloud growth based on acquisition and low-lying fruit. >> Yeah, because they have a public cloud, right? I mean, at least they're in the game. >> Yeah, and they have to show they're in the game. >> Yeah, and specifically they're talking about their applications as clouds themselves. So they're not just saying here's a set of resources that you can build to. They're saying here's a set of SaaS-based applications that you can build around. >> Dave: Right. Go ahead, Ralph, sorry. >> Yeah, yeah. And I think the notion there is, with the migration to their ERP and their systems-of-record applications, they're saying this is going to take a long time for people to do that migration, because of complexity in process. >> So the last point, or Dave Vellante, did you have a point you wanted to make before I jump into a new thought here? >> I'd just compare and contrast IBM and Oracle. They have public clouds, they have SaaS. Many others don't. I think this is a major point of differentiation. >> Alright, so we've talked about whether or not this notion of data as a source of value's important, and we agree it is. We still don't know whether or not 80% is the right number, but it is some large number that's currently not being utilized and applied to work differently than the data currently is. And that likely creates some significant opportunities for transformation. Do we ultimately think that the incumbents, and again, I mention the chicken-and-egg problem, is this going to be a test of whether or not the incumbents are going to be around in 10 years? The degree to which they enact the types of transformation we've talked about? Dave Vellante, you said you were skeptical. You heard the story. We've had the conversation. Will incumbents who do this in fact be in a better position? >> Well, incumbents that do take action absolutely will be in a better position. But I think that's the real question. I personally believe that every industry is going to get disrupted by digital, and I think a lot of companies are not prepared for this and are going to be in deep trouble. >> Alright, so one more thought, because we're talking about industries overall. There's so many elements we haven't gotten to, but there's one absolute thing I want to talk about. Specifically the difference between B2C and B2B companies. Clearly the B2C industries have been disrupted, many of them pretty significantly, over the last few years.
Not too long ago, I have multiple not-necessarily-good memories of running the aisles of Toys R Us sometime after 10 o'clock at night, right around December 24th. I can't do that anymore, or soon won't be able to, and it's not because my kids are grown. So B2C industries seem to have moved faster, because the digital natives are able to take advantage of the fact that a lot of these B2C industries did not have direct and strong relationships with those customers. I would posit that a lot of the B2B industries are really where the action's going to take place. And the way I would think about it, and David Floyer, I'll turn to you first, is that in the B2C world, it's new markets and new ways of doing things, which is where the disruption's going to take place. So more of a substitution as opposed to a churn. But in the B2B markets, it's disruption through greater efficiencies, greater automation, greater engagement with existing customers, as well as finding new businesses and opportunities. What do you think about that? >> I think the B2B market is much more stable. Relationships, business relationships, are very, very important. They take a long time to change. >> Peter: But much of that isn't digital. >> A lot of that is not digital. I agree with that. However, I think that the underlying change that's happening is one of automation. B2B companies are struggling to put into place automation, with robots, automation everywhere. What you see, for example, in Amazon is a dedication to automation, to making things more efficient. And that's, to me, the biggest challenge: owning up to the fact that they have to change through automation and get themselves far more efficient. And if they don't succeed in doing that, then their ability to survive shrinks, and the likelihood of being taken over, or of a reverse takeover, becomes higher and higher and higher. So how you go about that huge increase in automation that is needed to survive is, I think, the biggest question for B2B players. >> And when we think about automation, David Floyer, we're not only talking about the manufacturing arms. We're talking about a lot of new software automation. Dave Vellante, Jim Kobielus, RPA is kind of a new thing. Dave, we saw some interesting things at Think. Bring us up to speed quickly on what the community at Think was talking about with RPA. >> Well, I tell you. There were a lot of people in financial services, which is IBM's stronghold. And they're using software robots to automate a lot of the backend stuff that humans were doing. That's a major, major use case. I would say 25 to 30% of the financial services organizations that I talked to had active RPA projects ongoing at the moment. I don't know. Jim, what are your thoughts? >> Yeah, I think backend automation is where B2B disruption is happening. The organizations that are able to automate more of their backend, digitize more of their backend functions, and accelerate them and improve the throughput of transactions, are those that will clean up. I think for the B2C space, it's the frontend automation of the digitalization of the engagement channels.
But RPA is essentially a key that's unlocking backend automation for everybody, because it allows more of the frontend business analysts, and those who are not traditionally BPM or business process re-engineering professionals, to begin to take standard administrative processes and automate them from, as it were, the outside in, in a greater way. So I think RPA is a secret key for that. I think we'll see some of the more disruptive organizations and businesses take RPA and use it to essentially reverse-engineer, as it were, existing processes, but in an automated fashion, and drive that improvement in the backend with AI. >> I just love the term software robots. I just think that term so strongly evokes what's going to happen here. >> If I could add, I think there's a huge need to simplify that space. The other thing I witnessed at IBM Think is it's still pretty complicated. It's still a heavy lift. There's a big services component to this, which is probably why IBM loves it. But there's a massive market, I think, to simplify the adoption of RPA. >> I completely agree. We have to open the aperture as well. Again, the goal is not to have to train people on new things, new data science, new automation stuff, but to provide tools and increasingly embed those tools into stuff that people are already using, so that the disruption and the changes happen more as a consequence of continuing to do what the people do. Alright, so let's hit the action items, guys. It's been a great conversation. Again, we haven't talked about GDPR. We haven't talked about a wide array of different factors that are going to be an issue. I think this is something we're going to talk about. But on the narrow issue of can the incumbents strike back? Neil Raden, let's start with you. Neil Raden, action item. >> I've been saying since 1975 that I should be hanging around with a better class of people, but I do spend a lot of time in the insurance industry. And I have been getting a consensus that in the next five to 10 years, there will no longer be underwriters or claims adjusters. That business is ready for massive, massive change. >> And those are disruptors, largely. Jim Kobielus, action item. >> Action item. In terms of business disruption, don't imagine that because you were the incumbent in a past era, in some solution category that's declining, that that automatically makes your data fit for seizing opportunities in the future. As we've learned from Blockbuster Video, the fact that they had all this customer data didn't give them any defenses against Netflix coming along and cleaning their clock, putting them out of business. So the next generation of disruptor will not have any legacy data to work from, and they'll be able to work miracles because they made a strategic bet on some frontend digital channel that made all the difference. >> Ralph Finos, action item. >> Yeah, I think there's a notion here of siege mentality. And I think the incumbents are inside the castle walls, and the disruptors are outside the castle walls. And sometimes the disruptors, you know, scale the walls. Sometimes they don't. But I think being inside the walls is the tougher place to be in the long run. >> Dave Vellante, action item. >> I want to pick up on something Neil said.
I think it's alluring for some of these industries, like insurance and financial services and healthcare, even parts of government, that really haven't been disrupted in a huge way yet, to say, "Well, I'll wait and I'll see what happens." I think that's a huge mistake. I think you have to start immediately thinking about strategies, particularly around your data, as we talked about earlier. Maybe it's M&A, maybe it's joint ventures, maybe it's spinning out new companies. The time when you could afford to wait is past; you should be acting now. >> David Floyer, action item. >> I think that it's easier to focus on something that you can actually do. So my action item is that the focus of most B2B companies should be looking at all of their processes and incrementally automating them, taking out the people cost and other costs, automating those processes as much as possible. That, in my opinion, is the most likely path to being in a position where you can continue to be competitive. Without that focus, it's likely that you're going to be disrupted. >> Alright. So the one thing I'll say about that, David, is that when you say people cost, I think you mean the administrative cost associated with people. >> And people doing things, automating jobs. >> Alright, so we have been talking here in today's Wikibon Action Item about the question, will the incumbents be able to strike back? The argument we heard at IBM Think this past week, and this is the third week of March, was that data is an asset that can be applied to significantly disrupt industries, and that incumbents have a lot of data that hasn't been brought into play in the disruptive flow. And IBM's argument is that we're going to see a lot of incumbents start putting their data into play, more of their data assets into play. And that's going to have a significant impact ultimately on industry structure, customer engagement, and the nature of the products and services that are available over the course of the next decade. We agree. We generally agree. We might nitpick about whether it's 80%, whether it's 60%. But in general, the observation is that an enormous amount of data that exists within a large company, data that's related to how they conduct business, is siloed and locked away, used once, left dark, and not made available for derivative uses. That could, in fact, lead to significant, consequential improvements in how a business's transaction costs are ultimately distributed. Automation's going to be a big deal. David Floyer's mentioned this in the past. I'm also of the opinion that there's going to be a lot of new opportunities for revenue enhancement and products. I think that's going to be as big, but it's very clear that to start, it makes an enormous amount of sense to take a look at where your existing transaction costs are, where existing information asymmetries exist, and see what you can do to unlock that data, make it available to other processes, and start to do a better job of automating, locally and specifically to those activities. And we generally ask our clients to take a look at: what is your value proposition? What are the outcomes that are necessary for that value proposition? What activities are most important to creating those outcomes? And then find the activities that, by doing a better job of unlocking new data, you can better automate. In general, our belief is that there's a significant difference between B2C and B2B businesses. Why?
Because a lot of B2C businesses never really had that direct connection, and therefore never really had as much of the market and customer data about what was going on. A lot of point-of-sale data perhaps, but not a lot of other types of data. And then the disruptors stepped in and created direct relationships, gathered that data, and were able to rapidly innovate products and services that served consumers differently. Where a lot of that new opportunity exists is in the B2B world. And here's where the real incumbents are going to start flexing their muscles over the course of the next decade, as they find those opportunities to engage differently, to automate existing practices and activities, change their cost model, and introduce new approaches to operating that are cloud-based, blockchain-based, data-based, and find new ways to utilize their people. If there's one big caution we have about this, it's this. Ultimately, the tooling is not broadly mature. The people necessary to build a lot of these tools are increasingly moving into the traditional disruptors, the legacy disruptors if you will: AWS, Netflix, Microsoft, companies more along those lines. That talent is still very dear in the industry, and it's going to require an enormous effort to bring forward those new types of technologies that can in fact liberate some of this data. We looked at things like RPA, robotic process automation. We look at the big application providers to increasingly imbue their products and services with some of these new technologies. And ultimately, paradoxically perhaps, we look for the incumbent disruptors to find ways to disrupt without disrupting their own employees and customers. So embedding more of these new technologies in an ethical way directly into the systems and applications that serve people, so that the people face minimal changes in learning new tricks, because the systems themselves have gotten much more automated and are able to learn and evolve and adjust much more rapidly, in a way that still corresponds to the way people do work. So our action item. Any company in the B2B space that is waiting for data to emerge as an asset in their business, so that they can then do all the re-institutionalizing of work and reorganizing of work and new types of investment, is not going to be in business in 10 years. Or it's going to have a very tough time with it. The big challenge for the board and the CIO, and it's not been done successfully in the past, at least not too often, is to start the process today, without necessarily having access to the data, of starting to think about how the work's going to change, and to think about the way their organization's going to have to be set up. This is not business process re-engineering. This is organizing around the future value of data, the options that data can create, and employing that approach to start doing local automation, serving customers, and changing the way partnerships work, and ultimately planning out, for an extended period of time, how their digital business is going to evolve. Once again, I want to thank David Floyer here in the studio with me. Neil Raden, Dave Vellante, Ralph Finos, Jim Kobielus, remote. Thanks very much guys. For all of our clients, once again this has been a Wikibon Action Item. We'll talk to you again. Thanks for watching. (funky electronic music)
Wikibon Action Item Quick Take | Infinidat Event Coverage, March 2018
>> Hi, I'm Peter Burris, and welcome to another Wikibon Action Item Quick Take. Dave Vellante, interesting community event next week. What's going on? >> So Infinidat is a company that was started by Moshe Yanai. He invented Symmetrix, probably the single most important storage product of all time. At any rate, he started this new company, Infinidat. They tend to do things differently. They're a one-product company, but on Tuesday March 27th they're extending their portfolio pretty dramatically. We're going to be covering that. We have a crowd chat going on. It's, again, Tuesday March 27th, 10:30 Eastern time, at Crowdchat.net/infinichat. Check it out. >> Great. That's been our Wikibon Action Item Quick Take. Talk to you soon. (upbeat music)
Wikibon Action Item Quick Take | The Role of Digital Disruption, March 2018
>> Hi, this is Peter Burris with the Wikibon Action Item Quick Take. Wikibon is investing in a significant research project right now to take a look at the role that digital disruption is playing as it pertains to data protection. In fact, we think this is so important that we're actually starting to coin the term digital business protection. We're looking for practitioners, senior people who are concerned about how they're going to adopt the crucial technologies that are going to make it possible for digital businesses to protect themselves: from a storage availability standpoint, backup and restore, security protection, and the role that AI is going to play in identifying patterns and doing a better job of staging data around the organization. We're looking at doing this important crowd chat in the first couple of weeks of April. So if you want to participate, and we want to get as many people as possible, it's a great way to get your ideas in about digital business protection, what kind of software is going to be required to do it, and, very importantly, what kind of journey businesses are going to go on to move their organizations through this crucial new technology capability. Reach me at @PLBurris, that's @ P-L-B-U-R-R-I-S. Hit me up, and let's start talking about digital business protection. Talk to you soon. (upbeat music)
Wikibon Action Item Quick Take | OCP Summit, March 2018
>> Hi, I'm Peter Burris and welcome to another Wikibon Action Item Quick Take. David Floyer, you were at OCP, the Open Compute Project summit, this past week and saw some really interesting things. A couple of companies stood out for you, including- >> Liqid. They are a very, very interesting company. They went GA with a PCIe switch. That's the very high speed switch that all the systems work off. And what this does is essentially enable virtualization of CPU, storage, GPU, system networks, anything that can connect to a PCIe bus, without the software overhead, without the VMware overhead, without the KVM overhead. This is very exciting: you can have bare-metal virtualization and put together your own architecture of systems. And one particular example struck me as being very useful: if you're doing benchmarks, and you're trying to do benchmarks with one GPU, two GPUs, more storage, or whatever it is you want, this seems to be an ideal way of being able to do that very quickly. I think this is a very exciting product. It's a competitor to Intel's RSD, Rack Scale Design, which is obviously another thing that they seem to have beaten to GA with something that works. There's a 30,000- >> 30,000 dollar. >> A 30,000 dollar developer kit. I would recommend that as a best buy for enterprising cloud provider data centers. >> Excellent. >> So, David Floyer at OCP this week, Liqid's new bus technology. Check it out. This has been a Wikibon Action Item Quick Take with Peter Burris and David Floyer. (techno music)
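To give a feel for what composing bare-metal systems out of a pool of PCIe-attached devices means in practice, here is a purely illustrative Python sketch of the allocation idea. The device pool, node spec, and compose logic are hypothetical assumptions for illustration; this is not Liqid's or Intel RSD's actual API.

```python
# Purely illustrative sketch of composable infrastructure: carving a
# bare-metal node out of a pool of PCIe-attached devices. This is NOT
# Liqid's or Intel RSD's API; names and fields are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Device:
    kind: str          # "cpu", "gpu", "ssd", "nic"
    dev_id: str
    allocated: bool = False

@dataclass
class Pool:
    devices: list = field(default_factory=list)

    def compose(self, spec):
        """Reserve free devices matching spec, e.g. {"gpu": 2, "ssd": 1}."""
        grabbed = []
        for kind, count in spec.items():
            free = [d for d in self.devices if d.kind == kind and not d.allocated]
            if len(free) < count:
                for d in grabbed:
                    d.allocated = False  # roll back a partial composition
                raise RuntimeError(f"not enough free {kind} devices")
            for d in free[:count]:
                d.allocated = True
                grabbed.append(d)
        return grabbed

pool = Pool([Device("gpu", f"gpu{i}") for i in range(4)] +
            [Device("ssd", f"ssd{i}") for i in range(2)])

# The benchmarking scenario David describes: recompose the same node
# with one GPU, then two, without physically re-cabling anything.
node = pool.compose({"gpu": 1, "ssd": 1})
print("composed:", [d.dev_id for d in node])
```

The appeal David points to is exactly this: the "recompose" step is a software operation over a shared pool, not a hardware rebuild.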
Wikibon Action Item Quick Take | David Floyer | OCP Summit, March 2018
>> Hi, I'm Peter Burris, and welcome once again to another Wikibon Action Item Quick Take. David Floyer, you were at OCP, the Open Compute Project summit, this week, wandered the floor, talked to a lot of people, and one company in particular stood out: Nimbus Data. What'd you hear? >> Well, they had a very interesting announcement of their 100 terabyte three-and-a-half-inch SSD, called the ExaDrive. That's a lot of storage in a very small space. High capacity SSDs, in my opinion, are going to be very important. They are denser, take much less power and much less space, don't have as much performance, but fit in very nicely between hard disk storage at the lowest level and the upper levels. So they are going to be very useful in lower tier two applications. Very low friction for adoption there. They're going to be useful in tier three, but they're not a direct replacement for disk; they work in a slightly different way, so the friction is going to be a little bit higher there. And then in tier four, there's again a very interesting case for putting all of the metadata about large amounts of data on high capacity SSD, to enable much faster access at the tier four level. So the action item for me is: have a look at my research, and have a look at the general pricing. It's about half of what a standard SSD is. >> Excellent, so this is once again a Wikibon Action Item Quick Take, David Floyer talking about Nimbus Data and their new high capacity, slightly lower performance, cost effective SSD. (upbeat music)
Wikibon Action Item Quick Take | Dave Vellante | Overall Digital Business Protection, March 2018
>> Hi, I'm Peter Burris and welcome to another Wikibon Action Item Quick Take. Dave Vellante, data protection, cloud orchestration, overall digital business protection, pretty critical issue. We got some interesting things going on. What's happening? >> As organizations go digital, I see the confluence of privacy, security, data protection, and business continuity coming together, and I really would like to talk to CSOs in our community about how they look at protecting their business in a digital world. So, @dvellante, I'd love to just do a crowd chat on this. I'd love your opinions as to what you think is changing in digital business data protection. >> Great, so that's @dvellante. Reach out to Dave, and let's get some folks together for this crucially important conversation. We'll be doing it in a couple of weeks. Thanks very much. This has been another Wikibon Action Item Quick Take. (upbeat music)
Wikibon Action Item Quick Take | Microsoft AI Platform for Windows, March 2018
>> Hi, I'm Peter Burris and welcome to another Wikibon Action Item Quick Take. Jim Kobielus, Microsoft seems to be gettin' ready to do a makeover of application development. What's going on? >> Yeah, that's pretty exciting, Peter. So, last week, on the 7th, Microsoft announced at one of their Developer Days something called AI Platform for Windows, and let me explain why that's important. Because that is going to bring machine learning down to desktop applications, anything that's written to run on Windows 10. And why that's important is that, starting with Visual Studio 15.7, there'll be an ability for developers who don't know anything about machine learning to, in a very visual way, create machine learning models that they can then have trained in the cloud and then deployed to their Windows applications, whatever they might be, and to do real-time, local inferencing in those applications, without the need for round-tripping back to the cloud. So, what we're looking at now is that they're going to bring this capability into the core of Visual Studio, and they're going to be backwards compatible with previous versions of Visual Studio. What that means is, I can just imagine, over the next couple of years, most Windows applications will be heavily ML-enabled, so that more and more of the application logic at the desktop in Windows will be driven by ML. There'll be less need for apps as we've known them historically, pre-packaged bundles of code. It'll be dynamic logic. It'll be ML. So, I think this is really marking the beginning of the end of the app era at the device level. I'm really excited, and we're looking forward to hearing more from Microsoft about where they're going with AI Platform for Windows, but I think that's a landmark announcement we'll stay tuned for. >> Excellent. Jim Kobielus, thank you very much. This has been another Wikibon Action Item Quick Take. (soft digital music)
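For readers who want to see the pattern Jim is describing (a model trained elsewhere, then scored locally on the device with no cloud round trip), here is a minimal, language-neutral sketch using Python and the onnxruntime package. To be clear, this is an illustrative assumption, not the AI Platform for Windows API itself, which is consumed from Windows applications via Visual Studio; the model file name and input shape are hypothetical.

```python
# Minimal sketch of local (on-device) inferencing against a pre-trained
# model, with no cloud round trip. Assumes "model.onnx" was exported from
# a training pipeline elsewhere; file name and input shape are illustrative.

import numpy as np
import onnxruntime as ort  # pip install onnxruntime

# Load the model once at application startup.
session = ort.InferenceSession("model.onnx")
input_name = session.get_inputs()[0].name

def score(features: np.ndarray) -> np.ndarray:
    """Run the model locally on a batch of feature vectors."""
    # All computation happens in-process; nothing leaves the device.
    outputs = session.run(None, {input_name: features.astype(np.float32)})
    return outputs[0]

# Example call with a hypothetical 1x4 feature vector.
print(score(np.array([[5.1, 3.5, 1.4, 0.2]])))
```

The design point that matters here is the one Jim flags: inference is just a local function call inside the application, so the "app" logic can lean on the model rather than on pre-packaged code paths.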
Wikibon Action Item | De-risking Digital Business | March 2018
>> Hi I'm Peter Burris. Welcome to another Wikibon Action Item. (upbeat music) We're once again broadcasting from theCUBE's beautiful Palo Alto, California studio. I'm joined here in the studio by George Gilbert and David Floyer. And then remotely, we have Jim Kobielus, David Vellante, Neil Raden and Ralph Finos. Hi guys. >> Hey. >> Hi. >> How you all doing? >> This is a great, great group of people to talk about the topic we're going to talk about, guys. We're going to talk about the notion of de-risking digital business. Now, the reason why this becomes interesting is, the Wikibon perspective for quite some time has been that the difference between business and digital business is the role that data assets play in a digital business. Now, if you think about what that means: every business institutionalizes its work around what it regards as its most important assets. A bottling company, for example, organizes around the bottling plant. A financial services company organizes around the regulatory impacts or limitations on how they share information, and what is regarded as fair use of data and other resources and assets. The same thing exists in a digital business. There's a difference between, say, Sears and Walmart. Walmart made use of data differently than Sears, and the specific assets that were employed had a significant impact on how the retail business was structured. Along comes Amazon, which is even deeper in the use of data as a basis for how it conducts its business, and Amazon is institutionalizing work in quite different ways and has been incredibly successful. We could go on and on and on with a number of different examples of this, and we'll get into that. But what it means ultimately is that the tie between data and what is regarded as valuable in the business is becoming increasingly clear, even if it's not perfect. And so traditional approaches to de-risking data, through backup and restore, now need to be re-thought, so that it's not just de-risking the data, it's de-risking the data assets. And, since those data assets are so central to the business operations of many of these digital businesses, what it means, really, is de-risking the whole business. So, David Vellante, give us a starting point. How should folks think about this different approach to envisioning business? And digital business, and the notion of risk? >> Okay thanks Peter, I mean I agree with a lot of what you just said and I want to pick up on that. I see the future of digital business as really built around data, sort of agreeing with you and building on what you just said, really where organizations are putting data at the core. And increasingly I believe that organizations that have traditionally relied on human expertise as the primary differentiator will be disrupted by companies where data is the fundamental value driver, and I think there are some examples of that and I'm sure we'll talk about it. And in this new world humans have expertise that leverages the organization's data model and creates value from that data with augmented machine intelligence. I'm not crazy about the term artificial intelligence. And you hear a lot about data-driven companies, and I think such companies are going to have a technology foundation that is increasingly described as autonomous, aware, anticipatory, and importantly in the context of today's discussion, self-healing. So, able to withstand failures and recover very quickly.
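As a toy illustration of the self-healing idea Dave just described, protection designed into the application rather than bolted on afterward, here is a minimal Python sketch: a checkpoint-and-recover wrapper that snapshots state before a risky operation and rolls it back automatically on failure. The state, failure mode, and retry policy are illustrative assumptions, not any vendor's implementation.

```python
# Toy sketch of "self-healing" behavior designed into application code:
# checkpoint state before a risky step, restore and retry on failure.
# The state dict, failure mode, and retry policy are all illustrative.

import copy
import random

random.seed(1)  # seeded so the demo shows one failure, then recovery

def self_healing(retries=3):
    """Decorator: snapshot the state argument, restore it if the step fails."""
    def wrap(fn):
        def run(state, *args, **kwargs):
            for attempt in range(1, retries + 1):
                checkpoint = copy.deepcopy(state)  # cheap stand-in for a snapshot
                try:
                    return fn(state, *args, **kwargs)
                except Exception as err:
                    state.clear()
                    state.update(checkpoint)       # recover to last good state
                    print(f"attempt {attempt} failed ({err}); state restored")
            raise RuntimeError("could not self-heal after retries")
        return run
    return wrap

@self_healing(retries=3)
def post_transaction(state, amount):
    state["balance"] += amount
    if random.random() < 0.5:                      # simulated partial failure
        raise IOError("write failed mid-transaction")
    return state["balance"]

ledger = {"balance": 100}
print("final balance:", post_transaction(ledger, 25), ledger)
```

The point is where the recovery lives: in the application fabric itself, so failure and restore happen in-line, rather than as a separate backup-and-restore process measured in hours or days.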
So de-risking a digital business is going to require new ways of thinking about data protection and security and privacy. Specifically as it relates to data protection, I think it's going to be a fundamental component of the so-called data-driven company's technology fabric. This can be designed into applications, into data stores, into file systems, into middleware, and into infrastructure, as code. And many technology companies are going to try to attack this problem from a lot of different angles, trying to infuse machine intelligence into the hardware, software and automated processes. And the premise is that many companies will architect their technology foundations not as a set of remote cloud services that they're calling, but rather as a ubiquitous set of functional capabilities that largely mimic a range of human activities, including storing, backing up, and virtually instantaneous recovery from failure. >> So let me build on that. So what you're kind of saying, if I can summarize (and we'll get into whether or not it's human expertise or some other approach or notion of business), is that increasingly patterns in the data are going to have absolutely consequential impacts on how a business ultimately behaves. We got that right? >> Yeah absolutely. And how you construct that data model, and provide access to the data model, is going to be a fundamental determinant of success. >> Neil Raden, does that mean that people are no longer important? >> Well no, no, I wouldn't say that at all. I was talking with the head of a medical school a couple of weeks ago, and he said something that really resonated. He said that there are as many doctors who graduated at the bottom of their class as at the top of their class. And I think that's true of organizations too. You know, 20 years ago I had the privilege of interviewing Peter Drucker for an hour, and he foresaw this, 20 years ago. He said that people who run companies have traditionally had IT departments that provided operational data, but they needed to start to figure out how to get value from that data, and not only get value from that data but get value from data outside the company, not just internal data. So he kind of saw this big data thing happening 20 years ago. Unfortunately, he had a prejudice for senior executives. You know, he never really thought about any other people in an organization except the highest people. And I think what we're talking about here is really the whole organization. That said, I have some concerns about the ability of organizations to really implement this without a lot of fumbles. I mean it's fine to talk about the five digital giants, but there's a lot of companies out there where, you know, the bar isn't really that high for them to stay in business. And they just seem to get along. And I think if we're going to de-risk, we really need to help companies understand the whole process of transformation, not just the technology. >> Well, take us through it. What is this process of transformation? That includes the role of technology but is bigger than the role of technology. >> Well, it's like anything else, right. There has to be communication, there has to be some element of control, there has to be a lot of flexibility, and most importantly I think there has to be acceptability, by the people who are going to be affected by it, that this is the right thing to do.
And I would say you start with assumptions. I call it assumption analysis; in other words, let's all get together and figure out what our assumptions are, and see if we can't line 'em up. Typically IT is not good at this. So I think it's going to require the help of a lot of practitioners who can guide them. >> So Dave Vellante, reconcile one point that you made. I want to come back to this notion of how we're moving from businesses built on expertise and people to businesses built on expertise resident as patterns in the data, or data models. Why is it that the most valuable companies in the world seem to be the ones that have the most real hardcore data scientists? Isn't that expertise and people? >> Yeah it is, and I think it's worth pointing out. Look, the stock market is volatile, but right now the top-five companies: Apple, Amazon, Google, Facebook and Microsoft, in terms of market cap, account for about $3.5 trillion, and there's a big distance between them, and they've clearly surpassed the big banks and the oil companies. Now again, that could change, but I believe that it's because they are data-driven. So-called data-driven. Does that mean they don't need humans? No, but human expertise surrounds the data, as opposed to most companies, where human expertise is at the center and the data lives in silos. And I think it's very hard to protect data, and leverage data, that lives in silos. >> Yes, so here's where I'll take exception to that, Dave. And I want to get everybody to build on top of this just very quickly. I think that human expertise has surrounded, in other businesses, the buildings. Or the bottling plant. Or the wealth management. Or the platoon. So I think that the organization of assets has always been the determining factor of how a business behaves, and we institutionalized work, in other words where we put people, based on the business' understanding of assets. Do you disagree with that? Are we wrong in that regard? I think data scientists are an example of re-institutionalizing work around a very core asset, in this case, data. >> Yeah, you're saying that the most valuable asset is shifting from some of those physical assets, the bottling plant et cetera, to data. >> Yeah we are, we are. Absolutely. Alright, David Floyer. >> Neil: I'd like to come in. >> Panelist: I agree with that too. >> Okay, go ahead Neil. >> I'd like to give an example from the news. Cigna's acquisition of Express Scripts for $67 billion. Who the hell is Cigna, right? Connecticut General is just a sleepy life insurance company, and INA was a second-tier property and casualty company. They merged a long time ago, they got into health insurance, and suddenly, who's Express Scripts? I mean that's a company that nobody ever even heard of. They're a pharmacy benefit manager, what is that? They're an information management company, period. That's all they do. >> David Floyer, what does this mean from a technology standpoint? >> So I wanted to emphasize one thing that evolution has always taught us: that you have to be able to come from where you are. You have to be able to evolve from where you are and take the assets that you have. And the assets that people have are their current systems of record, other things like that. They must be able to evolve into the future to better utilize what those systems are. And the other thing I would like to say-- >> Let me give you an example just to interrupt you, because this is a very important point.
One of the primary reasons why the telecommunications companies, whom so many people, analysts included, believed had a fundamental advantage because so much information flows through them, is that when you're writing assets off over 30 years, that kind of locks you into an operational mode, doesn't it? >> Exactly. And the other thing I want to emphasize is that the most important thing is sources of data, not the data itself. So for example, real-time data is very, very important. So what is the source of your real-time data? If you've given that away to Google or your IoT vendor, you have made a fundamental strategic mistake. So understanding the sources of data, and making sure that you have access to that data, is going to enable you to build the sort of processes and data digitization you need. >> So let's turn that concept into kind of a Geoffrey Moore kind of strategy bromide. At the end of the day you look at your value proposition, and then what activities are central to that value proposition, and what data is thrown off by those activities, and what data's required by those activities. >> Right, both internal-- >> We got that right? >> Yeah. Both internal and external data. What are those sources that you require? Yes, that's exactly right. And then you need to put together a plan which takes you from where you are, and the sources of data you have, and then focuses on how you can use that data to either improve revenue or to reduce costs, or a combination of those two things, as a series of specific exercises. And in particular, using that data to automate in real-time as much as possible. That to me is the fundamental requirement to actually be able to do this and make money from it. If you look at every example, it's all real-time. It's real-time bidding at Google, it's real-time allocation of resources by Uber. That is what people need to focus on. So it's those steps, practical steps, that organizations need to take that I think we should be giving a lot of focus to. >> You mention Uber. David Vellante, we're not just talking, once again, about the Uberization of things, are we? Or is that what we mean here? So, what we'll do is we'll turn the conversation very quickly over to you, George. There are existing today a number of different domains where we're starting to see a new emphasis on how we start pricing some of this risk. Because when we think about de-risking as it relates to data, give us an example of one. >> Well, we were talking earlier: in financial services, risk itself is priced just the way time is priced, in terms of what premium you'll pay in interest rates. But there's also something that's softer that's come into much more widely-held consciousness recently, which is reputational risk. Which is different from operational risk. Reputational risk is about, are you a trusted steward for data? Some of that could be personal information, and a use case that's very prominent now with the European GDPR regulation is, you know, if I ask you as a consumer or an individual to erase my data, can you say with extreme confidence that you have? That's just one example. >> Well I'll give you a specific number on that. We've mentioned it here on Action Item before. I had a conversation with a Chief Privacy Officer a few months ago who told me that they had priced out what the fines to Equifax would have been had the problem occurred after GDPR fines were enacted. The estimate was $160 billion.
There's not a lot of companies on the planet that could deal with a $160 billion liability. Like that. >> Okay, so we have a price now that might have been kind of, sort of, mushy before. And the notion of trust hasn't really changed over time; what's changed is the technical implementations that support it. And in the old world with systems of record, we basically collected as much data as we could from our operational applications, put it in the data warehouse and its data mart satellites, and we tried to govern it within that perimeter. But now we know that data basically originates and goes just about anywhere. There's no well-defined perimeter. It's much more porous, far more distributed. You might think of it as a distributed data fabric, and the only way you can be a trusted steward of that is if you now, across the silos, without trying to centralize all the data that's in them, can enforce who's allowed to access it, what they're allowed to do, and audit who's done what to what type of data, when and where. And then there's a variety of approaches. Just to pick two: one is discovery-oriented, to figure out what's going on with the data estate using machine learning; Alation is an example. And then there's another example, which is where you try and get everyone to plug into what's essentially a new system catalog, one that acts like the control fabric for your distributed data fabric. >> That's one example of a set of properties, one way of coming at this. But when we think, Dave Vellante, coming back to you for a second, when we think about the conversation, there's been a lot of presumption or a lot of bromides. Analysts like to talk about, don't get Uberized. We're not just talking about getting Uberized. We're talking about something a little bit different, aren't we? >> Well yeah, absolutely. I think Uber's going to get Uberized, personally. But I think there's a lot of evidence; I mentioned the big five, but if you look at Spotify, Waze, Airbnb, yes Uber, yes Twitter, Netflix, Bitcoin is an example, 23andMe. These are all examples of companies that, I'll go back to what I said before, are putting data at the core and building human expertise around that core to leverage that expertise. And I think it's easy for some companies to sit back and say, "Well I'm going to wait and see what happens." But to me anyway, there's a big gap between kind of the haves and the have-nots. And I think that gap is around applying machine intelligence to data and applying cloud economics: zero marginal cost economics and the API economy. An always-on sort of mentality, et cetera, et cetera. And that's what the economy, in my view anyway, is going to look like in the future. >> So let me put out a challenge; Jim, I'm going to come to you in a second, very quickly, on some of the things that start looking like data assets. But today, when we talk about data protection, we're talking about simply a whole bunch of applications and a whole bunch of devices, just spinning that data off so we have it at a third site, and then, if there's a catastrophe, large or small, being able to restore it, often in hours or days. So we're talking about an improvement on RPO and RTO. But when we talk about data assets, and I'm going to come to you in a second with that, David Floyer, we're talking about not only the data, the bits.
We're talking about the relationships and the organization, and the metadata, as being a key element of that. So David, I'm sorry, Jim Kobielus, just really quickly, thirty seconds: models, what do they look like? What does the new nature of some of these assets look like? >> Well, the new nature of these assets is the machine learning models that are driving so many business processes right now. And so really the core assets there are the data, obviously, from which they are developed and on which they are trained. But also very much the knowledge of the data scientists and engineers who build and tune this stuff. And so really, what you need to do is protect that knowledge and grow that knowledge base of data science professionals in your organization in a way that builds on it. And hopefully you keep the smartest people in house. And they can encode more of their knowledge in automated programs to manage the entire pipeline of development. >> We're not talking about files. We're not even talking about databases, are we, David Floyer? We're talking about something different. Algorithms and models. Are today's technologies really set up to do a good job of protecting the full organization of those data assets? >> I would say that they're not even being thought about yet. And going back to what Jim was saying, those data scientists are the only people who understand that, in the same way as, in the year 2000, the COBOL programmers were the only people who understood what was going on inside those applications. And we as an industry have to allow organizations to be able to protect the assets inside their applications, and to use AI, if you like, to actually understand what is in those applications and how they are working. And an incredibly important piece of de-risking is ensuring that you're not dependent on a few experts who could leave at any moment, in the same way as the COBOL programmers could have left. >> But it's not just the data, and it's not just the metadata; it really is the data structure. >> It is the model. Just the whole way that this has been put together, and the reason why. And the ability to continue to upgrade that and change that over time. So those assets are incredibly important, but at the moment there isn't technology available for you to actually protect those assets. >> So if I combine what you just said with what Neil Raden was talking about: David Vellante's put forward a good vision of what's required. Neil Raden's made the observation that this is going to be much more than technology. There's a lot of change, not change management at a low level inside IT, but business change, and the technology companies also have to step up and be able to support this. We're seeing a number of different vendor types start to enter into this space. Certainly storage guys, Dylon Sears, talking about doing a better job of data protection; we're seeing middleware companies, TIBCO and DISCO, talk about doing this differently. We're seeing file systems, Scality, WekaIO, talk about doing this differently. Backup and restore companies, Veeam, Veritas. I mean, everybody's looking at this, and they're all coming at it. Just really quickly, David, where's the inside track at this point? >> For me there is so much whitespace as to be unbelievable. >> So nobody has an inside track yet. >> Nobody has an inside track. Just to start with a few things: it's clear that you should keep data where it is.
The cost of moving data around an organization, from inside to out, is crazy. >> So companies that keep data in place, or technologies that keep data in place, are going to have an advantage. >> Much, much, much greater advantage. Sure, there must be backups somewhere. But you need to keep the working copies of data where they are, because it's the real-time access, usually, that's important. So if it originates in the cloud, keep it in the cloud. If it originates in a data provider, on another cloud, that's where you should keep it. If it originates on your premises, keep it where it originated. >> Unless you need to combine it. But that's a new origination point. >> Then you're taking subsets of that data and combining them for that purpose. So that would be my first point. So organizations are going to need to put together what George was talking about, this metadata of all the data: how it interconnects, how it's being used, the flow of data through the organization. It's amazing to me that when you go to an IT shop, they cannot define for you how the data flows through that data center or that organization. That's the requirement that you have to have, and AI is going to be part of that solution, of looking at all of the applications and the data, and telling you where it's going and how it's working together. >> So the second thing would be, companies that are able to build or conceive of networks of data will also have an advantage. And I think I'd add a third one: companies that demonstrate a real understanding of the unbelievable change that's required. You can't just say, oh, Facebook wants this, therefore everybody's going to want it. There's going to be a lot of push marketing that goes on on the technology side. All right, so let's get to some Action Items. David Vellante, I'll start with you. Action Item. >> Well, the future's going to be one where systems see, they talk, they sense, they recognize, they control, they optimize. It may be tempting to say, you know what, I'm going to wait, I'm going to sit back and wait to figure out how I'm going to close that machine intelligence gap. I think that's a mistake. I think you have to start now, and you have to start with your data model. >> George Gilbert, Action Item. >> I think you have to keep in mind the guardrails related to governance and trust when you're building applications on the new data fabric. And you can take the approach of a platform-oriented one, where you're plugging into an API, like Apache Atlas, that Hortonworks is driving, or a discovery-oriented one, as David was talking about, which would be something like Alation, using machine learning. But if, let's say, the use case starts out as IoT, edge analytics, and cloud inferencing, that data science pipeline itself has to now be part of this fabric, including the output of the design time, meaning the models themselves, so they can be managed. >> Excellent. Jim Kobielus, you've been pretty quiet, but I know you've got a lot to offer. Action Item, Jim. >> I'll be very brief. What you need to do is protect your data science knowledge base. That's the way to de-risk this entire process. And that involves more than just a data catalog. You need a data science expertise registry within your distributed value chain, and you need to manage that as a very human asset that needs to grow. That is your number one asset going forward. >> Ralph Finos, you've also been pretty quiet. Action Item, Ralph.
>> Yeah, I think you've got to be careful about what you're trying to get done. It depends on your industry; whether it's finance or whether it's the entertainment business, there are different requirements about data in those different environments. And you need to be cautious about that, and you need leadership on the executive business side of things. The last thing in the world you want to do is depend on data scientists to figure this stuff out. >> And I'll give you the second-to-last answer or Action Item. Neil Raden, Action Item. >> I think there's been a lot of progress lately in creating tools for data scientists to be more efficient, and they need to be, because the big digital giants are draining them from other companies. So that's very encouraging. But in general, I think becoming a data-driven, digitally transformed company, for most companies, is a big job, and I think they need to do it in piece parts, because if they try to do it all at once they're going to be in trouble. >> All right, so that's great conversation, guys. Oh, David Floyer, Action Item. David's looking at me saying, ah, what about me? David Floyer, Action Item. >> (laughing) So my Action Item comes from an Irish proverb: if you ask for directions, they will always answer you, "I wouldn't start from here." So the Action Item that I have is, if somebody is coming in saying you have to redo all of your applications and rewrite them from scratch, and start in a completely different direction, that is going to be a 20-year job and you're not going to ever get it done. So you have to start from what you have: the digital assets that you have. And you have to focus on improving those with additional applications and additional data, using that as the foundation for how you build the business, with a clear long-term view. And if you look at some of the examples that were given earlier, particularly in the insurance industry, that's what they did. >> Thank you very much, guys. So, let's do an overall Action Item. We've been talking today about the challenges of de-risking digital business, which ties directly to the overall understanding of the role data assets play in businesses, and the technology's ability to move from just protecting data and restoring data, to actually restoring the relationships in the data, the structures of the data, and, very importantly, the models that are resident in the data. This is going to be a significant journey. There's clear evidence that this is driving a new valuation within the business. Folks talk about data as the new oil. We don't necessarily see things that way, because data, quite frankly, is a very, very different kind of asset. It can be shared, because it doesn't suffer the same limits of scarcity. So as a consequence, what has to happen is, you have to start with where you are. What is your current value proposition? And what data do you have in support of that value proposition? And then whiteboard it, clean-slate it, and say, what data would we like to have in support of the activities that we perform? Figure out what those gaps are. Find ways to get access to that data through piecemeal, piece-part investments that provide a roadmap of priorities looking forward. Out of that will come a better understanding of the fundamental data assets that are being created: new models of how you engage customers, new models of how operations works on the shop floor, new models of how financial services are being employed and utilized.
And use that as a basis for then starting to put forward plans for bringing technologies in that are capable of not just supporting the data and protecting the data, but protecting the overall organization of the data, in the form of these models, in the form of these relationships, so that the business can, as it creates these, as it throws off these new assets, treat them as the special resource that the business requires. Once that is in place, we'll start seeing businesses more successfully reorganize and reinstitutionalize the work around data, and it won't just be the big technology companies, the ones people call digital natives, that are well down this path. I want to thank George Gilbert and David Floyer here in the studio with me, and David Vellante, Ralph Finos, Neil Raden, and Jim Kobielus on the phone. Thanks very much, guys. Great conversation. And that's been another Wikibon Action Item. (upbeat music)
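To make the discussion of models-as-assets a little more concrete, here is a minimal Python sketch of what a model registry entry might capture: not just the serialized artifact, but the training-data lineage, the feature list, and the owning data scientists, so the relationships and metadata discussed above are restorable alongside the bits. Every name here (ModelAsset, to_catalog_record, the example datasets and owner) is an illustrative assumption, not any particular vendor's API.

    import hashlib
    import json
    from dataclasses import dataclass, asdict
    from datetime import datetime, timezone

    @dataclass
    class ModelAsset:
        # One registry entry: the model plus the context needed to restore it.
        name: str
        version: str
        artifact_bytes: bytes        # serialized model (pickle, PMML, ONNX, ...)
        training_datasets: list      # lineage: where the training data came from
        features: list               # the inputs the model expects, in order
        owners: list                 # the people whose knowledge built and tuned it
        checksum: str = ""
        registered_at: str = ""

        def seal(self):
            # Fingerprint the artifact so a later restore can be verified.
            self.checksum = hashlib.sha256(self.artifact_bytes).hexdigest()
            self.registered_at = datetime.now(timezone.utc).isoformat()
            return self

    def to_catalog_record(asset):
        # Everything except the raw bytes goes into the searchable catalog.
        record = asdict(asset)
        record.pop("artifact_bytes")
        return json.dumps(record, indent=2)

    churn_model = ModelAsset(
        name="churn-predictor", version="1.4",
        artifact_bytes=b"<serialized model bytes>",
        training_datasets=["crm.accounts", "billing.invoices_2017"],
        features=["tenure_months", "support_tickets_90d", "monthly_spend"],
        owners=["jane.doe@example.com"],
    ).seal()

    print(to_catalog_record(churn_model))

The point of the sketch is only that restoring this asset means restoring the whole record, not just the bytes.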
Wikibon | Action Item, Feb 2018
>> Hi, I'm Peter Burris, welcome to Action Item. (electronic music) There's an enormous net new array of software technologies available to businesses and enterprises to tend to some new classes of problems, and that means there's an explosion in the number of problems that people perceive as could be solved with software approaches. The whole world of how we're going to automate things differently with artificial intelligence and any number of other software technologies is being brought to bear on problems in ways that we never envisioned or never thought possible. That leads ultimately to a comparable explosion in the number of approaches to how we're going to solve some of these problems. That means new tooling, new models, and any number of other structures, conventions, and artifacts that are going to have to be factored by IT organizations and professionals in the technology industry as they conceive and put forward plans and approaches to solving some of these problems. Now, George, that leads to a question. Are we going to see an ongoing, ever-expanding array of approaches, or are we going to see some new kind of steady state that starts to simplify what happens, or how enterprises conceive of the role of software in solving problems? >> Well, we've had... probably four decades of packaged applications being installed and defining, really, the systems of record, which first handled the order-to-cash process and then layered around that, once we had more CRM capabilities, sort of the opportunity-to-lead capability. But systems of record fundamentally are backward-looking; they're tracking the performance of the business. The opportunity-- >> Peter: Recording what has happened? >> Yes, recording what has happened. The opportunity we have now is to combine what the big Internet companies pioneered with systems of engagement, where you had machine learning anticipating and influencing interactions. You can now combine those sorts of analytics with systems of record to inform and automate decisions in the form of transactions. And the question is now, how are we going to do this? Is there some way to simplify, or, not completely standardize, but can we make it so that we have at least some conventions and design patterns for how to do that? >> And David, we've been working on this problem for quite some time, but the notion of convergence has been extant in the hardware and the services, or in the systems business, for quite some time. Take us through what convergence means and how it is going to set up new ways of thinking about software. >> So there's a hardware convergence, and it's useful to define a few terms. There are converged systems; those are systems which have had some management software brought into them, and then on top of that they have traditional SANs and networks. There are hyper-converged systems, which started off in the cloud and have now come to the enterprise as well. And those bring software networking, software storage, software-- >> Software-defined, so it's a virtualizing of those converged systems. >> David: Absolutely, and in the future it's going to bring automated operational capabilities as well, AI on the operational side. And then there's full-stack convergence, where we start to put in the software, the application software, beginning with the database side of things, and then the application itself on top of the database.
And finally there's what you are talking about, the systems of intelligence, where we can combine the systems of record, the systems of engagement, and the real-time analytics as a complete stack. >> Peter: Let's talk about this for a second, because ultimately what I think you're saying is that we've got hardware convergence in the form of converged infrastructure, hyper-converged in the form of virtualization of that, new ways of thinking about how the stack comes together, and new ways of thinking about application components. But what seems to be the common thread through all of this is data. >> David: Yes. >> So basically what we're seeing is a convergence, or a rethinking, of how software elements revolve around the data. Is that kind of the centerpiece of this? >> David: That's the centerpiece of it, and we have had very serious constraints about accessing data. Those will improve with flash, but there's still a lot of room for improvement. And the architecture that we are saying is going to come forward, which really helps this a lot, is the UniGrid architecture, where we offload the networking and the storage from the processor. This is already happening in the hyperscale clouds; they're putting a lot of effort into doing this. But we're at the same time allowing any processor to access any data in a much more fluid way, and we can grow that to thousands of processors. Now, that type of architecture gives us the ability to converge the traditional systems of record, and there are a lot of them obviously, and the systems of engagement and the real-time analytics for the first time. >> But the focal point of that convergence is not the licensing of the software; the focal point is convergence around the data. >> The data. >> But that has some pretty significant implications when we think about how software has always been sold, how organizations that run software have been structured, the way that funding is set up within businesses. So George, what does it mean to talk about converging software around data from a practical standpoint over the next few years? >> Okay, so let me take that and interpret it as converging the software around data in the context of adding intelligence to our existing application portfolio, and then the new applications that follow on. And basically, when we want to inject enough intelligence to anticipate and inform interactions, or inform or automate transactions, we have a bunch of steps that need to get done, where we're ingesting essentially contextual or ambient information. Often this is information about a user or the business process. And this data has got to go through a pipeline where there's both a Design Time and a Run Time. In addition to ingesting it, you have to enrich it and make it ready for analysis. Then the analysis is essentially picking out of all that data and calculating the features that you plug into a machine learning model. And then that produces essentially an inference based on all that data, that says, well, this is the probable value. It sounds like it's in the weeds, but the point is it's actually a standardized set of steps. Then the question is, do you put that all together in one product across that whole pipeline? Can one piece of infrastructure software manage that? Or do you have a bunch of pieces, each handing off to the next? And-- >> Peter: But let me stop you, because I want to make sure that we kind of follow this thread.
So we've argued that hardware convergence, and the ability to scale the role data plays or how data is used, is happening, and that opens up new opportunities to think about data. Now, what we've got is, we are centering a lot of the software convergence around the use of data, through copies and other types of mechanisms for handling snapshots and whatnot, and things like UniGrid. Let's start with this. It sounds like what you're saying is, we need to think of new classes of investments in technologies that are specifically set up to handle the processing of data in a more distributed application way, right? If I've got that right, that's kind of what we mean by pipelines? >> George: Yes. >> Okay, so once we do that, once we establish those conventions, once we establish, organizationally, institutionally, how that's going to work, now we take the next step of saying, are we going to default to a single set of products, or are we going to go best-of-breed, and what kind of convergence are we going to see there? >> And there's no-- >> First of all, have I got that right? >> Yes, but there's no right answer. And I think there's a bunch of variables that we have to play with that depend on who the customer is. For instance, the very largest and most sophisticated tech companies are more comfortable taking multiple pieces, each very specialized, and putting them together in a pipeline. >> Facebook, Yahoo, Google-- >> George: LinkedIn. >> Got it. >> George: Those guys. And the knobs that they're playing with, that everyone's playing with, are three, basically, on the software side. There's your latency budget, which is how much time you have to produce an answer. So that drives the transaction or the interaction. And that itself is not just a single number, because the goal isn't to get it as short as possible; the goal is to get as much information into the analysis within the budgeted latency. >> Peter: So it's packing the latency budget with data? >> George: Yes, because the more data that goes into making the inference, the better the inference. >> Got it. >> The example that someone used on Fareed Zakaria's GPS, one show, was that with 300 attributes describing a person, you could know more about that person than that person did (laughs), in terms of inferring other attributes. So the point is, once you've got your latency budget, the other two knobs that you can play with are development complexity and admin complexity. And the idea is, on development complexity, there's a bunch of abstractions that you have to deal with. If it's all one product, you're going to have one data model, one address and namespace convention, one programming model, one way of persisting data, a whole bunch of things. That's simplicity, and that makes it more accessible to mainstream organizations. Similarly, on the admin side, there are probably two or three times as many constructs that admins would have to deal with. So again, if you're dealing with one product, it's a huge burden off the admin, and we know they struggled with Hadoop. >> So convergence, decisions about how to enact convergence, are going to be partly or strongly influenced by those three issues: latency budget, development complexity or simplicity, and administrative complexity. David-- >> I'd like to add one more to that, and that is location of data, because you want to be able to look at the data that is most relevant to solving that particular problem.
Now, today a lot of the data is inside the enterprise. There's a lot of data outside that, but still, you will want to, in the best possible way, combine that data one way or another. >> But isn't that a variable on the latency budget? >> David: Well, I would think it's very useful to split the latency budget, which is to do with inference mainly, from development with the machine learning. So there is a development cycle with machine learning that is much longer. That is days, could be weeks, could be months. >> And that would still be done in batch. >> It is, or will be, done in batch. You need to test it and then deliver it as an inference engine to the applications that you're talking about. Now, that inference is going to be very close together; the rest of it has to be all physically very close together. But the data itself is spread out, and you want to have mechanisms that can combine those datasets, move applications to that data, bring those together in the best possible way. That is still a batch process. That can run where the data is, in the cloud, locally, wherever it is. >> George: And I think you brought up a great point, which I would tend to include in the latency budget, because no matter what kind of answers you're looking for, some of the attributes are going to be precomputed, and those could be-- >> David: Absolutely. >> External data. >> David: Yes. >> And you're not going to calculate everything in real time, there's just-- >> You can't. >> Yes, you can't. >> But is the practical reality that the convergence of, so again, the argument: we've got all these new problems, all kinds of new people that are claiming that they know how to solve the problems, each of them choosing different classes of tools to solve the problem, an explosion across the board in the approaches, which can lead to enormous downstream integration and complexity costs. You've used the example of Cloudera, for example. Some of the distro companies claim that 50-plus percent of their development budget is dedicated to just integrating these pieces. That's a non-starter for a lot of enterprises. Are we fundamentally saying that the degree of complexity, or the degree of simplicity and convergence, that's possible in software is tied to the degree of convergence in the data? >> You're homing in on something really important, give me-- >> Peter: Thank you! (laughs) >> George: Give an example of the convergence of data that you're talking about. >> Peter: I'll let David do it, because I think he's going to jump on it. >> David: Yes, so let me take an example. If you have a small business, there's no way that you want to invest yourself in any of the normal levels of machine learning and applications like that. You want to outsource that. So big software companies are going to do that for you, and they're going to do it especially for the specific business processes which are unique to you, which give you digital differentiation of some sort or another. So for all of those types of things, software will come in from vendors, from SAP or sons of SAP, which will help you solve those problems, and having data brokers which are collecting the data, putting it together, helping you with that. That seems to me the way things are going. In the same way, there are a lot of inference engines which will be out at the IoT level. Those will have very rapid analytics given to them.
Again, not by yourself, but by companies that specialize in facial recognition or specialize in making warehouse-- >> Wait a minute, are you saying that my customers aren't special, that they require special facial recognition? (laughs) So I agree with David, but I want to come back to this notion because-- >> David: The point I was getting at is, there's going to be lots and lots of room for software to be developed to help in specific cases. >> Peter: And large markets to sell that software into. >> Very large markets. >> Whether it's software, but increasingly also with services. But I want to come back to this notion of convergence, because we talked about hardware convergence, and we're starting to talk about the practical limits on software convergence. But somewhere in between, I would argue, and I think you guys would agree, that really the catalyst for, or the thing that's going to determine, the rate of change and the degree of convergence is going to be how we deal with data. Now, you've done a lot of research on this; I'm going to put something out there and you tell me if I'm wrong. But at the end of the day, when we start thinking about UniGrid, when we start thinking about some of these new technologies, and the ability to have single copies or single sources of data, multiple copies, in many respects what we're talking about is the virtualization of data without loss. >> David: Yes. >> Not loss of the character, the fidelity of the data, or the state of the data. Have I got that right? >> Knowing the state of the data. >> Peter: Or knowing the state of the data. >> If you take a snapshot, that's a point in time; you know what that point in time is, and you can do a lot of analytics on it, for example at a certain time of day or whatever-- >> Peter: So is it wrong to say that we're seeing, we've moved through the virtualization of hardware, and we're now in a hyperscale or hyper-converged world, which is very powerful stuff. We're seeing this explosion in the amount of software that's being applied to, you know, the way we approach problems and whatnot. But a forcing function, something that's going to both constrain how converged that can be, but also force or catalyze some convergence, is the idea that we're moving into an era where we can start to think about virtualized data through some of these distributed file systems-- >> David: That's right, and the metadata that goes with it. The most important thing about the data, and it's increasing much more rapidly than the data itself, is the metadata around it. But I want to just make one point on this: all data isn't useful. There's a huge amount of data that we capture that we're just going to have to throw away. The idea that we can look at every piece of data for every decision is patently false. There's a lovely example of this in... fluid mechanics. >> Peter: Fluid dynamics. >> David: Fluid dynamics. If you're trying to have simulation at a very, very low level, the amount of-- >> Peter: High fidelity. >> High fidelity, you run out of capacity very, very quickly indeed. So you have to make trade-offs about everything, and all of that data that you're generating in that simulation, you're not going to keep. All the data from IoT, you can't keep that. >> Peter: And that's not just a statement about the performance or the power or the capabilities of the hardware; there are some physical realities-- >> David: Absolutely, yes. >> That are going to limit what you can do with the simulation.
But, and we've talked about this in other Action Items, there is this notion of options on data value, where the value of today's data is maybe-- >> David: Is much higher. >> Peter: Well, it's higher from a time standpoint for the problems that we understand and are trying to solve now, but there may be future problems where we still want to ensure that we have some degree of data so we can be better at attending to those future problems. But I want to come back to this point, because, in all honesty, I haven't heard anybody else talking about this, and maybe it's because I'm not listening. But this notion, again, in your research, of virtualized data inside these new architectures being a catalyst for a simplification of a lot of the sharing subsystem. >> David: It's essentially sharing of data. So instead of having the traditional way of doing it within a data center, which is, I have my systems of record, I make a copy, it gets delivered to the data warehouse, for example, that's the way it's being done, and that is too slow. Moving data is incredibly slow. So another way of doing it is to share that data, make a virtual copy of it, and technology is allowing you to do that because the access density has gone up by thousands of times-- >> Peter: Because? >> Because. (laughs) Because of flash, because of new technologies at that level. >> Peter: High-performance interfaces, high-performance networks. >> David: All of that stuff is now allowing things which just couldn't even be conceived. However, there is still a constraint there. It may be a thousand times bigger, but there is still an absolute constraint to the amount of data that you can actually process. >> And that constraint is provided by latency. >> Latency. >> Peter: Speed of light. >> Speed of light, and speed of the processors themselves. >> George: Let me add something that may help explain the virtualization of data and how it ties into the convergence or non-convergence of the software around it. When we're building these analytic pipelines, essentially we've disassembled what used to be a DBMS. And so out of that we've got a storage engine, we've got query optimizers, we've got data manipulation languages, which have grown into full-blown analytic languages, and a data definition language. Now, the system catalog used to be just a way to virtualize all the tables in the database and tell you where all the stuff was, the indexes and things like that. Now, what we're seeing, since data is now spread out over so many places and products, is the emergence of a new kind of catalog, whether that's from Alation or Dremio, or, on AWS, the Glue catalog, and I think there's something equivalent coming on Azure. But the point is, those are beginning to get useful enough to be the entry point for analytic products, and maybe eventually even for transactional products to update, or at least to analyze, the data in these pipelines that we're putting together out of these components of what was a disassembled database. Now, we could be-- >> I would make a distinction there between the development of analytics and, again, the real-time use of those analytics within systems of intelligence. >> George: Yeah, but when you're using them-- >> David: There are different problems they have to solve. >> George: But there's a Design Time and a Run Time; there are actually four pipelines for the sort of analytic pipeline itself.
There's Design Time and Run Time, and then, for the inference engine and the modeling that goes behind it, there's also a Design Time and Run Time. But I guess, where, I'm not disagreeing that you could have one converged product to manage the Run Time analytic pipeline; I'm just saying that the pieces that you assemble could come from one vendor. >> Yeah, but I think David's point, I think it's accurate, and this has been true since the beginning of time. (laughs) It certainly predated UNIVAC. At the end of the day, read/write ratios and the characteristics of the data are going to have an enormous impact on the choices that you make. And high write-to-read ratios almost dictate the degree of convergence, and we used to call that SMP, or, you know, scale-up database managers. And for those types of applications, with those types of workloads, it's not necessarily obvious that that's going to change. Now, we can still find ways to relax that, but you're talking about, George, the new characteristics-- >> Injecting the analytics. >> Injecting the analytics, where we're doing more reading as opposed to writing. We may still be writing into an application that has these characteristics-- >> That's a small amount of data. >> But a significant portion of the new function is associated with these new pipelines. >> Right. And what data you create is generally derived data, so you're not stepping on something that's already there. >> All right, so let me get some Action Items here. David, I want to start with you. What's the Action Item? >> David: So for me, about convergence, there are two levels of convergence. First of all, converge as much as possible and give the work to the vendor; that would be my Action Item. The more that you can go full stack, the more that you can get the software services from a single point, single throat to choke, single hand to shake, the more you can outsource your problems to them. >> Peter: And that has a speed implication, time to value. >> Time to value, and you don't have to do undifferentiated work. So that's the first level of convergence, and then the second level of convergence is to look hard at how you can bring additional value to your existing systems of record by putting in automation or real-time analytics. Which leads to automation; that is the second one, for me, where the money is: automation, reduction in the number of things that people have to do. >> Peter: George, Action Item. >> So my Action Item is that you, the customer, have to evaluate your skills as much as your existing application portfolio. And if more of your greenfield apps can start in the cloud, and you're not religious about open source, but you're more religious about the admin burden and development burden and your latency budget, then start focusing on the services that the cloud vendors originally created as standalone, but that they are increasingly integrating, because the customers are leading them there. And then, for those customers who have decades and decades of infrastructure and applications on-prem and need a pathway to the cloud, some of the vendors formerly known as Hadoop vendors, but, for that matter, any on-prem software vendor, are providing customers a way to run workloads in a hybrid environment or to migrate data across platforms. >> All right, so let me give this a final Action Item here. Thank you, David Floyer, George Gilbert. Neil Raden, Jim Kobielus, and the rest of the Wikibon team are with customers today.
We talked today about convergence at the software level. What we've observed over the course of the last few years is an expanding array of software technologies, specifically AI, big data, machine learning, et cetera, that are allowing enterprises to think differently about the types of problems that they can solve with technology. That's leading to an explosion in the number of problems that folks are looking at, the number of individuals participating in making those decisions and thinking those issues through, and, very importantly, an explosion in the number of vendors with piecemeal solutions about what they regard as their best approach to doing things. However, that is going to impose a significant burden that could have enormous implications for years, and so the question is, will we see a degree of convergence in the approach to doing software, in the form of pipelines and applications and whatnot, driven by a combination of what the hardware is capable of doing, what the skills make possible, and, very importantly, the natural attributes of the data? And we think that there will be. There will always be tension in the model if you try to invent new software, but one of the factors that's going to bring it all back to a degree of simplicity will be a combination of what the hardware can do, what people can do, and what the data can do. And so we believe, pretty strongly, that ultimately the issues surrounding data, whether it be latency or location, as well as development complexity and administrative complexity, are going to be the range of factors that dictate how some of these solutions start to converge and simplify within enterprises. As we look forward, our expectation is that we're going to see an enormous net new investment over the next few years in pipelines, because pipelines are a first-level set of investments in how we're going to handle data within the enterprise. And they'll look, in certain respects, like how a DBMS used to look, but in a disaggregated way, conceptually and administratively. And then, from a product selection and service selection standpoint, the expectation is that they themselves will have to come together so that developers can have a consistent view of the data that's going to run inside the enterprise. Want to thank David Floyer, want to thank George Gilbert. Once again, this has been Wikibon Action Item, and we look forward to seeing you on our next Action Item. (electronic music)
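George's pipeline — ingest, enrich, feature calculation, inference, all inside a latency budget — can be sketched in a few lines of Python. This is a toy, single-process rendering of steps that in production would each be separate infrastructure; the budget figure, the feature names, and the stand-in linear scorer are all assumptions made for illustration, not anyone's actual product.

    import time

    LATENCY_BUDGET_MS = 50  # assumed per-interaction budget

    def ingest(event):
        # Step 1: capture the raw contextual event.
        return dict(event)

    def enrich(event, profile_store):
        # Step 2: join in precomputed attributes (the batch side of the pipeline).
        event["profile"] = profile_store.get(event["user_id"], {})
        return event

    def featurize(event):
        # Step 3: derive the features the model expects.
        profile = event["profile"]
        return [
            profile.get("tenure_months", 0),
            profile.get("monthly_spend", 0.0),
            1.0 if event.get("page") == "cancel" else 0.0,
        ]

    def infer(features, weights=(-0.02, -0.001, 2.5), bias=0.1):
        # Step 4: a stand-in linear scorer; a trained model would plug in here.
        return bias + sum(w * x for w, x in zip(weights, features))

    def score_event(event, profile_store):
        start = time.perf_counter()
        score = infer(featurize(enrich(ingest(event), profile_store)))
        elapsed_ms = (time.perf_counter() - start) * 1000
        if elapsed_ms > LATENCY_BUDGET_MS:
            raise RuntimeError(f"blew the latency budget: {elapsed_ms:.2f} ms")
        return score

    profiles = {"u42": {"tenure_months": 7, "monthly_spend": 80.0}}
    print(score_event({"user_id": "u42", "page": "cancel"}, profiles))

The design point the panel debated is exactly where these four functions live: one converged product managing all of them, or best-of-breed pieces handing off to each other.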
David Floyer, Wikibon | Action Item Quick Take: Storage Networks, Feb 2018
>> Hi, I'm Peter Burris, and this is a Wikibon Action Item Quick Take. (techno music) David Floyer, a lot of new opportunities for thinking about how we can spread data. That puts new types of pressure on networks. What's going on? >> So, what's interesting is the future of networks, and in particular one type of network. If we generalize about networks, you can have simplicity, which NFV, Network Function Virtualization, for example, is incredibly important for. You can have scale and reach, the number of different places that you place data, and how you can have the same admin for that. And you can have performance. Those are three things, and there's usually a trade-off between them; it's very, very difficult to have all three. What's interesting is that Mellanox have defined one piece of that network, the storage network, as a place where performance is absolutely critical. And they've defined the storage network with an emphasis on this performance using ethernet. Why? Because now ethernet can offer the same point-to-point, no-loss capabilities. The fastest switches are in ethernet now; up to 400 has been announced, which is much... >> David: 400... >> Gigabits per second, which is much faster than any other protocol. And one of the major reasons for this is that volume is coming from the cloud providers. So they are making the statement that storage networks are different from other networks. They need to have very low latency, they need to have high bandwidth, they need to have no loss, they need this point-to-point capability so that things can be done very, very fast indeed. I think their vision of where storage networks go is very sound, and that is what all storage vendors, and CIOs and CTOs, need to take heed of: that type of network is going to be what is in the cloud, and it is going to come to the enterprise data center very quickly. >> David Floyer, thank you very much. Bottom line: ethernet, storage area networks, segmentation, still going to happen. >> Yup. >> I'm Peter Burris, this has been a Wikibon Action Item Quick Take. (techno music)
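A rough back-of-the-envelope calculation shows why the jump to 400 Gb/s matters for storage traffic. Assuming an illustrative 70 percent effective utilization after protocol overhead (a made-up figure for the sketch), moving 100 TB falls from more than a day on 10 GbE to under an hour at 400 Gb/s — which is also a reminder of why the earlier advice to keep working copies where they originate still holds.

    def transfer_hours(data_tb, link_gbps, efficiency=0.7):
        # Wall-clock hours to move data_tb terabytes over a link_gbps link,
        # assuming 70% effective utilization after protocol overhead.
        bits = data_tb * 8e12
        seconds = bits / (link_gbps * 1e9 * efficiency)
        return seconds / 3600

    for gbps in (10, 100, 400):
        print(f"100 TB over {gbps:>3} GbE: {transfer_hours(100, gbps):6.1f} hours")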
Peter Burris, Wikibon | Action Item Quick Take: NVMe over Fabrics, Feb 2018
(gentle electronic music) >> Hi, I'm Peter Burris. Welcome to another Wikibon Action Item Quick Take. A lot of new technology throughout the entire stack, including still inside systems. One in particular's pretty important; tell us about it. >> Thank you. NVMe over Fabrics is what I'm going to talk about. And my take on this is that it's going to be very real in 2018. It's going to support all the protocols; it'll support iSCSI, it'll support Fibre Channel, InfiniBand, and Ethernet. So it's going to affect all storage. The incremental costs are low, very low. The performance of it is absolutely outstanding and fantastic, and there'll be huge savings, potential huge savings, on things like core licensing. So the savings within storage and the savings across the system will be large. My view is it should become the design standard for storage in 2018. So the Action Item here is to assume that you are going to be implementing NVMe over Fabrics over the next 18 months as part of all storage purchases, and to ensure that all the NICs, the software, etc., will support it. So the key question to ask of any vendor is, 'What is your committed NVMe rollout in 2018 and the start of 2019?' >> David Floyer, thank you very much. Once again, the idea here is NVMe becoming not just a technology standard, but now becoming ready for prime time in a commercial way. This has been a Wikibon Action Item Quick Take. Thanks for watching. (gentle electronic music)
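The core-licensing point can be made concrete with a rough model: if NVMe over Fabrics cuts the host CPU cycles spent per I/O, the freed cycles translate into cores — and per-core license fees — that no longer have to be bought. Every number below (the IOPS rate, the cycles-per-I/O figures, the clock speed) is an assumption for illustration, not a measured result.

    def cores_freed(iops, cycles_per_io_old, cycles_per_io_new, core_ghz=2.5):
        # Cores no longer consumed by the storage stack at a given I/O rate.
        saved_cycles_per_sec = iops * (cycles_per_io_old - cycles_per_io_new)
        return saved_cycles_per_sec / (core_ghz * 1e9)

    # Assumed: 1M IOPS; legacy stack ~30,000 cycles/IO; NVMe-oF ~10,000 cycles/IO.
    freed = cores_freed(1_000_000, 30_000, 10_000)
    print(f"~{freed:.1f} cores freed")

At typical per-core database licensing rates, even a single-digit core reduction per server adds up quickly across a fleet.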
Peter Burris, Wikibon | Action Item Quick Take: Hortonworks, Feb 2018
(rhythmic techno) >> Hi, this is Peter Burris. Welcome to a Wikibon Action Item Quick Take. It's earnings season. Hortonworks revealed some numbers; the market responded. George, what happened? >> So, Hortonworks had a good year and a good quarter in terms of meeting the expectations they set for Wall Street and analysts. There was a little disappointment in the guidance. And, normally, we don't really talk about earnings on a show like this, but I think it's worth focusing on it because it highlights an issue, something that we've lost sight of. We've been in this environment now for 10 years, where we see pricing in this slow-motion collapse, based on metered pricing models or subscription pricing models, as well as open source. But what hasn't changed is the cost of fielding a direct sales force to get customers to do enterprise-wide adoption. Everyone talks about land and expand, which is, like, self-service or, at best, inside sales. But to get wide-scale adoption, you need to call high, and you need to have solutions architects who can map the product to an enterprise-specific architecture and infrastructure. I think we're going to see convergence and consolidation. Hortonworks does have a very broad product line, and we're seeing evidence of uptake of the new products, especially for data in motion to go with its data lake product. But I think this is something we're going to have to watch with all vendors: can they afford to build the go-to-market channel that will make their customers successful? >> Once again, software's complex, especially enterprise software that promises to do complex and rich things. This has been a Wikibon Action Item Quick Take. Thank you for watching. (quiet techno)
Peter Burris, Wikibon | Action Item Quick Take: AWS Low Code, Feb 2018
(electronic pop music) >> Hi, I'm Peter Burris. Welcome to a Wikibon Action Item Quick Take. One of the biggest challenges that all cloud players face is how to bring more developers into the ranks. Jim Kobielus, Amazon did something interesting, or, I should say, AWS did something interesting this week. Tell us about it. >> Well, they haven't actually done it, Peter, but there is a rumor that they're doing it. Let me explain. Darryl Taft, who's a very well-seasoned veteran reporter, now with TechTarget... Darryl reported that AWS is "appealing to the masses" with a low-code development project. I think that's exciting. He's got it on strong background that they've got Adam Bosworth, formerly of Microsoft, heading up their low-code tool development effort. I think one of the things that AWS is missing is a strong tool for developers, especially professional developers trying to rapidly build cloud applications, and also for the run-of-the-mill business user who wants to quickly put together an application right in the Amazon cloud. I'm impressed that they've got Adam Bosworth, who was very much one of the drivers behind the Access database at Microsoft. They say they've been developing it since last summer, so going forward I'm hoping to see an actual low-code tool from AWS that would bring them into this space in a major way, really to encourage more development of cloud applications running natively in the very sprawling and complex AWS world. >> All right, so, AWS being rumored to expand their attractiveness to developers. This has been a Wikibon Action Item Quick Take. (electronic pop music)
Peter Burris, Wikibon | Action Item, Feb 9 2018
>> Hi, I'm Peter Burris, and welcome to Wikibon's Action Item. (upbeat music) Once again, we're broadcasting from theCUBE studio in beautiful Palo Alto, California, and I have joining me here in the studio George Gilbert and David Floyer, both Wikibon analysts, and, remote, welcome Neil Raden and Jim Kobielus. This week, we're going to talk about something that's actually quite important, and it's one of those examples of an innovation in which technology that is maturing in multiple domains is brought together in unique and interesting ways to potentially dramatically revolutionize how work gets done. Specifically, we're talking about something we call augmented programming. The notion of augmented programming borrows from some of the technologies associated with new or declarative low-code development environments, machine learning, and an increasing understanding of the role that automation's going to play, specifically as pertains to human and human-augmented activities. Now, low-code programming has been around for a while. Machine learning's been around for a while, and, increasingly, some of these notions of automation have been around for a while. But it's how they are coming together to create new approaches and new possibilities that can dramatically improve the speed of systems development, the quality of systems development, and, ultimately, very importantly, the ongoing manageability of those systems. So, Jim Kobielus, let's start with you. What are some of the issues associated with augmented programming that users need to be focused on? >> Yeah, well, the primary issue, or, really, the driver, is that we need to increase the productivity of developers greatly, because it's required of them to build programs and applications faster, with fewer resources, and to deploy them more rapidly in DevOps environments, and to manage that code, and to optimize that code for 10 zillion downstream platforms, from mobile to web to the Internet of Things, and so forth. They need power tooling to be able to drive this process. Now, that whole low-code space has been around for years. It very much evolved from what used to be called rapid application development, which itself evolved from the 4GL languages of decades past, and so forth. Looking at it now, as we're moving towards the end of the second decade of this century, the low-code development space has evolved, and it is rapidly merging with BPM, on the one hand, orchestration modeling tools, and robotic process automation, on the other hand, to enable the average end user or business analyst to quickly gin up an application, based on being able to wire together UI components fairly rapidly and drive it from the UI on in. What we're seeing now is that more and more machine learning is being used in the low-code development of applications. Machine learning is being used in a variety of capacities, one of which is simply to be able to infer the appropriate program code from external assets like screenshots and wireframes, but also from database schema and so forth. A lot of machine learning is coming to this space in a major way.
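To illustrate Jim's point about inferring program code from a database schema, here is a toy generator: given column names and SQL types, it emits a skeleton data class, roughly the way a low-code tool might scaffold a form or model object. A real product would use trained models and far richer inputs; this hand-rolled template version, with invented table and column names, only shows the shape of the idea.

    PY_TYPES = {"int": "int", "varchar": "str", "text": "str",
                "date": "str", "decimal": "float", "boolean": "bool"}

    def scaffold_model(table, columns):
        # Emit Python source for a dataclass matching the table schema.
        lines = ["from dataclasses import dataclass", "", "@dataclass",
                 f"class {table.title().replace('_', '')}:"]
        for name, sql_type in columns.items():
            lines.append(f"    {name}: {PY_TYPES.get(sql_type, 'str')}")
        return "\n".join(lines)

    schema = {"id": "int", "full_name": "varchar",
              "signup_date": "date", "active": "boolean"}
    print(scaffold_model("customer_account", schema))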
So, RPA may be associated with a certain class of applications and environmental considerations, and there'll be other tools, for example, that might be associated with different application considerations and environmental attributes as well. But David Floyer, one of the things that we're concerned about is, a couple weeks ago, we talked about the notion of data-aware middleware, where the idea is that, increasingly, we'll see middleware emerge that's capable of moving data in response to the metadata attributes of the data, combined with visibility into the application patterns. But when we think about this notion of augmented programming, what are some of the potential limits that people have to think about as they consider these tools? >> Peter, that's a very good question. The key for all of these techniques is to use the right tools in the right place. A lot of the leading edge of this space assumes an environment where the programmer has access to all of his data, he owns it, and he is the only person there. The challenge is that in many applications you are sharing data. You are sharing data across the organization, you are sharing data between programmers. Now, this introduces a huge amount of complexity, and there have been many attempts to try and tackle this. There've been data dictionaries, there've been data management approaches, ways of managing this data. They haven't had a very good history. The efforts involved in trying to make those work within an organization have been, at best, spasmodic. >> (laughs) Spasmodic, good word! >> When we go into this environment, I think the key is to make sure that you are applying these tools initially to the areas where somebody does have access to all the data, and then carefully look at it from the point of view of shared data, because you have a whole lot of issues in stateful environments that you do not have in stateless ones: the complexity of locking data, the complexity of many people accessing that data. That requires another set of tools. I'm all in favor of these low-code-type environments, but you have to make sure that you're applying the right tools to the right type of applications.
I suppose my concern is that when you deal at that level, how are you going to maintain coherency and consistency in those systems over time without adding, like he said, orchestration of those systems? What David is saying, I think, is really key. >> Yeah, I, go ahead, sorry, Neil. Go ahead. >> No, that's all right. What I was-- >> I think-- >> Peter: Sorry. Bad host. >> David: You think? >> Neil: No, go ahead. >> No, what I was going to say was that a crucial feature of this is that a lot of times the application is owned by a business line, and the business line presumes that they own their data, and they have modeled those systems for a certain type of work, for a certain volume of work, for a certain distribution of control, and when you reveal a lot of this stuff, you sometimes break those assumptions. That can lead to real serious breaks in the system. >> You know, they're not always evil, as we like to characterize them. Some of them are actually well-thought-out and really good systems, better than anything they could get from the IT organization. But the point is, they're usually pretty brittle, and they require a lot of effort from the people who develop them to keep them running, because they don't use the kind of tools and approaches and platforms and methodologies that lend themselves to good-quality software. I think there's real potential for RPA in that area. >> I think there are also some interesting platforms that are driving to help in this particular area, particularly for applications which go across departments in an organization. ServiceNow, for example, has a very powerful platform for very high-level production of systems, and a lot of the time it's being used to automate procedures that go across different departments. I think there are some extremely good tools coming out which will significantly help, but they help more with serial procedures than with concurrent procedures. >> And there are some expectations about the type of tools you use, and the extensibility of those tools, et cetera, which leads me, anyway, George, to ask the question about some of the machine learning attributes of this. We've got to be careful about machine learning being positioned as the panacea for all business problems, which too often seems to be the case. But it's certainly reasonable to observe that machine learning can, in fact, help us in important ways in understanding how patterns in applications and data are working, and how people are working together. Talk a little bit about the machine learning attributes of some of these tools. >> Well, I like to say that every few years, we have a technology we get so excited about that we assume it tastes like chocolate, costs a dollar, and cures cancer. Machine learning is that technology right now. The interesting thing about robotic process automation and many low-code environments is that they're sort of inheriting the mantle of the old application macros, and even the cross-application macros from the early desktop office wars. The difference is that back then there were APIs those scripts could talk to, so they could treat the desktop applications as an application platform. As David and Neil said, we're going through application user interfaces now, and when you want to do a low-code programming environment, you often want to program by example.
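To make "programming by example" concrete, and to set up the generalization George turns to next, here is a hypothetical Python sketch. The `ui` object, its methods, and every target name are invented for illustration; no particular RPA product works exactly this way.

```python
# Hypothetical illustration: what an RPA recorder captures (literal),
# and what it becomes once generalized. All names here are invented.

# Recorded by example: every value is a literal from one session.
def recorded_script(ui):
    ui.click(x=412, y=183)        # the "New Invoice" button, found by pixel position
    ui.type_text("ACME Corp")     # the one customer used during recording
    ui.type_text("2018-02-09")    # the one date used during recording
    ui.click(x=640, y=520)        # the "Submit" button, again by position

# Generalized: positions become named UI targets, literals become parameters.
def create_invoice(ui, customer: str, date: str):
    ui.click(target="new_invoice_button")   # resolved by UI element, not pixels
    ui.type_text(customer)
    ui.type_text(date)
    ui.click(target="submit_button")
```

The generalized form survives a window being moved or a screen being redesigned; the literal form does not, which is exactly the gap machine learning is being asked to close.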
But then you need to generalize parts, you know, when you move this thing to this place, you might now want to generalize that. That's where machine learning can start helping take literal scripts and add more abstract constructs to them. >> So, you're literally digitizing some of the primitives that are in some of these applications, and that allows you to reveal data that machine learning can apply to make observations and recommendations about patterns, and actually do code generation. >> And you know, I would add one thing: it's not just about the UI anymore, because we're surfacing, as we were talking about earlier, the data-driven middleware. Another way of looking at it is what used to be the system catalog: we had big applications all talking to a central database, but now that we have so many repositories, we're sort of extricating the system catalog so that we can look at and curate data in many locations. These tools can access that, because they have user interfaces as well as APIs. And then, in addition, you don't have to go against a database that is unprotected by an application's business logic. More and more, we have microservices and serverless functions that embody the business logic, and you can go against them, and they enforce the rules as well. >> That's great, so, David Floyer-- >> I should point out-- >> Hold on, Jim. David Floyer, this is not a technology set that suddenly is emerging on the scene independent of other changes. There are also some important changes in the hardware itself that are making it possible for us to reveal data differently, so that these types of tools and these types of technologies can be applied. I'm specifically thinking about something as mundane as SSD, flash-based storage, and other types of technologies that allow us to do different things with data so that we can envision working with this stuff. Give us a quick rundown on the infrastructure and some of the key technologies making this possible. >> When we look at systems architectures now, what we never had was fast memories and fast storage. We had very, very slow storage, and we had to design systems to take account of that. What is coming in now is much, much faster storage, built on things like NVMe and other fabrics, which really get to any data within microseconds, as opposed to milliseconds. That's thousands of times faster. With these, the access density that you can achieve to the data is much, much higher than it was, again, many thousands of times higher. That enables you to take a different approach to sharing data. Instead of having to share data at the disk level, you can now, for example, take a snapshot of the data. You can let that snapshot feed, for example, the analytics system on the hour, or on the day, or at whatever timescale you want. And then, in parallel, you can run huge amounts of analytics against that snapshot while the same operational system keeps working. There are some techniques there which I think are very exciting, indeed. The other big change is that we're going to be talking machine to machine. Most applications were designed for a human to be the recipient at the other end. One of the differences when you're dealing with machines is that now you have to get your code done in microseconds, as opposed to the seconds a human will tolerate, again, orders of magnitude faster.
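The arithmetic behind those ratios, as a quick sketch with illustrative round numbers rather than measurements of any particular device:

```python
# Back-of-envelope numbers only; real devices vary widely.
disk_seek_s = 10e-3   # ~10 ms for a random read on a spinning disk
nvme_read_s = 10e-6   # ~10 us for a random read on NVMe-class flash

print(f"latency ratio: {disk_seek_s / nvme_read_s:.0f}x")   # ~1000x

# Access density: random reads per second one device can sustain.
print(f"disk: ~{1 / disk_seek_s:,.0f} reads/s")   # ~100 reads/s
print(f"nvme: ~{1 / nvme_read_s:,.0f} reads/s")   # ~100,000 reads/s
```

That jump in access density is what makes the snapshot pattern David describes affordable: the same pool of storage can absorb heavy analytic scans alongside operational traffic instead of forcing the two onto separate copies and separate schedules.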
This is a very exciting area, but when we're looking at low-code, for example, you're still going to need those well-crafted algorithms, that well-crafted, very fast code, as one of the tools of programmers. There's still going to be a need for people who can create these very fast algorithms. An exciting time all the way around for programmers. >> What were you going to say, Jim? And I want to come back and have you talk about DevOps for a second. >> Yeah, I'll add to what David was just saying. Most low-code tools are not entirely no-code, meaning what they do is auto-generate code pursuant to some declarative business specification, and professional programmers can go in and modify that code, tweak it, and optimize it. And I want to tie in now to something that George was talking about, the role of ML in this process. ML can make a huge mess, in the sense that ML can be an enabler for more people who don't know a whole lot about development to build stuff willy-nilly, so there's more code out there than you can shake a stick at, and there are no standards. But also, I'm seeing, and I saw this past week, that MIT has a project, they already have a tool, that uses ML to take a segment of code out of one program, transplant it into another application, and modify it so that it fits the context of the new application along various attributes, and so forth. What I'm getting at is that ML, judging by what MIT has done, can be a tool for enabling reuse, re-contextualization, and tweaking of code. In other words, ML can be a handmaiden of enforcing standards as code gets repurposed throughout these low-code environments. ML is a double-edged sword in terms of enabling stronger or weaker governance over the whole development process. >> Yeah, and I want to add to that, Jim, that it's not just that you can enforce, or at least reveal, standards and compliance; it also increases the likelihood that we become a little bit more tool-dependent. Or, a little bit less tool-dependent, I should say. Going back to what you were talking about, David, it increases the likelihood that people are using the right tool for the right job, which is a pretty crucial element of this, especially during adoption. So, Jim, give us a couple of quick observations on what a development organization is going to have to do differently to get going on utilizing some of these technologies. What are the top two or three things that folks are going to have to think about? >> First of all, in the low-code space, there are general-purpose tools that can bang out code for various target languages and various applications, and there are highly special-purpose tools that can go gangbusters on auto-ginning web application code, mobile code, and IoT code. First and foremost, you've got to decide how much of the ocean you want to boil, in terms of low-code. I recommend that if you have a requirement for accelerating, say, mobile code development, then go with low-code tools that are geared to iOS and Android and so forth as your target platforms, and stay there. Don't feel like you have to get some monster suite that can do everything, potentially. That's one critical thing.
Another critical thing: the tool that you adopt needs to be more than just a development tool. It needs to have capabilities built in to help your team govern those code builds within whatever DevOps, CI/CD, or repository environment you have inside your organization; make sure that the tool you've got plays well with your DevOps environment, with your workflows, with your code repositories. And then, number three, we keep forgetting this, but the front-end development is still not a walk in the woods. In fact, specifying the complex business logic that drives all this code generation is work for professional developers more often than not. These are complex tools; even RPA tools are, quite frankly, not as user-friendly as they potentially could be down the road, because you still need somebody to think through the end-to-end application and then specify, at a declarative level, the steps that need to be accomplished before the RPA tool can do its magic and build something that you might want to then crystallize as a repeatable asset in your organization. >> So it doesn't take the thinking out of application development. >> James: Oh, no, no, no no. >> All right, so, let's do this. Let's hit the action items and see what we all think folks should do next. David Floyer, let me start with you. What's the action item out of this? >> The action item is horses for courses. The right horse for the right course, the right tools for the right job. Understand where things are stateless and where things are stateful, and use the appropriate tools, and, as Jim was just saying, make sure that there is integration of those tools into the current processes and procedures for coding. >> George Gilbert, action item. >> I would say that, building on that, start with pilots involving one or a couple of enterprise applications, but with less sort of branching, if-then type of logic built in. It could be hardwired-- >> So, simple flows? >> Simple flows, so that over time you can generalize that and play with how the RPA tools or low-code tools generalize their auto-generated code. >> Peter: Neil Raden, action item. >> My suggestion is that if you involve someone who's going to learn how to use these tools and develop an application or applications for you, make sure that you're dealing with someone who's going to be around for a while, because otherwise you're going to end up with a lot of orphan code that you can't maintain. We've certainly seen that before. >> David: That's great. >> Peter: Jim Kobielus, action item. >> Yeah, the action item is to approach low-code as tooling for the professional developer, not necessarily as a way to bring in untrained, non-traditional developers. Like Neil said, make sure that the low-code environment itself is there for the long haul, that it'll be managed and used by professional developers, and make sure that they are provided with a front-end visual workspace that helps them do their jobs most effectively, that is user-friendly for them to get stuff done in a hurry. And don't worry about bringing freelance, untrained developers into your organization, or somehow re-tasking your business analysts to become coders. That's probably not the best idea in the long run, for maintainability of the code, if nothing else. >> Certainly not in the intermediate term. Okay, so here's the action item. Here's our Wikibon Action Item.
As digital business progresses, it needs to be able to create digital assets that are predicated on valuable data faster, in a more flexible way, and with more business knowledge embedded and imbued directly in how the process works. A new class of tools is emerging that we think will actually allow this to happen more successfully. It combines mature knowledge in the application development world with new insights into how machine learning works and a new understanding of the impacts of automation on organization. We call these augmented programming tools, and we call them augmented programming because, in this case, the system takes on some degree of responsibility, on behalf of the business, for generating code, identifying patterns, and ultimately doing a better job of maintaining how applications get organized and run. While these technologies have potential power, we have to acknowledge that there's never going to be a one-size-fits-all. In fact, we believe very strongly that we're going to see a range of different tools emerge that will allow developers to take advantage of this approach, given their starting point, the artifacts that are available, and the characteristics of the applications that have to be built. One of the ones that we think is particularly important is robotic process automation, or RPA, which starts with the idea of being able to discover something about the way applications work by looking at how the application behaves onscreen, encapsulate that, and generalize it so that it can be used as a tool in future application development work. We also note that these application development technologies will not operate independent of other technology and organizational changes within the business. Specifically, on the technology side, we are encouraged that there's a continuing evolution of hardware technology that's going to take advantage of faster data access, utilizing solid-state disks, NVMe over Fabrics, and new types of system architectures that are much better suited for rapid shared data access. Additionally, we observe that there are new classes of technologies emerging that allow a data control plane to operate based on metadata characteristics, informed by application patterns, often through things like machine learning. One of the organizational issues that we think is really crucial is that folks should not presume that this is going to be a path for taking anybody in the business and turning them into an application developer. You still have to be able to think like an application developer and imagine how you turn a business process into something that looks like a program. But another group that we think has to be considered here is not just the DevOps people, although that's important; go down a level, to the good old DBAs, who have always suffered through new advances in tools that made the assumption that the data in a database is always available, and that they don't have to worry about transaction scaling or the way the database manager is set up. It would be unfortunate if the value of these tools from a collaboration standpoint, to work better with the business and to work better with the younger programmers, ended up being undermined because developers continue not to pay attention to how the underlying systems that currently control a lot of the data operate. Okay, once again, we really appreciate you participating.
Thank you, David Floyer and George Gilbert, and on the remote, Neil Raden and Jim Kobielus. We've been talking about augmented programming. This has been Wikibon Action Item. (upbeat music)
Peter Burris, Wikibon | Action Item Quick Take: Teradata, Feb 2018
(electronic pop music) >> Hi, I'm Peter Burris. Welcome to a Wikibon Action Item Quick Take. This week, Teradata announced some earnings and some changes. Neil Raden, what happened? >> A couple of years ago, and don't hold my feet to the fire, most people considered Teradata to be dying out, a company with great technology that just wasn't current with where things were going. They saw that, too, and they've done a tremendous job of reinventing themselves. The progress was evident in their fourth-quarter and full fiscal year numbers. They weren't spectacular, but they did beat everybody's estimates, which is a good thing. They also showed something like $250 million in subscription income, which was probably zero a year and a half ago. So that's a good thing. I think it's showing that they're making progress. They're not out of the woods yet, obviously, but I think that the program is a good program and the numbers are showing it. The other thing that I really, really like is that they elevated Oliver Ratzesberger to COO. So he's now basically in charge of pretty much everything, right? (laughs) He's going to take charge of the entire organization's sales, marketing, service, and so forth. He was in charge of product before this. Really good things have happened in terms of their technology with Oliver. I've known Oliver for a while; he was with eBay and did a great job there. I think he's going to stick around. Sales, products, services, and marketing under one team, that's a pretty tall order. But I think he's up to it, and I'm looking forward to 2018 and seeing how well they do. >> Excellent, Neil. So, Teradata is transitioning and finding people who can make it happen. This has been a Wikibon Action Item Quick Take. (electronic pop music)
2018-01-26 Wikibon Research Quick Take #1 with David Floyer
(mid-tempo electronic music) >> Hi, I'm Peter Burris. Once again, this is another Wikibon research quick take. I'm here with David Floyer. David, Amazon did something interesting this week. What is it, and what's the impact? >> Amazon, and by that I mean Amazon, not AWS, has put into place something that follows on from their warehouse automation. They now have a store which is completely automated. You walk in, you pick something off the shelf, and you walk out. They've done all of the automation: lots and lots of cameras everywhere, lots of sophisticated work. It's taken them more than four years of hard work on AI to get this done. The implication, I think, is that this is both exciting, and, for people who are not doing anything, something they should be really fearful about. This is an exciting time, and something that other people must get on with, which is automation of the business processes that are important to them. >> Retail or not, one of the things that we've observed very quickly is that the process of automating employee activities is slow. The process of automating, or providing automation for, customer activities is even slower. We're really talking about Amazon introducing technologies to provide the Amazon brand to the customer in an automated way. Big deal. >> Absolutely, big, big deal. >> All right, this has been a Wikibon research quick take with David Floyer. Thanks, David. (upbeat electronic music)