Image Title

Search Results for David foyer:

2018-01-26 Wikibon Research Quick Take #1 with David Foyer


 

(mid-tempo electronic music) >> Hi, I'm Peter Burris. And once again, this is another Wikibon research quick take. I'm here with David Floyer. David, Amazon did something interesting this week. What is it? What's the impact? >> Amazon, and I mean by that, Amazon, not AWS, have put into place, something following on from their data warehouse automation. They have now a store which is completely automated. You walk in, you pick something off the shelf, and you walk out. They've done all of the automation, lots and lots of cameras everywhere, lots of sophisticated work. It's taken them more than four years of hard work on AI, to get this done. The implication is if, I think this is both exciting and people who are not doing anything, they must be really fearful about this. This is an exciting time, and something that other people must get on with, which is automation of the business process that are important to them. >> Retail or not, one of the things, very quickly, that we've observed, is the process of automating employee activities is slow. The process of automating, or providing automation for customer activities is even slower. We're really talking about Amazon introducing technologies to provide the Amazon brand to the customer in an automated way. Big deal. >> Absolutely, big, big deal. >> All right, this has been a Wikibon quick take research with David Floyer. Thanks, David. (upbeat electronic music)

Published Date : Jan 26 2018

SUMMARY :

David, Amazon did something interesting this week. The implication is if, I think this is is the process of automating employee activities is slow. All right, this has been a Wikibon quick take

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
David FloyerPERSON

0.99+

DavidPERSON

0.99+

Peter BurrisPERSON

0.99+

AmazonORGANIZATION

0.99+

AWSORGANIZATION

0.99+

2018-01-26DATE

0.99+

David FoyerPERSON

0.99+

more than four yearsQUANTITY

0.99+

bothQUANTITY

0.97+

WikibonORGANIZATION

0.97+

this weekDATE

0.96+

#1QUANTITY

0.76+

lotsQUANTITY

0.46+

Breaking Analysis: Enterprise Technology Predictions 2023


 

(upbeat music beginning) >> From the Cube Studios in Palo Alto and Boston, bringing you data-driven insights from the Cube and ETR, this is "Breaking Analysis" with Dave Vellante. >> Making predictions about the future of enterprise tech is more challenging if you strive to lay down forecasts that are measurable. In other words, if you make a prediction, you should be able to look back a year later and say, with some degree of certainty, whether the prediction came true or not, with evidence to back that up. Hello and welcome to this week's Wikibon Cube Insights, powered by ETR. In this breaking analysis, we aim to do just that, with predictions about the macro IT spending environment, cost optimization, security, lots to talk about there, generative AI, cloud, and of course supercloud, blockchain adoption, data platforms, including commentary on Databricks, snowflake, and other key players, automation, events, and we may even have some bonus predictions around quantum computing, and perhaps some other areas. To make all this happen, we welcome back, for the third year in a row, my colleague and friend Eric Bradley from ETR. Eric, thanks for all you do for the community, and thanks for being part of this program. Again. >> I wouldn't miss it for the world. I always enjoy this one. Dave, good to see you. >> Yeah, so let me bring up this next slide and show you, actually come back to me if you would. I got to show the audience this. These are the inbounds that we got from PR firms starting in October around predictions. They know we do prediction posts. And so they'll send literally thousands and thousands of predictions from hundreds of experts in the industry, technologists, consultants, et cetera. And if you bring up the slide I can show you sort of the pattern that developed here. 40% of these thousands of predictions were from cyber. You had AI and data. If you combine those, it's still not close to cyber. Cost optimization was a big thing. Of course, cloud, some on DevOps, and software. Digital... Digital transformation got, you know, some lip service and SaaS. And then there was other, it's kind of around 2%. So quite remarkable, when you think about the focus on cyber, Eric. >> Yeah, there's two reasons why I think it makes sense, though. One, the cybersecurity companies have a lot of cash, so therefore the PR firms might be working a little bit harder for them than some of their other clients. (laughs) And then secondly, as you know, for multiple years now, when we do our macro survey, we ask, "What's your number one spending priority?" And again, it's security. It just isn't going anywhere. It just stays at the top. So I'm actually not that surprised by that little pie chart there, but I was shocked that SaaS was only 5%. You know, going back 10 years ago, that would've been the only thing anyone was talking about. >> Yeah. So true. All right, let's get into it. First prediction, we always start with kind of tech spending. Number one is tech spending increases between four and 5%. ETR has currently got it at 4.6% coming into 2023. This has been a consistently downward trend all year. We started, you know, much, much higher as we've been reporting. Bottom line is the fed is still in control. They're going to ease up on tightening, is the expectation, they're going to shoot for a soft landing. But you know, my feeling is this slingshot economy is going to continue, and it's going to continue to confound, whether it's supply chains or spending. The, the interesting thing about the ETR data, Eric, and I want you to comment on this, the largest companies are the most aggressive to cut. They're laying off, smaller firms are spending faster. They're actually growing at a much larger, faster rate as are companies in EMEA. And that's a surprise. That's outpacing the US and APAC. Chime in on this, Eric. >> Yeah, I was surprised on all of that. First on the higher level spending, we are definitely seeing it coming down, but the interesting thing here is headlines are making it worse. The huge research shop recently said 0% growth. We're coming in at 4.6%. And just so everyone knows, this is not us guessing, we asked 1,525 IT decision-makers what their budget growth will be, and they came in at 4.6%. Now there's a huge disparity, as you mentioned. The Fortune 500, global 2000, barely at 2% growth, but small, it's at 7%. So we're at a situation right now where the smaller companies are still playing a little bit of catch up on digital transformation, and they're spending money. The largest companies that have the most to lose from a recession are being more trepidatious, obviously. So they're playing a "Wait and see." And I hope we don't talk ourselves into a recession. Certainly the headlines and some of their research shops are helping it along. But another interesting comment here is, you know, energy and utilities used to be called an orphan and widow stock group, right? They are spending more than anyone, more than financials insurance, more than retail consumer. So right now it's being driven by mid, small, and energy and utilities. They're all spending like gangbusters, like nothing's happening. And it's the rest of everyone else that's being very cautious. >> Yeah, so very unpredictable right now. All right, let's go to number two. Cost optimization remains a major theme in 2023. We've been reporting on this. You've, we've shown a chart here. What's the primary method that your organization plans to use? You asked this question of those individuals that cited that they were going to reduce their spend and- >> Mhm. >> consolidating redundant vendors, you know, still leads the way, you know, far behind, cloud optimization is second, but it, but cloud continues to outpace legacy on-prem spending, no doubt. Somebody, it was, the guy's name was Alexander Feiglstorfer from Storyblok, sent in a prediction, said "All in one becomes extinct." Now, generally I would say I disagree with that because, you know, as we know over the years, suites tend to win out over, you know, individual, you know, point products. But I think what's going to happen is all in one is going to remain the norm for these larger companies that are cutting back. They want to consolidate redundant vendors, and the smaller companies are going to stick with that best of breed and be more aggressive and try to compete more effectively. What's your take on that? >> Yeah, I'm seeing much more consolidation in vendors, but also consolidation in functionality. We're seeing people building out new functionality, whether it's, we're going to talk about this later, so I don't want to steal too much of our thunder right now, but data and security also, we're seeing a functionality creep. So I think there's further consolidation happening here. I think niche solutions are going to be less likely, and platform solutions are going to be more likely in a spending environment where you want to reduce your vendors. You want to have one bill to pay, not 10. Another thing on this slide, real quick if I can before I move on, is we had a bunch of people write in and some of the answer options that aren't on this graph but did get cited a lot, unfortunately, is the obvious reduction in staff, hiring freezes, and delaying hardware, were three of the top write-ins. And another one was offshore outsourcing. So in addition to what we're seeing here, there were a lot of write-in options, and I just thought it would be important to state that, but essentially the cost optimization is by and far the highest one, and it's growing. So it's actually increased in our citations over the last year. >> And yeah, specifically consolidating redundant vendors. And so I actually thank you for bringing that other up, 'cause I had asked you, Eric, is there any evidence that repatriation is going on and we don't see it in the numbers, we don't see it even in the other, there was, I think very little or no mention of cloud repatriation, even though it might be happening in this in a smattering. >> Not a single mention, not one single mention. I went through it for you. Yep. Not one write-in. >> All right, let's move on. Number three, security leads M&A in 2023. Now you might say, "Oh, well that's a layup," but let me set this up Eric, because I didn't really do a great job with the slide. I hid the, what you've done, because you basically took, this is from the emerging technology survey with 1,181 responses from November. And what we did is we took Palo Alto and looked at the overlap in Palo Alto Networks accounts with these vendors that were showing on this chart. And Eric, I'm going to ask you to explain why we put a circle around OneTrust, but let me just set it up, and then have you comment on the slide and take, give us more detail. We're seeing private company valuations are off, you know, 10 to 40%. We saw a sneak, do a down round, but pretty good actually only down 12%. We've seen much higher down rounds. Palo Alto Networks we think is going to get busy. Again, they're an inquisitive company, they've been sort of quiet lately, and we think CrowdStrike, Cisco, Microsoft, Zscaler, we're predicting all of those will make some acquisitions and we're thinking that the targets are somewhere in this mess of security taxonomy. Other thing we're predicting AI meets cyber big time in 2023, we're going to probably going to see some acquisitions of those companies that are leaning into AI. We've seen some of that with Palo Alto. And then, you know, your comment to me, Eric, was "The RSA conference is going to be insane, hopping mad, "crazy this April," (Eric laughing) but give us your take on this data, and why the red circle around OneTrust? Take us back to that slide if you would, Alex. >> Sure. There's a few things here. First, let me explain what we're looking at. So because we separate the public companies and the private companies into two separate surveys, this allows us the ability to cross-reference that data. So what we're doing here is in our public survey, the tesis, everyone who cited some spending with Palo Alto, meaning they're a Palo Alto customer, we then cross-reference that with the private tech companies. Who also are they spending with? So what you're seeing here is an overlap. These companies that we have circled are doing the best in Palo Alto's accounts. Now, Palo Alto went and bought Twistlock a few years ago, which this data slide predicted, to be quite honest. And so I don't know if they necessarily are going to go after Snyk. Snyk, sorry. They already have something in that space. What they do need, however, is more on the authentication space. So I'm looking at OneTrust, with a 45% overlap in their overall net sentiment. That is a company that's already existing in their accounts and could be very synergistic to them. BeyondTrust as well, authentication identity. This is something that Palo needs to do to move more down that zero trust path. Now why did I pick Palo first? Because usually they're very inquisitive. They've been a little quiet lately. Secondly, if you look at the backdrop in the markets, the IPO freeze isn't going to last forever. Sooner or later, the IPO markets are going to open up, and some of these private companies are going to tap into public equity. In the meantime, however, cash funding on the private side is drying up. If they need another round, they're not going to get it, and they're certainly not going to get it at the valuations they were getting. So we're seeing valuations maybe come down where they're a touch more attractive, and Palo knows this isn't going to last forever. Cisco knows that, CrowdStrike, Zscaler, all these companies that are trying to make a push to become that vendor that you're consolidating in, around, they have a chance now, they have a window where they need to go make some acquisitions. And that's why I believe leading up to RSA, we're going to see some movement. I think it's going to pretty, a really exciting time in security right now. >> Awesome. Thank you. Great explanation. All right, let's go on the next one. Number four is, it relates to security. Let's stay there. Zero trust moves from hype to reality in 2023. Now again, you might say, "Oh yeah, that's a layup." A lot of these inbounds that we got are very, you know, kind of self-serving, but we always try to put some meat in the bone. So first thing we do is we pull out some commentary from, Eric, your roundtable, your insights roundtable. And we have a CISO from a global hospitality firm says, "For me that's the highest priority." He's talking about zero trust because it's the best ROI, it's the most forward-looking, and it enables a lot of the business transformation activities that we want to do. CISOs tell me that they actually can drive forward transformation projects that have zero trust, and because they can accelerate them, because they don't have to go through the hurdle of, you know, getting, making sure that it's secure. Second comment, zero trust closes that last mile where once you're authenticated, they open up the resource to you in a zero trust way. That's a CISO of a, and a managing director of a cyber risk services enterprise. Your thoughts on this? >> I can be here all day, so I'm going to try to be quick on this one. This is not a fluff piece on this one. There's a couple of other reasons this is happening. One, the board finally gets it. Zero trust at first was just a marketing hype term. Now the board understands it, and that's why CISOs are able to push through it. And what they finally did was redefine what it means. Zero trust simply means moving away from hardware security, moving towards software-defined security, with authentication as its base. The board finally gets that, and now they understand that this is necessary and it's being moved forward. The other reason it's happening now is hybrid work is here to stay. We weren't really sure at first, large companies were still trying to push people back to the office, and it's going to happen. The pendulum will swing back, but hybrid work's not going anywhere. By basically on our own data, we're seeing that 69% of companies expect remote and hybrid to be permanent, with only 30% permanent in office. Zero trust works for a hybrid environment. So all of that is the reason why this is happening right now. And going back to our previous prediction, this is why we're picking Palo, this is why we're picking Zscaler to make these acquisitions. Palo Alto needs to be better on the authentication side, and so does Zscaler. They're both fantastic on zero trust network access, but they need the authentication software defined aspect, and that's why we think this is going to happen. One last thing, in that CISO round table, I also had somebody say, "Listen, Zscaler is incredible. "They're doing incredibly well pervading the enterprise, "but their pricing's getting a little high," and they actually think Palo Alto is well-suited to start taking some of that share, if Palo can make one move. >> Yeah, Palo Alto's consolidation story is very strong. Here's my question and challenge. Do you and me, so I'm always hardcore about, okay, you've got to have evidence. I want to look back at these things a year from now and say, "Did we get it right? Yes or no?" If we got it wrong, we'll tell you we got it wrong. So how are we going to measure this? I'd say a couple things, and you can chime in. One is just the number of vendors talking about it. That's, but the marketing always leads the reality. So the second part of that is we got to get evidence from the buying community. Can you help us with that? >> (laughs) Luckily, that's what I do. I have a data company that asks thousands of IT decision-makers what they're adopting and what they're increasing spend on, as well as what they're decreasing spend on and what they're replacing. So I have snapshots in time over the last 11 years where I can go ahead and compare and contrast whether this adoption is happening or not. So come back to me in 12 months and I'll let you know. >> Now, you know, I will. Okay, let's bring up the next one. Number five, generative AI hits where the Metaverse missed. Of course everybody's talking about ChatGPT, we just wrote last week in a breaking analysis with John Furrier and Sarjeet Joha our take on that. We think 2023 does mark a pivot point as natural language processing really infiltrates enterprise tech just as Amazon turned the data center into an API. We think going forward, you're going to be interacting with technology through natural language, through English commands or other, you know, foreign language commands, and investors are lining up, all the VCs are getting excited about creating something competitive to ChatGPT, according to (indistinct) a hundred million dollars gets you a seat at the table, gets you into the game. (laughing) That's before you have to start doing promotion. But he thinks that's what it takes to actually create a clone or something equivalent. We've seen stuff from, you know, the head of Facebook's, you know, AI saying, "Oh, it's really not that sophisticated, ChatGPT, "it's kind of like IBM Watson, it's great engineering, "but you know, we've got more advanced technology." We know Google's working on some really interesting stuff. But here's the thing. ETR just launched this survey for the February survey. It's in the field now. We circle open AI in this category. They weren't even in the survey, Eric, last quarter. So 52% of the ETR survey respondents indicated a positive sentiment toward open AI. I added up all the sort of different bars, we could double click on that. And then I got this inbound from Scott Stevenson of Deep Graham. He said "AI is recession-proof." I don't know if that's the case, but it's a good quote. So bring this back up and take us through this. Explain this chart for us, if you would. >> First of all, I like Scott's quote better than the Facebook one. I think that's some sour grapes. Meta just spent an insane amount of money on the Metaverse and that's a dud. Microsoft just spent money on open AI and it is hot, undoubtedly hot. We've only been in the field with our current ETS survey for a week. So my caveat is it's preliminary data, but I don't care if it's preliminary data. (laughing) We're getting a sneak peek here at what is the number one net sentiment and mindshare leader in the entire machine-learning AI sector within a week. It's beating Data- >> 600. 600 in. >> It's beating Databricks. And we all know Databricks is a huge established enterprise company, not only in machine-learning AI, but it's in the top 10 in the entire survey. We have over 400 vendors in this survey. It's number eight overall, already. In a week. This is not hype. This is real. And I could go on the NLP stuff for a while. Not only here are we seeing it in open AI and machine-learning and AI, but we're seeing NLP in security. It's huge in email security. It's completely transforming that area. It's one of the reasons I thought Palo might take Abnormal out. They're doing such a great job with NLP in this email side, and also in the data prep tools. NLP is going to take out data prep tools. If we have time, I'll discuss that later. But yeah, this is, to me this is a no-brainer, and we're already seeing it in the data. >> Yeah, John Furrier called, you know, the ChatGPT introduction. He said it reminded him of the Netscape moment, when we all first saw Netscape Navigator and went, "Wow, it really could be transformative." All right, number six, the cloud expands to supercloud as edge computing accelerates and CloudFlare is a big winner in 2023. We've reported obviously on cloud, multi-cloud, supercloud and CloudFlare, basically saying what multi-cloud should have been. We pulled this quote from Atif Kahn, who is the founder and CTO of Alkira, thanks, one of the inbounds, thank you. "In 2023, highly distributed IT environments "will become more the norm "as organizations increasingly deploy hybrid cloud, "multi-cloud and edge settings..." Eric, from one of your round tables, "If my sources from edge computing are coming "from the cloud, that means I have my workloads "running in the cloud. "There is no one better than CloudFlare," That's a senior director of IT architecture at a huge financial firm. And then your analysis shows CloudFlare really growing in pervasion, that sort of market presence in the dataset, dramatically, to near 20%, leading, I think you had told me that they're even ahead of Google Cloud in terms of momentum right now. >> That was probably the biggest shock to me in our January 2023 tesis, which covers the public companies in the cloud computing sector. CloudFlare has now overtaken GCP in overall spending, and I was shocked by that. It's already extremely pervasive in networking, of course, for the edge networking side, and also in security. This is the number one leader in SaaSi, web access firewall, DDoS, bot protection, by your definition of supercloud, which we just did a couple of weeks ago, and I really enjoyed that by the way Dave, I think CloudFlare is the one that fits your definition best, because it's bringing all of these aspects together, and most importantly, it's cloud agnostic. It does not need to rely on Azure or AWS to do this. It has its own cloud. So I just think it's, when we look at your definition of supercloud, CloudFlare is the poster child. >> You know, what's interesting about that too, is a lot of people are poo-pooing CloudFlare, "Ah, it's, you know, really kind of not that sophisticated." "You don't have as many tools," but to your point, you're can have those tools in the cloud, Cloudflare's doing serverless on steroids, trying to keep things really simple, doing a phenomenal job at, you know, various locations around the world. And they're definitely one to watch. Somebody put them on my radar (laughing) a while ago and said, "Dave, you got to do a breaking analysis on CloudFlare." And so I want to thank that person. I can't really name them, 'cause they work inside of a giant hyperscaler. But- (Eric laughing) (Dave chuckling) >> Real quickly, if I can from a competitive perspective too, who else is there? They've already taken share from Akamai, and Fastly is their really only other direct comp, and they're not there. And these guys are in poll position and they're the only game in town right now. I just, I don't see it slowing down. >> I thought one of your comments from your roundtable I was reading, one of the folks said, you know, CloudFlare, if my workloads are in the cloud, they are, you know, dominant, they said not as strong with on-prem. And so Akamai is doing better there. I'm like, "Okay, where would you want to be?" (laughing) >> Yeah, which one of those two would you rather be? >> Right? Anyway, all right, let's move on. Number seven, blockchain continues to look for a home in the enterprise, but devs will slowly begin to adopt in 2023. You know, blockchains have got a lot of buzz, obviously crypto is, you know, the killer app for blockchain. Senior IT architect in financial services from your, one of your insight roundtables said quote, "For enterprises to adopt a new technology, "there have to be proven turnkey solutions. "My experience in talking with my peers are, "blockchain is still an open-source component "where you have to build around it." Now I want to thank Ravi Mayuram, who's the CTO of Couchbase sent in, you know, one of the predictions, he said, "DevOps will adopt blockchain, specifically Ethereum." And he referenced actually in his email to me, Solidity, which is the programming language for Ethereum, "will be in every DevOps pro's playbook, "mirroring the boom in machine-learning. "Newer programming languages like Solidity "will enter the toolkits of devs." His point there, you know, Solidity for those of you don't know, you know, Bitcoin is not programmable. Solidity, you know, came out and that was their whole shtick, and they've been improving that, and so forth. But it, Eric, it's true, it really hasn't found its home despite, you know, the potential for smart contracts. IBM's pushing it, VMware has had announcements, and others, really hasn't found its way in the enterprise yet. >> Yeah, and I got to be honest, I don't think it's going to, either. So when we did our top trends series, this was basically chosen as an anti-prediction, I would guess, that it just continues to not gain hold. And the reason why was that first comment, right? It's very much a niche solution that requires a ton of custom work around it. You can't just plug and play it. And at the end of the day, let's be very real what this technology is, it's a database ledger, and we already have database ledgers in the enterprise. So why is this a priority to move to a different database ledger? It's going to be very niche cases. I like the CTO comment from Couchbase about it being adopted by DevOps. I agree with that, but it has to be a DevOps in a very specific use case, and a very sophisticated use case in financial services, most likely. And that's not across the entire enterprise. So I just think it's still going to struggle to get its foothold for a little bit longer, if ever. >> Great, thanks. Okay, let's move on. Number eight, AWS Databricks, Google Snowflake lead the data charge with Microsoft. Keeping it simple. So let's unpack this a little bit. This is the shared accounts peer position for, I pulled data platforms in for analytics, machine-learning and AI and database. So I could grab all these accounts or these vendors and see how they compare in those three sectors. Analytics, machine-learning and database. Snowflake and Databricks, you know, they're on a crash course, as you and I have talked about. They're battling to be the single source of truth in analytics. They're, there's going to be a big focus. They're already started. It's going to be accelerated in 2023 on open formats. Iceberg, Python, you know, they're all the rage. We heard about Iceberg at Snowflake Summit, last summer or last June. Not a lot of people had heard of it, but of course the Databricks crowd, who knows it well. A lot of other open source tooling. There's a company called DBT Labs, which you're going to talk about in a minute. George Gilbert put them on our radar. We just had Tristan Handy, the CEO of DBT labs, on at supercloud last week. They are a new disruptor in data that's, they're essentially making, they're API-ifying, if you will, KPIs inside the data warehouse and dramatically simplifying that whole data pipeline. So really, you know, the ETL guys should be shaking in their boots with them. Coming back to the slide. Google really remains focused on BigQuery adoption. Customers have complained to me that they would like to use Snowflake with Google's AI tools, but they're being forced to go to BigQuery. I got to ask Google about that. AWS continues to stitch together its bespoke data stores, that's gone down that "Right tool for the right job" path. David Foyer two years ago said, "AWS absolutely is going to have to solve that problem." We saw them start to do it in, at Reinvent, bringing together NoETL between Aurora and Redshift, and really trying to simplify those worlds. There's going to be more of that. And then Microsoft, they're just making it cheap and easy to use their stuff, you know, despite some of the complaints that we hear in the community, you know, about things like Cosmos, but Eric, your take? >> Yeah, my concern here is that Snowflake and Databricks are fighting each other, and it's allowing AWS and Microsoft to kind of catch up against them, and I don't know if that's the right move for either of those two companies individually, Azure and AWS are building out functionality. Are they as good? No they're not. The other thing to remember too is that AWS and Azure get paid anyway, because both Databricks and Snowflake run on top of 'em. So (laughing) they're basically collecting their toll, while these two fight it out with each other, and they build out functionality. I think they need to stop focusing on each other, a little bit, and think about the overall strategy. Now for Databricks, we know they came out first as a machine-learning AI tool. They were known better for that spot, and now they're really trying to play catch-up on that data storage compute spot, and inversely for Snowflake, they were killing it with the compute separation from storage, and now they're trying to get into the MLAI spot. I actually wouldn't be surprised to see them make some sort of acquisition. Frank Slootman has been a little bit quiet, in my opinion there. The other thing to mention is your comment about DBT Labs. If we look at our emerging technology survey, last survey when this came out, DBT labs, number one leader in that data integration space, I'm going to just pull it up real quickly. It looks like they had a 33% overall net sentiment to lead data analytics integration. So they are clearly growing, it's fourth straight survey consecutively that they've grown. The other name we're seeing there a little bit is Cribl, but DBT labs is by far the number one player in this space. >> All right. Okay, cool. Moving on, let's go to number nine. With Automation mixer resurgence in 2023, we're showing again data. The x axis is overlap or presence in the dataset, and the vertical axis is shared net score. Net score is a measure of spending momentum. As always, you've seen UI path and Microsoft Power Automate up until the right, that red line, that 40% line is generally considered elevated. UI path is really separating, creating some distance from Automation Anywhere, they, you know, previous quarters they were much closer. Microsoft Power Automate came on the scene in a big way, they loom large with this "Good enough" approach. I will say this, I, somebody sent me a results of a (indistinct) survey, which showed UiPath actually had more mentions than Power Automate, which was surprising, but I think that's not been the case in the ETR data set. We're definitely seeing a shift from back office to front soft office kind of workloads. Having said that, software testing is emerging as a mainstream use case, we're seeing ML and AI become embedded in end-to-end automations, and low-code is serving the line of business. And so this, we think, is going to increasingly have appeal to organizations in the coming year, who want to automate as much as possible and not necessarily, we've seen a lot of layoffs in tech, and people... You're going to have to fill the gaps with automation. That's a trend that's going to continue. >> Yep, agreed. At first that comment about Microsoft Power Automate having less citations than UiPath, that's shocking to me. I'm looking at my chart right here where Microsoft Power Automate was cited by over 60% of our entire survey takers, and UiPath at around 38%. Now don't get me wrong, 38% pervasion's fantastic, but you know you're not going to beat an entrenched Microsoft. So I don't really know where that comment came from. So UiPath, looking at it alone, it's doing incredibly well. It had a huge rebound in its net score this last survey. It had dropped going through the back half of 2022, but we saw a big spike in the last one. So it's got a net score of over 55%. A lot of people citing adoption and increasing. So that's really what you want to see for a name like this. The problem is that just Microsoft is doing its playbook. At the end of the day, I'm going to do a POC, why am I going to pay more for UiPath, or even take on another separate bill, when we know everyone's consolidating vendors, if my license already includes Microsoft Power Automate? It might not be perfect, it might not be as good, but what I'm hearing all the time is it's good enough, and I really don't want another invoice. >> Right. So how does UiPath, you know, and Automation Anywhere, how do they compete with that? Well, the way they compete with it is they got to have a better product. They got a product that's 10 times better. You know, they- >> Right. >> they're not going to compete based on where the lowest cost, Microsoft's got that locked up, or where the easiest to, you know, Microsoft basically give it away for free, and that's their playbook. So that's, you know, up to UiPath. UiPath brought on Rob Ensslin, I've interviewed him. Very, very capable individual, is now Co-CEO. So he's kind of bringing that adult supervision in, and really tightening up the go to market. So, you know, we know this company has been a rocket ship, and so getting some control on that and really getting focused like a laser, you know, could be good things ahead there for that company. Okay. >> One of the problems, if I could real quick Dave, is what the use cases are. When we first came out with RPA, everyone was super excited about like, "No, UiPath is going to be great for super powerful "projects, use cases." That's not what RPA is being used for. As you mentioned, it's being used for mundane tasks, so it's not automating complex things, which I think UiPath was built for. So if you were going to get UiPath, and choose that over Microsoft, it's going to be 'cause you're doing it for more powerful use case, where it is better. But the problem is that's not where the enterprise is using it. The enterprise are using this for base rote tasks, and simply, Microsoft Power Automate can do that. >> Yeah, it's interesting. I've had people on theCube that are both Microsoft Power Automate customers and UiPath customers, and I've asked them, "Well you know, "how do you differentiate between the two?" And they've said to me, "Look, our users and personal productivity users, "they like Power Automate, "they can use it themselves, and you know, "it doesn't take a lot of, you know, support on our end." The flip side is you could do that with UiPath, but like you said, there's more of a focus now on end-to-end enterprise automation and building out those capabilities. So it's increasingly a value play, and that's going to be obviously the challenge going forward. Okay, my last one, and then I think you've got some bonus ones. Number 10, hybrid events are the new category. Look it, if I can get a thousand inbounds that are largely self-serving, I can do my own here, 'cause we're in the events business. (Eric chuckling) Here's the prediction though, and this is a trend we're seeing, the number of physical events is going to dramatically increase. That might surprise people, but most of the big giant events are going to get smaller. The exception is AWS with Reinvent, I think Snowflake's going to continue to grow. So there are examples of physical events that are growing, but generally, most of the big ones are getting smaller, and there's going to be many more smaller intimate regional events and road shows. These micro-events, they're going to be stitched together. Digital is becoming a first class citizen, so people really got to get their digital acts together, and brands are prioritizing earned media, and they're beginning to build their own news networks, going direct to their customers. And so that's a trend we see, and I, you know, we're right in the middle of it, Eric, so you know we're going to, you mentioned RSA, I think that's perhaps going to be one of those crazy ones that continues to grow. It's shrunk, and then it, you know, 'cause last year- >> Yeah, it did shrink. >> right, it was the last one before the pandemic, and then they sort of made another run at it last year. It was smaller but it was very vibrant, and I think this year's going to be huge. Global World Congress is another one, we're going to be there end of Feb. That's obviously a big big show, but in general, the brands and the technology vendors, even Oracle is going to scale down. I don't know about Salesforce. We'll see. You had a couple of bonus predictions. Quantum and maybe some others? Bring us home. >> Yeah, sure. I got a few more. I think we touched upon one, but I definitely think the data prep tools are facing extinction, unfortunately, you know, the Talons Informatica is some of those names. The problem there is that the BI tools are kind of including data prep into it already. You know, an example of that is Tableau Prep Builder, and then in addition, Advanced NLP is being worked in as well. ThoughtSpot, Intelius, both often say that as their selling point, Tableau has Ask Data, Click has Insight Bot, so you don't have to really be intelligent on data prep anymore. A regular business user can just self-query, using either the search bar, or even just speaking into what it needs, and these tools are kind of doing the data prep for it. I don't think that's a, you know, an out in left field type of prediction, but it's the time is nigh. The other one I would also state is that I think knowledge graphs are going to break through this year. Neo4j in our survey is growing in pervasion in Mindshare. So more and more people are citing it, AWS Neptune's getting its act together, and we're seeing that spending intentions are growing there. Tiger Graph is also growing in our survey sample. I just think that the time is now for knowledge graphs to break through, and if I had to do one more, I'd say real-time streaming analytics moves from the very, very rich big enterprises to downstream, to more people are actually going to be moving towards real-time streaming, again, because the data prep tools and the data pipelines have gotten easier to use, and I think the ROI on real-time streaming is obviously there. So those are three that didn't make the cut, but I thought deserved an honorable mention. >> Yeah, I'm glad you did. Several weeks ago, we did an analyst prediction roundtable, if you will, a cube session power panel with a number of data analysts and that, you know, streaming, real-time streaming was top of mind. So glad you brought that up. Eric, as always, thank you very much. I appreciate the time you put in beforehand. I know it's been crazy, because you guys are wrapping up, you know, the last quarter survey in- >> Been a nuts three weeks for us. (laughing) >> job. I love the fact that you're doing, you know, the ETS survey now, I think it's quarterly now, right? Is that right? >> Yep. >> Yep. So that's phenomenal. >> Four times a year. I'll be happy to jump on with you when we get that done. I know you were really impressed with that last time. >> It's unbelievable. This is so much data at ETR. Okay. Hey, that's a wrap. Thanks again. >> Take care Dave. Good seeing you. >> All right, many thanks to our team here, Alex Myerson as production, he manages the podcast force. Ken Schiffman as well is a critical component of our East Coast studio. Kristen Martin and Cheryl Knight help get the word out on social media and in our newsletters. And Rob Hoof is our editor-in-chief. He's at siliconangle.com. He's just a great editing for us. Thank you all. Remember all these episodes that are available as podcasts, wherever you listen, podcast is doing great. Just search "Breaking analysis podcast." Really appreciate you guys listening. I publish each week on wikibon.com and siliconangle.com, or you can email me directly if you want to get in touch, david.vellante@siliconangle.com. That's how I got all these. I really appreciate it. I went through every single one with a yellow highlighter. It took some time, (laughing) but I appreciate it. You could DM me at dvellante, or comment on our LinkedIn post and please check out etr.ai. Its data is amazing. Best survey data in the enterprise tech business. This is Dave Vellante for theCube Insights, powered by ETR. Thanks for watching, and we'll see you next time on "Breaking Analysis." (upbeat music beginning) (upbeat music ending)

Published Date : Jan 29 2023

SUMMARY :

insights from the Cube and ETR, do for the community, Dave, good to see you. actually come back to me if you would. It just stays at the top. the most aggressive to cut. that have the most to lose What's the primary method still leads the way, you know, So in addition to what we're seeing here, And so I actually thank you I went through it for you. I'm going to ask you to explain and they're certainly not going to get it to you in a zero trust way. So all of that is the One is just the number of So come back to me in 12 So 52% of the ETR survey amount of money on the Metaverse and also in the data prep tools. the cloud expands to the biggest shock to me "Ah, it's, you know, really and Fastly is their really the folks said, you know, for a home in the enterprise, Yeah, and I got to be honest, in the community, you know, and I don't know if that's the right move and the vertical axis is shared net score. So that's really what you want Well, the way they compete So that's, you know, One of the problems, if and that's going to be obviously even Oracle is going to scale down. and the data pipelines and that, you know, Been a nuts three I love the fact I know you were really is so much data at ETR. and we'll see you next time

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Alex MyersonPERSON

0.99+

EricPERSON

0.99+

Eric BradleyPERSON

0.99+

CiscoORGANIZATION

0.99+

MicrosoftORGANIZATION

0.99+

Rob HoofPERSON

0.99+

AmazonORGANIZATION

0.99+

OracleORGANIZATION

0.99+

Dave VellantePERSON

0.99+

10QUANTITY

0.99+

Ravi MayuramPERSON

0.99+

Cheryl KnightPERSON

0.99+

George GilbertPERSON

0.99+

Ken SchiffmanPERSON

0.99+

AWSORGANIZATION

0.99+

Tristan HandyPERSON

0.99+

DavePERSON

0.99+

Atif KahnPERSON

0.99+

NovemberDATE

0.99+

Frank SlootmanPERSON

0.99+

APACORGANIZATION

0.99+

ZscalerORGANIZATION

0.99+

PaloORGANIZATION

0.99+

David FoyerPERSON

0.99+

FebruaryDATE

0.99+

January 2023DATE

0.99+

DBT LabsORGANIZATION

0.99+

OctoberDATE

0.99+

Rob EnsslinPERSON

0.99+

Scott StevensonPERSON

0.99+

John FurrierPERSON

0.99+

69%QUANTITY

0.99+

GoogleORGANIZATION

0.99+

CrowdStrikeORGANIZATION

0.99+

4.6%QUANTITY

0.99+

10 timesQUANTITY

0.99+

2023DATE

0.99+

ScottPERSON

0.99+

1,181 responsesQUANTITY

0.99+

Palo AltoORGANIZATION

0.99+

third yearQUANTITY

0.99+

BostonLOCATION

0.99+

AlexPERSON

0.99+

thousandsQUANTITY

0.99+

OneTrustORGANIZATION

0.99+

45%QUANTITY

0.99+

33%QUANTITY

0.99+

DatabricksORGANIZATION

0.99+

two reasonsQUANTITY

0.99+

Palo AltoLOCATION

0.99+

last yearDATE

0.99+

BeyondTrustORGANIZATION

0.99+

7%QUANTITY

0.99+

IBMORGANIZATION

0.99+

SiliconANGLE Report: Reporters Notebook with Adrian Cockcroft | AWS re:Invent 2022


 

(soft techno upbeat music) >> Hi there. Welcome back to Las Vegas. This is Dave Villante with Paul Gillon. Reinvent day one and a half. We started last night, Monday, theCUBE after dark. Now we're going wall to wall. Today. Today was of course the big keynote, Adam Selipsky, kind of the baton now handing, you know, last year when he did his keynote, he was very new. He was sort of still getting his feet wet and finding his guru swing. Settling in a little bit more this year, learning a lot more, getting deeper into the tech, but of course, sharing the love with other leaders like Peter DeSantis. Tomorrow's going to be Swamy in the keynote. Adrian Cockcroft is here. Former AWS, former network Netflix CTO, currently an analyst. You got your own firm now. You're out there. Great to see you again. Thanks for coming on theCUBE. >> Yeah, thanks. >> We heard you on at Super Cloud, you gave some really good insights there back in August. So now as an outsider, you come in obviously, you got to be impressed with the size and the ecosystem and the energy. Of course. What were your thoughts on, you know what you've seen so far, today's keynotes, last night Peter DeSantis, what stood out to you? >> Yeah, I think it's great to be back at Reinvent again. We're kind of pretty much back to where we were before the pandemic sort of shut it down. This is a little, it's almost as big as the, the largest one that we had before. And everyone's turned up. It just feels like we're back. So that's really good to see. And it's a slightly different style. I think there were was more sort of video production things happening. I think in this keynote, more storytelling. I'm not sure it really all stitched together very well. Right. Some of the stories like, how does that follow that? So there were a few things there and some of there were spelling mistakes on the slides, you know that ELT instead of ETL and they spelled ZFS wrong and something. So it just seemed like there was, I'm not quite sure just maybe a few things were sort of rushed at the last minute. >> Not really AWS like, was it? It's kind of remind the Patriots Paul, you know Bill Belichick's teams are fumbling all over the place. >> That's right. That's right. >> Part of it may be, I mean the sort of the market. They have a leader in marketing right now but they're going to have a CMO. So that's sort of maybe as lack of a single threaded leader for this thing. Everything's being shared around a bit more. So maybe, I mean, it's all fixable and it's mine. This is minor stuff. I'm just sort of looking at it and going there's a few things that looked like they were not quite as good as they could have been in the way it was put together. Right? >> But I mean, you're taking a, you know a year of not doing Reinvent. Yeah. Being isolated. You know, we've certainly seen it with theCUBE. It's like, okay, it's not like riding a bike. You know, things that, you know you got to kind of relearn the muscle memories. It's more like golf than is bicycle riding. >> Well I've done AWS keynotes myself. And they are pretty much scrambled. It looks nice, but there's a lot of scrambling leading up to when it actually goes. Right? And sometimes you can, you sometimes see a little kind of the edges of that, and sometimes it's much more polished. But you know, overall it's pretty good. I think Peter DeSantis keynote yesterday was a lot of really good meat there. There was some nice presentations, and some great announcements there. And today I was, I thought I was a little disappointed with some of the, I thought they could have been more. I think the way Andy Jesse did it, he crammed more announcements into his keynote, and Adam seems to be taking sort of a bit more of a measured approach. There were a few things he picked up on and then I'm expecting more to be spread throughout the rest of the day. >> This was more poetic. Right? He took the universe as the analogy for data, the ocean for security. Right? The Antarctic was sort of. >> Yeah. It looked pretty, >> yeah. >> But I'm not sure that was like, we're not here really to watch nature videos >> As analysts and journalists, You're like, come on. >> Yeah, >> Give it the meat >> That was kind the thing, yeah, >> It has always been the AWS has always been Reinvent has always been a shock at our approach. 100, 150 announcements. And they're really, that kind of pressure seems to be off them now. Their position at the top of the market seems to be unshakeable. There's no clear competition that's creeping up behind them. So how does that affect the messaging you think that AWS brings to market when it doesn't really have to prove that it's a leader anymore? It can go after maybe more of the niche markets or fix the stuff that's a little broken more fine tuning than grandiose statements. >> I think so AWS for a long time was so far out that they basically said, "We don't think about the competition, we are listen to the customers." And that was always the statement that works as long as you're always in the lead, right? Because you are introducing the new idea to the customer. Nobody else got there first. So that was the case. But in a few areas they aren't leading. Right? You could argue in machine learning, not necessarily leading in sustainability. They're not leading and they don't want to talk about some of these areas and-- >> Database. I mean arguably, >> They're pretty strong there, but the areas when you are behind, it's like they kind of know how to play offense. But when you're playing defense, it's a different set of game. You're playing a different game and it's hard to be good at both. I think and I'm not sure that they're really used to following somebody into a market and making a success of that. So there's something, it's a little harder. Do you see what I mean? >> I get opinion on this. So when I say database, David Foyer was two years ago, predicted AWS is going to have to converge somehow. They have no choice. And they sort of touched on that today, right? Eliminating ETL, that's one thing. But Aurora to Redshift. >> Yeah. >> You know, end to end. I'm not sure it's totally, they're fully end to end >> That's a really good, that is an excellent piece of work, because there's a lot of work that it eliminates. There's are clear pain points, but then you've got sort of the competing thing, is like the MongoDB and it's like, it's just a way with one database keeps it simple. >> Snowflake, >> Or you've got on Snowflake maybe you've got all these 20 different things you're trying to integrate at AWS, but it's kind of like you have a bag of Lego bricks. It's my favorite analogy, right? You want a toy for Christmas, you want a toy formula one racing car since that seems to be the theme, right? >> Okay. Do you want the fully built model that you can play with right now? Or do you want the Lego version that you have to spend three days building. Right? And AWS is the Lego technique thing. You have to spend some time building it, but once you've built it, you can evolve it, and you'll still be playing those are still good bricks years later. Whereas that prebuilt to probably broken gathering dust, right? So there's something about having an vulnerable architecture which is harder to get into, but more durable in the long term. And so AWS tends to play the long game in many ways. And that's one of the elements that they do that and that's good, but it makes it hard to consume for enterprise buyers that are used to getting it with a bow on top. And here's the solution. You know? >> And Paul, that was always Andy Chassy's answer to when we would ask him, you know, all these primitives you're going to make it simpler. You see the primitives give us the advantage to turn on a dime in the marketplace. And that's true. >> Yeah. So you're saying, you know, you take all these things together and you wrap it up, and you put a snowflake on top, and now you've got a simple thing or a Mongo or Mongo atlas or whatever. So you've got these layered platforms now which are making it simpler to consume, but now you're kind of, you know, you're all stuck in that ecosystem, you know, so it's like what layer of abstractions do you want to tie yourself to, right? >> The data bricks coming at it from more of an open source approach. But it's similar. >> We're seeing Amazon direct more into vertical markets. They spotlighted what Goldman Sachs is doing on their platform. They've got a variety of platforms that are supposedly targeted custom built for vertical markets. How do successful do you see that play being? Is this something that the customers you think are looking for, a fully integrated Amazon solution? >> I think so. There's usually if you look at, you know the MongoDB or data stacks, or the other sort of or elastic, you know, they've got the specific solution with the people that really are developing the core technology, there's open source equivalent version. The AWS is running, and it's usually maybe they've got a price advantage or it's, you know there's some data integration in there or it's somehow easier to integrate but it's not stopping those companies from growing. And what it's doing is it's endorsing that platform. So if you look at the collection of databases that have been around over the last few years, now you've got basically Elastic Mongo and Cassandra, you know the data stacks as being endorsed by the cloud vendors. These are winners. They're going to be around for a very long time. You can build yourself on that architecture. But what happened to Couch base and you know, a few of the other ones, you know, they don't really fit. Like how you going to bait? If you are now becoming an also ran, because you didn't get cloned by the cloud vendor. So the customers are going is that a safe place to be, right? >> But isn't it, don't they want to encourage those partners though in the name of building the marketplace ecosystem? >> Yeah. >> This is huge. >> But certainly the platform, yeah, the platform encourages people to do more. And there's always room around the edge. But the mainstream customers like that really like spending the good money, are looking for something that's got a long term life to it. Right? They're looking for a long commitment to that technology and that it's going to be invested in and grow. And the fact that the cloud providers are adopting and particularly AWS is adopting some of these technologies means that is a very long term commitment. You can base, you know, you can bet your future architecture on that for a decade probably. >> So they have to pick winners. >> Yeah. So it's sort of picking winners. And then if you're the open source company that's now got AWS turning up, you have to then leverage it and use that as a way to grow the market. And I think Mongo have done an excellent job of that. I mean, they're top level sponsors of Reinvent, and they're out there messaging that and doing a good job of showing people how to layer on top of AWS and make it a win-win both sides. >> So ever since we've been in the business, you hear the narrative hardware's going to die. It's just, you know, it's commodity and there's some truth to that. But hardware's actually driving good gross margins for the Cisco's of the world. Storage companies have always made good margins. Servers maybe not so much, 'cause Intel sucked all the margin out of it. But let's face it, AWS makes most of its money. We know on compute, it's got 25 plus percent operating margins depending on the seasonality there. What do you think happens long term to the infrastructure layer discussion? Okay, commodity cloud, you know, we talk about super cloud. Do you think that AWS, and the other cloud vendors that infrastructure, IS gets commoditized and they have to go up market or you see that continuing I mean history would say that still good margins in hardware. What are your thoughts on that? >> It's not commoditizing, it's becoming more specific. We've got all these accelerators and custom chips now, and this is something, this almost goes back. I mean, I was with some micro systems 20,30 years ago and we developed our own chips and HP developed their own chips and SGI mips, right? We were like, the architectures were all squabbling of who had the best processor chips and it took years to get chips that worked. Now if you make a chip and it doesn't work immediately, you screwed up somewhere right? It's become the technology of building these immensely complicated powerful chips that has become commoditized. So the cost of building a custom chip, is now getting to the point where Apple and Amazon, your Apple laptop has got full custom chips your phone, your iPhone, whatever and you're getting Google making custom chips and we've got Nvidia now getting into CPUs as well as GPUs. So we're seeing that the ability to build a custom chip, is becoming something that everyone is leveraging. And the cost of doing that is coming down to startups are doing it. So we're going to see many, many more, much more innovation I think, and this is like Intel and AMD are, you know they've got the compatibility legacy, but of the most powerful, most interesting new things I think are going to be custom. And we're seeing that with Graviton three particular in the three E that was announced last night with like 30, 40% whatever it was, more performance for HPC workloads. And that's, you know, the HPC market is going to have to deal with cloud. I mean they are starting to, and I was at Supercomputing a few weeks ago and they are tiptoeing around the edge of cloud, but those supercomputers are water cold. They are monsters. I mean you go around supercomputing, there are plumbing vendors on the booth. >> Of course. Yeah. >> Right? And they're highly concentrated systems, and that's really the only difference, is like, is it water cooler or echo? The rest of the technology stack is pretty much off the shelf stuff with a few tweets software. >> You point about, you know, the chips and what AWS is doing. The Annapurna acquisition. >> Yeah. >> They're on a dramatically different curve now. I think it comes down to, again, David Floyd's premise, really comes down to volume. The arm wafer volumes are 10 x those of X 86, volume always wins. And the economics of semis. >> That kind of got us there. But now there's also a risk five coming along if you, in terms of licensing is becoming one of the bottlenecks. Like if the cost of building a chip is really low, then it comes down to licensing costs and do you want to pay the arm license And the risk five is an open source chip set which some people are starting to use for things. So your dis controller may have a risk five in it, for example, nowadays, those kinds of things. So I think that's kind of the the dynamic that's playing out. There's a lot of innovation in hardware to come in the next few years. There's a thing called CXL compute express link which is going to be really interesting. I think that's probably two years out, before we start seeing it for real. But it lets you put glue together entire rack in a very flexible way. So just, and that's the entire industry coming together around a single standard, the whole industry except for Amazon, in fact just about. >> Well, but maybe I think eventually they'll get there. Don't use system on a chip CXL. >> I have no idea whether I have no knowledge about whether going to do anything CXL. >> Presuming I'm not trying to tap anything confidential. It just makes sense that they would do a system on chip. It makes sense that they would do something like CXL. Why not adopt the standard, if it's going to be as the cost. >> Yeah. And so that was one of the things out of zip computing. The other thing is the low latency networking with the elastic fabric adapter EFA and the extensions to that that were announced last night. They doubled the throughput. So you get twice the capacity on the nitro chip. And then the other thing was this, this is a bit technical, but this scalable datagram protocol that they've got which basically says, if I want to send a message, a packet from one machine to another machine, instead of sending it over one wire, I consider it over 16 wires in parallel. And I will just flood the network with all the packets and they can arrive in any order. This is why it isn't done normally. TCP is in order, the packets come in order they're supposed to, but this is fully flooding them around with its own fast retry and then they get reassembled at the other end. So they're not just using this now for HPC workloads. They've turned it on for TCP for just without any change to your application. If you are trying to move a large piece of data between two machines, and you're just pushing it down a network, a single connection, it takes it from five gigabits per second to 25 gigabits per second. A five x speed up, with a protocol tweak that's run by the Nitro, this is super interesting. >> Probably want to get all that AIML that stuff is going on. >> Well, the AIML stuff is leveraging it underneath, but this is for everybody. Like you're just copying data around, right? And you're limited, "Hey this is going to get there five times faster, pushing a big enough chunk of data around." So this is turning on gradually as the nitro five comes out, and you have to enable it at the instance level. But it's a super interesting announcement from last night. >> So the bottom line bumper sticker on commoditization is what? >> I don't think so. I mean what's the APIs? Your arm compatible, your Intel X 86 compatible or your maybe risk five one day compatible in the cloud. And those are the APIs, right? That's the commodity level. And the software is now, the software ecosystem is super portable across those as we're seeing with Apple moving from Intel to it's really not an issue, right? The software and the tooling is all there to do that. But underneath that, we're going to see an arms race between the top providers as they all try and develop faster chips for doing more specific things. We've got cranium for training, that instance has they announced it last year with 800 gigabits going out of a single instance, 800 gigabits or no, but this year they doubled it. Yeah. So 1.6 terabytes out of a single machine, right? That's insane, right? But what you're doing is you're putting together hundreds or thousands of those to solve the big machine learning training problems. These super, these enormous clusters that they're being formed for doing these massive problems. And there is a market now, for these incredibly large supercomputer clusters built for doing AI. That's all bandwidth limited. >> And you think about the timeframe from design to tape out. >> Yeah. >> Is just getting compressed It's relative. >> It is. >> Six is going the other way >> The tooling is all there. Yeah. >> Fantastic. Adrian, always a pleasure to have you on. Thanks so much. >> Yeah. >> Really appreciate it. >> Yeah, thank you. >> Thank you Paul. >> Cheers. All right. Keep it right there everybody. Don't forget, go to thecube.net, you'll see all these videos. Go to siliconangle.com, We've got features with Adam Selipsky, we got my breaking analysis, we have another feature with MongoDB's, Dev Ittycheria, Ali Ghodsi, as well Frank Sluman tomorrow. So check that out. Keep it right there. You're watching theCUBE, the leader in enterprise and emerging tech, right back. (soft techno upbeat music)

Published Date : Nov 30 2022

SUMMARY :

Great to see you again. and the ecosystem and the energy. Some of the stories like, It's kind of remind the That's right. I mean the sort of the market. the muscle memories. kind of the edges of that, the analogy for data, As analysts and journalists, So how does that affect the messaging always in the lead, right? I mean arguably, and it's hard to be good at both. But Aurora to Redshift. You know, end to end. of the competing thing, but it's kind of like you And AWS is the Lego technique thing. to when we would ask him, you know, and you put a snowflake on top, from more of an open source approach. the customers you think a few of the other ones, you know, and that it's going to and doing a good job of showing people and the other cloud vendors the HPC market is going to Yeah. and that's really the only difference, the chips and what AWS is doing. And the economics of semis. So just, and that's the entire industry Well, but maybe I think I have no idea whether if it's going to be as the cost. and the extensions to that AIML that stuff is going on. and you have to enable And the software is now, And you think about the timeframe Is just getting compressed Yeah. Adrian, always a pleasure to have you on. the leader in enterprise

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Adam SelipskyPERSON

0.99+

David FloydPERSON

0.99+

Peter DeSantisPERSON

0.99+

PaulPERSON

0.99+

Ali GhodsiPERSON

0.99+

Adrian CockcroftPERSON

0.99+

AWSORGANIZATION

0.99+

Frank SlumanPERSON

0.99+

Paul GillonPERSON

0.99+

AmazonORGANIZATION

0.99+

AppleORGANIZATION

0.99+

Andy ChassyPERSON

0.99+

Las VegasLOCATION

0.99+

AdamPERSON

0.99+

Dev IttycheriaPERSON

0.99+

Andy JessePERSON

0.99+

Dave VillantePERSON

0.99+

AugustDATE

0.99+

two machinesQUANTITY

0.99+

Bill BelichickPERSON

0.99+

10QUANTITY

0.99+

CiscoORGANIZATION

0.99+

todayDATE

0.99+

last yearDATE

0.99+

1.6 terabytesQUANTITY

0.99+

AMDORGANIZATION

0.99+

Goldman SachsORGANIZATION

0.99+

hundredsQUANTITY

0.99+

one machineQUANTITY

0.99+

three daysQUANTITY

0.99+

AdrianPERSON

0.99+

800 gigabitsQUANTITY

0.99+

TodayDATE

0.99+

iPhoneCOMMERCIAL_ITEM

0.99+

David FoyerPERSON

0.99+

two yearsQUANTITY

0.99+

GoogleORGANIZATION

0.99+

yesterdayDATE

0.99+

this yearDATE

0.99+

SnowflakeTITLE

0.99+

NvidiaORGANIZATION

0.99+

five timesQUANTITY

0.99+

oneQUANTITY

0.99+

NetflixORGANIZATION

0.99+

thecube.netOTHER

0.99+

IntelORGANIZATION

0.99+

fiveQUANTITY

0.99+

both sidesQUANTITY

0.99+

MongoORGANIZATION

0.99+

ChristmasEVENT

0.99+

last nightDATE

0.99+

HPORGANIZATION

0.98+

25 plus percentQUANTITY

0.98+

thousandsQUANTITY

0.98+

20,30 years agoDATE

0.98+

pandemicEVENT

0.98+

bothQUANTITY

0.98+

two years agoDATE

0.98+

twiceQUANTITY

0.98+

tomorrowDATE

0.98+

X 86COMMERCIAL_ITEM

0.98+

AntarcticLOCATION

0.98+

PatriotsORGANIZATION

0.98+

siliconangle.comOTHER

0.97+

The Truth About MySQL HeatWave


 

>>When Oracle acquired my SQL via the Sun acquisition, nobody really thought the company would put much effort into the platform preferring to focus all the wood behind its leading Oracle database, Arrow pun intended. But two years ago, Oracle surprised many folks by announcing my SQL Heatwave a new database as a service with a massively parallel hybrid Columbia in Mary Mary architecture that brings together transactional and analytic data in a single platform. Welcome to our latest database, power panel on the cube. My name is Dave Ante, and today we're gonna discuss Oracle's MySQL Heat Wave with a who's who of cloud database industry analysts. Holgar Mueller is with Constellation Research. Mark Stammer is the Dragon Slayer and Wikibon contributor. And Ron Westfall is with Fu Chim Research. Gentlemen, welcome back to the Cube. Always a pleasure to have you on. Thanks for having us. Great to be here. >>So we've had a number of of deep dive interviews on the Cube with Nip and Aggarwal. You guys know him? He's a senior vice president of MySQL, Heatwave Development at Oracle. I think you just saw him at Oracle Cloud World and he's come on to describe this is gonna, I'll call it a shock and awe feature additions to to heatwave. You know, the company's clearly putting r and d into the platform and I think at at cloud world we saw like the fifth major release since 2020 when they first announced MySQL heat wave. So just listing a few, they, they got, they taken, brought in analytics machine learning, they got autopilot for machine learning, which is automation onto the basic o l TP functionality of the database. And it's been interesting to watch Oracle's converge database strategy. We've contrasted that amongst ourselves. Love to get your thoughts on Amazon's get the right tool for the right job approach. >>Are they gonna have to change that? You know, Amazon's got the specialized databases, it's just, you know, the both companies are doing well. It just shows there are a lot of ways to, to skin a cat cuz you see some traction in the market in, in both approaches. So today we're gonna focus on the latest heat wave announcements and we're gonna talk about multi-cloud with a native MySQL heat wave implementation, which is available on aws MySQL heat wave for Azure via the Oracle Microsoft interconnect. This kind of cool hybrid action that they got going. Sometimes we call it super cloud. And then we're gonna dive into my SQL Heatwave Lake house, which allows users to process and query data across MyQ databases as heatwave databases, as well as object stores. So, and then we've got, heatwave has been announced on AWS and, and, and Azure, they're available now and Lake House I believe is in beta and I think it's coming out the second half of next year. So again, all of our guests are fresh off of Oracle Cloud world in Las Vegas. So they got the latest scoop. Guys, I'm done talking. Let's get into it. Mark, maybe you could start us off, what's your opinion of my SQL Heatwaves competitive position? When you think about what AWS is doing, you know, Google is, you know, we heard Google Cloud next recently, we heard about all their data innovations. You got, obviously Azure's got a big portfolio, snowflakes doing well in the market. What's your take? >>Well, first let's look at it from the point of view that AWS is the market leader in cloud and cloud services. They own somewhere between 30 to 50% depending on who you read of the market. And then you have Azure as number two and after that it falls off. There's gcp, Google Cloud platform, which is further way down the list and then Oracle and IBM and Alibaba. So when you look at AWS and you and Azure saying, hey, these are the market leaders in the cloud, then you start looking at it and saying, if I am going to provide a service that competes with the service they have, if I can make it available in their cloud, it means that I can be more competitive. And if I'm compelling and compelling means at least twice the performance or functionality or both at half the price, I should be able to gain market share. >>And that's what Oracle's done. They've taken a superior product in my SQL heat wave, which is faster, lower cost does more for a lot less at the end of the day and they make it available to the users of those clouds. You avoid this little thing called egress fees, you avoid the issue of having to migrate from one cloud to another and suddenly you have a very compelling offer. So I look at what Oracle's doing with MyQ and it feels like, I'm gonna use a word term, a flanking maneuver to their competition. They're offering a better service on their platforms. >>All right, so thank you for that. Holger, we've seen this sort of cadence, I sort of referenced it up front a little bit and they sat on MySQL for a decade, then all of a sudden we see this rush of announcements. Why did it take so long? And and more importantly is Oracle, are they developing the right features that cloud database customers are looking for in your view? >>Yeah, great question, but first of all, in your interview you said it's the edit analytics, right? Analytics is kind of like a marketing buzzword. Reports can be analytics, right? The interesting thing, which they did, the first thing they, they, they crossed the chasm between OTP and all up, right? In the same database, right? So major engineering feed very much what customers want and it's all about creating Bellevue for customers, which, which I think is the part why they go into the multi-cloud and why they add these capabilities. And they certainly with the AI capabilities, it's kind of like getting it into an autonomous field, self-driving field now with the lake cost capabilities and meeting customers where they are, like Mark has talked about the e risk costs in the cloud. So that that's a significant advantage, creating value for customers and that's what at the end of the day matters. >>And I believe strongly that long term it's gonna be ones who create better value for customers who will get more of their money From that perspective, why then take them so long? I think it's a great question. I think largely he mentioned the gentleman Nial, it's largely to who leads a product. I used to build products too, so maybe I'm a little fooling myself here, but that made the difference in my view, right? So since he's been charged, he's been building things faster than the rest of the competition, than my SQL space, which in hindsight we thought was a hot and smoking innovation phase. It kind of like was a little self complacent when it comes to the traditional borders of where, where people think, where things are separated between OTP and ola or as an example of adjacent support, right? Structured documents, whereas unstructured documents or databases and all of that has been collapsed and brought together for building a more powerful database for customers. >>So I mean it's certainly, you know, when, when Oracle talks about the competitors, you know, the competitors are in the, I always say they're, if the Oracle talks about you and knows you're doing well, so they talk a lot about aws, talk a little bit about Snowflake, you know, sort of Google, they have partnerships with Azure, but, but in, so I'm presuming that the response in MySQL heatwave was really in, in response to what they were seeing from those big competitors. But then you had Maria DB coming out, you know, the day that that Oracle acquired Sun and, and launching and going after the MySQL base. So it's, I'm, I'm interested and we'll talk about this later and what you guys think AWS and Google and Azure and Snowflake and how they're gonna respond. But, but before I do that, Ron, I want to ask you, you, you, you can get, you know, pretty technical and you've probably seen the benchmarks. >>I know you have Oracle makes a big deal out of it, publishes its benchmarks, makes some transparent on on GI GitHub. Larry Ellison talked about this in his keynote at Cloud World. What are the benchmarks show in general? I mean, when you, when you're new to the market, you gotta have a story like Mark was saying, you gotta be two x you know, the performance at half the cost or you better be or you're not gonna get any market share. So, and, and you know, oftentimes companies don't publish market benchmarks when they're leading. They do it when they, they need to gain share. So what do you make of the benchmarks? Have their, any results that were surprising to you? Have, you know, they been challenged by the competitors. Is it just a bunch of kind of desperate bench marketing to make some noise in the market or you know, are they real? What's your view? >>Well, from my perspective, I think they have the validity. And to your point, I believe that when it comes to competitor responses, that has not really happened. Nobody has like pulled down the information that's on GitHub and said, Oh, here are our price performance results. And they counter oracles. In fact, I think part of the reason why that hasn't happened is that there's the risk if Oracle's coming out and saying, Hey, we can deliver 17 times better query performance using our capabilities versus say, Snowflake when it comes to, you know, the Lakehouse platform and Snowflake turns around and says it's actually only 15 times better during performance, that's not exactly an effective maneuver. And so I think this is really to oracle's credit and I think it's refreshing because these differentiators are significant. We're not talking, you know, like 1.2% differences. We're talking 17 fold differences, we're talking six fold differences depending on, you know, where the spotlight is being shined and so forth. >>And so I think this is actually something that is actually too good to believe initially at first blush. If I'm a cloud database decision maker, I really have to prioritize this. I really would know, pay a lot more attention to this. And that's why I posed the question to Oracle and others like, okay, if these differentiators are so significant, why isn't the needle moving a bit more? And it's for, you know, some of the usual reasons. One is really deep discounting coming from, you know, the other players that's really kind of, you know, marketing 1 0 1, this is something you need to do when there's a real competitive threat to keep, you know, a customer in your own customer base. Plus there is the usual fear and uncertainty about moving from one platform to another. But I think, you know, the traction, the momentum is, is shifting an Oracle's favor. I think we saw that in the Q1 efforts, for example, where Oracle cloud grew 44% and that it generated, you know, 4.8 billion and revenue if I recall correctly. And so, so all these are demonstrating that's Oracle is making, I think many of the right moves, publishing these figures for anybody to look at from their own perspective is something that is, I think, good for the market and I think it's just gonna continue to pay dividends for Oracle down the horizon as you know, competition intens plots. So if I were in, >>Dave, can I, Dave, can I interject something and, and what Ron just said there? Yeah, please go ahead. A couple things here, one discounting, which is a common practice when you have a real threat, as Ron pointed out, isn't going to help much in this situation simply because you can't discount to the point where you improve your performance and the performance is a huge differentiator. You may be able to get your price down, but the problem that most of them have is they don't have an integrated product service. They don't have an integrated O L T P O L A P M L N data lake. Even if you cut out two of them, they don't have any of them integrated. They have multiple services that are required separate integration and that can't be overcome with discounting. And the, they, you have to pay for each one of these. And oh, by the way, as you grow, the discounts go away. So that's a, it's a minor important detail. >>So, so that's a TCO question mark, right? And I know you look at this a lot, if I had that kind of price performance advantage, I would be pounding tco, especially if I need two separate databases to do the job. That one can do, that's gonna be, the TCO numbers are gonna be off the chart or maybe down the chart, which you want. Have you looked at this and how does it compare with, you know, the big cloud guys, for example, >>I've looked at it in depth, in fact, I'm working on another TCO on this arena, but you can find it on Wiki bod in which I compared TCO for MySEQ Heat wave versus Aurora plus Redshift plus ML plus Blue. I've compared it against gcps services, Azure services, Snowflake with other services. And there's just no comparison. The, the TCO differences are huge. More importantly, thefor, the, the TCO per performance is huge. We're talking in some cases multiple orders of magnitude, but at least an order of magnitude difference. So discounting isn't gonna help you much at the end of the day, it's only going to lower your cost a little, but it doesn't improve the automation, it doesn't improve the performance, it doesn't improve the time to insight, it doesn't improve all those things that you want out of a database or multiple databases because you >>Can't discount yourself to a higher value proposition. >>So what about, I wonder ho if you could chime in on the developer angle. You, you followed that, that market. How do these innovations from heatwave, I think you used the term developer velocity. I've heard you used that before. Yeah, I mean, look, Oracle owns Java, okay, so it, it's, you know, most popular, you know, programming language in the world, blah, blah blah. But it does it have the, the minds and hearts of, of developers and does, where does heatwave fit into that equation? >>I think heatwave is gaining quickly mindshare on the developer side, right? It's not the traditional no sequel database which grew up, there's a traditional mistrust of oracles to developers to what was happening to open source when gets acquired. Like in the case of Oracle versus Java and where my sql, right? And, but we know it's not a good competitive strategy to, to bank on Oracle screwing up because it hasn't worked not on Java known my sequel, right? And for developers, it's, once you get to know a technology product and you can do more, it becomes kind of like a Swiss army knife and you can build more use case, you can build more powerful applications. That's super, super important because you don't have to get certified in multiple databases. You, you are fast at getting things done, you achieve fire, develop velocity, and the managers are happy because they don't have to license more things, send you to more trainings, have more risk of something not being delivered, right? >>So it's really the, we see the suite where this best of breed play happening here, which in general was happening before already with Oracle's flagship database. Whereas those Amazon as an example, right? And now the interesting thing is every step away Oracle was always a one database company that can be only one and they're now generally talking about heat web and that two database company with different market spaces, but same value proposition of integrating more things very, very quickly to have a universal database that I call, they call the converge database for all the needs of an enterprise to run certain application use cases. And that's what's attractive to developers. >>It's, it's ironic isn't it? I mean I, you know, the rumor was the TK Thomas Curian left Oracle cuz he wanted to put Oracle database on other clouds and other places. And maybe that was the rift. Maybe there was, I'm sure there was other things, but, but Oracle clearly is now trying to expand its Tam Ron with, with heatwave into aws, into Azure. How do you think Oracle's gonna do, you were at a cloud world, what was the sentiment from customers and the independent analyst? Is this just Oracle trying to screw with the competition, create a little diversion? Or is this, you know, serious business for Oracle? What do you think? >>No, I think it has lakes. I think it's definitely, again, attriting to Oracle's overall ability to differentiate not only my SQL heat wave, but its overall portfolio. And I think the fact that they do have the alliance with the Azure in place, that this is definitely demonstrating their commitment to meeting the multi-cloud needs of its customers as well as what we pointed to in terms of the fact that they're now offering, you know, MySQL capabilities within AWS natively and that it can now perform AWS's own offering. And I think this is all demonstrating that Oracle is, you know, not letting up, they're not resting on its laurels. That's clearly we are living in a multi-cloud world, so why not just make it more easy for customers to be able to use cloud databases according to their own specific, specific needs. And I think, you know, to holder's point, I think that definitely lines with being able to bring on more application developers to leverage these capabilities. >>I think one important announcement that's related to all this was the JSON relational duality capabilities where now it's a lot easier for application developers to use a language that they're very familiar with a JS O and not have to worry about going into relational databases to store their J S O N application coding. So this is, I think an example of the innovation that's enhancing the overall Oracle portfolio and certainly all the work with machine learning is definitely paying dividends as well. And as a result, I see Oracle continue to make these inroads that we pointed to. But I agree with Mark, you know, the short term discounting is just a stall tag. This is not denying the fact that Oracle is being able to not only deliver price performance differentiators that are dramatic, but also meeting a wide range of needs for customers out there that aren't just limited device performance consideration. >>Being able to support multi-cloud according to customer needs. Being able to reach out to the application developer community and address a very specific challenge that has plagued them for many years now. So bring it all together. Yeah, I see this as just enabling Oracles who ring true with customers. That the customers that were there were basically all of them, even though not all of them are going to be saying the same things, they're all basically saying positive feedback. And likewise, I think the analyst community is seeing this. It's always refreshing to be able to talk to customers directly and at Oracle cloud there was a litany of them and so this is just a difference maker as well as being able to talk to strategic partners. The nvidia, I think partnerships also testament to Oracle's ongoing ability to, you know, make the ecosystem more user friendly for the customers out there. >>Yeah, it's interesting when you get these all in one tools, you know, the Swiss Army knife, you expect that it's not able to be best of breed. That's the kind of surprising thing that I'm hearing about, about heatwave. I want to, I want to talk about Lake House because when I think of Lake House, I think data bricks, and to my knowledge data bricks hasn't been in the sites of Oracle yet. Maybe they're next, but, but Oracle claims that MySQL, heatwave, Lakehouse is a breakthrough in terms of capacity and performance. Mark, what are your thoughts on that? Can you double click on, on Lakehouse Oracle's claims for things like query performance and data loading? What does it mean for the market? Is Oracle really leading in, in the lake house competitive landscape? What are your thoughts? >>Well, but name in the game is what are the problems you're solving for the customer? More importantly, are those problems urgent or important? If they're urgent, customers wanna solve 'em. Now if they're important, they might get around to them. So you look at what they're doing with Lake House or previous to that machine learning or previous to that automation or previous to that O L A with O ltp and they're merging all this capability together. If you look at Snowflake or data bricks, they're tacking one problem. You look at MyQ heat wave, they're tacking multiple problems. So when you say, yeah, their queries are much better against the lake house in combination with other analytics in combination with O ltp and the fact that there are no ETLs. So you're getting all this done in real time. So it's, it's doing the query cross, cross everything in real time. >>You're solving multiple user and developer problems, you're increasing their ability to get insight faster, you're having shorter response times. So yeah, they really are solving urgent problems for customers. And by putting it where the customer lives, this is the brilliance of actually being multicloud. And I know I'm backing up here a second, but by making it work in AWS and Azure where people already live, where they already have applications, what they're saying is, we're bringing it to you. You don't have to come to us to get these, these benefits, this value overall, I think it's a brilliant strategy. I give Nip and Argo wallet a huge, huge kudos for what he's doing there. So yes, what they're doing with the lake house is going to put notice on data bricks and Snowflake and everyone else for that matter. Well >>Those are guys that whole ago you, you and I have talked about this. Those are, those are the guys that are doing sort of the best of breed. You know, they're really focused and they, you know, tend to do well at least out of the gate. Now you got Oracle's converged philosophy, obviously with Oracle database. We've seen that now it's kicking in gear with, with heatwave, you know, this whole thing of sweets versus best of breed. I mean the long term, you know, customers tend to migrate towards suite, but the new shiny toy tends to get the growth. How do you think this is gonna play out in cloud database? >>Well, it's the forever never ending story, right? And in software right suite, whereas best of breed and so far in the long run suites have always won, right? So, and sometimes they struggle again because the inherent problem of sweets is you build something larger, it has more complexity and that means your cycles to get everything working together to integrate the test that roll it out, certify whatever it is, takes you longer, right? And that's not the case. It's a fascinating part of what the effort around my SQL heat wave is that the team is out executing the previous best of breed data, bringing us something together. Now if they can maintain that pace, that's something to to, to be seen. But it, the strategy, like what Mark was saying, bring the software to the data is of course interesting and unique and totally an Oracle issue in the past, right? >>Yeah. But it had to be in your database on oci. And but at, that's an interesting part. The interesting thing on the Lake health side is, right, there's three key benefits of a lakehouse. The first one is better reporting analytics, bring more rich information together, like make the, the, the case for silicon angle, right? We want to see engagements for this video, we want to know what's happening. That's a mixed transactional video media use case, right? Typical Lakehouse use case. The next one is to build more rich applications, transactional applications which have video and these elements in there, which are the engaging one. And the third one, and that's where I'm a little critical and concerned, is it's really the base platform for artificial intelligence, right? To run deep learning to run things automatically because they have all the data in one place can create in one way. >>And that's where Oracle, I know that Ron talked about Invidia for a moment, but that's where Oracle doesn't have the strongest best story. Nonetheless, the two other main use cases of the lake house are very strong, very well only concern is four 50 terabyte sounds long. It's an arbitrary limitation. Yeah, sounds as big. So for the start, and it's the first word, they can make that bigger. You don't want your lake house to be limited and the terabyte sizes or any even petabyte size because you want to have the certainty. I can put everything in there that I think it might be relevant without knowing what questions to ask and query those questions. >>Yeah. And you know, in the early days of no schema on right, it just became a mess. But now technology has evolved to allow us to actually get more value out of that data. Data lake. Data swamp is, you know, not much more, more, more, more logical. But, and I want to get in, in a moment, I want to come back to how you think the competitors are gonna respond. Are they gonna have to sort of do a more of a converged approach? AWS in particular? But before I do, Ron, I want to ask you a question about autopilot because I heard Larry Ellison's keynote and he was talking about how, you know, most security issues are human errors with autonomy and autonomous database and things like autopilot. We take care of that. It's like autonomous vehicles, they're gonna be safer. And I went, well maybe, maybe someday. So Oracle really tries to emphasize this, that every time you see an announcement from Oracle, they talk about new, you know, autonomous capabilities. It, how legit is it? Do people care? What about, you know, what's new for heatwave Lakehouse? How much of a differentiator, Ron, do you really think autopilot is in this cloud database space? >>Yeah, I think it will definitely enhance the overall proposition. I don't think people are gonna buy, you know, lake house exclusively cause of autopilot capabilities, but when they look at the overall picture, I think it will be an added capability bonus to Oracle's benefit. And yeah, I think it's kind of one of these age old questions, how much do you automate and what is the bounce to strike? And I think we all understand with the automatic car, autonomous car analogy that there are limitations to being able to use that. However, I think it's a tool that basically every organization out there needs to at least have or at least evaluate because it goes to the point of it helps with ease of use, it helps make automation more balanced in terms of, you know, being able to test, all right, let's automate this process and see if it works well, then we can go on and switch on on autopilot for other processes. >>And then, you know, that allows, for example, the specialists to spend more time on business use cases versus, you know, manual maintenance of, of the cloud database and so forth. So I think that actually is a, a legitimate value proposition. I think it's just gonna be a case by case basis. Some organizations are gonna be more aggressive with putting automation throughout their processes throughout their organization. Others are gonna be more cautious. But it's gonna be, again, something that will help the overall Oracle proposition. And something that I think will be used with caution by many organizations, but other organizations are gonna like, hey, great, this is something that is really answering a real problem. And that is just easing the use of these databases, but also being able to better handle the automation capabilities and benefits that come with it without having, you know, a major screwup happened and the process of transitioning to more automated capabilities. >>Now, I didn't attend cloud world, it's just too many red eyes, you know, recently, so I passed. But one of the things I like to do at those events is talk to customers, you know, in the spirit of the truth, you know, they, you know, you'd have the hallway, you know, track and to talk to customers and they say, Hey, you know, here's the good, the bad and the ugly. So did you guys, did you talk to any customers my SQL Heatwave customers at, at cloud world? And and what did you learn? I don't know, Mark, did you, did you have any luck and, and having some, some private conversations? >>Yeah, I had quite a few private conversations. The one thing before I get to that, I want disagree with one point Ron made, I do believe there are customers out there buying the heat wave service, the MySEQ heat wave server service because of autopilot. Because autopilot is really revolutionary in many ways in the sense for the MySEQ developer in that it, it auto provisions, it auto parallel loads, IT auto data places it auto shape predictions. It can tell you what machine learning models are going to tell you, gonna give you your best results. And, and candidly, I've yet to meet a DBA who didn't wanna give up pedantic tasks that are pain in the kahoo, which they'd rather not do and if it's long as it was done right for them. So yes, I do think people are buying it because of autopilot and that's based on some of the conversations I had with customers at Oracle Cloud World. >>In fact, it was like, yeah, that's great, yeah, we get fantastic performance, but this really makes my life easier and I've yet to meet a DBA who didn't want to make their life easier. And it does. So yeah, I've talked to a few of them. They were excited. I asked them if they ran into any bugs, were there any difficulties in moving to it? And the answer was no. In both cases, it's interesting to note, my sequel is the most popular database on the planet. Well, some will argue that it's neck and neck with SQL Server, but if you add in Mariah DB and ProCon db, which are forks of MySQL, then yeah, by far and away it's the most popular. And as a result of that, everybody for the most part has typically a my sequel database somewhere in their organization. So this is a brilliant situation for anybody going after MyQ, but especially for heat wave. And the customers I talk to love it. I didn't find anybody complaining about it. And >>What about the migration? We talked about TCO earlier. Did your t does your TCO analysis include the migration cost or do you kind of conveniently leave that out or what? >>Well, when you look at migration costs, there are different kinds of migration costs. By the way, the worst job in the data center is the data migration manager. Forget it, no other job is as bad as that one. You get no attaboys for doing it. Right? And then when you screw up, oh boy. So in real terms, anything that can limit data migration is a good thing. And when you look at Data Lake, that limits data migration. So if you're already a MySEQ user, this is a pure MySQL as far as you're concerned. It's just a, a simple transition from one to the other. You may wanna make sure nothing broke and every you, all your tables are correct and your schema's, okay, but it's all the same. So it's a simple migration. So it's pretty much a non-event, right? When you migrate data from an O LTP to an O L A P, that's an ETL and that's gonna take time. >>But you don't have to do that with my SQL heat wave. So that's gone when you start talking about machine learning, again, you may have an etl, you may not, depending on the circumstances, but again, with my SQL heat wave, you don't, and you don't have duplicate storage, you don't have to copy it from one storage container to another to be able to be used in a different database, which by the way, ultimately adds much more cost than just the other service. So yeah, I looked at the migration and again, the users I talked to said it was a non-event. It was literally moving from one physical machine to another. If they had a new version of MySEQ running on something else and just wanted to migrate it over or just hook it up or just connect it to the data, it worked just fine. >>Okay, so every day it sounds like you guys feel, and we've certainly heard this, my colleague David Foyer, the semi-retired David Foyer was always very high on heatwave. So I think you knows got some real legitimacy here coming from a standing start, but I wanna talk about the competition, how they're likely to respond. I mean, if your AWS and you got heatwave is now in your cloud, so there's some good aspects of that. The database guys might not like that, but the infrastructure guys probably love it. Hey, more ways to sell, you know, EC two and graviton, but you're gonna, the database guys in AWS are gonna respond. They're gonna say, Hey, we got Redshift, we got aqua. What's your thoughts on, on not only how that's gonna resonate with customers, but I'm interested in what you guys think will a, I never say never about aws, you know, and are they gonna try to build, in your view a converged Oola and o LTP database? You know, Snowflake is taking an ecosystem approach. They've added in transactional capabilities to the portfolio so they're not standing still. What do you guys see in the competitive landscape in that regard going forward? Maybe Holger, you could start us off and anybody else who wants to can chime in, >>Happy to, you mentioned Snowflake last, we'll start there. I think Snowflake is imitating that strategy, right? That building out original data warehouse and the clouds tasking project to really proposition to have other data available there because AI is relevant for everybody. Ultimately people keep data in the cloud for ultimately running ai. So you see the same suite kind of like level strategy, it's gonna be a little harder because of the original positioning. How much would people know that you're doing other stuff? And I just, as a former developer manager of developers, I just don't see the speed at the moment happening at Snowflake to become really competitive to Oracle. On the flip side, putting my Oracle hat on for a moment back to you, Mark and Iran, right? What could Oracle still add? Because the, the big big things, right? The traditional chasms in the database world, they have built everything, right? >>So I, I really scratched my hat and gave Nipon a hard time at Cloud world say like, what could you be building? Destiny was very conservative. Let's get the Lakehouse thing done, it's gonna spring next year, right? And the AWS is really hard because AWS value proposition is these small innovation teams, right? That they build two pizza teams, which can be fit by two pizzas, not large teams, right? And you need suites to large teams to build these suites with lots of functionalities to make sure they work together. They're consistent, they have the same UX on the administration side, they can consume the same way, they have the same API registry, can't even stop going where the synergy comes to play over suite. So, so it's gonna be really, really hard for them to change that. But AWS super pragmatic. They're always by themselves that they'll listen to customers if they learn from customers suite as a proposition. I would not be surprised if AWS trying to bring things closer together, being morely together. >>Yeah. Well how about, can we talk about multicloud if, if, again, Oracle is very on on Oracle as you said before, but let's look forward, you know, half a year or a year. What do you think about Oracle's moves in, in multicloud in terms of what kind of penetration they're gonna have in the marketplace? You saw a lot of presentations at at cloud world, you know, we've looked pretty closely at the, the Microsoft Azure deal. I think that's really interesting. I've, I've called it a little bit of early days of a super cloud. What impact do you think this is gonna have on, on the marketplace? But, but both. And think about it within Oracle's customer base, I have no doubt they'll do great there. But what about beyond its existing install base? What do you guys think? >>Ryan, do you wanna jump on that? Go ahead. Go ahead Ryan. No, no, no, >>That's an excellent point. I think it aligns with what we've been talking about in terms of Lakehouse. I think Lake House will enable Oracle to pull more customers, more bicycle customers onto the Oracle platforms. And I think we're seeing all the signs pointing toward Oracle being able to make more inroads into the overall market. And that includes garnishing customers from the leaders in, in other words, because they are, you know, coming in as a innovator, a an alternative to, you know, the AWS proposition, the Google cloud proposition that they have less to lose and there's a result they can really drive the multi-cloud messaging to resonate with not only their existing customers, but also to be able to, to that question, Dave's posing actually garnish customers onto their platform. And, and that includes naturally my sequel but also OCI and so forth. So that's how I'm seeing this playing out. I think, you know, again, Oracle's reporting is indicating that, and I think what we saw, Oracle Cloud world is definitely validating the idea that Oracle can make more waves in the overall market in this regard. >>You know, I, I've floated this idea of Super cloud, it's kind of tongue in cheek, but, but there, I think there is some merit to it in terms of building on top of hyperscale infrastructure and abstracting some of the, that complexity. And one of the things that I'm most interested in is industry clouds and an Oracle acquisition of Cerner. I was struck by Larry Ellison's keynote, it was like, I don't know, an hour and a half and an hour and 15 minutes was focused on healthcare transformation. Well, >>So vertical, >>Right? And so, yeah, so you got Oracle's, you know, got some industry chops and you, and then you think about what they're building with, with not only oci, but then you got, you know, MyQ, you can now run in dedicated regions. You got ADB on on Exadata cloud to customer, you can put that OnPrem in in your data center and you look at what the other hyperscalers are, are doing. I I say other hyperscalers, I've always said Oracle's not really a hyperscaler, but they got a cloud so they're in the game. But you can't get, you know, big query OnPrem, you look at outposts, it's very limited in terms of, you know, the database support and again, that that will will evolve. But now you got Oracle's got, they announced Alloy, we can white label their cloud. So I'm interested in what you guys think about these moves, especially the industry cloud. We see, you know, Walmart is doing sort of their own cloud. You got Goldman Sachs doing a cloud. Do you, you guys, what do you think about that and what role does Oracle play? Any thoughts? >>Yeah, let me lemme jump on that for a moment. Now, especially with the MyQ, by making that available in multiple clouds, what they're doing is this follows the philosophy they've had the past with doing cloud, a customer taking the application and the data and putting it where the customer lives. If it's on premise, it's on premise. If it's in the cloud, it's in the cloud. By making the mice equal heat wave, essentially a plug compatible with any other mice equal as far as your, your database is concern and then giving you that integration with O L A P and ML and Data Lake and everything else, then what you've got is a compelling offering. You're making it easier for the customer to use. So I look the difference between MyQ and the Oracle database, MyQ is going to capture market more market share for them. >>You're not gonna find a lot of new users for the Oracle debate database. Yeah, there are always gonna be new users, don't get me wrong, but it's not gonna be a huge growth. Whereas my SQL heatwave is probably gonna be a major growth engine for Oracle going forward. Not just in their own cloud, but in AWS and in Azure and on premise over time that eventually it'll get there. It's not there now, but it will, they're doing the right thing on that basis. They're taking the services and when you talk about multicloud and making them available where the customer wants them, not forcing them to go where you want them, if that makes sense. And as far as where they're going in the future, I think they're gonna take a page outta what they've done with the Oracle database. They'll add things like JSON and XML and time series and spatial over time they'll make it a, a complete converged database like they did with the Oracle database. The difference being Oracle database will scale bigger and will have more transactions and be somewhat faster. And my SQL will be, for anyone who's not on the Oracle database, they're, they're not stupid, that's for sure. >>They've done Jason already. Right. But I give you that they could add graph and time series, right. Since eat with, Right, Right. Yeah, that's something absolutely right. That's, that's >>A sort of a logical move, right? >>Right. But that's, that's some kid ourselves, right? I mean has worked in Oracle's favor, right? 10 x 20 x, the amount of r and d, which is in the MyQ space, has been poured at trying to snatch workloads away from Oracle by starting with IBM 30 years ago, 20 years ago, Microsoft and, and, and, and didn't work, right? Database applications are extremely sticky when they run, you don't want to touch SIM and grow them, right? So that doesn't mean that heat phase is not an attractive offering, but it will be net new things, right? And what works in my SQL heat wave heat phases favor a little bit is it's not the massive enterprise applications which have like we the nails like, like you might be only running 30% or Oracle, but the connections and the interfaces into that is, is like 70, 80% of your enterprise. >>You take it out and it's like the spaghetti ball where you say, ah, no I really don't, don't want to do all that. Right? You don't, don't have that massive part with the equals heat phase sequel kind of like database which are more smaller tactical in comparison, but still I, I don't see them taking so much share. They will be growing because of a attractive value proposition quickly on the, the multi-cloud, right? I think it's not really multi-cloud. If you give people the chance to run your offering on different clouds, right? You can run it there. The multi-cloud advantages when the Uber offering comes out, which allows you to do things across those installations, right? I can migrate data, I can create data across something like Google has done with B query Omni, I can run predictive models or even make iron models in different place and distribute them, right? And Oracle is paving the road for that, but being available on these clouds. But the multi-cloud capability of database which knows I'm running on different clouds that is still yet to be built there. >>Yeah. And >>That the problem with >>That, that's the super cloud concept that I flowed and I I've always said kinda snowflake with a single global instance is sort of, you know, headed in that direction and maybe has a league. What's the issue with that mark? >>Yeah, the problem with the, with that version, the multi-cloud is clouds to charge egress fees. As long as they charge egress fees to move data between clouds, it's gonna make it very difficult to do a real multi-cloud implementation. Even Snowflake, which runs multi-cloud, has to pass out on the egress fees of their customer when data moves between clouds. And that's really expensive. I mean there, there is one customer I talked to who is beta testing for them, the MySQL heatwave and aws. The only reason they didn't want to do that until it was running on AWS is the egress fees were so great to move it to OCI that they couldn't afford it. Yeah. Egress fees are the big issue but, >>But Mark the, the point might be you might wanna root query and only get the results set back, right was much more tinier, which been the answer before for low latency between the class A problem, which we sometimes still have but mostly don't have. Right? And I think in general this with fees coming down based on the Oracle general E with fee move and it's very hard to justify those, right? But, but it's, it's not about moving data as a multi-cloud high value use case. It's about doing intelligent things with that data, right? Putting into other places, replicating it, what I'm saying the same thing what you said before, running remote queries on that, analyzing it, running AI on it, running AI models on that. That's the interesting thing. Cross administered in the same way. Taking things out, making sure compliance happens. Making sure when Ron says I don't want to be American anymore, I want to be in the European cloud that is gets migrated, right? So tho those are the interesting value use case which are really, really hard for enterprise to program hand by hand by developers and they would love to have out of the box and that's yet the innovation to come to, we have to come to see. But the first step to get there is that your software runs in multiple clouds and that's what Oracle's doing so well with my SQL >>Guys. Amazing. >>Go ahead. Yeah. >>Yeah. >>For example, >>Amazing amount of data knowledge and, and brain power in this market. Guys, I really want to thank you for coming on to the cube. Ron Holger. Mark, always a pleasure to have you on. Really appreciate your time. >>Well all the last names we're very happy for Romanic last and moderator. Thanks Dave for moderating us. All right, >>We'll see. We'll see you guys around. Safe travels to all and thank you for watching this power panel, The Truth About My SQL Heat Wave on the cube. Your leader in enterprise and emerging tech coverage.

Published Date : Nov 1 2022

SUMMARY :

Always a pleasure to have you on. I think you just saw him at Oracle Cloud World and he's come on to describe this is doing, you know, Google is, you know, we heard Google Cloud next recently, They own somewhere between 30 to 50% depending on who you read migrate from one cloud to another and suddenly you have a very compelling offer. All right, so thank you for that. And they certainly with the AI capabilities, And I believe strongly that long term it's gonna be ones who create better value for So I mean it's certainly, you know, when, when Oracle talks about the competitors, So what do you make of the benchmarks? say, Snowflake when it comes to, you know, the Lakehouse platform and threat to keep, you know, a customer in your own customer base. And oh, by the way, as you grow, And I know you look at this a lot, to insight, it doesn't improve all those things that you want out of a database or multiple databases So what about, I wonder ho if you could chime in on the developer angle. they don't have to license more things, send you to more trainings, have more risk of something not being delivered, all the needs of an enterprise to run certain application use cases. I mean I, you know, the rumor was the TK Thomas Curian left Oracle And I think, you know, to holder's point, I think that definitely lines But I agree with Mark, you know, the short term discounting is just a stall tag. testament to Oracle's ongoing ability to, you know, make the ecosystem Yeah, it's interesting when you get these all in one tools, you know, the Swiss Army knife, you expect that it's not able So when you say, yeah, their queries are much better against the lake house in You don't have to come to us to get these, these benefits, I mean the long term, you know, customers tend to migrate towards suite, but the new shiny bring the software to the data is of course interesting and unique and totally an Oracle issue in And the third one, lake house to be limited and the terabyte sizes or any even petabyte size because you want keynote and he was talking about how, you know, most security issues are human I don't think people are gonna buy, you know, lake house exclusively cause of And then, you know, that allows, for example, the specialists to And and what did you learn? The one thing before I get to that, I want disagree with And the customers I talk to love it. the migration cost or do you kind of conveniently leave that out or what? And when you look at Data Lake, that limits data migration. So that's gone when you start talking about So I think you knows got some real legitimacy here coming from a standing start, So you see the same And you need suites to large teams to build these suites with lots of functionalities You saw a lot of presentations at at cloud world, you know, we've looked pretty closely at Ryan, do you wanna jump on that? I think, you know, again, Oracle's reporting I think there is some merit to it in terms of building on top of hyperscale infrastructure and to customer, you can put that OnPrem in in your data center and you look at what the So I look the difference between MyQ and the Oracle database, MyQ is going to capture market They're taking the services and when you talk about multicloud and But I give you that they could add graph and time series, right. like, like you might be only running 30% or Oracle, but the connections and the interfaces into You take it out and it's like the spaghetti ball where you say, ah, no I really don't, global instance is sort of, you know, headed in that direction and maybe has a league. Yeah, the problem with the, with that version, the multi-cloud is clouds And I think in general this with fees coming down based on the Oracle general E with fee move Yeah. Guys, I really want to thank you for coming on to the cube. Well all the last names we're very happy for Romanic last and moderator. We'll see you guys around.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
MarkPERSON

0.99+

Ron HolgerPERSON

0.99+

RonPERSON

0.99+

Mark StammerPERSON

0.99+

IBMORGANIZATION

0.99+

Ron WestfallPERSON

0.99+

RyanPERSON

0.99+

AWSORGANIZATION

0.99+

DavePERSON

0.99+

WalmartORGANIZATION

0.99+

Larry EllisonPERSON

0.99+

MicrosoftORGANIZATION

0.99+

AlibabaORGANIZATION

0.99+

OracleORGANIZATION

0.99+

GoogleORGANIZATION

0.99+

Holgar MuellerPERSON

0.99+

AmazonORGANIZATION

0.99+

Constellation ResearchORGANIZATION

0.99+

Goldman SachsORGANIZATION

0.99+

17 timesQUANTITY

0.99+

twoQUANTITY

0.99+

David FoyerPERSON

0.99+

44%QUANTITY

0.99+

1.2%QUANTITY

0.99+

4.8 billionQUANTITY

0.99+

JasonPERSON

0.99+

UberORGANIZATION

0.99+

Fu Chim ResearchORGANIZATION

0.99+

Dave AntePERSON

0.99+

Day 3 Wrap with Stu Miniman | AWS re:Invent 2021


 

(upbeat music) >> We're back at AWS re:Invent 2021. It's the biggest hybrid event of the year. One of the few physical events and we're psyched to be here. My name is Dave Vellante, and I'm really pleased to bring back the host emeritus, Stu Miniman, somebody I worked with side-by-side, Stu, for 10 years in a setting much like this, many like this. So, good to have you back. >> Dave, it's great to be here with theCUBE team, family here and re:Invent, Dave. I mean, this show, I remember back, Dave, going to you after the first re:Invent we talked, we were like, "We got to be there." Dave, remember the first year we came, the second year of re:Invent, this is the 10th year now, little card tables, gaming companies, all this stuff. You had Jerry Chen on yesterday and Jerry was comparing like, this is going to be like the next Microsoft. And we bet heavy on this ecosystem. And yeah, we all think this cloud thing, it might be real. 20,000 people here, it's not the 50 or 75,000 that we had in like 2018, 2019, but this ecosystem, what's happening in the cloud, multiple versions of hybrid going on with the event and the services, but yeah, phenomenal stuff. And yeah, it's so nice to see people. >> That's for sure. It's something that we've talked about a lot over the years is, and you remember the early days of re:Invent and to this day, just very a strong developer affinity that AWS has done a tremendous job of building that up and it's their raison d'etre, it's how they approach the market. But now you've been at Red Hat for a bit, obviously as well, developer affinity, what have you learned? Specifically as it relates to the cloud, Kubernetes, hottest thing going, you don't want to do an OpenShift commercial, but it's there, you're in the middle of that mix. What have you learned generally? >> Well, Dave, to the comment that you made about developers here, it's developers and the enterprise. We used to have a joke and say, enterprise developer is an oxymoron, but that line between developers doing stuff, early as a cloud, it was stealth computing. It's they're often doing this stuff and central IT is not managing it. So how do the pieces come together? How do apps and infrastructure, how do those pieces come together? And it's something that Red Hat has been doing a long time. Think about the Linux developer. They might've not have been the app developers, the people building Linux and everything, but they had a decent close tie to it. I'm on the OpenShift team. What we do is cloud, Dave, and we've got a partnership here with Amazon. We GAed our native cloud service earlier this year. Andy Jassy helped name it. It is the beautifully named Red Hat OpenShift Service on AWS or ROSA. But we've done OpenShift on AWS for more than five years, basically since we were doing Kubernetes, it's been here because of course customers doing cloud, where are they? A lot of them are here in Amazon. So I've been loving talking to a lot of customers, understanding how enterprise adoption is increasing, how we can enable developers and help them move faster. And yeah, I mean the quick plug on OpenShift is our service. We've got an SRE team that is going to manage all of that. A friend of the program, Corey Quinn, says, "Hey, an SRE team like that, because you don't want to manage as an enterprise." You don't want to manage Kubernetes. Yeah, you need to understand some of the pieces, but what is important to your business is the applications, your data and all those things and managing the undifferentiated heavy lifting. That's one of the reasons you went to the cloud. So therefore changing your model as to how you consume services in the cloud. And what are we seeing with Amazon, Dave? They're trying to build more solutions, simplify deployments, and offer more solutions including with their ecosystem. >> So I want to ask you. You said enterprise developer is kind of an oxymoron, and I remember, years ago I used to hang around with a lot of heads of application development and insurance companies and financial services, pharmaceutical, and they didn't wear hoodies, but they didn't wear suits either. And then when I talked to guys like Jeff Clark, for instance. He talks about we're building an abstraction layer across clouds, blah, blah, blah, which by the way, I think it is the right strategy. I'm like, "Okay, I'll drink some of that Kool-Aid." And then when I come here, we talked to Adam Selipsky. John flew out and I was on the chime. He goes, "Yeah, that's not hybrid. No, this is nothing like, it's not AWS, AWS is cloud." So, square that circle for me, 'cause you're in both worlds and certainly your strategy is to connect those words. Is that cloud? >> Yeah, right. I mean, Dave, we spent years talking about like is private cloud really a cloud? And when we started coming to the show, there is only one cloud. It is the public cloud and Amazon is the paragon of, I don't know what it was. >> Dave: Fake clouds, cloud washing. >> So today, Amazon's putting lots of things into your data center and extending the cloud out to that environment. >> So that's cloud. >> That's cloud. >> What do we call that cloud? What about the reverse? >> What's happening at the edge is that cloud is that extension of what we said from Amazon. If you look at not only Outpost, but Wavelengths and Local Zones and everything else like that. >> Let's say, yes, that's cloud. The APIs, primitives, check. >> Dave, I've always thought cloud is an operating model, not a location. And the hybrid definition is not the old, I did an ebook on this, Dave earlier this year. It's not the decade old NIS definition of an application that spans because I don't get up in the morning as an enterprise and say, "Oh, let me look at the table of how much Google is charging me or Microsoft or Amazon," or wake up one morning and move from one cloud to the other. Portability, follow the sun type stuff, does it ever happen? Yes, but it is rare thing. Applications oftentimes get pulled apart. So we've seen if you talk about AI, training the cloud, then transact and do things at the edge. If I'm in an autonomous vehicle or in a geosynchronous satellite, I can't be going back to the cloud to process stuff. So I get what I need and I process there. The same thing hybrid, oftentimes I will do my transactional activity in the public cloud because I've got unlimited compute capability, but I might have my repository of data for many different reasons, governance or security, all these things in my own data center. So parts of an application might live there, but I don't just span to go between the public cloud in my data center or the edge, it's specific architectural decisions as to how we do this. And by the way the developer, they don't want to have to think about location. I mean, my background, servers, storage, virtualization, all that stuff, that was very much an infrastructure up look of things. Developers want to worry about their code and make sure that it works in production. >> Okay, let me test that. If it's in the AWS cloud and I think it's true for the other hyperscale clouds too, they don't have to think about location, but they still have to think about location on-prem, don't they? >> Well, Dave, even in a public cloud, you do need to worry about sometimes it's like, "Okay, do I split it between availability zones? How do I build that? How do I do that?" So there are things that we build on top of it. So we've seen Amazon. >> I think that's fair, data sovereignty, you have to think about okay. >> Absolutely, a lot of those things. >> Okay, but the experience in Germany is going to be the same as it is in DC, is it not? >> More or less? There are some differences we'll see off and Amazon will roll things out over time and what's available, you've got cloud. >> For sure, though that's definitely true. That's a maturity thing, right? You've talked a bit, but ultimately they all sort of catch up. I guess my question would be is the delta between, let's say, Fed adoption and East Coast, is that delta narrower, significantly narrow than what you might see on-prem? >> The services are the same, sometimes for financial or political things, there might be some slight differences, but yes, the cloud experience should be the same everywhere from Amazon. >> Is it from a standpoint of hybrid, on-prem to cloud, across cloud? >> Many of the things when they go outside of the Amazon data centers are limited or a little bit different or you might have latency considerations that you have to consider. >> Now it's a tug of war. >> So it's not totally seamless because, David Foyer would tell us there, "You're not going to fight physics." There are certain things that we need to have and we've changed the way we architect things because it's no longer the bottleneck of the local scuzzy connection that you have there, it is now (indistinct). >> But the point I'm making is that gets into a tug of war of "Our way is better than your way." And the answer is depends in terms of your workload and the use case. >> You've looked at some of these new databases that span globes and do things of the like. >> Another question, I don't know if you saw the Goldman Sachs deal this morning, Goldman Sachs is basically turning its business into a SaaS and pointing it to their hedge funds and allowing people to access their data, their tools, their software that they built for their own purposes. And now they're outselling it. Similar to what NASDAQ has done. I can't imagine doing that without containers. >> Yeah, so interesting point, I think. At least six years ago now, Amazon launched serverless and serverless was going to take over the world. I dug into the space for a couple of years. And you had the serverless with camp and you had the container camp. Last year at re:Invent, I really felt a shift from Amazon's positioning that many of the abstraction layers and the tools that help you support those environments will now span between Lambda and containers. The container world has been adding serverless functionality. So Amazon does Fargate. The open-source community uses something called Knative, and just breaking this week. Knative was a project that Google started and it looks like that is going to move over to the CNCF. So be part of the whole Kubernetes ecosystem and everything like that. Oracle, VMware, IBM, Red Hat, all heavily involved in Knative, and we're all excited to see that go into the CNCF. So the reason I say that, I've seen from Amazon, I actually, John and I, when we interviewed Andy Jassy back in 2017, I asked him a follow-up question because he said if he was to build AWS in 2017, "I would start with everything underneath it serverless." I would wonder if following up with Adam or Andy today, I'd said, "Would it be all serverless or would containers be a piece of it?" Because sometimes underneath it doesn't matter or sometimes it can be containers and serverless. It's a single unit in Amazon and when they position things, it's now that spectrum of unit, everything from the serverless through the containers, through... James Hamilton wrote a blog post today about running Xen-on-Nitro and they have a migration service for a mainframe. So what do we know? That one of the only things about IT is almost nothing ever goes away. I mean, it sounded like Amazon declared coming soon the end of life of mainframe. My friends over at IBM might not be quite ready to call that era over but we shall see. All these things take time. Everything in IT is additive. I'm happy to see. It is very much usually an end world when I look at the container and Kubernetes space. That is something that you can have a broad spectrum of applications. So some of my more monolithic applications can move over, my cool new data, AI things, I can build on it, microservices in between. And so, it's a broad platform that spans the cloud, the edge, the data center. So that cloud operating model is easier to have consistency all the places that I go. >> Mainframe is in the cloud. Well, we'll see. Big banks by the next site unseen. So I think Amazon will be able to eat away at the edges of that, but I don't think there's going to be a major migration. They claim it. Their big thing is that you can't get COBOL programmers. So I'm like, "Yeah, call DXC, you'll get plenty." Let's talk about something more interesting. (Stu laughs softly) So the last 10 years was a lot of, a lot about IT transformation and there was a lot more room to grow there. I mean, the four big hyperscalers are going to do 120 billion this year. They're growing at 35%. Maybe it's not a trillion, but there's a $500 billion market that they're going after, maybe more. It looks like there's a real move. You saw that with NASDAQ, the Goldman deal, to really drive into business, deeper business integration in addition to IT transformation. So how do you see the next decade of cloud? What should we be watching? >> So, one of the interesting trends, I mean, Dave, for years we covered big data and big data felt very horizontal in it's approach thing. Hadoop take over the world. When I look at AI solutions, when I look at the edge computing technologies that happen, they're very vertically driven. So, our early customers in edge adoption tend to be like telco with the 5G rollout manufacturing in some of their environments. AI, every single industry has a whole set of use cases that they're using that go very deep. So I think cloud computing goes from, we talked about infrastructure as a service to it needs to be more, it is solution, some of these pieces go together. When Adam got up on stage and talked about how many instance types they have on Amazon, Dave, it's got to be 2X or 4X more different instant types than if I went to go to HPE or Dell and buy a physical server for my environment. So we need to have areas and guidance and blueprints and heck, use some of that ML and AI to help drive people to the right solutions because we definitely have the paradox of choice today. So I think you will find some gravity moving towards some of these environments. Gravatar has been really interesting to watch. Obviously that Annapurna acquisition should be down as one of the biggest ones in the cloud era. >> No lack of optionality to your point. So I guess to the point of deeper business integration, that's the big question, will Amazon provide more solution abstractions? They certainly do with Connect. We didn't hear a ton of that this show. >> Interestingly. (Dave speaking indistinctly) So the article that you and John Furrier wrote after meeting with Adam, the thing that caught my eye is discussion of community and ecosystems. And one of the things coming after, some, big communities out there like, you and I lived through the VMware ecosystem in that very tight community. There are forming little areas of community here in this group, but it's not a single cloud community. There are those focus areas that they have. And I do love to see, I mean, obviously working for Red Hat, talking about the ecosystem support. I was very happy to hear Adam mention Red Hat in the keynote as one of the key hybrid partners there. So, for Amazon to get from the 60 million, the 60 billion to the trillion dollar mark down the road, it's going to take a village and we're happy to be a part of it. >> Hey, great to have you back, enjoy the rest of the show. This is, let's see, day three, we're wrapping up. We're here again tomorrow so check it out. Special thanks to obviously AWS is our anchor sponsor and of course, AMD for sponsoring the editorial segments of our event. You're watching theCUBE, the leader in tech coverage. See you tomorrow. (bright upbeat music)

Published Date : Dec 2 2021

SUMMARY :

One of the few physical events and the services, but and to this day, just very and managing the it is the right strategy. It is the public cloud and and extending the cloud the edge is that cloud Let's say, yes, that's cloud. the cloud to process stuff. If it's in the AWS cloud So there are things that you have to think about okay. and Amazon will roll things out over time be is the delta between, The services are the same, Many of the things when they go outside because it's no longer the bottleneck and the use case. that span globes and and allowing people to access that many of the abstraction So the last 10 years was a lot of, So, one of the interesting trends, So I guess to the point of the 60 billion to the trillion enjoy the rest of the show.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
AmazonORGANIZATION

0.99+

DavePERSON

0.99+

Corey QuinnPERSON

0.99+

Dave VellantePERSON

0.99+

Adam SelipskyPERSON

0.99+

Jeff ClarkPERSON

0.99+

NASDAQORGANIZATION

0.99+

John FurrierPERSON

0.99+

IBMORGANIZATION

0.99+

2017DATE

0.99+

MicrosoftORGANIZATION

0.99+

James HamiltonPERSON

0.99+

GermanyLOCATION

0.99+

Stu MinimanPERSON

0.99+

Andy JassyPERSON

0.99+

Goldman SachsORGANIZATION

0.99+

AndyPERSON

0.99+

AdamPERSON

0.99+

JohnPERSON

0.99+

50QUANTITY

0.99+

$500 billionQUANTITY

0.99+

DellORGANIZATION

0.99+

GoogleORGANIZATION

0.99+

AWSORGANIZATION

0.99+

120 billionQUANTITY

0.99+

David FoyerPERSON

0.99+

OracleORGANIZATION

0.99+

Last yearDATE

0.99+

2019DATE

0.99+

35%QUANTITY

0.99+

2XQUANTITY

0.99+

2018DATE

0.99+

60 billionQUANTITY

0.99+

60 millionQUANTITY

0.99+

75,000QUANTITY

0.99+

DCLOCATION

0.99+

StuPERSON

0.99+

4XQUANTITY

0.99+

10 yearsQUANTITY

0.99+

10th yearQUANTITY

0.99+

tomorrowDATE

0.99+

todayDATE

0.99+

AMDORGANIZATION

0.99+

more than five yearsQUANTITY

0.99+

Red HatORGANIZATION

0.99+

trillion dollarQUANTITY

0.99+

telcoORGANIZATION

0.99+

oneQUANTITY

0.99+

JerryPERSON

0.99+

Juan Loaiza, Oracle | CUBE Conversation 2021


 

(upbeat music) >> The innovation around databases has exploded over the last few years. Not only do organizations continue to rely on database technology to manage their most mission critical business data. But new use cases have emerged that process and analyze unstructured data. They share data at scale, protect data, provide greater heterogeneity. New technologies are being injected into the database equation. Not just cloud which has been a huge force in the space, but also AI to drive better insights and automation, blockchain to protect data and provide better auditability, new file formats to expand the utility of database technology and more. Debates are bound as to who's the best number one, the fastest, the most cloudy, the least expensive, et cetera. But there is no debate, when it comes to leadership and mission critical database technologies. That status goes to Oracle. And with me to talk about the developments of database technology in the market is cube alum Juan Loaiza, who's executive vice president of Mission Critical Database Technology at Oracle. Juan always great to see you, thanks for making some time. >> Thanks, great to see you Dave, always a pleasure to join you. >> Yeah and I hope you have some time because they've got a lot of questions for you. (chuckles) I want to start with- >> All right I love questions. >> Good I want to start and we'll go deep if you're up for it. I want to start with the GoldenGate announcement. We're covering that recent announcement, the service on OCI. GoldenGate it's part of this your super high availability capabilities that Oracle is so well known for. What do we need to know about the new service and what it brings for your customers? >> Yeah, so first of all, GoldenGate is all about creating real time data throughout an enterprise. So it does replication, data integration, moving data into analytic workloads, streaming analytics of data, migrating of databases and making databases highly available. All those are use cases for real-time data movement. And GoldenGate is really the leading product in the market, has been for many years. We have about 80% of the global fortune 500 running GoldenGate today, in addition to thousands and thousands of smaller customers. So it is the premier data integration, replication, high availability, anything involving moving data in real time, GoldenGate is the premier platform. And so we've had that available as a product for many years. And what we just recently done is we've released it as a cloud service, as a fully managed and automated cloud service. So that's kind of the big new thing that's happening right now. >> So is that what's unique about this, is it's now a service, or there are other attributes that are unique to Oracle? >> Yeah, so the service is kind of the most basic part to it. But the big thing about the service is it makes this product dramatically easier to use. So traditionally the data integration, replication products, although very powerful, also are very complex to use. And one of the big benefits of the service is we've made a dramatically simpler. So not just super experts can use it, but anyone can use it. And also as part of releasing it as a cloud service, we've done a number of unique things including making it completely elastically scalable, pay per use and dynamic scalability. So just in time, real time scalability. So as your workload increases we automatically increase the throughput of GoldenGate. So previously you had to figure all this stuff out ahead of time. It was very static. All these products have been very static. Now it's completely dynamic a native cloud product and that's very unique in the market. >> So, I mean, from an availability standpoint, I guess IBM sort of has this with Db2 but it doesn't offer the heterogeneity that GoldenGate has. But at what about like AWS, Microsoft, Google, do they provide services like, like GoldenGate? >> There's really nothing like the GoldenGate service. When you're talking about people like Google and Azure, they really have do it yourself third-party products. So there'll be a third party data integration replication product, and it's kind of available in their marketplace and customers have to do everything. So it's basically a put it together, your own kit. And it's very complicated. I mean these data integration products have always been complicated, and they're even more complicated in the cloud, if you have to do everything yourself. Amazon has a product but it's really focused on basic data migration to their cloud. It doesn't have the same capabilities as Oracle has. It doesn't have the elasticity, it doesn't have pay peruse, so it's really not very clavy at all. >> Well, so I mean the biggest customers have always glommed onto GoldenGate because they need that super ultra high availability. And they're capable of do it yourself. So, tell us how this compares to two DIY. >> Yeah, so you have mentioned the big customers so you're absolutely right. The big customers have been big users of GoldenGate. Smaller customers or users as well, however, it's been challenging because it's complicated. Data integration has been a complicated area of data management. More and most complicated. And so one of the things this does, is that it expands the market. Makes it much dramatically easier for smaller companies that don't have as many it resources to use the product. Also, smaller companies obviously don't have as much data as the really large giants. So they don't have as much data throughput. So traditionally the price has been high for a small customer. But now, with pay per use in the cloud, it eliminates the two big blockers for smaller enterprises. Which are the costs, the high fixed costs and the complexity of the products. So in which, by the way, it's helpful for everyone also. And for big customers they've also struggled with elasticity. So sometimes a huge batch job will kick in, the rate of change increases and suddenly the replication product doesn't keep up. Because on-prem products aren't really very elastic. So it helps large customers as well. Everybody loves these reviews but the elasticity pay per use, on demand nature of it's really helpful for everybody. >> Well, and because it's delivered as a service I would imagine for the large customers that you're giving them more granularity, so they can apply it maybe for a single application, as opposed to trying to have to justify it across a whole suite. And because the cost is higher, but now if you're allowing me to pay by the drink, is that right? I could just sort of apply it in a more granular level. >> Yes, that's exactly right. It's really pay per use. You can use it as much or as little as you want. You just pay for what you use. And as I mentioned, it's not a static payment either. So if you have a lot of data loads going on and right now you pay a little more, at night when you have less going on, you pay a lot less. So you really just paying for what use. It's very easy to set it up for a single application or all your applications. >> How about for things like continuous replication or real-time analytics, is the service designed to support that? >> Yes, so that's the heritage of GoldenGate. GoldenGate has been around for decades and we've worked with some of the most demanding customers in the world on exactly those things. So real time data all over the enterprise is really the goal that everyone wants. Real-time data from OTP and to analytics, from one system to another system, and for availability. That is the key benefit of GoldenGate. And that's the key technology that we've been working on for decades. And now we have it very easy to use in the cloud. >> Well what would be the overheads associated with that? I mean, for instance, you've go it, you need a second copy. You need the other database copies, and where does it make sense to incur that overhead? Obviously the super high availability apps that can exploit real time. Think like fraud detection is the obvious one, but what else can you add there? >> Well, GoldenGate itself doesn't require any extra copies of anything. However, it does enable customers that want to create for example, an analytics system, a data warehouse, to feed data from all their systems in real time into that data warehouse for example. And it also enables the real-time capabilities, enable high availability and you can get high availability within the cloud with it, between on premises in the cloud, between clouds. Also, you can migrate data. Migrate databases without having to take them down. So all these capabilities are available now and they're very easy to use. >> Okay. Thanks for that clarification. What about autonomous? Is that on the roadmap or what you thinking? >> Yeah, the GoldenGate is essentially an autonomous service. And it works with the Oracle Autonomous Database. So you can both use it as a source for data and as a sink for data, as a place you're writing data. So for example, you can have an autonomous OTP database, that's replicating to another autonomous OTP database in real time. And both of them are replicating changes to the autonomous data warehouse. But it doesn't all have to be autonomous. You can have any mix of, autonomous not autonomous, on-prem in cloud, in anybody's cloud. So that's the beauty of GoldenGate, It's extremely flexible. >> Well, you mentioned the plasticity a couple of times. I mean, why is that so important that that GoldenGate on OCI gives you that elastic, whatever billing the auto-scaling talk, talk to me in terms of what that does for the customer. >> Yeah, there's really two big benefits. One benefit is it's very difficult to predict workloads. So normally on an on-prem configuration, you have to say, okay what is the max possible workload that's going to happen here? And then you have to buy the product, configure the product, get hardware, basically size, everything for that. And then if you guess wrong, you're either spending too much because you oversized it or you have a big data real-time problem. The data can't keep up with the real-time because you've undersized the configuration. So that's hard to do. So the beauty of elasticity and the dynamic elasticity, the pay per use, is you don't have to figure all this stuff out. So if you have more workload, we grow it automatically. If you have less workload, we shrink it automatically. And you don't have to guess ahead of time. You don't have to price ahead of time. So you, you just use what, what you use, right? You don't pay for something that you're not using. So it's a very big change in the whole model of how you use these data, replication, integration, high availability technologies. >> Well, I think I'm correct to say GoldenGate primarily has been for big companies. You mentioned that small companies can now take advantage of this service. We talked about the granularity. And I could definitely see, can they afford it? I guess this is part one and then, and then the other part of the question is, I can see GoldenGate really satisfying your on-prem customers and them taking advantage of it, but do you think this will attract new customers beyond your core? So two part question there. >> Yeah, absolutely. So small customers have been challenged by the complexity of data integration. And that's one of the great things about the cloud services is it's dramatically simpler. So Oracle manages everything. Oracle does the patching, the upgrades. Oracle does the monitoring. It takes care of the high availability of the product. So all that management, complexity, all the configuration set up, everything like that, that's all automated, that's owned by Oracle. So small customers were always challenged by the complexity of product, along with everything else that they had to do. And then the other of course benefit is small customers were challenged by the large fixed price. So now with pay per use, they pay only for what they use. It's really usable by easily by small customers also. So it really expands the market and makes it more broadly applicable. >> So kind of same answer for beyond your existing customer base, beyond the on-prem that that's kind of... You answered >> Right. >> my two part question with one answer, so that was pretty efficient, (chuckles) pun intended. So the bottom line for me and squinting through this announcement is you've got the heterogeneity piece with GoldenGate OCI and as such it's going to give you the capability to create what I'll call an architecturally coherent decentralized data mesh. Big on this data mesh these days, could have decentralized data. With the proviso then I going to be able to connect to OCI, which of course you can do with Azure or I guess you could bring cloud to a customer on prem, first of all, is this correct? And can we expect you over time to do this with AWS or other cloud providers? >> It can move data from Amazon or to Amazon. It can actually handle, any data wherever it lives. So, yeah, it's very flexible and it's really just the automation of all the management, that we're running in our public cloud But the data can be from anywhere to anywhere. >> Cool, all right, let's switch topics here a little bit. Just talk about some of the things that you've been working on, some of the innovation. I sat through your blockchain announcement, it was very cool. Of course I love anything blockchain and crypto, NFTs are exploding, so that Coinbase IPO. It's just really an exciting time out there. I think a lot of people don't really appreciate the innovation that's occurring. So you've been making a lot of big announcements last several months. You've been taking your R and D bringing it into product, So that's great, we love to always see that because that's where really the rubber meets the road. Just for the database side of the house, you announced 21c the next generation of the self-driving data warehouse, ADW, blockchain tables, now you got GoldenGate running on OCI. Take us inside the development organizations. What are the underlying drivers other than your boss. >> When we talk about our autonomous database, it is the mission critical Oracle database, but it's dramatically easier to do. So Oracle does all the management all on automation, but also we use machine learning to tune, and to make it highly available, and to make it highly secure. So that that's been one of our biggest products we've been working on for many years. And recently we enhanced our autonomous data warehouse taking it beyond being a data warehouse to complete a data analytics platform. So it includes things like ETL. So we built ETL into the autonomous data warehouse. We're building our GoldenGate replication into autonomous data warehousing. We built machine learning directly natively into the database. So now, if someone wants to run some machine learning they just run a machine learning queries. They no longer have to stand up a separate system. So a big move that we've been making is, taking it beyond just a database to a full analytic platform. And this goes beyond what anyone else in the industry is doing, because we have a lot more technology. So for example, the ML machine learning directly in the database, the ETL directly in the database. The data replication is directly in the database. All these things are very unique to Oracle. And they dramatically simplify for customers how they manage data. In addition to that, we've also been working in our database product. We've enhanced it tremendously. So our big goal there is to provide what we call it converged database. So everything you need, all the data types. Whether it's JSON, relational, spatial, graph, all that different kinds of data types, all the different kinds of workloads. Analytics, OTP, things like blockchain, microservices events, all built into the Oracle database, making it dramatically easier to both develop and deploy new applications. So those are some of our big, big goals. Make it simple, make it integrated. Take the complexity, we'll take on the complexity. So developers and customers find it easy to develop an easy to use. And we've made huge strides in all these areas in the last couple of years. >> That's awesome. I wonder if we could land on blockchain again for now it's kind of jogging, but sort of on crypto. Though you're not about crypto but you are about applying blockchain. Maybe you can help our audience understand what are some of the real use cases where blockchain tech can be used with Oracle database. >> Yeah, so that's a very interesting topic. As you mentioned, blockchain is very currently, we see a lot of cryptocurrencies. I distributed applications for blockchain. So in general, in the past, we've had two worlds. We've had the enterprise data management world and we've had the blockchain world. And these are very distinct, right? And on the blockchain side the applications have mostly centered around, distributed multi-party applications, right? So where you have multiple parties that all want to reach consensus and then that consensus is stored in a blockchain. So that's kind of been the focus of blockchain. And what we've done is very innovative. We're the first company to ever do this. Is we've taken the core architecture, ideas. And really a lot of it has to do with the cryptography of blockchain. And we've built, we've engineered that natively into the mainstream Oracle database. So now in mainstream Oracle database, we have blockchain technology built in. And it's very dramatically simpler to use. And the use cases, you asked about the use case, that's what we've done. And it's taken us about five years to do this. Now it's been released into the market in our mainstream 19c Oracle database. So the use case is different from the conventional blockchain use case. Which I mentioned was really multi-party consensus based apps. We're trying to make blockchain useful for mainstream, enterprise and government applications. So any kind of mainstream government application, or enterprise application. And that idea of blockchain, the core concept of blockchain, is it addresses a different kind of security problem. So when you look at conventional security, it's really trying to keep people out. So we have things like firewalls, passwords, networking cryption, data encryption. It's all about keeping bad people out of the data. And there's really two big problems that it doesn't address well. One problem is that there's always new security exploits being published. So you have hackers out there that are working overtime. Sometimes they're nation States that are trying to attack data providers. And every week, every month there's a new security exploit that's discovered and this happens all the time. So that's one big problem. So we're building up these elaborate walls of protection around our core data assets. And in the meantime, we have basically barbarians attacking on every side.(chuckles) And every once in a while, they get over the walls and this is just what's happening. So that's one big problem. And the second big problem is elicit changes made by people with credentials. So sometimes you have an insider in your, in your company. Whether it's an administrator or a sales person, a support person, that has valid credentials, but then uses those valid credentials in some illicit way. They go out and change somebody's data for their own gain. And even more common than that cause there's not that many bad guys inside the company to they exist, is stolen credentials. So what's happened in many cases is hackers or nation States will steal for example, administrative credentials and then use those administrative credentials to come into a system and steal data. So that's the kind of problem that is not well addressed by security mechanism. So if you have privileges security mechanism says, yeah you're fine. If somebody steals your privileges, again you get the pass through the gate. And so what we've done with blockchain is we've taken the cryptography elements of blockchain. We call it crypto secure data management. And we've built those into the Oracle database. So think of it this way. If someone actually makes it through over the walls that we built, and in into the core data, what we've done with that cryptographic technology of blockchain, is we've made that immutable. So you can't change it. So even if you make it over the gate you can't get into the core data assets and change those assets. And that's not built into Oracle databases is super easy to adopt. And I think it's going to really enhance and expand the community of people that can actually use that blockchain technology. >> I mean, that's awesome. I could talk all day about blockchain. And I mean, when you think about hackers, it's all there. They're all about ROI, value over cost. And if you can increase the denominator they're going to go somewhere else, right? Because the value will will decline. And this is really the intersection of software engineering cryptography. And I guess even when you bring crypto currency into it, it's like sort of the game theory. That's really kind of not what you're all about, but the first two pieces are really critical in terms of just next generation of raising that security hurdle. Love it. Now, go ahead. >> Yeah it's a different approach. I was just going to say, it's a different approach. Because think about trying to keep people out with things like passwords and firewalls, you can have basically bugs in that software that allow people to exploit and get in. When you're talking about cryptography, that's math, it's very difficult. I mean, you really can't fight pass math. Once the data is cryptographically protected on a blockchain, a hacker can't really do anything with that. It's just, math is math. There's nothing you can do to break it, right. It's very different from trying to get through some algorithm. That's really trying to keep you out. >> Awesome. I said, I could talk forever on this topic. But let me, let me go into some competitive dynamics. You recently announced Autonomous Data Warehouse. You've got service capabilities that are really trying to appeal to the line of business. I want to get your take on that announcement and specifically how you think it compares name names. I'm going to name names you don't have to. But Snowflake, obviously a lot of momentum in the marketplace. AWS with Redshift is doing very, very well. Obviously there are others. But those are two prominent ones that we've tracked in our data shows that have momentum. How do you compare? >> Yeah, so there's a number of different ways to look at the comparison. So the most simplest and straightforward is there's a lot more functionality in Oracle data warehousing. Oracle has been doing this for decades. We have a lot of built-in functionality. For example, machine learning natively built into the database makes it super easy to use. We have mixed workloads, we have spatial capabilities. We have graph capabilities. We have JSON capabilities. We have a microservice capabilities. We have-- So there's a lot more capabilities. So that's number one. Number two, our cloud service is dramatically more elastic. So with our cloud service all you really do, is you basically move the slide. You say hey, I want more resources, I want less resources. In fact, we'll do that automatically, that's called auto-scaling. In contrast when you look at people like Snowflake or Redshift they want you to stand up a new cluster. Hey you have some more workload on Monday, stand up another cluster and then we'll have two sets of clusters or maybe you'd want a third cluster, maybe you want a fourth cluster. So you end up with all these different systems which is how they scale. They say, hey, I can have multiple sets of servers access the same data. With Oracle you don't have to even think about those things. We auto scale, you get more workload. We just give it more resources. You don't even have to think about that. And then the other thing is we're looking at the whole data management end to end problem. So starting with capturing the data, moving the data in real time, transforming the data, loading the data, running machine learning and analytics on the data. Putting all kinds of data in a single place that you can do analytics on all of it together. And then having very rich screen capabilities for viewing the data, graphing the data, modeling the data, all those things. So it's all integrated. It makes it super easy to use. So a much easier, much more functionality and much more elastic than any of our competitors in the market. >> Interesting, thank you for those comments. I mean, it's a different world, right? I mean, you guys got all the market share, they got all the growth, those things over time, you've been around, you see it, they come together and you fight it out and may the best approach wins. >> So we'll be watching >> Yeah also I forgot to mention the obvious thing, which is Oracle runs everywhere. So you can run Oracle on premises. You can run Oracle on the public cloud. You can run what we call cloud at customer. Our competitors really are just public cloud only. So you customers don't get the choice of where they want to run their data warehouse. >> Now Juan a while ago I sat down with David foyer and Mark steamer. We reviewed how Gartner looks at the marketplace and it wasn't surprise that when it came to operational workloads, Oracle stood out. I mean, that's kind of an understatement relative to the major competitors. Most of our viewers, I don't think expected for instance Microsoft or AWS to be that far away from you. But at the same time, the database magic quadrant maybe didn't reflect that gap as widely. So there's some dissonance there with the detailed workload drill downs were dramatic. And I wonder what your take on the results. I mean, obviously you're happy with them. You came out leading in virtually every category or you will one and two, and some of that sort of not even non-mission critical operational stuff. But what can you add to my narrative there? >> Yeah, so Gartner, first of all, we're talking about cloud databases. >> Right. >> Right, so this is not on premises databases this is pure cloud databases. And what they did is they did two things. One is, the main thing was a technical rating of the databases, of the cloud databases. And, there's other vendors that have been had database in the cloud for longer than we have. But in the most recent Gartner analysis report, as you mentioned, Oracle came out on top for cloud database technology, in almost every single operational use case including things like Internet of Things, things like JSON data, variable data, analytics as well as a traditional OTP and mixed workloads. So Oracle was rated the highest technology which isn't a big surprise. We've been doing this for decades. Over 90% of the global fortune 500 run Oracle. And there's a reason, because this is what we're good at. This our core strength. Our availability, our security, our scalability, our functionality, both for OTP and analytics. All the capabilities, built-in machine learning, graph analytics, everything. So even when we compare narrowly things like Internet of Things or variable data against niche competitors that that's what all they do. We came up dramatically ahead. But what surprised a lot of people is how far ahead of some of the other cloud vendors like Amazon, like Azure, like Google, Oracle came out ahead in the cloud database category. So a lot of people think, well, some of these other pure cloud vendors must be ahead of Oracle in cloud database. But actually not. I mean, if you look at the Gartner analyst report, it was very clear. It was Oracle was dramatically ahead of their cloud database technologies with our cloud database. >> So I'm pretty much out of time but last question. I've had some interesting discussions lately and we've pointed out for years in our research that of course you're delivering the entire stack, the database, part of the infrastructure the applications, you have the whole engineered system strategy. And for the most part you're kind of unique in this regard. I mean, Dell just announced that it's spinning off VMware and it could have gone the other direction. And become more integrated hardware and software player, for the data center. But look, it's working for Dell based on the reaction, from the street post announcement. Cisco they got a hardware and software model that's sort of integrated but the company's value that peaked back in the .com boom, it's been very slow to bounce back. But my point is for these companies the street doesn't value, the integrated model. Oracle is kind of the exception. You know, it's at trading at all time highs, I know you're not going to comment on the stock price, but I guess in SAP until it missed it guided conservatively, was kind of on the good trajectory. But so I'm wondering, why do you think Oracle strategy resonates with investors, but not so much those companies? Is it, because you have the applications piece? I mean, maybe that's kind of my premise for, for SAP but what's your take? Why is it working for you? >> Well, okay. I think it's pretty simple, which is some of our competitors, for example, they might have a software product and a hardware product. But mostly those are acquired in their separate products that just happen to be in a portfolio. They are not a single company with a single vision and joint engineering going on. It's really, hey, I got the software on over here. I got the hardware over there, but they don't really talk to each other, they don't really work together. They're not trying to develop something where the stack is actually not just integrated but engineered together. And that is really the key. Oracle focuses on data management top to bottom. So we have everything from our ERP, CRM applications talking to our database, talking to our engineered systems, running in our cloud. And it's all completely engineered together. So Oracle doesn't just acquire these things and kind of glue them together. We actually engineer them and that's fundamentally the difference. You can buy two things and have them as two separate divisions in your company but it doesn't really get you a whole lot. >> Juan it's always a pleasure, I love these conversations and hope we can do more in the future. Really appreciate your time. Thanks for coming to the CUBE >> Pleasure, Dave nice to talk to you. >> All right keep it right there, everybody. This is Dave Vellante for theCUBE, we'll see you next time. (upbeat musiC)

Published Date : Apr 21 2021

SUMMARY :

of database technology in the market Thanks, great to see you Dave, Yeah and I hope you have some time about the new service So that's kind of the big new thing of the most basic part to it. but it doesn't offer the complicated in the cloud, Well, so I mean the biggest customers And so one of the things this does, And because the cost is higher, So if you have a lot And that's the key technology is the obvious one, And it also enables the Is that on the roadmap So that's the beauty of GoldenGate, that does for the customer. the pay per use, is you don't have of the question is, I can see GoldenGate So it really expands the market beyond the on-prem that that's kind of... So the bottom line for me and it's really just the of the self-driving data So for example, the ML but you are about applying blockchain. And the use cases, you of the game theory. Once the data is in the marketplace. So the most simplest and straightforward may the best approach wins. You can run Oracle on the public cloud. But at the same time, the Yeah, so Gartner, first of all, of the databases, of the cloud databases. And for the most part you're And that is really the key. Thanks for coming to the CUBE theCUBE, we'll see you next time.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
AmazonORGANIZATION

0.99+

Dave VellantePERSON

0.99+

Juan LoaizaPERSON

0.99+

CiscoORGANIZATION

0.99+

MicrosoftORGANIZATION

0.99+

DavePERSON

0.99+

JuanPERSON

0.99+

OracleORGANIZATION

0.99+

GoogleORGANIZATION

0.99+

DellORGANIZATION

0.99+

AWSORGANIZATION

0.99+

IBMORGANIZATION

0.99+

thousandsQUANTITY

0.99+

MondayDATE

0.99+

two thingsQUANTITY

0.99+

One problemQUANTITY

0.99+

Mark steamerPERSON

0.99+

One benefitQUANTITY

0.99+

GartnerORGANIZATION

0.99+

OCIORGANIZATION

0.99+

fourth clusterQUANTITY

0.99+

OneQUANTITY

0.99+

twoQUANTITY

0.99+

bothQUANTITY

0.99+

oneQUANTITY

0.99+

one answerQUANTITY

0.99+

third clusterQUANTITY

0.99+

one big problemQUANTITY

0.99+

two big problemsQUANTITY

0.99+

two setsQUANTITY

0.99+

CoinbaseORGANIZATION

0.99+

two partQUANTITY

0.99+

about five yearsQUANTITY

0.98+

two big benefitsQUANTITY

0.98+

first companyQUANTITY

0.97+

two separate divisionsQUANTITY

0.97+

Over 90%QUANTITY

0.97+

GoldenGateORGANIZATION

0.97+

second copyQUANTITY

0.97+

David foyerPERSON

0.97+

first two piecesQUANTITY

0.96+

singleQUANTITY

0.96+

two big blockersQUANTITY

0.96+

single applicationQUANTITY

0.96+

Breaking Analysis: Moore's Law is Accelerating and AI is Ready to Explode


 

>> From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is breaking analysis with Dave Vellante. >> Moore's Law is dead, right? Think again. Massive improvements in processing power combined with data and AI will completely change the way we think about designing hardware, writing software and applying technology to businesses. Every industry will be disrupted. You hear that all the time. Well, it's absolutely true and we're going to explain why and what it all means. Hello everyone, and welcome to this week's Wikibon Cube Insights powered by ETR. In this breaking analysis, we're going to unveil some new data that suggests we're entering a new era of innovation that will be powered by cheap processing capabilities that AI will exploit. We'll also tell you where the new bottlenecks will emerge and what this means for system architectures and industry transformations in the coming decade. Moore's Law is dead, you say? We must have heard that hundreds, if not, thousands of times in the past decade. EE Times has written about it, MIT Technology Review, CNET, and even industry associations that have lived by Moore's Law. But our friend Patrick Moorhead got it right when he said, "Moore's Law, by the strictest definition of doubling chip densities every two years, isn't happening anymore." And you know what, that's true. He's absolutely correct. And he couched that statement by saying by the strict definition. And he did that for a reason, because he's smart enough to know that the chip industry are masters at doing work arounds. Here's proof that the death of Moore's Law by its strictest definition is largely irrelevant. My colleague, David Foyer and I were hard at work this week and here's the result. The fact is that the historical outcome of Moore's Law is actually accelerating and in quite dramatically. This graphic digs into the progression of Apple's SoC, system on chip developments from the A9 and culminating with the A14, 15 nanometer bionic system on a chip. The vertical axis shows operations per second and the horizontal axis shows time for three processor types. The CPU which we measure here in terahertz, that's the blue line which you can't even hardly see, the GPU which is the orange that's measured in trillions of floating point operations per second and then the NPU, the neural processing unit and that's measured in trillions of operations per second which is that exploding gray area. Now, historically, we always rushed out to buy the latest and greatest PC, because the newer models had faster cycles or more gigahertz. Moore's Law would double that performance every 24 months. Now that equates to about 40% annually. CPU performance is now moderated. That growth is now down to roughly 30% annual improvements. So technically speaking, Moore's Law as we know it was dead. But combined, if you look at the improvements in Apple's SoC since 2015, they've been on a pace that's higher than 118% annually. And it's even higher than that, because the actual figure for these three processor types we're not even counting the impact of DSPs and accelerator components of Apple system on a chip. It would push this even higher. Apple's A14 which is shown in the right hand side here is quite amazing. It's got a 64 bit architecture, it's got many, many cores. It's got a number of alternative processor types. But the important thing is what you can do with all this processing power. In an iPhone, the types of AI that we show here that continue to evolve, facial recognition, speech, natural language processing, rendering videos, helping the hearing impaired and eventually bringing augmented reality to the palm of your hand. It's quite incredible. So what does this mean for other parts of the IT stack? Well, we recently reported Satya Nadella's epic quote that "We've now reached peak centralization." So this graphic paints a picture that was quite telling. We just shared the processing powers exploding. The costs consequently are dropping like a rock. Apple's A14 cost the company approximately 50 bucks per chip. Arm at its v9 announcement said that it will have chips that can go into refrigerators. These chips are going to optimize energy usage and save 10% annually on your power consumption. They said, this chip will cost a buck, a dollar to shave 10% of your refrigerator electricity bill. It's just astounding. But look at where the expensive bottlenecks are, it's networks and it's storage. So what does this mean? Well, it means the processing is going to get pushed to the edge, i.e., wherever the data is born. Storage and networking are going to become increasingly distributed and decentralized. Now with custom silicon and all that processing power placed throughout the system, an AI is going to be embedded into software, into hardware and it's going to optimize a workloads for latency, performance, bandwidth, and security. And remember, most of that data, 99% is going to stay at the edge. And we love to use Tesla as an example. The vast majority of data that a Tesla car creates is never going to go back to the cloud. Most of it doesn't even get persisted. I think Tesla saves like five minutes of data. But some data will connect occasionally back to the cloud to train AI models and we're going to come back to that. But this picture says if you're a hardware company, you'd better start thinking about how to take advantage of that blue line that's exploding, Cisco. Cisco is already designing its own chips. But Dell, HPE, who kind of does maybe used to do a lot of its own custom silicon, but Pure Storage, NetApp, I mean, the list goes on and on and on either you're going to get start designing custom silicon or you're going to get disrupted in our view. AWS, Google and Microsoft are all doing it for a reason as is IBM and to Sarbjeet Johal said recently this is not your grandfather's semiconductor business. And if you're a software engineer, you're going to be writing applications that take advantage of all the data being collected and bringing to bear this processing power that we're talking about to create new capabilities like we've never seen it before. So let's get into that a little bit and dig into AI. You can think of AI as the superset. Just as an aside, interestingly in his book, "Seeing Digital", author David Moschella says, there's nothing artificial about this. He uses the term machine intelligence, instead of artificial intelligence and says that there's nothing artificial about machine intelligence just like there's nothing artificial about the strength of a tractor. It's a nuance, but it's kind of interesting, nonetheless, words matter. We hear a lot about machine learning and deep learning and think of them as subsets of AI. Machine learning applies algorithms and code to data to get "smarter", make better models, for example, that can lead to augmented intelligence and help humans make better decisions. These models improve as they get more data and are iterated over time. Now deep learning is a more advanced type of machine learning. It uses more complex math. But the point that we want to make here is that today much of the activity in AI is around building and training models. And this is mostly happening in the cloud. But we think AI inference will bring the most exciting innovations in the coming years. Inference is the deployment of that model that we were just talking about, taking real time data from sensors, processing that data locally and then applying that training that has been developed in the cloud and making micro adjustments in real time. So let's take an example. Again, we love Tesla examples. Think about an algorithm that optimizes the performance and safety of a car on a turn, the model take data on friction, road condition, angles of the tires, the tire wear, the tire pressure, all this data, and it keeps testing and iterating, testing and iterating, testing iterating that model until it's ready to be deployed. And then the intelligence, all this intelligence goes into an inference engine which is a chip that goes into a car and gets data from sensors and makes these micro adjustments in real time on steering and braking and the like. Now, as you said before, Tesla persist the data for very short time, because there's so much of it. It just can't push it back to the cloud. But it can now ever selectively store certain data if it needs to, and then send back that data to the cloud to further train them all. Let's say for instance, an animal runs into the road during slick conditions, Tesla wants to grab that data, because they notice that there's a lot of accidents in New England in certain months. And maybe Tesla takes that snapshot and sends it back to the cloud and combines it with other data and maybe other parts of the country or other regions of New England and it perfects that model further to improve safety. This is just one example of thousands and thousands that are going to further develop in the coming decade. I want to talk about how we see this evolving over time. Inference is where we think the value is. That's where the rubber meets the road, so to speak, based on the previous example. Now this conceptual chart shows the percent of spend over time on modeling versus inference. And you can see some of the applications that get attention today and how these applications will mature over time as inference becomes more and more mainstream, the opportunities for AI inference at the edge and in IOT are enormous. And we think that over time, 95% of that spending is going to go to inference where it's probably only 5% today. Now today's modeling workloads are pretty prevalent and things like fraud, adtech, weather, pricing, recommendation engines, and those kinds of things, and now those will keep getting better and better and better over time. Now in the middle here, we show the industries which are all going to be transformed by these trends. Now, one of the point that Moschella had made in his book, he kind of explains why historically vertically industries are pretty stovepiped, they have their own stack, sales and marketing and engineering and supply chains, et cetera, and experts within those industries tend to stay within those industries and they're largely insulated from disruption from other industries, maybe unless they were part of a supply chain. But today, you see all kinds of cross industry activity. Amazon entering grocery, entering media. Apple in finance and potentially getting into EV. Tesla, eyeing insurance. There are many, many, many examples of tech giants who are crossing traditional industry boundaries. And the reason is because of data. They have the data. And they're applying machine intelligence to that data and improving. Auto manufacturers, for example, over time they're going to have better data than insurance companies. DeFi, decentralized finance platforms going to use the blockchain and they're continuing to improve. Blockchain today is not great performance, it's very overhead intensive all that encryption. But as they take advantage of this new processing power and better software and AI, it could very well disrupt traditional payment systems. And again, so many examples here. But what I want to do now is dig into enterprise AI a bit. And just a quick reminder, we showed this last week in our Armv9 post. This is data from ETR. The vertical axis is net score. That's a measure of spending momentum. The horizontal axis is market share or pervasiveness in the dataset. The red line at 40% is like a subjective anchor that we use. Anything above 40% we think is really good. Machine learning and AI is the number one area of spending velocity and has been for awhile. RPA is right there. Very frankly, it's an adjacency to AI and you could even argue. So it's cloud where all the ML action is taking place today. But that will change, we think, as we just described, because data's going to get pushed to the edge. And this chart will show you some of the vendors in that space. These are the companies that CIOs and IT buyers associate with their AI and machine learning spend. So it's the same XY graph, spending velocity by market share on the horizontal axis. Microsoft, AWS, Google, of course, the big cloud guys they dominate AI and machine learning. Facebook's not on here. Facebook's got great AI as well, but it's not enterprise tech spending. These cloud companies they have the tooling, they have the data, they have the scale and as we said, lots of modeling is going on today, but this is going to increasingly be pushed into remote AI inference engines that will have massive processing capabilities collectively. So we're moving away from that peak centralization as Satya Nadella described. You see Databricks on here. They're seen as an AI leader. SparkCognition, they're off the charts, literally, in the upper left. They have extremely high net score albeit with a small sample. They apply machine learning to massive data sets. DataRobot does automated AI. They're super high in the y-axis. Dataiku, they help create machine learning based apps. C3.ai, you're hearing a lot more about them. Tom Siebel's involved in that company. It's an enterprise AI firm, hear a lot of ads now doing AI and responsible way really kind of enterprise AI that's sort of always been IBM. IBM Watson's calling card. There's SAP with Leonardo. Salesforce with Einstein. Again, IBM Watson is right there just at the 40% line. You see Oracle is there as well. They're embedding automated and tele or machine intelligence with their self-driving database they call it that sort of machine intelligence in the database. You see Adobe there. So a lot of typical enterprise company names. And the point is that these software companies they're all embedding AI into their offerings. So if you're an incumbent company and you're trying not to get disrupted, the good news is you can buy AI from these software companies. You don't have to build it. You don't have to be an expert at AI. The hard part is going to be how and where to apply AI. And the simplest answer there is follow the data. There's so much more to the story, but we just have to leave it there for now and I want to summarize. We have been pounding the table that the post x86 era is here. It's a function of volume. Arm volumes are a way for volumes are 10X those of x86. Pat Gelsinger understands this. That's why he made that big announcement. He's trying to transform the company. The importance of volume in terms of lowering the cost of semiconductors it can't be understated. And today, we've quantified something that we haven't really seen much of and really haven't seen before. And that's that the actual performance improvements that we're seeing in processing today are far outstripping anything we've seen before, forget Moore's Law being dead that's irrelevant. The original finding is being blown away this decade and who knows with quantum computing what the future holds. This is a fundamental enabler of AI applications. And this is most often the case the innovation is coming from the consumer use cases first. Apple continues to lead the way. And Apple's integrated hardware and software model we think increasingly is going to move into the enterprise mindset. Clearly the cloud vendors are moving in this direction, building their own custom silicon and doing really that deep integration. You see this with Oracle who kind of really a good example of the iPhone for the enterprise, if you will. It just makes sense that optimizing hardware and software together is going to gain momentum, because there's so much opportunity for customization in chips as we discussed last week with Arm's announcement, especially with the diversity of edge use cases. And it's the direction that Pat Gelsinger is taking Intel trying to provide more flexibility. One aside, Pat Gelsinger he may face massive challenges that we laid out a couple of posts ago with our Intel breaking analysis, but he is right on in our view that semiconductor demand is increasing. There's no end in sight. We don't think we're going to see these ebbs and flows as we've seen in the past that these boom and bust cycles for semiconductor. We just think that prices are coming down. The market's elastic and the market is absolutely exploding with huge demand for fab capacity. Now, if you're an enterprise, you should not stress about and trying to invent AI, rather you should put your focus on understanding what data gives you competitive advantage and how to apply machine intelligence and AI to win. You're going to be buying, not building AI and you're going to be applying it. Now data as John Furrier has said in the past is becoming the new development kit. He said that 10 years ago and he seems right. Finally, if you're an enterprise hardware player, you're going to be designing your own chips and writing more software to exploit AI. You'll be embedding custom silicon in AI throughout your product portfolio and storage and networking and you'll be increasingly bringing compute to the data. And that data will mostly stay where it's created. Again, systems and storage and networking stacks they're all being completely re-imagined. If you're a software developer, you now have processing capabilities in the palm of your hand that are incredible. And you're going to rewriting new applications to take advantage of this and use AI to change the world, literally. You'll have to figure out how to get access to the most relevant data. You have to figure out how to secure your platforms and innovate. And if you're a services company, your opportunity is to help customers that are trying not to get disrupted are many. You have the deep industry expertise and horizontal technology chops to help customers survive and thrive. Privacy? AI for good? Yeah well, that's a whole another topic. I think for now, we have to get a better understanding of how far AI can go before we determine how far it should go. Look, protecting our personal data and privacy should definitely be something that we're concerned about and we should protect. But generally, I'd rather not stifle innovation at this point. I'd be interested in what you think about that. Okay. That's it for today. Thanks to David Foyer, who helped me with this segment again and did a lot of the charts and the data behind this. He's done some great work there. Remember these episodes are all available as podcasts wherever you listen, just search breaking it analysis podcast and please subscribe to the series. We'd appreciate that. Check out ETR's website at ETR.plus. We also publish a full report with more detail every week on Wikibon.com and siliconangle.com, so check that out. You can get in touch with me. I'm dave.vellante@siliconangle.com. You can DM me on Twitter @dvellante or comment on our LinkedIn posts. I always appreciate that. This is Dave Vellante for theCUBE Insights powered by ETR. Stay safe, be well. And we'll see you next time. (bright music)

Published Date : Apr 10 2021

SUMMARY :

This is breaking analysis and did a lot of the charts

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
David FoyerPERSON

0.99+

David MoschellaPERSON

0.99+

IBMORGANIZATION

0.99+

Dave VellantePERSON

0.99+

Patrick MoorheadPERSON

0.99+

Tom SiebelPERSON

0.99+

New EnglandLOCATION

0.99+

Pat GelsingerPERSON

0.99+

CNETORGANIZATION

0.99+

AmazonORGANIZATION

0.99+

AWSORGANIZATION

0.99+

DellORGANIZATION

0.99+

AppleORGANIZATION

0.99+

GoogleORGANIZATION

0.99+

CiscoORGANIZATION

0.99+

MicrosoftORGANIZATION

0.99+

MIT Technology ReviewORGANIZATION

0.99+

FacebookORGANIZATION

0.99+

10%QUANTITY

0.99+

five minutesQUANTITY

0.99+

TeslaORGANIZATION

0.99+

hundredsQUANTITY

0.99+

Satya NadellaPERSON

0.99+

OracleORGANIZATION

0.99+

BostonLOCATION

0.99+

95%QUANTITY

0.99+

40%QUANTITY

0.99+

iPhoneCOMMERCIAL_ITEM

0.99+

AdobeORGANIZATION

0.99+

Palo AltoLOCATION

0.99+

last weekDATE

0.99+

99%QUANTITY

0.99+

ETRORGANIZATION

0.99+

dave.vellante@siliconangle.comOTHER

0.99+

John FurrierPERSON

0.99+

EE TimesORGANIZATION

0.99+

Sarbjeet JohalPERSON

0.99+

10XQUANTITY

0.99+

last weekDATE

0.99+

MoschellaPERSON

0.99+

theCUBEORGANIZATION

0.98+

IntelORGANIZATION

0.98+

15 nanometerQUANTITY

0.98+

2015DATE

0.98+

todayDATE

0.98+

Seeing DigitalTITLE

0.98+

30%QUANTITY

0.98+

HPEORGANIZATION

0.98+

this weekDATE

0.98+

A14COMMERCIAL_ITEM

0.98+

higher than 118%QUANTITY

0.98+

5%QUANTITY

0.97+

10 years agoDATE

0.97+

EinORGANIZATION

0.97+

a buckQUANTITY

0.97+

64 bitQUANTITY

0.97+

C3.aiTITLE

0.97+

DatabricksORGANIZATION

0.97+

about 40%QUANTITY

0.96+

theCUBE StudiosORGANIZATION

0.96+

DataikuORGANIZATION

0.95+

siliconangle.comOTHER

0.94+

Breaking Analysis with Dave Vellante: Intel, Too Strategic to Fail


 

>> From theCUBE Studios in Palo Alto in Boston, bringing you data-driven insights from theCUBE and ETR, this is Braking Analysis with Dave Vellante. >> Intel's big announcement this week underscores the threat that the United States faces from China. The US needs to lead in semiconductor design and manufacturing. And that lead is slipping because Intel has been fumbling the ball over the past several years, a mere two months into the job, new CEO Pat Gelsinger wasted no time in setting a new course for perhaps, the most strategically important American technology company. We believe that Gelsinger has only shown us part of his plan. This is the beginning of a long and highly complex journey. Despite Gelsinger's clear vision, his deep understanding of technology and execution ethos, in order to regain its number one position, Intel we believe we'll need to have help from partners, competitors and very importantly, the US government. Hello everyone and welcome to this week's Wikibon CUBE insights powered by ETR. In this breaking analysis we'll peel the onion Intel's announcement of this week and explain why we're perhaps not as sanguine as was Wall Street on Intel's prospects. And we'll lay out what we think needs to take place for Intel to once again, become top gun and for us to gain more confidence. By the way this is the first time we're broadcasting live with Braking Analysis. We're broadcasting on the CUBE handles on Twitch, Periscope and YouTube and going forward we'll do this regularly as a live program and we'll bring in the community perspective into the conversation through chat. Now you may recall that in January, we kind of dismissed analysis that said Intel didn't have to make any major strategic changes to its business when they brought on Pat Gelsinger. Rather we said the exact opposite. Our view at time was that the root of Intel's problems could be traced to the fact that it wasn't no longer the volume leader. Because mobile volumes dwarf those of x86. As such we said that Intel couldn't go up the learning curve for next gen technologies as fast as its competitors and it needed to shed its dogma of being highly vertically integrated. We said Intel needed to more heavily leverage outsourced foundries. But more specifically, we suggested that in order for Intel to regain its volume lead, it needed to, we said at the time, spin out its manufacturing, create a joint venture sure with a volume leader, leveraging Intel's US manufacturing presence. This, we still believe with some slight refreshes to our thinking based on what Gelsinger has announced. And we'll talk about that today. Now specifically there were three main pieces and a lot of details to Intel's announcement. Gelsinger made it clear that Intel is not giving up its IDM or integrated device manufacturing ethos. He called this IDM 2.0, which comprises Intel's internal manufacturing, leveraging external Foundries and creating a new business unit called Intel Foundry Services. It's okay. Gelsinger said, "We are not giving up on integrated manufacturing." However, we think this is somewhat nuanced. Clearly Intel can't, won't and shouldn't give up on IDM. However, we believe Intel is entering a new era where it's giving designers more choice. This was not explicitly stated. However we feel like Intel's internal manufacturing arm will have increased pressure to serve its designers in a more competitive manner. We've already seen this with Intel finally embracing EUV or extreme ultraviolet lithography. Gelsinger basically said that Intel didn't lean into EUV early on and that it created more complexity in its 10 nanometer process, which dominoed into seven nanometer and as you know the rest of the story and Intel's delays. But since mid last year, it's embraced the technology. Now as a point of reference, Samsung started applying EUV for its seven nanometer technology in 2018. And it began shipping in early 2020. So as you can see, it takes years to get this technology into volume production. The point is that Intel realizes it needs to be more competitive. And we suspect, it will give more freedom to designers to leverage outsource manufacturing. But Gelsinger clearly signaled that IDM is not going away. But the really big news is that Intel is setting up a new division with a separate PNL that's going to report directly to Pat. Essentially it's hanging out a shingle and saying, we're open for business to make your chips. Intel is building two new Fabs in Arizona and investing $20 billion as part of this initiative. Now well Intel has tried this before earlier last decade. Gelsinger says that this time we're serious and we're going to do it right. We'll come back to that. This organizational move while not a spin out or a joint venture, it's part of the recipe that we saw as necessary for Intel to be more competitive. Let's talk about why Intel is doing this. Look at lots has changed in the world of semiconductors. When you think about it back when Pat was at Intel in the '90s, Intel was the volume leader. It crushed the competition with x86. And the competition at the time was coming from risk chips. And when Apple changed the game with iPod and iPhone and iPad, the volume equation flipped to mobile. And that led to big changes in the industry. Specifically, the world started to separate design from manufacturing. We now see firms going from design to tape out in 12 months versus taking three years. A good example is Tesla and his deal with ARM and Samsung. And what's happened is Intel has gone from number one in Foundry in terms of clock speed, wafer density, volume, lowest cost, highest margin to falling behind. TSMC, Samsung and alternative processor competitors like NVIDIA. Volume is still the maker of kings in this business. That hasn't changed and it confers advantage in terms of cost, speed and efficiency. But ARM wafer volumes, we estimate are 10x those of x86. That's a big change since Pat left Intel more than a decade ago. There's also a major chip shortage today. But you know this time, it feels a little different than the typical semiconductor boom and bust cycles. Semiconductor consumption is entering a new era and new use cases emerging from automobiles to factories, to every imaginable device piece of equipment, infrastructure, silicon is everywhere. But the biggest threat of all is China. China wants to be self-sufficient in semiconductors by 2025. It's putting approximately $60 billion into new chip Fabs, and there's more to come. China wants to be the new economic leader of the world and semiconductors are critical to that goal. Now there are those poopoo the China threat. This recent article from Scott Foster lays out some really good information. But the one thing that caught our attention is a statement that China's semiconductor industry is nowhere near being a major competitor in the global market. Let alone an existential threat to the international order and the American way of life. I think Scotty is stuck in the engine room and can't see the forest of the trees, wake up. Sure. You can say China is way behind. Let's take an example. NAND. Today China is at about 64 3D layers whereas Micron they're at 172. By 2022 China's going to be at 128. Micron, it's going to be well over 200. So what's the big deal? We say talk to us in 2025 because we think China will be at parody. That's just one example. Now the type of thinking that says don't worry about China and semi's reminds me of the epic lecture series that Clay Christiansen gave as a visiting professor at Oxford University on the history of, and the economics of the steel industry. Now if you haven't watched this series, you should. Basically Christiansen took the audience through the dynamics of steel production. And he asked the question, "Who told the steel manufacturers that gross margin was the number one measure of profitability? Was it God?" he joked. His point was, when new entrance came into the market in the '70s, they were bottom feeders going after the low margin, low quality, easiest to make rebar sector. And the incumbents nearly pulled back and their mix shifted to higher margin products and their gross margins went up and life was good. Until they lost the next layer. And then the next, and then the next, until it was game over. Now, one of the things that got lost in Pat's big announcement on the 23rd of March was that Intel guided the street below consensus on revenue and earnings. But the stock went up the next day. Now when asked about gross margin in the Q&A segment of the announcement, yes, gross margin is a if not the key metric in semi's in terms of measuring profitability. When asked Intel CFO George Davis explained that with the uptick in PCs last year there was a product shift to the lower margin PC sector and that put pressure on gross margins. It was a product mix thing. And revenue because PC chips are less expensive than server chips was affected as were margins. Now we shared this chart in our last Intel update showing, spending momentum over time for Dell's laptop business from ETR. And you can see in the inset, the unit growth and the market data from IDC, yes, Dell's laptop business is growing, everybody's laptop business is growing. Thank you COVID. But you see the numbers from IDC, Gartner, et cetera. Now, as we pointed out last time, PC volumes had peaked in 2011 and that's when the long arm of rights law began to eat into Intel's dominance. Today ARM wafer production as we said is far greater than Intel's and well, you know the story. Here's the irony, the very bucket that conferred volume adventures to Intel PCs, yes, it had a slight uptick last year, which was great news for Dell. But according to Intel it pulled down its margins. The point is Intel is loving the high end of the market because it's higher margin and more profitable. I wonder what Clay Christensen would say to that. Now there's more to this story. Intel's CFO blame the supply constraints on Intel's revenue and profit pressures yet AMD's revenue and profits are booming. So RTSMCs. Only Intel can't seem to thrive when there's this massive chip shortage. Now let's get back to Pat's announcement. Intel is for sure, going forward investing $20 billion in two new US-based fabrication facilities. This chart shows Intel's investments in US R&D, US CapEx and the job growth that's created as a result, as well as R&D and CapEx investments in Ireland and Israel. Now we added the bar on the right hand side from a Wall Street journal article that compares TSMC CapEx in the dark green to that of Intel and the light green. You can see TSMC surpass the CapEx investment of Intel in 2015, and then Intel took the lead back again. And in 2017 was, hey it on in 2018. But last year TSMC took the lead, again. And appears to be widening that lead quite substantially. Leading us to our conclusion that this will not be enough. These moves by Intel will not be enough. They need to do more. And a big part of this announcement was partnerships and packaging. Okay. So here's where it gets interesting. Intel, as you may know was late to the party with SOC system on a chip. And it's going to use its packaging prowess to try and leap frog the competition. SOC bundles things like GPU, NPU, DSUs, accelerators caches on a single chip. So better use the real estate if you will. Now Intel wants to build system on package which will dis-aggregate memory from compute. Now remember today, memory is very poorly utilized. What Intel is going to do is to create a package with literally thousands of nodes comprising small processors, big processors, alternative processors, ARM processors, custom Silicon all sharing a pool of memory. This is a huge innovation and we'll come back to this in a moment. Now as part of the announcement, Intel trotted out some big name customers, prospects and even competitors that it wants to turn into prospects and customers. Amazon, Google, Satya Nadella gave a quick talk from Microsoft to Cisco. All those guys are designing their own chips as does Ericsson and look even Qualcomm is on the list, a competitor. Intel wants to earn the right to make chips for these firms. Now many on the list like Microsoft and Google they'd be happy to do so because they want more competition. And Qualcomm, well look if Intel can do a good job and be a strong second sourced, why not? Well, one reason is they compete aggressively with Intel but we don't like Intel so much but it's very possible. But the two most important partners on this slide are one IBM and two, the US government. Now many people were going to gloss over IBM in this announcement, but we think it's one of the most important pieces of the puzzle. Yes. IBM and semiconductors. IBM actually has some of the best semiconductor technology in the world. It's got great architecture and is two to three years ahead of Intel with POWER10. Yes, POWER. IBM is the world's leader in terms of dis-aggregating compute from memory with the ability to scale to thousands of nodes, sound familiar? IBM leads in power density, efficiency and it can put more stuff closer together. And it's looking now at a 20x increase in AI inference performance. We think Pat has been thinking about this for a while and he said, how can I leave leap frog system on chip. And we think he thought and said, I'll use our outstanding process manufacturing and I'll tap IBM as a partner for R&D and architectural chips to build the next generation of systems that are more flexible and performant than anything that's out there. Now look, this is super high end stuff. And guess who needs really high end massive supercomputing capabilities? Well, the US military. Pat said straight up, "We've talked to the government and we're honored to be competing for the government/military chips boundary." I mean, look Intel in my view was going to have to fall down into face not win this business. And by making the commitment to Foundry Services we think they will get a huge contract from the government, as large, perhaps as $10 billion or more to build a secure government Foundry and serve the military for decades to come. Now Pat was specifically asked in the Q&A section is this Foundry strategy that you're embarking on viable without the help of the US government? Kind of implying that it was a handout or a bailout. And Pat of course said all the right things. He said, "This is the right thing for Intel. Independent of the government, we haven't received any commitment or subsidies or anything like that from the US government." Okay, cool. But they have had conversations and I have no doubt, and Pat confirmed this, that those conversations were very, very positive that Intel should head in this direction. Well, we know what's happening here. The US government wants Intel to win. It needs Intel to win and its participation greatly increases the probability of success. But unfortunately, we still don't think it's enough for Intel to regain its number one position. Let's look at that in a little bit more detail. The headwinds for Intel are many. Look it can't just flick a switch and catch up on manufacturing leadership. It's going to take four years. And lots can change in that time. It tells market momentum as well as we pointed out earlier is headed in the wrong direction from a financial perspective. Moreover, where is the volume going to come from? It's going to take years for Intel to catch up for ARMS if it never can. And it's going to have to fight to win that business from its current competitors. Now I have no doubt. It will fight hard under Pat's excellent leadership. But the Foundry business is different. Consider this, Intel's annual CapEx expenditures, if you divide that by their yearly revenue it comes out to about 20% of revenue. TSMC spends 50% of its revenue each year on CapEx. This is a different animal, very service oriented. So look, we're not pounding the table saying Intel's worst days are over. We don't think they are. Now, there are some positives, I'm showing those in the right-hand side. Pat Gelsinger was born for this job. He proved that the other day, even though we already knew it. I have never seen him more excited and more clearheaded. And we agreed that the chip demand dynamic is going to have legs in this decade and beyond with Digital, Edge, AI and new use cases that are going to power that demand. And Intel is too strategic to fail. And the US government has huge incentives to make sure that it succeeds. But it's still not enough in our opinion because like the steel manufacturers Intel's real advantage today is increasingly in the high end high margin business. And without volume, China is going to win this battle. So we continue to believe that a new joint venture is going to emerge. Here's our prediction. We see a triumvirate emerging in a new joint venture that is led by Intel. It brings x86, that volume associated with that. It brings cash, manufacturing prowess, R&D. It brings global resources, so much more than we show in this chart. IBM as we laid out brings architecture, it's R&D, it's longstanding relationships. It's deal flow, it can funnel its business to the joint venture as can of course, parts of Intel. We see IBM getting a nice licensed deal from Intel and or the JV. And it has to get paid for its contribution and we think it'll also get a sweet deal and the manufacturing fees from this Intel Foundry. But it's still not enough to beat China. Intel needs volume. And that's where Samsung comes in. It has the volume with ARM, has the experience and a complete offering across products. We also think that South Korea is a more geographically appealing spot in the globe than Taiwan with its proximity to China. Not to mention that TSMC, it doesn't need Intel. It's already number one. Intel can get a better deal from number two, Samsung. And together these three we think, in this unique structure could give it a chance to become number one by the end of the decade or early in the 2030s. We think what's happening is our take, is that Intel is going to fight hard to win that government business, put itself in a stronger negotiating position and then cut a deal with some supplier. We think Samsung makes more sense than anybody else. Now finally, we want to leave you with some comments and some thoughts from the community. First, I want to thank David Foyer. His decade plus of work and knowledge of this industry along with this collaboration made this work possible. His fingerprints are all over this research in case you didn't notice. And next I want to share comments from two of my colleagues. The first is Serbjeet Johal. He sent this to me last night. He said, "We are not in our grandfather's compute era anymore. Compute is getting spread into every aspect of our economy and lives. The use of processors is getting more and more specialized and will intensify with the rise in edge computing, AI inference and new workloads." Yes, I totally agree with Sarbjeet. And that's the dynamic which Pat is betting and betting big. But the bottom line is summed up by my friend and former IDC mentor, Dave Moschella. He says, "This is all about China. History suggests that there are very few second acts, you know other than Microsoft and Apple. History also will say that the antitrust pressures that enabled AMD to thrive are the ones, the very ones that starved Intel's cash. Microsoft made the shift it's PC software cash cows proved impervious to competition. The irony is the same government that attacked Intel's monopoly now wants to be Intel's protector because of China. Perhaps it's a cautionary tale to those who want to break up big tech." Wow. What more can I add to that? Okay. That's it for now. Remember I publish each week on wikibon.com and siliconangle.com. These episodes are all available as podcasts. All you got to do is search for Braking Analysis podcasts and you can always connect with me on Twitter @dvellante or email me at david.vellante, siliconangle.com As always I appreciate the comments on LinkedIn and in clubhouse please follow me so that you're notified when we start a room and start riffing on these topics. And don't forget to check out etr.plus for all the survey data. This is Dave Vellante for theCUBE insights powered by ETR, be well, and we'll see you next time. (upbeat music)

Published Date : Mar 26 2021

SUMMARY :

in Palo Alto in Boston, in the dark green to that of

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
SamsungORGANIZATION

0.99+

Dave MoschellaPERSON

0.99+

Pat GelsingerPERSON

0.99+

AppleORGANIZATION

0.99+

2015DATE

0.99+

CiscoORGANIZATION

0.99+

NVIDIAORGANIZATION

0.99+

Dave VellantePERSON

0.99+

IBMORGANIZATION

0.99+

GoogleORGANIZATION

0.99+

PatPERSON

0.99+

MicrosoftORGANIZATION

0.99+

GelsingerPERSON

0.99+

AmazonORGANIZATION

0.99+

TSMCORGANIZATION

0.99+

2011DATE

0.99+

JanuaryDATE

0.99+

2018DATE

0.99+

2025DATE

0.99+

IrelandLOCATION

0.99+

$10 billionQUANTITY

0.99+

$20 billionQUANTITY

0.99+

2017DATE

0.99+

twoQUANTITY

0.99+

QualcommORGANIZATION

0.99+

ArizonaLOCATION

0.99+

EricssonORGANIZATION

0.99+

Clay ChristensenPERSON

0.99+

IDCORGANIZATION

0.99+

three yearsQUANTITY

0.99+

Palo AltoLOCATION

0.99+

GartnerORGANIZATION

0.99+

Clay ChristiansenPERSON

0.99+

DellORGANIZATION

0.99+

IsraelLOCATION

0.99+

David FoyerPERSON

0.99+

12 monthsQUANTITY

0.99+

IntelORGANIZATION

0.99+

ARMORGANIZATION

0.99+

last yearDATE

0.99+

ChristiansenPERSON

0.99+

10 nanometerQUANTITY

0.99+

AMDORGANIZATION

0.99+

FirstQUANTITY

0.99+

iPhoneCOMMERCIAL_ITEM

0.99+

20xQUANTITY

0.99+

Serbjeet JohalPERSON

0.99+

50%QUANTITY

0.99+

four yearsQUANTITY

0.99+

mid last yearDATE

0.99+

HPE Discover 2020 Analysis | HPE Discover 2020


 

>>from around the globe. It's the Cube covering HP. Discover Virtual experience Brought to you by HP. >>Welcome back to the Cube's coverage of HP Discover. 2020. The virtual experience. The Cube. The Cube has been virtualized. My name is Dave Vellante. I'm here with Stuart Minuteman and our good friend Tim Crawford is here. He's a strategic advisor to see Io's with boa. Tim, Great to see you. Stuart. Thanks for coming on. >>Great to see you as well, Dave. >>Yes. So let's unpack. What's going on in that Discover Antonio's, He notes, Maybe talk a little bit about the prospects for HP of coming forward in this decade. You know, last decade was not a great one for HP, HP. I mean, there was a lot of turmoil. There was a botched acquisitions. There was breaking up the company and spin merges and a lot of distractions. And so now that companies really and you hear this from Antonio kind of positioning for innovation for the next decade. So So I think this is probably a lot of excitement inside the company, but I want to touch on a couple of points and then you get your guys reaction, I guess, you know, to start off. Obviously, Antonio's talking about Cove in the role that they played in that whole, you know, pandemic and the transition toe the the isolation economy. But so let me start with you, Tim. I mean, what is the sort of posture amongst cios that you talk to? How strategic is HB H B two? The folks that you talk to in your community? >>Well, I think if you look at how CIOs are thinking, especially as we head into covert it into Corona virus and kind of mapping through that, that price, um, it really came down to Can they get their hands on technology? Can they get people back to work working from home? Can they do it in a secure fashion? Um, keeping people productive. I mean, there was a lot of block and tackling, and even to this day, there's still a fair amount of that was taking place. Um, we really haven't seen the fallout from the cybersecurity impact of expanding our foot print. Um, quite. But we'll see that, probably in the coming months. There are some initial inklings there when it comes to HP specifically I think it comes back to just making sure that they had the product on hand, that they understood that customers are going through dramatic change. And so all bets are off. You have to kind of step back and say, Okay, those plans that I had 60 9100 and 20 days ago those strategies that I may have already started down the path with those are up for grabs. I need to step back from those and figure out What do I do now? And I think each company, HP included, needs to think about how do they start to meld themselves, to be able to address those changing customer needs? And I think that's that's where this really kind of becomes the rubber hits the road is is HP capable of doing that? And are they making the right changes? And quite frankly, that starts with empathy. And I think we've heard pretty clearly from Antonio that he is sympathetic to the plight of their customers and the world >>on the whole. >>Yeah, and I think culturally 10 minutes do I mean I think you know HP is kind of getting back to some of its roots, and Tony has been there for a long time. I think people I think is very well liked. Andi, I think, ease of use, and I'm sure he's tough. But he's also a very fair individual, and he's got a vision and he's focused. And so, you know, I think again, as they said, looking forward to this decade, I think could be one that is, you know, one of innovation. Although, you know, look, you look at the stock price, you know, it's kind of piqued in November 19. It's obviously down like many stocks, so there's a lot of work to do there, and it's too. We're certainly hearing from HP. This notion of everything is a service that we've talked about green like a lot. What's your sense of their prospects going forward in this, you know, New Era? >>Yeah, I mean, Dave, one of the biggest attacks we've heard about H E in the last couple of years, you know the line Michael Dell would use is you're not going to grow by, say, abstraction. But as a platform company, HP is much more open. From what I've seen in the HP that I remember from, you know, 5 to 10 years ago. So you look at their partner ecosystem. It's robust. So, you know, years ago, it seemed to be if it didn't come out of HP Labs, it wasn't a product, you know. That was the services arm all wanted to sell HP here. Now, in this software defined world working in a cloud environment, they're much more open to finding that innovation and enabling it. So, you know, we talk about Green Lake Day. Three lakes got about 1000 customers right now, and a big piece of that is a partner. Port Police, whether it's VM Ware Amazon Annex, were H B's full stack themselves. They have optionality in there, and that's what we hear from from users is that they want flexibility they don't want. You know, you look at the cloud providers, it's not, you know, here's a solution. You look at Amazon. There's dozens of databases that you can use from Amazon or, if you use on top of Amazon, so H p e. You know, not a public cloud provider, but looking more like that cloud experience. They've done so many acquisitions over the years. Many of them were troubled. They got rid of some of the pieces that they might have over paid for. But you look at something like CTP them in this multi cloud world in the networking space, they've got a really cool, open source company, the company behind spiffy, inspire. And, you know, companies that are looking at containers and kubernetes, you know, really respond to say, Hey, these are projects that were interesting Oh, who's the company that that's driving that it's HP so more open, more of a partner ecosystem definitely feels that there's a lot there that I respect and like that hp >>well, I mean, the intent of splitting the company was so that HP could be more focused but focused on innovation was the intent was to be the growth company. It hasn't fully played out yet. But Tim, when you think about the conversations that CIOs are having with with HPI today versus what they were having with hpe HP, the the conglomerate of that the Comprising e ds and PCs, I guess I don't know, in a way, more more Dell like so Certainly Michael Dell's having strategic conversations, CIOs. But you got to believe that the the conversations are more focused today. Is that a good thing or a jury's still out? >>No, it absolutely is a good thing. And I think one of the things that you have to look at is we're getting back to brass tax. We're getting back to that focus around business objectives. So no longer is that hey, who has the coolest tech? And how can we implement that tax? Kind of looking from a tech business? Ah, spectrum, you're now focused squarely is a C i. O. You have to be squarely focused on what are the business objectives that you are teamed up for, and if you're not, you're on a very short leash and that doesn't end well. And I think the great thing about the split of HP HP e split and I think you almost have to kind of step back for a second. Let's talk about leadership because leadership plays a very significant role, especially for CIOs that are thinking about long term decisions and strategic partners. I don't think that HP necessarily had the right leadership in place to carry them into that strategic world. I think Antonio really makes a change there. I mean, they made some really poor decisions. Post split. Um, that really didn't bode well for HP. Um, and frankly, I talked a bit about that I know wasn't really popular within HP, but quite frankly, they needed to hear it. And I think that actually has been heard. And I think they are listening to their customers. And one of the big changes is they're getting back into the software business. And when you talk about strategic initiatives, you have to get beyond just the hardware and start moving up the proverbial stack, getting closer to those business initiatives. And that is software. >>Yeah, well, Antonio talked about sort of the insights. I mean, something I've said a lot about borrowed from the very Meeker conversations that that data is plentiful. Something I've always said. Insights aren't. And so you're right. You've seen a couple of acquisitions, you know, Matt bahr They picked up, I think pretty inexpensively. Kind of interesting cause, remember, HP hp had an investment in Horton works, which, of course, is now Cloudera and Blue Data. Ah Kumar Conte's company, you know, kind of focusing on maybe automating data, you know, they talked about Ed centric, cloud enabled, data driven. Nobody's gonna argue with those things. But you're right, Tim. I mean, you're talking more software than kind of jettisons the software business and now sort of have to rebuild it. And then, of course, do this cloud. What do you make of HP ease Cloud play? >>Yeah, well, I >>mean, >>Dave, you the pieces. You were just talking about math bar and blue data, where HP connects it together is, you know, ai ops. So you know, where are we going with infrastructure? There needs to be a lot more automation. We heard a great quote. I love from automation anywhere. Dave was, if you talk about digital transformation without automation, it's hallucination. So, you know, HP baking that into what they're doing. So, you know, I fully agree with Tim software software software, you know, is where the innovation is. So it can't just be the infrastructure. How do you have eyes and books into the applications? How are you helping customers build those new pieces? And what's the other software that you build around that? So, you know, absolutely. It's an interesting piece. And you know, HP has got a lot of interesting pieces. You know, you talk about the edge. Aruba is a great asset for that kind of environment and from a partnership, that is a damn point. Dave. They have. John Chambers was in the keynote. John, of course. Long time partners. He's with Cisco for many years Intel. Cisco started eating with HP on the server business, but now he's also the chairman of pensando. HP is an investor in pensando general availability this month of that solution, and that's going to really help build out that next generation edge. So, you know, a chip set that HP E can offer similar to what we see how Amazon builds outpost s. So that is a solution both for the enterprise and beyond. Is as a B >>yeah course. Do. Of course, it's kind of, but about three com toe. Add more fuel to that tension. Go ahead, Tim. >>Well, I was going to pick apart some of those pieces because you know, at edge is not an edge is not an edge. And I think it's important to highlight some of the advantages that HP is bringing to the table where Pensando comes in, where Aruba comes in and also we're really comes in. I think there are a number of these components that I want to make sure that we don't necessarily gloss over that are really key for HP in terms of the future. And that is when you step back and you look at how customers are gonna have to consume services, how they're going to have to engage with both the edge and the cloud and everything in between. HP has a great portfolio of hardware. What they haven't necessarily had was the glue, that connective tissue to bring all of that together. And I think that's where things like Green Lake and Green Lake Central really gonna play a role. And even their, um, newer cloud services are going to play a role. And unlike outposts and unlike some of the other private cloud services that are on the market today, they're looking to extend a cloud like experience all the way to the edge and that continuity creating that simplicity is going to be key for enterprises. And I think that's something that shouldn't be understated. It's gonna be really important because when I look at in the conversations I'm having when we're looking at edge to cloud and everything in between. Oh my gosh, that's really complicated. And you have to figure out how to simplify that. And the only way you're going to do that is if you take it up a layer and start thinking about management tools. You start thinking about autumn, and as companies start to take data from the edge, they start analyzing it at the edge and intermediate points on the way to cloud. It's going to be even more important to bring continuity across this entire spectrum. And so that's one of the things that I'm really excited about that I'm hearing from Antonio's keynote and others. Ah, here at HP Discover. >>Yeah, >>well, let's let's stay on that stupid. Let's stay on that for a second. >>Yeah, I wanted to see oh interested him because, you know, it's funny. You think back. You know, HP at one point in time was a leader in, you know, management solutions. You know, HP one view, you know, in the early days, it was really well respected. I think what I'm hearing from you, I think about outpost is Amazon hasn't really put management for the edge. All they're doing is extending the cloud piece and putting a piece out of the edge. It feels like we need a management solution that built from the ground up for this kind of solution. And do I hear you right? You believe that to be as some of those pieces today? >>Well, let's compare and contrast briefly on that. I think Amazon and the way Amazon is well, is Google and Microsoft, for that matter. The way that they are encompassing the edge into their portfolio is interesting, but it's an extension of their core business, their core public cloud services business. Most of the enterprise footprint is not in public cloud. It's at the other end of that spectrum, and so being able to take not just what's happening at the edge. But what about in your corporate data center in your corporate data center? You still have to manage that, and that doesn't fall under the purview of Cloud. And so that's why I'm looking at HP is a way to create that connective tissue between what companies are doing within the corporate data center today, what they're doing at the edge as well as what they're doing, maybe in private cloud and an extension public cloud. But let's also remember something else. Most of these enterprises, they're also in a multi cloud environment, so they're touching into different public cloud providers for different services. And so now you talk about how do I manage this across the spectrum of edge to cloud. But then, across different public cloud providers, things get really complicated really fast. And I think the hints of what I'm seeing in software and the new software branding give me a moment of pause to say, Wait a second. Is HP really gonna head down that path? And if so, that's great because it is of high demand in the enterprise. >>Well, let's talk about that some more because I think this really is the big opportunity and we're potentially innovation is. So my question is how much of Green Lake and Green Lake services are really designed for sort of on Prem to make that edge to on Prem? No, I want to ask about Cloud, how much of that is actually delivering Cloud Native Services on AWS on Google on Azure and Ali Cloud etcetera versus kind of creating a cloud like experience for on Prem in it and eventually the edge. I'm not clear on that. You guys have insight on how much effort is going into that cloud. Native components in the public cloud. >>Well, I would say that the first thing is you have to go back to the applications to truly get that cloud native experience. I think HP is putting the components together to a prize. This to be able to capitalize on that cloud like experience with cloud native APS. But the vast majority of enterprise app they're not cloud native. And so I think the way that I'm interpreting Green Lake and I think there are a lot of questions Greenland and how it's consumed by enterprises there. There was some initial questions around the branding when it first came out. Um, and so you know it's not perfect. I think HP definitely have some work to do to clarify what it is and what it isn't in a way that enterprises can understand. But from what I'm seeing, it looks to be creating and a cloud like experience for enterprises from edge to cloud, but also providing the components so that if you do have applications that are shovel ready for cloud or our cloud native, you can embrace Public Cloud as well as private cloud and pull them under the Green Lake >>Rela. Yeah, ostensibly stew kubernetes is part of the answer to that, although you know, as we've talked about, Kubernetes is necessary containers and necessary but not sufficient for that experience. And I guess the point I'm getting to is, you know we do. We've talked about this with Red Hat, certainly with VM Ware and others the opportunity to have that experience across clouds at the Edge on Prim. That's expensive from an R and D standpoint. And so I want to kind of bring that into the discussion. HP last year spent about 1.8 billion in R and D Sounds like a lot of money. It's about 6% of its of it's revenues, but it's it's spread thin now. It does are indeed through investments, for instance, like Pensando or other acquisitions. But in terms of organic R and D, you know, it's it's it's not at the top of the heap. I mean, obviously guys like Amazon and Google have surpassed them. I've written about this with regard to IBM because they, like HP, spend a lot on dividends on share buybacks, which they have to do to prop up the stock price and placate Wall Street. But it But it detracts from their ability to fund R and d student your take on that sort of innovation roadmap for the next decade. >>Yeah, I mean, one of the things we look at it in the last year or so there's been what we were talking about earlier, that management across these environments and kubernetes is a piece of it. So, you know, Google laid down and those you've got Microsoft with Azure, our VM ware with EMS. Ooh! And to Tim's point, you know, it feels like Green Lake fits kind of in that category, but there's there's pieces that fall outside of it. So, you know, when I first thought of Green Lake, it was Oh, well, I've got a private cloud stack like an azure stack is one of the solutions that they have there. How does that tie into that full solution? So extending that out, moving that brand I do here, you know good things from the field, the partners and customers. Green Lake is well respected, and it feels like that is, that is a big growth. So it's HB 50 from being more thought of, as you know, a box seller to more of that solution in subscription model. Green Lake is a vehicle for that. And as you pointed out, you know, rightfully so. Software so important. And I feel when that thing I'd say HPI ee feels toe have more embracing of software than, say, they're closest competitors. Which is Dell, which, you know, Dell Statement is always to be the leading infrastructure writer, and the arm of VM Ware is their software. So, you know, just Dell alone without VM ware, HP has to be that full solution of what Dell and VM ware together. >>Yeah, and VM Ware Is that the crown jewel? And of course, HP doesn't have a VM ware, but it does have over 8000 software engineers. Now I want to ask you about open source. I mean, I would hope that they're allocating a large portion of those software engineers. The open source development developing tooling at the edge, developing tooling from multi cloud certainly building hooks in from their hardware. But is HP Tim doing enough in open source? >>Well, I don't want to get on the open source bandwagon, and I don't necessarily want to jump off it. I think the important thing here is that there are places where open source makes sense in places where it doesn't, um, and you have to look at each particular scenario and really kind of ask yourself, does it make sense to address it here? I mean, it's a way to to engage your developers and engage your customers in a different mode. What I see from HP E is more of a focus around trying to determine where can we provide the greatest value for our customers, which, frankly, is where their focus should be, whether that shows up in open source for software, whether that shows up in commercial products. Um, we'll see how that plays out. But I think the one thing that I give HP e props on one of several things I would say is that they are kind of getting back to their roots and saying, Look, we're an infrastructure company, that is what we do really well We're not trying to be everything to everyone. And so let's try and figure out what are customers asking for? How do we step through that? I think this is actually one of the challenges that Antonio's predecessors had was that they tried to do jump into all the different areas, you know, cloud software. And they were really X over, extending themselves in ways that they probably should. But they were doing it in ways that really didn't speak to their four, and they weren't connecting those dots. They weren't connecting that that connective tissue they needed to dio. So I do think that, you know, whether it's open source or commercial software, we'll see how that plays out. Um, but I'm glad to see that they are stepping back and saying Okay, let's be mindful about how we ease into this >>well, so the reason I bring up open source is because I think it's the mainspring of innovation in the industry on that, but of course it's very tough to make money, but we've talked a lot about H B's strength since breath is, we haven't talked much about servers, but they're strong in servers. That's fine We don't need to spend time there. It's culture. It seems to be getting back to some of its roots. We've touched on some of its its weaknesses and maybe gaps. But I want to talk about the opportunities, and there's a huge opportunity to the edge. David Flores quantified. He says that Tam is four. Trillion is enormous, but here's my question is the edge Right now we're seeing from companies like HP and Dell. Is there largely taking Intel based servers, kind of making a new form factor and putting them out on the edge? Is that the right approach? Will there be an emergence of alternative processors? Whether it's our maybe, maybe there's some NVIDIA in there and just a whole new architecture for the edge to authority. Throw it out to you first, get Tim Scott thoughts. >>Yeah, So what? One thing, Dave, You know, HP does have a long history of partnering with a lot of those solutions. So you see NVIDIA up on stage when you think about Moonshot and the machine and some of the other platforms that they felt they've looked at alternative options. So, you know, I know from Wicky Bon standpoint, you know, David Foyer wrote the piece. That arm is a huge opportunity at the edge there. And you would think that HP would be one of the companies that would be fast to embrace that >>Well, that's why I might like like Moonshot. I think that was probably ahead of its time. But the whole notion of you know, a very slim form factor that can pop in and pop out. You know, different alternative processor architecture is very efficient, potentially at the edge. Maybe that's got got potential. But do you have any thoughts on this? I mean, I know it's kind of Yeah, any hardware is, but, >>well, it is a little hardware, but I think you have to come back to the applicability of it. I mean, if you're taking a slim down ruggedized server and trying Teoh essentially take out, take off all the fancy pieces and just get to the core of it and call that your edge. I think you've missed a huge opportunity beyond that. So what happens with the processing that might be in camera or in a robot or in an inch device? These are custom silicon custom processors custom demand that you can't pull back to a server for everything you have to be able to to extend it even further. And, you know, if I compare and contrast for a minute, I think some of the vendors that are looking at Hey, our definition of edge is a laptop or it is this smaller form factor server. I think they're incredibly limiting themselves. I think there is a great opportunity beyond that, and we'll see more of those kind of crop up, because the reality is the applicability of how Edge gets used is we do data collection and data analysis in the device at the device. So whether it's a camera, whether it's ah, robot, there's processing that happens within that device. Now some of that might come back to an intermediate area, and that intermediate area might be one of these smaller form factor devices, like a server for a demo. But it might not be. It might be a custom type of device that's needed in a remote location, and then from there you might get back to that smaller form factor. Do you have all of these stages and data and processing is getting done at each of these stages as more and more resources are made available. Because there are things around AI and ML that you could only do in cloud, you would not be able to do even in a smaller form factor at the edge. But there are some that you can do with the edge and you need to do at the edge, either for latency reasons or just response time. And so that's important to understand the applicability of this. It's not just a simple is saying, Hey, you know, we've got this edge to cloud portfolio and it's great and we've got the smaller servers. You have to kind of change the vernacular a little bit and look at the applicability of it and what people are actually doing >>with. I think those are great points. I think you're 100% right on. You are going to be doing AI influencing at the edge. The data of a lot of data is going to stay at the edge and I personally think and again David Floor is written about this, that it's going to require different architectures. It's not going to be the data center products thrown over to the edge or shrunk down. As you're saying, That's maybe not the right approach, but something that's very efficient, very low cost of when you think about autonomous vehicles. They could have, you know, quote unquote servers in there. They certainly have compute in there. That could be, you know, 2344 $5000 worth of value. And I think that's an opportunity. I'd love to see HP Dell, others really invest in R and D, and this is a new architecture and build that out really infuse ai at the edge. Last last question, guys, we're running out of time. One of the things I'll start with you. Still what things you're gonna watch for HP as indicators of success of innovation in the coming decade. As we said last decade, kind of painful for HP and HP. You know, this decade holds a lot of promise. One of the things you're gonna be watching in terms of success indicators. >>So it's something we talked about earlier is how are they helping customers build new things, So a ws always focuses on builders. Microsoft talks a lot. I've heard somethin double last year's talk about building those new applications. So you know infrastructure is only there for the data, and the applications live on top of it. And if you mention Dave, there's a number of these acquisitions. HP has moved up the stack. Some eso those proof points on new ways of doing business. New ways of building new applications are what I'm looking for from HP, and it's robust ecosystem. >>Tim. Yeah, yeah, and I would just pick you back right on. What's do was saying is that this is a, you know, going back to the Moonshot goals. I mean, it's about as far away as HP ease, and HP is routes used to be and that that hardware space. But it's really changing business outcomes, changing business experiences and experiences for the customers of their customers. And so is far cord that that eight p e can get. I wouldn't expect them to get all the way there, although in conversations I am having with HP and with others that it seems like they are thinking about that. But they have to start moving in that direction. And that's actually something that when you start with the builder conversation like Microsoft has had, an Amazon has had Google's had and even Dell, to some degree has had. I think you missed the bigger picture, so I'm not saying exclude the builder conversation. But you have to put it in the right context because otherwise you get into this siloed mentality of right. We have solved one problem, one unique problem, and built this one unique solution. And we've got bigger issues to be able to address as enterprises, and that's going to involve a lot of different moving parts. And you need to know if you're a builder, you've it or even ah ah, hardware manufacturer. You've got to figure out, How does your piece fit into that bigger picture and you've got to connect those dots very, very quickly. And that's one of the things I'll be looking for. HP as well is how they take this new software initiative and really carry it forward. I'm really encouraged by what I'm seeing. But of course the future could hold something completely different. We thought 2020 would look very different six months ago or a year ago than it does today. >>Well, I wanna I want to pick up on that, I think I would add, and I agree with you. I'm really gonna be looking for innovation. Can h P e e get back to kind of its roots? Remember, H B's router invents it was in the logo. I can't translate its R and D into innovation. To me, it's all about innovation. And I think you know cios like Antonio Neri, Michael Dell, Arvind Krishna. They got a They have a tough, tough position because they're on the one hand, they're throwing off cash, and they can continue Teoh to bump along and, you know, placate Wall Street, give back dividends and share buybacks. And and that's fine. And everybody would be kind of happy. But I'll point out that Amazon in 2007 spent spend less than a $1,000,000,000 in R and D. Google spent about the back, then about the same amount of each B E spends today. So the point is, if the edge is really such a huge opportunity, this $4 trillion tam is David Foyer points out, there's a There's a way in which some of these infrastructure companies could actually pull a kind of mini Microsoft and reinvent themselves in a way that could lead to massive shareholder returns. But it was really will take bold vision and a brave leader to actually make that happen. So that's one of things I'm gonna be watching very closely hp invent turn r and D into dollars. And so you guys really appreciate you coming on the Cube and breaking down the segment for ah, the future of HP be well, and, uh and thanks very much. Alright. And thank you for watching everybody. This is Dave Volante for Tim Crawford and Stupid men. Our coverage of HP ease 2020 Virtual experience. We'll be right back right after this short break. >>Yeah, yeah, yeah, yeah.

Published Date : Jun 23 2020

SUMMARY :

Discover Virtual experience Brought to you by HP. He's a strategic advisor to see Io's with boa. And so now that companies really and you hear this from Antonio kind of positioning for innovation for the next decade. I think it comes back to just making sure that they had the product on hand, And so, you know, that I remember from, you know, 5 to 10 years ago. But you got to believe that the the conversations And I think one of the things that you have to look you know, kind of focusing on maybe automating data, And you know, HP has got a lot of interesting pieces. Add more fuel to that tension. And that is when you step back and you look at how customers are gonna have to consume services, Let's stay on that for a second. You know, HP one view, you know, in the early days, it was really well respected. And so now you talk about how do I manage this across Well, let's talk about that some more because I think this really is the big opportunity and we're potentially innovation edge to cloud, but also providing the components so that if you do have applications And I guess the point I'm getting to is, you know we do. Which is Dell, which, you know, Dell Statement is always to be the leading infrastructure Yeah, and VM Ware Is that the crown jewel? had was that they tried to do jump into all the different areas, you know, Throw it out to you first, get Tim Scott thoughts. And you would think that HP would be one of the companies that would be fast But the whole notion of you custom demand that you can't pull back to a server for everything They could have, you know, quote unquote servers in there. And if you mention Dave, that this is a, you know, going back to the Moonshot goals. And I think you know cios like Antonio Neri, Michael Dell, Arvind Krishna. Yeah, yeah, yeah,

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
MicrosoftORGANIZATION

0.99+

AmazonORGANIZATION

0.99+

GoogleORGANIZATION

0.99+

Tim CrawfordPERSON

0.99+

Dave VellantePERSON

0.99+

David FloresPERSON

0.99+

TonyPERSON

0.99+

DellORGANIZATION

0.99+

AntonioPERSON

0.99+

CiscoORGANIZATION

0.99+

HPORGANIZATION

0.99+

TimPERSON

0.99+

November 19DATE

0.99+

DavePERSON

0.99+

David FoyerPERSON

0.99+

IBMORGANIZATION

0.99+

Tim ScottPERSON

0.99+

Arvind KrishnaPERSON

0.99+

StuartPERSON

0.99+

JohnPERSON

0.99+

2007DATE

0.99+

John ChambersPERSON

0.99+

Michael DellPERSON

0.99+

Dave VolantePERSON

0.99+

100%QUANTITY

0.99+

David FloorPERSON

0.99+

last yearDATE

0.99+

Antonio NeriPERSON

0.99+

10 minutesQUANTITY

0.99+

$4 trillionQUANTITY

0.99+

AWSORGANIZATION

0.99+

ClouderaORGANIZATION

0.99+

Derek Dicker, Micron | Micron Insight 2019


 

>>Live from San Francisco. It's the cube covering my groin. Insight 2019 brought to you by micron. >>Welcome back to pier 27 in San Francisco. I'm your host Dave Vellante with my cohost David foyer and this is the cube, the leader in live tech coverage. This is our live coverage of micron insight 2019 we were here last year talking about some of the big picture trends. Derek ticker is here, he's the general manager and vice president of the storage business unit at micro and great to see you again. Thank you so much for having me here. Welcome. So you know we talk about the super powers a lot, you know, cloud data, AI and these new workloads that are coming in. And this, this, I was talking to David earlier in our kickoff like how real is AI? And it feels like it's real. It's not just a bunch of vendor industry hype and it comes in a lot of different forms. Derek, what are you seeing in terms of the new workloads and the big trends in artificial intelligence? >>I think just on the, on the front end, you guys are absolutely right. The, the role of artificial intelligence in the world is, uh, is absolutely transformational. I was sitting in a meeting in the last couple of days and somebody was walking through a storyline that I have to share with you. That's a perfect example of why this is becoming mainstream. In Southern California at a children's hospital, there were a set of parents that had a few days old baby and this baby was going through seizures and no one could figure out what it was. And during the periods of time of the seizure, the child's brain activity was zero. There was no brain activity whatsoever. And what they did is they performed a CT scan, found nothing, check for infections, found nothing. And can you imagine a parent just sitting there dealing with their child and that situation, you feel hopeless. >>This particular institution is so much on the bleeding edge. They've been investing in personalized medicine and essentially what they were able to do was extract a sample of blood from that sample of blood within a matter of minutes. They were able to run an algorithm that could sift through 5 million genetic variants to go find a potential match for a genetic variant that existed within this child. They found one that was 0.01% of the population found a tiny, tiny, call it a less than a needle in the haystack. And what they were able to do is translate that actual insight into a treatment. And that treatment wasn't invasive. It didn't involve surgery. It involves supplements and providing this shower, just the nutrients that he needed to combat this genetic variant. But all of this was enabled through technology and through artificial intelligence in general. And a big part of the show that we're here at today is to talk about the industry coming together and discussing what are the great advances that are happening in that domain. >>It's just, it's super exciting to see something that touches that close to our life. I love that story and that's, that's why I love this event. I mean, well, obviously micron memories, you know, DRAM, NAND, et cetera, et cetera. But this event is all about connecting to the impacts on our lives. You take, you take that, I used to ask this question a lot of when will machines be able to make better diagnoses than, than doctors. And I think, you know, a lot people say, well they already can, but the real answer is it's really about the augmentation. Yeah. You know, machines helping doctors get to that, you know, very, you know, uh, a small probability 0.1001% yes. And it'd be able to act on it. That's really how AI is affecting our lives every day. >> Wholeheartedly agree. And actually that's a, that's a big part of our mission. >>Our mission is to transform how the world uses information to enrich life. That's the heart and soul of what you just described. Yeah. And we're actually, we're super excited about what we see happening in storage as a result of this. Um, one of the, one of the things that we've noticed as we've gotten engaged with a broad host of customers in the industry is that there's a lot of focus on artificial intelligence workloads being handled based on memory and memory bandwidth and larger amounts of memory being required. If you look at systems of today versus systems of tomorrow, based on the types of workloads that are evolving from machine learning, the need for DRAM is growing dramatically. Multiple factors, we see that, but what nobody ever talks about or rarely talks about is what's going on in the storage subsystem and one of the biggest issues that we've found over time or challenges that exist is as you look at the AI workloads going back to 2014 the storage bandwidth required was a few megabytes per second and called tens of, but if you just look every year, over time we're exceeding at gigabyte, two gigabytes of bandwidth required out of the storage subsystem. >>Forget the memory. The storage is being used as a cash in it flushes, but once you get into a case where you actually want to do more work on a given asset, which of course everybody wants to do from a TCO perspective, you need super high performance and capability. One of the things that that we uncovered was by delivering an SSD. This is our 9,300 drive. We actually balanced both the read IOPS and the ride IOPS at three gigs per second. And what we allow to have happened is not just what you can imagine as almost sequential work. You load up a bunch of data into a, into a training machine, the machine goes and processes on it, comes back with a result, load more data in by actually having a balanced read and write a model. Your ingest times go faster. So while you're working on a sequence, you can actually ingest more data into the system and it creates this overall efficiency. And it's these types of things that I think provided a great opportunity for innovation in the storage domain for these types of that's working >> requiring new architectures in storage, right? I mean, yeah, >>I mean, th th so one of the things that's happened in, in bringing SSDs in is that the old protocols were very slow, etc. And now we all the new protocols within in Vme and potentially even more new protocols coming in, uh, into this area. What's micron? What, how is micron making this thing happen? This speed that's gonna provide these insights? >>It's a fan fan. Fantastic question and you're absolutely right. The, the world of standards is something that we found over the course of time. If you can get a group of industry players wrapped around a given set of standards, you can create a large enough market and then people can innovate on top of that. And for us in the, in the storage domain, the big transitions had been in Sada and NBME. You see that happening today when we talked a little bit about maybe a teaser for what's coming a little later at, at our event, um, in some of the broader areas in the market, we're talking about how fabrics attach storage and infrastructure. And interestingly enough, where people are innovating quite a bit right now is around using the NBME infrastructure over fabrics themselves, which allows for shared storage across a network as opposed to just within a given server there. >>There's some fantastic companies that are out there that are actually delivering both software stacks and hardware accelerators to take advantage of existing NBME SSDs. But the protocol itself gets preserved. But then they can share these SSDs over a network, which takes a scenario where before you were locked with your storage stranded within a server and now you can actually distribute more broad. It's amazing difference, isn't it at that potential of looking at data over as broad an area as you want to. Absolutely. And being able to address it directly and having it done with standards and then having it done with low enough latency such that you aren't feeling severely disadvantaged, taking that SSD out of a box and making it available across a broad network. So you guys have a huge observation space. Uh, you sell storage to the enterprise, you sell storage to the cloud everywhere. >>I want to ask you about the macro because when you look at the traditional storage suppliers, you know, some of them are struggling right now. There aren't many guys that are really growing and gaining share because the cloud is eating away at that. You guys sell to the cloud. So that's fine. Moving, you know, arms dealer, whoever wins it may the best man win. Um, but, but at the same time, customers have ingested so much all flash. It's giving them head room and so they're like, Hey, I'm good for awhile. I used to have this spinning disc. I'd throw spinning disc at it at the problem till I said, give me performance headroom. That has changed. Now we certainly expect a couple of things that that will catch up and there'll be another step function. But there's also elasticity. Yes. Uh, you saw for instance, pure storage last quarter said, wow, hit the price dropped so fast, it actually hurt our revenues. >>And you'd say, well, wait a minute. If the price drops, we want people to buy more. There's no question that they will. It just didn't happen fast enough from the quarter. All of these interesting rip currents going on. I wonder what you're seeing in terms of the overall macro. Yeah. It's actually a fantastic question. If you go back in time and you look at the number of sequential quarters, when we had ASP decreases across the industry, it was more than six. And the duration from peak to trough on the spot markets was high double digit percentages. Not many markets go through that type of a transition. But as you suggested, there's this notion of elasticity that exists, which is once the price gets below a certain threshold, all of a sudden new markets open up. And we're seeing that happen today. We're seeing that happen in the client space. >>So, so these devices actually, they're going through this transition where companies are actually saying, you know what, we're going to design out the hard drive cages for all platforms across our portfolio going into the future. That's happening now. And it's happening largely because these price points are enabling that, that situation and the enterprise a similar nature in terms of average capacities and drives being deployed over time. So it's, I told you, I think the last time we saw John, I told just one of the most exciting times to be in the memory and storage industry. I'll hold true to that today. I, I'm super excited about it, but I just bought a new laptop and, and you know, I have, you know, a half a half a terabyte today and they said for 200 bucks you can get a terabyte. Yes. And so I said, Oh wow, I could take everything from 1983 and bring it, bring it over. >>Yeah. Interestingly, it was back ordered, you know, so I think, wow, it am I the only one, but this is going to happen. I mean, everybody's going to have, you know, make the price lower. Boom. They'll buy more. We, we, we believe that to be the case for the foreseeable future. Okay. Do you see yourself going in more into the capacity market as well with SSTs and I mean, this, this, this drop, let's do big opportunity or, yeah. Actually, you know, one of the areas that we feel particularly privileged to be able to, to engage in is the, the use of QLC technology, right. You know, quad level solar for bits per cell technology. We've integrated this into a family of, uh, of SSDs for the enterprise, or interestingly enough, we have an opportunity to displace hard drives at an even faster rate because the core capability of the products are more power efficient. >>They've got equal to, or better performance than existing hard drives. And when you look at the TCO across a Reed intensive workloads, it's actually, it's a no brainer to go replace those HDD workloads in the client space. There's segments of the market where we're seeing QLC to play today for higher, higher capacity value segments. And then there's another segment for performance. So it's actually each segment is opening up in a more dramatic way. So the last question, I know you got some announcements today. They haven't hit the wire yet, but what, can you show us a little leg, Derrick? What can you tell us? So I, I'll, I'll give you this much. The, um, the market today, if you go look in the enterprise segment is essentially NBME and SATA and SAS. And if you look at MDME in 20 2019 essential wearing crossover on a gigabyte basis, right? >>And it's gonna grow. It's gonna continue to grow. I mentioned earlier the 9,300 product that we use for machine learning, AI workloads, super high performance. There's a segment of the market that we haven't announced products in today that is a, a a mainstream portion of that market that looks very, very interesting to us. In addition, we can never forget that transitions in the enterprise take a really long time, right, and Sada is going to be around for a long time. It may be 15% of the market and 10% out a few years, but our customers are being very clear. We're going to continue to ship Satta for an extended period of time. The beautiful thing about about micron is we have wonderful 96 layer technology. There's a need in the market and both of the segments I described, and that's about as much as I can give you, I don't bet against data. Derek, thanks very much for coming on. Thank you guys so much. You're welcome. There's a lot of facts. Keep it right there, buddy. We'll be back at micron insight 2019 from San Francisco. You're watching the cube.

Published Date : Oct 24 2019

SUMMARY :

Insight 2019 brought to you by micron. he's the general manager and vice president of the storage business unit at micro and great to see you again. And can you imagine a parent And a big part of the show that we're here at today is to talk about the industry coming together and discussing what are the great And I think, you know, a lot people say, And actually that's a, that's a big part of our mission. That's the heart and soul of what you just described. And what we allow to have happened is not just what you can imagine as almost in bringing SSDs in is that the old protocols were very slow, If you can get a group of industry players So you guys have a huge I want to ask you about the macro because when you look at the traditional storage suppliers, If you go back in time and you look at the number of sequential quarters, when we had ASP I have, you know, a half a half a terabyte today and they said for 200 bucks you can get a I mean, everybody's going to have, you know, make the price lower. And when you look at the TCO across a Reed There's a segment of the market that we haven't announced products in

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
DavidPERSON

0.99+

Dave VellantePERSON

0.99+

DerekPERSON

0.99+

2014DATE

0.99+

Derek DickerPERSON

0.99+

last yearDATE

0.99+

San FranciscoLOCATION

0.99+

Southern CaliforniaLOCATION

0.99+

0.01%QUANTITY

0.99+

200 bucksQUANTITY

0.99+

15%QUANTITY

0.99+

1983DATE

0.99+

SASORGANIZATION

0.99+

10%QUANTITY

0.99+

DerrickPERSON

0.99+

9,300QUANTITY

0.99+

tensQUANTITY

0.99+

JohnPERSON

0.99+

SATAORGANIZATION

0.99+

0.1001%QUANTITY

0.99+

MicronORGANIZATION

0.99+

two gigabytesQUANTITY

0.99+

last quarterDATE

0.99+

todayDATE

0.99+

OneQUANTITY

0.99+

20 2019DATE

0.99+

tomorrowDATE

0.99+

NBMEORGANIZATION

0.99+

oneQUANTITY

0.98+

more than sixQUANTITY

0.98+

bothQUANTITY

0.98+

each segmentQUANTITY

0.98+

zeroQUANTITY

0.96+

micronORGANIZATION

0.96+

SadaORGANIZATION

0.96+

pier 27LOCATION

0.95+

2019DATE

0.95+

micron insightORGANIZATION

0.95+

9,300 driveQUANTITY

0.93+

half a half a terabyteQUANTITY

0.91+

less than a needleQUANTITY

0.89+

three gigs per secondQUANTITY

0.89+

gigabyteQUANTITY

0.87+

a minuteQUANTITY

0.87+

5 million genetic variantsQUANTITY

0.86+

David foyerPERSON

0.84+

layerOTHER

0.82+

both softwareQUANTITY

0.74+

yearQUANTITY

0.74+

micron insight 2019ORGANIZATION

0.74+

few days oldQUANTITY

0.73+

few megabytes per secondQUANTITY

0.7+

Micron InsightORGANIZATION

0.7+

last couple of daysDATE

0.69+

thingsQUANTITY

0.69+

MDMEORGANIZATION

0.59+

96QUANTITY

0.59+

InsightORGANIZATION

0.46+

SadaTITLE

0.4+

terabyteQUANTITY

0.37+

SattaCOMMERCIAL_ITEM

0.35+

2019TITLE

0.27+

11 25 19 HPE Launch Floyer 4 (Do not make public)


 

from our studios in the heart of Silicon Valley Palo Alto California this is a cute conversation welcome to the cube studios for the cube conversation where we go in-depth with thought leaders driving business outcomes with technology I'm your host Peter Burris digital business and the need to drive the value of data within organizations is creating an explosion of technology in multiple domains systems networking and storage we've seen advances in flash we've seen advances in HD DS we've seen advances and all kinds of different elements but it's essential that users and enterprises still think in terms not just of these individual technologies piecemeal but as solutions that are applied to use cases now you always have to be aware of what are the underlying technology components but it's still important to think about how systems integration is going to bring them together and apply them to serve business outcomes now to have that conversation we've got David Fleur who's the CTO and co-founder of wiki bond and my colleague David welcome to the cube thank you very much Peter all right so I've just laid out this proposition that systems integration as a discipline is not gonna go away when we think about how to build these capabilities that businesses need in digital business so let's talk about that what are some of the key features of systems integration especially in the storage world that will continue to be a helps differentiate between winners and losers absolutely so you you need to be able to use software to be able to combine all these different layers and it has to be an architect software solution that will work wherever you've got equipment and where have you got data so it needs to work in the cloud it needs to work in a private cloud it needs to work at the edge all of these needs to be architected in a way which is available to the users to put where the data is going to be created as opposed to bring it all in in one super large collection of data and so we've got different types of technology at the very fastest we've got DRAM we've got we've got non-volatile DRAM which is coming very fast indeed we've got flash and there are many different sorts of flash there's obtained from Intel that may be trying to get in between there as well and then there are different HD DS as well so we got a long hierarchy the important thing is that we protect the application and the operations from all of that complexity by having an overall hierarchy and utilizing software from an integration standpoint but it suggests that when an enterprise thinks about a solution for how they store their data they need to think in terms of as you said first off physically where is it going to be secondly what kinds of services at the software level am I going to utilize to ensure that I can have a common administrative experience and the differentiated usage experience based on the physical characteristics of where it's being used and then obviously and very importantly from an administration standpoint I need to ensure that I'm not having to learn new and unique administration practices everywhere because I would just blow everything up absolutely but there is a real there's going to be in my opinion a large number of these solutions out there I mean one data architecture is not going to be sufficient for all applications they're gonna have many different architectures out there I think it's probably useful just to start with one as an example in this area just let's take one as an example and then we can see what the major characteristics of you are so let's take something that would fit in most places a mid-range type solution let's take nimble nimble storage which has a very specific architecture so it was started off by being a virtualization of all those different layers so the application sees that everything is in flash and in cash or whatever it is but where it is is totally different it can be anywhere within that hierarchy so the application sees effectively a pool of resources that it can call yes all it sees and and it doesn't know and nobody and it doesn't need to know that it's on disk or a hard disk or in in memory in in in a cache inside the controller or wherever it is so it starts with using nimble as an example nimble is successfully masking the complexities and specificities of that storage heart and from the application right so so and and that's an advantage because it's simpler but it's also needs to cover more things you need to be able to do everything within that virtualized environment so you need for example to be able to take snapshots and you the snapshots need all the metadata about the snapshots needs to be put in a separate place so one of the things you find that comes from this sort of architecture is that the metadata is separated out completely different from the actual data itself but still proximate to the data because data locality still matters absolutely has to be there but it's in a different part of a hierarchy it's much further up the hierarchy all the metadata so what we've got the metadata we've got the flash high speed we've got the the fastest which is the DRAM itself that when for writes is has a protection mechanism for that that part of the DRAM specialized hardware in that area so that allows you to do writes very very quickly indeed and then you come down to the next layer which is flash and indeed within the in the in taking the nimble example you have two sorts of flash you can have the high-speed flash at the top and if you want to you can have lower performance flash you know using the 3d quad flash or whatever it is you can have lower performance flash if that's what you need and then going lower down then you have HD DS and the architecture combines the benefits of flash with the character and the characteristics of flash with the benefits of HD d which is much lower cost but the characteristics of HD d which are slower but very suited to writing out large volumes or reading in large volumes so that's read out to the disk but where where it's all held is held in the metadata so it's really looking at the workloads that are going to be they're gonna hit the data and then with out of making the application aware of it utilizing the underlying storage hierarchy to so best support those workloads again with a virtualized interface that keeps it really simple from an administration development and runtime perspective actually all right David foyer thanks very much for being on the cube and talking about some of these new solution-oriented requirements for thinking about storage over the next few years once again I'm Peter Burris see you next time you [Music]

Published Date : May 1 2019

**Summary and Sentiment Analysis are not been shown because of improper transcript**

ENTITIES

EntityCategoryConfidence
DavidPERSON

0.99+

David FleurPERSON

0.99+

Peter BurrisPERSON

0.99+

PeterPERSON

0.99+

Silicon ValleyLOCATION

0.99+

Peter BurrisPERSON

0.99+

two sortsQUANTITY

0.95+

nimbleORGANIZATION

0.95+

oneQUANTITY

0.93+

David foyerPERSON

0.92+

Palo Alto CaliforniaLOCATION

0.91+

wiki bondORGANIZATION

0.9+

firstQUANTITY

0.83+

one ofQUANTITY

0.76+

secondlyQUANTITY

0.72+

nimbleTITLE

0.64+

11OTHER

0.64+

next few yearsDATE

0.61+

thingsQUANTITY

0.6+

HPE Launch Floyer 4TITLE

0.5+

25OTHER

0.44+

19OTHER

0.38+

Day 2 Keynote Analysis | Dell Technologies World 2019


 

>> Live from Las Vegas, it's theCUBE! Covering Dell Technologies World 2019. Brought to you by Dell Technologies and its ecosystem partners. >> Hello everyone, welcome to theCUBE's live coverage here in Las Vegas for Dell Technologies World 2019. I'm John Furrier, Stu Miniman, Dave Vellante. Day two of three days of wall-to-wall coverage. We got two sets called theCube Cannon. We've got the Cannon of Content, interviews all day long, out at night at the analyst briefings, meet-ups, receptions, talking to all the executives at Dell Technologies VMware and across the industry. Stu, Dave, today is product announcements on the keynotes. Yesterday was the grand vision with Michael Dell and the big reveal on the Microsoft partnership with Satya Nadella's surprise visit onstage, unveiling new Azure-VMware integrations with Dell Technologies. Dell announced the Dell Cloud, which is a little bit of Virtustream, but they're trying to position this cloud, I guess it's a cloud if you want to call it a single cloud of glass. Dave, single pane in the glass with a variety of other things, unified workspace and some other things. This is Dell trying to be a supplier end-to-end. This is the pitch from Dell Technologies. We'll be talking to Michael Dell, also Pat Gelsinger, the CO of VMware. Dave, were you impressed, were you shocked, were you surprised with yesterday's big news and as the products start coming online here, what's your analysis? >> Well yesterday, John, was all about the big strategic vision, Michael Dell laying out check for good and then the linchpin of Dell strategy which of course is VMware for cloud, multicloud, hybrid cloud, kind of VMware everywhere. I was surprised that Satya Nadella flew down from Seattle and was here on stage in person. Didn't come in from the big screen. So I thought that was pretty impressive. You had the three power players up on stage. Today of course was all about the products. Both Dell and EMC have always been very practical in terms of their engineering. Stu, you used to work there. Their R&D is a lot of D. It's sort of incremental product improvements to keep the customers happy, to keep ahead of the competition, to keep the lifecycle going. They had like 10 announcements today. I can go through 'em real quick if you want, but they range from new laptops to talking about new branding on servers, new storage devices. You had PowerProtect which is their new rebranded backup and data protection and data manage portfolio, an area where Dell EMC has been behind. So lots of announcements. Another kind of mega launch tradition and again, a lot of incremental but important tactical improvements to the product line. >> Last year, what we heard from Jeff Clarke is they're looking to simplify that portfolio. Back in the EMC days, it was oh my gosh, look at the breadth of this. Every category, they had two or three offerings and you know, the stated goal is to simplify that and that means most categories are going to get one product. It's interesting. You talk about networking just got rebranded with that Power branding. I kind of said there there's marketing behind it. If you know what that product is because it's the Power brand and they put it out there. So you know, PowerMax, has been their tiered storage. They had a good update for Unity. It's Unity XT. Doesn't have a power name yet so maybe there's still some dry powder left in the product portfolio there, but they're making progress going through this 'cause these things don't happen overnight. It's great to spin up the clouds, but in the storage world, customers, they trust, they have the code, they test it out. So going to new generations, making that change, does take time but you've seen that progress. The tail end of that integration between Dell and EMC on the product side. >> Stu, what's your analysis of the products so far 'cause again like Dave said, it's a slew of announcements. What's resonating, what's popping out, what's boiling up to the surface? >> Yeah so look, the area that I spent so much time on, John, that hyper-converged infrastructure. If you look at a lot of the pieces underneath it all, it's VxRail. One of the things we've had a little bit of a challenge squinting through is oh wait, there's this managed service stack, it's VxRail underneath. Oh wait I've taken the appliance and I put VCF. Oh that's VxRail and then I've got this other, it's like I see three or four solutions and I'm like is it all just VxRail with like a VMware stack on top of it? But it's how do I package it, what applications live on it, how is it consumed, manage service, op ex, cap ex. So they've got that a little bit of complexity when VxRail itself is you know, dirt simple and really there so they're making progress on the cloud piece. Dell is the leader in hyper-converged. I'll point out, you don't hear anybody talking about Nutanix here, but Dell still has a partnership on the XC Core. They're going to sell a lot of Dell servers into Nutanix environment so I expect you'll still have the Nutanix show. John you're going to be at that next week. They're still going to talk about Dell. I'm sure you'll talk to Dheeraj. Yes they made a partnership with HP, but that does not kill the relationship with Nutanix just like Microsoft, heck. I'm going to see Satya Nadella on stage at Red Hat Summit next week and you're like oh well VMware and Red Hat. Red Hat's here. Red Hat's a Dell-ready partner. If you want to put open shift on top of their stack, they can do that so hardware and software, everybody's got their pieces, everybody's got their pieces, everybody competes a lot, but they partner across the board. IBM Global Services is here. There's so many companies here. Dell's a broad company, deep partnerships. The question I have is Pat Gelsinger was just on stage saying that this SDDC will be the building block for the future. I said kudos to them. They've got it on AWS, they've got it announced with Azure, we announced it with Google, but that is not necessarily the end state. VMware is a piece of the puzzle. I don't know if VMware will be the leader in multicloud management. vCenter was the leader in virtualization management so how much of that will there or do I get an Amazon and then start moving some stuff over? Do I get to Azure and start modernizing my environment so that I don't need to pay VMware and I don't need virtualization. VMware and Dell are going to containerize everything so in the future, are they containerware, you know? That's the competition kind of post-it note. They are VMware at their core. VMware is centra of the strategy and there's still some work to go, but they're making some good progress. >> I want to get your thoughts, guys, on the role VMware is playing here at the show. Normally they're here, usually they're here, but this year it seems to be much more smoother integration of talking points, messaging, product integrations. The show's got a good beat to it. Pretty packed, but the role of VMware, Dave, Stu, what's your reaction and thoughts? We've seen them dance all the time. Obviously VMware, Dave as you pointed out yesterday, a big part of the valuation of Dell Technologies, but what's your observation on the presence of VMware here at Dell Technologies World? >> I mean I've said many times that this company and I said this about EMC, it's kind of a boring company without VMware. You put VMware in the mix and all of a sudden, it becomes very strategic and very interesting from a lot of standpoints. Certainly from a financial standpoint. Remember, the Class V transaction that took Dell public was the result of an $11 billion dividend because of VMware. They took VMware's cash and they said okay, we're going to give nine billion to the shareholders. Without VMware, that wouldn't have happened. As well, the multicloud strategy, the underpinning of that multicloud strategy is VMWare. What strikes me, John and Stu, is that the cultural change. You had Dell, you had EMC. They said ah yeah the companies are compatible, but they're different companies. They maybe had shared kind of goals and values, but they had different cultures and really in a short timeframe, Michael Dell and his team have put these two companies together and they have aligned in a big way. I mean they are basically saying VMware and Dell, boom. That's how we're going to market and you know, Pat's coming on later today and I'm sure he'll say hey we love NetApp, we love HBE, we love IBM, but it's clear what the preferred partnership is. >> Dave, when the acquisition happened, there was talks of synergies and we were like oh where are they going to cut everything? If I look around here, they've got the seven logos of the primary companies. It's Dell, Dell EMC, Pivotal, RSA, Secureworks, Virtustream and VMware. They're one company. Michael Dell will go on calls for any of them. Friends of mine at Pivotal says you talk to Michael quite a bit. You know, he's out there. We talked about it yesterday. Dell and VMware are closer and tighter aligned than EMC and VMware ever were. Now on the one hand, EMC kept them separate because the growth of virtualization required that. Today in this cloud environment, it's a different world and it's matured so VMware, sure, there's still work on HP and IBM and all this other stuff, but Dell leads that move as you said, Dave. >> John, you're big on culture. This is a founder culture. What's your take on what Michael Dell has accomplished and how does it stand to compare with sort of other great cultural transformations that you've seen? >> Well I think HBE is a great example of a culture that split, was uncharged there. We know what happened there and I think they're hurting, they're losing talent and they're not winning in categories across the board like Dell is. I think Michael Dell, the founder-led approach that he's having 'cause he told us years ago, if you guys remember, here on the record, also privately that I'm going to take this off the table with EMC and I'm going to do all these things. We're going to execute. So he brought his execution mojo and ecos of Dell and become Dell Technologies, as Stu pointed out, a portfolio of multiple companies under one umbrella and he brought the execution discipline and this is a theme, Dave. Last night at the analysts reception, as I was talking to other analysts and talking to some of the execs, both from VMware and Dell Technologies, that the execution performance across the board both on product integration, which was a weak spot as you know, is getting better, the business performance discipline. We're going to have the CFO on here to talk more about it, they're executing. Howard Elias is going to be on this afternoon. He called this three years ago when he was talking about the integration that they saw synergies, they saw opportunities and they were going to unpack those. They stayed relentless on that. So I think this is a great example of keeping the founders around for all the VC-backed companies. You're thinking about getting rid of founders. Never let a founder leave a company. They bring the vision, they bring also some guts and grit and they bring a perspective and you can put great talent and team around that, that attract and retain great executives like Michael's done and he's poaching HPE, other companies and pulling talent in 'cause they're executing. They pay well, it's a great place to work according to the statistics. So again, this is all because of the founder and if the founder's not around, you have all the fiefdoms and the policists who kick in and then it becomes kind of sideways. So that's kind of what I see other companies that don't have founders around and HP lost their founders obviously and then the culture kind of went a little bit sideways. So they're trying to get back in the game, seeing them go back to their roots. We'll see how they do. We don't do that show anymore and again we don't have a lot of visibility into what HP's doing but we do know, Dave, that they do not have a lot of the pieces on the board that Dell does. So if you want to have an end-to-end operating model, and you're missing key value activities of an end-to-end value chain, that's going to be hard to automate, it's hard to be a performant, it's going to be hard to be successful. So I think Dell is showing the playbook of how to be horizontally scalable operationally and offer perspectives and data-driven specialism in any industry in any vertical. >> Yeah Dave, if I can just on the cultural piece 'cause it's really interesting. You talked about EMC, East Coast hard driving versus VMware, software, Silicon Valley company. While they're working together, a lot of it, you know, I talk to VMware people and they're like well it's great the Dell force is just selling our stuff. It's not like I'm having storage shoved down my throat or we have to have our arms twisted. It's the product portfolio that they're selling, the vSAN, NSX, the management software suite and those pieces, things like SD-WAN, there's some good synergies there. So the product portfolio is a nice fit that just jointly go out to market that they just really line up well together and Dell's a very different cultural beast than EMC was. >> Well again, staying on culture for a moment, when I discussed with some of the folks that I know out of Hopkinton the narrative early on was oh Dell's ruining EMC, tearing it apart and so forth. When you talk to people today, they say, you know what, it was painful. Dell came in and said okay, you're going to be accountable, really had an accountability culture, but now they've come out the other side, the narrative is it was the right thing to do. Jeff Clarke came in and sort of forced this alignment. There's like no question about it. People, this is a guy who you know, his calendar's set for the year. People know where he's going to be, what meeting he's going to have, what's expected and they're prepared and it seems to be taking hold. I mean if a $90 billion company that's growing at 14% in revenues, in profitable revenues, that's quite astounding when you think about it and I think it's a big result of the speed at which Dell has brought in its operating model to the broader EMC and transformed itself. It's quite amazing. >> Awesome show, guys. We've got clips out there on the #DellTechWorld on Twitter. We've got a lot of videos. We've got two sets here, three days of wall-to-wall coverage. Final word on this intro for day two, guys. Thoughts on the show? It's not a boring show. It's a lot of activities, a lot of things. They've got an Alienware eSports gaming studio which I think is totally badass. A lot of kind of cool things here. It's not the glitz and glam that we've seen in other EMC Worlds before or Dell Worlds, but it's meat and potatoes and it's got a spring to its step here. I feel it's not, it feels good. That's my takeaway. >> Well the big theme is hybrid cloud and multicloud. Jon Rowe as we were leaving the room today that we were early with that multicloud. Thanks for everybody else in the industry for hopping on board. The reality is the first time I heard the sort of hybrid cloud was called private cloud. Chuck Hollis wrote a blog back in the mid to late 2000s. Now I will make an observation in the customers that I talk to. Multicloud is not thus far, has not thus far has been a deliberate strategy. In my opinion, it's been the outcropping of multivendor, shadow IT, lines of business and I think the corner office is saying hold on, we need to reign this in, we need to have a better understanding of what our cloud strategy is, build a platform that is hybrid and sure, multicloud, to build our digital transformation. We need IT to basically help us build this out to make sure we comply with the corporate edicts and that's what's happening. It is early days. There's a long way to go. >> Yeah, as Dave, as you know, I sat right down the hallway from Chuck Hollis when he wrote that piece and I went and I called up Chuck and I was like hey Chuck, this sure sounds like my next generation virtual data center stuff that I joined the CTO office to work on and he's like yeah, yeah, new marketing branding and I wrote a piece, exactly what you said, Dave, on Wikibon.com, hybrid and multicloud were a bunch of pieces, you know. It's not a cohesive strategy. The management's not there. We're starting to see maturation. Some of the point products, you know, developed really fast. When we talk about VMware on AWS, that happened really fast. I heard if you stop by the VMware booth here at the show, they're showing outposts and I said is a diagram? No, no, I've got customers in production running this. I'm like hold on, I need to hear about this. Outpost in production? But that strategy as you said, hybrid and multicloud, we're starting to get there, starting to pull it together. David Foyer wrote a phenomenal piece about hybridcloud taxonomy. We've spent a lot of time on the research side. Really what does the industry need to do, how should customers think about all of the layers? You know, data and networking and all of these components to help make not just a bunch of pieces but actually drive innovation and help be better than the sum of its parts. >> Well ironic followup on that post, the Chuck Hollis post was around they called it the private cloud and it was all about homogeneity and now multicloud is everything but homogeneous. Outpost, however, is. Same hardware, same software, same control plane, same data plane so interesting juxtaposition. >> We'll see Amazon Outpost. Guys, go to SiliconAngle.com, Wikibon.com. Great hybridcloud, multicloud analysis coverage and news. And some of the headlines hitting the net here. Dell Technologies makes VMware linchpin of hybrid cloud, data center as a service, end user strategies from Zdnet. eWEEK, Dell makes major hybrid cloud push. Obviously great analysis, guys, right on the number. Day two, CUBE coverage here in Las Vegas. I'm John Furrier, Dave Vellante, Stu Miniman. We've got two sets. Rebecca Knight, Lisa Martin and more. Stay tuned for more coverage of day two after the short break. (upbeat music)

Published Date : Apr 30 2019

SUMMARY :

Brought to you by Dell Technologies and the big reveal on the Microsoft partnership Didn't come in from the big screen. and that means most categories are going to get one product. Stu, what's your analysis of the products so far but that does not kill the relationship with Nutanix is playing here at the show. What strikes me, John and Stu, is that the cultural change. of the primary companies. and how does it stand to compare with sort of other and if the founder's not around, you have all the It's the product portfolio that they're selling, and they're prepared and it seems to be taking hold. and it's got a spring to its step here. in the customers that I talk to. Some of the point products, you know, the private cloud and it was all about homogeneity And some of the headlines hitting the net here.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Dave VellantePERSON

0.99+

Rebecca KnightPERSON

0.99+

MichaelPERSON

0.99+

DavePERSON

0.99+

Jeff ClarkePERSON

0.99+

IBMORGANIZATION

0.99+

Stu MinimanPERSON

0.99+

Chuck HollisPERSON

0.99+

Lisa MartinPERSON

0.99+

John FurrierPERSON

0.99+

Pat GelsingerPERSON

0.99+

MicrosoftORGANIZATION

0.99+

JohnPERSON

0.99+

EMCORGANIZATION

0.99+

VMwareORGANIZATION

0.99+

HPORGANIZATION

0.99+

DellORGANIZATION

0.99+

Dell TechnologiesORGANIZATION

0.99+

twoQUANTITY

0.99+

Satya NadellaPERSON

0.99+

RSAORGANIZATION

0.99+

SeattleLOCATION

0.99+

$11 billionQUANTITY

0.99+

David FoyerPERSON

0.99+

PivotalORGANIZATION

0.99+

Michael DellPERSON

0.99+

Las VegasLOCATION

0.99+

ChuckPERSON

0.99+

GoogleORGANIZATION

0.99+

StuPERSON

0.99+

SecureworksORGANIZATION

0.99+

Jon RowePERSON

0.99+

AmazonORGANIZATION

0.99+

14%QUANTITY

0.99+

IBM Global ServicesORGANIZATION

0.99+

nine billionQUANTITY

0.99+

VirtustreamORGANIZATION

0.99+

yesterdayDATE

0.99+

Wikibon Action Item | Wikibon Conversation, February 2019


 

(electronic music) >> Hi, I'm Peter Burris. Welcome to Wikibon action item from theCUBE Studios in Palo Alto, California. So today we've got a great conversation and what we're going to be talking about is hybrid cloud. Hybrid cloud's been in the news a lot, lately. Larger consequences, from changes made by AWS, as they announced Outpost, and acknowledged for the first time that there's going to be a greater distribution of data and a greater distribution of function as enterprises, move to the cloud. We've been on top of this for quite some time, and have actually coined what we called true hybrid cloud. Which is the idea that increasingly we're going to see a need for a common set of capabilities and services, in multiple locations, so that the cloud can move to the data, and not the data automatically being presumed to move to the cloud. Now to have that conversation and to reveal some new research on what the cost and value propositions of the different options are that are available today. We've got David Foyer, David welcome to theCUBE. >> Thank you. >> So David, let's start, when we talk about hybrid cloud, we are seeing, a continuum of different options starting to emerge. What are the defining characteristics? >> So, yes, we're seeing a continuum emerging. We have a what we call stand alone of course at one end of the spectrum, and then we have multi cloud, and then we have loosely and tightly coupled, and then we have true, and as you go up the spectrum. So the dependents upon data depend on the data plain, dependents upon low latency, dependents on writing a systems of record, records. All of those increase as we're going from high latency and high bandwidth all the way up to low latency. >> So let me see if I got that right. So true hybrid cloud is at one end. >> Yes. >> And true hybrid cloud is, low latency, right on your work loads, simple as possible administration. That means we are typically going to have, a common stack in all locations. >> Yes. >> Next to that is this notion of tightly coupled, hybrid cloud, which could be higher latency, write oriented, could probably has a common set of software, on all nodes, that handles state. And then kind of this notion of loosely coupled. Multi well hybrid cloud, which is high latency, read oriented, which may have just API level coordination and commonality on all nodes. >> Yep that's right, and then you go down even further to just multi cloud, where you're just connecting things and each of them is independent of each other. >> So if I'm a CIO and I'm looking at a move to a cloud, I have to think about green field applications and the natural distribution of data for those green field applications, and that's going to help me choose which class of hybrid cloud, I'm going to use. But let's talk about the more challenging set of scenarios for most CIO's, which is the existing legacy applications. >> The systems of record. >> Yeah, the systems of record as I try to bring those, those cloud like experience to those applications, how am I going through that thought process? >> So, we have some choices, the choices are I could move it up too lift and shift, up to one of the cloud's, one of the large cloud's, many of them are around, and what if I, if I do that, what I need to be looking at is, what is the cost of moving that data, and what is the cost of pushing that up into the cloud, and what's the conversion cost, if I needed to move, to another database. >> And I think that's the biggest one, so that's just the cost of moving the data, which is just an ingress cost, it's a cost of format changes. >> Absolutely >> You know, migration and all the other elements, conversion changes et cetera. >> Right, so what I did in my research was focus on systems of record, the highly expensive, very, very important systems of record, which obviously are fed by a lot of other things, you know, systems of engagements, analytics et cetera. But those systems of record have to work. You need to know if you've taken an order. You need to have consistency about that order. You need to know always that you can recover any data, you need in your financials et cetera, all of that is mission critical systems of record. And that's the piece that I focused on here, and I focused on. >> So again these are low latency. >> Very, low latency, yes. >> Write oriented. >> Very write oriented, types of applications, and, I focused on Oracle because the majority, of systems of record, run on Oracle databases, the large scale ones at least. So, that's what we are focusing on here. So I, looking at the different options for a CIO, of how they would go, and there are three main options open at the moment, there's Oracle, Cloud, Cloud at customer, which gives the cloud experience. There is Microsoft Azure Stack, which has a Oracle database version of it, and Outposts, but we eliminated Outposts not because, it's not going to be any good, but because it's not there yet. >> You can't do research on it if it doesn't exist yet. >> (laughs) That's right. So, we focused on Oracle and Azure, and we focused on, what was the benefit of moving from a traditional environment, where you've got best of breed essentially on site, to this cloud environment. >> So, if we think about it, the normal way of thinking about this kind of a research, is that people talk about R.O.I, and historically that's been done by looking, by keeping the amount of work that's performed, as given, constant and then looking at how the different technology components compare from a call standpoint. But a move to Cloud, the promise of a move to Cloud is not predicated on lowering costs per say. You may have other financial considerations of course but, it's really predicated on the notion of the cloud experience. Which is intended to improve business results, so if we think about R.O.I, as being a numerator question, with the value is the amount of work you do, versus a denominator question which is what resources are consumed to perform that work. It's not just the denominator side, we really need to think about the numerator side as well. >> The value you are creating, yes. >> So, what kind of thing's are we focused on when we think about that value created, as a consequence of possibilities and options of the Cloud. >> Right, so both are important, so obviously when you move, to a cloud environment, you can simplify operations in particular, you can simplify recovery, you can simplify a whole number of things within the IT shop and those give you extra resources. And then the question is, do you just cash in on those resources and say okay I've made some changes, or do you use those resources to improve the ability of your systems to work. One important characteristic of IT, all IT and systems of record in particular, is that you get depreciation of that asset. Over time it becomes less fitted, to the environment, that it started with, so you have to do maintenance on it. You have to do maintenance and work, and as you know, most work done in an IT shop is on the maintenance side. >> Meaning it's an enhancement. >> It's maintenance and enhancement, yes. So making more resources available, and making it easier to do that maintenance, and making less things that are going to interfere with that, faster time to maintenance, faster time to new applications or improvements. Is really fundamental to systems of record. So that is the value that you can bring to it and you also bring value with lower better availability, higher availability as well. So those are the thing's we have put into the model, to see how the different approaches, and we were looking at really a total, one supplier being responsible for everything, which was the Oracle environment, and Oracle Cloud at Customer, to a sort of hybrid environment, more hybrid environment, where you had. >> Or mixed, or mixed. >> Mixed environment, yes. Where you had the equipment coming from different places. >> One vendor. >> The service, the Oracle, the Azure service, coming from Microsoft, and of course the database coming then from Oracle itself. And we found tremendous improvement in the value that you could get because of the single source. We found that a better model. >> So, the common source lead to efficiencies, that then allowed a business to generate new classes, of value >> Correct. >> Cause' as you said, you know, 70 plus percent of an IT or business is spent on technology, is associated with maintaining what's there, enhancing what's there, and a very limited amount is focused on new green field, and new types of applications. So if you can reduce the amount of time and energy, that goes into that heritage set of applications, those systems of record, then that opens up, that frees up resources to do some other things. >> And, having the flexibility now with things like Azure Stack and in the future AWS, of putting that resource either on premise or in the cloud, means that you can make decisions about where you process these things, about where the data is, about where the data needs to be, the best placement for the data for what you're trying to do. >> That decision is predicated on things like latency, but also regulatory environment, intellectual property control. >> And the cost of moving data up and down. So the three laws of the cloud, so having that flexibility of keeping it where you want to is a tremendous value in again, in terms of, the speed of deployment and the speed of improvement. >> So we'll get to the issues surrounding the denominator side of this. I want to come back to that numerator side. So the denominator again is, the resources consume, to deliver the work to the business, but when we talk about that denominator side, perhaps opening up additional monies, to do new types of development, new types of work. But, take us through some of the issues like what is a cloud experience associated with, single vendor, faster development, give us some of the issues that are really driving the value proposition above the line. >> The whole issue about Cloud is that you take away all of the requirements to deal with the hardware, deal with the orchestration of the storage, deal with all of these things, so instead of taking weeks, months to put in extra resources, you say, I want them, and it's there. >> So you're taking administrative tasks, out of the flow. >> Out of the flow yes. >> And as a consequence, things happen faster, so time of value is one of the first ones, give us another one. >> So, obviously the ability to have. It's a cloud environment, so if you're a vendor, of that cloud, what you want to be able to do, is to make incremental changes, quickly as opposed to waiting for a new release and work on the release basis. So that fundamental speed to change, speed to improve, bring in new features, bring in new services, a cloud first type model, that is a very powerful way for the vendor to push out new things, and for the consumer to absorb them. >> Right, so the first one is time to value, but also it's lower cost to innovation. >> Yes, faster innovation, ability to innovate. And then the third most important part is, if you re-invest those resources that you have saved into new services, new capabilities of doing that, to me the most important thing, long term for systems of record is to be able to make them go faster, and use that extra latency time there to bring in systems of analytics, AI systems, other systems, and provide automation of individual business processes, increased automation. That is going to happen overtime, that's a slow adding to it, but it means you can use those cloud mechanisms, those additional resources, wherever they are. You can use those to provide, a clear path to, improving the current systems of record. And that is a more faster and more cost effective way, than going in for a conversion, or moving the data up to the cloud or lift and shift, for these types of applications. >> So these are all kind of related, so I get superior innovation speeds, because I'm taking new technology and faster, I get faster time to value, because, I'm not having to perform a bunch of tasks, and I can get, I can imbue additional types of work, in support of automation, without dramatically, expanding the transactional latency and a rival way of transactions within the system of record. Okay so, how did Oracle and Azure, with Oracle, stack up in your analysis? >> So first of all important is both are viable solutions, they both would work. Okay, but the impact in terms of the total business value including obviously any savings on people and things like that, was 290 nearly, $300 million additional, this was for a >> For how big a company? >> For a fortune 2000 customer, so it was around two billion dollars, so a lot of money, over five years, a lot of money. Either way, you would save 200 million, if you were with the Azure, but 300 with the Oracle. So that to me is far, far higher than the costs of IT, for that particular company, it's a strategic decision, to be able to get more value out quicker, and for this class of work load, on Oracle, then Oracle at Cloud was the best decision, to be absolutely fair, if you were on Microsoft's database, and you wanted to go to Microsoft Azure, that would be the better bet. You would get back a lot of those benefits. >> So stay within the stack if you can. >> Correct. >> Alright, so, two billion dollars a year, five years. $10 billion revenue, roughly. >> Between 200 million in saving, for One Microsoft, Azure, plus Oracle. 300 million so a 1% swing, talk to us about speed, value what happens in, a numerator side of that equation? >> So, it is lower in cost, but you have a higher, the cost of the actual cloud, is a little higher, so, overall the pure hardware, equipment class is a wash, it's not going to change much. >> Got it. >> It might be a little bit more expensive. You make the savings, as well because of the people, less operators, simpler environment. Those are the savings you're going to make, and then you are going to push those back, into the organization, as increase value that can be given to the line of the business. >> So the conclusion to the research is that if you are a CIO, you look at your legacy application, it's going to be difficult to move and you go with the stack that's best for those, legacy applications. >> Correct. >> So the vast majority of systems of record, are running on Oracle. >> Large scale. >> Large scale, then that means Oracle Cloud at Customers, is the superior fit for most circumstances. >> For a lot of those. >> If you're not there though then look at other options. >> Absolutely. >> Alright, David Foyer. >> Thank you. >> Thanks, very much for being on the cube today. And you've been watching another Wikibon action item, from theCUBE Studios in Palo Alto California, I'm Peter Burris, thanks very much for watching. (electronic music)

Published Date : Feb 19 2019

SUMMARY :

and acknowledged for the first time that there's of different options starting to emerge. and then we have true, and as you go up the spectrum. So let me see if I got that right. That means we are typically going to have, Next to that to just multi cloud, where you're just connecting things and that's going to help me choose which class of if I needed to move, to another database. so that's just the cost of moving the data, You know, migration and all the other elements, You need to know always that you can recover any data, So again these So I, looking at the different options for a CIO, and we focused on, what was the benefit of a move to Cloud is not predicated as a consequence of possibilities and options of the Cloud. You have to do maintenance and work, and as you know, So that is the value that you can bring to it Where you had the equipment coming from different places. in the value that you could get So if you can reduce the amount of time and energy, of putting that resource either on premise or in the cloud, That decision is predicated on things like And the cost of moving data up and down. So the denominator again is, the resources consume, all of the requirements to deal with the hardware, so time of value is one of the first ones, and for the consumer to absorb them. Right, so the first one is time to value, adding to it, but it means you can use those I get faster time to value, Okay, but the impact in terms of the total business value So that to me is far, far higher than the costs of IT, Alright, so, two billion dollars a year, five years. 300 million so a 1% swing, talk to us about the cost of the actual cloud, is a little higher, that can be given to the line of the business. So the conclusion to the research is that So the vast majority of systems of record, is the superior fit for most circumstances. And you've been watching another Wikibon action item,

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Peter BurrisPERSON

0.99+

David FoyerPERSON

0.99+

DavidPERSON

0.99+

MicrosoftORGANIZATION

0.99+

$10 billionQUANTITY

0.99+

AWSORGANIZATION

0.99+

February 2019DATE

0.99+

OracleORGANIZATION

0.99+

200 millionQUANTITY

0.99+

thirdQUANTITY

0.99+

Palo Alto, CaliforniaLOCATION

0.99+

1%QUANTITY

0.99+

first timeQUANTITY

0.99+

bothQUANTITY

0.99+

Palo Alto CaliforniaLOCATION

0.99+

OneQUANTITY

0.99+

70 plus percentQUANTITY

0.99+

todayDATE

0.99+

fiveQUANTITY

0.98+

around two billion dollarsQUANTITY

0.98+

over five yearsQUANTITY

0.98+

300 millionQUANTITY

0.98+

yearsQUANTITY

0.98+

Azure StackTITLE

0.98+

three lawsQUANTITY

0.97+

first oneQUANTITY

0.97+

theCUBE StudiosORGANIZATION

0.97+

$300 millionQUANTITY

0.97+

three main optionsQUANTITY

0.97+

One vendorQUANTITY

0.97+

oneQUANTITY

0.96+

eachQUANTITY

0.95+

single sourceQUANTITY

0.94+

first typeQUANTITY

0.94+

300QUANTITY

0.93+

singleQUANTITY

0.92+

AzureORGANIZATION

0.92+

290QUANTITY

0.91+

one endQUANTITY

0.88+

firstQUANTITY

0.88+

OutpostORGANIZATION

0.88+

one supplierQUANTITY

0.88+

AzureTITLE

0.86+

first onesQUANTITY

0.85+

two billion dollars a yearQUANTITY

0.84+

WikibonEVENT

0.78+

WikibonTITLE

0.71+

characteristicQUANTITY

0.68+

2000 customerQUANTITY

0.67+

WikibonORGANIZATION

0.67+

Oracle CloudTITLE

0.58+

theCUBEORGANIZATION

0.56+

CloudORGANIZATION

0.38+

CloudTITLE

0.35+

Wikibon Action Item, Cloud-first Options | Wikibon Conversation, February 2019


 

>> Hi, I'm Peter Burroughs Wellcome to wicked bon action >> item from the Cube Studios in Palo Alto, California So today we've got a great conversation, and what we're going to be talking about is hybrid cloud hybrid. Claude's been in the news a lot lately. Largest consequences from changes made by a Ws is they announced Outpost and acknowledged for the first time that there's going to be a greater distribution of data on a greater distribution of function as enterprise has moved to the cloud. We've been on top of this for quite some time, and it actually coined what we call true hybrid cloud, which is the idea that increasingly, we're going to see a need for a common set of capabilities and services in multiple locations so that the cloud could move to the data and not the data automatically being presumed to move to the cloud. >> Now to have that >> conversation and to reveal some new research on what the cost in value propositions of the different options are available. Today. We've >> got David Foyer. David. Welcome to the Cube. Thank you. So, David, let's start. When we talk about Hybrid Cloud, we're seeing a continuum of different options start to emerge. What are the defining characteristics? >> Yes, we're seeing it could continue him emerging. We have what we've called standalone off course. That one is end of the spectrum on DH. There we have multi cloud, and then we have loosely and tightly coupled, and then we have true and as you go up the spectrum. So the dependence upon data depends on the data plane dependence upon low latent see dependance on writing does a systems of record records. All of those increase as we going from from lonely for High Leighton Sea and High Band with all way up to low late. >> So let me see if I got this right. It's true. I've a cloud is at one end and true. Either cloud is low late and see right on into workloads simplest possible administration. That means we're typically goingto have a common stack in all locations. Next to that is this notion of tightly coupled hybrid cloud, which could be higher late. And see, right oriented could probably has a common set of software on all no common mental state. And then, kind of this. This notion of loosely coupled right multi or hybrid cloud, which is low, high late and see, write or read oriented, which may have just a P I level coordination and commonality and all >> that's right. And then you go down even further to just multi cloud, where you're just connecting things, and each of them is independent off each other. >> So if I'm a CEO and I'm looking at a move to a cloud, I have to think about Greenfield applications and the natural distribution of data for those Greenfield applications. And that's going to help me choose which class of hybrid clawed him and he used. But let's talk about the more challenging from a set of scenarios for most CEOs, which is the existing legacy applications as I cry that Rangel yeah, systems of record. As I try to bring those those cloud like experience to those applications, how am I going through that thought process? >> So we have some choices. The choices are I could move it up to lift and shift up to on a one of the clouds by the large clouds, many of them around. And what if I if I do that what I'm need to be looking at is, what is the cost of moving that data? And what is the cost of pushing that up into the cloud and lost the conversion cast if I need to move to another database, >> and I think that's the biggest one. So it just costs of moving the data, which is just uninterested. It's a cost of format changes at our migration and all the other out conversion changes. >> So what I did in my research was focus on systems of record, the the highly expensive, very, very important systems of record, which obviously are fed by a lot of other things their systems, the engagement analytics, etcetera. But those systems of record have to work. They you need to know if you've taken on order, you need to have consistency about that order. You need to know always that you can recover any data you need in your financials, etcetera. All of that is mission critical systems of record. Andi, that's the piece that I focused on here, and I focused on >> sort of. These are loaded and she >> low, very low, latent, right oriented, very right orientated types of applications. And I focused on the oracle because the majority ofthe systems of record run on Oracle databases on the large scale ones, at least so that's what we're we're focusing on here. So I looking at the different options for a C I O off. How they would go on DH. There are three main options open at the moment. There's there's Arkalyk Cloud Cloud, a customer, which gives thie the cloud experience. There is Microsoft as your stack, which has a a Oracle database version of it on DH outposts. But we eliminated outposts not because it's not going to be any good, but because it's not there yet, is >> you get your Razor John thing. >> That's right. So we focused on Oracle on DH as you and we focused on what was the benefit of moving from a traditional environment where you've got best of breed essentially on site to this cloud environment. >> So if we think about it, the normal way of thinking about this kind of a research is that people talk about R. A Y and historically that's been done by looking by keeping the amount of work that's performed has given constant and then looking at how the different technology components compare from a call standpoint. But a move to cloud the promise of a move to cloud is not predicated on lowering costs per se, but may have other financial considerations, of course, but it's really predicated on the notion of the cod experience, which is intended to improve business results. So we think about our lives being a numerator question. Value is the amount of work you do versus the denominator question, which is what resources are consumed to perform that work. It's not just the denominator side we really need to think about. The numerator side is well, you create. So what? What kind of things are we focused >> on? What we think about that value created his consequence of possibilities and options of the cloud. >> Right? So both are important. So so Obviously, when you move to a cloud environment, you can simplify operations. In particular, you can simplify recovery. You, Khun simplify a whole number of things within the shop and those give you extra resources on. Then the question is, Do you just cash in on those resources and say OK, I've made some changes, Or do you use those resources to improve the ability of your systems to work and one important characteristic off it alight and systems of record in particular is that you get depreciation of that asset. Over time, it becomes less fitted to the environment it has started with, so you have to do maintenance on it. You have to do maintenance and work, and as you know most means most work done in my tea shop is on the maintenance side minutes. An enhancement. It's maintenance. An enhancement, yes. So making more resources available on making it easier to do that maintenance are making less, less things that are going to interfere with that faster time to to to maintenance faster time. Two new applications or improvements is really fundamental to systems of record, so that is the value that you can bring to it. And you also bring value with lower of better availability, higher availability as well. So those are the things that we put into the model to see how the different approaches. And we were looking at really a total one. One supplier being responsible for everything, which was the Oracle environment of Oracle clouded customer to a sort of hybrid invite more hybrid environment where you had the the the work environment where you had the equipment coming from different place vendor that the service, the oracle, the as your service coming from Microsoft and, of course, the database coming then from Arkham itself. And we found from tremendous improvement in the value that you could get because of this single source. We found that a better model. >> So the common source led to efficiencies that then allowed a business to generate new classes of value. Because, as you said, you know, seventy plus percent of a night organ orb business is spending. Biology is associate with maintaining which they're enhancing. What's there in a very limited amount is focused on new greenfield or new types of applications. So if you can reduce the amount of time energy that goes into that heritage set of applications those systems of record, the not opens up that frees up resources to do some of the things >> on DH Having inflexibility now with things like As your stack conned in the future E. W. S off. Putting that resource either on premise or in the cloud, means that you can make decisions about where you process things things about where the data is about, where the data needs to be, the best placement of the data for what you're trying to do >> and that that decision is predicated on things like late in sea, but also regulatory, environment and intellectual property, controlling >> the custom moving data up and down. So the three laws of off off the cloud so having that flexibility of moving, keeping it where you want to, is a tremendous value in again in terms ofthe the speed of deployment on the speed of improved. >> So we'll get to the issues surrounding the denominator side of this. I want to come back to that numerator sites that the denominator again is the resources consumed to deliver the work to the business. But when we talk about that denominator side, know you perhaps opening up additional monies to do new types of development new times of work. But take us through some of the issues like you know what is a cloud experience associated with single vendor Faster development. Give us some of the issues that are really driving the value proposition. Look above the line. >> I mean, the whole issue about cloud is that you go on, take away all of the requirements to deal with the hardware deal with the orchestration off the storage deal with all of these things. So instead of taking weeks, months to put in extra resources, you say I want them on is there. >> So you're taking out administrate your taking administrative tasks out of the flow out of the flow, and as a consequence, things happen. Faster is the time of values. One of the first one. Give us another one. >> So obviously the ability to no I have it's a cloud environment. So if you're a vendor of that cloud, what you want to be able to do is to make incremental changes quickly, as opposed to awaiting for a new release and work on a release basis. So that fundamental speed to change speed to improve, bring in new features. Bringing new services a cloud first type model that is a very powerful way for the vendor to push out new things. And for the consumer, too, has absorbed them. >> Right? So the first one is time to value, but also it's lower cost to innovation. >> Yes, faster innovation ability to innovate. And then the third. The third most important part is if you if you re invest those resources that you've saved into new services new capabilities of doing that. To me, the most important thing long term for systems of record is to be able to make them go faster and use that extra Leighton see time there to bring in systems off systems of analytics A. I systems other systems on provide automation of individual business processes, increased automation that is gonna happen over time. That's that's a slow adding to it. But it means you can use those cloud mechanisms, those additional resources, wherever they are. You can use those to provide a clear path to improving the current systems of record. And that is a much faster and more cost effective way than going in for a conversion or moving the data upto the cloud or shifting lift and shift. For these types of acts, >> what kind of they're all kind of related? So I get, I get. I get superior innovation speeds because I'm taking new technology and faster. I get faster time to value because I'm not having to perform much of tasks, and I could get future could imbue additional types of work in support of automation without dramatically expanding the transactional wait and see on arrival rate of turns actions within the system of record. Okay, So how did Oracle and Azure with Oracle stack up in your analysis? >> So first of all, important is both a viable solutions. They both would work okay, but the impact in terms of the total business value, including obviously any savings on people and things like that, was two hundred nineteen eighty three hundred million dollars additional. This was for Robert to come in for a a Fortune two thousand customer, so it was around two billion dollars. So a lot of money over five years, a lot of money. Either way, you would save two hundred million if you were with with the zero but three hundred with the oracle, so that that to me, is far, far higher than the costs of I T. For that particular company, it's It is a strategic decision to be able to get more value out quicker. And for this class of workload on Oracle than Arkalyk, Cloud was the best decision to be absolutely fair If you were on Microsoft's database. And you wanted to go to Microsoft as you. That would be the better bet you would. You would get back a lot of those benefits, >> so stay with him. The stack, if you can't. Correct. All right, So So two billion dollars a year. Five years, ten billion dollars in revenue, roughly between two hundred million and saving for one Congress all around three. Treasure Quest. Oracle three hundred millions were one percent swing. Talk to us about speed value. What >> happens in the numerator side of that equation >> S Oh, so it is lower in caste, but you have a higher. The cast of the actual cloud is a little a little higher. So overall, the pure hardware equipment Cass is is awash is not going to change much. It might be a little bit more expensive. You make the savings a cz? Well, because of the people you less less operators, simpler environment. Those are the savings you're going to make. And then you're going to push those back into into the organization a cz increased value that could be given to the line of business. >> So the closure of the researchers If your CEO, you look at your legacy application going to be difficult to move, and you go with stack. That's best for those legacy applications. And since the vast majority of systems of record or running all scale large scale, then that means work. A cloud of customers is a superior fit for most from a lot of chances. So if you're not there, though, when you look at other options, all right, David Floy er thank you. Thanks very much for being on the Cube today, and you've been watching other wicked bon action >> item from the Cube Studios and Power Rialto, California on Peter Burke's Thanks very much for watching.

Published Date : Feb 4 2019

SUMMARY :

capabilities and services in multiple locations so that the cloud could move to the data conversation and to reveal some new research on what the cost in value propositions of the different options are What are the defining characteristics? So the dependence upon data Next to that is this notion of tightly coupled And then you go down even further to just multi cloud, So if I'm a CEO and I'm looking at a move to a cloud, I have to think about Greenfield and lost the conversion cast if I need to move to another database, So it just costs of moving the data, which is just uninterested. You need to know always that you can recover any data you These are loaded and she So I looking at the different So we focused on Oracle on Value is the amount of work you do versus What we think about that value created his consequence of possibilities and options of the cloud. of record, so that is the value that you can bring to it. So the common source led to efficiencies that then allowed a business to generate new premise or in the cloud, means that you can make decisions about where you process things So the three laws of again is the resources consumed to deliver the work to the business. go on, take away all of the requirements to deal with the hardware One of the first one. So obviously the ability to no So the first one is time to value, but also it's lower cost in for a conversion or moving the data upto the cloud or shifting lift I get faster time to value because I'm not having to is far, far higher than the costs of I T. For that particular company, Talk to us about speed value. Well, because of the people you less less operators, simpler environment. So the closure of the researchers If your CEO, you look at your legacy application going to be difficult to

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
DavidPERSON

0.99+

MicrosoftORGANIZATION

0.99+

RobertPERSON

0.99+

February 2019DATE

0.99+

ten billion dollarsQUANTITY

0.99+

one percentQUANTITY

0.99+

two hundred millionQUANTITY

0.99+

ClaudePERSON

0.99+

David FoyerPERSON

0.99+

zeroQUANTITY

0.99+

Five yearsQUANTITY

0.99+

OracleORGANIZATION

0.99+

thirdQUANTITY

0.99+

todayDATE

0.99+

bothQUANTITY

0.99+

TodayDATE

0.99+

ArkalykORGANIZATION

0.99+

Power RialtoORGANIZATION

0.99+

oneQUANTITY

0.99+

Palo Alto, CaliforniaLOCATION

0.99+

three hundred millionsQUANTITY

0.99+

first timeQUANTITY

0.99+

OneQUANTITY

0.99+

two hundredQUANTITY

0.99+

seventy plus percentQUANTITY

0.99+

Cube StudiosORGANIZATION

0.98+

around two billion dollarsQUANTITY

0.98+

oracleORGANIZATION

0.98+

eachQUANTITY

0.98+

Peter BurkePERSON

0.97+

over five yearsQUANTITY

0.97+

LeightonORGANIZATION

0.97+

David Floy erPERSON

0.97+

three hundredQUANTITY

0.97+

first oneQUANTITY

0.97+

two thousand customerQUANTITY

0.96+

Two new applicationsQUANTITY

0.96+

singleQUANTITY

0.96+

Peter BurroughsPERSON

0.96+

first typeQUANTITY

0.95+

One supplierQUANTITY

0.95+

High BandLOCATION

0.95+

single sourceQUANTITY

0.95+

two billion dollars a yearQUANTITY

0.95+

threeQUANTITY

0.93+

KhunORGANIZATION

0.93+

Treasure QuestORGANIZATION

0.93+

nineteen eighty three hundred million dollarsQUANTITY

0.92+

three lawsQUANTITY

0.92+

CongressORGANIZATION

0.91+

R. A YOTHER

0.9+

GreenfieldORGANIZATION

0.89+

AzureORGANIZATION

0.88+

one endQUANTITY

0.87+

WikibonORGANIZATION

0.87+

OutpostORGANIZATION

0.85+

High Leighton SeaLOCATION

0.85+

three main optionsQUANTITY

0.85+

CaliforniaLOCATION

0.82+

firstQUANTITY

0.78+

ArkhamLOCATION

0.76+

CubeORGANIZATION

0.76+

Razor JohnPERSON

0.63+

Cloud CloudCOMMERCIAL_ITEM

0.54+

RangelPERSON

0.48+

FortuneTITLE

0.47+

CloudORGANIZATION

0.43+

Old Version: James Kobielus & David Floyer, Wikibon | VMworld 2018


 

from Las Vegas it's the queue covering VMworld 2018 brought to you by VMware and its ecosystem partners and we're back here at the Mandalay Bay in somewhat beautiful Las Vegas where we're doing third day of VMworld on the cube and on Peterborough and I'm joined by my two lead analysts here at Ricky bond with me Jim Camilo's who's looking at a lot of the software stuff David floor who's helping to drive a lot of our hardware's research guys you've spent an enormous amount of time talking to an enormous number of customers a lot of partners and we all participated in the Analyst Day on Monday let me give you my first impressions and I want to ask you guys some questions here you thought so I have it this is you know my third I guess VMworld in or in a row and and my impression is that this has been the most coherent of the VM worlds I've seen you can tell when a company's going through a transition because they're reaching to try to bring a story together and that sets the tone but this one hot calendar did a phenomenal job of setting up the story it makes sense it's coherent possibly because it aligns so well with what we think is going to happen in the industry so I want to ask you guys based on three days of one around and talking to customers David foyer what's been the high point what have you found is the most interesting thing well I think the most interesting thing is the excitement that there is over VMware if you if you contrast that with a two three years ago the degree of commitment of customers to viennois the degree of integration they're wanting to make the degree rate of change and ideas that have come out of VMware it's like two different companies totally different companies some of the highlights for me were the RDS the bringing from AWS to on site as well as on the AWS cloud RDS capabilities I think that's a very very interesting thing that's the relational database is services the Maria DB and all the other services that's a very exciting thing to me and a hint to me that AWS is going to have to get serious about well Moore's gone out I think it's a really interesting point that after a lot of conversations with a lot of folks saying all AWS it's all going to go up to the cloud and wondering whether that also is a one-way street for VMware Casta Moore's right but now we're seeing it's much more of a bilateral relationship it's a moving it to the right place and that's the second thing the embracing of multi-cloud by everybody one cloud is not going to do everything they're going to be SAS clouds they're going to be multiple places where people are gonna put certain workloads because that's the best strategic fit for it and the acceptance in the marketplace that that is where it's going to go I think that again is a major change so hybrid cloud and multi cloud environments and then the third thing is I think the richness of the ecosystem is amazing the the going on the floor and the number of people that have come to talk to us with new ideas really fascinating ideas is something I haven't seen at all for the last last three four years and so I'm gonna come back to you on that but it goes back to the first point that you make that yeah there is a palpable excitement here about VMware that two-three years ago the conversation was how much longer is the franchise gonna be around Jim but now it's clear yeah it's gonna be around Jim how about you yeah actually I'm like you guys I'm a newbie to VM world this is my very first remember I'm a big data analyst I'm a data science an AI guy but obviously I've been aware of VMware and I've had many contacts with them over the years my take away my prime and I like Pat Gail singers I agree with you Peter they're really coherent take and I like that phrase even though it sounds clucking impact kind of apologize they are the dial tone to the multi-cloud if the surgery really gives you a strong sense or who else can you character is in this whole market space cloud computing has essentially a multi cloud provider who provide the unifying virtualization glue to help their custom to help customers who are investing in an AWS and maybe in a bit of you know you're adopting Google and Microsoft Azure and so forth providing a virtualization layer that's the above server virtualization network virtualization VDI all the way to the edge nobody can put it all is putting it all together and quite the way that VMware is one of the my chief takeaways is similar to David's which is that in terms of the notion of a hybrid cloud VMware with its whole what's it's doing with RDS but also projects like this project dimension which is in project in progress taking essentially the entire VMware virtualization stack and putting it onto an appliance for deployment on the edges and then for them to manage it VMware of this their plans as an end-to-end managed edge cloud service and so forth Wow the blurring of public and private cloud I don't even think the term hybrid cloud applies it's just a blurry the common cloud yeah it's moving to the workload the clouds moving to the data which is exactly what we say they are halfway there in terms of that vision halfway in a sense that RDS has been announced the you know on the VMware and this project dimension they're well along with that if there was a briefings for the analyst space I'm really impressed for how they're architecting this I think they've got a shot to really dominate well I'll tell you so I would agree with you just to maybe provide a slightly different version of one of the things you said I definitely agree I think what's VMware hopes to do and I think they're not alone is to have AWS look like an appliance to their console to have as you look like an appliance of their Khan so through free em where you can get access to whatever services you need including your VMware machines your VMs inside those clouds but that increasingly their their goal is to be that control point that management point for all of these different resources that are building and it is very compelling I think that there's one area that I still think we need more from as analysts and we always got to look through no and what's yeah what was more required and I hear what you say about project dimension but I think that the edge story still requires a fair amount of work oh yeah it's a project in place but that's going to be an increasingly important locus of how architectures get laid out how people think about applications in the future how design happens how methodologies for building software work David what do you think what when you look out what what is what what is more is needed for you so really I think there are two things that give me a small concern the the edge that's a long term view so they got time to get that right but the edge view is very much an IT view top-down and they are looking to put in place everything that they think the OT people should fit in with I think that is personally not going to be a winning strategy you you have to take it from the bottom up the world is going to go towards devices very rich devices and sensors lots of software right on that device the inference work on those devices and the job of IT will be to integrate those devices it won't be those devices taking on the standards of IT it'll be IT that has to shape itself to look after all those devices there so that's a that's the main viewpoint I think that needs adjustment and it will come I'm sure over time but as you said there's a lot of computer science it's going to be an enormous amount of new partnerships are gonna be fabricate exactly to make this happen Jim what do you think yeah I agree terms of partnerships one big gap from both VMware and Dell technologies partnerships and romance and technology proposes AI now they have a project VMware call from another project called project Magna which is really AI ops in fact I published a wiki about reports this week on AI ops AI to drive IT Service Management and to and they're doing some stuff they're working on that project it's just you know the beginning stages I think what's going to happen is that vmware dell technologies they're gonna have to make strategic acquisitions of AI solution providers to build up that capability because that's going to be fundamental to their ability to manage this complex multi called fabric from end to end continuously they need that competency internally that can't be simply a partner providing that that's got to be their core competencies so you know I'm gonna push it I'll give you the contrarian point of view okay we actually had Khamsin VMware we've had a lot of conversations about this does that is that a reflection of David's point about top-down buying things and pushing it down as opposed to other conversations we've had about how the edge is going to evolve where a lot of OT guys are going to combine with business expertise and technology expertise to create specialized solutions and is and then VMware is gonna have to reach out to them and make VMware relevant to them do you think it's going to be VMware buying a bunch of stuff or an a-grade no solution or is it going to be the solutions coming from elsewhere and VM at VMware I just becoming more relevant to them now you can still be buying a bunch of stuff to get that horizontal in place but which way you think it's going to go I think it's gonna be the top-down they're gonna buy stuff because if I talk to the channel one of the channel people this morning about well you know but they've got an IOT connected bundle and so forth they announced this show you know I think they agree with me that the core AI technology needs to be built into the fundamentals like the IOT stack bundle that they then provide to the channel partners for with you know with channel specific content that they can then tweak and customize to their specific needs but you know the core requirements for a I are horizontal you know it's the ability to run neural networks to do predictive analysis anomaly detection and so forth this is all cross-cutting across all domains it has to be in the core application stack they can't be simply something they source for particular channel opportunities it has to be leveraged across you know the same core tensorflow models for anomaly detection for manufacturing for logistics for you know customer relationship management whatever it's or are you saying essentially that then VMware becomes that horizontal play even though even if the solution providers are increasingly close to the actual action where the edges III I'm gonna disagree we can gently on that but we'd still be friends [Music] no it's you know I'm I'm an OT guy of hearth I suppose and I think that that is going to be a stronger force in terms of VMware but there will be some places where you it will be top-down but other places that where it's going to be need needed to adjust but I think there's one other there very interesting area I'd like to bring up in terms of of this question of acquisition what what we heard about beforehand was excellent results and VMware has been adding a you know a billion dollars a year in terms of free cash there and they have thirteen billion in short term cash there and the the refinancing from Dell is gonna take eleven of that thirteen and put it towards the towards the the company now you can work towards deltek yes well just Dell Dell as a hold and and silver later towards those partners I I personally believe that there is such a lot of opportunity that's going to be out there if you take NSX for example it has the potential to do things in new areas they're gonna need to provide solutions in those new areas and aggressively go after those new areas and that's going to mean big investments and many other areas where I think they are going to need acquisitions to strengthen the whole story they have the whole multi-cloud story about this real-time operating system in a sexy has a network routing virtualization backplane I mean it needs to go real-time so sensitive guaranteed ladies if they need that big investments guarantee yeah they need to go there yeah so what we're agreeing on that and I get concerned that it's not going to be given the right resources you know to be able to actually go after the opportunities that they have genuinely created it's gonna mean from you see how that plays out so I think all drugs in the future I think saying though is that there is going to be a solution a set of solution players that VMware is going to have to make significant moves to make them relevant and then the question is where it's the values story what's the value proposition it's probably gonna be like all partnerships yeah some are gonna claim that they are doing it also some are gonna DM where it's gonna claim that they do more of it but at the end of the day VMware has to make themself relevant to the edge however that happens I want to pick up on NSX because I'm a pretty big believer that NSX may be the very special crown jewel and a lot of the stuff this notion of hybrid cloud whatever we call it let's just call it extended cloud let me talk of a better word like it is predicated on the idea that I also have a network that can naturally and easily not just bridge but truly multi network interoperate internet work with a lot of different cloud sources but also all different cloud locations and there's not a lot of technologies out there that are great candidates to do that and it's and I look at NSX and I'm wondering is that gonna be kind of a I want to take the metaphor too far but is that gonna be kind of a new tcp/ip for the cloud in the sense that you're still gonna run over tcp/ip and you're still gonna run over the Internet but now we're gonna get greater visibility into jobs into workloads into management infrastructures into data locations and data placement predictive movement and NSX is going to be the at the vanguard of showing how that's gonna work and the security side of that especially to be able to know what is connected to what and what shouldn't be connected to what and to be able to have that yeah they need stateful structured streaming others Kafka flink whatever they need that to be baked into the whole nsx virtualization layer that much more programmable and that provides that much better a target for applications all right last question then we got a wrap guys David as you walk out the door get in the plane what are you taking away what's your last impression my last impression is one of genuine excitement wanting to work wanting to follow up with so many of the smaller organizations the partners that have been here and who are genuinely providing in this ecosystem a very rich tapestry of of capability that's great Jim my takeaway is I want to see their roadmap for kubernetes and serverless there wasn't a hole last year they made an announcement of a serverless project I forgot what the code name is didn't hear a whole lot about it this year but they're going up the app stack they got a coop you know distribution you know they're if they need a developer story I mean developers are building functional apps and so forth you know you can and they're also containerized they need they need a developer story and they need a server list story and they need to you need to bring us up to speed on where they're going in that regard because AWS their predominant partner I mean they got lambda functions and all that stuff you know that's that's the development platform of the present and future and I'm not hearing an intersection of that story with VMware's a story yeah my last thing that I'll say is that I think that for the next five years VMware is gonna be one of the companies that shapes the future of the cloud and I don't think we would have said that a couple of names no they wouldn't I agree with you so you said yes all right so this has been the wiki bond research leadership team talking about what we've heard at VMware this year VMworld this year a lot of great conversation feel free to reach out to us and if you want to spend more time with rookie bond love to have you once again Peter burrows for David floor and Jim Kabila's thank you very much for watching the cube we'll talk to you again [Music]

Published Date : Aug 29 2018

**Summary and Sentiment Analysis are not been shown because of improper transcript**

ENTITIES

EntityCategoryConfidence
DavidPERSON

0.99+

James KobielusPERSON

0.99+

Jim KabilaPERSON

0.99+

thirteen billionQUANTITY

0.99+

David FloyerPERSON

0.99+

AWSORGANIZATION

0.99+

Jim CamiloPERSON

0.99+

VMwareORGANIZATION

0.99+

DellORGANIZATION

0.99+

Las VegasLOCATION

0.99+

JimPERSON

0.99+

first impressionsQUANTITY

0.99+

three daysQUANTITY

0.99+

two thingsQUANTITY

0.99+

thirteenQUANTITY

0.99+

PeterPERSON

0.99+

last yearDATE

0.99+

Pat GailPERSON

0.99+

MoorePERSON

0.99+

Mandalay BayLOCATION

0.99+

first pointQUANTITY

0.99+

second thingQUANTITY

0.98+

firstQUANTITY

0.98+

GoogleORGANIZATION

0.97+

third thingQUANTITY

0.97+

this yearDATE

0.97+

thirdQUANTITY

0.97+

this yearDATE

0.97+

NSXORGANIZATION

0.97+

two-three years agoDATE

0.97+

David floorPERSON

0.96+

VMworldORGANIZATION

0.96+

two different companiesQUANTITY

0.95+

bothQUANTITY

0.95+

VMworld 2018EVENT

0.95+

Maria DBTITLE

0.95+

wikiORGANIZATION

0.95+

MicrosoftORGANIZATION

0.95+

this weekDATE

0.94+

two lead analystsQUANTITY

0.94+

David foyerPERSON

0.93+

deltekORGANIZATION

0.93+

MondayDATE

0.93+

third dayQUANTITY

0.93+

two three years agoDATE

0.92+

one areaQUANTITY

0.92+

this morningDATE

0.91+

oneQUANTITY

0.91+

KafkaTITLE

0.9+

Analyst DayEVENT

0.89+

VMworldEVENT

0.89+

KhamsinORGANIZATION

0.88+

VMwareTITLE

0.84+

Ricky bondORGANIZATION

0.84+

WikibonORGANIZATION

0.83+

one cloudQUANTITY

0.82+

lot of partnersQUANTITY

0.82+

elevenQUANTITY

0.81+

a billion dollars a yearQUANTITY

0.81+

Jeremy Werner, Toshiba | CUBEConversation, July 2018


 

(upbeat orchestral music) >> Hi I'm Peter Burris and welcome to another CUBE Conversation from our wonderful Palo Alto Studios. Great conversation today with Jeremy Werner who is the vice president of SSD Marketing at Toshiba Memory, Jeremy welcome to theCUBE. >> Thank you Peter, great to be here. >> You know Jeremy, one of the reasons why I find you being here so intriguing interesting is there's a lot going on in the industry. We talk about new types of workloads: AI, cloud, deep learning, all these other things, all these technologies are-- all these applications and workloads are absolutely dependent on the idea that the infrastructure has to start focusing less on just persisting memory and focusing more on delivering memory-- delivering data to these very advanced applications. That's where flash comes in. Tell us a little bit about the role that flash has had in the industry. >> It's amazing, thank you for recognizing that. So, flash has a long history. 30 years ago actually Toshiba invented flash memory, and it's had a transformation on people's lives everywhere, on all kinds of products starting with the very first application for flash being-- for NAND flash being kind of removable memory cards. You had the digital camera revolution, then it found its way into cell phones, that enabled smart phones and people carrying around all their media etc. And now we're in kind of this large third phase adoption which is, like you mentioned, the transition from persistent storage with a hard drive where, your data was available but not really available to do a lot with. To now storage on an SSD, which allows artificial intelligence, business analytics, and all the new workloads that are changing business paradigms. >> So clearly flash adoption is increasing in the data center. Wikibon has been talking about this for quite some time. My colleague David Foyer was one of the first people out there to project the role that flash was going to play within the data center. How are you seeing as you talk to customers, as you talk to some of the big systems manufacturers and some of the hyperscalers. How are you hearing or what are they saying about how they are applying and will intend to apply flash in the market today? >> It's amazing when we talk to customers they really can't get enough flash. As an industry we just came out of a major shortage of flash memory, and now a lot of new technologies are coming online. So, we at Toshiba, just announced our 96 layer 3D flash, our QLC flash. This is all in an attempt to get more flash storage into the hands of these customers so that they can bring these new applications to market. And this transformation, it's happening quickly although maybe not as quickly as people think because there's a very long road ahead of us. Still you look out 10 years into the future, you're talking about 40 or 50% growth per year, at least for the next decade. >> So I want to get to that in a second, but I want to touch upon something that you said that many of the naysayers about flash predicted that there would be shortfalls and they were very Chicken Little like. Oh my gosh, the sky is going to fall, the prices are going to go out of control. We did have a shortage, and it was a pretty significant one, but we were able to moderate some of the price increases so it didn't lead to a whole bunch of design losses or a disruption in how we thought about new workloads, did it? >> True, no it didn't, and I think that's the value of flash memory. Basically what we saw was the traditional significant decline in pricing took a pause, and you look back 20 years ago, I mean flash was 1000 times more expensive. And as we move down that cost curve, it enables more and more applications to adopt it. Even in today's pricing, flash is an amazingly valuable tool to data centers and enterprise as they roll out new workloads and particularly around analytics, and artificial intelligence, machine learning, kind of all the interesting new technologies that you hear about. >> Yeah, and I think that's probably going to be the way that these kinds of blips in supply are going to be-- it'll perhaps lead to a temporary moderation in how fast the prices drop. >> That's right. >> It's not going to lead to massive disruption and craziness. And I will also say this, you mentioned 20 years ago stuff was really expensive and I cut my teeth on mainframe stuff. And I remember when disk drives on the mainframe were $3500 a megabyte, so it could be a lot worse. So, let's now-- flash is a great technology, SSD is a great technology, but it's made valuable by an overall ecosystem. >> That's right. >> There's a lot of other supporting technologies that are really crucial here. Disk has been dominated by interfaces like SATA for a long time. Done very well by us. Allowed for a fair amount of parallelism, a lot of pathing to mainly disk, but that's starting to change as we start thinking about flash coming on and being able to provide much much faster access times. What's going on with SATA and what's on the horizon? >> Yeah, so great question. Really what we saw with SATA in about 2010 was the introduction of a six gigabit SATA interface, and that was a doubling of the prior speed that was available, and then zero progress since then, and actually the SATA roadmap has nothing forward. So people have been stuck effectively with that SATA interface for the last eight years. Now they've had some choices. You look at the existing ecosystem, the existing infrastructure, SATA and SAS drives were both choices, and SAS is a faster interface today up to 12 gigabit. It's full duplex where SATA is half duplex, so you can read and write in parallel, so actually you can get four times the speed on a SAS drive that you would get on a SATA drive today. The challenge with SAS, why everyone went to SATA-- I won't say everyone went to SATA, but maybe three or four times the adoption rate of SATA versus SAS was the SAS products that were available on the market really didn't deliver the most economical deployment of-- >> They were more expensive. >> They were more expensive. >> Alright, but that's changing. >> That is changing, so what we've been trying to do is prepare and work with our customers for a life after SATA. And it's been a long time coming, like I said eight years on this current interface. Recently we introduced what we call a value SAS product line. The value SAS product line brings a lot of the benefits of SAS, so the faster performance, the better reliability, and the better manageability, into the existing infrastructure, but at SATA-like economics. And that I think is going to be critical as customers look at the long-term life after SATA, which is the transition to NVMe and a flash-only world without having to be fully dependent on changing everything that they've ever done to move from SATA to NVMe. So, the life after SATA preparation on customers is how do I make the most out of my existing knowledge, my existing infrastructure capabilities. What's readily available from a support perspective as I prepare for that eventual transition to NVMe. >> Yeah I want to pick up on that notion of higher performance at improving cost of SAS and just make sure that we're clear here that SATA is an electrical interface. It has certain performance characteristics, but these new systems are putting an enormous amount of stress on that interface. And that means you can't put more work on top of that, not only from an application standpoint, but as you said crucially also from a management standpoint. When you put more reporting or you put more automation or your put more AI on some of these devices, that creates new load on those drives. Going to SAS releases that headroom, so now we can bring more management workloads. That's important, and this is what I want to test. That's important because as we do these more complex applications, we're pushing more work down closer to the data, and we're using a lot more data, it's going to require more automation. Is SAS going to provide the headroom that we need to actually bring new levels of reliability to more complex work? >> I believe it will, absolutely. SAS is the world's most trusted interface. So, when it comes to reliability, our SAS drives in the field are the most reliable product that our customers purchase today. And we take that same core technology and package in a way to make it truly an economical replacement for SATA. >> So we at Wikibon now have observed NVMe, so I want to turn a little bit of attention to that. We have observed that NVMe is in fact going to have a significant impact. But when Toshiba Memory is looking at what kinds of things customers are looking for, you're saying not so much SATA, let's focus on SAS, and let's bring NVMe online as the system designs are there. Is that kind of what it's about? >> You know I think it's a complicated situation. Not everyone is ready for everything at the same time. Even today, there's some major cloud providers that have just about fully transitioned to NVMe SSDs. And that transition has been challenging. So what we see is customers over the course of the next four or five years, their readiness for that transition from today to five years from now, that's happening based on the complexity of what they need to manage from a physical infrastructure, a software ecosystem perspective. So some customers have already migrated, and other customers are years away. And that is really what we're trying to help customers with. We have a very broad NVMe offering. Actually we have more NVMe SSDs than any other product line, but for a lot of those customers who want to continue with the digital transformation in to data analytics, in to realizing the value of all the data that they have available and transforming that into improved business processes, improved business results. Those customers don't want to have to wait for their infrastructure to catch up for NVMe. Value SAS gives them a means to make that transition, while continuing on to take advantage of all the capabilities of flash. One of the things that we always talk about, one of my responsibilities is product planning product definition, and one of the things that we always talk about is our ideal SSD, the bottleneck is the flash. In other words if you look at a drive there's so many things that could bottleneck performance. It could be the interface, it could be the power that you can consume and dissipate, it could be the megahertz in your controller >> You sound like an electrical engineer. >> I am an electrical engineer, but I'm a marketing guy, right? So, there's all kinds of bottlenecks, and when we design an SSD we want the flash to be the bottleneck cause at the end of the day, that's fundamentally what people need and want. And so, you look at SATA, and it's like, not only is it a bottleneck, but it's clamping the performance at 50% or less than 50% of what's achievable in the same power footprint, in the same cost footprint, so it's just not practical I mean the thing's eight years old so-- >> Yeah. Yeah. >> In technology eight years is a lot of time. >> Especially these days, and so to simplify that perhaps, or say that a little bit differently, bottom line is SAS is a smaller step for existing customers who don't have the expertise necessary to re-engineer an entire system and infrastructure. >> That's right, it gives them that stepping stone. >> So you also mentioned that there' a difference between the flash and the SSD, and that difference is an enormous amount of value-wide engineering that leads to automation, reliability, types of things you can do down at the drive. Talk to us a little bit about Toshiba, Toshiba Memory, as a supplier of that differentiating engineering that's going to lead to even superior performance at better cost and greater manageability and time to value on some of these new flash-based workloads. >> So I'm amazed at the quality of our engineering team and the challenges that they face to constantly be bringing out new technologies that keep up with the flash memory curve. And I actually joke sometimes, I say it's like being on a hamster wheel. It never stops, the second that you release a product you're developing the next product. I mean it's one of the fastest product life cycles in the entire industry, and you're talking about extremely complicated, complex systems with tight firmware development. So what we do at Toshiba Memory, we actually engineer our own SOCs and controllers, develop the RTL, manage that from basically architecture to production. We write all our own firmware, we assemble our own drives, we put it all together. The process for actually defining a product to when we release it is about five years. So we have meetings now, we're talking about what are we going to release in 2023? And that is one of the big challenges, because these design cycles are very long so anticipating where innovation is going, and today's innovation is at the speed of software, right? Not the speed of hardware. So how do you build that kind of flexibility and capability into your product so that you can keep up with new innovations no one might have seen five years ago? That's where Toshiba Memory's engineering team really shows its mettle. >> So let's get your back in theCUBE in the not-to-distant future to talk about what 2023 is going to look like, but for right now Jeremy Werner, Vice President of SSD Marketing at Toshiba Memory, thank you very much for being on theCUBE. >> Thank you, Peter. >> And once again, thanks for watching this CUBE Conversation. (upbeat orchestral music)

Published Date : Jul 27 2018

SUMMARY :

Hi I'm Peter Burris and welcome to that the infrastructure has to start focusing less on and all the new workloads that manufacturers and some of the hyperscalers. flash storage into the hands of these Oh my gosh, the sky is going to fall, machine learning, kind of all the interesting Yeah, and I think that's probably going to And I will also say this, you mentioned 20 years but that's starting to change as we start speed on a SAS drive that you would And that I think is going to be critical And that means you can't put more work SAS is the world's most trusted interface. and let's bring NVMe online as the system designs are there. One of the things that we always talk about, the thing's eight years old so-- Especially these days, and so to simplify that difference between the flash and the SSD, And that is one of the big challenges, not-to-distant future to talk about what 2023 And once again, thanks for

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Jeremy WernerPERSON

0.99+

JeremyPERSON

0.99+

David FoyerPERSON

0.99+

Peter BurrisPERSON

0.99+

PeterPERSON

0.99+

50%QUANTITY

0.99+

$3500QUANTITY

0.99+

threeQUANTITY

0.99+

ToshibaORGANIZATION

0.99+

July 2018DATE

0.99+

2023DATE

0.99+

Toshiba MemoryORGANIZATION

0.99+

1000 timesQUANTITY

0.99+

eight yearsQUANTITY

0.99+

WikibonORGANIZATION

0.99+

oneQUANTITY

0.99+

OneQUANTITY

0.99+

todayDATE

0.99+

four timesQUANTITY

0.99+

20 years agoDATE

0.99+

20 years agoDATE

0.98+

less than 50%QUANTITY

0.98+

first applicationQUANTITY

0.98+

five years agoDATE

0.97+

next decadeDATE

0.97+

10 yearsQUANTITY

0.97+

30 years agoDATE

0.97+

ernerPERSON

0.97+

first peopleQUANTITY

0.97+

SASORGANIZATION

0.96+

third phaseQUANTITY

0.96+

Jeremy WPERSON

0.95+

about five yearsQUANTITY

0.94+

both choicesQUANTITY

0.93+

Vice PresidentPERSON

0.93+

secondQUANTITY

0.92+

Palo Alto StudiosORGANIZATION

0.87+

six gigabitQUANTITY

0.86+

2010DATE

0.86+

last eight yearsDATE

0.85+

eight years oldQUANTITY

0.83+

up to 12 gigabitQUANTITY

0.81+

SASTITLE

0.8+

zeroQUANTITY

0.79+

five yearsQUANTITY

0.78+

CUBEConversationEVENT

0.76+

96 layerQUANTITY

0.76+

about 40QUANTITY

0.72+

SATATITLE

0.6+

megabyteQUANTITY

0.57+

ConversationEVENT

0.57+

a secondQUANTITY

0.53+

next fourDATE

0.44+

Ratmir Timashev, Veeam | VeeamON 2018


 

>> Announcer: Live from Chicago, Illinois. It's the Cube, covering Veeamon 2018. Brought to you by Veeam. >> Welcome back to Chicago everybody, this is the Cube, the leader in live tech coverage. My name is Dave Vellante, and I'm joined by my co-host Stewart Miniman, Ratmir Timashev is here, he's the cofounder of Veeam and in my opinion, the man who brought Veeam into the modern era, created the persona of Veeam, allowed it to punch above its way, Ratmir thanks for coming back in the Cube, great to see you again. >> Thank you Dave, thanks. >> So congratulations on another kickoff to another great event, you painted Chicago green. Love it, first of all how do you feel. >> Fantastic, awesome. It's great being here, great city, the weather is finally nice, so spring is here finally, so we are great time. >> Yeah we had a little trouble getting in, but everybody's here, everybody's here safely which is the most important thing. I want you to talk about the evolution of Veeam, you started out as a virtualization specialist, generally VMware specialists, especially focusing on small business. We used to see you everywhere, now you're extending into the enterprise. What's that all about, what's the vision, give us your perspective. >> You're absolutely right, Veeam started with the single focus to be the best for VMware, for VMware, data protection, cap replication, and we started as the easy to use, simple, powerful solution for SMB, moved into mid-enterprise and now we added lots of enterprise features, and moving into the large enterprise. And last year was really the most important and most successful year, 2017, in the history of Veeam, so we finally admitted that we'd be lying to our customers for 10 years. >> Dave: You've been lying? >> Yeah, we've been lying. >> What do you mean by that. >> For 10 years we've been saying, Veeam is VMware only, Veeam is high B only, we will never do physical. So last year we introduced the comprehensive M2M platform to do everything, virtual, physical, and cloud. So we integrated our agent-based technology into our flagship product, to provide a single panel blast to manage all your data across the cloud, M2M. >> Why lie for a decade? >> That's a good question. You know, when you deal with sales people, smart sales people, they constantly ask you, hey, when I will do that, I will go and do physical, I was going to do physical. You have to tell them no, never, because once you say yeah, we will do physical, the next question is when. >> Dave: Yeah, when can I sell it, right. >> So we don't want to give our sales people an excuse to lose a deal because we've got the best virtual, go and sell the best virtual, and make our customers happy. >> You don't want to head fake the customers either. >> Maybe explain, what were the core principles back from the early days that are still holding true, what is the same and what's different now that you're doing cloud and virtual. >> Again, the core principle. >> Stu: Or physical, I should say. >> For principle, again, in terms of the product design, think customer first, make it easy for the customer and really stick to your core customer, that customer that is using your product every day. So make it easy, powerful, and affordable. That was our core principles in designing the product, and the whole business model behind Veeam. >> Talk about the metrics a little bit. Stu and I were talking at the open, 820 some odd million in booking, so you can see a billion dollars. We said, software companies that are a billion dollars are few and far between so that's a huge milestone if and when you hit that. But talk about that and the growth, share with us whatever metrics you can. >> Again, 2017 was one of the most successful years in our history, yeah, like you mention, we recorded bookings revenue of 830 million and that was 36% growth. Actually, our growth is accelerating as we become bigger. So we just celebrated 300,000 customers, we are adding 4,000 new customers every day, and Peter Mackay, our president and co COO mentioned this morning at the keynote, that we're adding 133 customers every single day, so that's very impressive. >> Yeah, it's awesome. So yeah, just to give you a sense, 300,000 customers, VMware, who basically owns the enterprise, says slightly over half a million customers. >> So we probably are on 50% of VMware, so we own 50% of VMware market in terms of data protection. >> So one of the challenges that we mentioned upfront was okay, so you drove a truck through the opportunity when virtualization VMware came in, and a lot of the incumbents were caught flat footed. They didn't have the architecture, they didn't have the go to market, et. Cetera. Now things are changing, moving to cloud, moving to this digital world, how does Veeam retain its edge in that new world. >> That's an excellent question, so that's the big opportunities that we see for the next five years. So we won the first battle, the battle of on pram, highly virtualized modern data center. We are the leader, we are number one data protection and ideal ability for that market, right. So the next battle, the next opportunity that we see for the next five years is to dominate the, what we call intelligent data management market in the multi cloud world. So we have to think how we approach that, once you win the market, like there is a saying, the winner takes it all. Once you win the market, you are going to dominate that, so for us the next two or three years are the most critical in dominating this multi cloud world for the next decade. >> Ratmir, I'd love to hear, you wrote that virtualization wave, which really was about creating virtualization admin, huge shift going from silos to admins. And we're seeing that change from architects in the cloud and the like, talk to, who you're selling to, and the partners that you have to grow. There's just so much change happening in that kind of environment. >> Yeah we see the change as we are moving from VMware administrator, so originally the product was designed for VMware administrator, now we are moving to the infrastructure person that is responsible not just for private part of your infrastructure, but for the multi cloud strategy, which includes the public cloud, SAS, physical servers, everything than an enterprise has as far as the infrastructure. >> Okay, so I want to go through just a couple of things that we talked about earlier and get your reaction to this. So some of the things that we've seen in our research is that data protection and orchestration are becoming much much more important in the list of CXO concerns. And that's something that your messaging is going after. But there's a dissonance between the business expects out of data protection and what IT is actually delivering, and I wonder if you can comment on that. >> Sure, so yeah, we are introducing our new message. So our previous message was focused on VMware administrator, now we are moving into the enterprise, and our message is about the importance of data. We see the three characteristics of the modern data, hyper critical, hyper sprawled, and hyper growth. So this leads to the need of creating a new type of solution what we call is intelligent data management solution. To manage the hyper available enterprise. So we're using the word hyper a lot because the data is now hyper critical, it's over distributed, hyper distributed, and is growing exponentially. That's part of our new message, that as we go into the C level people, about how important this data, and what with all the things that going on, in terms of the security compliance and how we're going to extend this platform to solve other business issues and provide more value and more business outcomes of using your late. Veeam's emporium has grown within this enterprise customers. However, as we mentioned, we are moving further, we are not standing still, so we have added lots of capabilities in terms of protecting cloud, native cloud, AWS, Azure, as well as a physical servers. So we are moving more into the end to end strategic data management platform provider from being just a niche point solution. >> I want to give you another stat that came out of our research, which I think you'll love, is that our David Foyer calculated that on average, a Fortune 1000 company over I think a three or a four year period, loses about a billion and a half dollars in value because of poorly architected data protection approaches, whether it's they're not end to end, or they're not protecting their cloud data properly, or they're not doing, whether it's backup or disaster recovery properly, well over a billion dollars over a four year period, your thoughts. >> Yeah, that's similar to what our research shows as well. So we do annual research and ask all customers how much down time and data loss costs them annually or through hour, that research shows that average enterprise can lose as much as over 10 million dollars per hour, so if you add it up over four years, that might be close to that number. But with all the compliance and the new security risks and security threat, and reason where this is becoming more and more of a critical business critical problem to solve. >> So this is a huge opportunity for Veeam, because when you think about your total available market, what a lot of time analysts will do is they'll add up all the spending on let's say data protection solutions, but to me your tam is actually quite a bit larger because of this lost revenue opportunity. It's many tens of billions, maybe 30 to 50 billion, I don't know if you have any thoughts on that. >> Yeah definitely, so data protection is just part of that core market right, so that data management is much bigger, by data management we mean not just the protection of data, but using this data to help businesses, to accelerate the innovation rate, so to reduce risk, to comply with the new regulations. So all these challenges are much bigger part of not just the data backup and recovery, overall data management market which is much bigger and probably is larger than 20, 30 billion range. >> So okay, so you have 2,500, 3,000 of your favorite people here gathered this week. As always I expect that you're going to have a big sendoff, a big party, what can we expect this week. >> As always, that's part of the Veeam culture, is work hard, play hard, and so Veeam is known for having the best parties. Yeah we, now Peter runs the company day to day, but culturally we still remain young entrepreneurial spirited company right, so we like party and we like to work hard. >> Well you know, if you've never been to a Veeam party, you're missing it. I don't usually stay for these things, I get out of here, we have to do so many Cubes, but we'll be at the Veeam party this week. >> Awesome, awesome. >> Thanks very much, always a pleasure seeing you, and congratulations on all your success. >> Thank you very much. >> Alright you're welcome. Keep it right there everybody, we'll be back with our next guest, you're watching the Cube from Veeamon 2018. We're in the Windy City and we'll be right back.

Published Date : May 15 2018

SUMMARY :

It's the Cube, covering Veeamon 2018. coming back in the Cube, Love it, first of all how do you feel. city, the weather is finally the evolution of Veeam, and moving into the large enterprise. data across the cloud, M2M. the next question is when. go and sell the best virtual, fake the customers either. back from the early days and the whole business model behind Veeam. the growth, share with us the most successful years So yeah, just to give you 50% of VMware, so we own the go to market, et. We are the leader, we are and the partners that you have to grow. but for the multi cloud So some of the things that the end to end strategic I want to give you another and the new security risks all the spending on let's say not just the data backup and recovery, So okay, so you have the company day to day, we have to do so many Cubes, and congratulations on all your success. We're in the Windy City

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Dave VellantePERSON

0.99+

Peter MackayPERSON

0.99+

DavePERSON

0.99+

RatmirPERSON

0.99+

Ratmir TimashevPERSON

0.99+

Stewart MinimanPERSON

0.99+

133 customersQUANTITY

0.99+

50%QUANTITY

0.99+

30QUANTITY

0.99+

2017DATE

0.99+

10 yearsQUANTITY

0.99+

StuPERSON

0.99+

last yearDATE

0.99+

VMwareORGANIZATION

0.99+

830 millionQUANTITY

0.99+

ChicagoLOCATION

0.99+

300,000 customersQUANTITY

0.99+

4,000 new customersQUANTITY

0.99+

300,000 customersQUANTITY

0.99+

Chicago, IllinoisLOCATION

0.99+

2,500, 3,000QUANTITY

0.99+

PeterPERSON

0.99+

VeeamORGANIZATION

0.99+

David FoyerPERSON

0.99+

this weekDATE

0.99+

first battleQUANTITY

0.99+

AWSORGANIZATION

0.98+

820QUANTITY

0.98+

over 10 million dollarsQUANTITY

0.98+

threeQUANTITY

0.98+

VeeamPERSON

0.98+

over half a million customersQUANTITY

0.98+

over four yearsQUANTITY

0.98+

tens of billionsQUANTITY

0.98+

50 billionQUANTITY

0.98+

next decadeDATE

0.98+

Windy CityLOCATION

0.97+

over a billion dollarsQUANTITY

0.96+

VeeamONEVENT

0.96+

oneQUANTITY

0.96+

about a billion and a half dollarsQUANTITY

0.95+

every single dayQUANTITY

0.92+

four yearQUANTITY

0.92+

this morningDATE

0.9+

singleQUANTITY

0.89+

2018DATE

0.88+

single panelQUANTITY

0.87+

20, 30 billionQUANTITY

0.84+

36% growthQUANTITY

0.83+

billion dollarsQUANTITY

0.83+

three characteristicsQUANTITY

0.82+

firstQUANTITY

0.8+

VeeamonTITLE

0.78+

a decadeQUANTITY

0.76+

three yearsQUANTITY

0.75+

CXOORGANIZATION

0.73+

M2MTITLE

0.72+

next five yearsDATE

0.72+

a billion dollarsQUANTITY

0.72+

Fortune 1000ORGANIZATION

0.62+

VeeamonEVENT

0.61+

millionQUANTITY

0.6+

Sam Werner & Steve Kenniston | IBM Think 2018


 

>> Narrator: From Las Vegas, it's The Cube. Covering IBM Think 2018. Brought to you by IBM. >> Welcome back to IBM Think, everybody. My name's Dave Vallante, I'm here with Peter Burris. You're watching The Cube, the leader in live tech coverage. This is our day three. We're wrapping up wall to wall coverage of IBM's inaugural Think Conference. Thirty or forty thousand people, too many people to count, I've been joking all week. Sam Werner is here, he's the VP of Offering Management for Software Defined Storage, Sam, good to see you again. And Steve Kenniston is joining him otherwise known as the storage alchemist. Steven, great to see you again. >> Steven: Thanks, Dave. >> Dave: Alright, Sam. Let's get right into it. >> Sam: Alright. >> Dave: What is the state of data protection today and what's IBM's point of view? >> Sam: Well, I think anybody who's been following the conference and saw Jenny's key note, which was fantastic, I think you walked away knowing how important data is in the future, right? The way you get a competitive edge is to unlock insights from data. So if data's so important you got to be able to protect that data, but you're forced to protect all this data. It's very expensive to back up all this data. You have to do it. You got to keep it safe. How can you actually use that back-up data to, you know, perform analytics and gain some insights of that data that's sitting still behind the scenes. So that's what it's really all about. It's about making sure your data's safe, you're not going to lose it, that big big competitive advantage you have and that data, this is the year of the incumbent because the incumbent can start unlocking valuable data, so - >> Dave: So, Steve, we've talked about this many times. We've talked about the state of data protection, the challenges of sort of bolting on data protection as an afterthought. The sort of one size fits all problem, where you're either under protected or spending too much and being over protected, so have we solved that problem? You know, what is next generation data protection? What does it look like? >> [Steve} Yeah, I think that's a great Question, Dave. I think what you end up seeing a lot of... (audio cuts out) We talk at IBM about the modernize and transform, a lot. Right? And what I've started to try to do is boil it down almost at a product level. WhY - or at least an industry level - why modernize your data protection environment, right? Well if you look at a lot of the new technologies that are out there, costs have come way down, right? Performance is way up. And by performance around data protection we talk RPO's and RTO's. Management has become a lot simpler, a lot of design thinking put in the interfaces, making the Op Ec's job a lot easier around protecting information. A lot of the newer technologies are connected to the cloud, right? A lot simpler. And then you also have the ability to do what Sam just mentioned, which is unlock, now unlock that business value, right? How do I take the data that I'm protecting, and we talk a lot about data reuse and how do I use that data for multiple business purposes. And kind of unhinge the IT organization from being the people that stumble in trying to provide that data out there to the line of business but actually automate that a little bit more with some of the new solutions. So, that's what it means to me for a next generation protection environment. >> Dave: So it used to be this sort of, okay, I got an application, I got to install it on a server - we were talking about this earlier - get a database, put some middleware on - uh! Oh, yeah! I got to back it up. And then you had sort of these silos emerge. Virtualization came in, that obviously change the whole back up paradigm. Now you've got the cloud. What do you guys, what's your point of view on Cloud, everybody's going after this multi-cloud thing, protecting SAS data on prem, hybrid, off-prem, what are you guys doing there? >> Sam: So, uh, and I believe you spoke to Ed Walsh earlier this we very much believe in the multi-cloud strategy. We are very excited on Monday to go live with a Spectrum Protect Plus on IBM's cloud, so it's now available to back up workloads on IBM Cloud. And what's even more exciting about it is if you're running Spectrum Protect Plus on premises, you can actually replicate that data to the version running in the IBM cloud. So now you have the ability not only to back up your data to IBM cloud, back up your data IN IBM cloud where you're running applications there, but also be able to migrate work loads back and forth using this capability. And our plan is to continue to expand that to other clouds following our multi-cloud strategy. >> Dave: What's the plus? >> Sam: Laughs >> Dave: Why the plus? >> Kevin: That's the magic thing, they can't tell you. >> Group: (laughing) >> Dave: It's like AI, it's a black box. >> Sam: Well, I will answer that question seriously, though. IBM's been a leader in data protection for many years. We've been in the Gardeners Leaders Quadrant for 11 years straight with Spectrum Protect, and Spectrum Protect Plus is and extension of that, bringing this new modern approach to back up so it extends the value of our core capability, which you know, enterprises all over the world are using today to keep their data safe. So it's what we do so well, plus more! (laughing) >> Dave: Plus more! - [Sam] Plus more. >> Dave: So, Steve, I wonder if you could talk about the heat in the data protection space, we were at VM World last year, I mean, it was, that was all the buzz. I mean, it was probably the most trafficked booth area, you see tons of VC money that have poured in several years ago that's starting to take shape. It seems like some of these upstarts are taking share, growing, you know, a lot of money in, big valuations, um, what are your thoughts on What's that trend? What's happening there? How do you guys compete with these upstarts? >> Steve: Yeah, so I think that is another really good question. So I think even Ed talks a little bit about a third of the technology money in 2017 went to data protection, so there's a lot of money being poured in. There's a lot of interest, a lot of renewed interest in it. I think what you're seeing, because it cut - it's now from that next generation topic we just talked about, it's now evolving. And that evolution is it's not, it's no longer just about back up. It's about data reuse, data access, and the ability to extract value from that data. Now all of a sudden, if you're doing data protection right, you're backing up a hundred percent of your data. So somewhere in the repository, all my data is sitting. Now, what are the tools I can use to extract the value of that data. So there used to be a lot of different point products, and now what folks are saying is, well now, look, I'm already backing it up and putting it in this data silo, so to speak. How do I get the value out of it? And so, what we've done with Plus, and why we've kind of leap frogged ourselves here with - from going from Protect to Protect Plus, is to be able to now take that repository - what we're seeing from customers is there's a definitely a need for back up, but now we're seeing customers lead with this operational recovery. I want operational recovery and I want data access. So now, what Spectrum Protect Plus does is provides that access. We can do automation, we can provide self service, it's all rest API driven, and then what we still do is we can off load that data to Spectrum Protect, our great product, and then what ends up happening is the long term retention capabilities about corporate compliance or corporate governance, I have that, I'm protecting my business, I feel safe, but now I'm actually getting a lot more value out of that silo of data now. >> Peter: Well, one of the challenges, especially as we start moving into an AI analytics world, is that it's becoming increasingly clear that backing up the data, a hundred percent of the data, may not be capturing all of the value because we're increasingly creating new models, new relationships amongst data that aren't necessarily defined by an application. They're transient, then temporal, they're, they come up they come down, how does a protection plane handle, not only, you know, the data that's known, from sources that are known, but also identifying patterns of how data relationships are being created, staging it to the appropriate place, it seems as though this is going to become an increasingly important feature of any protection scheme? >> Steve: I think, I think a lot - you bring up a good topic here - I think a lot of the new protection solutions that are all rest API driven now have the capability to actually reach out to these other API's, and of course we have our whole Watson platform, our analytics platform that can now analyze that information, but the core part, and the reason why I think - back to your previous question about this investment in some of these newer technologies, the legacy technologies didn't have the metadata plane, for example, the catalog. Of course you had a back up catalog , but did you have an intelligent back up catalog. With the Spectrum Protect Plus catalog, we now have all of this metadata information about the data that you're backing up. Now if I create a snapshot, or reuse situation where to your point being, I want to spin something back up, that catalog keeps track of it now. We have full knowledge of what's going. You might not have chosen to again back that new snap up, but we know it's out there. Now we can understand how people are using the data, what are they using the data for, what is the longevity of how we need to keep that data? Now all of a sudden there's a lot more intelligence in the back up and again to your earlier question, I think that's why there's this renewed interest in kind of the evolution. >> Dave: Well, they say at this point you really can't do that multi-cloud without that capability. I wanted to ask you about something else, because you basically put forth this scenario or premise that it's not just about back up, it's not just about insurance, my words, there's other value that you could extract. Um, I want to bring up ransomware. Everybody talks about air gaps - David Foyer brings that up a lot and then I watch, like certain shows like, I don't know if you saw the Zero Days documentary where they said, you know, we laugh at air gaps, like, oh! Really? Yeah, we get through air gaps, no problem. You know, I'm sure they put physical humans in and they're going to infect. So, so there's - the point I'm getting to is there's other ways to protect against ransomware, and part of that is analytics around the data and all the data's - in theory anyway - in the backup store. So, what's going on with ransomware, how are you guys approaching that problem, where do analytics fit? You know, a big chewy question, but, have at it. >> Sam: Yeah, no I'm actually very glad you asked that question. We just actually released a new version of our core Spectrum Protect product and we actually introduced ransomware detection. So if you think about it, we bring in all of your data constantly, we do change block updates, so every time you change files it updates our database, and we can actually detect things that have changed in the pattern. So for example, if you're D-Dup rate starts going down, we can't D-Dup data that's encrypted. So if all of a sudden the rate of D-Duplication starts going down that would indicate the data's starting to be encrypted, and we'll actually alert the user that something's happening. Another example would be, all the sudden a significant amount of changes start happening to a data set, much higher than the normal rate of change, we will alert a user. It doesn't have to be ransomware, it could be ransomware. It could be some other kind of malicious activity, it could be an employee doing something they shouldn't be - accessing data that's not supposed to be accessed. So we'll alert the users. So this kind of intelligence, uh, you know is what we'll continue to try to build in. IBM's the leader in analytics, and we're bringing those skills and applying it to all of our different software. >> Dave: Oh, okay. You're inspecting that corpus of backup data, looking for anomalus behavior, you're say you're bringing in IBM analytics and also presumably some security capabilities from IBM, is that right? >> Sam: That's right. Absolutely. We work very closely with our security team to ensure that all the solutions we provide tie in very well with the rest of our capabilities at IBM. One other thing though, I'll mention is our cloud object storage, getting a little bit away from our backup software for a second, but object storage is used often - >> Kevin: But it's exciting! >> Sam: It is exciting! It's one of my favorite parts of the portfolio. It's a place where a lot of people are storing backup and archive data and we recently introduced worm capability, which mean Write Once Read Many. So once it's been written it can't be changed. It's usually used for compliance purposes but it's also being used as an air gap capability. If the data can't be changed, then essentially it can't be you know encrypted or attacked by ransomware. And we have certification on this as well, so we're SEC compliant, we can be used in regulated industries, so as we're able to in our data protection software off load data into a object store, which we have the capability, you can actually give it this worm protection, so that you know your backup data is always safe and can always be recovered. We can still do this live detection, and we can also ensure your backup is safe. >> Dave: That's great. I'm glad to hear that, cause I feel like in the old days, that I asked you that question about ransomware, and well, we're working on that - and two years later you've come up with a solution. What's the vibe inside of IBM in the storage group? I mean it seems like there's this renewed energy, obviously growth helps, it's like winning, you know, brings in the fans, but, what's your take Steve? And I'll close with Sam. >> Steve: I would almost want to ask you the same question. You've been interviewing a lot of the folks from the storage division that have come up here today and talked to you. I mean you must hear the enthusiasm and the excitement. Right? >> Dave: Yeah, definitely. People are pumped up. >> Steve: And I've rejoined IBM, Sam has rejoined IBM, right? And I think what we're finding inside is there used to be a lot of this, eh yeah, we'll eventually get there. In other words, it's like you said, next year, next year. Next, next quarter. Next third quarter, right? And now its, how do we get it done? People are excited, they want to, they see all the changes going on, we've done a lot to - I don't want to say sort out the portfolio, I think the portfolio's always been good - but now there's like a clean crisp clear story around the portfolio, how they fit together, why they're supposed to - and people are rallying behind that. And we're seeing customer - we're voted by IDCE, number one in the storage software business this year. I think people are really getting behind, you want to work for a winning team, and we're winning and people are getting excited about it. >> Dave: Yeah, I think there's a sense of urgency, a little startup mojo, it's back. So, love that, but Sam I'll give you the last word, before we wrap. Just on Think? Just on the Market? >> Sam: I got to tell you, Think has been crazy. It's been a lot of fun so far. I got to tell you, I have never seen so much excitement around our storage portfolio from customers. These were the easiest customer discussions I've ever had at one of these conferences, so they're really excited about what they're doing and they're excited about the direction we're moving in. So, yeah. >> Dave: Guy, awesome seeing you. Thanks for coming back on The Cube, both of you, and, uh, really a pleasure. Alright. Thank you for watching. Uh, this is a wrap from IBM Think 2018. Guys, thanks for helping us close that up. Peter, thank you for helping - >> Peter: Absolutely. >> Dave: me co-host this week. John Furie was unbelievable with the pop up cube, really phenomenal job, John and the crew. Guys, great great job. Really appreciate you guys coming in from wherever you were Puerto Rico or the Bahamas, I can't keep track of you anymore. Go to siliconangle.com, check out all the news. TheCube.net is where all these videos will be and wikibon.com for all the research, which Peter's group has been doing great work there. We're out! We'll see you next time. (lively tech music)

Published Date : Mar 22 2018

SUMMARY :

Brought to you by IBM. Sam, good to see you again. of that data that's sitting still behind the scenes. We've talked about the state of data protection, have the ability to do what Sam just mentioned, what are you guys doing there? So now you have the ability capability, which you know, enterprises all over the Dave: Plus more! heat in the data protection space, we were at VM World How do I get the value out of it? Peter: Well, one of the challenges, especially as we are all rest API driven now have the capability to actually and part of that is analytics around the data and all the So if all of a sudden the rate of D-Duplication starts going of backup data, looking for anomalus behavior, you're say our security team to ensure that all the solutions we so that you know your backup data is always safe like in the old days, that I asked you that question about You've been interviewing a lot of the folks from the storage Dave: Yeah, definitely. I think people are really getting behind, you want to work you the last word, before we wrap. I got to tell you, I have never seen Thank you for watching. and the crew.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
SamPERSON

0.99+

DavePERSON

0.99+

StevePERSON

0.99+

Dave VallantePERSON

0.99+

Peter BurrisPERSON

0.99+

Steve KennistonPERSON

0.99+

StevenPERSON

0.99+

Sam WernerPERSON

0.99+

JennyPERSON

0.99+

IBMORGANIZATION

0.99+

KevinPERSON

0.99+

2017DATE

0.99+

Puerto RicoLOCATION

0.99+

JohnPERSON

0.99+

PeterPERSON

0.99+

ThirtyQUANTITY

0.99+

Ed WalshPERSON

0.99+

next yearDATE

0.99+

David FoyerPERSON

0.99+

MondayDATE

0.99+

11 yearsQUANTITY

0.99+

John FuriePERSON

0.99+

BahamasLOCATION

0.99+

Las VegasLOCATION

0.99+

last yearDATE

0.99+

Zero DaysTITLE

0.99+

bothQUANTITY

0.99+

next quarterDATE

0.99+

siliconangle.comOTHER

0.99+

EdPERSON

0.99+

todayDATE

0.99+

SECORGANIZATION

0.99+

two years laterDATE

0.98+

oneQUANTITY

0.98+

IDCEORGANIZATION

0.98+

Next third quarterDATE

0.98+

this yearDATE

0.97+

WatsonTITLE

0.97+

several years agoDATE

0.96+

Wikibon Action Item | De-risking Digital Business | March 2018


 

>> Hi I'm Peter Burris. Welcome to another Wikibon Action Item. (upbeat music) We're once again broadcasting from theCube's beautiful Palo Alto, California studio. I'm joined here in the studio by George Gilbert and David Floyer. And then remotely, we have Jim Kobielus, David Vellante, Neil Raden and Ralph Finos. Hi guys. >> Hey. >> Hi >> How you all doing? >> This is a great, great group of people to talk about the topic we're going to talk about, guys. We're going to talk about the notion of de-risking digital business. Now, the reason why this becomes interesting is, the Wikibon perspective for quite some time has been that the difference between business and digital business is the role that data assets play in a digital business. Now, if you think about what that means. Every business institutionalizes its work around what it regards as its most important assets. A bottling company, for example, organizes around the bottling plant. A financial services company organizes around the regulatory impacts or limitations on how they share information and what is regarded as fair use of data and other resources, and assets. The same thing exists in a digital business. There's a difference between, say, Sears and Walmart. Walmart mades use of data differently than Sears. And that specific assets that are employed and had a significant impact on how the retail business was structured. Along comes Amazon, which is even deeper in the use of data as a basis for how it conducts its business and Amazon is institutionalizing work in quite different ways and has been incredibly successful. We could go on and on and on with a number of different examples of this, and we'll get into that. But what it means ultimately is that the tie between data and what is regarded as valuable in the business is becoming increasingly clear, even if it's not perfect. And so traditional approaches to de-risking data, through backup and restore, now needs to be re-thought so that it's not just de-risking the data, it's de-risking the data assets. And, since those data assets are so central to the business operations of many of these digital businesses, what it means to de-risk the whole business. So, David Vellante, give us a starting point. How should folks think about this different approach to envisioning business? And digital business, and the notion of risk? >> Okay thanks Peter, I mean I agree with a lot of what you just said and I want to pick up on that. I see the future of digital business as really built around data sort of agreeing with you, building on what you just said. Really where organizations are putting data at the core and increasingly I believe that organizations that have traditionally relied on human expertise as the primary differentiator, will be disrupted by companies where data is the fundamental value driver and I think there are some examples of that and I'm sure we'll talk about it. And in this new world humans have expertise that leverage the organization's data model and create value from that data with augmented machine intelligence. I'm not crazy about the term artificial intelligence. And you hear a lot about data-driven companies and I think such companies are going to have a technology foundation that is increasingly described as autonomous, aware, anticipatory, and importantly in the context of today's discussion, self-healing. So able to withstand failures and recover very quickly. So de-risking a digital business is going to require new ways of thinking about data protection and security and privacy. Specifically as it relates to data protection, I think it's going to be a fundamental component of the so-called data-driven company's technology fabric. This can be designed into applications, into data stores, into file systems, into middleware, and into infrastructure, as code. And many technology companies are going to try to attack this problem from a lot of different angles. Trying to infuse machine intelligence into the hardware, software and automated processes. And the premise is that meaty companies will architect their technology foundations, not as a set of remote cloud services that they're calling, but rather as a ubiquitous set of functional capabilities that largely mimic a range of human activities. Including storing, backing up, and virtually instantaneous recovery from failure. >> So let me build on that. So what you're kind of saying if I can summarize, and we'll get into whether or not it's human expertise or some other approach or notion of business. But you're saying that increasingly patterns in the data are going to have absolute consequential impacts on how a business ultimately behaves. We got that right? >> Yeah absolutely. And how you construct that data model, and provide access to the data model, is going to be a fundamental determinant of success. >> Neil Raden, does that mean that people are no longer important? >> Well no, no I wouldn't say that at all. I'm talking with the head of a medical school a couple of weeks ago, and he said something that really resonated. He said that there're as many doctors who graduated at the bottom of their class as the top of their class. And I think that's true of organizations too. You know what, 20 years ago I had the privilege of interviewing Peter Drucker for an hour and he foresaw this, 20 years ago, he said that people who run companies have traditionally had IT departments that provided operational data but they needed to start to figure out how to get value from that data and not only get value from that data but get value from data outside the company, not just internal data. So he kind of saw this big data thing happening 20 years ago. Unfortunately, he had a prejudice for senior executives. You know, he never really thought about any other people in an organization except the highest people. And I think what we're talking about here is really the whole organization. I think that, I have some concerns about the ability of organizations to really implement this without a lot of fumbles. I mean it's fine to talk about the five digital giants but there's a lot of companies out there that, you know the bar isn't really that high for them to stay in business. And they just seem to get along. And I think if we're going to de-risk we really need to help companies understand the whole process of transformation, not just the technology. >> Well, take us through it. What is this process of transformation? That includes the role of technology but is bigger than the role of technology. >> Well, it's like anything else, right. There has to be communication, there has to be some element of control, there has to be a lot of flexibility and most importantly I think there has to be acceptability by the people who are going to be affected by it, that is the right thing to do. And I would say you start with assumptions, I call it assumption analysis, in other words let's all get together and figure out what our assumptions are, and see if we can't line em up. Typically IT is not good at this. So I think it's going to require the help of a lot of practitioners who can guide them. >> So Dave Vellante, reconcile one point that you made I want to come back to this notion of how we're moving from businesses built on expertise and people to businesses built on expertise resident as patterns in the data, or data models. Why is it that the most valuable companies in the world seem to be the ones that have the most real hardcore data scientists. Isn't that expertise and people? >> Yeah it is, and I think it's worth pointing out. Look, the stock market is volatile, but right now the top-five companies: Apple, Amazon, Google, Facebook and Microsoft, in terms of market cap, account for about $3.5 trillion and there's a big distance between them, and they've clearly surpassed the big banks and the oil companies. Now again, that could change, but I believe that it's because they are data-driven. So called data-driven. Does that mean they don't need humans? No, but human expertise surrounds the data as opposed to most companies, human expertise is at the center and the data lives in silos and I think it's very hard to protect data, and leverage data, that lives in silos. >> Yes, so here's where I'll take exception to that, Dave. And I want to get everybody to build on top of this just very quickly. I think that human expertise has surrounded, in other businesses, the buildings. Or, the bottling plant. Or, the wealth management. Or, the platoon. So I think that the organization of assets has always been the determining factor of how a business behaves and we institutionalized work, in other words where we put people, based on the business' understanding of assets. Do you disagree with that? Is that, are we wrong in that regard? I think data scientists are an example of reinstitutionalizing work around a very core asset in this case, data. >> Yeah, you're saying that the most valuable asset is shifting from some of those physical assets, the bottling plant et cetera, to data. >> Yeah we are, we are. Absolutely. Alright, David Foyer. >> Neil: I'd like to come in. >> Panelist: I agree with that too. >> Okay, go ahead Neil. >> I'd like to give an example from the news. Cigna's acquisition of Express Scripts for $67 billion. Who the hell is Cigna, right? Connecticut General is just a sleepy life insurance company and INA was a second-tier property and casualty company. They merged a long time ago, they got into health insurance and suddenly, who's Express Scripts? I mean that's a company that nobody ever even heard of. They're a pharmacy benefit manager, what is that? They're an information management company, period. That's all they do. >> David Foyer, what does this mean from a technology standpoint? >> So I wanted to to emphasize one thing that evolution has always taught us. That you have to be able to come from where you are. You have to be able to evolve from where you are and take the assets that you have. And the assets that people have are their current systems of records, other things like that. They must be able to evolve into the future to better utilize what those systems are. And the other thing I would like to say-- >> Let me give you an example just to interrupt you, because this is a very important point. One of the primary reasons why the telecommunications companies, whom so many people believed, analysts believed, had this fundamental advantage, because so much information's flowing through them is when you're writing assets off for 30 years, that kind of locks you into an operational mode, doesn't it? >> Exactly. And the other thing I want to emphasize is that the most important thing is sources of data not the data itself. So for example, real-time data is very very important. So what is your source of your real-time data? If you've given that away to Google or your IOT vendor you have made a fundamental strategic mistake. So understanding the sources of data, making sure that you have access to that data, is going to enable you to be able to build the sort of processes and data digitalization. >> So let's turn that concept into kind of a Geoffrey Moore kind of strategy bromide. At the end of the day you look at your value proposition and then what activities are central to that value proposition and what data is thrown off by those activities and what data's required by those activities. >> Right, both internal-- >> We got that right? >> Yeah. Both internal and external data. What are those sources that you require? Yes, that's exactly right. And then you need to put together a plan which takes you from where you are, as the sources of data and then focuses on how you can use that data to either improve revenue or to reduce costs, or a combination of those two things, as a series of specific exercises. And in particular, using that data to automate in real-time as much as possible. That to me is the fundamental requirement to actually be able to do this and make money from it. If you look at every example, it's all real-time. It's real-time bidding at Google, it's real-time allocation of resources by Uber. That is where people need to focus on. So it's those steps, practical steps, that organizations need to take that I think we should be giving a lot of focus on. >> You mention Uber. David Vellante, we're just not talking about the, once again, talking about the Uberization of things, are we? Or is that what we mean here? So, what we'll do is we'll turn the conversation very quickly over to you George. And there are existing today a number of different domains where we're starting to see a new emphasis on how we start pricing some of this risk. Because when we think about de-risking as it relates to data give us an example of one. >> Well we were talking earlier, in financial services risk itself is priced just the way time is priced in terms of what premium you'll pay in terms of interest rates. But there's also something that's softer that's come into much more widely-held consciousness recently which is reputational risk. Which is different from operational risk. Reputational risk is about, are you a trusted steward for data? Some of that could be personal information and a use case that's very prominent now with the European GDPR regulation is, you know, if I ask you as a consumer or an individual to erase my data, can you say with extreme confidence that you have? That's just one example. >> Well I'll give you a specific number on that. We've mentioned it here on Action Item before. I had a conversation with a Chief Privacy Officer a few months ago who told me that they had priced out what the fines to Equifax would have been had the problem occurred after GDPR fines were enacted. It was $160 billion, was the estimate. There's not a lot of companies on the planet that could deal with $160 billion liability. Like that. >> Okay, so we have a price now that might have been kind of, sort of mushy before. And the notion of trust hasn't really changed over time what's changed is the technical implementations that support it. And in the old world with systems of record we basically collected from our operational applications as much data as we could put it in the data warehouse and it's data marked satellites. And we try to govern it within that perimeter. But now we know that data basically originates and goes just about anywhere. There's no well-defined perimeter. It's much more porous, far more distributed. You might think of it as a distributed data fabric and the only way you can be a trusted steward of that is if you now, across the silos, without trying to centralize all the data that's in silos or across them, you can enforce, who's allowed to access it, what they're allowed to do, audit who's done what to what type of data, when and where? And then there's a variety of approaches. Just to pick two, one is where it's discovery-oriented to figure out what's going on with the data estate. Using machine learning this is, Alation is an example. And then there's another example, which is where you try and get everyone to plug into what's essentially a new system catalog. That's not in a in a deviant mesh but that acts like the fabric for your data fabric, deviant mesh. >> That's an example of another, one of the properties of looking at coming at this. But when we think, Dave Vellante coming back to you for a second. When we think about the conversation there's been a lot of presumption or a lot of bromide. Analysts like to talk about, don't get Uberized. We're not just talking about getting Uberized. We're talking about something a little bit different aren't we? >> Well yeah, absolutely. I think Uber's going to get Uberized, personally. But I think there's a lot of evidence, I mentioned the big five, but if you look at Spotify, Waze, AirbnB, yes Uber, yes Twitter, Netflix, Bitcoin is an example, 23andme. These are all examples of companies that, I'll go back to what I said before, are putting data at the core and building humans expertise around that core to leverage that expertise. And I think it's easy to sit back, for some companies to sit back and say, "Well I'm going to wait and see what happens." But to me anyway, there's a big gap between kind of the haves and the have-nots. And I think that, that gap is around applying machine intelligence to data and applying cloud economics. Zero marginal economics and API economy. An always-on sort of mentality, et cetera et cetera. And that's what the economy, in my view anyway, is going to look like in the future. >> So let me put out a challenge, Jim I'm going to come to you in a second, very quickly on some of the things that start looking like data assets. But today, when we talk about data protection we're talking about simply a whole bunch of applications and a whole bunch of devices. Just spinning that data off, so we have it at a third site. And then we can, and it takes to someone in real-time, and then if there's a catastrophe or we have, you know, large or small, being able to restore it often in hours or days. So we're talking about an improvement on RPO and RTO but when we talk about data assets, and I'm going to come to you in a second with that David Floyer, but when we talk about data assets, we're talking about, not only the data, the bits. We're talking about the relationships and the organization, and the metadata, as being a key element of that. So David, I'm sorry Jim Kobielus, just really quickly, thirty seconds. Models, what do they look like? What are the new nature of some of these assets look like? >> Well the new nature of these assets are the machine learning models that are driving so many business processes right now. And so really the core assets there are the data obviously from which they are developed, and also from which they are trained. But also very much the knowledge of the data scientists and engineers who build and tune this stuff. And so really, what you need to do is, you need to protect that knowledge and grow that knowledge base of data science professionals in your organization, in a way that builds on it. And hopefully you keep the smartest people in house. And they can encode more of their knowledge in automated programs to manage the entire pipeline of development. >> We're not talking about files. We're not even talking about databases, are we David Floyer? We're talking about something different. Algorithms and models are today's technology's really really set up to do a good job of protecting the full organization of those data assets. >> I would say that they're not even being thought about yet. And going back on what Jim was saying, Those data scientists are the only people who understand that in the same way as in the year 2000, the COBOL programmers were the only people who understood what was going on inside those applications. And we as an industry have to allow organizations to be able to protect the assets inside their applications and use AI if you like to actually understand what is in those applications and how are they working? And I think that's an incredibly important de-risking is ensuring that you're not dependent on a few experts who could leave at any moment, in the same way as COBOL programmers could have left. >> But it's not just the data, and it's not just the metadata, it really is the data structure. >> It is the model. Just the whole way that this has been put together and the reason why. And the ability to continue to upgrade that and change that over time. So those assets are incredibly important but at the moment there is no way that you can, there isn't technology available for you to actually protect those assets. >> So if I combine what you just said with what Neil Raden was talking about, David Vallante's put forward a good vision of what's required. Neil Raden's made the observation that this is going to be much more than technology. There's a lot of change, not change management at a low level inside the IT, but business change and the technology companies also have to step up and be able to support this. We're seeing this, we're seeing a number of different vendor types start to enter into this space. Certainly storage guys, Dylon Sears talking about doing a better job of data protection we're seeing middleware companies, TIBCO and DISCO, talk about doing this differently. We're seeing file systems, Scality, WekaIO talk about doing this differently. Backup and restore companies, Veeam, Veritas. I mean, everybody's looking at this and they're all coming at it. Just really quickly David, where's the inside track at this point? >> For me it is so much whitespace as to be unbelievable. >> So nobody has an inside track yet. >> Nobody has an inside track. Just to start with a few things. It's clear that you should keep data where it is. The cost of moving data around an organization from inside to out, is crazy. >> So companies that keep data in place, or technologies to keep data in place, are going to have an advantage. >> Much, much, much greater advantage. Sure, there must be backups somewhere. But you need to keep the working copies of data where they are because it's the real-time access, usually that's important. So if it originates in the cloud, keep it in the cloud. If it originates in a data-provider, on another cloud, that's where you should keep it. If it originates on your premise, keep it where it originated. >> Unless you need to combine it. But that's a new origination point. >> Then you're taking subsets of that data and then combining that up for itself. So that would be my first point. So organizations are going to need to put together what George was talking about, this metadata of all the data, how it interconnects, how it's being used. The flow of data through the organization, it's amazing to me that when you go to an IT shop they cannot define for you how the data flows through that data center or that organization. That's the requirement that you have to have and AI is going to be part of that solution, of looking at all of the applications and the data and telling you where it's going and how it's working together. >> So the second thing would be companies that are able to build or conceive of networks as data. Will also have an advantage. And I think I'd add a third one. Companies that demonstrate perennial observations, a real understanding of the unbelievable change that's required you can't just say, oh Facebook wants this therefore everybody's going to want it. There's going to be a lot of push marketing that goes on at the technology side. Alright so let's get to some Action Items. David Vellante, I'll start with you. Action Item. >> Well the future's going to be one where systems see, they talk, they sense, they recognize, they control, they optimize. It may be tempting to say, you know what I'm going to wait, I'm going to sit back and wait to figure out how I'm going to close that machine intelligence gap. I think that's a mistake. I think you have to start now, and you have to start with your data model. >> George Gilbert, Action Item. >> I think you have to keep in mind the guardrails related to governance, and trust, when you're building applications on the new data fabric. And you can take the approach of a platform-oriented one where you're plugging into an API, like Apache Atlas, that Hortonworks is driving, or a discovery-oriented one as David was talking about which would be something like Alation, using machine learning. But if, let's say the use case starts out as an IOT, edge analytics and cloud inferencing, that data science pipeline itself has to now be part of this fabric. Including the output of the design time. Meaning the models themselves, so they can be managed. >> Excellent. Jim Kobielus, you've been pretty quiet but I know you've got a lot to offer. Action Item, Jim. >> I'll be very brief. What you need to do is protect your data science knowledge base. That's the way to de-risk this entire process. And that involves more than just a data catalog. You need a data science expertise registry within your distributed value chain. And you need to manage that as a very human asset that needs to grow. That is your number one asset going forward. >> Ralph Finos, you've also been pretty quiet. Action Item, Ralph. >> Yeah, I think you've got to be careful about what you're trying to get done. Whether it's, it depends on your industry, whether it's finance or whether it's the entertainment business, there are different requirements about data in those different environments. And you need to be cautious about that and you need leadership on the executive business side of things. The last thing in the world you want to do is depend on data scientists to figure this stuff out. >> And I'll give you the second to last answer or Action Item. Neil Raden, Action Item. >> I think there's been a lot of progress lately in creating tools for data scientists to be more efficient and they need to be, because the big digital giants are draining them from other companies. So that's very encouraging. But in general I think becoming a data-driven, a digital transformation company for most companies, is a big job and I think they need to it in piece parts because if they try to do it all at once they're going to be in trouble. >> Alright, so that's great conversation guys. Oh, David Floyer, Action Item. David's looking at me saying, ah what about me? David Floyer, Action Item. >> (laughing) So my Action Item comes from an Irish proverb. Which if you ask for directions they will always answer you, "I wouldn't start from here." So the Action Item that I have is, if somebody is coming in saying you have to re-do all of your applications and re-write them from scratch, and start in a completely different direction, that is going to be a 20-year job and you're not going to ever get it done. So you have to start from what you have. The digital assets that you have, and you have to focus on improving those with additional applications, additional data using that as the foundation for how you build that business with a clear long-term view. And if you look at some of the examples that were given early, particularly in the insurance industries, that's what they did. >> Thank you very much guys. So, let's do an overall Action Item. We've been talking today about the challenges of de-risking digital business which ties directly to the overall understanding of the role of data assets play in businesses and the technology's ability to move from just protecting data, restoring data, to actually restoring the relationships in the data, the structures of the data and very importantly the models that are resident in the data. This is going to be a significant journey. There's clear evidence that this is driving a new valuation within the business. Folks talk about data as the new oil. We don't necessarily see things that way because data, quite frankly, is a very very different kind of asset. The cost could be shared because it doesn't suffer the same limits on scarcity. So as a consequence, what has to happen is, you have to start with where you are. What is your current value proposition? And what data do you have in support of that value proposition? And then whiteboard it, clean slate it and say, what data would we like to have in support of the activities that we perform? Figure out what those gaps are. Find ways to get access to that data through piecemeal, piece-part investments. That provide a roadmap of priorities looking forward. Out of that will come a better understanding of the fundamental data assets that are being created. New models of how you engage customers. New models of how operations works in the shop floor. New models of how financial services are being employed and utilized. And use that as a basis for then starting to put forward plans for bringing technologies in, that are capable of not just supporting the data and protecting the data but protecting the overall organization of data in the form of these models, in the form of these relationships, so that the business can, as it creates these, as it throws off these new assets, treat them as the special resource that the business requires. Once that is in place, we'll start seeing businesses more successfully reorganize, reinstitutionalize the work around data, and it won't just be the big technology companies who have, who people call digital native, that are well down this path. I want to thank George Gilbert, David Floyer here in the studio with me. David Vellante, Ralph Finos, Neil Raden and Jim Kobelius on the phone. Thanks very much guys. Great conversation. And that's been another Wikibon Action Item. (upbeat music)

Published Date : Mar 16 2018

SUMMARY :

I'm joined here in the studio has been that the difference and importantly in the context are going to have absolute consequential impacts and provide access to the data model, the ability of organizations to really implement this but is bigger than the role of technology. that is the right thing to do. Why is it that the most valuable companies in the world human expertise is at the center and the data lives in silos in other businesses, the buildings. the bottling plant et cetera, to data. Yeah we are, we are. an example from the news. and take the assets that you have. One of the primary reasons why is going to enable you to be able to build At the end of the day you look at your value proposition And then you need to put together a plan once again, talking about the Uberization of things, to erase my data, can you say with extreme confidence There's not a lot of companies on the planet and the only way you can be a trusted steward of that That's an example of another, one of the properties I mentioned the big five, but if you look at Spotify, and I'm going to come to you in a second And so really, what you need to do is, of protecting the full organization of those data assets. and use AI if you like to actually understand and it's not just the metadata, And the ability to continue to upgrade that and the technology companies also have to step up It's clear that you should keep data where it is. are going to have an advantage. So if it originates in the cloud, keep it in the cloud. Unless you need to combine it. That's the requirement that you have to have that goes on at the technology side. Well the future's going to be one where systems see, I think you have to keep in mind the guardrails but I know you've got a lot to offer. that needs to grow. Ralph Finos, you've also been pretty quiet. And you need to be cautious about that And I'll give you the second to last answer and they need to be, because the big digital giants David's looking at me saying, ah what about me? that is going to be a 20-year job and the technology's ability to move from just

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Jim KobielusPERSON

0.99+

AmazonORGANIZATION

0.99+

David VellantePERSON

0.99+

DavidPERSON

0.99+

AppleORGANIZATION

0.99+

FacebookORGANIZATION

0.99+

MicrosoftORGANIZATION

0.99+

NeilPERSON

0.99+

GoogleORGANIZATION

0.99+

WalmartORGANIZATION

0.99+

Dave VellantePERSON

0.99+

David FloyerPERSON

0.99+

George GilbertPERSON

0.99+

Jim KobeliusPERSON

0.99+

Peter BurrisPERSON

0.99+

JimPERSON

0.99+

Geoffrey MoorePERSON

0.99+

GeorgePERSON

0.99+

Ralph FinosPERSON

0.99+

Neil RadenPERSON

0.99+

INAORGANIZATION

0.99+

EquifaxORGANIZATION

0.99+

SearsORGANIZATION

0.99+

PeterPERSON

0.99+

March 2018DATE

0.99+

UberORGANIZATION

0.99+

TIBCOORGANIZATION

0.99+

DISCOORGANIZATION

0.99+

David VallantePERSON

0.99+

$160 billionQUANTITY

0.99+

20-yearQUANTITY

0.99+

30 yearsQUANTITY

0.99+

RalphPERSON

0.99+

DavePERSON

0.99+

NetflixORGANIZATION

0.99+

Peter DruckerPERSON

0.99+

Express ScriptsORGANIZATION

0.99+

VeritasORGANIZATION

0.99+

David FoyerPERSON

0.99+

VeeamORGANIZATION

0.99+

$67 billionQUANTITY

0.99+

Palo Alto, CaliforniaLOCATION

0.99+

first pointQUANTITY

0.99+

thirty secondsQUANTITY

0.99+

secondQUANTITY

0.99+

SpotifyORGANIZATION

0.99+

TwitterORGANIZATION

0.99+

Connecticut GeneralORGANIZATION

0.99+

two thingsQUANTITY

0.99+

bothQUANTITY

0.99+

about $3.5 trillionQUANTITY

0.99+

HortonworksORGANIZATION

0.99+

CignaORGANIZATION

0.99+

BothQUANTITY

0.99+

2000DATE

0.99+

todayDATE

0.99+

oneQUANTITY

0.99+

Dylon SearsORGANIZATION

0.98+

NVMe: Ready for the Enterprise


 

>> Announcer: From the Silicon Angle Media Office in Boston, Massachusetts. It's the theCUBE. Now here's your host Stu Miniman. >> Hi, I'm Stu Miniman and welcome to a special theCUBE conversation here in our Boston area studio. Happy to welcome back to the program, Danny Cobb, who's with Dell EMC in the CTO office. >> Thanks Stu, great to see you here today. >> Great to see you too. So Danny, we're going to talk about a topic that like many things in the industry. It seems like it's something that happen overnight, but there's been a lot of hard work going on for quite a lot of years, even going back to heck when you and I worked together. >> Danny: That's right. >> A company use to be called EMC. NVMe, so first of all just bring everybody up to speed as to what you work on inside the Dell family. >> Danny: Sure, so my responsibility at now Dell EMC has been this whole notion of emergence systems. New technologies, new capabilities that are just coming into broad market adoption, broad readiness, technological feasibility, and those kinds of things. And then making sure that as a company we're prepared for their adoption and inclusion in our product portfolio. So it's a great set of capabilities a great set of work to be doing especially if you have a short attention span like I do. >> Danny, I spend a lot of time these days in the open source world. You talk about people are moving faster, people are trying lots of technologies. You've been doing some really hard work. The company and the industry in the standards world. What's the importance of standards these days, and bring us back to how this NVMe stuff started. >> So a great way to get everybody up to speed as you mentioned when you kicked off. NVMe, an overnight success, almost 11 years in the making now. The very first NVMe standard was about 2007. EMC joined the NVMe consortium in 2008 along with an Austin, Texas computer company called Dell. So Dell and EMC were both in the front row of defining the NVMe standard, and essentially putting in place a set of standards, a set of architectures, a set of protocols, product adoption capabilities, compatibility capabilities for the entire industry to follow, starting in 2008. Now you know from our work together that the storage industry likes to make sure that everything's mature, everything works reliably. Everything has broad interoperability standards and things like that. So since 2008, we've largely been about how do we continue to build momentum and generate support for a new storage technology that's based on broadly accepted industry standards, in order to allow the entire industry to move forward. Not just to achieve the most out of the flash revolution, but prepare the industry for coming enhancements to storage class memory. >> Yeah, so storage class memory you mentioned things like flash. One thing we've looked at for a long time is when flash rolled out. There's a lot of adoption on the consumer side first, and then that drove the enterprise piece, but flash today is still done through Ikusi interface with SaaS or Sata. And believe we're finally getting rid of when we go to NVMe. What some in the industry have called the horrible Ikusi stack. >> Danny: That's right. >> So explain to us a little bit about first, the consumer piece of where this fits first, and how it gets the enterprise. Where are we in the industry today with that? >> Yeah so as you pointed out a number of the new media technologies have actually gained a broad acceptance and a grounds full of support starting in the consumer space. The rapid adoption of mobile devices whether initially iPods and iPhones and things like that. Tablets where the more memory you have the more songs you carry, the more pictures you can take. A lot of very virtuous cycle type things occurred in the consumer space to allow flash to go from a fairly expensive perhaps niche technology to broad high volume manufacturing. And with high volume manufacturing comes much lower costs and so we always knew that flash was fast when we first started working on it at EMC in 2005. It became fast and robust when we shipped in 2008. It went from flash to robust to affordable with technologies like the move from SLC to MLC, and now TLC flash and the continuing advances of Moore's law. And so flash has been the beneficiary of high volume consumer economics along with our friend Moore's law over a number of years. >> Okay, so on the NVMe piece, your friends down in Round Rock in Dell. They've got not only the storage portfolio, but on the consumer side. There's pieces like my understanding NVMe already in the market for some part of this today, correct. >> That's right, I think one of the very first adoption scenarios for NVMe was in Lightweight laptop device. The storage deck could be more efficient. The fundamental number of gates in Silicon required to implement the stack was more efficient. Power was more efficient, so a whole bunch of things that were beneficial to a mobile high volume client device like an ultra light, ultra portable laptop made it a great place to launch the technology. >> Okay, and so bring us to what does that mean then for storage? Is that available in the enterprise storage today? >> Danny: Yeah. >> And where is that today and where is that today, and where are we going to see in the next years though? >> So here's the progression that the industry has more or less followed. If we went from that high volume, ultra light laptop device to very inexpensive M.2 devices that could be used in laptops and desktops more broadly, also gained a fair amount of traction with certain used cases and hyperscalers. And then as the spec matured and as the enterprise ecosystem around it, broader data integrity type solutions in the sili-case itself. A number of other things that are bread and butter for enterprise class devices. As those began to emerge, we've now seen NVMe move forward from laptop and client devices to high volume M.2 devices to full function, full capability dual ported enterprise NVMe devices really crossing over this year. >> Okay, so that means we're going to see not only in the customer pieces but should be seeing really enterprise roll out in I'm assuming things like storage arrays, maybe hyper converged. All the different flavors in the not too distant future. >> Absolutely right, the people who get paid to forecast these things when they look into their crystal balls. They've talked about when does NVMe get close enough to its predecessor SaaS to make the switch over be a no brainer. And often times, you get a performance factor where there's more value or you get a cost factor where suddenly that becomes the way the game is won. In the case of NVMe versus SaaS, both of those situations value and cost are more or less a wash right now across the industry. And so there are very few impediments to adoption. Much like a few years ago, there were very few impediment to adoption of enterprise SSDs versus high performance HDDs. The 15Ks and the 10K HDDs. Once we got to close enough in terms of cost parity. The entire industry went all flash over night. >> Yeah, it's a little bit different than say the original adoption of flash versus HDD. >> Danny: That's right. >> HDD versus SSD. Remember back, you had to have the algebra sheet. And you said okay, how many devices did I have.? What's the power savings that I could get out of that? Plus the performance that I had and then does this makes sense. It seems like this is a much more broadly applicable type of solution that we'll see. >> Danny: Right. >> For much faster adoption. >> Do you remember those days of a little goes a long way? >> Stu: Yeah. >> And then more is better? And then almost be really good, and so that's where we've come over what seems like a very few years. >> Okay, so we've only been talking about NVMe, the thing I know David Foyer's been look a lot from an architectural standpoint. Where we see benefit obviously from NVMe but NVMe over Fabrics is the thing that has him really excited if you talk about the architectures, maybe just explain a little bit about what I get with NVMe and what I'll get added on top with the over fabric piece of that. >> Danny: Sure. >> And what's that roll out look like? >> Can I tell you a little story about what I think of as the birth of NVMe over Fabrics? >> Stu: Please. >> Some of your viewers might remember a project at EMC called Thunder. And Thunder was PCI flash with an RDMA over ethernet front end on it. We took that system to Intel developers forum as a proof of concept. Around the corner from me was an engineer named Dave Min-turn, who's an Intel engineer. Who had almost exactly the same software stack up and running except it was an Intel RDMA capability nick and an Intel flash drive, and of course some changes to the Intel processor stack to support the used case that he had in mind. And we started talking and we realized that we were both counting the number of instructions from packet arriving across the network to bytes being read or written on the vis-tory fast PCI E device. And we realized that there has to be a better way, and so from that day, I think it was September 2013, maybe it was August. We actually started working together on how can we take the benefits of the NVMe standard that exists mapped onto PCI E. And then map those same parameters as cleanly as we possibly can onto, at that time ethernet but also InfiniBand, Fiber channel, and perhaps some other transports as a way to get the benefits of the NVMe software stack, and build on top of the new high performance capabilities of these RDMA capable interconnects. So it goes way back to 2013, we moved it into the NVMe standard as a proposal in 2014. And again three, four years later now, we're starting to see solutions roll out that begin to show the promise that we saw way back then. >> Yeah and the challenge with networking obviously is sounds like you've got a few different transport layers that I can use there. Probably a number of different providers. How baked is the standard? Where do things like hits the interoperability fit into the mix? When do customers get their hands on it, and what can they expect the roll out to be? >> We're clearly at the beginning of what's about to be a very, I think long and healthy future for NVMe over Fabrics. I don't know about you. I was at Flash Memory Summit back in August in Santa Clara and there were a number of vendors there starting to talk about NVMe over Fabrics basics. FPGA implementation, system on chip implementations, software implementations across a variety of stacks. The great thing was NVMe over Fabrics was a phrase of the entire show. The challenging thing was probably no two of those solutions interoperated with each other yet. We were still at the running water through the pipes phase, not really checking for leaks and getting to broad adoption. Broad adoption I think comes when we've got a number of vendors broad interoperability, multi-supplier, component availability and those things, that let a number of implementations exists and interoperate because our customers live in a diverse multi-vendor environment. So that's what it will take to go from interesting proof of concept technology which I think is what we're seeing in terms of early customers engagement today to broad base deployment in both existing fiber channel implementations, and also in some next generation data center implementations, probably beginning next year. >> Okay, so Danny, I talked to a lot of companies out there. Everyone that's involved in this (mumbles) has been talking about NVMe over Fabric for a couple of years now. From a user standpoint, how are they going to help sort this out? What will differentiate the check box. Yes, I have something that follows this to, oh wait this will actually help performance so much better. What works with my environment? Where are the pitfalls and where are the things that are going to help companies? What's going to differentiate the marketplace? >> As an engineer, we always get into the speeds and the feeds and the weeds on performance and things like that, and while those are all true. We can talk about fewer and fewer instructions in the networks stack. Fewer and fewer instructions in the storage stack. We can talk about more efficient Silicon implementations. More affinity for multi-processor, multi-core processing environments, more efficient operating system implementations and things like that. But that's just the performance side. The broader benefits come to beginning to move to more cost effective data center fabric implementation. Where I'm not managing an orange wire and a blue wire unless that's really what I want. There's still a number of people who want to manage their fiber channel and will run NVMe over that. They get the compatibility that they want. They get the policies that they want and the switch behavior that they want, and the provisioning model that they want and all of those things. They'll get that in an NVMe over Fabrics implementation. A new data center however will be able to go, you know what, I'm all in day one on 25, 5000 bit gigabit ethernet as my fundamental connection of choice. I'm going 400 gigabit ethernet ports as soon as Andy Beck-tels shine or somebody gives them to me and things like that. And so if that's the data center architecture model that I'm in, that's a fundamental implementation decision that I get to make knowing that I can run an enterprise grade, storage protocol over the top of that, and the industry is ready. My external storage is ready, my servers are ready and my workloads can get the benefit of that. >> Okay, so if I just step back for a second, NVMe sounds like a lot of it is what we would consider the backend in proving that NVMe over Fabrics helps with some of the front end. From a customer stand point, what about their application standpoint? Can they work with everything that they have today? Are there things that they're going to want to do to optimize for that? So the storage industry just take care of it for them. What do they think about today and future planning from an application standpoint? >> I think it's a matter of that readiness and what is it going to take. The good news and this has analogs to the industry change from HDD to SSDs in the first place. The good new is you can make that switch over today and your data management application, your database application, your warehouse, you're analytics or whatever. Not one line of software changes. NVMe device shows up in the block stack of your favorite operating system, and you get lower latency, more IOs in parallel. More CPU back for your application to run because you don't need it in the storage stack anymore. So you get the benefits of that just by changing over to this new protocol. For applications who then want to optimize for this new environment, you can start thinking about having more IOs in flight in parallel. You could start thinking about what happens when those IOs are satisfied more rapidly without as much overhead in and interrupt processing and a number of things like that. You could start thinking about what happens when your application goes from hundred micro-second latencies and IOs like the flash devices to 10 microsecond or one microsecond IOs. Would perhaps with some of these new storage class memory devices that are out there. Those are the benefits that people are going to see when they start thinking about an all NVMe stack. Not just being beneficial for existing flash implementations but being fundamentally required and mandatory to get the benefits of storage class memory implementations. So this whole notion of future ready was one of the things that was fundamental in how NVMe was initially designed over 10 years ago. And we're starting to see that long term view pay benefits in the marketplace. >> Any insight from the customer standpoint? Is it certain applications or verticals where this is really going to help? I think back to the move to SSDs. It was David Foyer who just wet around the entire news feed. He was like, database, database, database is where we can have the biggest impact. What's NVMe going to impact? >> I think what we always see with these things. First of all, NVMe is probably going to have a very rapid advancement and impact across the industry much more quickly than the transition from HDD to SSD, so we don't have to go through that phase of a little goes a long way. You can largely make the switch and as your ecosystem supports it as your vendor of choice supports it. You can make that switch and to a large extent have the application be agnostic from that. So that's a really good way to start. The other place is you and I have had this conversation before. If you take out a cocktail napkin and you draw an equation that says time equals money. That's an obvious place where NVMe and NVMe over Fabrics benefit someone initially. High speed analytics, real time, high frequency trading, a number of things where more efficiency. My ability to do more work per unit time than yours gives me a competitive advantage. Makes my algorithms better, exposes my IP in a more advantageous way. Those are wonderful places for these types of emerging technologies to get adopted because the value proposition is just slam dunk simple. >> Yeah, so running through my head are all the latest buzz words. Is everything at Wikibon when we did our predictions for this year, data is at the center of all of it. But machine learning, AI, heck blockchain, Edge computing all of these things can definitely be affected by that. Is NVMe going to help all of them? >> Oh machine learning. Incredible high bandwidth application. Wonderful thing stream data in, compute on it, get your answers and things like that. Wonderful benefits for a new squeaky clean storage stack to run into. Edge where often times, real time is required. The ability to react to a stimulus and provide a response because of human safety issue or a risk management issue or what have you. Any place that performance let's you get close, get you outer close to real time is a win. And the efficiency of NVMe has a significant advantage in those environments. So NVMe is largely able to help the industry be ready just at the time that new processing models are coming in such as machine learning, artificial intelligence. New data center deployment architectures like the Edge come in and the new types of telemetry and algorithms that they maybe running there. It's really a technology that's arriving just at the time that the industry needs it. >> Yeah, was reading up on some of the blogs on the Dell sites. Jeff Brew-dough said, "We should expect "to see things from 2018." Not expecting you to pre-announce anything but what should we be looking for from Dell and the Dell family in 2018 when it comes to this space? >> We're very bullish on NVMe. We've been pushing very, very hard in the standards community. Obviously, we have already shipped NVMe for a series of internal use cases in our storage platforms. So we have confidence in the technology, its readiness, the ability of our software stacks to do what they need to do. We have a robust, multi-supplier supply chain ready to go so that we can service our customers, and provide them the choice in capacities and capabilities and things like that that are required to bet your business, and long term supply assurance for and things like that. So we're seeing the next year or so be the full transition to NVMe and we're ready for it. We've been getting ready for a long time. Now, the ecosystem is there and we're predicting very big things in the future. >> Okay, so Danny, you've been working on this for 11 years. Give us just a little bit of insight. What you learned, what this group has learned from previous transitions? What's excited you the most? Give us a little bit of sausage making? >> What's been funny about this is we talk about the initial transition to flash, and just getting to the point where a little goes a long way. That was a three year journey. We started in 2005, we shipped in 2008. We moved from there. We flash in a raise as a tier, as a cache, as the places where a little latency, high performance media adds value and those things. Then we saw the industry begin to develop into some server centric storage solutions. You guys have been at the front of forecasting what that market looks like with software defined storage. We see that in technologies like ScaleIO and VSAN where their abilities to start using the media when it's resident in a server became important. And suddenly that began to grow as a peer to the external storage market. Another market San alternative came along with them. Now we're moving even further out where it seems like we use to ask why flash? And it will get asked that. Now it's why not flash? Why don't we move there? So what we've seen is a combination of things. As we get more and more efficient low latency storage protocols. The bottle neck stops being about the network and start being about something else. As we get more multi-core compute capabilities and Moore's law continues to tickle along. We suddenly have enough compute and enough bandwidth and the next thing to target is the media. As we get faster and faster more capable media such as the move to flash and now the move to storage class memory. Again the bottle neck moves away from the media, maybe back to something else in the stack. As I advance compute in media and interconnect, suddenly it becomes beneficial for me to rewrite my application or re-platform it, and create an entire new set of applications that exploit the current capabilities or the technologies. And so we are in that rinse, lather repeat cycle right now in the technology. And for guys like you and me who've been doing this for awhile, we've seen this movie before. We know how it hands. It actually doesn't end. There are just new technologies and new bottlenecks and new manifestations of Moore's law and Holmes law and Metcalfe's law that come into play here. >> Alright so Danny, any final predictions from you on what we should be seeing? What's the next thing you work on that you call victory soon right? >> Yes, so I'm starting to lift my eyes a little bit and we think we see some really good capabilities coming at us from the device physicists in the white coats with the pocket protectors back in the fabs. We're seeing a couple of storage class memories begin to come to market now. You're led by Intel and microns, 3D XPoint but a number of other candidates on the horizon that will take us from this 100 microsecond world to a 10 microsecond world maybe to a 100 nanosecond world. And you and I we back here talking about that fairly soon I predict. >> Excellent, well Danny Cobb always a pleasure to catch up with you. Thanks so much for walking us through all of the pieces. We'll have lots more coverage of this technology and lots more more. Check out theCUBE.net. You can see Dell Technology World and lots of the other shows will be back. Thank you so much for watching theCUBE. (uptempo techno music)

Published Date : Mar 16 2018

SUMMARY :

Announcer: From the Silicon Angle Media Office Happy to welcome back to the program, to heck when you and I worked together. inside the Dell family. and those kinds of things. The company and the industry in the standards world. that the storage industry likes to make sure There's a lot of adoption on the consumer side first, and how it gets the enterprise. in the consumer space to allow flash to go from Okay, so on the NVMe piece, required to implement the stack was more efficient. and client devices to high volume M.2 devices in the customer pieces but should be seeing The 15Ks and the 10K HDDs. the original adoption of flash versus HDD. What's the power savings that I could get out of that? and so that's where we've come over but NVMe over Fabrics is the thing that has him that begin to show the promise that we saw way back then. Yeah and the challenge with networking obviously We're clearly at the beginning Where are the pitfalls and where are the things and the provisioning model that they want So the storage industry just take care of it for them. Those are the benefits that people are going to see I think back to the move to SSDs. You can largely make the switch and as your ecosystem are all the latest buzz words. that the industry needs it. of the blogs on the Dell sites. that are required to bet your business, What's excited you the most? and the next thing to target is the media. but a number of other candidates on the horizon and lots of the other shows will be back.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
2008DATE

0.99+

EMCORGANIZATION

0.99+

DellORGANIZATION

0.99+

2014DATE

0.99+

Dave Min-turnPERSON

0.99+

DannyPERSON

0.99+

2005DATE

0.99+

Danny CobbPERSON

0.99+

2018DATE

0.99+

StuPERSON

0.99+

one microsecondQUANTITY

0.99+

AugustDATE

0.99+

September 2013DATE

0.99+

Stu MinimanPERSON

0.99+

David FoyerPERSON

0.99+

10 microsecondQUANTITY

0.99+

Santa ClaraLOCATION

0.99+

11 yearsQUANTITY

0.99+

2013DATE

0.99+

BostonLOCATION

0.99+

Jeff Brew-doughPERSON

0.99+

iPhonesCOMMERCIAL_ITEM

0.99+

three yearQUANTITY

0.99+

Austin, TexasLOCATION

0.99+

100 nanosecondQUANTITY

0.99+

Dell EMCORGANIZATION

0.99+

iPodsCOMMERCIAL_ITEM

0.99+

bothQUANTITY

0.99+

Round RockLOCATION

0.99+

Boston, MassachusettsLOCATION

0.99+

next yearDATE

0.99+

todayDATE

0.99+

four years laterDATE

0.99+

WikibonORGANIZATION

0.99+

hundred micro-secondQUANTITY

0.99+

firstQUANTITY

0.99+

IntelORGANIZATION

0.98+

MoorePERSON

0.98+

10KQUANTITY

0.98+

oneQUANTITY

0.98+

25, 5000 bitQUANTITY

0.98+

2007DATE

0.97+

Flash Memory SummitEVENT

0.97+

NVMeORGANIZATION

0.96+

this yearDATE

0.96+

SiliconLOCATION

0.96+

twoQUANTITY

0.96+

threeDATE

0.96+

SataTITLE

0.96+

Wikibon | Action Item, Feb 2018


 

>> Hi I'm Peter Burris, welcome to Action Item. (electronic music) There's an enormous net new array of software technologies that are available to businesses and enterprises to tend to some new classes of problems and that means that there's an explosion in the number of problems that people perceive as could be applied, or could be solved, with software approaches. The whole world of how we're going to automate things differently in artificial intelligence and any number of other software technologies, are all being brought to bear on problems in ways that we never envisioned or never thought possible. That leads ultimately to a comparable explosion in the number of approaches to how we're going to solve some of these problems. That means new tooling, new models, new any number of other structures, conventions, and artifacts that are going to have to be factored by IT organizations and professionals in the technology industry as they conceive and put forward plans and approaches to solving some of these problems. Now, George that leads to a question. Are we going to see an ongoing ever-expanding array of approaches or are we going to see some new kind of steady-state that kind of starts to simplify what happens, or how enterprises conceive of the role of software and solving problems. >> Well, we've had... probably four decades of packaged applications being installed and defining really the systems of record, which first handled the ordered cash process and then layered around that. Once we had more CRM capabilities we had the sort of the opportunity to lead capability added in there. But systems of record fundamentally are backward looking, they're tracking about the performance of the business. The opportunity-- >> Peter: Recording what has happened? >> Yes, recording what has happened. The opportunity we have is now to combine what the big Internet companies pioneered, with systems of engagement. Where you had machine learning anticipating and influencing interactions. You can now combine those sorts of analytics with systems of record to inform and automate decisions in the form of transactions. And the question is now, how are we going to do this? Is there some way to simplify or, not completely standardized, but can we make it so that we have at least some conventions and design patterns for how to do that? >> And David, we've been working on this problem for quite some time but the notion of convergence has been extent in the hardware and the services, or in the systems business for quite some time. Take us through what convergence means and how it is going to set up new ways of thinking about software. >> So there's a hardware convergence and it's useful to define a few terms. There's converged systems, those are systems which have some management software that have been brought into it and then on top of that they have traditional SANs and networks. There's hyper-converged systems, which started off in the cloud systems and now have come to enterprise as well. And those bring software networking, software storage, software-- >> Software defined, so it's a virtualizing of those converged systems. >> David: Absolutely, and in the future is going to bring also automated operational stuff as well, AI in the operational side. And then there's full stack conversions. Where we start to put in the software, the application software, to begin with the database side of things and then the application itself on top of the database. And finally these, what you are talking about, the systems of intelligence. Where we can combine both the systems of record, the systems of engagement, and the real-time analytics as a complete stack. >> Peter: Let's talk about this for a second because ultimately what I think you're saying is, that we've got hardware convergence in the form of converged infrastructure, hyper-converged in the forms of virtualization of that, new ways of thinking about how the stack comes together, and new ways of thinking about application components. But what seems to be the common thread, through all of this, is data. >> David: Yes. >> So it's basically what we're seeing is a convergence or a rethinking of how software elements revolve around the data, is that kind of the centerpiece of this? >> David: That's the centerpiece of it and we had very serious constraints about accessing data. Those will improve with flash but there's still a lot of room for improvement. And the architecture that we are saying is going to come forward, which really helps this a lot, is the unit grid architecture. Where we offload the networking and the storage from the processor. This is already happening in the hyper scale clouds, they're putting a lot of effort into doing this. But we're at the same time allowing any processor to access any data in a much more fluid way and we can grow that to thousands of processes. Now that type of architecture gives us the ability to converge the traditional systems of record, and there are a lot of them obviously, and the systems of engagement and the the real-time analytics for the first time. >> But the focal point of that convergence is not the licensing of the software, the focal point is convergence around the data. >> The data. >> But that has some pretty significant implications when we think about how software has always been sold, how organizations to run software have been structured, the way that funding is set up within businesses. So George, what does it mean to talk about converging software around data from a practical standpoint over the next few years? >> Okay, so let me take that and interpret that as converging the software around data in the context of adding intelligence to our existing application portfolio and then the new applications that follow on. And basically, when we want to inject an intelligence enough to inform and anticipate and inform interactions or inform or automate transactions, we have a bunch of steps that need to get done. Where we're ingesting essentially contextual or ambient information. Often this is information about a user or the business process. And this data, it's got to go through a pipeline where there's both a Design Time and a Run Time. In addition to ingesting it, you have to sort of enrich it and make it ready for analysis. Then the analysis has essentially picking out of all that data and calculating the features that you plug into a machine learning model. And then that, produces essentially an inference based on all that data, that says well this is the probable value and it sounds like, sounds like it's in the weeds but the point is it's actually a standardized set of steps. Then the question is, do you put that all together in one product across that whole pipeline? Can one piece of infrastructure software manage that ? Or do you have a bunch of pieces each handing off to the next? And-- >> Peter: But let me stop you so because I want to make sure that we kind of follow this thread. So we've argued that hardware convergence and the ability to scale the role the data plays or how data is used, is happening and that opens up new opportunities to think about data. Now what we've got is we are centering a lot of the software convergence around the use of data through copies and other types of mechanisms for handling snapshots and whatnot and things like uni grid. What you're, let's start with this. It sounds like what you're saying is we need to think of new classes of investments in technologies that are specifically set up to handling the processing of data in a more distributed application way, right? If I got that right, that's kind of what we mean by pipelines? >> George: Yes. >> Okay, so once we do that, once we establish those conventions, once we establish organizationally institutionally how that's going to work. Now we take the next step of saying, are we going to default to a single set of products or are we going to do best to breed and what kind of convergence are we going to see there? >> And there's no-- >> First of all, have I got that right? >> Yes, but there's no right answer. And I think there's a bunch of variables that we have to play with that depend on who the customer is. For instance, the very largest and most sophisticated tech companies are more comfortable taking multiple pieces each that's very specialized and putting them together in a pipeline. >> Facebook, Yahoo, Google-- >> George: LinkedIn. >> Got it. >> George: Those guys. And the knobs that they're playing with, that everyone's playing with, are three, basically on the software side. There's your latency budget, which is how much time do you have to produce an answer. So that drives the transaction or the interaction. And it's not, that itself is not just a single answer because... It's not, the goal isn't to get it as short as possible. The goal is to get as much information into the analysis within the budgeted latency. >> Peter: So it's packing the latency budget with data? >> George: Yes, because the more data that goes into making the inference, the better the inference. >> Got it. >> The example that someone used actually on Fareed Zakaria GPS, one show about it was, if he had 300 attributes describing a person he could know more about that person then that person did (laughs) in terms of inferring other attributes. So the the point is, once you've got your latency budget, the other two knobs that you can play with are development complexity and admin complexity. And the idea is on development complexity, there's a bunch of abstractions that you have to deal with. If it's all one product you're going to have one data model, one address and namespace convention, one programming model, one way of persisting data, a whole bunch of things. That's simplicity. And that makes it more accessible to mainstream organizations. Similarly there's a bunch of, let me just add that, there's probably two or three times as many constructs that admins would have to deal with. So again, if you're dealing with one product, it's a huge burden off the admin and we know they struggled with Hadoop. >> So convergence, decisions about how to enact convergence is going to be partly or strongly influenced by those three issues. Latency budget, development complexity or simplicity, and administrative, David-- >> I'd like to add one more to that, and that is location of data. Because you want to be able to, you want to be able to look at the data that is most relevant to solving that particular problem. Now, today a lot of the data is inside the enterprise. There's a lot of data outside that but they're still, you will want to, in the best possible way, combine that data one way or another. >> But isn't that a variable on the latency budget? >> David: Well there's, I would think it's very useful to split the latency budget, which is to do with inference mainly, and development with the machine learning. So there is a development cycle with machine learning that is much longer. That is days, could be weeks, could be months. >> I would still done in Bash. >> It is or will be done, wait a second. It will be done in Bash, it is done in Bash, and it's. You need to test it and then deliver it as an inference engine to the applications that you're talking about. Now that's going to be very close together, that inference, then the rest of it has to be all physically very close together. But the data itself is spread out and you want to have mechanisms that can combine those datas, move application to those datas, bring those together in the best possible way. That is still a Bash process. That can run where the data is, in the cloud locally, wherever it is. >> George: And I think you brought up a great point, which I would tend to include in latency budget because... no matter what kind of answers you're looking for, some of the attributes are going to be pre computed and those could be-- >> David: Absolutely. >> External data. >> David: Yes. >> And you're not going to calculate everything in real time, there's just-- >> You can't. >> Yes you can't. >> But is the practical reality that the convergence of, so again, the argument. We've got all these new problems, all new kinds of new people that are claiming that they know how to solve the problems, each of them choosing different classes of tools to solve the problem, an explosion across the board in the approaches, which can lead to enormous downstream integration and complexity costs. You've used the example of Cloudera, for example. Some of the distro companies who claim that 50 plus percent of their development budget is dedicated to just integrating these pieces. That's a non-starter for a lot of enterprises. Are we fundamentally saying that the degree of complexity or the degree of simplicity and convergence, it's possible in software, is tied to the degree of convergence in the data? >> You're honing in on something really important, give me-- >> Peter: Thank you! (laughs) >> George: Give an example of the convergence of data that you're talking about. >> Peter: I'll let David do it because I think he's going to jump on it. >> David: Yes so let me take examples, for example. If you have a small business, there's no way that you want to invest yourself in any of the normal levels of machine learning and applications like that. You want to outsource that. So big software companies are going to do that for you and they're going to do it especially for the specific business processes which are unique to them, which give them digital differentiation of some sort or another. So for all of those type of things, software will come in from vendors, from SAP or son of SAP, which will help you solve those problems. And having data brokers which are collecting the data, putting them together, helping you with that. That seems to me the way things are going. In the same way that there's a lot of inference engines which will be out at the IOT level. Those will have very rapid analytics given to them. Again, not by yourself but by companies that specialize in facial recognition or specialize in making warehouse-- >> Wait a minute, are you saying that my customers aren't special, that require special facial recognition? (laughs) So I agree with David but I want to come back to this notion because-- >> David: The point I was getting at is, there's going to be lots and lots of room for software to be developed, to help in specific cases. >> Peter: And large markets to sell that software into. >> Very large markets. >> Whether it's a software, but increasingly also with services. But I want to come back to this notion of convergence because we talked about hardware convergence and we're starting to talk about the practical limits on software convergence. But somewhere in between I would argue, and I think you guys would agree, that really the catalyst for, or the thing that's going to determine the rate of change and the degree of convergence is going to be how we deal with data. Now you've done a lot of research on this, I'm going to put something out there and you tell me if I'm wrong. But at the end of the day, when we start thinking about uni grid, when we start thinking about some of these new technologies, and the ability to have single copies or single sources of data, multiple copies, in many respects what we're talking about is the virtualization of data without loss. >> David: Yes. >> Not loss of the characters, the fidelity of the data, or the state of the data. I got that right? >> Knowing the state of the data. >> Peter: Or knowing state of the data. >> If you take a snapshot, that's a point in time, you know what that point of time is, and you can do a lot of analytics for example on, and you want to do them on a certain time of day or whatever-- >> Peter: So is it wrong to say that we're seeing, we've moved through the virtualization of hardware and we're now in a hyper scale or hyper-converged, which is very powerful stuff. We're seeing this explosion in the amount of software that's being you know, the way we approach problems and whatnot. But that a forcing function, something that's going to both constrain how converged that can be, but also force or catalyze some convergence, is the idea that we're moving into an era where we can start to think about virtualized data through some of these distributed file systems-- >> David: That's right, and the metadata that goes with it. The most important thing about the data is, and it's increasing much more rapidly than data itself, is the metadata around it. But I want to just, make one point on this, all data isn't useful. There's a huge amount of data that we capture that we're just going to have to throw away. The idea that we can look at every piece of data for every decision is patently false. There's a lovely example of this in... fluid mechanics. >> Peter: Fluid dynamics. >> David: Fluid dynamics, if you're trying to, if you're trying to have simulation at a very very low level, the amount of-- >> Peter: High fidelity. >> High fidelity, you run out of capacity very very very quickly indeed. So you have to make trade-offs about everything and all of that data that you're doing in that simulation, you're not going to keep that. All the data from IOT, you can't keep that. >> Peter: And that's not just a statement about the performance or the power or the capabilities of the hardware, there's some physical realities-- >> David: Absolutely, yes. >> That are going to limit what you can do with the simulation. But, and we've talked. We've talked about this in other action items, There is this notion of options on data value, where the value of today's data is maybe-- >> David: Is much higher. >> Peter: Well it's higher from at a time standpoint for the problems that we understand and are trying to solve now but there may be future problems where we still want to ensure that we have some degree of data where we can be better at attending those future problems. But I want to come back to this point because in all honesty, I haven't heard anybody else talking about this and maybe's because I'm not listening. But this notion of again, your research that the notion of virtualized data inside these new architectures being a catalyst for a simplification of a lot of the sharing subsystem. >> David: It's essentially sharing of data. So instead of having the traditional way of doing it within a data center, which is I have my systems of record, I make a copy, it gets delivered to the data warehouse, for example. That's the way that's being done. That is too slow, moving data is incredibly slow. So another way of doing it is to share that data, make a virtual copy of it, and technologies allowing you to do that because the access density has gone up by thousands of times-- >> Peter: Because? >> Because. (laughs) Because of flash, because of new technologies at that level, >> Peter: High performance interfaces, high performance networks. >> David: All of that stuff is now allowing things, which just couldn't be even conceived. However, there is still a constraint there. It may be a thousand times bigger but there is still an absolute constraint to the amount of data that you can actually process. >> And that constraint is provided by latency. >> Latency. >> Peter: Speed of light. >> Speed of light and speed of the processes themselves. >> George: Let me add something that may help explain the sort of the virtualization of data and how it ties into the convergence or non convergence of the software around it. Which is, when we're building these analytic pipelines, essentially we've disassembled what used to be a DBMS. And so out of that we've got a storage engine, we've got query optimizers, we've got data manipulation languages which have grown into full-blown analytic languages, data definition language. Now the system catalog used to be just, a way to virtualize all the tables in the database and tell you where all the stuff was, and the indexes and things like that. Now, what we're seeing is since data is now spread out over so many places and products, we're seeing an emergence of a new of catalog. Whether that's from Elation or Dremio or on AWS, it's the Glue catalog, and I think there's something equivalent coming on Asure. But the point is, we're beginning, those are beginning to get useful enough to be the entry point for analytic products and maybe eventually even for transactional products to update, or at least to analyze the data in these pipelines that we're putting together out of these components of what was a disassembled database. Now, we could be-- >> I would make a difference there there between the development of analytics and again, the real-time use of those analytics within systems of intelligence. >> George: Yeah but when you're using them-- >> David: There's a different, problems they have to solve. >> George: But there's a Design Time and a Run Time, there's actually four pipelines for the sort of analytic pipeline itself. There's Design Time and Run Time, and then for the inference engine and the modeling that goes behind it, there's also a Design Time and Run Time. But I guess where. I'm not disagreeing that you could have one converged product to manage the Run Time analytic pipeline. I'm just saying that the pieces that you assemble could come from one vendor. >> Yeah but I think David's point, I think it's accurate and this has been since the beginning of time. (laughs) Certainly predated UNIVAC. That at the end of the day, read/write ratios and the characteristics of the data are going to have an enormous impact on the choices that you make. And high write to read ratios almost dictate the degree of convergence, and we used to call that SMP, or you know scale-up database managers. And for those types of applications, with those types of workloads, it's not necessarily obvious that that's going to change. Now we can still find ways to relax that but you're talking about, George, the new characteristics >> Injecting the analytics. >> Injecting the analytics where we're doing more reading as opposed to writing. We may still be writing into an application that has these characteristics-- >> That's a small amount of data. >> But a significant portion of the new function is associated with these new pipelines. >> Right. And it's actually... what data you create is generally derived data. So you're not stepping on something that's already there. >> All right, so let me get some action items here. David, I want to start with you. What's the action item? >> David: So for me, about conversions, there's two levels of conversions. First of all, converge as much as possible and give the work to the vendor, would be my action item. The more that you can go full stack, the more that you can get the software services from a single point, single throat to choke, single hand to shake, the more you have out source your problems to them. >> Peter: And that has a speed implication, time to value. >> Time to value, it has a, you don't have to do undifferentiated work. So that's the first level of convergence and then the second level of convergence is to look hard about how you can bring additional value to your existing systems of record by putting in automation or a real-time analytics. Which leads to automation, that is the second one, for me, where the money is. Automation, reduction in the number of things that people have to do. >> Peter: George, action item. >> So my action item is that you have to evaluate, you the customer have to evaluate sort of your skills as much as your existing application portfolio. And if more of your greenfield apps can start in the cloud and you're not religious about open source but you're more religious about the admin burden and development burden and your latency budget, then start focusing on the services that the cloud vendors originally created that were standalone, but they are increasingly integrating because the customers are leading them there. And then for those customers who you know, have decades and decades of infrastructure and applications on Prem and need a pathway to the cloud, some of the vendors formerly known as Hadoop vendors. But for that matter, any on Prem software vendor is providing customers a way to run workloads in a hybrid environment or to migrate data across platforms. >> All right, so let me give this a final action item here. Thank you David Foyer, George Gilbert. Neil Raiden and Jim Kobielus and the rest of the Wikibon team is with customers today. We talked today about convergence at the software level. What we've observed over the course of the last few years is an expanding array of software technologies, specifically AI, big data, machine learning, etc. That are allowing enterprises to think differently about the types of problems that they can solve with technology. That's leading to an explosion and a number of problems that folks are looking at, the number of individuals participating in making those decisions and thinking those issues through. And very importantly, an explosion of the number of vendors with piecemeal solutions about what they regard, their best approach to doing things. However, that is going to have a significant burden that could have enormous implications for years and so the question is, will we see a degree of convergence in the approach to doing software, in the form of pipelines and applications and whatnot, driven by a combination of: what the hardware is capable of doing, what the skills are or make possible, and very importantly, the natural attributes of the data. And we think that there will be. There will always be tension in the model if you try to invent new software but one of the factors that's going to bring it all back to a degree of simplicity, will be a combination of what the hardware can do, what people can do, and what the data can do. And so we believe, pretty strongly, that ultimately the issues surrounding data whether it be latency or location, as well as the development complexity and administrative complexity, are going to be a range of factors that are going to dictate ultimately of how some of these solutions start to converge and simplify within enterprises. As we look forward, our expectation is that we're going to see an enormous net new investment over the next few years in pipelines, because pipelines are a first-level set of investments on how we're going to handle data within the enterprise. And they'll look like, in certain respects, how DBMS used to look but just in a disaggregated way but conceptually and administratively and then from a product selection and service election standpoint, the expectation is that they themselves have to come together so the developers can have a consistent view of the data that's going to run inside the enterprise. Want to thank David Floyer, want to thank George Gilbert. Once again, this has been Wikibon Action Item and we look forward to seeing you on our next Action Item. (electronic music)

Published Date : Feb 16 2018

SUMMARY :

in the number of approaches to how we're going the sort of the opportunity to lead And the question is now, how are we going to do this? has been extent in the hardware and the services, and now have come to enterprise as well. of those converged systems. David: Absolutely, and in the future is going to bring hyper-converged in the forms of virtualization of that, and the the real-time analytics for the first time. the licensing of the software, the way that funding is set up within businesses. the features that you plug into a machine learning model. and the ability to scale how that's going to work. that we have to play with that It's not, the goal isn't to get it as short as possible. George: Yes, because the more data that goes the other two knobs that you can play with is going to be partly or strongly that is most relevant to solving that particular problem. to split the latency budget, that inference, then the rest of it has to be all some of the attributes are going to be pre computed But is the practical reality that the convergence of, George: Give an example of the convergence of data because I think he's going to jump on it. in any of the normal levels of there's going to be lots and lots of room for and the ability to have single copies Not loss of the characters, the fidelity of the data, the way we approach problems and whatnot. David: That's right, and the metadata that goes with it. and all of that data that you're doing in that simulation, That are going to limit what you can for the problems that we understand So instead of having the traditional way of doing it Because of flash, because of new technologies at that level, Peter: High performance interfaces, to the amount of data that you can actually process. and the indexes and things like that. the development of analytics and again, I'm just saying that the pieces that you assemble on the choices that you make. Injecting the analytics where we're doing But a significant portion of the new function is what data you create is generally derived data. What's the action item? the more that you can get the software services So that's the first level of convergence and applications on Prem and need a pathway to the cloud, of convergence in the approach to doing software,

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
DavidPERSON

0.99+

David FloyerPERSON

0.99+

GeorgePERSON

0.99+

Peter BurrisPERSON

0.99+

Jim KobielusPERSON

0.99+

George GilbertPERSON

0.99+

PeterPERSON

0.99+

David FoyerPERSON

0.99+

George GilberPERSON

0.99+

Feb 2018DATE

0.99+

YahooORGANIZATION

0.99+

Neil RaidenPERSON

0.99+

twoQUANTITY

0.99+

FacebookORGANIZATION

0.99+

GoogleORGANIZATION

0.99+

LinkedInORGANIZATION

0.99+

300 attributesQUANTITY

0.99+

BashTITLE

0.99+

threeQUANTITY

0.99+

second levelQUANTITY

0.99+

WikibonORGANIZATION

0.99+

two knobsQUANTITY

0.99+

todayDATE

0.99+

two levelsQUANTITY

0.99+

SAPORGANIZATION

0.99+

AWSORGANIZATION

0.99+

oneQUANTITY

0.99+

first levelQUANTITY

0.99+

eachQUANTITY

0.98+

three issuesQUANTITY

0.98+

FirstQUANTITY

0.98+

first timeQUANTITY

0.98+

one pointQUANTITY

0.98+

one productQUANTITY

0.98+

bothQUANTITY

0.98+

UNIVACORGANIZATION

0.98+

50 plus percentQUANTITY

0.98+

decadesQUANTITY

0.98+

second oneQUANTITY

0.98+

single pointQUANTITY

0.97+

three timesQUANTITY

0.97+

one wayQUANTITY

0.97+

Wikibon Research Meeting | October 20, 2017


 

(electronic music) >> Hi, I'm Peter Burris and welcome once again to Wikibon's weekly research meeting from the CUBE studios in Palo Alto, California. This week we're going to build upon a conversation we had last week about the idea of different data shapes or data tiers. For those of you who watched last week's meeting, we discussed the idea that data across very complex distributed systems featuring significant amounts of work associated with the edge are going to fall into three classifications or tiers. At the primary tier, this is where the sensor data that's providing direct and specific experience about the things that the sensors are indicating, that data will then signal work or expectations or decisions to a secondary tier that aggregates it. So what is the sensor saying? And then the gateways will provide a modeling capacity, a decision making capacity, but also a signal to tertiary tiers that increasingly look across a system wide perspective on how the overall aggregate system's performing. So very, very local to the edge, gateway at the level of multiple edge devices inside a single business event, and then up to a system wide perspective on how all those business events aggregate and come together. Now what we want to do this week is we want to translate that into what it means for some of the new technologies, new analytics technologies that are going to provide much of the intelligence against each of this data. As you can imagine, the characteristics of the data is going to have an impact on the characteristics of the machine intelligence that we can expect to employ. So that's what we want to talk about this week. So Jim Kobielus, with that as a backdrop, why don't you start us off? What are we actually thinking about when we think about machine intelligence at the edge? >> Yeah, Peter, we at the edge, the edge of body, the device be in the primary tier that acquires fresh environmental data through its sensors, what happens at the edge? In the extreme model, we think about autonomous engines, let me just go there just very briefly, basically, it's a number of workloads that take place at the edge, the data workloads. The data is (mumbles) or ingested, it may be persisted locally, and that data then drives local inferences that might be using deep layer machine learning chipsets that are embedded in that device. It might also trigger various tools called actuations. Things, actions are taken at the edge. If it's the self-driving vehicle for example, an action may be to steer the car or brake the car or turn on the air conditioning or whatever it might be. And then last but not least, there might be some degree of adaptive learning or training of those algorithms at the edge, or the training might be handled more often up at the second or tertiary tier. The tertiary tier at the cloud level, which has visibility usually across a broad range of edge devices and is ingesting data that is originated from all of the many different edge devices and is the focus of modeling, of training, of the whole DevOps process, where teams of skilled professionals make sure that the models are trained to a point where they are highly effective for their intended purposes. Then those models are sent right back down to the secondary and the primary tiers, where act out inferences are made, you know, 24 by seven, based on those latest and greatest models. That's the broad framework in terms of the workloads that take place in this fabric. >> So Neil, let me talk to you, because we want to make sure that we don't confuse the nature of the data and the nature of the devices, which may be driven by economics or physics or even preferences inside of business. There is a distinction that we have to always keep track of, that some of this may go up to the Cloud, some of it may stay local. What are some of the elements that are going to indicate what types of actual physical architectures or physical infrastructures will be built out as we start to find ways to take advantage of this very worthwhile and valuable data that's going to be created across all of these different tiers? >> Well first of all, we have a long way to go with sensor technology and capability. So when we talk about sensors, we really have to define classes of sensors and what they do. However, I really believe that we'll begin to think in a way that approximates human intelligence, about the same time as airplanes start to flap their wings. (Peter laughs) So, I think, let's have our expectations and our models reflect that, so that they're useful, instead of being, you know hypothetical. >> That's a great point Neil. In fact, I'm glad you said that, because I strongly agree with you. But having said that, the sensors are going to go a long ways, when we... but there is a distinction that needs to be made. I mean, it may be that that some point in time, a lot of data moves up to a gateway, or a lot of data moves up to the Cloud. It may be that a given application demands it. It may be that the data that's being generated at the edge may have a lot of other useful applications we haven't anticipated. So we don't want to presume that there's going to be some hard wiring of infrastructure today. We do want to presume that we better understand the characteristics of the data that's being created and operated on, today. Does that make sense to you? >> Well, there's a lot of data, and we're just going to have to find a way to not touch it or handle it any more times than we have to. We can't be shifting it around from place to place, because it's too much. But I think the market is going to define a lot of that for us. >> So George, if we think about the natural place where the data may reside, the processes may reside, give us a sense of what kinds of machine learning technologies or machine intelligence technologies are likely to be especially attractive at the edge, dealing with this primary information. Okay, I think that's actually a softball which is, we've talked before about bandwidth and latency limitations, meaning we're going to have to do automated decisioning at the edge, because it's got to be fast, low latency. We can't move all the data up to the Cloud for bandwidth limitations. But, by contrast, so that's data intensive and it's fast, but up in the cloud, where we enhance our models, either continual learning of the existing ones or rethinking them entirely, that's actually augmented decisions, and augmented means it's augmenting a human in the process, where, most likely, a human is adding additional contextual data, performing simulations, and optimizing the model for different outcomes or enriching the model. >> It may in fact be a crucial element or crucial feature of the training by in fact, validating that the action taken by the system was appropriate. >> Yes, and I would add to that, actually, that you might, you used an analogy, people are going from two extremes where they say, some people say, "Okay, so all the analytics has to be done in the cloud," Wikibon and David Floyer, and Jim Kovielus have been pioneering the notion that we have to do a lot more at the client. But you might look back at client server computing where the client was focused on presentation, the server was focused on data integrity. Similarly, here, the edge or client is going to be focused on fast inferencing and the server is going to do many of the things that were associated with a DBMS and data integrity in terms of reproducibility, of decisions in the model for auditing, security, versioning, orchestration in terms of distributing updated models. So we're going to see the roles of the edge and the cloud rhyme with what we saw in server. Neither one goes away, they augment each other. >> So, Jim Kovielus, one of the key issues there is going to be the gateway, and the role that the gateway plays, and specifically here, we talked about the nature of again, the machine intelligence that's going to be operating more on the gateway. What are some of the characteristics of the work that's going to be performed at the gateway that kind of has oversight of groupings or collections of sensor and actuator devices? >> Right, good question. So the perfect example that everybody's familiar with now about a gateway in this environment, a smart home hub. A smart home hub, just for the sake of discussion, has visibility across two or more edge devices. It could be a smart speaker, could be the HVAC system is sensor equipped and so forth, what it does, the pool it performs, a smart hub of any sort, is that it acquires data from the edge devices, the edge devices might report all of their data directly to the hub, or the sensor devices might also do inferences and then pass on the results of the inferences it has given to the hub, regardless. What the hub does is A, it aggregates the data across those different edge devices over which it has this ability and control, B, it may perform it's own inferences based on models that look out across an entire home in terms of patterns of activity. Then it might take the hub, various actions autonomous by itself, without consulting an end user or anything else. It might take action in terms of beef up the security, adjust the HVAC, it adjusts the light in the house or whatever it might be, based on all that information streaming in real time. Possibly, its algorithms will allow you to determine what of that data shows an anomalous condition that deviates from historical patterns. Those kinds of determinations, whether it's anomalous or a usual pattern, are often taken at the hub level, 'cause it's maintaining sort of a homeostatic environment, as it were, within its own domain, and that hub might also communicate up the stream, to a tertiary tier that has oversight, let's say, of a smart city environment, where everybody in that city or whatever, might have a connection into some broader system that say, regulates utility usage across the entire region to avoid brownouts and that kind of thing. So that gives you an idea of what the role of a hub is in this kind of environment. It's really a controller. >> So, Neil, if we think about some of the issues that people really have to consider as they start to architect what some of these systems are going to look like, we need to factor both what is the data doing now, but also ensure that we build into the entire system enough of a buffer so that we can anticipate and take advantage of future ways of using that data. Where do we draw that fine line between we only need this data for this purpose now and geez, let's ensure that we keep our options open so that we can use as much data as we want at some point in time in the future? >> Well, that's a hard question, Peter, but I would say that if it turns out that this detailed data coming from sensors, that the historical aspect of it isn't really that important. If the things you might be using that data for are more current, then you probably don't need to capture all that. On the other hand, there have been many, many occasions historically, where data has been used other than its original purpose. My favorite example was scanners in grocery stores, where it was meant to improve the checkout process, not have to put price stickers on everything, manage inventory and so forth. It turned out that some smart people like IRI and some other companies said, "We'll buy that data from you, "and we're going to sell it to advertisers," and all sorts of things. We don't know the value of this data yet, it's too new. So I would err on the side of being conservative and capturing and saving as much as I could. >> So what we need to do is, we need to marry or we need to do an optimization of some form about how much is it going to cost to transmit the data versus what kind of future value or what kinds of options of future value might there be on that data. That is, as you said, a hard problem, but we can start to conceive of an approach to characterizing that ratio, can't we? >> I hope so. I know that, personally, when I download 10 gigabytes of data, I pay for 10 gigabytes of data, and it doesn't matter if it came from a mile away or 10,000 miles away. So there has to be adjustments for that. There's also ways of compressing data because this sensor data I'm sure is going to be fairly sparse, can be compressed, is redundant, you can do things like RLL encoding, which takes all the zeroes out and that sort of thing. There are going to be a million practices that we'll figure out. >> So as we imagine ourselves in this schemata of edge, hub, tertiary or primary, secondary and tertiary data and we start to envision the role that data's going to play and how we conduct or how we build these architectures and these infrastructures, it does raise an interesting question, and that is, from an economic standpoint, what do we anticipate is going to be the classes of devices that are going to exploit this data? David Foyer who's not here today, hope you're feeling better David, has argued pretty forcibly, that over the next few years we'll see a lot of advances made in microprocessor technology. Jim, I know you've been thinking about this a fair amount. What types of function >> Jim: Right. >> might we actually see being embedded in some of these chips that software developers are going to utilize to actually build some of these more complex and interesting systems? >> Yeah, first of all, one of the trends we're seeing in the chipset market for deep learning, just to be there for a moment, is that deep learning chipsets traditionally, when I say traditionally, the last several years the market has been dominated by GP's graphic processing unit. Invidia of course, is the primary provider of those. Of course, Invidia has been along around for a long time as a gaming solution provider. Now, what's happening with GPU technology, in fact, the latest generation of Invidia's architecture shows where it's going. The thing that is more deep learning optimized capabilities at the chipset level. They're called tensor processing, and I don't want to bore you with all the technical details, but the whole notion of-- >> Peter: Oh, no, Jim, do bore us. What is it? (Jim laughs) >> Basically deep learning is based on doing high speed, fast matrix map. So fundamentally, tensor cores do high velocity fast matrix math, and the industry as a whole is moving toward embedding more tensor cores directly into the chipset, higher density of tensor core. Invidia in its latest generation of chip has done that. They haven't totally taken out the gaming oriented GPU capabilities, but there are competitors and they have a growing list, more than a dozen competitors on the chipset side now. We're all going down a road of embedding far more technical processing units into every chip. Google is well known for something called GPU tensor processing units, their chip architecture. But they're one of many vendors that are going down that road. The bottom line is the chipset itself is becoming authenticated and being optimized for the core function that CPU and really GPU technology and even ASIX and FPGAs were not traditionally geared to do, which is just deep learning at a high speed, many cores, to do things like face recognition and video and voice recognition freakishly fast, and really, that's where the market is going in terms of enabling underlying chipset technology. What we're seeing is that, what's likely to happen in the chipsets of the year 2020 and beyond, they'll be predominantly tensor core processing units, But they'll be systemed on a chip that, and I'm just talking about future, not saying it's here now, systems on a chip that include some, a CPU, to managing real time OS, like a real time Linux or what not, and with highly dense tensor core processing unit. And in this capability, these'll be low power chips, and low cost commodity chips that'll be embedded in everything. Everything from your smart phone, to your smart appliances in your home, to your smart cars and so forth. Everything will have these commodity chips. 'Cause suddenly every edge device, everything will be an edge device, and will be able to provide more than augmentation, automation, all these things we've been talking about, in ways that are not necessarily autonomous, but can operate with a great degree of autonomy to help us human beings to live our lives in an environmentally contextual way at all points in time. >> Alright, Jim, let me cut you off there, because you said something interesting, a lot more autonomy. George, what does it mean, that we're going to dramatically expand the number of devices that we're using, but not expand the number of people that are going to be in place to manage those devices. When we think about applying software technologies to these different classes of data, we also have to figure out how we're going to manage those devices and that data. What are we looking at from an overall IT operations management approach to handling a geometrically greater increase in the number of devices and the amount of data that's being generated? (Jim starts speaking) >> Peter: Hold on, hold on, George? >> There's a couple dimensions to that. Let me start at the modeling side, which is, we need to make data scientists more productive or we need to push out to a greater, we need to democratize the ability to build models, and again, going back to the notion of simulation, there's this merging of machine learning and simulation where machine learning tells you correlations in factors that influence an answer. Whereas, the simulation actually lets you play around with those correlations, to find the causations, and by merging them, we make it much, much more productive to find the models that are both accurate and to optimize them for different outcomes. >> So that's the modeling issue. >> Yes. >> When we think about after we, which is great. Now as we think about some of the data management elements, what are we looking at from a data management standpoint? >> Well, and this is something Jim has talked about, but, you know we had DevOps for joining the, essentially merging the skills of the developers with the operations folks, so that there's joint responsibility of keeping stuff live. >> Well what about things like digital twins, automated processes, we've talked a little it about breadth versus depth, ITOM, What do you think? Are we going to build out, are all these devices going to reveal themselves, or are we going to have to put in place a capacity for handling all of these things in some consistent, coherent way? >> Oh, okay, in terms of managing. >> In terms of managing. >> Okay. So, digital twins were interesting because they pioneered or they made well known a concept called essentially, a symmetric network, or a knowledge graph, which is just a way of abstracting what is a whole bunch of data models and machine learning models that represents the structure and behavior of a device. In IIoT terminology, it was like an industrial device, like a jet engine. But that same construct, the knowledge graph and the digital twin, can be used to describe the application software and the infrastructure, both middleware and hardware, that makes up this increasingly sophisticated network of learning and inferencing applications. And the reason this is important, it sounds arcane, the reason it's important is we're building now vastly more sophisticated applications over great distances, and the only way we can manage them is to make the administrators far more productive. The state of the art today is, alerts on the performance of the applications, and alerts on the, essentially, the resource intensity of the infrastructure. By combining that type of monitoring with the digital twin, we can get a, essentially much higher fidelity reading on when something goes wrong. We don't get false positives. In other words, you don't have, if something goes wrong, it's like the fairy tale of the pea underneath the mattress, all the way up, 10 mattresses, you know it's uncomfortable. Here, it'll pinpoint exactly what gets wrong, rather than cascading all sorts of alerts, and that is the key to productivity in managing this new infrastructure. >> Alright guys, so let's go into the action item around here. What I'd like to do now is ask each of you for the action item that you think users are going to have to apply or employ to actually get some value, and start down this path of utilizing machine intelligence across these different tiers of data to build more complex, manageable application infrastructures. So, Jim, I'd like to start with you, what's your action item? >> My action item is related what George just said, modeled centrally, deployed in a decentralized fashion, machine learning, and use digital twin technology to do your modeling against device classes, in a more coherent way. There's not one model that won't fit all of the devices. Use digital twin technology to structure the modeling process to be able to tune a model to each class of device out there. >> George, action item. >> Okay, recognize that there's a big difference between edge and cloud, as Jim said. But I would elaborate, edge is automated, low latency decision making, extremely data intensive. Recognize that the cloud is not just where you trickle up a little bit of data, this is where you're going to use simulations, with a human in the loop, to augment-- >> System wide, system wide. >> System wide, with a human in the loop to augment how you evaluate new models. >> Excellent. Neil, action item. >> I would have people start on the right side of the diagram and start to think about what their strategy is and where they fit into these technologies. Be realistic about what they think they can accomplish and do the homework. >> Alright, great. So let me summarize our meeting this week. This week we talked about the role that the three tiers of data that we've described will play in the use of machine intelligence technologies as we build increasingly complex and sophisticated applications. We've talked about the difference between primary, secondary, and tertiary data. Primary data being the immediate experience of sensors. Analog being translated into digital, about a particular thing or set of things. Secondary being the data that is then aggregated off of those sensors for business event purposes, so that we can make a business decision, often automatically down at an edge scenario, as a consequence of signals that we're getting from multiple sensors. And then finally, tertiary data, that looks at a range of gateways and a range of systems, and is considering things at a system wide level, for modeling, simulation and integration purposes. Now, what's important about this is that it's not just better understanding the data and not just understanding the classes of technologies that we used, that will remain important. For example, we'll see increasingly powerful low cost device specific arm like processors pushed into the edge. And a lot of competition at the gateway, or at the secondary data tier. It's also important, however to think about the nature of the allocations and where the work is going to be performed across those different classifications. Especially as we think about machine learning, machine etiologies and deep learning. Our expectation is that we will see machine learning being used on all three levels, Where machine etiology is being used on against all forms of data to perform a variety of different work, but that the work that will be performed will be a... Will be naturally associated and related to the characteristics of the data that's being aggregated at that point. In other words, we won't see simulations, which are characteristics of tertiary data, George, at the edge itself. We will however, see edge devices often reduce significant amounts of data from a perhaps a video camera or something else to make relatively simple decisions that may involve complex technologies to allow a person into a building, for example. So our expectation is that over the next five years we're going to see significant new approaches to applying increasingly complex machine etiologies technologies across all different classes of data, but we're going to see them applied in ways that fit the patterns associated with that data, because it's the patterns that drive the applications. So our overall action item, it's absolutely essential that businesses that considering and conceptualizing what machine intelligence can do, but be careful about drawing huge generalizations about what the future machine intelligence is. The first step is to parse out the characteristics of the data driven by the devices that are going to generate it and the applications that are going to use it, and understand the relationship between the characteristics of that data and the types of machine intelligence work that can be performed. What is likely, is that an impedance mismatch between data and expectations of machine intelligence will generate a significant number of failures that often will put businesses back years in taking full advantage of some of these rich technologies. So, once again we want to thank you this week for joining us here on the Wikibon weekly research meeting. I want to thank George Gilbert who is here CUBE Studio in Palo Alto, and Jim Kobielus and Neil Raden who were both on the phone. And we want to thank you very much for joining us here today, and we look forward to talking to you again in the future. So this is Peter Burris, from the CUBE's Palo Alto Studio. Thanks again for watching Wikibon's weekly research meeting. (electronic music)

Published Date : Oct 20 2017

SUMMARY :

the characteristics of the data is going to have an impact that take place at the edge, the data workloads. that are going to indicate what types about the same time as airplanes start to flap their wings. It may be that the data that's being generated at the edge to not touch it or handle it any more times than we have to. and optimizing the model for different outcomes or crucial feature of the training and the server is going to do many of the things and the role that the gateway plays, is that it acquires data from the edge devices, and geez, let's ensure that we keep our options open that the historical aspect of it or we need to do an optimization of some form So there has to be adjustments for that. has argued pretty forcibly, that over the next few years in fact, the latest generation of Invidia's architecture What is it? in the chipsets of the year 2020 and beyond, that are going to be in place to manage those devices. that are both accurate and to optimize them Now as we think about some of the data management elements, essentially merging the skills of the developers and that is the key to productivity in managing the action item that you think to structure the modeling process to be able to tune a model Recognize that the cloud is not just where you trickle up to augment how you evaluate new models. Neil, action item. and do the homework. So our expectation is that over the next five years

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
JimPERSON

0.99+

George GilbertPERSON

0.99+

Jim KobielusPERSON

0.99+

Peter BurrisPERSON

0.99+

NeilPERSON

0.99+

GeorgePERSON

0.99+

Neil RadenPERSON

0.99+

PeterPERSON

0.99+

David FloyerPERSON

0.99+

DavidPERSON

0.99+

Jim KovielusPERSON

0.99+

David FoyerPERSON

0.99+

October 20, 2017DATE

0.99+

10 gigabytesQUANTITY

0.99+

last weekDATE

0.99+

10 mattressesQUANTITY

0.99+

10,000 milesQUANTITY

0.99+

Palo AltoLOCATION

0.99+

CUBEORGANIZATION

0.99+

This weekDATE

0.99+

InvidiaORGANIZATION

0.99+

WikibonORGANIZATION

0.99+

GoogleORGANIZATION

0.99+

Palo Alto, CaliforniaLOCATION

0.99+

secondQUANTITY

0.99+

two extremesQUANTITY

0.99+

todayDATE

0.99+

twoQUANTITY

0.99+

LinuxTITLE

0.99+

this weekDATE

0.99+

first stepQUANTITY

0.99+

bothQUANTITY

0.98+

one modelQUANTITY

0.98+

each classQUANTITY

0.98+

three tiersQUANTITY

0.98+

eachQUANTITY

0.98+

24QUANTITY

0.98+

oneQUANTITY

0.96+

a mileQUANTITY

0.96+

more than a dozen competitorsQUANTITY

0.95+

IRIORGANIZATION

0.95+

WikibonPERSON

0.94+

sevenQUANTITY

0.94+

firstQUANTITY

0.92+

CUBE StudioORGANIZATION

0.86+

2020DATE

0.85+

couple dimensionsQUANTITY

0.79+

Palo Alto StudioLOCATION

0.78+

single business eventQUANTITY

0.75+

tertiary tierQUANTITY

0.74+

last several yearsDATE

0.71+

yearsDATE

0.7+

twinQUANTITY

0.64+

Wikibon Analyst Meeting | Dell EMC Analyst Summit


 

>> Welcome to another edition of Wikibon's Weekly Research Meeting on theCUBE. (techno music) I'm Peter Burris, and once again I'm joined by, in studio, George Gilbert, David Floyer. On the phone we have Dave Vellante, Stu Miniman, Ralph Finos, and Neil Raden. And this week we're going to be visiting Dell EMC's Analyst Summit. And we thought we'd take some time today to go deeper into the transition that Dell and EMC have been on in the past few years, touching upon some of the value that they've been creating for customers and addressing some of the things that we think they're going to have to do to continue on the path that they're on and continue to deliver value to the marketplace. Now, to look back over the course of the past year, it was about a year ago that the transaction actually closed. And in the ensuing year, there's been a fair amount of change. We've seen some interesting moves by Dell to bring the companies together, a fair amount of conversation about how bigger is better. And at the most recent VMworld, we saw a lot of great news of VMworld, VMware in particular working more closely with Amazon and others, or AWS and others. So we've seen some very positive things happen in the course of the past year. But there are still some crucial questions that are addressed. And to kick us off, Dave Vellante, where are we one year in and what are we expecting to hear this week? >> Dave: And foremost, Michael Dell was trying to transform his company. It wasn't happening fast enough. He had to go private. He wanted to be an enterprise player, and amazingly, he and Silver Lake came up with four billion dollars in cash. And they may very well pull off one of the greatest wealth creation trades in the history of the computer industry because for four billion dollars, they're getting an asset that's worth somewhere north of 50 billion, and they're paying down the debt that they used to lever that acquisition through cash flow. So like I say, for a pittance (laughs) of four billion dollars, they're going to turn that into a lot of dough, tens and tens of billions. If you look at EMC pre the M and A, I'm sorry, if you look at Dell pre M and A, pre-merger, their transformation was largely failing. The company was making a lot of acquisitions but it wasn't able to reshape itself fast enough. If you look at EMC pre-merger, it was a powerhouse, but it was suffering from this decade-long collapse of infrastructure hardware and software pricing, which was very much a drag on growth and cash flow. So the company was forced to find a white knight, which came in the form of Michael Dell. So you had this low gross margin company, Dell's public gross margin before it went private were in the teens. EMC was in the roughly 60%. Merge those together and you get a roughly 30% plus gross margin entity. I don't think they're there yet. I think they got a lot of work to do. So a lot of talk about integration. And there's some familiarity with these two companies because they had a fairly large OEM deal for the better part of a decade in the 90s. But culturally, it's quite different. Dell's a very metrics-driven culture with a lot of financial discipline. EMC's kind of a take the hill, do whatever it takes culture. And they're in the process of bringing those together, and a lot of cuts are taking place. So we want to understand what impacts those will have to customers. The other point I want to make is that without VMware, in my view anyway, the combination of these companies would not be nearly as interesting. In fact, it would be quite boring. So the core of these companies, you know, have faced a lot of challenges. But they do have VMware to leverage. And I think the challenge that customers really need to think about is how does this company continue to innovate now that they can't really do M and A? If you look at EMC, for years, they would spend money on R and D and make incremental improvements to its product lines and then fill the gaps with M and A. And there're many, many examples of that, Isilon, Data Domain, XtremIO, and dozens of others. That kept EMC competitive. So how does Dell continue that strength? It spends about four and a half billion a year on R and D, and according to Wikibon's figures, that's about 6% of revenue. If you compare that with other companies, Oracle, Amazon, they're into the 12%. Google's mid-teens. Microsoft, obviously to 12, 13%. Cisco's up there. EMC itself was spending 12% on R and D. So IBM's only about 6%, but remember IBM, about two thirds of the company is services. It's not R and D heavy. So Dell has got to cut costs. It's a must. And what implications does that have on the service levels that customers have grown to expect, and what's the implications on Dell's roadmap? I think we would posit that a lot of the cash cows are going to get funded in a way that allows them to have a managed decline in that business. And it's likely that customers are going to see reduced roadmap functions going forward. So a key challenge that I see for Dell EMC is growth. The strength is really VMware, and the leverage of the VMware and their own install base I think gives Dell EMC the ability to keep pace with its competitors because it's got kind of the inside baseball there. It's got a little bit of supply chain leverage, and of course its sales force and its channels are a definite advantage for this company. But it's got a lot of weaknesses and challenges. Complexity of the portfolio, it's got a big debt load that hamstrings its ability to do M and A. I think services is actually a big opportunity for this company. Servicing its large install base. And I think the key threat is cloud and China. I think China, with its low-cost structure, made a deal like this inevitable. So I come back to the point of Michael Dell's got to cut in order to stay competitive. >> Peter: Alright, so one of the, sorry- >> Dave: Next week, hear a lot about sort of innovation strategies, which are going to relate to the edge. Dell EMC has not announced an edge strategy. It needs to. It's behind HPE in that regard, one its major competitors. And it's got to get into the game. And it's going to be really interesting to see how they are leveraging data to participate in that IOT business. >> Great summary, Dave. So you mentioned that one of the key challenges that virtually every company faces is how do they reposition themselves in a world in which the infrastructure platform, foundation, is going to be more cloud-oriented. Stu Miniman, why don't you take us through, very quickly, where Dell EMC is relative to the cloud? >> Stu: Yeah, great question, Peter. And just to set that up, it's important to talk about one of the key initiatives from Dell and EMC coming together, one of the synergies that Michael Dell has highlighted is really around the move from converged infrastructure to hyper converged infrastructure. And this is also the foundational layer that Dell EMC uses today for a lot of their cloud solutions. So EMC has done a great job with the first wave of converged infrastructure through partnering with Cisco. They created the Vblock, which is now VxBlock, which is now a multi-billion dollar revenue stream. And Dell did a really good job of jumping on early with the hyper converged infrastructure trend. So I'd written research years ago that not only was it through partnerships but through OEM deals, if you look at most of the solutions that were being sold on the market, the underlying server for them was Dell. And that was even before the EMC acquisition. Once they acquired EMC, they really get kind of control, if you will, of the VMware VSAN business, which is a very significant player. They have an OEM relationship with Nutanix, who's doing quite well in the space, and they put together their own full-stack solution, which takes Dell's hardware, the VMware VSAN, and the go-to-market processes of what used to be VCE, and they put together VxRail, which is doing quite well from a revenue and a growth standpoint. And the reason I set this all up to talk about cloud is that if you look at Dell's positioning, a lot of their cloud starts at that foundational infrastructure level. They have all of these enterprise hybrid clouds and different solutions that they've been offering for a few years. And underneath those, really it is a simplified infrastructure hardware offering. So whether that is the traditional VCE converged infrastructure solutions or the newer hyper converged infrastructure solutions, that's the base level. And then there's software that wraps on top of it. So they've done a decent amount of revenue. The concern I have is, you know, Peter, you laid out, it's very much a software world. We've been talking a lot at Wikibon about the multi-cloud nature of what's going on. And while Dell and the Dell family have a very strong position in the on-premises market, that's really they're center strength, is around hardware and customer and the enterprises data center. And the threat is public cloud and multi-cloud. And if it centers around hardware and especially when you dig down and say, "okay, I want to sell more servers," which is one of the primary drivers that Michael wants to have with his whole family of solutions, how much can you really live across these in various environments? Of course, they have partnerships with Microsoft. There's the VMware partnerships with Amazon, which is interesting, how they even partner with the likes of Google and others, it can be looked at. But from that kind of center strength is on premises and therefore they're not really living heavily in the public and multi-cloud world, unless you look at Pivotal. So Pivotal's a software, and that's where they're going to say that the big push is, but it's these massive shifts of large install base of EMC, Dell, and VMware, compared to the public cloud that are doing the land grabs. So this is where it's really interesting to look at. And the announcement that we're interested to look at is how IOT and edge fits into all of this. So David Foyer and you, Peter, research about how- >> Peter: Yeah, well, we'll get to that. >> Stu: There's a lot of nuance there. >> We'll get to that in a second, Stu. But one of the things I wanted to mention to David Floyer is that certainly in the case of Dell, they have been a major player in the Intel ecosystem. And as we think about what's going to happen over the course of the next couple of years, what's going to happen with Intel? It's going to continue to dominate. And what's that going to mean for Dell? >> Sure, Dell's success, I mean, what Stu has been talking about is the importance of volume for Dell, being a volume player. And obviously when they're looking at Intel, the PC is a declining market, and ARM is doing incredibly well in the mobile and other marketplaces. And Dell's success is essentially tied to Intel. So the question to ask is if Intel starts to lose market share to ARM and maybe even IBM, what is the impact on that on Dell? And in particular, what is the impact on the edge? And so if you look at the edge, there are two primary parts. We put forward there are two parts of the edge. There's the primary data, which is coming from the sensors themselves, from the cameras and other things like that. So there's the primary edge, and there's the secondary edge, which is after that data has been processed. And if you think about the primary edge, AI and DL go to the primary edge because that's where the data is coming in, and you want the highest fidelity of data. So you want to do the processing as close as possible to that. So you're looking at these examples in autonomous cars. You're seeing it in security cameras, that all of that processing is going to much cheaper chips, very, very close to the data itself. What that means is that most of that IOT, or could mean, is that most of that IOT could go to other vendors, other than Intel, to go to the ARM vendors. And if you look at that market, it's going to be very specialized in the particular industry and the particular problem it's trying to solve. So it's likely that non-IT vendors are going to be in that business. And you're likely to be selling to OT and not the IT. So all of those are challenges to Dell in attacking the edge. They can win the secondary edge, which is the compressed data, initially compressing it 1,000 to one, probably going to a million to one compression of the data coming from the sensors to a much higher value data but much, much smaller amounts, both on the compute side and on the storage side. So if that bifurcation happens at the edge, the size of marketplace is going to be very considerably reduced for Intel. And Dell has in my view a strategic decision to make of whether they get into being part of that ARM ecosystem for the edge. There's a strong argument that's saying that they would need to do that. >> And they will be announcing something on Monday, I believe, or next week. We're going to hear a lot about that. But when we think, ultimately, about the software that Dell and EMC are going to have to think about, they're very strong in VMware, which is important, and there's no question that virtual machines will remain important, if not only from an install base standpoint but from, in the future, how the cloud is organized and arranged and managed. Pivotal also is an interesting play, especially as it does a better job of incorporating more of the open source elements that are becoming very attractive to developers. But George, let me ask you a question, ultimately, about where is Dell in some of these more advanced software worlds? When we think about machine learning, when we think about AI, these are not strong markets right now, are not huge markets right now, but they're leading indicators. They're going to provide cues about where the industry's going to go and who's going to get a chance to provide the tooling for them. So what's our take right now, where Dell is, Dell EMC is relative to some of these technologies? >> Okay, so that was a good lead in for my take on all the great research David Floyer's done, which is when we go through big advances in hardware, typically relative price performance changes between CPU, memory, storage, networking. When we see big relative changes between those, then there's an opportunity for the software to be re-architected significantly. So in this case, what we call unigrid, what David's called unigrid previously is the ability to build scale-out, extremely high-performance clusters to the point where we don't have to bottleneck on shared storage like a SAN anymore. In other words, we can treat the private memory for each node as if it were storage, direct-attached storage, but it is now so fast in getting between nodes and to the memory in a node that for all intents and purposes, it can perform as if you had a shared storage small cluster before. Only now this can scale out to hundreds, perhaps thousands, of nodes. The significance of that is we are in an era of big data and big analytics. And so the issue here is can Dell sort of work with the most advanced software vendors who are trying to push the envelope to build much larger-scale data management software than they've been able to. Now, Dell has an upward, sort of an uphill climb to master the cloud vendors. They build their own infrastructure hardware. But they've done pools of GPUs, for instance, to accelerate machine learning training. Dell could work with these data management vendors to get pools of this scale-out hardware in the clouds to take advantage of the NoSQL databases, the NewSQL databases. There's an opportunity to leapfrog. What we found out at Oracle, at their user conference this week was even though they're building similar hardware, their database is not yet ready to take advantage of it. So there is an opportunity for Dell to start making inroads in the cloud where their generic infrastructure wouldn't. Now, one more comment on the edge, I know David was saying on the sort of edge device, that's looking more and more like it doesn't have to be Intel-compatible. But if you go to the edge gateway, the thing that bridges OT and IT, that's probably going to be their best opportunity on the edge. The challenge, though, is it's not clear how easy it will be in a low-touch sort of go-to-market model that Dell is accustomed to because like they discovered in the late 90s, it cost $6,000 per year per PC to support. And no one believed that number until Intel did a study on itself and verified it. The protocols from all the sensors on the OT side are so horribly complex and legacy-oriented that even the big auto manufacturers keep track of the different ones on a spreadsheet. So mapping the IT gateway server to all the OT edge devices may turn out to be horribly complex for a few years. >> Oh, it's not a question of may. It is going to be horribly complex for the next few years. (laughing) I don't think there's any question about that. But look, here's what I want to do. I want to ask one more question. And I'm going to go do a round table and ask everybody to give me what the opportunity is and what the threat is. But before I do that, the one thing we haven't discussed, and Dave Vellante, I'm going to throw it over to you, is we've looked at the past of Dell talks a lot about the advantages of its size and the economies of scale that it gets. And Dell's not in the semiconductor business or at least not in a big way. And that's one place where you absolutely do get economies of scale. They got VMware in the system software business, which is an important point. So there may be some economies there. But in manufacturing and assembly, as you said earlier, Dave, that is all under consideration when we think about where the real cost efficiencies are going to be. One of the key places may be in the overall engagement model. The ability to bring a broad portfolio, package it up, and make it available to a customer with the appropriate set of services, and I think this is why you said services is still an opportunity. But what does it mean to get to the Dell EMC overall engagement model as Dell finds or looks to find ways to cut costs, to continue to pay down its debt and show a better income statement? >> Dave: So let me take the customer view. I mean, I think you're right. This whole end to end narrative that you hear from Dell, for years you heard it from HP, I don't think it really makes that much of a difference. There is some supply chain leverage, no question. So you can get somewhat cheaper components, you could probably get supplies, which are very tight right now. So there are definitely some tactical advantages for customers, but I think your point is right on. The real leverage is the engagement model. And the interesting thing from I think our standpoint is that you've got a very high-touch EMC direct sales force, and that's got to expand into the channel. Now, EMC's done a pretty good job with the channel over the last, you know, half a decade. Dell doesn't have as good a reputation there. Its channel partners are many more but perhaps not as sophisticated. So I think one of the things to watch is the channel transformation and then how Dell EMC brings its services and its packages to the market. I think that's very, very important for customers in terms of reducing a lot of the complexity in the Dell EMC portfolio, which just doubled in complexity. So I think that is something that is going to be a critical indicator. It's an opportunity, and at the same time, if they blow it, it's a big threat to this organization. I think it's one of the most important things, especially, as you pointed out, in the context of cost cutting. If they lose sight of the importance of the customer, they could hit some bumps in the road and open it up for competition to come in and swoop some of their business. I don't think they will. I think Michael Dell is very focused on the customer, and EMC's culture has always been that way. So I would bet on them succeeding there, but it's not a trivial task. >> Yeah, I would agree with you. In fact, one of the statements that we heard from Michael Dell and other executives at Dell EMC at VMworld, over and over and over again, on theCUBE and elsewhere, was this notion of open with an opinion. And in many respects, the opinion is not just something that they say. It's something that they do through their packaging and how they put their technologies into the marketplace. Okay, guys, rapid fire, really, really, really short answers. Let's start with the threats. And then we'll close with the positive note on the strengths. David Floyer, really quick, biggest threat that we're looking at next week? >> The biggest threat is the evolution of ARM processes, and if they keep to an Intel-only strategy, that to me is their biggest threat. Those could offer a competition in both mobile, increasing percentages of mobile, and also also in the IOT and other processor areas. >> Alright, George Gilbert, biggest threat? >> Okay, two, summarizing the comments I made before, one, they may not be able to get the cloud vendors to adopt pools of their scale-out infrastructure because the software companies may not be ready to take advantage of it yet. So that's cloud side. >> No, you just get one. Dave Vellante. >> Dave: Interest rates. (laughing) >> Peter: Excellent. Stu Miniman. >> Stu: Software. >> Peter: Okay, come on Stu. Give me an area. >> Stu: Dell's a hardware company! Everything George said, there's no way the cloud guys are going to adopt Dell EMC's infrastructure gear. This is a software play. Dell's been cutting their software assets, and I'm really worried that I'm going to see an edge box, you know, that doesn't have the intelligence that they need to put the intelligence that they say that they're going to put in. >> So, specifically, it's software that's capable of running the edge centers, so to speak. Ralph Finos. >> Ralph: Yeah, I think the hardware race to the bottom. That's a big part of their business, and I think that's a challenge when you're looking at going head on head, with HPE especially. >> Peter: Neil Raden, Neil Raden. >> Neil: Private managed cloud. >> Or what we call true private cloud, which goes back to what Stu said, related to the software and whether or not it ends up being manageable. Okay, threats. David Floyer. >> You mean? >> Or I mean opportunities, strengths. >> Opportunities, yes. The opportunity is being by far the biggest IT place out there, and the opportunity to suck up other customers inside that. So that's a big opportunity to me. They can continue to grow by acquisition. Even companies the size of IBM might be future opportunities. >> George Gilbert. >> On the opposite side of what I said earlier, they really could work with the data management vendors because we really do need scale-out infrastructure. And the cloud vendors so far have not spec'd any or built any. And at the same time, they could- >> Just one, George. (laughing) Stu Miniman. >> Dave: Muted. >> Peter: Dave Vellante. >> Dave: I would say one of the biggest opportunities is 500,000 VMware customers. They've got the server piece, the networking piece kind of, and storage. And combine that with their services prowess, I think it's a huge opportunity for them. >> Peter: Stu, you there? Ralph Finos. >> Stu: Sorry. >> Peter: Okay, there you go. >> Stu: Dave stole mine, but it's not the VMware install base, it's really the Dell EMC install base, and those customers that they can continue moving along that journey. >> Peter: Ralph Finos. >> Ralph: Yeah, highly successful software platform that's going to be great. >> Peter: Neil Raden. >> Neil: Too big to fail. >> Alright, I'm going to give you my bottom lines here, then. So this week we discussed Dell EMC and our expectations for the Analyst Summit and our observations on what Dell has to say. But very quickly, we observed that Dell EMC is a financial play that's likely to make a number of people a lot of money, which by the way has cultural implications because that has to be spread around Dell EMC to the employee base. Otherwise some of the challenges associated with cost cutting on the horizon may be something of an issue. So the whole cultural challenges faced by this merger are not insignificant, even as the financial engineering that's going on seems to be going quite well. Our observation is that the cloud world ultimately is being driven by software and the ability to do software, with the other observation that the traditional hardware plays tied back to Intel will by themselves not be enough to guarantee success in the multitude of different cloud options that will become available, or opportunities that will become available to a wide array of companies. We do believe the true private cloud will remain crucially important, and we expect that Dell EMC will be a major player there. But we are concerned about how Dell is going to evolve as a, or Dell EMC is going to evolve as a player at the edge and the degree to which they will be able to enhance their strategy by extending relationships to other sources of hardware and components and technology, including, crucially, the technologies associated with analytics. We went through a range of different threats. If we identify two that are especially interesting, one, interest rates. If the interest rates go up, making Dell's debt more expensive, that's going to lead to some strategic changes. The second one, software. This is a software play. Dell has to demonstrate that it can, through its 6% of R and D, generate a platform that's capable of fully automating or increasing the degree to which Dell EMC technologies can be automated. In many conversations we've had with CIOs, they've been very clear. One of the key criteria for the future choices of suppliers will be the degree to which that supplier fits into their automation strategy. Dell's got a lot of work to do there. On the big opportunities side, the number one from most of us has been VMware and the VMware install base. Huge opportunity that presents a pathway for a lot of customers to get to the cloud that cannot be discounted. The second opportunity that we think is very important that I'll put out there is that Dell EMC still has a lot of customers with a lot of questions about how digital transformation's going to work. And if Dell EMC can establish itself as a thought leader in the relationship between business, digital business, and technology and bring the right technology set, including software but also packaging of other technologies, to those customers in a true private cloud format, then Dell has the potential to bias the marketplace to their platform even as the marketplace chooses in an increasingly rich set of mainly SaaS but public cloud options. Thanks very much, and we look forward to speaking with you next week on the Wikibon Weekly Research Meeting here on theCUBE. (techno music)

Published Date : Oct 9 2017

SUMMARY :

And in the ensuing year, there's been And it's likely that customers are going to see And it's got to get into the game. platform, foundation, is going to be more cloud-oriented. and the go-to-market processes of what used to be VCE, certainly in the case of Dell, So the question to ask is Dell EMC is relative to some of these technologies? in the clouds to take advantage and ask everybody to give me what the opportunity is and that's got to expand into the channel. And in many respects, the opinion is not just and if they keep to an Intel-only strategy, one, they may not be able to get No, you just get one. Dave: Interest rates. Peter: Excellent. Peter: Okay, come on Stu. the cloud guys are going to adopt that's capable of running the edge centers, so to speak. Ralph: Yeah, I think the hardware race to the bottom. related to the software and whether or not So that's a big opportunity to me. And the cloud vendors so far have not spec'd any Stu Miniman. And combine that with their services prowess, Peter: Stu, you there? install base, it's really the Dell EMC install base, that's going to be great. and the ability to do software,

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
DavidPERSON

0.99+

David FloyerPERSON

0.99+

George GilbertPERSON

0.99+

AmazonORGANIZATION

0.99+

GeorgePERSON

0.99+

Neil RadenPERSON

0.99+

Dave VellantePERSON

0.99+

EMCORGANIZATION

0.99+

DavePERSON

0.99+

MichaelPERSON

0.99+

Peter BurrisPERSON

0.99+

Ralph FinosPERSON

0.99+

MicrosoftORGANIZATION

0.99+

Stu MinimanPERSON

0.99+

CiscoORGANIZATION

0.99+

OracleORGANIZATION

0.99+

GoogleORGANIZATION

0.99+

RalphPERSON

0.99+

PeterPERSON

0.99+

AWSORGANIZATION

0.99+

NeilPERSON

0.99+

DellORGANIZATION

0.99+

IBMORGANIZATION

0.99+

Michael DellPERSON

0.99+

David FoyerPERSON

0.99+

NutanixORGANIZATION

0.99+

MondayDATE

0.99+

12%QUANTITY

0.99+

HPORGANIZATION

0.99+

hundredsQUANTITY

0.99+

next weekDATE

0.99+

Wikibon Analyst Meeting | Blockchain


 

>> Hi welcome to Wikibon's weekly Friday research meeting. Here on the queue. (tech music) >> I'm Peter Burris. We've assembled a gus team of analysts to discuss a very very important topic. Block chain. Now block chain means a lot of things to a lot of different people. Partly because there hasn't been a lot of practical utilization of it. We've talked a lot about bitcoin and ethereum and some other applications of block chain related technologies. But it's very clear that what block chain will become is more than what it is. And to try to unpack that and really understand block chain from the perspective of business decision makers, CIOs and IT. And the IT industry, we want to talk a little bit about what block chain is. What some of the key applications are. And what's it's going to mean from a technology design and investment standpoint over the next few years. Now to kick us off, we've asked David Floyer to start with a little observation on. Let's talk a bit about what is block chain David? >> Okay well block chain is a very exciting set of new technologies. But at heart it's the shared immutable ledger. So lets us go down one level from that. It allows consensus, it allows all of the participants to agree on its validity. It allows provenance to know exactly what has happened. The history of what's happened. It allows immutability so that no participant can tamper with a transaction or an asset value. And is now allows finality. A single shared ledger provided in one place. So that they can track the ownership of an asset or the completion of transaction. So the second concept's really important. Where are we applying this? And where need to apply this in any sort of business network. Assets can be real or they can be virtual. They can be widgets that you count or they can be IP for example. So the core of it is a business network. So what's the problem that it solves? The key problem that it solves is that in order to have those characteristics. Of consensus, provenance, immutability, finality. You, society had to put together very complex systems indeed. So give a few example of those, stock exchanges need to be created. Around the value of stocks then they were sold and the transactions. Credit card companies, the Swift banking system. Diamond dealers for example have to had a system by which they could know the provenance and value of these assets. These systems were essentially centralized. They were centralized and controlled centrally. Or there was a very very sophisticated complex trust and honor system. Some of the systems that have been put in place particularly in the Middle East. And they're expensive. The transaction cost is high for doing that. And companies that have allowed themselves to be in control of these, can take a high percentage. A large amount of money out of this. But they make a lot of money by owning this right to manage. This provenance, this immutable ledger. So the value of block chain is that we can cut down that cost and we can create many more smaller business networks. Which can us focus on a small area and get the same result as this big complex thing we had before. >> But a crucial feature is that David, is going to be the question of design. We're going to have to set these things up and design them right. And that's going to have a lot of implications for how businesses work. So John if I take a look at some of the applications of this. David talked about immutability, finality, provenance etc. And how it's going to take transaction costs out of the system. Where do we envision block chain's going to end up within the application framework? >> I think the key thing that on the application. There's a many series of use. Cause there's low hanging fruit today and then ones that people are connecting the dots in the future. The fundamental application impact really comes down to. Where the confusion and clarity come from. The difference between decentralized and distributed. That's often confused and I think applications purpose. The outcome of applications is really how people work and engage and create value. And the measure of that is how authority and control are provisioned. Distributed and decentralized has a unique difference there. That's a fundamental architectural thing that David pointed out. When it comes to block chain people get confused. They think bitcoin, they think ethereum. That's kind of on the currency side and the crypto side. But the momentum around decentralization goes much farther than that. So you're seeing things like energy systems. Grid Plus had a presale that was over $40 million. They're changing the game on how energy may be used and managed. The government, political sovereignty is changing. A breakthrough in science for instance. Crypser and other labs make opportunities for decentralized labs. Crowdfunding is an obvious one. You see that really get a lot of traction. Space exploration is one. Open source software, you're going to see a lot of activity there. Personal health monitoring, online educational systems, security. These are tell signs that the game will shift in terms of the new architecture. And then the impact will be the creative destruction around that. And how things are done so we were talking before we went on. About the role of horse and buggy verses a car. A mechanic on a car is not the same person managing the horse and buggy. That's the role of the service provider market. A lawyer is going to be very instrumental or legal but in new context. So the applications are going to morph around that, you're going to see people who deal with. Used cases like tokens, example that's hence the token sale. But applications that are already solving some of these problems with their business. Block chain opens up the door for a lot more head room for competitive advantage and value creation. I think that's where the action is. >> So Dave Valente, I want to bring you into this conversation very quickly. And try to build upon what John just talked about, this notion of the difference between distributed and decentralized. Distributed is kind of where things are. Decentralized is more of a state about authority. What kind of observations do we initially make about how block chain is going to impact the whole concept of authority within communities and markets? >> I think that's right I do think there are some subtle but important differences between distributed and decentralized. If you look at the internet initially and today. It's distributed but power increasingly has become centralized and that's problematic. Because it exposes us to a number of things. High value breaches if that power is centralized. Manipulation, surveillance risks, etc. I think there are you know some characteristics to look at that are relevant here. The distributive nature of that block chain, the immutability, and the lack of need or no need for single trusted third parties. So the distributed nature of the block chain verses that decentralized internet if you will. To use that as an analogy dramatically decreases those exposures. And it's much more inclusive. >> So when you think about that notion of inclusivity. We do have to come back to the idea that, we have certain ways. David you mentioned about how we're doing things today. Relatively high transaction cost but a few parties making an enormous amount of money but administering those transaction costs. And now we're talking about going to something that does inherently look more like a peer to peer but requires an enormous amount of upfront design. James Kobielus, talk a little bit about how we envision the transition. From where we are today to where certain attributes of these applications are going to be in the future. Are we going to need things like PKI? What is going to be the near term implications at a business level? >> Yeah. You know I agree with everything that Dave and John said about the business environment. Word going is that, what's fundamentally innovative about block chain. The evolution of distributive collaboration is a really clever commerce. It builds upon immutable distributive public identity. PKI, that's what PKI is all about. PKI has been around for a while. And adds to it an immutable distributed public ledger. And in the public ledger itself then the block chain becomes the foundation for distributed decentralized market places. With that said. Where it's going is that, increasingly there will be. Layered onto block chain, more standard interchange formats to enable various types of collaboration or interaction amongst various types of entity. And various types of business networks. I guess it's just the foundation for really a truly distributed peer to peer environment. At it's very heart there is still. As it were more centralized infrastructure called PKI with certification authorities and root CAs. That's not going away. That's becoming ever more fundamental the whole PKI infrastructure that's been build up. >> So David Foyer if I were to listen to this conversation as a CIO. I might think that this is going to be somebody else's problem. Lets take this down inside the business. What is it that a CIO needs to think about? This notion of distributed networks of data that both represent data and it can represent other assets? And what're some of the things that I need to start thinking about, inside my business? Is block chain really just at an economy level? Or is it going to have an impact on how I think about architecting, building, conceiving, deploying, and managing systems? >> So there's no shortcut to good systems design. People design very complex centralized systems. And they're going to need to design systems that work together. Especially when you go real time. So it's relatively simple to have batch systems which can catch up and things like that. But if you want to get the real value of block chain, it's going to be doing things in real time. So it's for example, if you're in a car and you want to get data from other cars. And you want to be able to feed data into that, to optimize on where you should have lunch or the best route to take. All of that data has to be done in real time. So what needs to be done is to make sure. As in any design of system that you have sufficient power, you have the network which is fast enough. And these types of systems because of their encryption because there's a lot of work that needs to be done to make them immutable and all the other characteristics. These systems take a lot more power to drive. >> David let me, let me jump in for a second. So one of the key differences just so we're clear. Is that we build these centralized systems and historically we've created a data store. That in a centralized system is under centralized control. And we serialize all access to that data through that centralized control. Fundamentally what we're talking, and that creates latency. Both on what's on the wire but also latency in terms of the path link. Of handling that serialization software through the system. What we're fundamentally talking about here is decentralizing that control. Putting the data everywhere but decentralizing that control. So we're not serializing anything through a central authority. That's fundamentally what we're doing right? >> Yes but a little caution there. You still got to have processed it in all of the nodes and for you to be able to get it. And you still got to make sure that all of that work's been done. >> That's all decentralized. >> It is decentralized but you still if people aren't keeping up to date up to time. You will still have a serialization impact. Eventually yes. >> So George think about from a peer to peer standpoint. What does this mean from thinking not just about designing systems at a grand scale but on a smaller scale. Can envision how block chain might be used to better marry identity, authority, and incentives as we think about building systems within a business? >> Well you had talked about the upfront design requirements. You talked about the upfront design requirements in organizational design enabled by this. At the risk of sounding big picture, this technology makes it easier to have an ecosystem of peer to peer companies that cooperate. Typically in the past we've had like supply chain masters. And they've sort of disseminated demand signals and collective supply signals. That was the central coordination, central trust sort of clearing house. And having the data distributed. The data distributed but this one system of record which essentially is logically centralized. Makes it easier to have a new sort of a new ecosystem design. >> So fundamentally we're talking about the idea of design very very large. In the sene of the degree to which we have to diminish the expectation that we'll fix design problems later on. We're going to have to do a lot of design work upfront. So David I want to close this conversation by bringing it down to the middle so to speak. Because when we think about unigrid and the idea of highly elastic, highly plastic systems. Where data's flying around and five milliseconds away from any other data kind of thing. There's going to be a need to envision how we can manage all of those applications or user problems within a system. In a way that sustains integrity of the data. Does block chain have a role to play inside the system and how we allocate resources? How we allocate data? What do you think? >> I think that's a very astute observation because one of the issues at heart here is ensuring that the system itself is not tampered with. The chips or the any part of it. So there is a role here potentially for block chain to be the arbiter of truth within the system itself. Or within the systems themselves. Now that is not here yet and that's got to be something which works super suer fast. It has to work in a way which allows the rest of the system to do its work. So it's going to be extremely interesting technology change to put it in there. But the value of it would be enormous. If you can trust then that the system itself. The chips, all of the. Everything within that system. For example you can take a snapshot these days which are very quick indeed. And if you can track that track all of the activities. You will have much greater confidence in the system itself. But that's not here yet and I suspect that's going to be quite a few years before those are put into the microcode etc. >> So John Furrier. That has an implication where we start thinking about control, authority. What's this going to mean? >> I mean David talks about the network aspect in the system's level. >> The systems of control you guys are getting at. But the edge of the network is where the action is, if you look at all the accessible block chain. You're seeing the edge of the network really be the economies of scale. And that's where, people call this the future of work. All this nonsense out there is true but the action for the people getting value are the ones that have economies of scale that go beyond their current economies of scale centralized systems. So you're seeing edge of the network type things. Crowdsourcing, edge of the network, of autonomous vehicles. You mentioned that used case. So the edge of the network paradigm that we've been researching at Wikibon. In covering on SiliconANGLE and on the cube of the events. Fundamental in this new exploration area. So for CIOs and for businesses trying to grab block chain. Which is different than the crypto currency piece, working together with tokens and block chain, is an edge of the network value proposition. As you go beyond centralization. Hence decentralization and distributed working together that's where the action is. The people that are realizing the benefits there and so companies that are evaluating their position. These of the block chain and crypto should be evaluating. Our we exploring these kinds of things? And that's where the filter is. >> Yeah so I'd say here's what I'd say just before I summarize gentlemen. I think you're right, I think that block chain that we as we've written on our Wikibon research. Folks have to design around the edge whether block chain's there or not. But block chain is going to ultimately make it easier to enact those designs over a period of time. Okay let me summarize guys. Great conversation today about block chain and our objective here is to bring it down from the level of magic, the level of potential. The level of someday into the level of practical. And I think what we've done is we've talked about block chain in a couple of different ways. First off, block chain is an immutable ledger that is decentralized in the sense that. A lot of different agents can gain control of a piece of data in a way that everybody else knows where it is and who has it. And that opens up an enormous amount of new application forms. We talked about what some of those application forms are. They can be open source software having an enormous new way of thinking about it. How they monetize work day performed. We've talked about how business networks can be established at a large and small scale That are capable of now but having a centralized authority that becomes the clearing house but rather reduce the transaction cost of deploying and running those networks. However all this means ultimately that the issue of design becomes that much more important. Block chain is not a magic technology. You just don't establish a block chain. It absolutely requires upfront thinking about what is it that you're trying to perform what is the work, what is the context that the block chain is trying to manage from an overall security standpoint? That's going to require a lot of very collaborative work between the CIO, the IT organization, the business and very importantly the lawyers. And that's not going to go away. We will see near term a number of interesting efforts from existing authorities. Folks who are handling public infrastructure, Swift and other types of networks try to use block chain as a mechanism. And that's likely to have some important queues as to how this is going to play out. But ultimately what CIOs need to do is they need to turn to somebody and they need to say, go understand block chain in an architectural level. So we can think about how we're going to build applications for communities that operate differently. Now the final point that I want to make here is that it's likely that we will block chain or block chain like technologies. Actually go deeper into systems as a way of arbitrating access to data and other resources within some of these highly elastic very a large scale unigrid like systems that we're talking about building. Definitely something to watch, not here today but likely something that's going to start hitting the market next few years. What's the action item? CIOs need to understand that block chain is not magic, it's not something that somebody else is going to do. You have to get someone on the issue of block chain architecture right now. Understand block chain design issues right now so that you can deploy block chain in small ways. But absolutely participate in the process of your business starting to enter into business networks that are likely to be mediated by block chain like technologies. Don't worry so much about bitcoin or ethereum. Watch those currencies, they're going to be important. But that's not really where the action is going to be over the next few years. The action is going to be how we think about bringing data and authority and identity closer to the work that's going to be performed increasingly at the edge. Utilizing a decentralized authority mechanism. And block chain right now is the best option we have. Thanks very much for observing us once again have an open conversation about a crucial research matter. This is Wikibon's research meeting on the cube. Until next time. (techy music)

Published Date : Sep 23 2017

SUMMARY :

Here on the queue. And the IT industry, It allows consensus, it allows all of the participants is going to be the question of design. So the applications are going to morph around that, is going to impact the whole concept of authority So the distributed nature of the block chain What is going to be the near term implications And in the public ledger itself What is it that a CIO needs to think about? or the best route to take. So one of the key differences just so we're clear. You still got to have processed it in all of the nodes but you still if people aren't keeping up to date So George think about from a peer to peer standpoint. And having the data distributed. In the sene of the degree to which we have to allows the rest of the system to do its work. What's this going to mean? I mean David talks about the network aspect The people that are realizing the benefits there The action is going to be how we think about

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
DavidPERSON

0.99+

DavePERSON

0.99+

James KobielusPERSON

0.99+

JohnPERSON

0.99+

David FoyerPERSON

0.99+

David FloyerPERSON

0.99+

Peter BurrisPERSON

0.99+

Dave ValentePERSON

0.99+

John FurrierPERSON

0.99+

GeorgePERSON

0.99+

over $40 millionQUANTITY

0.99+

Middle EastLOCATION

0.99+

PKIORGANIZATION

0.99+

WikibonORGANIZATION

0.99+

second conceptQUANTITY

0.99+

one levelQUANTITY

0.99+

todayDATE

0.99+

one placeQUANTITY

0.99+

oneQUANTITY

0.98+

BothQUANTITY

0.98+

bothQUANTITY

0.98+

singleQUANTITY

0.97+

FridayDATE

0.97+

SwiftTITLE

0.97+

five millisecondsQUANTITY

0.97+

FirstQUANTITY

0.95+

next few yearsDATE

0.87+

one systemQUANTITY

0.86+

Grid PlusTITLE

0.81+

ledgerQUANTITY

0.71+

bitcoinOTHER

0.69+

a secondQUANTITY

0.67+

CrypserORGANIZATION

0.62+

issuesQUANTITY

0.6+

SiliconANGLEORGANIZATION

0.5+

deQUANTITY

0.34+

Day One Wrap Up | VMworld 2017


 

>> Narrator: Live from Las Vegas, it's the CUBE, covering VMworld 2017. Brought to you by VMware, and it's ecosystem partners. >> Welcome back to VMworld 2017, everybody. My name is Dave Velante, and this is our day one wrap. I'm here with Peter Burress and David Foyer who have been inside the analyst meeting all day. Peter, I want to start with you. The premise that we wanted to test coming into this show was the following question that we wanted answered: Is VMware's momentum a function of people realizing, experience the data realities of cloud. In other words, as you've phrased it, the reality that they must bring the cloud model to their data, versus trying to force-fit their business model into the cloud. Is that a reality that customers have now come to see, or is this really just kind of an end user, or an enterprise license agreement, product cycle for VMware? Is that what the momentum is behind? >> I don't think it's the latter but I think there's elements to it. So I think, I don't think customers have fully rocked the idea, fully conceived of the idea that their data is the most important asset, not their hardware. And that the goal is not to get rid of hardware, the goal is to get more value out of your data, and that means bringing cloud and cloud experience to your data. But I think also that VMware is interesting because they do have this enormous install base. You know, 500 thousand plus customers, many of whom are vitally dependent upon VMware as a technology. And for many years it looked as though VMware was just going to sit on that and milk it. But in the last two years, it's become very very evident that they're not. There have not been a lot of really hugely successful industry or company transformations in this industry. You can look back at IBM in the 90's, Microsoft has done it a couple times, but there aren't that many companies that have done a really great, hugely successful transformation. VMware may be one of them. So they're able to build on that notion that what's going to matter is: where is your data? And bringing function and capabilities to that data, number one, but leveraging their installed base, and providing the chops to help their customers move forward from where they are, is, in many respects, the core story of what's happening here. >> So David, let me bring you to the conversation. About a year ago, VMware and AWS announced a partnership. We're just starting to see the initial pieces of that, there's obviously a lot of engineering work having to be done and heavy lifting. But the other piece that might be a tailwind for VMware was their cloud strategy was all over the place for years. For the better part of a decade, it was Vcloud Air and then sort of shifting that strategy, owning their own cloud. >> They had no cloud strategy. >> Well, they tried a lot of different things, and none of them worked, and then basically they said "okay look, we're going to partner with IBM," "we're going to partner with Microsoft," "We're going to partner with AWS." In particular, the AWS partnership, it seems like brought a lot of clarity, do you think that made customers feel more comfortable that entering into long-term relationship with VMware, now that they had a clearer cloud strategy both for the customers and the partners, gave VMware a boost over this past year? >> Absolutely, and in particular, the knock on effect of the agreement with AWS gave confidence, I believe, to VMware customers that they knew, they had a path forward. They had a clear path forward. And the same with AWS, and they've extended that now with rack space and I hear that even Google is in the mix, as well. So, they've announced firm relationships with other clouds, they've announced their foundation, which is again, part of making the cloud respects part of the overall platform. >> Well, they really have to make sure it doesn't just become a marketing or markitecture. >> Sure, absolutely. But I'm impressed with the confidence they have. I think their story of any device, any application, on any cloud, the little piece of intrinsic security may be that needs a lot of work on that area. But the first three things I think is a strong, positive, confident start. >> Well, they've been talking about that for a while, but two years ago they had negative license growth, and now it's significant. I mean, double digit license growth, 13% last quarter, we think we've had three quarters of substantive revenue growth, so do we feel as though this is a semi-permanent on the near to mid-term trend? >> There are three platforms, aren't there? There's AWS, there's Lennox, and there's Azure. And at one stage, they were sort of feeding that everything might go to Lennox, I think there are three firm platforms that will, in my opinion, survive at least til the next decade. >> Certainly in the U.S. The global market has to weigh in, there may be some things that happen elsewhere, but certainly in the U.S. No, David's right. Where we are right now, kind of as I said, VMware is going through transformation. It looks like it's going to succeed, it's going to remain relevant, and it's going to be in a position to bring its customers forward, and show them a direction where they could put their money, where they're going to get value as opposed to putting their money where somebody else is going to get value. If they carry on with the transformation they're in, and the commitments that they're making, this is going to be, this is going to remain, one of the top five or eight technologies in the enterprise for the foreseeable future. >> Yeah, and I think people underestimate the power of the ecosystem. That's really kicked in. And I really do feel like it's some of that clarity with the cloud strategy. Now, the other interesting thing is VMware at one point wanted to own its own data centers and manage its own data centers. They just raised four billion dollars in debt. They're going to spend, maybe, a couple a hundred million on capex this year, that's it. I guarantee Google, and Microsoft, and the Hyperscale guys are going to spend a lot more than that. Very efficient operating model from that standpoint. They raised a bunch of cheap debt. They're buying back stock, many people feel like the stock is underpriced. The cash flow is really strong, operating cash flow at three billion dollars, so things are pretty good right now. The data center is on fire. What did you guys learn today, in terms of, that was of interest in terms of product announcements, innovations, other things that were of interest or exciting to you? >> Well, the first thing I think I learned, and David, you and I were talking about this a bit, is that when you peel back every major commitment that they're making right now, every new effort that they're undertaking, buried inside is NSX. Somewhere in there is NSX. And it looks like they're really going to bet heavily on NSX. And that makes some good sense, it's going to be a multi-cloud world, one of the biggest challenges the customers are going to have are how are they going to weave multiple clouds together so that you have a coherent application or set of work loads that you can manage. So that's probably the first thing, is that the last year, NSX started to come to the fore, this year, any conversation you have, blah blah blah, NSX, blah blah blah NSX. So NSX has replaced v sphere as the primary, that core technology that's going forward. >> Because of that multi-cloud imperative. >> Right. >> I would pick another area as perhaps also being very very important, and that was the success they've had with the vSAN. >> Peter: vSAN? >> vSAN. >> Oh yeah, totally. >> They have essentially reducing the cost, straight-forwardly reducing the cost of running a v-sphere environment by being able to put in vSAN, and they didn't have -- >> EFC's finally out of the way. >> Exactly. >> I'll say it. I mean let's face it, EFC held back VMware for years. When we first started coming to VMworld, and we said "wow, this company's in an amazing position" "to really innovate in storage," and storage is a real mess, but they didn't have the resources to do that, and they were sort of publishing these API's saying you guys all figure it out. Finally, you know, under the Gelsinger era, he was able to, I don't know, fight, beg, borrow, steal, who knows how it all went down internally, but they've really taken the handcuffs off. >> They have and it's good, and they're aggressive. >> But there's another thing, and that is the EMC's, EMC's transformation to Flash, absolutely facilitated the emergence of vSAN as a platform for how you're going to handle storage. So, it was a combination of things. I'm not sure if vSAN would've worked as well if EMC was still driving storage raise with this. >> Exactly. In fact, they gave some interesting numbers. >> So, they did. >> Yeah, 60% of vSAN is flash, and of the VX Rail, 71% is all flash. >> 71% is flash. >> And the reason they give, and I think is right, is that it's so much simpler for the VMware, for the VMware operators to manage. >> And that's Flash inside of what, a Dell server? Or an HPE server? Or? >> No, it doesn't matter. But the key thing is, is that, you know, >> Not an array. >> It might very well, not an array. It might very well be that EMC was holding things back, but I think there's also a very practical, technical reality here that the amazing potential of vSAN has become unlocked by the market's adoption of Flash. Which, you know, David was one of the guys that helped move the market many years ago. So it's coming together for them in ways that perhaps they planned from the beginning, but they're taking advantage of the opportunities as they emerge. And you know, I'll say one other thing, Pat Gelsinger took some serious hits over the last 18 months in the rumor mill, and he's still here, and his company's doing pretty well. >> Well, two years ago it was like, oh, Pat's on his way out, and then he gave a really strong keynote. I thought his keynote today was very crisp, and evidently he was a little under the weather, so he did a good job fighting through that, but last thing, any announcements that were exciting to you or things that you're expect, big announcements coming tomorrow? We're hearing about some super secret stuff that's kind of leaking out. >> Yeah, you got to be a little bit careful about that, but what did you hear today, David, that made you go "hmm?" >> Well, I still want to focus on one thing that I think is one of the biggest issues, and that is security. They were very open today, very very open today about what a mess security was. And they came up with something called, what was it, absence? Which is a good idea. >> Now you're making me go to my notes. >> Defense. >> App defense. Which is an idea, but it's just a start. These are huge amounts of greater investment in security, from Dell, from EMC, and from VMware, all together. They have to step up in a much bigger way. >> He said in his keynote today that the industry, as an industry we have let you down. Several years ago, one of the early years when we interviewed Pat in the CUBE I had asked him, is security a do-over? Unequivocally, he said yes, and that was years ago, we're still doing it over. Alright guys, we got to wrap. Thanks very much for coming on and close, I look forward to more analysis from you guys tomorrow and throughout the week. This is day one, we launch tomorrow, we start at 10:30 local time. >> That's pacific time. >> Yes, which is pacific time. We're in Las Vegas, watch siliconangle.tv. Siliconangle.com for all the news, check out wiki.com for all the research. We're out. This is day one, this is the CUBE, we'll see you tomorrow.

Published Date : Aug 29 2017

SUMMARY :

it's the CUBE, covering VMworld 2017. the reality that they must bring the cloud model And that the goal is not to get rid of hardware, But the other piece that might be a tailwind for the customers and the partners, And the same with AWS, and they've extended that now Well, they really have to make sure it doesn't But the first three things I think is a strong, on the near to mid-term trend? that everything might go to Lennox, and the commitments that they're making, I guarantee Google, and Microsoft, and the Hyperscale guys is that the last year, NSX started to come to the fore, and that was the success they've had with the vSAN. the resources to do that, absolutely facilitated the emergence of vSAN In fact, they gave some interesting numbers. is flash, and of the VX Rail, is that it's so much simpler for the VMware, But the key thing is, that helped move the market many years ago. and evidently he was a little under the weather, and that is security. They have to step up in a much bigger way. that the industry, as an industry Siliconangle.com for all the news,

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
AWSORGANIZATION

0.99+

Dave VelantePERSON

0.99+

DavidPERSON

0.99+

GoogleORGANIZATION

0.99+

IBMORGANIZATION

0.99+

MicrosoftORGANIZATION

0.99+

DellORGANIZATION

0.99+

EMCORGANIZATION

0.99+

Pat GelsingerPERSON

0.99+

David FoyerPERSON

0.99+

PeterPERSON

0.99+

VMwareORGANIZATION

0.99+

PatPERSON

0.99+

tomorrowDATE

0.99+

NSXORGANIZATION

0.99+

Las VegasLOCATION

0.99+

three billion dollarsQUANTITY

0.99+

Peter BurressPERSON

0.99+

13%QUANTITY

0.99+

EFCORGANIZATION

0.99+

LennoxORGANIZATION

0.99+

this yearDATE

0.99+

todayDATE

0.99+

last quarterDATE

0.99+

last yearDATE

0.99+

four billion dollarsQUANTITY

0.99+

oneQUANTITY

0.99+

siliconangle.tvOTHER

0.99+

71%QUANTITY

0.99+

three firm platformsQUANTITY

0.98+

U.S.LOCATION

0.98+

Several years agoDATE

0.98+

two years agoDATE

0.98+

first thingQUANTITY

0.98+

three platformsQUANTITY

0.97+

eight technologiesQUANTITY

0.97+

VMworldORGANIZATION

0.97+

next decadeDATE

0.96+

GelsingerPERSON

0.96+

three quartersQUANTITY

0.96+

FlashTITLE

0.96+

VMworld 2017EVENT

0.96+

one pointQUANTITY

0.95+

About a year agoDATE

0.95+