Breaking Analysis: How Snowflake Plans to Change a Flawed Data Warehouse Model
>> From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> Snowflake is not going to grow into its valuation by stealing the croissant from the breakfast table of the on-prem data warehouse vendors. Look, even if Snowflake got 100% of the data warehouse business, it wouldn't come close to justifying its market cap. Rather, Snowflake has to create an entirely new market based on completely changing the way organizations think about monetizing data. Every organization I talk to says it wants to be, or many say they already are, data-driven. Why wouldn't you aspire to that goal? There's probably nothing more strategic than leveraging data to power your digital business and create competitive advantage. But many businesses are failing, or I predict will fail, to create a true data-driven culture because they're relying on a flawed architectural model formed by decades of building centralized data platforms. Welcome everyone to this week's Wikibon Cube Insights powered by ETR. In this Breaking Analysis, I want to share some new thoughts and fresh ETR data on how organizations can transform their businesses through data by reinventing their data architectures. And I want to share our thoughts on why we think Snowflake is currently in a very strong position to lead this effort. Now, on November 17th, theCUBE is hosting the Snowflake Data Cloud Summit. Snowflake's ascendancy and its blockbuster IPO have been widely covered by us and many others. Now, since Snowflake went public, we've been inundated with outreach from investors, customers, and competitors that wanted to either better understand the opportunities or explain why their approach is better or different. And in this segment, ahead of Snowflake's big event, we want to share some of what we learned and how we see it. 
Now, theCUBE is getting paid to host this event, so I need you to know that, and you can draw your own conclusions from my remarks. But neither Snowflake nor any other sponsor of theCUBE or client of SiliconANGLE Media has editorial influence over Breaking Analysis. The opinions here are mine, and I would encourage you to read my ethics statement in this regard. I want to talk about the failed data model. The problem is complex, I'm not debating that. Organizations have to integrate data and platforms with existing operational systems, many of which were developed decades ago. And there's a culture and a set of processes that have been built around these systems, and they've been hardened over the years. This chart here tries to depict the progression of the monolithic data source, which, for me, began in the 1980s when Decision Support Systems, or DSS, promised to solve our data problems. The data warehouse became very popular and data marts sprung up all over the place. This created more proprietary stovepipes with data locked inside. The Enron collapse led to Sarbanes-Oxley. Now, this tightened up reporting requirements, and that breathed new life into the data warehouse model. But it remained expensive and cumbersome, I've talked about that a lot, like a snake swallowing a basketball. The 2010s ushered in the big data movement, and data lakes emerged. With Hadoop, we saw the idea of no schema on write, where you put structured and unstructured data into a repository and figure it all out on read. What emerged was a fairly complex data pipeline that involved ingesting, cleaning, processing, analyzing, preparing, and ultimately serving data to the lines of business. And this is where we are today, with very hyper-specialized roles around data engineering, data quality, and data science. There's lots of batch processing going on, and Spark has emerged to address the complexity associated with MapReduce, and it definitely helped improve the situation. 
We're also seeing attempts to blend in real-time stream processing with the emergence of tools like Kafka and others. But I'll argue that in a strange way, these innovations actually compound the problem. And I want to discuss that, because what they do is heighten the need for more specialization, more fragmentation, and more stovepipes within the data life cycle. Now, in reality, and it pains me to say this, the outcome of the big data movement, as we sit here in 2020, is that we've created thousands of complicated science projects that have once again failed to live up to the promise of rapid, cost-effective time to insights. So, what will the 2020s bring? What's the next silver bullet? You hear terms like the lakehouse, which Databricks is trying to popularize, and I'm going to talk today about data mesh. These are efforts that look to modernize data lakes and sometimes merge the best of the data warehouse and second-generation systems into a new paradigm that might unify batch and streaming frameworks. And this definitely addresses some of the gaps, but in our view, it still suffers from some of the underlying problems of previous-generation data architectures. In other words, if the next-gen data architecture is incremental, centralized, rigid, and primarily focused on making the technology to get data in and out of the pipeline work, we predict it's going to fail to live up to expectations again. Rather, what we're envisioning is an architecture based on the principles of distributed data, where domain knowledge is the primary target citizen, and data is not seen as a by-product, i.e., the exhaust of an operational system, but rather as a service that can be delivered in multiple forms and use cases across an ecosystem. This is why we often say that data is not the new oil. We don't like that phrase. A specific gallon of oil can either fuel my home or lubricate my car engine, but it can't do both. 
Data does not follow the same laws of scarcity as natural resources. Again, what we're envisioning is a rethinking of the data pipeline and the associated cultures to put the data needs of the domain owner at the core and provide automated, governed, and secure access to data as a service at scale. Now, how is this different? Let's take a look and unpack the data pipeline today and look deeper into the situation. You all know this picture that I'm showing. There's nothing really new here. The data comes from inside and outside the enterprise. It gets processed, cleansed, or augmented so that it can be trusted and made useful. Nobody wants to use data that they can't trust. And then we can add machine intelligence and do more analysis, and finally deliver the data so that domain-specific consumers can essentially build data products and services or reports and dashboards or content services, for instance, an insurance policy, a financial product, a loan. These are packaged and made available for someone to make decisions on or to make a purchase. And all the metadata associated with this data is packaged along with the dataset. Now, we've broken down these steps into atomic components over time so we can optimize on each and make them as efficient as possible. And down below, you have these happy stick figures. Sometimes they're happy. But they're highly specialized individuals and they each do their job, and they do it well, to make sure that the data gets in, gets processed, and is delivered in a timely manner. Now, while these individual pieces seemingly are autonomous and can be optimized and scaled, they're all encompassed within the centralized big data platform. And it's generally accepted that this platform is domain agnostic, meaning the platform is the data owner, not the domain-specific experts. Now, there are a number of problems with this model. 
The first: while it's fine for organizations with a smaller number of domains, organizations with a large number of data sources and complex domain structures struggle to create a common data parlance, and, by extension, a common data culture. Another problem is that, as the number of data sources grows, organizing and harmonizing them in a centralized platform becomes increasingly difficult, because the context of the domain and the line of business gets lost. Moreover, as ecosystems grow and you add more data, the processes associated with the centralized platform tend to get further genericized. They again lose that domain-specific context. Wait (chuckling), there are more problems. Now, while in theory organizations are optimizing on the piece parts of the pipeline, the reality is, when the domain requires a change, for example, a new data source, or an ecosystem partnership requires a change in access or processes that can benefit a domain consumer, the change is subservient to the dependencies and the need to synchronize across these discrete parts of the pipeline, or is actually orthogonal to each of those parts. In other words, in actuality, the monolithic data platform itself remains the most granular part of the system. Now, when I complain about this faulty structure, some folks tell me this problem has been solved. That there are services that allow new data sources to easily be added. A good example of this is Databricks Ingest, which is an auto loader. And what it does is simplify the ingestion into the company's Delta Lake offering. And rather than centralizing in a data warehouse, which struggles to efficiently allow things like machine learning frameworks to be incorporated, this feature allows you to put all the data into a centralized data lake. 
More so, the argument goes. The problem that I see with this is that while the approach definitely minimizes the complexities of adding new data sources, it still relies on this linear end-to-end process that slows down the introduction of data sources from the domain consumer side of the pipeline. In other words, the domain expert still has to elbow her way to the front of the line, or the pipeline in this case, to get stuff done. And finally, the way we are organizing teams is a point of contention, and I believe it's going to continue to cause problems down the road. Specifically, we've again optimized on technology expertise, where, for example, data engineers, while really good at what they do, are often removed from the operations of the business. Essentially, we've created more silos and organized around technical expertise versus domain knowledge. As an example, a data team has to work with data that is delivered with very little domain specificity and serves a variety of highly specialized consumption use cases. All right. I want to step back for a minute and talk about some of the problems that people bring up with Snowflake, and then I'll relate it back to the basic premise here. As I said earlier, we've been hammered by dozens and dozens of data points, opinions, and criticisms of Snowflake. And I'll share a few here. But I'll post a deeper technical analysis from a software engineer that I found to be fairly balanced. There are five Snowflake criticisms that I'll highlight. And there are many more, but here are some that I want to call out. Price transparency. I've had more than a few customers tell me they chose an alternative database because of the unpredictable nature of Snowflake's pricing model. Snowflake, as you probably know, prices based on consumption, just like AWS and other cloud providers. So just like AWS, for example, the bill at the end of the month is sometimes unpredictable. Is this a problem? Yes. 
But like AWS, I would say, "Kill me with that problem." Look, if users are creating value by using Snowflake, then that's good for the business. But clearly this is a sore point for some users, especially for procurement and finance, which don't like unpredictability. And Snowflake needs to do a better job communicating and managing this issue with tooling that can predict and help better manage costs. Next, workload management, or lack thereof. Look, if you want to isolate higher-performance workloads with Snowflake, you just spin up a separate virtual warehouse. It's kind of a brute-force approach. It works generally, but it will add expense. I'm kind of reminded of Pure Storage and its approach to storage management. The engineers at Pure always design for simplicity, and this is the approach that Snowflake is taking. The difference between Pure and Snowflake, as I'll discuss in a moment, is that Pure's ascendancy was based largely on stealing share from legacy EMC systems. Snowflake, in my view, has a much, much larger incremental market opportunity. Next is caching architecture. You hear this a lot. At the end of the day, Snowflake is based on a caching architecture. And a caching architecture has to be working for some time to optimize performance. Caches work well when the size of the working set is small. Caches generally don't work well when the working set is very, very large. In general, transactional databases have pretty small datasets. And in general, analytics datasets are potentially much larger. Is Snowflake in the analytics business? Yes. But the good thing that Snowflake has done is they've enabled data sharing, and its caching architecture serves its customers well because it allows domain experts, you're going to hear this a lot from me today, to isolate and analyze problems or go after opportunities based on tactical needs. 
That said, very big queries across whole datasets or badly written queries that scan the entire database are not the sweet spot for Snowflake. Another good example would be if you're doing a large audit and you need to analyze a huge, huge dataset. Snowflake's probably not the best solution. Complex joins, you hear this a lot. The working sets of complex joins, by definition, are larger. So, see my previous explanation. Read only. Snowflake is pretty much optimized for read-only data. Maybe stateless data is a better way of thinking about this. Heavily write-intensive workloads are not the wheelhouse of Snowflake. So where this is maybe an issue is real-time decision-making and AI inferencing. I've talked about this a number of times; Snowflake might be able to develop products or acquire technology to address this opportunity. Now, I want to explain. These issues would be problematic if Snowflake were just a data warehouse vendor. If that were the case, this company, in my opinion, would hit a wall, just like the MPP vendors that preceded them, who built a better mousetrap for certain use cases, hit a wall. Rather, my premise in this episode is that the future of data architectures will be to move away from large centralized warehouse or data lake models to a highly distributed data sharing system that puts power in the hands of domain experts at the line of business. Snowflake is less computationally efficient and less optimized for classic data warehouse work. But it's designed to serve the domain user much more effectively, in our view. We believe that Snowflake is optimizing for business effectiveness, essentially. And as I said before, the company can probably do a better job at keeping passionate end users from breaking the bank. But as long as these end users are making money for their companies, I don't think this is going to be a problem. Let's look at the attributes of what we're proposing around this new architecture. 
We believe we'll see the emergence of a total flip of the centralized and monolithic big data systems that we've known for decades. In this architecture, data is owned by domain-specific business leaders, not technologists. Today, it's not much different in most organizations than it was 20 years ago. If I want to create something of value that requires data, I need to cajole, beg, or bribe the technology and data teams to accommodate. The data consumers are subservient to the data pipeline. Whereas in the future, we see the pipeline as a second-class citizen, where the domain expert is elevated. In other words, getting the technology and the components of the pipeline to be more efficient is not the key outcome. Rather, the time it takes to envision, create, and monetize a data service is the primary measure. The data teams are cross-functional and live inside the domain, versus today's structure where the data team is largely disconnected from the domain consumer. Data in this model, as I said, is not the exhaust coming out of an operational system or an external source that is treated as generic and stuffed into a big data platform. Rather, it's a key ingredient of a service that is domain-driven and monetizable. And the target system is not a warehouse or a lake. It's a collection of connected, domain-specific datasets that live in a global mesh. What is a distributed global data mesh? A data mesh is a decentralized architecture that is domain aware. The datasets in the system are purposely designed to support a data service, or data product, if you prefer. The ownership of the data resides with the domain experts because they have the most detailed knowledge of the data requirements and its end use. Data in this global mesh is governed and secured, and every user in the mesh can have access to any dataset as long as it's governed according to the edicts of the organization. 
Now, in this model, the domain expert has access to a self-service and abstracted infrastructure layer that is supported by a cross-functional technology team. Again, the primary measure of success is the time it takes to conceive and deliver a data service that can be monetized. Now, by monetize, we mean a data product or data service that either cuts cost, drives revenue, or saves lives, whatever the mission of the organization is. The power of this model is it accelerates the creation of value by putting authority in the hands of those individuals who are closest to the customer and have the most intimate knowledge of how to monetize data. It reduces the diseconomies of scale of having a centralized or monolithic data architecture. And it scales much better than legacy approaches because the atomic unit is a data domain, not a monolithic warehouse or a lake. Zhamak Dehghani is a software engineer who is attempting to popularize the concept of a global mesh. Her work is outstanding, and it has strengthened our belief that practitioners see this the same way that we do. To paraphrase her view, a domain-centric system must be secure and governed with standard policies across domains. It has to be trusted. As I said, nobody's going to use data they don't trust. It's got to be discoverable via a data catalog with rich metadata. The datasets have to be self-describing and designed for self-service. Accessibility for all users is crucial, as is interoperability, without which distributed systems, as we know, fail. So what does this all have to do with Snowflake? As I said, Snowflake is not just a data warehouse. In our view, it's always had the potential to be more. Our assessment is that attacking the data warehouse use cases gave Snowflake a straightforward, easy-to-understand narrative that allowed it to get a foothold in the market. 
Data warehouses are notoriously expensive, cumbersome, and resource intensive, but they're a critical aspect of reporting and analytics. So it was logical for Snowflake to target on-premises legacy data warehouses and their smaller cousins, the data lakes, as early use cases. By putting forth and demonstrating a simple data warehouse alternative that can be spun up quickly, Snowflake was able to gain traction, demonstrate repeatability, and attract the capital necessary to scale to its vision. This chart shows the three layers of Snowflake's architecture that have been well documented: the separation of compute and storage, and the outer layer of cloud services. But I want to call your attention to the bottom part of the chart, the so-called Cloud Agnostic Layer that Snowflake introduced in 2018. This layer is somewhat misunderstood. Not only did Snowflake make its cloud-native database compatible to run on AWS, then Azure, and in 2020 GCP; what Snowflake has done is abstract cloud infrastructure complexity and create what it calls the data cloud. What's the data cloud? We don't believe the data cloud is just a marketing term that doesn't have any substance. Just as SaaS simplified application software and IaaS made it possible to eliminate the value drain associated with provisioning infrastructure, a data cloud, in concept, can simplify data access, break down fragmentation, and enable shared data across the globe. Snowflake has a first-mover advantage in this space, and we see a number of fundamental aspects that comprise a data cloud. First, massive scale with virtually unlimited compute and storage resources that are enabled by the public cloud. We talk about this a lot. Second is a data, or database, architecture that's built to take advantage of native public cloud services. This is why Frank Slootman says, "We've burned the boats. We're not ever doing on-prem. We're all in on cloud and cloud native." 
Third is an abstraction layer that hides the complexity of infrastructure, and fourth is a governed and secured shared-access system where any user in the system, if allowed, can get access to any data in the cloud. So a key enabler of the data cloud is this thing called the global data mesh. Now, earlier this year, Snowflake introduced its global data mesh. Over the course of its recent history, Snowflake has been building out its data cloud by creating data regions, strategically tapping key locations of AWS regions and then adding Azure and GCP. The complexity of the underlying cloud infrastructure has been stripped away to enable self-service, and any Snowflake user becomes part of this global mesh, independent of the cloud that they're on. Okay. So now, let's go back to what we were talking about earlier. Users in this mesh will be our domain owners. They're building monetizable services and products around data. They're most likely dealing with relatively small, read-only datasets. They can ingest data from any source very easily and quickly set up security and governance to enable data sharing across different parts of an organization, or, very importantly, an ecosystem. Access control and governance is automated. The datasets are addressable. The data owners have clearly defined missions and they own the data through the life cycle. Data that is specific and purposely shaped for their missions. Now, you're probably asking, "What happens to the technical team and the underlying infrastructure and the clusters? How do I get the compute close to the data? And what about data sovereignty and the physical storage layer, and the costs?" All these are good questions, and I'm not saying these are trivial. But the answer is these are implementation details that are pushed to a self-service layer managed by a group of engineers that serves the data owners. 
And as long as the domain expert/data owner is driving monetization, this piece of the puzzle becomes self-funding. As I said before, Snowflake has to help these users optimize their spend with predictive tooling that aligns spend with value and shows ROI. While there may not be a strong motivation for Snowflake to do this, my belief is that they'd better get good at it, or someone else will do it for them and steal their ideas. All right. Let me end with some ETR data to show you just how Snowflake is getting a foothold in the market. Followers of this program know that ETR uses a consistent methodology to go to its practitioner base, its buyer base, each quarter and ask them a series of questions. They focus on the areas that the technology buyer is most familiar with, and they ask a series of questions to determine the spending momentum around a company within a specific domain. This chart shows one of my favorite examples. It shows data from the October ETR survey of 1,438 respondents, and it isolates on the data warehouse and database sector. I know I just got through telling you that the world is going to change and Snowflake's not a data warehouse vendor, but there's no construct today in the ETR dataset to cut on a data cloud or globally distributed data mesh. So you're going to have to deal with this. What this chart shows is net score on the y-axis. That's a measure of spending velocity, and it's calculated by asking customers, "Are you spending more or less on a particular platform?" and then subtracting the lesses from the mores. It's more granular than that, but that's the basic concept. Now, on the x-axis is market share, which is ETR's measure of pervasiveness in the survey. You can see superimposed in the upper right-hand corner a table that shows the net score and the shared N for each company. Now, shared N is the number of mentions in the dataset within, in this case, the data warehousing sector. 
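As a rough sketch, the "subtract the lesses from the mores" arithmetic behind a net-score-style metric can be expressed in a few lines of Python. This is an illustration only; ETR's actual methodology is more granular, and the category labels and sample mix below are assumptions, not ETR's real data:

```python
from collections import Counter

def net_score(responses):
    """Percentage of positive responses minus percentage of negative ones.

    Each response is one of these hypothetical category labels:
    'adopting', 'increasing', 'flat', 'decreasing', 'replacing'.
    """
    counts = Counter(responses)
    positive = counts['adopting'] + counts['increasing']
    negative = counts['decreasing'] + counts['replacing']
    return 100 * (positive - negative) / len(responses)

# A made-up sample of 100 survey responses
sample = (['adopting'] * 30 + ['increasing'] * 50 + ['flat'] * 10
          + ['decreasing'] * 7 + ['replacing'] * 3)
print(net_score(sample))  # 70.0
```

With 80% positive and 10% negative responses, the net score comes out at 70, an elevated figure in the same neighborhood as the Snowflake number discussed below.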
Snowflake, once again, leads all players with a 75% net score. This is a very elevated number and is higher than that of all other players, including the big cloud companies. Now, we've been tracking this for a while, and Snowflake is holding firm on both dimensions. When Snowflake first hit the dataset, it was in the single digits along the horizontal axis, and it continues to creep to the right as it adds more customers. Now, here's another chart. I call it the wheel chart. It breaks down the components of Snowflake's net score, or spending momentum. The lime green is new adoption, the forest green is customers spending more than 5%, the gray is flat spend, the pink is declining by more than 5%, and the bright red is retiring the platform. So you can see the trend. It's all momentum for this company. Now, what Snowflake has done is grab hold of the market by simplifying the data warehouse. But the strategic aspect of that is that it enables the data cloud, leveraging the global mesh concept. And the company has introduced a data marketplace to facilitate data sharing across ecosystems. This is all about network effects. In the mid-to-late 1990s, as the internet was being built out, I worked at IDG with Bob Metcalfe, who was the publisher of InfoWorld. During that time, we'd go on speaking tours all over the world, and I would listen very carefully as he applied Metcalfe's law to the internet. Metcalfe's law states that the value of a network is proportional to the square of the number of connected nodes, or users, on that system. Said another way, while the cost of adding new nodes to a network scales linearly, the consequent value scales exponentially. Now, apply that to the data cloud. The marginal cost of adding a user is negligible, practically zero, but the value of being able to access any dataset in the cloud... Well, let me just say this. There's no limitation to the magnitude of the market. 
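The linear-cost-versus-quadratic-value dynamic of Metcalfe's law can be sketched numerically. The proportionality constants here are arbitrary placeholders, just to show the shape of the curves:

```python
def network_cost(n, cost_per_node=1.0):
    """Cost of building out a network scales linearly with the number of nodes."""
    return cost_per_node * n

def network_value(n, k=1.0):
    """Metcalfe's law: value is proportional to the square of connected nodes."""
    return k * n ** 2

# Each doubling of the nodes doubles the cost but quadruples the value.
for n in (100, 200, 400):
    print(n, network_cost(n), network_value(n))
```

The ratio of value to cost grows with every node added, which is the essence of the network-effects argument being made for the data cloud.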
My prediction is that this idea of a global mesh will completely change the way leading companies structure their businesses and, particularly, their data architectures. It will be the technologists that serve domain specialists, as it should be. Okay. Well, what do you think? DM me @dvellante or email me at david.vellante@siliconangle.com, or comment on my LinkedIn. Remember, these episodes are all available as podcasts, so please subscribe wherever you listen. I publish weekly on wikibon.com and siliconangle.com, and don't forget to check out etr.plus for all the survey analysis. This is Dave Vellante for theCUBE Insights powered by ETR. Thanks for watching. Be well, and we'll see you next time. (upbeat music)
Breaking Analysis: Dell Technologies Financial Meeting Takeaways
>> From the SiliconANGLE Media Office in Boston, Massachusetts, it's theCUBE! Now here's your host, Dave Vellante. >> Hi, everybody, welcome to this Cube Insights, powered by ETR. In this breaking analysis I want to talk to you about what I learned this week at Dell Technologies' financial analyst meeting in New York. They gathered all the financial analysts; Rob Williams hosted it, he's the head of IR; Michael Dell of course was there. They had Dennis Hoffman, who is the head of strategic planning, Jeff Clarke, who basically runs the business, and Tom Sweet, of course, who was the star of the show, the CFO; all the analysts want to see him. Dell laid out its long-term goals and provided a much clearer understanding of its strategic direction, basically focused on three areas. Dell believes that IT is getting more complex, we know that, and it wants to capitalize on that by simplifying IT. We'll talk about that. And then it wants to position for the wave of digital transformations that are coming. Dell also believes that it can capitalize on the vendor consolidation trend, so I'll talk about each of those. And so let me bring up the first slide, Alex, if you would. The takeaways from the Dell financial analyst meeting. Let me share with you the overall framework that Tom Sweet laid out. And I have to say, the messaging was very consistent; these guys were very well-prepared. I think Dell is, from a management perspective, a very well-run company. They're targeting 3 to 5% growth on what they're saying is a 4% forecast. Or sorry, I have GDP on the slide here, but it's really 4% industry growth. GDP's a little lower than that, obviously. So this is IDC data, Gartner data, 4% industry growth. So that's an error on my part, I apologize. The strategy is to grow relative to the competition. So grow share on a relative basis. So whatever the market does, again, not GDP, but whatever the market does, Dell wants to grow faster than the market. 
So it wants to gain share; that's its primary metric. From there they want to grow operating income, and they want to grow that faster than revenue; that's going to throw off cash. And then they're going to also continue to delever the balance sheet. I think they've paid down 17 billion in debt since the EMC acquisition. They want to get to a two-X debt-to-EBITDA ratio within 18 months. And what they're saying is, you know, Tom Sweet talked about this consistent march toward an investment-grade rating. They've been talking about that for a while. He made the comment, we don't need to have a triple-A rating, but we want to get to the point where we can reduce our interest expense, because that'll drop right to the bottom line. So they talked about these various levers that they can turn: some are in the P and L, gaining share, some are their operating structure and their organizational structure, and one big one is obviously their debt structure. The other key issue here is, will this cut the liquidity discount that Dell faces? What do I mean by that? Well, VMware has about a $60 billion valuation. Dell owns about 80% of VMware, which would equate to 48 billion. But if you look at Dell's market cap, it's only 37 billion. So it essentially says that Dell's core business is worth minus 11 billion. We used to talk about this when EMC owned VMware. Its core business comprised only about 40% of the overall value of the company; in this case, because of the high debt, Dell has a negative value. And it's not just the high debt. Michael Dell has control over the voting shares, it's essentially a conglomerate structure, there's very high debt, and it's a relatively low-margin business, notwithstanding VMware. And so as a result, Dell trades at a discount relative to what you would think it should trade at, given its prominence in the market, a $92 billion company, the leader in every category under the sun. 
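That implied-value arithmetic is easy to verify. The figures below are the approximate round numbers cited in the discussion, not precise market data:

```python
vmware_market_cap = 60e9   # VMware valuation, roughly $60 billion
dell_ownership = 0.80      # Dell owns about 80% of VMware
dell_market_cap = 37e9     # Dell's market cap, roughly $37 billion

# Value of Dell's VMware stake
vmware_stake = dell_ownership * vmware_market_cap  # roughly $48 billion

# Implied value of Dell's core business: market cap minus the stake
implied_core = dell_market_cap - vmware_stake      # roughly -$11 billion

print(f"Implied core business value: {implied_core / 1e9:.1f}B")
```

A negative implied core value is the "liquidity discount" in a nutshell: the market is pricing Dell's whole business at less than its VMware stake alone.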
So that's the big question: can Dell turn these levers, drop EBITDA or cash to the bottom line, affect operating income, and then ultimately pay down its debt and affect that discount it trades at? Okay, bring up, if you would, Alex, the next slide. Now I want to share with you the takeaways from the Dell line of business focus. This really draws from Jeff Clarke's presentations. Servers, we know, are seeing softer demand, but the key there is they're really facing tough compares. Last year, Dell's server business grew like crazy, so this year the comparisons are tough. And there's less spending on servers; I'll share with you some of the ETR data. Storage, they call it holding serve. You saw last quarter I did an analysis where I took the ETR data and the income statement; it showed Pure was gaining share at like 22% growth from the income statement standpoint. Dell was at 0% growth but is actually growing faster than its competitors, with the exception of Pure. It's growing faster than the market. So Dell actually gained share with 0% growth. Dell's really focused on consolidating the portfolio. They've cut the portfolio down from 80, I think actually the right number is 88 products, down to 20 by May of 2020. They've got some new mid-range coming, they've just refreshed their data protection portfolio, so again, by May of next year, by Dell Technologies World, they'll have a much, much more simplified portfolio. And they're gaining back share. They've refocused on the storage business. You might recall, after the acquisition, EMC was kind of a mess. It was losing share before the acquisition; it was so distracted with all the Elliott Management stuff goin' on that it kind of took its eye off the ball, and then after the acquisition it took a while for them to get their act together. They've gained back about 375 basis points in the last 18 months. Remember, a basis point is 1/100th of 1%. So they're gaining share, with a consistent focus on continuing to do that.
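As a quick sanity check on that basis-point arithmetic, here is the definition above applied to the 375-point figure:

```python
# A basis point is 1/100th of 1%, i.e. 0.0001 as a fraction.
BASIS_POINT = 0.01 / 100

share_regained = 375 * BASIS_POINT   # the ~375 bps of storage share quoted above
print(f"{share_regained:.2%}")       # 3.75%
```

So 375 basis points works out to 3.75 percentage points of share regained over those 18 months.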
Their PC business, which is actually doin' quite well, is focused on the commercial segment and on higher margins. They made the statement that PCs are in kind of an undersupply right now, so it's helping margins. There's a big focus in Jeff Clarke's organization on VMware integration. To me this makes a lot of sense. To the extent that you can take the VMware platform and make Dell hardware run VMware better, that's something that is an advantage for Dell, obviously. And at the same time, VMware has to walk the fine line with the ecosystem. But certainly it's earned the presence in the market now where it can basically do what I just said, tightly integrate with Dell and at the same time serve the ecosystem, 'cause frankly, the ecosystem has no choice. It must serve VMware customers. The strategy, essentially, is to, as I say, capitalize on vendor consolidation, leverage value across the portfolio, so whether it's Pivotal, VMware integration, the security portfolio, try to leverage that, and then differentiate with scale. And Dell really has the number one supply chain in the tech business. Something that Dave Donatelli, when he was at HP, used to talk about. HPE doesn't really talk about that supply chain advantage anymore 'cause essentially it doesn't have it. Dell does. So Jeff Clarke's reorganization: he came in, he streamlined the organization, really from the focus on R and D, to product, to collaboration across the organization, and the VMware integration. I actually was quite impressed when I first met Jeff Clarke, I guess two years ago now, and with what he and the organization have accomplished since then. No BS kind of person. And you can see it's starting to take effect. So we'll keep an eye on that. The next slide I want to show you, I want to bring in the ETR data. We've been sharing with you the ETR spending intention surveys for the last couple of weeks and months.
ETR, Enterprise Technology Research, has a data platform that comprises 4,500 practitioners that share spending data with them. CIOs, IT managers, et cetera. What I'm showing here is a cut of the server sector. So I'm going to drill down into server and storage. These are spending intentions from the July survey, asking about the second half of 2019 relative to the first half of 2019. And this is a drill-down into the giant public and private firms. Why do I do that? Because, in meeting with ETR, I've learned this is the best indicator. So it's big, big public companies and big private companies. Think Uber. Private companies that spend a ton of dough on IT. UPS before it went public, for example. So those companies are in here. And they're, according to ETR, the best indicators. What this chart shows, so the bars show, and I've shared this with you a number of times: the lime green is we're adding, we're new to this platform, new adoption. The forest green is we're spending more, the gray is we're spending the same, the light red or pink is we're spending less, and the dark red is we're leaving the platform. So if you subtract the red from the green you get what's called a net score, and that's that blue line. And this is the overall server spending intentions from that July survey. The N is about 525 respondents out of the 4,500. And this is, again, those that just answered the question on server. So you can see the net score on server spend is dropping. And you can see the market share on server is dropping. The takeaway here is that servers, as a percentage of overall IT spend, are on a downward slope, and have been for quite some time, back to the January '16 survey. Okay, so that's servers. Let's take a look at the same data for storage. So Alex, if you bring up the storage sector slide, you can see kind of a similar trend. And I would argue what's happening here is a couple of things.
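That net score arithmetic, subtracting the red categories from the green ones, can be sketched in a few lines. The percentages below are hypothetical illustrations, not actual ETR survey results:

```python
# A minimal sketch of the ETR net score calculation described above.
# Category percentages are hypothetical, not actual ETR survey results.

def net_score(adopting, increasing, flat, decreasing, replacing):
    """Net score = (% adopting + % spending more) - (% spending less + % leaving)."""
    total = adopting + increasing + flat + decreasing + replacing
    assert abs(total - 100) < 1e-9, "categories should sum to 100%"
    return (adopting + increasing) - (decreasing + replacing)

# Example: 5% new adoptions, 30% spending more, 45% flat,
# 15% spending less, 5% replacing the platform.
print(net_score(5, 30, 45, 15, 5))  # 15
```

The "flat" gray category drops out of the score entirely, which is why net score moves on the margins: a platform can hold a big installed base and still show a falling net score, exactly the server pattern described above.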
You've got the cloud effect, I'll talk about that some more, and you've also got, in this case, the flash, all-flash array effect. What happened was you had all-flash arrays and flash come into the data center, and that gave performance a huge headroom. Remember, spinning disk was the last bastion of mechanical movement and it was the main bottleneck in terms of overall application performance. IO was the problem. Well, you put a bunch of flash into the system and it gives a lot of headroom. People used to over-provision capacity just for performance reasons. So flash has had the effect of customers saying, hey, my performance is good, I don't need to over-provision anymore, I don't need to buy so much. So that, combined with cloud, I think, has put downward pressure on the storage business as well. Now the next slide, Alex, that I want you to bring up is the vendor net scores, the server spending intentions. And what I've done is I've highlighted Dell EMC. Now what's happening here in the slide, and I realize it's an eye chart, but basically where you want to be in this chart is on the left-hand side. What it shows is the spending intentions and the momentum from the October '18 survey, which is the gray, the April '19, which is the blue, and then the July '19, which is the most recent one. Again, the N is 525 in servers for the July '19 survey. And you can see Dell's kind of in the middle of the pack. You'd love to be on the left-hand side, you know, Docker, Microsoft, VMware, Intel, Ubuntu. And you don't want to be on the right-hand side; you know, Fujitsu and IBM are sort of below the line. Dell's kind of in the middle there, Dell EMC. The next slide I want to show you is that same slide for storage. And again, you can see here, this is vendor net scores, the storage spending intentions. On the left-hand side it's all the high growth companies: Rubrik, Cohesity, Nutanix, Pure, VMware with vSAN, Veeam. You see Dell EMC's VxRail.
On the right-hand side, you see the guys that are losing momentum: Veritas, Iron Mountain, Barracuda, Hitachi HDS. Fusion-io still comes up in the survey after the acquisition by Western Digital. Again, you see Dell EMC kind of holding serve in the middle there. Not great, not bad. Okay, so that's kind of just some other ETR data that I wanted to share. All right, the next thing we're going to talk about is the macro market summary. And Alex, I've got some bullet points on this, so if you bring up that slide, let me talk about that a little bit. So, five points here. First, cloud continues to eat away at on-prem, despite all this talk about repatriation, which I know does happen. People try to throw everything to the cloud and they go, whoa! Look at my Amazon bill. Yeah, I get that. But that's at the margin. The main trend is that cloud continues to grow. That whole repatriation thing is not moving the on-prem market. On-prem is kind of steady eddy. Storage is still working through that AFA injection. There's a lot of headroom from a performance standpoint, so people don't need to buy as much as they used to, because you had that step function in performance. Now eventually the market will catch up; all this digital transformation is happening, all this data is flowing through the system, and it will catch up, and the storage market is elastic. As NAND prices fall, people will, I predict, buy more storage. But there's been somewhat of a lull in the overall storage market. It's not a great market right now, frankly, at the macro level. Now ETR does these surveys on a quarterly basis. They're just about to release the October survey, and they put out a little glimpse on Friday about this survey. And I'll share some bullet points there. Overall IT spending clearly is softening. We kind of know that, everybody kind of realizes that. Here's the nuance. New adoptions are reverting to pre-2018 levels, and the replacements are rising. What does this mean?
So the number of respondents that said, oh yes, we're adopting this platform for the first time is declining, and the replacements are actually accelerating. Why is that? Well, I was at ETR last week and we were talking about this, and one of the theories, and I think it's a good one, is that 2016, 2017 was kind of experimentation around digital transformation. In 2018, people started to put things into production or closer to production, they were running systems in parallel, and now they're making their bets. They're saying, hey, this test worked, let's put this heavily into production in 2019, and now we're going to start replacing. So we're not going to adopt as much stuff 'cause we're not doing as much experimentation. We're going to now focus and narrow in on those things that are going to drive our business, and we're going to replace those things that aren't going to drive our business. We're going to start unplugging them. So that's some of what's happening. Another big trend is Microsoft. Microsoft is extending its presence throughout. They're goin' after collaboration; you saw the impact that they had on Slack and Slack stock recently. So Slack, Box, Dropbox are kind of exposed there. They're goin' after security; they've just announced a SIEM product, so Splunk and IBM, they're kind of goin' after that base. The application performance management vendors, for instance New Relic, Microsoft's goin' after them. Obviously they've got a huge presence in cloud. Their Windows 10 cycle is a little slower this time around, but they've got other businesses that are really starting to click. So Microsoft is one of the few vendors that really is showing accelerated spending momentum in the ETR data. Financial services and telcos, which are always leading spending indicators, are actually very weak right now. That's having a spillover effect into Europe, which is over-banked, if I can use that term. Banking heavy, if you will.
So right now it's not a pretty picture, but it's not a disaster. I don't want to suggest this is like going back to 2007, 2008; it's not. It's really just a matter of things softening and, you know, maybe taking a little breath. Okay, so let me summarize the meeting overall. Again, it was a very well-run meeting. Started at 9:00, ended at 12:00, bagged lunch, go home. Nice and crisp. These guys were very well-prepared. I think, again, Dell is an extremely well-managed company. They laid out for Wall Street a much clearer vision of their strategy and where they're headed. As they say, they're going after IT complexity. I want to make a comment on this. Think about Legacy EMC. Legacy EMC was not the company that you would expect to deal with complexity. In fact, they were the culprit of complexity. One of the things that Jeff Clarke did when he came in, he said, this portfolio's too complex, it needs to be simplified. Joe Tucci used to say overlap is better than gaps. Jeff Clarke said we've got too much overlap and we don't have a lot of gaps, so let's streamline that portfolio. Taking advantage of vendor consolidation, this is an interesting one. Ever since I've been in this business, which has been quite a long time now, I've been hearing that buyers want to consolidate the number of vendors that they have. They've really not succeeded in doing that. Now can they do it, 'cause there are fewer vendors? Well, in a sense, yes, there are fewer big on-prem vendors. EMC's no longer in the market, you don't have companies like Sun and Digital anymore, Compaq is gone. HP split in two, but still. You're not seeing a huge number of new vendors, at scale, come into the market. Except you've got AWS and Google as new players there.
So I think that injects sort of a new dynamic. A lot of people like to put cloud aside and kind of ignore it and talk about the old on-prem business, but I think you're going to see a lot of experimentation and workload ins and outs, particularly with AWS and Google and of course Azure, which is in itself almost a separate force. So we'll see how that shakes out. As I say, servers right now, Dell's got a very tough compare. I think Dell will be fine in the server space. Storage, it's all about simplifying the portfolio; they've got a refreshed portfolio focused on regaining share. They've rebranded everything Power, so their whole line is going to be Power-branded, if it's not already, by May of next year, by Dell Technologies World. It's a much more scalable portfolio. And I think Dell's got a lot of valuation levers. They're a $92 billion company; they've got their current operations, their current P and L, their share gains, their cross-company synergies, particularly with VMware. They can expand their TAM into cloud with partnerships like they're doing with AWS and others, Google, Microsoft. The Edge is a TAM expansion opportunity for them. And also corporate structure. You've seen them: VMware acquired Pivotal, they're cleaning that up. I'm sure they could potentially make some other moves. Secureworks is out there, for example. Maybe they'll do some things with RSA. So they've got that knob to turn, and they can delever. Paying down the debt to the extent that they can get back to investment grade will lower their interest rates; that'll drop right to the bottom line, and they'll be able to reinvest that. And Tom Sweet said, within 18 months, we'll be able to get there with that two X ratio relative to EBITDA, and that's when they're going to start having conversations with the rating agencies to talk about, you know, hey, maybe we can get a better rating and lower our interest expense. Bottom line, did Wall Street buy the story? Yes.
But I don't think it's going to necessarily change anything in the near term. This is a show-me-from-Missouri situation: prove it, execute, and then I think Dell will get rewarded. Okay, so this is Dave Vellante, thanks for watching this Cube Insights powered by ETR. We'll see ya next time. (electronic music)
Jeff Boudreau, Dell EMC | Dell EMC World 2017
>> Announcer: Live from Las Vegas, it's theCUBE, covering Dell EMC World 2017, brought to you by Dell EMC. >> Hey, welcome back everyone. We're here live in Las Vegas for Dell EMC World 2017. This is theCUBE's eighth year of covering EMC World, since the inception of theCUBE. It's now called Dell EMC World, with the first year of the combination coming together. Exciting, a lot of storylines here. I'm John Furrier with SiliconANGLE, and my co-host with SiliconANGLE, Paul Gillin. Our next guest is Jeff Boudreau, president of the Dell EMC Storage Division. It sounds weird to say that because EMC used to be a storage company. Now they're a part of Dell Technologies with a zillion brands. Jeff, welcome to theCUBE. >> Thank you for having me. >> So first, explain quickly what the storage division is, because Chad Sakac does converged infrastructure, but you own all the storage. >> Jeff: Correct. >> So just quickly, just. >> Yeah, real quick, keeping it simple: if you think about the components and the building blocks, you have Ashley running compute, you have Tom Burns running network, you have myself running storage, and you have Beth Phalen running data protection, and then Chad's got the converged platform, so when we integrate the solution, Chad has that piece. So yes, all storage, and that would be Legacy or Heritage Dell storage, but also Heritage and Legacy EMC storage. All that came together, so just massive revenue, massive amounts of customers, and then obviously tons of engineers around the world. >> You got a lot of work to do. Obviously storage is not going away. It's changing radically and in different ways. Certainly cloud is accelerating it. The data center world is changing. They call it the data center, not the server center, 'cause there's more data coming. So I got to ask you, one of my favorite quotes in your keynote this morning was, "We have self-driving cars.
"Why don't we have self-driving storage or autonomous storage?" Which is provocative, but also very relevant. Can you explain what you meant by that, and let's dig deeper into that. >> Yeah. I mean, it's actually one of my favorite topics. So I have these notions of the pillars of innovation, right? And I want to start looking at, to your point, things are changing in the market and in the way customers are using our products, and I want to embrace that change and innovate around it. There's the notion of the day in the life of the storage admin, or the day in the life of the data center admin, and the day in the life of just about anybody using the product. We've got to make it simpler, right? It goes back to consumer simplicity, a lot of this stuff. What we're trying to do there is actually make the storage smart enough to just take care of itself. It's kind of the set-it-and-forget-it notion. So, as part of autonomous storage, we look at four attributes, simplistically, in regards to how you would have a self-driving system. The first one is about being application-centric, 'cause at the end of the day, it's all about the app and the workload. We all know that, right, and that's what the users care about. So the way I kind of looked at that this morning is, that's like telling your car where you want to go, versus giving it turn-by-turn directions, right? The second thing is about being policy driven. I'm not going to say, hey listen, do I want to take the fastest route or do I want to take the scenic route? >> John: Yep. >> Right, simplistically. And then, I'll be honest, that stuff's relatively easy and we do a bunch of it today. Really, where it gets to the next level is when this stuff becomes self-aware, right? Understanding its resources, and then understanding if you're in or out of those boundaries. So, am I, you know, swerving out of my lane? Do I have enough gas?
Dot, dot, dot, right, as an example. And then where it gets really powerful, the fourth component for me, is self-optimization. That's being able to understand what you're aware of and then drive a better outcome for your customer. At the end of the day, that could be, hey, there aren't enough charging stations to get to your destination, or you don't have enough charge, or better yet, the storage or the system would actually tell you, hey, take this path, there are enough charging stations, I'll get you there on time and safely, right? >> So you'd be very happy if you had the brand bumper sticker of the Tesla of storage. >> Absolutely. >> To your point about the stations, you know, people love Teslas. It's very sexy and it's relevant. I got to ask you about machine learning. Obviously AI is hyped up. We call it not artificial intelligence but augmented intelligence. That's a better definition, because artificial intelligence is this weak, weak word. (laughing) It doesn't really exist, it's kind of out there. But augmentation of value is about machine learning and deep learning. These are the learning systems you mentioned. Self-optimization, that's basically learning machines. >> That's right. >> What are you guys doing around big data, I mean big data machine learning and some of these things? >> So we're doing a handful of things. In the keynote, I mentioned built-in analytics. That's two things, you know. Dell EMC, we store, protect and secure more data than anybody else in the world, bar none. So, you know, I used to joke, EMC used to be known for where information lives. Dell Technologies and Dell EMC have to be known for where information comes alive, right? Actually providing value, generating value, for our customers.
So what we're going to do is have built-in capabilities in the array, but also plug into the broader ecosystem, you know, with analytics and service providers, to really help drive that value creation. So you'll see a lot more around the storage itself, around that self-learning and understanding of the core components of an array; knowing what makes it run healthy or unhealthy enables the customer to better utilize it and add more value to their stack. Then also, going into that broader ecosystem, making sure that they can really deliver that value to their customers and to their business. >> What pieces will you be delivering first? >> On the storage side specifically, we already have a lot of products that have probably two or three of the capabilities. So between app-centric, policy driven, and self-awareness, those are the ones we're on right now, big time. And I have a team focused on the machine learning, so we can really start self-optimizing. We actually just, we mentioned, I don't know if the folks we've had on the show talked about a thing called CloudIQ? >> Paul: Yes, they did. >> That was something where we built a SaaS model in the cloud, which is all about machine learning. >> So that's part of this whole rollout, is moving to the cloud eventually. >> Oh, absolutely. It's critical. >> So another quote that I have from your keynote, because it had some great soundbites, Isilon, one of the products in your portfolio: "Isilon is known to be the gold standard for storage in genome sequencing." Obviously, with the massive amounts of compute available, cloud computing has now made it possible to do amazing things with the data, and make that literally come alive, and hopefully people could live longer. But genome sequencing is actually doable, and at price points that have come way down.
It's crazy; probably in the last two years alone, it's dramatically and drastically reduced. >> But why is Isilon the gold standard for sequencing? What specifically is it that's great about it? >> Well, basically, in regards to the data structure and how we can process that big data, and make sense of it quickly as we analyze it, there's nothing faster in the market. And now that we've brought out the Isilon all-flash array with the new Infinity platform, we've taken something that two years ago took weeks, down to days, down to hours, and with the all-flash array we've taken it down to minutes, 22 minutes to be exact. >> So I got to ask you, I'll let you think about the portfolio question, and I'm going to ask in about a minute how you're going to rectify all that, where the overlap is or isn't, so you can work on that. But the next question is, the industry, you're not seeing a lot of start-ups trying to do what Isilon did. You heard about Isilon guys leaving, starting companies, and everyone's kind of pivoting, but you are seeing startups in the white spaces, data protection, so the question is, how hard is it to do a startup right now and get venture funding? Because it seems scale is the issue, and it's hard to be a, quote, pure storage company, pun intended. Pure Storage was the last company to challenge you guys. (laughing) >> You know, I'd also throw NetApp in there as well, standing on their own, regardless of that. But let's be clear, this market is consolidating. That's good for us, no ifs, ands, or buts about it. So, with our size and scale, Dell and EMC and Dell Technologies together, our size, our scale, our buying power, unmatched by anybody. It's going to be hard, for, you know, there's a lot of companies. >> A hard nut to crack for a startup to come in; it could be the table stakes are too high.
So in regards to the innovation, the R&D spend, the value chain that we get, the efficiency, the buying power, Intel, I mean, think about all the stuff around the whole ecosystem; it's just really hard to do, so this market is consolidating. We're good with that. What I want to do is consolidate this market faster, and I want to drive more share of this market, and I think it's really tough. So you've seen small guys get bought recently. Some guys that aren't doing well. Some people, >> Acquihires. >> Maybe some of these orange people are really in the red, going back to your pun. So they're struggling, right, some of these guys are struggling, so I really think this is an opportunity for us as we force this market to consolidate. I think it's a real big thing. >> Alright, so the question on the consolidation, sorry Paul to interrupt, but okay. I get the consolidation. Now the growth, we're going to put the pedal to the metal. It's going to come from where? You got a mature market, you're consolidating. You lock that in, so that's big dollars by the way. It's not like small numbers, and the table stakes are high, so storage is a tough nut to crack. Then you got to have the growth strategy where there's a hockey stick opportunity. >> Yeah, so from a storage perspective, let's be clear, they're going to have the growth. It's going to come through things like HCI. It's going to come through things like software-defined, in regards to how we do that, right? Software-defined is obviously part of my portfolio as well, so it will be a journey in regards to where we grow. The traditional space is huge, let's be very clear. It's massive, right? We have to do that and do that extremely well, and protect that, and make sure our customers are taken care of, but as they go on their journey, if it's software-defined, or cloud, or what have you, we want to make sure we're relevant and can help with that.
So one will be taking share in that space, and there'll be opportunities as the consumption models, the technology, and the deployment models change; we'll be front and center in all of those. >> Speaking of software-defined, essentially that can deposition some of the storage elements below it. Cisco is wrestling with that right now, where they sort of held off for a couple of years on software-defined networking, and they're finally embracing it. What are you doing to balance the need for the elements of your portfolio to shine with the need to get customers to software definition? >> Absolutely. So right now, I mean, it's a journey. Let's be clear, right? I actually have a large customer that was on stage with me. I'll leave their name out; if anyone wants to watch who I talked to on day one, they'll know who I'm talking about. They're on a journey. Right now, today, they're probably one of the, I would say, leaders in software-defined storage, and driving software-defined storage is part of their IT transformation and their data centers. They're 20% software-defined, 80% still traditional physical infrastructure. They have a big stake in the ground: in three or four short years, by 2020, they're going to be 50/50. Right, and these are the guys that are leading and driving. So just to give folks a little bit of balance in regards to, yes, that's where the growth will come from, this is where the appeal is, but it will be a journey in how we get through that. But also for me, going back to scale and Dell Technologies, I have products like ScaleIO. I have virtual Isilon, I have virtual Unity. In regards to what we do with ECS, that's an object store, so file, block, or object, we have it. And then I have wonderful 14G servers from my friend Ashley in the compute team. They go well together. >> So, Pat Gelsinger's talking about a developer-ready infrastructure. You mentioned in your keynote, I thought it was clever, cloud-ready storage.
Talk about that dynamic, because I love this soundbite: "Storage has to be cloud ready, cloud connected, built out on and off prem, and live in a multi-cloud world." Don't comment on the multi-cloud. We think that's going to happen; it's not ready for primetime right now. I'm pretty certain it is happening. There are some latency issues on multi-cloud. >> Sure. >> We don't want to digress. But hybrid is definitely happening, and hybrid cloud is the gateway to multi-cloud. Dealing with legacy to cloud native, that gap, with hybrid. How do you look at that? 'Cause that's truly going to be a great opportunity for you, and being cloud storage ready, I'm sorry, cloud ready. >> Cloud ready. >> Cloud-ready storage, how do you make that happen? >> Yeah, so cloud connection is one of the big ones for me. So, the ability to connect to the cloud and allow people to seamlessly move data in and out of the cloud, whether it's traditional on-prem or off-prem. So we have great technologies for file. We have CTA, which is a cloud tiering appliance, all file based; it gets rave reviews from our customers, and we're able to help them not only on our products but actually on some competitor file products, to move data off to the cloud if need be, which has actually been a pretty big win for us. In addition to that, Isilon has a notion of CloudPools that people haven't seen, but again it's transparent, seamless data mobility from Isilon, from the core to the cloud, if you will. Then, lastly, we have CloudArray, which is a block device that allows us to move block data from a traditional asset into the cloud as well. So we have a handful of products or features that are natively in certain products today. We'll be evolving that over time. >> You got everything! >> Well, we'll be evolving that over time too, so we want to have more coherent, simple storage for our customers, right?
It doesn't matter what the data type is, we want to be able to present it to the cloud. >> But you, and by that I mean EMC, were late to the Flash market, but have caught up and are now the leaders in the Flash market. Phenomenal growth year over year. What did you do to pull that off? That's kind of counter-intuitive. >> Focus and energy, and a lot of great engineers. I'll be honest with you, alright. And then a great sales team behind it as well. So, we were late to the game. We made some decisions to lead with certain products and drive certain products, where, if you took a step back, I think we'd all agreed Flash was critical. Flash will be everywhere: compute, storage, network. And then you could debate the consumption model, whether it would be all-Flash systems versus hybrid systems, or what have you. At the end of the day, Flash was pretty critical, and I think we were all on the same page there. In regards to how you want to attack the problem, whether it's hybrid or all Flash, that's maybe where we got a little stuck in our own way. But then focus from all the teams, whether it was VMAX or Midrange or XtremIO, Isilon now. Teams have done an amazing job catching up, and then working with Billy and Marius and the go-to-market teams, they've been phenomenal. It's been a huge help. >> Well, that's a good point on my portfolio question: you guys really worked that problem hard. I think it was a two-year window; we saw all kinds of architectures. But that was good timing, because you were early on the architectural trend that was happening in a real sense, although late to the game. Things kind of played out and you shaped your portfolio up during that time. Kind of a forcing function. One, is that what happened, and two, what is the current view of the portfolio right now? Do you feel comfortable about the overlap, gaps, things that you think about?
>> Yeah, absolutely. So let me take the media one first and go back to Flash specifically. We learned a lot, and yes, that did help us shape our portfolio going forward and actually try to focus specific architectures on specific use cases to make our customers successful. What we also learned is we didn't ever want to be late again. That's why, with NVMe specifically, we actually were major contributors and a codeveloper on NVMe. Others can talk about NVMe in their marketing material; they weren't actually at the table codeveloping. >> You get the scar tissue, saying if we don't get out in front of this, we're going to... (laughing) >> That's right. So we've learned, and I don't want it to happen again. I want to be a learning organization. So we're going to be driving that, and we'll be a leader in NVMe, and as more media transitions happen beyond Flash, and trust me, a few years down the road there'll be more that we're already looking at, we'll continue to make sure that we're a leader in that space. Now, on the portfolio, I talk to customers all the time. They love the portfolio in the sense that they understand, they believe in the fact that one size does not fit all. They have said, though, "Hey, you've got a lot of products and you have to simplify." In full transparency, that's something we've been working through, and we'll continue to work through that. We will have overlap, and we want overlap, it's good. >> John: Better to have overlaps than gaps. >> And Joe Tucci was huge on that, and I couldn't support it more, but we want it to be planned overlap versus unplanned overlap, because we want to make our customers successful and be able to articulate clearly to our customers why they'd use a product and the value it's going to drive for their use case, for their application. Same thing for our sales guys, same thing for our service guys, same thing for our own engineers.
We want to keep people aligned and focused on what we're trying to do, so that we can provide a better outcome on the other side. >> When you're looking out at the future, do you see anything disruptive on the horizon? Is there anything that could change this industry fundamentally? >> Obviously, I always keep cloud in the back of my mind. It's still something, you know, depending on how people want to portray it or look at it. Some people call it a destination, some people call it a medium, what have you. I call it a virtual infrastructure. Depending on how you want to define it, there's different ways. >> A mainframe in the sky. >> You know, I say that one always keeps me up at night, depending on how things go. But there's a lot of cool things going on in storage, actually. First, let's be clear, storage in general has maybe lost some of the sex appeal, if you will, the attractiveness of it. I think that's starting to come back with some of the things like NVMe and cloud-ready and multi-dimensional scaling. There's a whole bunch of things going on there that are actually going to kind of drive that back. Now, in addition to that, we all know internet of things, machine learning, video, it's exploding. So let's be very clear, there'll be a lot... >> Storage is not going away, but machine learning certainly is going to be a nice jolt in the arm for optimization and automation. >> There's a huge opportunity there in regards to that. Cloud is one of the big ones, but I think there's a lot of things, I guess the headwinds turn to tailwinds, a lot of things in our favor to really kind of push us forward. >> Jeff Boudreau, thank you for coming on theCUBE. We really appreciate the insight, and the candid commentary and analysis into your business. Appreciate it. You've got the storage; it's not going away. It's called the data center and the cloud for a reason.
It's all about the data, and the value of the business will be data driven. This is theCUBE bringing data to you, live from Las Vegas at Dell EMC World 2017. I'm John Furrier with Paul Gillin. Back with more; stay with us after the short break. (lively music)