David Linthicum, Deloitte US | Supercloud22


 

(bright music) >> "Supermetafragilisticexpialadotious." What's in a name? In an homage to the inimitable Charles Fitzgerald, we've chosen this title for today's session because of all the buzz surrounding "supercloud," a term we introduced last year to signify a major architectural trend and shift occurring in the technology industry. Since that time, we've published numerous videos and articles on the topic, and on August 9th, kicked off "Supercloud22," an open industry event designed to advance the supercloud conversation, gathering input from more than 30 experienced technologists and business leaders in "The Cube" and the broader technology community. We're talking about individuals like Benoit Dageville, Kit Colbert, Ali Ghodsi, Mohit Aron, David McJannet, and dozens of other experts. And today, we're pleased to welcome David Linthicum, the Chief Strategy Officer of Cloud Services at Deloitte Consulting. David is a technology visionary, a technical CTO. He's an author and a frequently sought-after keynote speaker at high-profile conferences like "VMware Explore" next week. David Linthicum, welcome back to "The Cube." Good to see you again. >> Oh, it's great to be here. Thanks for the invitation. Thanks for having me. >> Yeah, you're very welcome. Okay, so this topic of supercloud, what you call metacloud, has created a lot of interest. VMware calls it cross-cloud services, Snowflake calls it their data cloud, there's a lot of different names, but recently, you published a piece in "InfoWorld" where you said the following: "I really don't care what we call it, and I really don't care if I put my own buzzword into the mix. However, this does not change the fact that metacloud is perhaps the most important architectural evolution occurring right now, and we need to get this right out of the gate. If we do that, who cares what it's named?" So very cool. And you also mentioned in a recent article that you don't like to put new terms out in the wild without defining them. So what is a metacloud, or what we call supercloud? What's your definition? >> Yeah, and again, I don't care what people call it. The reality is it's the ability to have a layer of cross-cloud services. It sits above existing public cloud providers. So the idea here is that instead of building different security systems, different governance systems, different operational systems in each specific cloud provider, using whatever native features they provide, we're trying to do that in a cross-cloud way. So in other words, we're pushing out data integration, security, all these other things that we have to take care of as part of deploying a particular cloud provider. And in a multicloud scenario, we're building those in and between the clouds. And so we've been tracking this for about five years. We understood that multicloud is not necessarily about the particular public cloud providers, it's about things that you build in and between the clouds. >> Got it, okay. So I want to come back to that, to the definition, but I want to tie us to the so-called multicloud. You guys did a survey recently. We've said that multicloud was mostly a symptom of multi-vendor, shadow cloud, M&A, and only recently has become a strategic imperative. Now, Deloitte published a survey recently entitled "Closing the Cloud Strategy, Technology, Innovation Gap," and I'd like to explore that a little bit. And so in that survey, you showed data. What I liked about it is you went beyond what we all know, right?
The old, "Our research shows that on average, "X number of clouds are used at an individual company." I mean, you had that too, but you really went deeper. You identified why companies are using multiple clouds, and you developed different categories of practitioners across 500 survey respondents. But the reasons were very clear for "why multicloud," as this becomes more strategic. Service choice scale, negotiating leverage, improved business resiliency, minimizing lock-in, interoperability of data, et cetera. So my question to you, David, is what's the problem supercloud or metacloud solves, and what's different from multicloud? >> That's a great question. The reality is that if we're... Well, supercloud or metacloud, whatever, is really something that exists above a multicloud, but I kind of view them as the same thing. It's an architectural pattern. We can name it anything. But the reality is that if we're moving to these multicloud environments, we're doing so to leverage best of breed things. In other words, best of breed technology to provide the innovators within the company to take the business to the next level, and we determine that in the survey. And so if we're looking at what a multicloud provides, it's the ability to provide different choices of different services or piece parts that allows us to build anything that we need to do. And so what we found in the survey and what we found in just practice in dealing with our clients is that ultimately, the value of cloud computing is going to be the innovation aspects. In other words, the ability to take the company to the next level from being more innovative and more disruptive in the marketplace that they're in. And the only way to do that, instead of basically leveraging the services of a particular walled garden of a single public cloud provider, is to cast a wider net and get out and leverage all kinds of services to make these happen. So if you think about that, that's basically how multicloud has evolved. In other words, it wasn't planned. They didn't say, "We're going to go do a multicloud." It was different developers and innovators in the company that went off and leveraged these cloud services, sometimes with the consent of IT leadership, sometimes not. And now we have these multitudes of different services that we're leveraging. And so many of these enterprises are going from 1000 to, say, 3000 services under management. That creates a complexity problem. We have a problem of heterogeneity, different platforms, different tools, different services, different AI technology, database technology, things like that. So the metacloud, or the supercloud, or whatever you want to call it, is the ability to deal with that complexity on the complexity's terms. And so instead of building all these various things that we have to do individually in each of the cloud providers, we're trying to do so within a cross-cloud service layer. We're trying to create this layer of technology, which removes us from dealing with the complexity of the underlying multicloud services and makes it manageable. Because right now, I think we're getting to a point of complexity we just can't operate it at the budgetary limits that we are right now. We can't keep the number of skills around, the number of operators around, to keep these things going. We're going to have to get creative in terms of how we manage these things, how we manage a multicloud. And that's where the supercloud, metacloud, whatever they want to call it, comes that. 
>> Yeah, and as John Furrier likes to say, in IT, we tend to solve complexity with more complexity, and that's not what we're talking about here. We're talking about simplifying, and you talked about the abstraction layer, and then, it sounds like, I'm inferring, there's more. There's value that's added on top of that. And then you also said the hyperscalers are in a walled garden. So I've been asked, why aren't the hyperscalers superclouds? And I've said, essentially, they want to put your data into their cloud and keep it there. Now, that doesn't mean they won't eventually get into that. We've seen examples a little bit, Outposts, Anthos, Azure Arc, but the hyperscalers really aren't building superclouds or metaclouds, at least today, are they? >> No, they're not. And ahead of every major cloud conference, I always predict that this is the conference where the hyperscaler is going to figure out some sort of a multicloud, cross-cloud strategy. In other words, building services that are able to operate across clouds. That really has never happened. It has happened in dribs and drabs, and you just mentioned a few examples of that, but the ability to own the space, to understand that "we're not going to be the center of the universe," that how people leverage the cloud is going to involve multiple things, including legacy systems and other cloud providers, and even industry clouds that are emerging these days, and SaaS providers, and all these things; "so we're going to assist you in dealing with complexity, and we're going to provide the core services of being there." That hasn't happened yet. And they may be worried about conflict with their existing market, and the messaging is a bit different, even actively pushing back on the concept of multicloud, but the reality is the market's going to take them there. So in other words, if enough of their customers are asking for this, and asking that they take the lead in building these cross-cloud technologies, even if they're participating in the stack and not being the stack, it's too compelling of a market for it not to drag a lot of the existing public cloud providers there. >> Well, it's going to be interesting to see how that plays out, David, because I never say never when it comes to a company like AWS, and we've seen how fast they move. And at the same time, they don't want to be commoditized. There's the layer underneath all this infrastructure, and they've got this ecosystem that's adding all this tremendous value. But I want to ask you, what are the essential elements of supercloud, coming back to the definition, if you will, and what's different about metacloud, as you call it, from plain old SaaS or PaaS? What are the key elements there? >> Well, the key elements would be holistic management of all of the IT infrastructure. So even though it's sitting above a multicloud, I view the metacloud, the supercloud, as the ability to also manage your existing legacy systems, your existing security stack, your existing network operations, basically everything that exists under the purview of IT. If you think about it, we're moving our infrastructure into the clouds, and we're probably going to hit a saturation point of about 70%. And the supercloud, the metacloud, which is going to be expensive to build for most enterprises, needs to support these things holistically.
So it needs to have all the services that are going to be shareable across the different providers, and also existing legacy systems, and also edge computing, and IoT, and all these very diverse systems that we're building there right now. So if complexity is a core challenge to operating these things at scale, and to securing these things at scale, we have to have commonality in terms of security architecture and technology, commonality in terms of our directory services, commonality in terms of network operations, commonality in terms of cloud operations, commonality in terms of FinOps. All these things should exist in some holistic cross-cloud layer that sits above all this complexity. And you pointed out something very profound. In other words, that is going to mean that we're hiding a lot of the existing cloud providers in terms of their interfaces, dashboards, and APIs that we're dealing with today. But the reality is that if we're able to manage these things at scale, the public cloud providers are going to benefit greatly from that. They're going to sell more services, because people are going to find they're able to leverage them more easily. And so in other words, if we're removing the "complexity wall," as many in the industry are calling it right now, then suddenly we're moving from, say, the 25 to 30% migrated into the cloud, which is where most enterprises are today, to 50, 60, 70%. And we're able to do this at scale, and we're doing it at scale because we're providing some architectural optimization through the supercloud, metacloud layer. >> Okay, thanks for that. David, I just want to tap your CTO brain for a minute. At "Supercloud22," we came up with these three deployment models. Kit Colbert put forth the idea that one model would be your control plane running in one cloud, let's say AWS, but it interacts with, and can manage and deploy on, other clouds, like a Kubernetes cluster management system. The second one, Mohit Aron from Cohesity laid out, where you instantiate the stack on different clouds and different cloud regions, and then you create a layer, a common interface, across those. And then Snowflake was the third deployment model, where it's a single global instance, it's one instantiation, basically building out their own cloud across these regions. Help us parse through that. Do those seem like reasonable deployment models to you? Do you have any thoughts on that? >> Yeah, I mean, that's a distributed computing trick we've been doing, which is, in essence, an agent of the supercloud that's carrying out some of the cloud-native functions on that particular cloud, but is, in essence, subordinate to the metacloud, or the supercloud, whatever, that's able to run across the various cloud providers. In other words, when it wants to access a service, it may not go directly to that service. It goes to the control plane, and that control plane is responsible... Very much like Kubernetes and Docker work, that control plane is responsible for reaching out and leveraging those native services. I think that thinking is a step in the right direction. I think these things, unto themselves, at least initially, are going to be a very complex array of technology. Even though we're trying to remove complexity, the supercloud unto itself, in terms of the ability to build this thing that's able to operate at scale across clouds, is going to be a collection of many different technologies that are interfacing with the public cloud providers in different ways. And so we can start putting these meta architectures together, and I certainly have written and spoken about this for years, but initially, this is going to be something that may escape the detail or the holistic nature of these meta architectures that people are floating around right now.
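The control-plane model Linthicum outlines, where workloads ask a central plane rather than calling native services directly, can be pictured in a few lines of Python. This is a hedged sketch, not any vendor's implementation; the agent, service, and request names are invented.

```python
class CloudAgent:
    """Per-cloud agent that knows how to reach that cloud's native services."""
    def __init__(self, cloud: str):
        self.cloud = cloud

    def call_native(self, service: str, request: dict) -> dict:
        # A real agent would invoke the provider's SDK or REST API here.
        return {"cloud": self.cloud, "service": service, "status": "ok"}

class ControlPlane:
    """Single control plane; workloads never talk to a cloud service directly,
    much as pods defer to the Kubernetes control plane for scheduling."""
    def __init__(self):
        self.agents: dict[str, CloudAgent] = {}

    def register(self, cloud: str) -> None:
        self.agents[cloud] = CloudAgent(cloud)

    def invoke(self, cloud: str, service: str, request: dict) -> dict:
        # The plane decides where and how the native call actually happens.
        return self.agents[cloud].call_native(service, request)

plane = ControlPlane()
for cloud in ("aws", "azure", "gcp"):
    plane.register(cloud)
print(plane.invoke("azure", "object-store", {"op": "put", "key": "k1"}))
```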
>> Yeah, so I want to stay on this, because anytime I get a CTO brain, I like to... I'm not an engineer, but I've been around a long time, so I know a lot of buzzwords and have absorbed a lot over the years. So you take the second two models, Mohit's approach of instantiating on each cloud and each cloud region, versus the Snowflake approach. I asked Benoit Dageville, "Does that mean if I'm in an AWS east region and I want to do a query on Azure West, I can do that without moving data?" And he said, "Yes and no." And the answer was really, "No, we actually take a subset of that data," so there's the latency problem. From those deployment model standpoints, what are the trade-offs that you see in terms of instantiating the stack on each individual cloud versus that single instance? Is there a benefit of the single instance for governance and security and simplicity, but a trade-off on latency, or am I overthinking this? >> Yeah, you hit it on the nose. The reality is that the trade-off is going to be latency and performance. If we get wiggy with the distributed nature, like the distributed data example you just provided, we have to basically separate the queries and communicate with the databases on each instance, and then reassemble the result set that goes back to the people who are requesting it. And so we can do caching systems and things like that. But the reality is, if it's a distributed system, we're going to have latency and bandwidth issues that are going to limit us. And also security issues, because if we're moving lots of information over the open internet, or even private circuits, those are going to be attack vectors that hackers can leverage. You have to keep that in mind. We're trying to reduce those attack vectors. So in many instances, and I think we have to think about this, we're going to keep the data in the same physical region for just that reason. In other words, it's going to provide the best performance and also the simplest approach to security, so we're not, in essence, thinking about where the data's going, how it's moving across things, things like that. So the challenge, when you're dealing with a supercloud or metacloud, is: when do you make those decisions? And I think, in many instances, even though we're leveraging multiple databases across multiple regions and multiple public cloud providers, and that's the idea of it, we're still going to localize the data for performance reasons. I mean, I just wrote a blog in "InfoWorld" a couple of months ago and talked about people who are trying to distribute data across different public cloud providers for different reasons, distributed application development systems, things like that. You can do it. With enough time and money, you can do anything. I think the challenge is going to be operating that thing, and also providing a viable business return based on the application. And so while it may look like a good science experiment, and it's cool unto itself as an architect, the reality is the more pragmatic approach is going to be to leave it in a single region on a single cloud.
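Linthicum's description of separating the queries and reassembling the result set is the classic scatter-gather pattern. The sketch below, with made-up region names and latencies, shows why a cross-region fan-out can finish no sooner than its slowest leg, which is exactly the latency trade-off being discussed.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Illustrative per-region round-trip latencies in seconds (made up).
REGION_LATENCY = {"aws-us-east-1": 0.02, "azure-westus": 0.09, "gcp-europe-west1": 0.14}

def query_region(region: str, sql: str) -> list[tuple]:
    """Stand-in for running the query against the database in one region."""
    time.sleep(REGION_LATENCY[region])   # simulate network plus query time
    return [(region, sql)]               # pretend result rows

def scatter_gather(sql: str) -> list[tuple]:
    """Fan the query out to every region, then reassemble one result set."""
    with ThreadPoolExecutor() as pool:
        parts = list(pool.map(lambda r: query_region(r, sql), REGION_LATENCY))
    return [row for part in parts for row in part]

start = time.perf_counter()
rows = scatter_gather("SELECT count(*) FROM orders")
elapsed = time.perf_counter() - start
# Even run in parallel, the gather can finish no sooner than the slowest region.
print(f"{len(rows)} partial results in {elapsed:.3f}s (floor is the 0.14s leg)")
```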
>> Very interesting. The other reason I like to talk to companies like Deloitte and experienced people like you is 'cause I can get... You're agnostic, right? I mean, you're technology agnostic, vendor agnostic. So I want to come back with another question, which is, how do you deal with what I call the lowest common denominator problem? What I mean by that is, if one cloud has, let's say, a superior service... Let's take the example of Nitro and Graviton. AWS seems to be ahead on that, but let's say some other cloud isn't quite there yet, and you're building a supercloud or a metacloud. How do you rationalize that? Does it have to be like a caravan in the army, where you slow down so the slowest trucks can keep up, or are there ways to adjudicate that, to hide that deficiency? >> Yeah, and that's a great thing about leveraging a supercloud or a metacloud: we're putting that management in a single layer. So as far as a user or even a developer on those systems is concerned, they shouldn't worry about the performance that may come back, because we're dealing with the... You hit the nail on the head with that one. The slowest component is the one that dictates performance. And so we have to have some sort of a performance management layer. We're also making dynamic decisions to move data, to move processing, from one server to the other, to try to minimize the amount of latency that's coming from a single component. So the great thing about that is we're putting that volatility into a single domain, and it's making architectural decisions in terms of where something will run, where it's getting its data from, where things are stored, things like that, based on the performance feedback that's coming back from the various cloud services under management. And if you're running across clouds, it becomes even more interesting, because ultimately, you're going to make some architectural choices on the fly in terms of where that stuff runs, based on the actual dynamic performance that each public cloud provider is delivering. So in other words, we may find that it automatically shuts down a database service, say MySQL, on one cloud instance, and moves it to a MySQL instance on another public cloud provider, because there was some sort of a performance issue it couldn't work around. And by the way, it does so dynamically. Away from you making that decision, it's making that decision on your behalf. Again, this is a matter of abstraction, removing complexity, and dealing with complexity through abstraction and automation. That would be an example of fixing something with automation: self-healing.
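The self-healing behavior just described, shutting a service down on one cloud and bringing it up on another when performance degrades, amounts to a feedback control loop. The sketch below is illustrative only: the latency probe, the 50ms threshold, and the cloud names are all assumptions made up for the example.

```python
import random

LATENCY_SLO_MS = 50  # illustrative service-level threshold

def probe_latency(cloud: str, service: str) -> float:
    """Stand-in for real telemetry; returns observed latency in ms."""
    return random.uniform(10, 120)

class PlacementManager:
    """Moves a service to whichever cloud currently meets the latency SLO."""
    def __init__(self, clouds: list, service: str):
        self.clouds, self.service = clouds, service
        self.active = clouds[0]

    def reconcile(self) -> str:
        observed = probe_latency(self.active, self.service)
        if observed <= LATENCY_SLO_MS:
            return self.active  # healthy, nothing to do
        # SLO breach: pick the cloud with the best current probe and migrate.
        best = min(self.clouds, key=lambda c: probe_latency(c, self.service))
        if best != self.active:
            print(f"migrating {self.service}: {self.active} -> {best} "
                  f"(observed {observed:.0f}ms > {LATENCY_SLO_MS}ms)")
            self.active = best
        return self.active

mgr = PlacementManager(["aws", "azure", "gcp"], "mysql")
for _ in range(5):   # in production this loop would run on a schedule
    mgr.reconcile()
```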
>> When you meet with some of the public cloud providers and they talk about on-prem private cloud, the general narrative from the hyperscalers is, "Well, that's not a cloud." Should on-prem be inclusive of supercloud, metacloud? >> Absolutely. I mean, they're selling private cloud instances along with the edge clouds that they're selling. The reality is that we're going to have to keep a certain amount of our infrastructure, including private clouds, on premise. It's something that's shrinking as a share of the market, and it's going to be tougher and tougher to justify as the public cloud providers become better and better at what they do, but we certainly have edge clouds now, and the hyperscalers have examples of that, where they run an instance of their public cloud infrastructure on premise, on physical hardware and software. And the reality is, too, we have data centers and we have systems that just won't go away for another 20 or 30 years. They're just too sticky. They're economically unviable to move into the cloud. That's the core thing. It's not that we can't do it. The fact of the matter is we shouldn't do it, because there's not going to be an economic incentive to make that happen. So if we're going to create this meta layer, this infrastructure which is going to run across clouds, and everybody agrees that's what the supercloud is, we have to include the on-premise systems, including private clouds, including legacy systems. And by the way, include the rising number of IoT systems and edge-based systems that are out there, so we're managing everything with the same infrastructure and cloud services. Take the industry clouds: they have metadata systems and specialized services that serve finance and retail, things like doing risk analytics. That gets them further down the path, but not by forcing a SaaS application on them where they're locked into all of the business processes. We're giving you piece parts. So we'll give you 1,000 different parts that are related to the finance industry. You can assemble anything you need, but the thing is, it's not going to be like building it from scratch. We're going to give you risk analytics, we're giving you the financial analytics, all these things that you can leverage within your applications however you want to leverage them. We'll maintain them, so in other words, you don't have to maintain 'em, just like a cloud service. And suddenly, we can build applications in a couple of weeks that used to take a couple of months, in some cases a couple of years. That seems to be a big part of the value moving forward. So bring those up into the supercloud; they become just other services that are under management on the supercloud, the metacloud. So we're able to take those services, abstract them, assemble them, use them in different applications. And the ability to manage where those services originate versus where they're consumed is going to be handled by the supercloud layer, which deals with the governance, the service governance, the security systems, the directory systems, identity and access management, things like that. They're going to get you further along down the pike, and that comes back as real value. If I'm able to build something in two weeks that used to take me two months, and I'm able to give the creators in my organization the ability to move faster, that's a real advantage. And suddenly, we're going to be valued by our digital footprint, our ability to do things in a creative and innovative way. And so organizations that are able to move that fast, leveraging cloud computing for what it should be leveraged for, as a true force multiplier for the business, they're going to win the game. They're going to get the most value. They're going to be around in 20 years; the others won't. >> David Linthicum, always love talking to you. You have a dangerous combination of business and technology expertise. Let's tease.
"VMware Explore" next week, you're giving a keynote, if they're going to be there. Which day are you? >> Tuesday. Tuesday, 11 o'clock. >> All right, that's a big day. Tuesday, 11 o'clock. And David, please do stop by "The Cube." We're in Moscone West. Love to get you on and continue this conversation. I got 100 more questions for you. Really appreciate your time. >> I always love talking to people at "The Cube." Thank you very much. >> All right, and thanks for watching our ongoing coverage of "Supercloud22" on "The Cube," your leader in enterprise tech and emerging tech coverage. (bright music)

Published Date : Aug 24 2022



Breaking Analysis: How Snowflake Plans to Change a Flawed Data Warehouse Model


 

>> From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> Snowflake is not going to grow into its valuation by stealing the croissant from the breakfast table of the on-prem data warehouse vendors. Look, even if Snowflake got 100% of the data warehouse business, it wouldn't come close to justifying its market cap. Rather, Snowflake has to create an entirely new market based on completely changing the way organizations think about monetizing data. Every organization I talk to says it wants to be, or many say they already are, data-driven. Why wouldn't you aspire to that goal? There's probably nothing more strategic than leveraging data to power your digital business and creating competitive advantage. But many businesses are failing, or I predict will fail, to create a true data-driven culture because they're relying on a flawed architectural model formed by decades of building centralized data platforms. Welcome everyone to this week's Wikibon Cube Insights powered by ETR. In this Breaking Analysis, I want to share some new thoughts and fresh ETR data on how organizations can transform their businesses through data by reinventing their data architectures. And I want to share our thoughts on why we think Snowflake is currently in a very strong position to lead this effort. Now, on November 17th, theCUBE is hosting the Snowflake Data Cloud Summit. Snowflake's ascendancy and its blockbuster IPO have been widely covered by us and many others. Now, since Snowflake went public, we've been inundated with outreach from investors, customers, and competitors that wanted to either better understand the opportunities or explain why their approach is better or different. And in this segment, ahead of Snowflake's big event, we want to share some of what we learned and how we see it. Now, theCUBE is getting paid to host this event, so I need you to know that, and you draw your own conclusions from my remarks. But neither Snowflake nor any other sponsor of theCUBE or client of SiliconANGLE Media has editorial influence over Breaking Analysis. The opinions here are mine, and I would encourage you to read my ethics statement in this regard. I want to talk about the failed data model. The problem is complex, I'm not debating that. Organizations have to integrate data and platforms with existing operational systems, many of which were developed decades ago. And there's a culture and a set of processes that have been built around these systems, and they've been hardened over the years. This chart here tries to depict the progression of the monolithic data source, which, for me, began in the 1980s, when Decision Support Systems, or DSS, promised to solve our data problems. The data warehouse became very popular, and data marts sprung up all over the place. This created more proprietary stovepipes with data locked inside. The Enron collapse led to Sarbanes-Oxley. This tightened up reporting requirements, and that breathed new life into the data warehouse model. But it remained expensive and cumbersome, I've talked about that a lot, like a snake swallowing a basketball. The 2010s ushered in the big data movement, and data lakes emerged. With Hadoop, we saw the idea of no schema on write, where you put structured and unstructured data into a repository and figure it all out on read.
What emerged was a fairly complex data pipeline that involved ingesting, cleaning, processing, analyzing, preparing, and ultimately serving data to the lines of business. And this is where we are today, with very hyper-specialized roles around data engineering, data quality, data science. There's lots of batch processing going on, and Spark emerged to reduce the complexity associated with MapReduce; it definitely helped improve the situation. We're also seeing attempts to blend in real-time stream processing with the emergence of tools like Kafka and others. But I'll argue that, in a strange way, these innovations actually compound the problem. And I want to discuss that, because what they do is heighten the need for more specialization, more fragmentation, and more stovepipes within the data life cycle. Now, in reality, and it pains me to say this, the outcome of the big data movement, as we sit here in 2020, is that we've created thousands of complicated science projects that have once again failed to live up to the promise of rapid, cost-effective time to insights. So, what will the 2020s bring? What's the next silver bullet? You hear terms like the lakehouse, which Databricks is trying to popularize, and I'm going to talk today about the data mesh. These are efforts that look to modernize data lakes, and sometimes merge the best of the data warehouse and second-generation systems into a new paradigm that might unify batch and stream frameworks. And this definitely addresses some of the gaps, but in our view, it still suffers from some of the underlying problems of previous-generation data architectures. In other words, if the next-gen data architecture is incremental, centralized, rigid, and primarily focused on making the technology to get data in and out of the pipeline work, we predict it's going to fail to live up to expectations again. Rather, what we're envisioning is an architecture based on the principles of distributed data, where domain knowledge is the primary target citizen, and data is not seen as a by-product, i.e., the exhaust of an operational system, but rather as a service that can be delivered in multiple forms and use cases across an ecosystem. This is why we often say that data is not the new oil. We don't like that phrase. A specific gallon of oil can either fuel my home or lubricate my car engine, but it can't do both. Data does not follow the same laws of scarcity as natural resources. Again, what we're envisioning is a rethinking of the data pipeline and the associated cultures to put the data needs of the domain owner at the core, and to provide automated, governed, and secure access to data as a service at scale. Now, how is this different? Let's unpack the data pipeline today and look deeper into the situation. You all know this picture that I'm showing. There's nothing really new here. The data comes from inside and outside the enterprise. It gets processed, cleansed, or augmented so that it can be trusted and made useful. Nobody wants to use data that they can't trust. And then we can add machine intelligence and do more analysis, and finally deliver the data so that domain-specific consumers can essentially build data products and services, or reports and dashboards, or content services, for instance an insurance policy, a financial product, a loan. These are packaged and made available for someone to make decisions on, or to make a purchase. And all the metadata associated with this data is packaged along with the dataset.
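A deliberately naive rendering of that linear, domain-agnostic pipeline helps make the critique concrete; the stage and source names below are invented for illustration, and the point is the strictly sequential, shared path every dataset must traverse.

```python
from typing import Callable

def ingest(source: str) -> dict:
    return {"source": source, "rows": []}   # pull raw data in

def cleanse(data: dict) -> dict:
    return {**data, "clean": True}          # scrub so it can be trusted

def analyze(data: dict) -> dict:
    return {**data, "features": []}         # add machine intelligence

def serve(data: dict) -> dict:
    return {**data, "published": True}      # deliver to lines of business

PIPELINE: list[Callable[[dict], dict]] = [cleanse, analyze, serve]

def run(source: str) -> dict:
    data = ingest(source)
    for stage in PIPELINE:  # strictly sequential: each stage gates the next
        data = stage(data)
    return data

# A change for one domain (say, a new source for the claims team) must be
# threaded through this shared, centralized sequence, which is the bottleneck.
print(run("erp-orders"))
```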
Now, we've broken down these steps into atomic components over time so we can optimize on each and make them as efficient as possible. And down below, you have these happy stick figures. Sometimes they're happy. But they're highly specialized individuals, and they each do their job, and they do it well, to make sure that the data gets in, gets processed, and gets delivered in a timely manner. Now, while these individual pieces seemingly are autonomous and can be optimized and scaled, they're all encompassed within the centralized big data platform. And it's generally accepted that this platform is domain agnostic. Meaning the platform is the data owner, not the domain-specific experts. Now, there are a number of problems with this model. First, while it's fine for organizations with a smaller number of domains, organizations with a large number of data sources and complex domain structures struggle to create a common data parlance, that is, a data culture. Another problem is that, as the number of data sources grows, organizing and harmonizing them in a centralized platform becomes increasingly difficult, because the context of the domain and the line of business gets lost. Moreover, as ecosystems grow and you add more data, the processes associated with the centralized platform tend to get further genericized. They again lose that domain-specific context. Wait (chuckling), there are more problems. Now, while in theory organizations are optimizing on the piece parts of the pipeline, the reality is, when the domain requires a change, for example a new data source, or an ecosystem partnership requires a change in access or processes that can benefit a domain consumer, that change is subservient to the dependencies and the need to synchronize across these discrete parts of the pipeline, parts that are orthogonal to the domain change itself. In other words, in actuality, the monolithic data platform itself remains the most granular part of the system. Now, when I complain about this faulty structure, some folks tell me this problem has been solved. That there are services that allow new data sources to easily be added. A good example of this is Databricks Ingest, an auto-loader that simplifies ingestion into the company's Delta Lake offering. And rather than centralizing in a data warehouse, which struggles to efficiently allow things like machine learning frameworks to be incorporated, this feature allows you to put all the data into a centralized data lake. Or so the argument goes. The problem that I see with this is that while the approach definitely minimizes the complexity of adding new data sources, it still relies on a linear end-to-end process that slows down the introduction of data sources from the domain consumer's side of the pipeline. In other words, the domain expert still has to elbow her way to the front of the line, or the pipeline in this case, to get stuff done. And finally, the way we're organizing teams is a point of contention, and I believe is going to continue to cause problems down the road. Specifically, we've again optimized on technology expertise, where, for example, data engineers, while really good at what they do, are often removed from the operations of the business. Essentially, we created more silos and organized around technical expertise versus domain knowledge.
As an example, a data team has to work with data that is delivered with very little domain specificity and serves a variety of highly specialized consumption use cases. All right. I want to step back for a minute and talk about some of the problems that people bring up with Snowflake, and then I'll relate it back to the basic premise here. As I said earlier, we've been hammered by dozens and dozens of data points, opinions, and criticisms of Snowflake. I'll share a few here, and I'll post a deeper technical analysis from a software engineer that I found to be fairly balanced. There are five Snowflake criticisms that I'll highlight. There are many more, but here are some that I want to call out. Price transparency. I've had more than a few customers tell me they chose an alternative database because of the unpredictable nature of Snowflake's pricing model. Snowflake, as you probably know, prices based on consumption, just like AWS and other cloud providers. So just like AWS, for example, the bill at the end of the month is sometimes unpredictable. Is this a problem? Yes. But like AWS, I would say, "Kill me with that problem." Look, if users are creating value by using Snowflake, then that's good for the business. But clearly this is a sore point for some users, especially for procurement and finance, which don't like unpredictability. And Snowflake needs to do a better job communicating and managing this issue with tooling that can predict and help better manage costs. Next, workload management, or lack thereof. Look, if you want to isolate higher-performance workloads with Snowflake, you just spin up a separate virtual warehouse. It's kind of a brute-force approach. It works generally, but it will add expense. I'm kind of reminded of Pure Storage and its approach to storage management. The engineers at Pure always design for simplicity, and this is the approach that Snowflake is taking. The difference between Pure and Snowflake, as I'll discuss in a moment, is that Pure's ascendancy was based largely on stealing share from legacy EMC systems. Snowflake, in my view, has a much, much larger incremental market opportunity. Next is caching architecture. You hear this a lot. At the end of the day, Snowflake is based on a caching architecture, and a caching architecture has to be working for some time to optimize performance. Caches work well when the size of the working set is small. Caches generally don't work well when the working set is very, very large. In general, transactional databases have pretty small datasets. And in general, analytics datasets are potentially much larger. Is Snowflake in the analytics business? Yes. But the good thing that Snowflake has done is enable data sharing, and its caching architecture serves its customers well, because it allows domain experts, you're going to hear this a lot from me today, to isolate and analyze problems or go after opportunities based on tactical needs. That said, very big queries across whole datasets, or badly written queries that scan the entire database, are not the sweet spot for Snowflake. Another good example would be if you're doing a large audit and you need to analyze a huge, huge dataset. Snowflake's probably not the best solution. Complex joins, you hear this a lot. The working sets of complex joins, by definition, are larger. So, see my previous explanation. Read-only. Snowflake is pretty much optimized for read-only data. Maybe stateless data is a better way of thinking about this.
Heavily write-intensive workloads are not in the wheelhouse of Snowflake. So where this may be an issue is real-time decision-making and AI inferencing. I've talked about this a number of times; Snowflake might be able to develop products or acquire technology to address this opportunity. Now, I want to explain. These issues would be problematic if Snowflake were just a data warehouse vendor. If that were the case, this company, in my opinion, would hit a wall, just like the MPP vendors that preceded them, which built a better mousetrap for certain use cases, hit a wall. Rather, my premise in this episode is that the future of data architectures will be to move away from large centralized warehouse or data lake models to a highly distributed data-sharing system that puts power in the hands of domain experts at the line of business. Snowflake is less computationally efficient and less optimized for classic data warehouse work, but it's designed to serve the domain user much more effectively, in our view. We believe that Snowflake is optimizing for business effectiveness, essentially. And as I said before, the company can probably do a better job at keeping passionate end users from breaking the bank. But as long as these end users are making money for their companies, I don't think this is going to be a problem. Let's look at the attributes of what we're proposing around this new architecture. We believe we'll see the emergence of a total flip of the centralized and monolithic big data systems that we've known for decades. In this architecture, data is owned by domain-specific business leaders, not technologists. Today, it's not much different in most organizations than it was 20 years ago. If I want to create something of value that requires data, I need to cajole, beg, or bribe the technology and data teams to accommodate. The data consumers are subservient to the data pipeline. Whereas in the future, we see the pipeline as a second-class citizen, while the domain expert is elevated. In other words, getting the technology and the components of the pipeline to be more efficient is not the key outcome. Rather, the time it takes to envision, create, and monetize a data service is the primary measure. The data teams are cross-functional and live inside the domain, versus today's structure, where the data team is largely disconnected from the domain consumer. Data in this model, as I said, is not the exhaust coming out of an operational system or an external source that is treated as generic and stuffed into a big data platform. Rather, it's a key ingredient of a service that is domain-driven and monetizable. And the target system is not a warehouse or a lake. It's a collection of connected, domain-specific datasets that live in a global mesh. What is a distributed global data mesh? A data mesh is a decentralized architecture that is domain-aware. The datasets in the system are purposely designed to support a data service, or data product, if you prefer. The ownership of the data resides with the domain experts, because they have the most detailed knowledge of the data requirements and the end use. Data in this global mesh is governed and secured, and every user in the mesh can have access to any dataset as long as it's governed according to the edicts of the organization. Now, in this model, the domain expert has access to a self-service, abstracted infrastructure layer that is supported by a cross-functional technology team. Again, the primary measure of success is the time it takes to conceive and deliver a data service that can be monetized. Now, by monetize, we mean a data product or data service that either cuts costs, drives revenue, saves lives, whatever the mission of the organization is.
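One way to picture the "data as a product" idea is a self-describing, domain-owned dataset descriptor; the fields and policy names below are assumptions invented for this sketch, not drawn from any particular mesh implementation.

```python
from dataclasses import dataclass, field

@dataclass
class DataProduct:
    """A self-describing, domain-owned dataset: the atomic unit of a mesh."""
    name: str
    domain: str                  # the business domain that owns it
    owner: str                   # accountable domain expert, not a central team
    schema: dict                 # published contract consumers can discover
    governance: list = field(default_factory=list)  # org-wide policies applied

    def is_accessible(self) -> bool:
        # Self-service access, still subject to the organization's edicts.
        return "restricted" not in self.governance

claims_scores = DataProduct(
    name="claims-risk-scores",
    domain="insurance-claims",
    owner="claims-analytics-team",
    schema={"claim_id": "string", "risk_score": "float"},
    governance=["pii-masked", "region-eu"],
)
print(claims_scores.is_accessible())  # True under these sample policies
```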
The power of this model is that it accelerates the creation of value by putting authority in the hands of those individuals who are closest to the customer and have the most intimate knowledge of how to monetize data. It reduces the diseconomies of scale of having a centralized or monolithic data architecture. And it scales much better than legacy approaches, because the atomic unit is a data domain, not a monolithic warehouse or lake. Zhamak Dehghani is a software engineer who is attempting to popularize the concept of a global mesh. Her work is outstanding, and it's strengthened our belief that practitioners see this the same way that we do. And to paraphrase her view, a domain-centric system must be secure and governed with standard policies across domains. It has to be trusted. As I said, nobody's going to use data they don't trust. It's got to be discoverable via a data catalog with rich metadata. The datasets have to be self-describing and designed for self-service. Accessibility for all users is crucial, as is interoperability, without which distributed systems, as we know, fail. So what does this all have to do with Snowflake? As I said, Snowflake is not just a data warehouse. In our view, it's always had the potential to be more. Our assessment is that attacking the data warehouse use cases gave Snowflake a straightforward, easy-to-understand narrative that allowed it to get a foothold in the market. Data warehouses are notoriously expensive, cumbersome, and resource intensive, but they're a critical aspect of reporting and analytics. So it was logical for Snowflake to target on-premise legacy data warehouses, and their smaller cousins the data lakes, as early use cases. By putting forth and demonstrating a simple data warehouse alternative that can be spun up quickly, Snowflake was able to gain traction, demonstrate repeatability, and attract the capital necessary to scale to its vision. This chart shows the three layers of Snowflake's architecture that have been well documented: the separation of compute and storage, and the outer layer of cloud services. But I want to call your attention to the bottom part of the chart, the so-called cloud-agnostic layer that Snowflake introduced in 2018. This layer is somewhat misunderstood. Not only did Snowflake make its cloud-native database compatible to run on AWS, then Azure, and in 2020 GCP; what Snowflake has done is abstract away cloud infrastructure complexity and create what it calls the data cloud. What's the data cloud? We don't believe the data cloud is just a marketing term that doesn't have any substance. Just as SaaS simplified application software, and IaaS made it possible to eliminate the value drain associated with provisioning infrastructure, a data cloud, in concept, can simplify data access, break down fragmentation, and enable shared data across the globe. Snowflake has a first-mover advantage in this space, and we see a number of fundamental aspects that comprise a data cloud. First, massive scale, with virtually unlimited compute and storage resources that are enabled by the public cloud. We talk about this a lot. Second is a data, or database, architecture that's built to take advantage of native public cloud services.
This is why Frank Slootman says, "We've burned the boats. We're not ever doing on-prem. We're all in on cloud and cloud native." Third is an abstraction layer that hides the complexity of infrastructure. And fourth is a governed, secured, shared-access system where any user in the system, if allowed, can get access to any data in the cloud. So a key enabler of the data cloud is this thing called the global data mesh. Now, earlier this year, Snowflake introduced its global data mesh. Over the course of its recent history, Snowflake has been building out its data cloud by creating data regions, strategically tapping key locations of AWS regions and then adding Azure and GCP. The complexity of the underlying cloud infrastructure has been stripped away to enable self-service, and any Snowflake user becomes part of this global mesh, independent of the cloud that they're on. Okay. So now, let's go back to what we were talking about earlier. Users in this mesh will be our domain owners. They're building monetizable services and products around data. They're most likely dealing with relatively small, read-only datasets. They can ingest data from any source very easily, and quickly set up security and governance to enable data sharing across different parts of an organization or, very importantly, an ecosystem. Access control and governance is automated. The datasets are addressable. The data owners have clearly defined missions, and they own the data through the life cycle. Data that is specific and purposely shaped for their missions. Now, you're probably asking, "What happens to the technical team and the underlying infrastructure and the clusters it runs on? How do I get the compute close to the data? And what about data sovereignty and the physical storage layer, and the costs?" All these are good questions, and I'm not saying these are trivial. But the answer is, these are implementation details that are pushed to a self-service layer managed by a group of engineers that serves the data owners. And as long as the domain expert/data owner is driving monetization, this piece of the puzzle becomes self-funding. As I said before, Snowflake has to help these users optimize their spend with predictive tooling that aligns spend with value and shows ROI. While there may not be a strong motivation for Snowflake to do this, my belief is that they'd better get good at it, or someone else will do it for them and steal their ideas. All right. Let me end with some ETR data to show you just how Snowflake is getting a foothold in the market. Followers of this program know that ETR uses a consistent methodology to go to its practitioner base, its buyer base, each quarter and ask them a series of questions. They focus on the areas that the technology buyer is most familiar with, and they ask a series of questions to determine the spending momentum around a company within a specific domain. This chart shows one of my favorite examples. It shows data from the October ETR survey of 1,438 respondents, and it isolates on the data warehouse and database sector. I know I just got through telling you that the world is going to change and Snowflake's not a data warehouse vendor, but there's no construct today in the ETR dataset to cut a data cloud or globally distributed data mesh, so you're going to have to deal with this. What this chart shows is net score on the y-axis. That's a measure of spending velocity, and it's calculated by asking customers, "Are you spending more or less on a particular platform?" and then subtracting the lesses from the mores. It's more granular than that, but that's the basic concept.
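For the "more granular" version, ETR's net score is commonly described in terms of the five response buckets of the wheel chart discussed just below; the survey shares in this sketch are invented purely to show the arithmetic.

```python
# Net score from the five response buckets the wheel chart describes
# (adoption, spending up >5%, flat, spending down >5%, replacing).
# The survey shares below are made up purely for illustration.
buckets = {
    "adoption": 0.20,    # lime green: new to the platform
    "increase": 0.55,    # forest green: spending more than 5% more
    "flat": 0.15,        # gray: roughly flat spend
    "decrease": 0.05,    # pink: spending down more than 5%
    "replacing": 0.05,   # bright red: retiring the platform
}

# Subtract "the lesses from the mores": positive buckets minus negative ones.
net_score = (buckets["adoption"] + buckets["increase"]
             - buckets["decrease"] - buckets["replacing"])

assert abs(sum(buckets.values()) - 1.0) < 1e-9  # shares should cover everyone
print(f"net score = {net_score:.0%}")           # -> 65% with these sample shares
```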
Now, on the x-axis is market share, which is ETR's measure of pervasiveness in the survey. You can see, superimposed in the upper right-hand corner, a table that shows the net score and the shared N for each company. Now, shared N is the number of mentions in the dataset within, in this case, the data warehousing sector. Snowflake, once again, leads all players with a 75% net score. This is a very elevated number and is higher than that of all other players, including the big cloud companies. Now, we've been tracking this for a while, and Snowflake is holding firm on both dimensions. When Snowflake first hit the dataset, it was in the single digits along the horizontal axis, and it continues to creep to the right as it adds more customers. Now, here's another chart. I call it the wheel chart. It breaks down the components of Snowflake's net score, or spending momentum. The lime green is new adoption, the forest green is customers spending more than 5% more, the gray is flat spend, the pink is declining by more than 5%, and the bright red is retiring the platform. So you can see the trend. It's all momentum for this company. Now, what Snowflake has done is grab hold of the market by simplifying the data warehouse. But the strategic aspect of that is that it enables the data cloud, leveraging the global mesh concept. And the company has introduced a data marketplace to facilitate data sharing across ecosystems. This is all about network effects. In the mid-to-late 1990s, as the internet was being built out, I worked at IDG with Bob Metcalfe, who was the publisher of InfoWorld. During that time, we'd go on speaking tours all over the world, and I would listen very carefully as he applied Metcalfe's law to the internet. Metcalfe's law states that the value of a network is proportional to the square of the number of connected nodes, or users, on that system. Said another way, while the cost of adding new nodes to a network scales linearly, the consequent value scales exponentially. Now, apply that to the data cloud. The marginal cost of adding a user is negligible, practically zero, but the value of being able to access any dataset in the cloud... Well, let me just say this: there's no limitation to the magnitude of the market. My prediction is that this idea of a global mesh will completely change the way leading companies structure their businesses and, particularly, their data architectures. It will be the technologists that serve the domain specialists, as it should be. Okay. Well, what do you think? DM me @dvellante, or email me at david.vellante@siliconangle.com, or comment on my LinkedIn. Remember, these episodes are all available as podcasts, so please subscribe wherever you listen. I publish weekly on wikibon.com and siliconangle.com, and don't forget to check out etr.plus for all the survey analysis. This is Dave Vellante for theCUBE Insights powered by ETR. Thanks for watching. Be well, and we'll see you next time. (upbeat music)

Published Date : Nov 14 2020



Ken Yeung, Tech Reporter | Samsung Developer Conference 2017


 

>> Announcer: Live from San Francisco, it's theCUBE, covering Samsung Developer Conference 2017. Brought to you by Samsung. (digital music)

>> Hey, welcome back, and we're live here in San Francisco. This is theCUBE's exclusive coverage of Samsung Developer Conference, #SDC2017. I'm John Furrier, co-founder of SiliconANGLE Media and co-host of theCUBE. My next guest is Ken Yeung, tech reporter, here inside theCUBE. I've known Ken for almost 10 years now; he's been on the Silicon Valley beat covering technology, communities, and all the cutting-edge tech, but also some of the old, established companies. Great to see you.

>> Likewise, thanks for having me.

>> So, tech reporter, let's have a little reporter session here, because this is my first developer conference with Samsung. I stopped going to the Apple Worldwide Developer Conference when it became too much of a circus, you know, a couple of years before Steve Jobs died.

>> Right.

>> Now, this whole scene, well, we'll have to talk to Steve Gall when we get down there, but here, at my first one, my report is an awakening. I get the TV thing, but I'm like, IoT, that's my world.

>> Ken: Oh really?

>> I want to see more IoT.

>> Ken: Yeah.

>> So it's good to see Samsung coming into the cloud and owning that. So, that's exciting for me. What do you see as a report that you could file?

>> You know, it's funny, because I actually did write a post this morning after watching the keynote yesterday. While I was at VentureBeat a few months ago, I reported on Bixby's launch when it came out with the Galaxy S8, and when I heard what it was, it was kind of interesting. That was one of the biggest selling points for me to switch over from my iPhone. And when I tried it out, it was interesting. I was kind of wondering how it would stand up against Google Assistant, because both of them are installed on the same device. But now, with Bixby 2.0 and with SmartThings, you start to see Samsung's vision. Right now, on mobile, it's just very piecemeal. But when you tack it on with the TVs, the fridges, the monitors, the ovens, and everything like that, it becomes your entire home. It becomes your Jarvis. You don't actually have to spend 150 bucks or 200 bucks on an Alexa-enabled device or a Google Home that most people may not be totally familiar with. But if you have a TV, you're familiar with it.

>> You mentioned Jarvis. That's a reference to Iron Man, and to when Mark Zuckerberg tried his Jarvis project, which was, you know, to wire his home from scratch. Although that was a science project, you're talking about real utility. I mean, we're getting down to consumerization, so let's take that to the next level.

>> Ken: Right.

>> If you look at the trends in Silicon Valley, and certainly in the tech industry, blockchain and ICOs, initial coin offerings, are really hot. Those are based on utility, right? So, utility-based ICOs, communities using gamification. Game apps, utility. Samsung, SmartThings. Using their intelligence to not just be the next Amazon.

>> Right.

>> The commerce cloud company. They're just trying to be a better Samsung.

>> Ken: Exactly.

>> Which they've had some problems with in the past, and we've heard from analysts here. Patrick Morgan was on and illustrated the point: they're a stovepipe company. And with Bixby 2.0 they're breaking down the silos. We had the execs on here saying that's their goal.

>> Ken: Exactly. Yeah, if you look at it here, everything has been siloed.
You look at a lot of tech companies now, and you don't get to see their grand vision. Everyone has this proto-program when they start these companies, and when they expand, then you start to see everything come together. For example, whether it's Square, Apple, Google, or Facebook, right? And Samsung, a storied history, right? They've been around for ages with a lot of great technology, and they've got their hands in different parts. But from a consumer standpoint, the likelihood of you having a Samsung device in your home is probably pretty good, so why not expand and leverage that technology? Right now, tech is all about AI. You start to see a lot of the AI stars get acquired or heavily funded and heavily invested in.

>> Really, theCUBE is AI; we're an AI machine right here. Right here is the bot, the analyst report. People are AI watching. But I mean, what the hell is AI? AI is machine learning, using software...

>> Data collection.

>> Nailed it.

>> And personalization. And you look at... I interviewed a Samsung executive at CES this past January, and he was telling me about the three parts. It has to be personal, it has to be contextual, and it has to be conversational in terms of AI. What you saw yesterday during the keynote, and what executives and the company have been repeatedly saying, is that's what Bixby is. And you could say that's similar to what Google has with Google Assistant, and you can see that with Alexa, but those technologies are still very siloed.

>> What were those three things again? Personal...

>> Personal, contextual, and conversational.

>> That is awesome. In fact, that connects with what Amy Jo Kim, CEO of Shufflebrain, said. She took it from a different angle; she's building these game apps, but she's becoming more of a product developer. Because it's not just about building a game like a Zynga game or, you know, something on a mobile phone. She's building gaming systems. Her thesis was that people are now part of the game. Now, those are my words, but she's essentially saying the game system includes data from your friends.

>> Right.

>> The game might suck, but my friends are still there. So there's still some social equity in there. You're bringing it over to the contextual, the personal. This is the new magic for app developers. Is this leading to AR?

>> Oh, absolutely.

>> I mean, we're talking about... this is the convergence of the new formulas for successful app development.

>> Right. I mean, we were talking earlier about what AI is, and I mentioned it's all about data, and that's absolutely true. Your home is collecting so much data about you that it's going to offer that personal response. So you're asking, is this going to lead to AR? Absolutely. Whatever data it has about your home, you might bring your phone out as you go shopping or whatnot. You might be out sightseeing and have your camera out, and it might bring back some memories, right, or it might display a photo from your photo album or something. So there are a lot of interesting ties that could come into it, and obviously Samsung's phone cameras are among the top ones on the market. So there's potential for it, yeah.

>> Sorry, Ken, I've got to ask you. Looking at the bigger picture now, let's look outside of Samsung. We can see some telltale signs here: Google on stage, clearly not grandstanding but doing their thing. Android, you know, ARCore, starting to see that Google DNA.
Now they've got TensorFlow and a lot of goodness happening in the cloud, with Sam Ramji over there kicking ass at Google, doing a great job. Okay, they're the big three. Some people call it the big seven; I call it the big three: Amazon, Microsoft, Google. Everyone else is fighting for four, five, six, depending on who you talk to. But those are the three, what I call, native clouds, the ones that are going to be wholesaling resources. Amazon is not Google; Amazon has no Android. They dropped their phones. Microsoft? Joe Belfiore said, hey, I'm done with phones; they tapped out. So essentially Microsoft taps out of devices. They've still got the Xbox. Amazon is tapping out of phones. They've got commerce, they've got web services, they've got entertainment. This is going to be interesting. What's your take?

>> Well, "interesting" is an understatement there. Amazon, right now, is basically running the show when it comes to virtual assistants, or voice-powered assistants. Amazon launched a bunch of Alexa products recently, and then soon after, I believe it was just last month, Google launched a whole bunch of Google Home devices as well. But what's interesting is that both of those companies have a different approach from Samsung, right? Remember, Samsung's play with Bixby 2.0 is all about consolidating the home. In my post, I called it basically their fight to unite the internet of things. But, you know, when it comes to Alexa with Amazon, and to Google, they're targeting not only the smaller integrations, maybe with August or smart locks or thermostats and whatnot, but they're also going after retailers and businesses. So how many skills can you have on Alexa? How many, what are they called, actions can you have on Google Home? They're going after businesses.

>> Well, this is the edge of the network. So the reason why, again, coming back full circle, I was very critical on day one yesterday. I was kind of like, data, IoT, that's our wheelhouse at theCUBE. There wasn't a lot of messaging around that, because I don't think Samsung is ready yet, and nor should they be, given their evolution. But in Amazon's world...

>> I think the way they played it yesterday was pretty good, a little humble. They didn't set that expectation, like, oh my god, this is going to...

>> They didn't dismiss it, but they were basically not highlighting it, right?

>> Well, they did enough. They did enough to entice you, to tease it, but look, they have a long way to go to unite it. SmartThings has been around for a while, so they've been building it behind the scenes. Now it's like, hey, now we're going to slap on AI. It's similar to...

>> What do you hear from developers? I've been hearing some chirping here about AI: it's got to be standardized, and they're not sure.

>> Oh, absolutely. I think a lot of developers will probably want to see, hey, if I'm going to leverage AI and kind of consolidate, I want to maximize my input, maximize my reach. I don't want to have to build one action here, one skill there, or whatever Samsung's going to call them for Bixby. You know, I want to make it that one thing. But Samsung's whole monetization, that's going to be interesting in terms of the marketplace. How does that play out? You know, Amazon has recently started to monetize, or rather to incentivize, developers.
And Google, if they're not already doing that, probably has plenty of experience doing it with Android, and now they can do that with Google.

>> So I've got to ask you about Facebook. Facebook has been rumored to have a phone coming, but I mean, Facebook's...

>> Ken: They tried that once.

>> They're licking their wounds right now. I mean, the love for Facebook is not high. Fake news, platform inconsistencies.

>> Ken: Ad issues.

>> Move fast, break stuff. Zuck is hurting. It's hurting Zuck, certainly the Russian stuff. I think, first of all, it's really not Facebook's fault. They never claimed to be some original content machine. They just got taken advantage of through bad arbitrage.

>> It gets to some scale.

>> People are not happy with Facebook right now, so it's hard for them to choose a phone.

>> Well, you're right. There are rumors that they were going to introduce the phone again after... We all remember Facebook Home, which was, you know... we won't talk about that anymore. But I think there was talk about them doing a speaker, some sort of video thing. I believe it's called Project Aloha. I believe Business ETC. and TechCrunch have reported on that extensively. That is going to compete with where Amazon's going. So everyone is going after Amazon, right? So I think, don't discount Samsung on this part. I don't want to call them the dark horse, but, you know, people are kind of ignoring them right now.

>> Well, if Samsung actually aligned with Amazon, that would be very interesting, because they'd have their foot in both camps, Google and Amazon. Just play Switzerland and win on both sides.

>> Samsung, I think Samsung...

>> That might be a viable strategy, if the customers wanted to do that. Google can provide some cloud for them; I don't know how they'd feel about that.

>> Yeah, I mean, Samsung will definitely... I think Samsung has the appeal, with their history, to go after the bigger retailers, the bigger manufacturers, to leverage them, because there's some stability there, as opposed to, well, I'm not going to give access to my data to Amazon. You look at Amazon now as probably the de facto leader in that space. You see people teaming up with Google to compete against them. You know, there's an anti-Amazon type of alliance out there.

>> Well, I would say there's a jealousy factor.

>> Ken: True, true.

>> But a lot of the FUD going out there... I saw Matt Asay's article in InfoWorld, and it was over the top, basically saying that Amazon's not giving back to open source. I challenged Andy Jassy two years ago on that, and Matt's behind the times. Matt, you've got to get with the program; you're a little bit hardcore pushed there. But I think he's echoing the fear of the community. Amazon's definitely doing open source, first of all, but the same thing goes for Alibaba. I asked the founder of Alibaba Cloud last week when I was in China: you guys are taking open source, what are you giving back? And it was an off-the-record comment, and he was like, you know, they want to give back. So, there are all kinds of political and incumbent positions on open source; that, to me, is going to be the game-changer. The Linux Foundation, Apache, there's exponential growth in open source over the next five to ten years, just in terms of lines of code shipped.

>> Right.

>> The Linux Foundation has shown those numbers: 10% of that code is going to be new, 90% of the code is going to be reused, and so forth.

>> Ken: Oh, absolutely.
I mean, you're going to need to have a lot of open source in order for this ecosystem to really flourish. Building it on your own and building it proprietary basically locks it down. Didn't Sony deal with that when they were doing, like, their own memory cards for cameras and stuff? And now their cameras use SD cards. So you're starting to see, I think, that a lot of companies will need to be supportive of open source. In tech you start to see people boasting, you know, we are doing this in open source. Or, you know, Facebook constantly announces, hey, we are releasing this into open source. LinkedIn will do that. Any company that you talk to will...

>> Except Apple. Apple does some open source.

>> Apple does some open source, yeah.

>> But they're a closed system, and they're cool about it. They're up front about it. Okay, final question, bottom line: Samsung Developer Conference 2017. What should people who didn't make it, or who are watching this, know about what they missed, what Samsung's doing, and what Samsung needs to do better?

>> You know, I think what really dominated the two-day conference was basically Bixby. You look at all the sessions: all about Bixby. SmartThings, sure, they consolidated everything into the SmartThings Cloud, great. But, you know, SmartThings has been around for a while, and I'm interested to see how well they've been doing. I wish they had released a few more numbers on that. But Bixby, it was kind of interesting: 10 million users after three months of launching in the US, which is a pretty good number. But they still have a bit of a ways to go, and they're constantly making improvements, which is a very good thing as well.

>> Ken Yeung, a friend of theCUBE, tech reporter, formerly with VentureBeat, now onto his next thing. What are you going to do? Take some time off?

>> Take some time off, continue writing about what I see, and who knows where that takes me.

>> Yeah, and it's good to get decompressed, you know, log off for a week or so. I went to China; I was kind of off Facebook for a week. It felt great.

>> Yeah. (laughs)

>> No more political posts. One more Colin Kaepernick kneeling down during the national anthem or one more anti-Trump post, and I'm going to... It was just a disaster, and then the whole #MeToo thing hit, and oh my god, it was just so much hate. A lot of good things are happening in the world, though, and it's good to see you writing out there. This is theCUBE. I'm John Furrier, live in San Francisco at the Samsung Developer Conference, with exclusive Cube coverage. We'll be right back with more day two coverage.

Published Date: Oct 19, 2017
