Opening Keynote | Supercloud2
(intro music plays) >> Okay, welcome back to Supercloud 2. I'm John Furrier with my co-host, Dave Vellante, here in our Palo Alto Studio, with a live performance all day unpacking the wave of Supercloud. This is our second edition. Back for keynote review here is Vittorio Viarengo, talking about the hype and the reality of the Supercloud momentum. Vittorio, great to see you. You got a presentation. Looking forward to hearing the update. >> It's always great to be here on this stage with you guys. >> John Furrier: (chuckles) So the business imperative for cloud right now is clear, and the Supercloud wave points to the builders, and they want to break through. VMware, you guys have a lot of builders in the ecosystem. Where do you guys see multicloud today? What's going on? >> So, what we see when we talk with our customers is that customers are in a state of cloud chaos. Raghu Raghuram, our CEO, introduced this term at our user conference and it really resonated with our customers. And the chaos comes from the fact that most enterprises have applications spread across private cloud, multiple hyperscalers, and, increasingly, the edge. And so with that, every hyperscaler brings their own vertically integrated stack of infrastructure, development platform, security, and so on and so forth. And so our customers are left with ballooning costs because they have to train their employees across multiple stacks. And the costs are only going up. >> John Furrier: Have you talked about the Supercloud with your customers? What are they looking for when they look at the business value of Cross-Cloud Services? Why are they digging into it? What are some of the reasons? >> First of all, let's put this in perspective. 87% of customers use two or more clouds, including the private cloud. And 55%, get this, 55% use three or more clouds, right? And so, when you talk to these customers, they're all asking for two things. One, they find that managing the multicloud is more difficult than the private cloud. And that goes without saying, because it's new, they don't have the skills, and they have many of these. And pretty much everybody, 87% of them, are seeing their costs getting out of control. And so they need a new approach. We believe that the industry needs a new approach to solving the multicloud problem, which you guys have introduced and you call it the Supercloud. We call it Cross-Cloud Services. But the idea is that- and the parallel goes back to the private cloud. In the private cloud, if you remember the old days, before we called it the private cloud, we would install SAP. And the CIO would go, "Oh, I hear SAP works great on HP hardware. Oh, let's buy the HP stack", right? (hosts laugh) And then you go, "Oh, oh, Oracle databases. They run phenomenally on the Sun stack." That's another stack. And it wasn't sustainable, right? And so, VMware came in with virtualization and made everything look the same. And we unleashed a tremendous era of growth and speed and cost saving for our customers. So we believe, and I think the industry also believes, if you look at the success of Supercloud, the first instance and today, that we need to create a new level of abstraction in the cloud. And this abstraction needs to be at a higher level. It needs to be built around the lingua franca of the cloud, which is Kubernetes, APIs, open source stacks. And by doing so, we're going to allow our customers to have a more unified way of building, managing, running, connecting, and securing applications across clouds.
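As a concrete, if simplified, illustration of the "Kubernetes as the lingua franca" idea described above, the sketch below pushes one unchanged deployment spec to clusters running on different clouds using the Kubernetes Python client. This is not a VMware product or any vendor's cross-cloud service; the kubeconfig context names ("eks-prod", "aks-prod", "gke-prod") are hypothetical placeholders for clusters you have already registered.

```python
# Minimal sketch: one deployment spec, applied unchanged to clusters on
# different clouds by switching only the kubeconfig context.
from kubernetes import client, config

def web_deployment() -> client.V1Deployment:
    container = client.V1Container(
        name="web",
        image="nginx:1.25",
        ports=[client.V1ContainerPort(container_port=80)],
    )
    return client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="web"),
        spec=client.V1DeploymentSpec(
            replicas=2,
            selector=client.V1LabelSelector(match_labels={"app": "web"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "web"}),
                spec=client.V1PodSpec(containers=[container]),
            ),
        ),
    )

def deploy_everywhere(contexts: list[str]) -> None:
    # One spec, many clouds: only the kubeconfig context changes per cluster.
    for ctx in contexts:
        api_client = config.new_client_from_config(context=ctx)
        apps = client.AppsV1Api(api_client=api_client)
        apps.create_namespaced_deployment(namespace="default", body=web_deployment())

if __name__ == "__main__":
    deploy_everywhere(["eks-prod", "aks-prod", "gke-prod"])  # hypothetical contexts
```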
>> So where should that standardization occur? 'Cause we're going to hear from some customers today. When I ask them about cloud chaos, they're like, "Well, the way we deal with cloud chaos is mono-cloud." They sort of put on the blinders, right? But of course, they may be risking not being able to take advantage of best-of-breed. So where should that standardization layer occur across clouds? >> [Vittorio Viarengo] Well, I also hear that from some customers. "Oh, we are one cloud." They are in denial. There's no question about it. In fact, when I met at our user conference with a number of CIOs, and I went around the room and I asked them, I saw the entire spectrum. (laughs) One person is in denial. "Oh, we're using AWS." I said, "Great." "And the private cloud, so we're all set." "Okay, thank you. Next." "Oh, the business units are using Azure." "Ah, okay. So you have three." "Oh, and we just bought a company that is using Google back in Europe." So, okay, so you got four right there. So that person is in denial. Then, you have the second category of customers that are seeing the problem, they're ahead of the pack, and they're building their own solution. We're going to hear from Walmart later today. >> Dave Vellante: Yeah. >> So they're building their own. Not everybody has the skills and the scale of Walmart to build their own. >> Dave Vellante: Right. >> So, eventually, then you get to the third category of customers. They're actually buying solutions from one of the many ISVs that you are going to talk with today. You know, whether it is HashiCorp or Snowflake or all of these. I will argue that any new company, any new ISV, is by definition a multicloud service company, right? And so these people... Or they're buying our Cross-Cloud Services to solve this problem. So that's the spectrum of customers out there. >> What's the stack you're focusing on specifically? What is VMware doing? Because virtualization is not going away. You're seeing a lot more in the cloud with networking, for example, this abstraction layer. What specifically are you guys focusing on? >> [Vittorio Viarengo] So, I like to talk about this beyond what VMware does, just 'cause I think this is an industry movement. A market is forming around multicloud services. And so it's an approach that pretty much a whole industry is taking, of building this abstraction layer. Our approach is to bring these services together to simplify things even further. So, initially, we were the first to see multicloud happening. You know, Raghu and Sanjay, back in what, like 2016, '17, saw this coming, and our first foray into multicloud was to take vSphere and our hypervisor and port it natively onto all the hyperscalers, which is a phenomenal solution to get your enterprise applications into the cloud and modernize them. But then we realized that customers were already in the cloud natively. And so we had to have (all chuckle) a religion discussion internally and drop that hypervisor religion and say, "Hey, we need to go and help our customers where they are, in a native cloud." And that's where we brought back Pivotal. We built tons around it. We shifted. And then Aria. And so basically, our evolution was to go from, you know, our hypervisor to cloud native. And then eventually we ended up at what we believe is the most comprehensive multicloud services solution, which covers application development with Tanzu, management with Aria, and then you have NSX for security and end-user computing for connectivity.
And so we believe that we have the most comprehensive set of integrated services to solve the challenges of multicloud, bringing simplicity into the picture. >> John Furrier: As some would say, multicloud and multi-environment, when you get to distributed computing with the edge, you're going to need that capability. And you guys have been very successful with private cloud. But to be devil's advocate, some are saying, like, you guys don't get public cloud yet. How do you answer that? Because there's a lot of work that you guys have done in public cloud, and it seems like private cloud successes are moving up into public cloud. Like networking. You're seeing a lot of that being configured in. So the enterprise-grade solutions are moving into the cloud. So what would you say to the skeptics out there that say, "Oh, I think you got private cloud nailed down, but you don't really have public cloud"? (chuckles) >> [Vittorio Viarengo] First of all, we love skeptics. Our engineering team loves skeptics and loves to prove them wrong. (John laughs) And I would never, ever bet against our engineering team. So I believe that VMware has been so successful in building a private cloud, and that technology actually became the foundation for the public cloud. But it is always hard to be new in a new environment, right? There's always that period where you have to prove yourself. But what I love about VMware is that VMware has what I like to call "enterprise pragmatism." The private cloud is not going away. So we're going to help our customers there, and then, as they move to the cloud, we are going to give them an option to adopt the cloud at their own pace, with VMware Cloud, to allow them to move to the cloud and be able to rely on the enterprise-class capabilities we built on-prem, in the cloud. But then, with Tanzu and Aria and the rest of the Cross-Cloud Services portfolio, being able to meet them where they are. If they're already in the cloud, have them have a single place to build applications, a single place to manage applications, and so on and so forth. >> John Furrier: You know, Dave, we were talking in the opening. Vittorio, I want to get your reaction to this, because we were saying in the opening that the market's obviously pushing this next gen. You see ChatGPT and the success of these new apps that are coming out. The business models are demanding kind of a digital transformation. The tech, the builders, are out there, and you guys have an interesting view, because your customer base is almost the canary in the coal mine, because this is an operations challenge as well as just enabling the cloud native. So, I want to get your thoughts on, you know, your customer base, VMware customers. They've been in IT ops for generations. And now, as that crowd moves and sees this Supercloud environment, it's IT again, but it's everywhere. It's not just IT in a data center. It's on-premises, it's cloud, it's edge. So your customer base is almost like a canary in the coal mine for this movement of how do you operationalize multiple environments? Which includes clouds, which includes apps. I mean, this is the core question. >> [Vittorio Viarengo] Yeah. And I want to make this an industry conversation. Forget about VMware for a second. We believe that there are like four or five major pillars that you need to implement to create this level of abstraction. It starts from observability.
If you don't know- You need to know where your apps are, where your data is, how the applications are performing, what the security posture is, and what their performance is. Then, you can do something about it. We call that the observability part of creating this abstraction. The second one is security- sorry, infrastructure. Creating an abstraction layer for infrastructure means being able to give the applications, and the developers who build applications, the right infrastructure for the application at the right time. Whether it is a VM, whether it's a Kubernetes cluster, or whether it's microservices, and so on and so forth. And so, that allows our developers to think about infrastructure just as code: it's available whenever my application needs it, at whatever cost makes sense for my application, right? The third part is security, and I can give you a very, very simple example. I was talking to the CIO of a major insurance company in Europe, and he was saying to me, "The developers went wild, built all these great front-office applications. Now the business is coming to me and says, 'What is my compliance report?'" And the guy is saying, "Say that I want to implement a policy that says, 'I want to encrypt all my data no matter where it resides.' How do I do it? I need to have somebody log into Amazon and configure it, then go to Google and configure it, then go to the private cloud." That's time and cost, right? >> Yeah. >> So, you need to have a way to enforce security policy, from the infrastructure to the app to the firewall, in one place and distribute it across clouds. And finally, the developer experience, right? Developers, developers, developers. (all laugh) We're always trying to keep up with... >> Host: You can dance if you want to do... >> [Vittorio Viarengo] Yeah, let's not make a fool of ourselves. More than usual. Developers are the kings and queens of the hill. They are. Why? Because they build the applications. They're making us money and saving us money. And so we need- And right now, they have to go into these different stacks. So, you need to give developers two things. One, a common development experience across these different Kubernetes distributions. And two, a way for the operators. To your point. The operators have fallen behind the developers. And they cannot go to the developers and tell them, "This is how you're going to do things." They have to see how they're doing things and figure out how to bring the guardrails underneath, so that developers can be developers, but the operators can lay down the tracks and the infrastructure there is secure and compliant. >> Dave Vellante: So two big inferences from that. One is self-serve infrastructure. You got- In a decentralized cloud, a Supercloud world, you got to have self-serve infrastructure, and it's got to be simple. And the second is governance. You mentioned security, but it's also governance. You know, data sovereignty, as we talked about. So the question I have, Vittorio, is: where does the customer start? >> [Vittorio Viarengo] It always depends on the business need, but to me, the foundational layer is observability. If you don't know where your stuff is, you cannot manage it, you cannot secure it, you cannot manage its cost, right? So I think observability is the bar to entry. And then it depends on the business needs, right? So, we go back to the CIO that I talked to. He is clearly struggling with compliance and security. >> Hosts: Mm hmm.
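To make the compliance example above concrete: the sketch below declares a single "encrypt everything at rest" policy once and checks it against an inventory gathered from every cloud, instead of someone logging into each console separately. It is a hypothetical illustration, not VMware Aria or any hyperscaler API; the CloudAdapter interface and the DataStore fields are invented for this example.

```python
# Hypothetical sketch: one declared policy checked against a cross-cloud
# inventory. CloudAdapter implementations are stand-ins for whatever per-cloud
# API calls (AWS, Azure, GCP, private cloud) a real system would make.
from dataclasses import dataclass
from typing import Iterable, Protocol

@dataclass
class DataStore:
    cloud: str
    name: str
    region: str
    encrypted_at_rest: bool

class CloudAdapter(Protocol):
    def list_data_stores(self) -> Iterable[DataStore]: ...

POLICY = {"encrypt_at_rest": True}  # declared once, enforced everywhere

def compliance_report(adapters: Iterable[CloudAdapter]) -> list[str]:
    """Return one violation line per data store that breaks the policy."""
    violations = []
    for adapter in adapters:
        for store in adapter.list_data_stores():
            if POLICY["encrypt_at_rest"] and not store.encrypted_at_rest:
                violations.append(
                    f"{store.cloud}/{store.name} ({store.region}): not encrypted at rest"
                )
    return violations
```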
>> And so, like many customers. And so, that's maybe where they start. There are other customers that are a little behind the pack in terms of building applications, right? And so they're looking at these, you know, innovative companies that have the developers that get the cloud and build all these applications. They are leaders in the industry. They're saying, "How do I get some of that?" Well, the way you get some of that is by adopting modern application development and platform operational capabilities. So, maybe that's where they should start. And so on and so forth. It really depends on the business. To me, observability is the foundational part of this. >> John Furrier: Vittorio, we've been on this conversation with you for over a year and a half now with Supercloud. You've been a leader in seeing the wave, you and Raghu and the team at VMware, among other industry leaders. This is our second event. In the minute and a half that we have left, when you get asked, "What is this Supercloud, multicloud, Cross-Cloud thing? What does it mean?"- I mean, I mentioned earlier, the market, the business models are changing, tech's changing, society needs more economic value out of the cloud, builders are out there- if someone says, "Hey, Vittorio, what's the bottom line? What's really going on? Why should I pay attention to this wave? What's going on?" How would you describe the relevance of Supercloud? >> I think that this industry is full of smart vendors and smart customers. And if we are smart about it, we look at the history of IT, and the history of IT repeats itself over and over again. You follow the- He said, "Follow the money." I say, "Follow the developers." That's how I made my career. I follow great developers. I look at, you know, Kit Colbert. I say, "Okay. I'm going to get behind that guy wherever he is going." And I try to add value to that person. I look at Raghu and all the great engineers that I was blessed to work with. And so the engineers go and explore new territories, and then the rest of the stack moves around them. The developers have gone multicloud. And just like in any iteration of IT, at some point, the way you get the right scale at the right cost is with abstractions. And you can see it everywhere, from, you know, bits and bytes integration, to SOA, to APIs and microservices. You can see it now, from best-of-breed hyperscalers across multiple clouds to creating an abstraction layer, a Supercloud, that creates a unified way of building, managing, running, securing, and accessing applications. So if you're a customer- (laughs) A minute and a half. (hosts chuckle) If you are a customer out there feeling the pain, you've got to adopt this. If you are a customer that is behind- and maybe you're in denial- look at the customers that are solving these problems today, and we're going to have some on today. See what they're doing and learn from them, so you don't make the same mistakes and you can get ahead of it. >> Dave Vellante: Gracely's Law, John. Brian Gracely. That history repeats itself and- >> John Furrier: And I think this one, "follow the developers," is interesting. And the other big wave, I want to get your comment real quick, is that developers aren't just application developers. They're network developers. The stack has been completely software-enabled, so that you have software-defined networking, you have software in all aspects of observability, infrastructure, security. The developers are everywhere.
It's not just software. Software is everywhere. >> [Vittorio Viarengo] Yeah. Developers, developers, developers. The other thing that we can tell- I can tell, and we know, because we live in Silicon Valley: we worship developers. But if you are out there in manufacturing, healthcare... If you have developers that understand this stuff, pamper them, keep them happy. (hosts laugh) If you don't have them, figure out where they hang out and go recruit them, because developers indeed make the IT world go round. >> John Furrier: Vittorio, thank you for coming on with that opening keynote here for Supercloud 2. We're going to unpack what Supercloud is all about in our second edition of our live performance here in Palo Alto. Virtual event. We're going to talk to customers, experts, leaders, investors, everyone who's looking at the future and what's being enabled by this new big wave coming on called Supercloud. I'm John Furrier with Dave Vellante. We'll be right back after this short break. (ambient theme music plays)
Breaking Analysis: Supercloud2 Explores Cloud Practitioner Realities & the Future of Data Apps
>> Narrator: From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> Enterprise tech practitioners, like most of us, want to make their lives easier so they can focus on delivering more value to their businesses. And to do so, they want to tap best-of-breed services in the public cloud, but at the same time connect their on-prem intellectual property to emerging applications which drive top-line revenue and bottom-line profits. But creating a consistent experience across clouds and on-prem estates has been an elusive capability for most organizations, forcing trade-offs and injecting friction into the system. The need to create seamless experiences is clear, and the technology industry is starting to respond with platforms, architectures, and visions of what we've called the Supercloud. Hello and welcome to this week's Wikibon Cube Insights, powered by ETR. In this Breaking Analysis we give you a preview of Supercloud 2, the second event of its kind that we've had on the topic. Yes, folks, that's right, Supercloud 2 is here. As of this recording, it's just about four days away: 33 guests, 21 sessions, combining live discussions and fireside chats from theCUBE's Palo Alto Studio with prerecorded conversations on the future of cloud and data. You can register for free at supercloud.world. And we are super excited about the Supercloud 2 lineup of guests, whereas Supercloud22 in August was all about refining the definition of Supercloud, testing its technical feasibility and understanding various deployment models. Supercloud 2 features practitioners, technologists and analysts discussing what customers need, with real-world examples of Supercloud, and will expose thinking around a new breed of cross-cloud apps- data apps, if you will- that change the way machines and humans interact with each other. Now, the example we'd use: if you think about applications today, say a CRM system, sales reps, what are they doing? They're entering data into opportunities, they're choosing products, they're importing contacts, et cetera. And sure, the machine can then take all that data and spit out a forecast by rep, by region, by product, et cetera. But today's applications are largely about filling in forms and/or codifying processes. In the future, the Supercloud community sees a new breed of applications emerging where data resides on different clouds, in different data stores, databases, lakehouses, et cetera. And the machine uses AI to inspect the e-commerce system, the inventory data, supply chain information and other systems, and puts together a plan without any human intervention whatsoever. Think about a system that orchestrates people, places and things, like an Uber for business. So at Supercloud 2, you'll hear about this vision along with some of today's challenges facing practitioners. Zhamak Dehghani, the creator of data mesh, is a headliner. Kit Colbert also is headlining. He laid out at the first Supercloud an initial architecture for what that's going to look like. That was last August. And he's going to present his most current thinking on the topic. Veronika Durgin of Saks will be featured and talk about data sharing across clouds and, you know, what she needs in the future. One of the main highlights of Supercloud 2 is a dive into Walmart's Supercloud. Other featured practitioners include Western Union, Ionis Pharmaceuticals, and Warner Media.
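As a toy illustration of the data-app idea described above- an application that reads signals living in different clouds and emits a plan on its own, rather than waiting for a human to fill in forms- here is a minimal sketch. The three fetch_* functions are hypothetical stand-ins for queries against an e-commerce system, an inventory store, and a supply-chain feed; no real services or vendor APIs are involved.

```python
# Toy "data app": pull demand, inventory, and lead-time signals (which in the
# Supercloud vision could live on different clouds) and emit a reorder plan
# with no human in the loop. All data here is hard-coded for illustration.
def fetch_daily_demand() -> dict[str, int]:       # e.g. from a warehouse on cloud A
    return {"sku-123": 40, "sku-456": 5}

def fetch_on_hand_inventory() -> dict[str, int]:  # e.g. from a lakehouse on cloud B
    return {"sku-123": 120, "sku-456": 300}

def fetch_lead_time_days() -> dict[str, int]:     # e.g. from a supplier feed on cloud C
    return {"sku-123": 10, "sku-456": 21}

def replenishment_plan(safety_days: int = 7) -> dict[str, int]:
    """Order enough of each SKU to cover lead time plus a safety buffer."""
    demand = fetch_daily_demand()
    on_hand = fetch_on_hand_inventory()
    lead = fetch_lead_time_days()
    plan = {}
    for sku, daily in demand.items():
        needed = daily * (lead[sku] + safety_days)
        if on_hand[sku] < needed:
            plan[sku] = needed - on_hand[sku]
    return plan

print(replenishment_plan())  # e.g. {'sku-123': 560}
```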
We've got deep, deep technology dives with folks like Bob Muglia, David Flynn, Tristan Handy of DBT Labs, and Nir Zuk, the founder of Palo Alto Networks, focused on security. Thomas Hazel, who's going to talk about a new type of database for Supercloud. There are several analysts, including Keith Townsend, Maribel Lopez, George Gilbert, Sanjeev Mohan and so many more guests- we don't have time to list them all. They're all up on supercloud.world with a full agenda, so you can check that out. Now let's take a look at some of the things that we're exploring in more detail, starting with the Walmart Cloud Native Platform- they call it WCNP. We definitely see this as a Supercloud, and we dig into it with Jack Greenfield. He's the head of architecture at Walmart. Here's a quote from Jack: "WCNP is an implementation of Kubernetes for the Walmart ecosystem. We've taken Kubernetes off the shelf as open source." By the way, they do the same thing with OpenStack. "And we have integrated it with a number of foundational services that provide other aspects of our computational environment. Kubernetes off the shelf doesn't do everything." And so what Walmart chose to do- they took a do-it-yourself approach to build a Supercloud, for a variety of reasons that Jack will explain, along with Walmart's so-called triplet architecture connecting on-prem, Azure and GCP. No surprise, there's no Amazon at Walmart, for obvious reasons. And what they do is they create a common experience for devs across clouds. Jack is going to talk about how Walmart is evolving its Supercloud in the future. You don't want to miss that. Now, next, let's take a look at how Veronika Durgin of Saks thinks about data sharing across clouds. Data sharing, we think, is a potential killer use case for Supercloud. In fact, let's hear it in Veronika's own words. Please play the clip. >> How do we talk to each other? And more importantly, how do we data share? You know, I work with data, you know, this is what I do. So if, you know, I want to get data from a company that's using, say, Google, how do we share it in a smooth way where it doesn't have to be this crazy, I don't know, SFTP file moving? So that's where I think Supercloud comes to me in my mind, is like practical applications. How do we create that mesh, that network, that we can easily share data with each other? >> Now, data mesh is a possible architectural approach that will enable more facile data sharing and the monetization of data products. You'll hear Zhamak Dehghani live in studio talking about what standards are missing to make this vision a reality across the Supercloud. Now, one of the other things that we're really excited about is digging deeper into the right approach for Supercloud adoption. And we're going to share a preview of a debate that's going on right now in the community. Bob Muglia, former CEO of Snowflake and Microsoft exec, was kind enough to spend some time looking at the community's Supercloud definition, and he felt that it needed to be simplified. So in near real time he came up with the following definition that we're showing here. I'll read it: "A Supercloud is a platform that provides programmatically consistent services hosted on heterogeneous cloud providers." So not only did Bob simplify the initial definition, he stressed that the Supercloud is a platform versus an architecture, implying that the platform provider- e.g., Snowflake, VMware, Databricks, Cohesity, et cetera- is responsible for determining the architecture.
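Circling back for a moment to Veronika Durgin's point about replacing ad hoc SFTP file moving with governed sharing, here is one hedged illustration of what that can look like in practice, using Snowflake-style secure data sharing as a concrete instance (since Snowflake comes up throughout this analysis). This is not Saks' actual setup; the account, database, and share names are made up, and other platforms have their own equivalents.

```python
# Illustrative only: publish a governed, read-only share of a table instead of
# exporting files and moving them over SFTP. Names are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_org-my_account",  # hypothetical account identifier
    user="DATA_ENGINEER",
    password="...",               # use key-pair or SSO auth in practice
)
cur = conn.cursor()

# Create the share and grant access to the objects being shared.
cur.execute("CREATE SHARE sales_share")
cur.execute("GRANT USAGE ON DATABASE sales_db TO SHARE sales_share")
cur.execute("GRANT USAGE ON SCHEMA sales_db.public TO SHARE sales_share")
cur.execute("GRANT SELECT ON TABLE sales_db.public.orders TO SHARE sales_share")

# The partner account mounts the share as a database on their side: no copies,
# no file transfer, and access can be revoked centrally at any time.
cur.execute("ALTER SHARE sales_share ADD ACCOUNTS = partner_org.partner_account")
```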
Now, interestingly, in the shared Google doc that the working group uses to collaborate on the Supercloud definition, Dr. Nelu Mihai, who is actually building a Supercloud, responded as follows to Bob's assertion: "We need to avoid creating many Supercloud platforms with their own architectures. If we do that, then we create other proprietary clouds on top of existing ones. We need to define an architecture of how Supercloud interfaces with all other clouds. What is the information model? What is the execution model? And how will users interact with Supercloud?" What does this seemingly nuanced point tell us, and why does it matter? Well, history suggests that de facto standards will emerge more quickly to resolve real-world practitioner problems and catch on more quickly than consensus-based and standards-based architectures. But in the long run, the latter may serve customers better. So we'll be exploring this topic in more detail at Supercloud 2, and of course we'd love to hear what you think- platform, architecture, both? Now, one of the real technical gurus that we'll have in studio at Supercloud 2 is David Flynn. He's one of the people behind the movement that enabled enterprise flash adoption, that craze. And he did that with Fusion-io, and he is now working on a system to enable read/write data access to any user, in any application, in any data center or on any cloud, anywhere. So think of this company as a Supercloud enabler. Allow me to share an excerpt from a conversation David Floyer and I had with David Flynn last year. He as well gave a lot of thought to the Supercloud definition and was really helpful with an opinionated point of view. He said something to us that was, we thought, relevant: "What is the operating system for a decentralized cloud? The main two functions of an operating system or an operating environment are, one, the process scheduler and, two, the file system. The strongest argument for Supercloud is made when you go down to the platform layer and talk about it as an operating environment on which you can run all forms of applications." So a couple of implications here that we'll be exploring with David Flynn in studio. First, we're inferring from his comment that he's in the platform camp, where the platform owner is responsible for the architecture- and there are obviously trade-offs there and benefits, but we'll have to clarify that with him. And second, he's basically saying you kill the concept the further you move up the stack. So the further you move up the stack, the weaker the Supercloud argument becomes, because it's just becoming SaaS. Now, this is something we're going to explore to better understand his thinking on this, but also whether the existing notion of SaaS is changing and whether or not a new breed of Supercloud apps will emerge. Which brings us to this really interesting fellow that George Gilbert and I riffed with ahead of Supercloud 2: Tristan Handy. He's the founder and CEO of DBT Labs, and he has a highly opinionated and technical mind. Here's what he said: "One of the things that we still don't know how to API-ify is concepts that live inside of your data warehouse, inside of your data lake. These are core concepts that the business should be able to create applications around very easily. In fact, that's not the case, because it involves a lot of data engineering pipeline and other work to make these available.
So if you really want to make it easy to create these data experiences for users, you need to have an ability to describe these metrics and then to turn them into APIs to make them accessible to application developers, who have literally no idea how they're calculated behind the scenes- and they don't need to." A lot of implications to this statement that we'll explore at Supercloud 2. Zhamak Dehghani's data mesh comes into play here, with her critique of hyper-specialized data pipeline experts with little or no domain knowledge. Also the need for simplified self-service infrastructure, which Kit Colbert is likely going to touch upon; Veronika Durgin of Saks and her ideal state for data sharing; along with Harveer Singh of Western Union. They've got to deal with 200 locations around the world, data privacy issues, data sovereignty- how do you share data safely? Same with Nick Taylor of Ionis Pharmaceuticals. And not to blow your mind, but Thomas Hazel and Bob Muglia posit that to make data apps a reality across the Supercloud you have to rethink everything. You can't just let in-memory databases and caching architectures take care of everything in a brute-force manner. Rather, you have to get down to really detailed levels- even things like how data is laid out on disk, i.e., flash- and think about rewriting applications for the Supercloud and the ML/AI era. All of this and more at Supercloud 2, which wouldn't be complete without some data. So we pinged our friends from ETR, Eric Bradley and Darren Bramberm, to see if they had any data on Supercloud that we could tap. And so we're going to be analyzing a number of the players as well at Supercloud 2. Now, many of you are familiar with this graphic here; we show some of the players involved in delivering or enabling Supercloud-like capabilities. On the Y axis is spending momentum, and on the horizontal axis is market presence, or pervasiveness in the data- so Net Score versus what they call overlap, or N, in the data. And the table insert shows how the dots are plotted. Now, not to steal ETR's thunder, but the first point is you really can't have Supercloud without the hyperscale cloud platforms, which is shown on this graphic. But the exciting aspect of Supercloud is the opportunity to build value on top of that hyperscale infrastructure. Snowflake here continues to show strong spending velocity, as do Databricks, Hashi, Rubrik. VMware Tanzu, which we all put under the magnifying glass after the Broadcom announcements, is also showing momentum. Unfortunately, due to a scheduling conflict we weren't able to get Red Hat on the program, but they're clearly a player here. And we've put Cohesity and Veeam on the chart as well, because backup is a likely use case across clouds and on-premises. And now one other call out that we drill down on at Supercloud 2 is Cloudflare, which actually uses the term supercloud, maybe in a different way. They look at Supercloud really as, you know, serverless on steroids. And so the data brains at ETR will have more to say on this topic at Supercloud 2, along with many others. Okay, so why should you attend Supercloud 2? What's in it for me, kind of thing? So first of all, if you're a practitioner and you want to understand what the possibilities are for doing cross-cloud services, for monetizing data, how your peers are doing data sharing, how some of your peers are actually building out a Supercloud, you're going to get real-world input from practitioners.
If you're a technologist trying to figure out various ways to solve problems around data, data sharing, and cross-cloud service deployment, there are going to be a number of deep technology experts sharing how they're doing it. We're also going to drill down with Walmart into a practical example of Supercloud, with some other examples of how practitioners are dealing with cross-cloud complexity. Some of them, by the way, have kind of thrown up their hands and said, "Hey, we're going mono-cloud." And we'll talk about the potential implications, dangers and risks of doing that, and also some of the benefits. You know, there's a question, right? Is Supercloud the same wine in a new bottle, or is it truly something different that can drive substantive business value? So look, go to supercloud.world. It's January 17th at 9:00 AM Pacific. You can register for free and participate directly in the program. Okay, that's a wrap. I want to give a shout out to the Supercloud supporters. VMware has been a great partner as our anchor sponsor, with Chaos Search, Proximo, and Alura as well. For contributing to the effort I want to thank Alex Myerson, who's on production and manages the podcast. Ken Schiffman is part of his supporting cast as well. Kristen Martin and Cheryl Knight help get the word out on social media and in our newsletters. And Rob Hof is our editor-in-chief over at SiliconANGLE. Thank you all. Remember, these episodes are all available as podcasts- wherever you listen, we really appreciate the support that you've given. We just saw some stats from Buzzsprout: we hit the top 25% and we're almost at 400,000 downloads last year. So we really appreciate your participation. All you got to do is search "Breaking Analysis podcast" and you'll find those. I publish each week on wikibon.com and siliconangle.com. Or if you want to get ahold of me, you can email me directly at David.Vellante@siliconangle.com, or DM me @DVellante, or comment on our LinkedIn posts. I want you to check out etr.ai- they've got the best survey data in the enterprise tech business. This is Dave Vellante for theCUBE Insights, powered by ETR. Thanks for watching. We'll see you next week at Supercloud 2, or next time on Breaking Analysis. (light music)
theCUBE on Supercloud | AWS Summit New York 2022
>> Welcome back to theCUBE's live coverage, coming to you from the Big Apple, New York City. We're talking all things AWS Summit, but right now I've got two powerhouses- you know them, you love them- John Furrier and Dave Vellante, going to be talking about Supercloud. Guys, we've been talking a lot about this. There's a big event coming up on theCUBE, August 9th, and I've got to start, Dave, with you, because we talk about it pretty much in every interview where it's relevant. Why Supercloud? >> Yeah, so John Furrier years ago started a tradition, Lisa, prior to AWS re:Invent, which was to lay down the expectation for our audiences of what they should be looking for at AWS re:Invent. Okay, John, when did that start? >> 2012, 2013. Actually, 2013 was our first, but 2015 was the first time we got access to Andy Jassy, who wasn't doing any briefings, and we realized that the whole industry started looking at Amazon Web Services as a structural forcing function of massive change. Some say inflection point; we were saying complete redefinition. >> So you wrote the trillion dollar baby. >> Yeah, right. Which actually turns into probably multi-trillion dollars. We got it right on that one. >> Surprisingly, it was pretty obvious. So every year since then, John has published the seminal article prior to re:Invent. So this year we were talking- we're coming out of the isolation economy- and, John, also Adam Selipsky was the new CEO, so we had a one-on-one with Adam. >> That's right. >> And then that's where the convergence between Andy Jassy and Adam Selipsky kicked in, which is essentially- those guys work together, even though he went off and boomeranged back in, as they say, at AWS. But what's interesting was that Adam Selipsky's point of view piggybacked Jassy, but he had a different twist. >> Yeah. >> So, you know, people who didn't really put a lot of thought into it said, "Oh, he's copying Microsoft, moving up the stack." We're like, no, no, no, no, no- something structural is happening again. And so John wrote the piece and he started sharing it, we're collaborating, he said, "Hey, Dave, take a look, add your perspectives." And then Jerry Chen had just written "Castles in the Cloud," and he talked about sub-markets, and we were sort of noodling. And one of the other things was, in 2018, 2019, around that time at AWS re:Invent, there was this friction between, like, Snowflake and AWS, because Redshift separated compute from storage, which was Snowflake's whole thing. Now fast forward to 2021, after we're leaving, you know, the COVID economy- by the way, everyone was complaining, they were asking Jassy, "Are you competing with your ecosystem?"- the classic trope, right? And then- remember, Jassy used to use Cloudera as the example; I would like to maybe pick a better example- Snowflake became that example. And what the transition was: it went from "Hey, we're kind of competitive"- for sure there's a lot of examples- but it went from "we're competitive, they're stealing our stuff" to "you know what, we're making so much money building on top of AWS" specifically, but also the clouds and cross clouds. So we said there's something new happening in the ecosystem, and then this term supercloud popped up to connote a layer that floats above the hyperscale capex. It's not PaaS, it's not SaaS; it's the combination of those things on top of a new digital infrastructure. And we chose the term supercloud. We liked it better than multi-cloud because multi-cloud- >> At least one other point too: I think four or five years earlier, Dave and I, across not just AWS re:Invent but all of our other events, were
speculating that there might be tier-two cloud service provider models- and we talked with Intel about this and others, just kind of evaluating it, staring at it- and we meant by tier two, like, maybe competing against Amazon. But what happened was, it wasn't a tier-two cloud; it was a supercloud built on the capex of AWS, which means a company didn't have to build AWS to be like AWS- and everybody wanted to be like AWS. So we saw the emergence of the smart companies saying, "Hey, let's refactor our business model in the category or industry scope and dominate with cloud scale," and they did it. That then continued. That was the premise of Chen's post, which was kind of riffed on theCUBE initially, which is: you can have a moat in a castle in the cloud and have a competitive advantage and a sustainable differentiation model. And that's exactly what's happening. And then you introduce the edge and hybrid- you now have a cloud operating model that supercloud extends as a substrate across all environments. So it's not multi-cloud, which sounds broken and, like, disjointed; it's hybrid cloud, which is the hybrid operating model at scale. And you don't have to be Amazon to take advantage of all the value creation, since they took care of the capex. Now they win too on the other side, because they're selling EC2 and storage and ML and AI. And this is new, and this is information that people might not know about: internally at AWS there was a debate, Dave- okay, I heard this from sources- do we go all in and compete and just own the whole category, or open the ecosystem and coexist with [ __ ]? Why do we have these other companies, or Snowflake? And guess what the decision was: let's make it an open ecosystem, and let's have our own offerings as well, and let the winner take off. >> Smart, because they can't hire enough people. And we just had AWS and Snowflake on theCUBE a few weeks ago talking about the partnership, the co-opetition, the value in it- but what's been driving it is the voice of the customer. >> But I want to ask you: paint the picture for the audience of the critical key components of supercloud. What are those? >> Yeah, so I think first and foremost, supercloud, as John was saying, it's not multi-cloud. Chuck Whitten had a great phrase at Dell Tech World: he said multi-cloud by default, right, versus multi-cloud by design. And multi-cloud has been by default. It's been this sort of "I run in AWS, and I run my stack in Azure, or I run my stack in GCP and it works, or I wrap my stack in a container and host it in the cloud." That's what multi-cloud has been. So the first sort of concept is, it's a layer that abstracts the underlying complexity of all the clouds, all the primitives. It takes advantage of maybe Graviton or Microsoft tooling, hides all that, and builds new value on top of that. The other piece of supercloud is it's ecosystem-driven- really interesting story you just told- because literally Amazon can't hire everybody, right? So they have to rely on the ecosystem for feature acceleration. So it also includes a PaaS layer- a super-PaaS layer, we call it- because you need to develop applications that are specific to the problem that the supercloud is solving. So it's not a generic PaaS like OpenShift; it's specific to whether it's Snowflake or [ __ ] or Aviatrix, so that developers can actually build on top of it and not have to worry about that underlying layer. >> And also there are some people that are criticizing, um, what we're doing- in a good way, because we want to have an open concept. Sure,
but here's the thing that a lot of people don't understand: they're criticizing, or trying to kind of shoot holes in, the new structural change that we're identifying, by comparing it to the old. That's like comparing mainframes and minicomputers- it's like saying, "Well, the mainframe does it this way, therefore there's no way that's going to be legitimate." So the old thinking, Dave, is from people that have no real foresight into the new model, right? And so they don't really get it, right? So what I'm saying is that we look at structural change. Structural change is structural change; it either happens or it doesn't. So what we're observing is the fact that Snowflake didn't design their solution to be multi-cloud. They did it all on AWS and then said, "Hey, why are we going to stop there? Let's go to Azure, because Microsoft's got a boatload of customers, because they have a vertically stacked integration for their install base." So if I'm Snowflake, why wouldn't I be on Azure? And the same for GCP, and the same for other things. So this idea that you can get the value of what Amazon did- leverage all that value- without paying for it up front is a huge dynamic. And that's not just saying, "Oh, that's cloud." That's saying, "I have cloud-like scale, a cloud-like value proposition," which will look like an ecosystem. So to me, the acid test is: if I build on top of, say, [ __ ] or, say, Snowflake or a supercloud, by default I'm either a category leader- I own the data at scale- or I'm sharing data at scale and I have an ecosystem, people are building on top of me. So that's a platform. >> So that's really difficult. So what's happening is these ecosystem partners are taking advantage, as John said, of all the hyperscale capex, and they're building out their version of a distributed global system. And then the other attribute of supercloud is it's got metadata management capability. In other words, it knows, if I'm optimizing for latency, where in the supercloud to get the data, or how to protect privacy or sovereignty, or how many copies to make to have the proper data protection, or where the air gap should be for ransomware. So these are examples of very specific, purpose-built superclouds that are filling gaps that the hyperscalers aren't going after. >> What's a good example of a specific supercloud that you think really articulates what you guys are talking about? >> I think there are a lot of them. I think Snowflake is a really good example. I think VMware is building a multi-cloud management system. I think Aviatrix, in virtual, you know, private cloud networking and high-performance networking. I think, to a certain extent, what Oracle is doing with Azure definitely looks like a supercloud. I think what Capital One is doing, taking their own tools and moving that to Snowflake- now, they're not cross-cloud yet, but I predict that they will be. I think what Veeam is doing in data protection; Dell, what they showed at Dell Tech World with Project Alpine- these are all early examples of supercloud. >> Well, here's an indicator, here's how you look at the example. So to me, if you're just lifting and shifting, that was the first-gen cloud- that's not changing the business model. So I think the number one thing to look at is the company, whether they're in a vertical like insurance or fintech or financial: are they refactoring their spend, not as an IT cost, but as a refactoring of their business model? >> Yes. >> Like what Snowflake did, Dave. Or do they say, "Okay, I'm going to change how I operate," not change my business model per se,
or not my business identity. If I'm going to provide financial services, I don't have to spend capex- it's operating expenses. I get the capex leverage, I redefine, I get the data at scale, and now I become a service provider to everybody else, because scale will determine the power law of who wins in the verticals and in the industry. >> So we believe that Snowflake is a data warehouse in the cloud- they call it a data cloud. Now, I don't think Snowflake would like that. >> Dave, I call them a data warehouse. >> No, a super data cloud. But so the other key here is, you know, the old saying that Andreessen came up with, I guess: every company's a software company. Well, what does that mean? It means every company is a software company, every company is going digital. Well, how are they going to do that? They're going to do that by taking their business, their data, their tooling, their proprietary, you know, moat, and moving that to the cloud so they can compete at scale. Every company should be- if they're not thinking about doing a supercloud- well, Walmart... >> I think Andreessen's wrong. I think I would revise and say- to Andreessen and the brain trust at Andreessen Horowitz- that that's no longer relevant. Every company isn't a software company; the software industry is called open source, everybody is an open source company, and every company that survives will be a supercloud. >> Yeah. >> To me, if you're not looking at supercloud as a strategy to get value and refactor your business model- take advantage of what you're paying for, but you're paying now in a new way, you're building out value- then you're either going to be a supercloud or get services from a supercloud. If you're not, it's like the old joke, Dave: if you're at the table and you don't know who the sucker is, it's probably you, right? So if you're looking at the marketplace, you're saying, if I'm not a supercloud I'm probably going to have to work with one, because they're going to have the data, they're going to have the insights, they're going to have the scale, they're going to have the castle in the cloud, and they will be called a supercloud. >> So in customer conversations, helping customers identify workloads to move to the cloud, what are the ideal workloads and services to run in supercloud? >> So I honestly think virtually any workload could be a candidate, and I think that it's really the business that they're in that's going to define the workload. I'll say what I mean. So there are certain businesses where low-latency, high-performance transactions are going to matter- that's, you know, kind of Oracle's business. There are certain businesses, like Snowflake, where data sharing is the objective: how do I share data in a governed way, in a secure way, in any location across the world, that I can monetize? So that's their objective. You take a data protection company like Veeam- their objective is to protect data. So they have very specific objectives that ultimately dictate what the workload looks like. Couchbase is another one. They, in my opinion, are doing some of the most interesting things at the edge, because this is where, when you really push companies in the cloud, including the hyperscalers, when they get out to the far edge it starts to get a little squishy. Couchbase actually is developing capabilities to do that, and to me that's the big wild card. John, I think you described it accurately: the cloud is expanding. You've got public clouds no longer just being remote services; you're including on-prem and now expanding out to the near edge and the deep- what do you call it,
deep edge or far edge? Lower Sousa called it the tiny edge, right, the deep edge. >> Well, I mean, look at Amazon's Outposts announcement. To me, HPE has an opportunity, Dell has opportunities- the hardware box companies, they have an opportunity to be that gear, to be an outpost, to be their own outpost. They've got better stacks, they have better gear; they just got to run cloud on it. >> Yeah, right. That's an edge node, right? >> So that would be part of the supercloud. So this is where I think people that are looking at the old models, like operating systems or systems mindsets from the '80s- they're not understanding the new architecture. What I would say to them is, yeah, I hear what you're saying, but the structural change is the nodes on the network- distributed computing, if you will- is going to run hybrid cloud all the way across. The fact that it's multiple clouds is just coincidence on who's got the best capex value that people build on for their supercloud capability. So why wouldn't I be on Azure if Microsoft's going to give me all their customers that are running Office 365 and Teams? Great. If I want to be on Amazon's kind of suite, which is their ecosystem, why wouldn't I want to tap into that? So again, you can patch it all together in the supercloud. So I think the future will be distributed computing cloud architecture, end to end. And we felt that was different from multi-cloud. You know, if you want to call it multi-cloud 2.0, that's fine, but, you know, frankly, sometimes we get criticized for not defining it tightly enough, but we continue to evolve that definition. I've never really seen a great definition of multi-cloud. I think multi-cloud by default was the definition: I run in multiple clouds, you know, it works in Azure. It's not a strategy; it's a broken name. It's a symptom, right? It's a symptom of multi-vendor, is really what multi-cloud has been. And so we felt like it was a new term. Examples- look at what we're talking about: Snowflake, Databricks- Databricks is another good one- these are examples, Goldman Sachs- and we felt like the term immediately connotes something bigger, something that sits above the clouds and is part of a digital platform. You know, people poo-poo the metaverse because it's really, you know, not well defined, but every 15 or 20 years this industry goes through this. >> Dave, let me ask you a question- so, uh, Lisa, you too. If I'm in the insurance vertical, and I'm an insurance company, I have competitors, my customers can go there and do business with that company, and, you know, they all know that, they go to the same conferences. But in that sector, now you have new dynamics. Your IT spend isn't just going to keep the lights on and make your apps work- your back-end systems and your mobile app to get your whatever. Now it's like, I have cloud scale. So what if I refactored my business model, became a supercloud, and became the major, primary service provider to all the competitors and the people that are the channel partners of the ecosystem? That means that company could change the category totally, okay, and become the dominant category leader, literally, in two, three years. If I'm GEICO, okay, I got business in the cloud because I got the app and I'm doing transactions on GEICO. But with all the data that they're collecting, there are adjacent businesses that they can get into. Maybe they're in the safety business, maybe they can sell data to governments, maybe they can inform logistics and highway, you know, patterns. Roll up all the people that don't have the same scale
they have and service them with that data and they get subscription revenue and they can build on top of the geico super insurance cloud right yes it's it's unlimited opportunity that's why it's but the multi-trillion dollar baby so talk to us you've done an amazing job of talking which i know you would of why super cloud what it is the critical components the key workloads great examples talk to us in our last few minutes about the event the cube on super cloud august 9th what's the audience going to who are they going to hear from what are they going to learn yeah so august 9th live out of our palo alto studio we're going to have a program that's going to run from 9 a.m to 1 p.m and we're going to have a number of industry luminaries in there uh kit colbert from from vmware is going to talk about you know their strategy uh benoit de javille uh from snowflake is going to is going to be there of g written house of sky-high security um i i i don't want to give it away but i think steve mullaney is going to come on adrian uh cockroft is coming on the panel keith townsend sanjeev mohan will be on so we'll be running that live and also we'll be bringing in pre-recorded interviews that we'll have prior to the show that will run post the live event it's really a pilot virtual event we want to do a physical event we're thinking but the pilot is to bring our trusted friends together they're credible that have industry experience to try to understand the scope of what we're talking about and open it up and help flesh out the definition make it an open model where we can it's not just our opinion we're observing identifying the structural changes but bringing in smart people our smart friends and companies are saying yeah we get behind this because it has it has legs for a reason so we're gonna zoom out and let people participate and let the conversation and the community drive the content and that is super important to the cube as you know dave but i think that's what's going on lisa is that it's a pilot if it has legs we'll do a physical event certainly we're getting phones to bring it off the hook for sponsors so we don't want to go and go all in on sponsorships right now because it's not about money making it's about getting that super cloud clarity around to help companies yeah we want to evolve the concept and and bring in outside perspectives well the community is one of the best places to do that absolutely organic it's an organic community where i mean people want to find out what's going on with the best practices of how to transform a business and right now digital transformation is not just getting digitized it's taking advantage of the technology to leapfrog the competition so all the successful people we talked to at least have the same common theme i'm changing my game but not changing my game to the customer i'm just going to do it differently better faster cheaper more efficient and have higher margins and beat the competition that's the company doesn't want to beat the competition go to thecube.net if you're not all they're all ready to register for the cube on supercloud august 9th 9am pacific you won't want to miss it for john furrier and dave vellante i'm lisa martin we're all coming at you from new york city at aws summit 22. i'll be right back with our next guest [Music] you
Alice Taylor, The Walt Disney Studios & Soumyendu Sarkar, HPE | HPE Discover 2020
From around the globe, it's theCUBE, covering HPE's Discover Virtual Experience. Brought to you by HPE. >> Hello and welcome back to theCUBE's coverage of HPE Discover Virtual Experience. This is theCUBE, I'm John Furrier, your host. We're here in the Palo Alto Studio for the remote interviews. We have a great innovation story here with Disney and HPE: Alice Taylor, Vice President of Content Innovation with studioLAB at Disney, and Soumyendu Sarkar, distinguished technologist and director of AI at HPE. Thanks for coming on, Alice. Soumyendu, thank you for taking the time. >> No worries. Great to be here. Hi. >> Hi. >> I love this story. I think it's an innovation story, and I think it's going to be one that we'll experience in our life going forward, and that is media, video, and its experiences, and this innovation about AI. It's a lot to do with the collaboration between Disney studioLAB, Alice, that you're running, and it's super, super important and fun as well, and very relevant and cool. So first, before we get started, Alice, take a minute to explain a little about yourself and how studioLAB came about. >> Oh my goodness. studioLAB is just in its second year of operation. It was an idea that was had by our CTO, I'm going to say three years ago. And at the time, just previously before that, I had a startup company that came through the Disney accelerator. So I was already inside the building, and the team there said... well, the CTO there, the boss, said, you know, we need to start up an innovation lab that will investigate storytelling through emerging technology. And that's basically been the majority of my background. So I said yes. And then since then we've been growing a team. We opened the lab in May of 2018, and here we are, in the middle of a pandemic, but it has grown like crazy. It's just a wonderful place to be and to operate. And we've been doing some amazing projects with some amazing partners. >> And it's not unusual that an entrepreneur has this kind of role, to think outside the box. We'll get at some of that. Talk about your experience as an entrepreneur, how you got into this position, because you came in as an entrepreneur, you're doing some creative things. Tell us that story real quick. >> Yeah. Okay. Well, so as you can tell, I'm British. My actual background, my whole career, started in technology in the mid 90s, as I started as a trainee video editor, but then switched very quickly in '95 to building websites, and from there on, it was internet all the way. But I've always focused on storytelling, and I, you know, much of my background is working for broadcasters and media and content creators. So I was five years at the BBC in their R and D department, and I'm actually out here as VP of digital media for them, and then Channel 4 as well. And throughout the whole process, I was always interested in how to tell stories with new technology and the new mediums as they emerged. So yeah, slight side story, and doing a startup, which was actually in toys and video games, but again, big digital storytelling environments for children. And then I came round robin, if you like, into Disney, and here we are, still looking at how to make films and episodic content even more, you name it, faster, better, more exciting, using the best and greatest in emerging tech as we find it. >> The lab that you're doing, it's an accelerant almost for new technologies. Your job is to, what, look out over the horizon, next 10 years or so, to figure out? >> What's next.
It's not a structured thing. You have some rein to be creative and experiment? >> Well, yeah, I mean, the studioLAB, at the studios... well, Disney has eight studios at the moment, and what we do is we look at actually the whole breadth of storytelling. So right from the moment when a creative has an idea through to how our guests and fans might be receiving the end product out in the world, and we segregate that whole breadth into three categories: Ideate, when, you know, the process of generating the idea and building it; Make, how we make it, where we make it, what we make it with; and then Experience, how we experience it out in the world. So we have a whole slew of projects. The studioLAB works with some of the best technology companies in the world, and we call those our innovation partners, and we sign these partnerships really to bring what we like to call superpowers to the system. We like to think that the combination of those companies and what comes out of these projects is going to give our filmmakers superpowers, but also that combinatorial effect of Disney, you know, in this case, for instance, working with HPE, like producing something that Disney couldn't necessarily do on its own, or that HPE couldn't necessarily do on its own either. So yeah, it's a huge remit, and we don't look quite so far out, generally speaking, as 10 years. It's more like three to now. We don't do day to day operational work, but we try to pick something up a couple of years before it's going to be operationally ready and really investigate it then and get a bit of a headstart. >> Well, it's great to have HPE as a partner and having that bench of technology, software, and people, and it's just a nice power source for you as well. >> Exactly. >> So Soumyendu, talk about HPE's relationship with Disney, because you got a lot of deep technical, from the lab standpoint, to resilient technology. How are you involved? What's your role? You guys sitting around riffing and put a whiteboard together and say, hey, we're going to solve these big problems? ... Here's the future of consumption, here's the future of video... What goes on? Tell us the relationship between you guys. >> Yeah, it's a good question. At HPE, we can not only make the servers, but what we also do is we work quite a lot on optimizing some of the artificial intelligence solutions and algorithms on the GPUs and scale it across servers. So this opportunity came up from Disney, where Disney came up with a very innovative solution where they were solving the video quality problem. As you know, there are a lot of blemishes in the video that can come up, and Disney wanted to fix all of them. And they came up with a great algorithm, but what happens is, like, with a great algorithm comes a huge amount of computational complexity, which needs quite a bit of heterogeneous compute, both in parallel processing and in sequential processing. So we thought that it's a perfect, I'd say, combination of two skillsets to make this video quality software execute at speeds which are needed for production in Disney. >> So it's good to have a data center whenever you need it. You guys have some great technology. We'll hear a lot more from the execs at HPE in our reporting. Alice, we want to get your thoughts. We're covering some of those new edge technologies, we're talking about new experiences.
I gave a talk at Sundance a few years ago called the new creative class, and it's really about this next wave of art and filmmakers who are using the tools of the trade, which is a cellphone, you know, really easy to set up a studio and use the technology. Can you give us some examples of how the studioLAB collaborates with filmmakers and the execs to push the art and technology of storytelling to be fresh? Because the signs of the times are Instagram and TikTok. This is just very elementary, the quality and the storytelling is pretty basic, dopamine driven, but you can almost imagine the range of quality that's going to come, so access to more people, certainly more equipment and cameras, et cetera. What's next? How do you guys see it? And what are some examples you can share? >> Oh, that's an amazing question. I mean, we're working on films and episodics rather than very short form content, obviously. But you're absolutely right. There's a lot of consumer grade technology that is entering the production pipeline in many ways and in many areas, whether it's phones or iPads, using certain bits of software. One of the things that we're building at the moment is the ability to generate photogrammetry models, capturing with consumer drones or even iPhones, and then getting that data into a 3-D model as soon as possible. There's a really big theme of what we want to do. It's like, make the process more efficient so that our creatives and the folks working on productions aren't having to slog through something that's tedious. They want to get to the storytelling and the art and the act of storytelling as much as possible. And so waiting for a model to render or waiting for the QC process to finish is what we want to kind of get rid of, so they can really get to the meat of the problem much, much faster. And just going back to what Soumyendu was saying about the AI project here, I mean, it was about finding the dead pixels on the screen when we do all finished prints, which, would you believe, we do with humans? Humans are the best, or historically have been the best, at finding dead pixels, but what a job to have to do at the end of the process: to go through quality control and then have to go and manually find the little dead pixels in each frame of our print, right? Nobody actually wants to be doing that job. So the algorithm goes and looks for those automatically. And then HPE came in and sped that whole process up by 9X. So now it actually runs fast enough to be used on our final prints. >> You know, it's interesting. In the tech trend for the past 10, 15 years that I've been covering cloud technology, even in the early days, it was kind of on the fringe and then became mainstream. But all the trends were more agility, faster, take away that heavy lifting so that the focus is on the job at hand, whether it's creative or writing software. This is kind of a success formula, and you're kind of applying it to film and creation, which is still, like software, it's kind of the same thing almost. >> Yeah. >> So you know, when you see these new technologies, I'd love to get both of your reactions to this. One of the big misses that people kind of miss is the best stuff is often misunderstood until it's understood. >> Yes. >> And we're kind of seeing that now with COVID, and everyone's like, no way I could've seen this. No, no one predicted it. So what's an example of something that people might be misunderstanding that's super relevant, that might become super important very quickly? Any thoughts?
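(A quick aside on the dead-pixel QC work Alice describes above: the core idea, flagging pixels that stay frozen while the surrounding picture changes, can be sketched in a few lines. This is a toy illustration only, not Disney's or HPE's actual algorithm; the array shapes, threshold, and function name are assumptions made for the example.)

```python
# Toy stuck-pixel detector: a dead or stuck pixel barely changes across frames
# even though the rest of the image content does.
import numpy as np

def find_stuck_pixels(frames: np.ndarray, var_threshold: float = 1e-3) -> np.ndarray:
    """Return a boolean (height, width) mask of pixels whose values barely change
    over a stack of grayscale frames shaped (frames, height, width)."""
    temporal_var = frames.var(axis=0)          # per-pixel variance over time
    scene_motion = frames.std(axis=0).mean()   # rough measure of overall change
    if scene_motion < 1e-6:                    # static clip: nothing to conclude
        return np.zeros(frames.shape[1:], dtype=bool)
    return temporal_var < var_threshold

# Tiny synthetic check: a random "clip" with one pixel stuck at a constant value.
rng = np.random.default_rng(0)
clip = rng.random((24, 64, 64))
clip[:, 10, 20] = 0.5                          # simulate a stuck pixel
mask = find_stuck_pixels(clip)
print(mask.sum(), mask[10, 20])                # expect: 1 True
```

The production pipeline runs over full-resolution prints, which is the part that, per the conversation above, HPE accelerated on GPUs to get the roughly 9X speedup.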
>> Gosh, that's a great one. Well, I can give an example of something that has come and gone and then come and potentially gone, except it hasn't. You'll see. It's VR. So it came whenever it was, 20 years ago and then 10 years ago, and everybody was saying VR is going to change the world. And then it reappeared again, six years ago. And again, everybody said it was going to change the world. And in terms of film production, it really has. But that's slightly gone unnoticed, I think, because out in the market, everyone is expecting VR to have been a huge consumer success. And I suspect it still will be, one day, a huge consumer success. But meanwhile, in the background, we are using VR on a daily basis in film production. Virtual production is one of the biggest emerging processes that is happening. If you've seen anything to do with Jungle Book, Lion King, The Mandalorian, anything that Industrial Light & Magic work on, you're really looking at a lot of virtual production techniques that have ended up on the screen. And it is now a technology that we can't do without. I'm going to have to think two seconds for something that's emerging. AI and ML is a huge area. Obviously, we're just scratching the surface of it. I don't think anyone is going to say that it's going to come and go, this one. This is huge, but we're already just beginning to see where and how we can apply AI and ML. >> Yeah. >> So Soumyendu, did you want to jump in on that one? >> Yeah, let me take it from the technology standpoint. I think Alice sort of puts out some very cool trends. Now, what happens in the AI and ML space is people can come up with creative ideas, but one of the biggest challenges is how do you take those ideas for commercial usage and make it work at a speed that, as Alice was mentioning, makes it feasible in production. So accelerating AI/ML and making it in a form which is usable is super important. And the other aspect of it is, just see, for instance, video quality, that Alice was mentioning. Dead pixels are one type, and I know that Disney is working on certain other video qualities to fix the blemishes, but there is a whole variety of these blemishes, and with human operators, it's kind of impossible to scale up the production and to find all these different artifacts, and especially now, as you can see, the video is disseminated on your phones, on your iPads, like, you know, in just streaming. So this is a problem of scale, and to solve this is also, like, you know, a lot of computers, and I'd say a lot of collaboration with complementary skillsets that make AI real. >> I was talking with a friend who was an early Apple employee. He's now retired, good friend. And we were talking about, you know, all the DevOps, agile, go fast, scale up. And he made a comment. I want to get your reaction to it. He said, "You know, what we're missing is craft." And software used to be a craft game. So when you have speed, you lose craft. And we see that certainly with cloud and agility, and then iterate, and then you get to a good product over time. But I think one of the things that's interesting, and you guys are kind of teasing out, is you can kind of get craft with the help from some of these technologies, where you can kind of build crafting into it. Yeah, Alice, what's your reaction to that? >> One of our favorite anecdotes from the Lion King is, so Jon Favreau, the director, built out the virtual production system with his team to make the film. And it allowed for a smaller production team acting on a smaller footprint.
What they didn't do was shorten the time to make the film; what the whole system enabled was more content created within that same amount of time. So effectively Jon had more tapes and more material to make his final film with. And that's what we want people to have. We want them to not ever have to say, oh, I missed my perfect shot because of, I don't know what, you know, we ran out of time, so we couldn't get the perfect shot. That's it, that's a terrible thing. We never want that to happen. So where technology can help gather as much material as possible in the most efficient way, basically, at the end of the day, for our creatives, that means more ability to tell a story. >> So Soumyendu, this is an example of the pixel innovation, the video QC. It's really a burden if you have to go get it and chase it; you can automate that. That's back to some of the tech trends. A lot of automation action in there. >> Yeah, absolutely. And as Alice was mentioning, if you can bridge the gap between imagination and realization, then you have solved the problem. That way, the people who are creative can think and implement something in a very short time. And that's fair, like, you know, some of these scientists come in... >> Well, I'm also very impressed, and I'm looking forward to coming down and visiting studioLAB when the world gets back to work. >> Alright. >> You guys are in the part of Burbank and all the action. I know you're a little sort of incubator. It's really kind of R and D meets commercial. Commercial is really cool. But I have to ask you, with the COVID-19 going on, how are you guys handling the situation? Certainly impacted people coming to work. >> Yeah. >> How has your team been impacted, and how are you guys continuing the mission? >> Well, the lab itself is obviously a physical place on the lot. It's in the old animation building. But there's also this program of innovation that we have with our partners. To be honest, we didn't slow down at all. The team carried on the next day from home. And in fact, we have expanded even, because new projects came rolling in as folks who were stuck at home suddenly had needs. So we had editors needing to work remotely, you name it, folks with bad home connections, wondering if we had some 5G phones hanging around, that kind of thing. And so everything really expanded a bit. We are hoping to get back into physical co-location as soon as possible, not least to be able to shoot movies again. But I think that there will be an element of this remote working that's baked in forever from here on in. Not least 'cause it was just around, this kind of... what this has done is accelerated things like the beginning of cloud adoption properly, the beginning of remote teleworking and remote telepresence, and then also ideas coming out of that. So you know, again, the other day I heard holograms coming up, like, can we have holograms yet?
What have you learnt as adapting and leading your team through this change? Any learnings you can share with folks? >> Well, yes, that's a good one. But mainly resilience. It's been nonstop and quite relentless, and the news out there is extraordinary. So we're also trying to balance a very full pipeline of work with understanding that people are struggling to balance their lives as well at home. You know, kids, pets, BLM, like, you name it, everything is affecting everybody. So resilience and empathy is really top of my mind at the moment as we try to continue to succeed, but making sure that everybody stays healthy and sane. >> Yeah. And in great news, you got a partner here with HPE; the innovation doesn't stop there. You still have to partner. How do you keep up with these technologies and the importance of partners? Comments? And Soumyendu, your comment as well. >> Yeah. So HPE has been a great leader in accommodating all HPE employees to work remote, and in the process, what we also discovered is, we humans are innovative. So we discovered the innovative ways where we can still work together. So we increased the volume of our virtual collaborations, and I have worked with Erica from Disney, who is a tremendous facilitator and a technologist of mine, to have this close collaboration going, and we missed almost nothing. But yes, we would like, you know, the feel of each other being in close proximity, to look at each other's eyes. Probably that's the only missing thing; the rest of it, you know, we created an environment where we can collaborate and work pretty well. And to Alice's point, in the process, we also discovered a lot of things which can be done remotely, considering the community of Silicon Valley. >> You know, I'd love... The final question I want to get your thoughts on is your favorite technologies that you're excited about. But Soumyendu, you know, we were talking amongst us nerds and geeks here in Silicon Valley around, you know, what virtualization... server virtualization has done. And HPE knows a lot about server virtualization. You're in the server business. That created cloud, because with virtualization, you could create, out of one server, a great many servers. But I think this COVID-19 and the future beyond it, virtualization of life, an immersion of digital, is going to bring and change a lot of things. You guys highlighted a few of them. This virtualization of life, society, experiences, play, work. It's not just work, it's experiences. So Internet of Things, devices, how I'm consuming, how I'm producing, it's really going to have an impact. I'd love to get both of your thoughts on this kind of "virtualization of life," because it certainly impacts studioLAB, because you think about these things, Alice, and HPE has to invent the tech to get scaling up. So final question. What do you think about virtualization of life, and what technologies do you see that you're excited about to help make our lives better? >> Wow. Goodness me. I think we're only beginning to understand the impact that things like video conferencing have on folks. You know, I don't know whether you've seen all of the articles flying around about how it's a lot more work to do video conferencing, that you don't have the same subtle cues as you have in real life. And again, you know, virtual technologies like VR and similar are not going to solve that immediately. So what will have to happen is that humans themselves will adapt to the systems.
I think, though, fundamentally we're about to enter a radical period. We basically have already a radical period of innovation, because as folks understand what's at their fingertips and then what's missing, we're going to see all sorts of startups and new ideas come rushing out, as people understand this new paradigm and what they can do to solve for the new pains that come out of it. I mean, just from my perspective, I have back-to-back nine hours of, et cetera, a day. And by the end of the day, I can barely walk. What are we going to do about that? I think we're going to see... >> Holograms, I like that idea. >> Right, we're going to see home exercise equipment combined with, like, you know, really good ones. Like you've seen Peloton's shares going crazy. There's going to be tons of that. So I'm just really excited at the kind of three years or so, I think, that we're going to see of radical innovation, the likes of which we have always usually been held back by other reasons, maybe not enough money or not enough permission. Whereas now people are like, we have to fix this problem. >> Well, you got a great job. I want to come, just quit my job and come join studioLAB; sounds like that's a playground of fun. They have great stuff. >> Ton of fun. >> Soumyendu, close this out here. What are you excited about as we virtualize? You're in the labs, creating new technology, you're a distinguished technologist and director of AI. I mean, you're on the cutting edge. You're riding the wave too. What's your take on this virtual center? >> I think, you know, the COVID experience, what it has done is it has pushed the edge to the home. So now, if you really see, a home is one of the principal connectivity to the outside world, as far as professionalism goes. And with that, what AI also offers is, like, a better experience. Right now we are all gaga about Zoom being able to do video conferencing, but as Alice was pointing out, there is the AR and the VR. Now consider combining the augmented reality and the way that we do a video conference, and all the other AI innovations that we can bring in, so that the interactions become much more real. And that is, like, you know, I'd say, where the world is moving. >> I can't let this go. I have to go one more step in, because you guys brought that up. Alice, you mentioned the fatigue and all these things. And if you think about just the younger generations, we have to invest in our communities and our young people. I mean, think about all the kids who have to go back to school in September, in the fall, what their world's like. And you talk about, you know, we can handle video, but learners? So the transformation that's going to come down the path really fast is how do you create an experience for education and for learning and connecting? This is huge. Thoughts and reactions to that? So it's something that I've been thinking a lot about, but I'm sure a lot of other parents have as well. >> My take on that, kids, I've worked a lot with kids and kids' media. And over the years, you often find that when a new media does come in, there's a lot of fear around it, but kids are plastic and incredibly good at adapting to new media and new technology and new ways of working. The other thing is, I think this generation of kids have really had to live through something, you know, and it's going to have, with luck, taught them some resilience.
I think, if there's one thing that teachers can be focusing on, it is things like resilience and how to cope under very unusual and very unpredictable circumstances, which is never good for things like anxiety. But it's also the reality of the world, you know: be adaptive and learn, keep learning. These are great messages to give to kids. I think, if anything, they are the ones who'll figure out how to socialize online successfully and healthily. So we're going to have to learn from them. >> Yeah. They're going to want to make it fun too. I mean, you have to make it entertaining. I mean, I find, in my personal experience, if it's boring, it ain't going to work. Thank you so much, Alice. Well, thank you very much for that comment and insight, really enjoyed it. Congratulations on studioLAB, you got a great mission, and very cool and very relevant. Soumyendu, thank you very much for sharing the insights on HPE's role in that. I appreciate it. Thank you very much. >> Thanks. It's nice. >> Okay. >> Thanks, John. >> This is theCUBE virtual covering HPE Discover Virtual Experience. I'm John Furrier, your host of theCUBE. Stay tuned for more coverage from HPE Discover Virtual Experience after this break.
Donnie Berkholz, Carlson Wagonlit Travel | CUBEConversation, November 2018
(lively music) >> Hello, and welcome to this special CUBE conversation. I'm John Furrier, founder of SiliconANGLE Media, co-host of theCUBE. We are here in our Palo Alto Studio to have a conversation around cloud computing, multi-cloud, hybrid cloud, the changes going on in the IT industry and for businesses across the globe as impacted by cloud computing, data, AI. All that's coming together, and a lot of people are trying to figure out how to architect their solution to scale globally but also take care of their businesses, not just cutting costs for information technologies, but delivering services that scale and benefit the businesses and ultimately their customers, the end users. I'm here with a very special guest, Donnie Berkholz, who's the VP of IT services delivery at CWT, Carlson Wagonlit Travel. Also the program chair of the Open Source summit, part of the Linux Foundation, formerly an analyst, a great friend of theCUBE. Donnie, great to see you. Thanks for joining us today. >> Well, thanks for having me on the show. I really appreciate it. >> So we've been having a lot of conversations around, obviously, cloud. We've been there, watching it, from day one. I know you have been covering it as an analyst. Part of that cloud ought to go back to 2007, '08 time frame roughly speaking, you know, even before that with Amazon. Just the massive growth certainly got everyone's attention. IBM once called Amazon irrelevant. Now going full cloud with buying Red Hat for billions and billions of dollars at a 63% premium. Open Source has grown significantly, and now cloud absolutely is the architectural linchpin for companies trying to change how they do business, gather more efficiencies, all built on the ethos of DevOps. That is now kind of going mainstream. So I want to get your thoughts and talk about this across a variety of touchpoints. One is what people are doing in your delivering services, IT services for CWT, and also trying to get positioned for the future. And then Open Source. You're on the Open Source program chair. Open Source driving all these benefits, now with IBM buying Red Hat, you've seen the commercialization of Open Source at a whole nother level which is causing a lot of conversation. So tell us what you're doing and what CWT is about and your role at the company. >> Absolutely, thank you. So CWT, we're in the middle of this journey we call CWT 3.0, which is really one about how do we take the old school green screens that you've seen when you've got travel agents or airline agents booking travel and bring people into the picture and blend together people with technology. So I joined about a year and a half ago to really help push things forward from the perspective of DevOps, because what we came to realize here was we can't deliver quickly and iterate quickly without the underlying platforms that give us the kind of agility that we need without the connections across a lot of our different product groups that led us, again, to iterate on the right things from the perspective of our customers. So I joined a year and a half ago. We've made a lot of strides since then in modernizing many of our technology platforms. The way I think about it here, it's a large enterprise. We've got hundreds of different applications. We've got many, many different product teams, and everything is on a spectrum. We've got some teams that are on the bleeding edge. 
Not even the leading edge, but I'd say the bleeding edge, trying out the very latest things that come out, experimenting with brand new Open Source tools, with brand new cloud offerings to see, can we incorporate that as quickly as possible so we can innovate faster than our competitors? Whether those are the traditional competitors or some of the new software companies coming into things from that angle. And then on the other end of the spectrum, we've got teams who are taking a much more conservative approach, and saying, "Let's wait and see what sticks "before we pick it up." And the fortunate thing, I think, about a company at the scale we are, is that we can have some of those groups really innovating and pushing the needle, and then other groups who can wait and see which parts stick before we start adopting those at scale. >> And so you've got to manage the production kind of stability versus kind of kicking the tires for the new functionality. So I've got to ask you first. Set up the architecture there. Are you guys on premise with cloud hybrid? Are you in the cloud-native? Do you have multiple clouds? Could you just give a sense of how you're deploying specifically with cloud? >> Yeah, absolutely. I think just like anything else, it's a spectrum of all we see here. There's a lot of different products. Some of them have been built cloud-native. They're using those serverless functions as service technologies from scratch. Brought in some leaders from Amazon to lead some of that drive here. They brought in a lot of good thinking, a lot of good culture, a lot of new perspective to the technologies we're adopting as a company that's not traditionally been a software company. But that is more and more so every day. So we've got some of that going on as completely cloud-native. We've got some going on that's more, I would say, hybrid cloud, where we're spanning between a public cloud environment back to our data centers, and then we've got some that are different applications across multiple different public clouds, because we're not in any one place right now. We're putting things in the best place to do the job. So that's very much the approach that we take, and it's one that, you know, back when I was in my analyst's world, as one of my colleagues called it, the best execution venue. What's the best place? What's the right place to do the right kind of task? We incorporate what are the best technologies we can adopt to help us differentiate more quickly, and where does the data live? What's the data gravity look like? Because we can't be shipping data back and forth. We can't have tons of transactions going back and forth all the time between different public clouds or between a public cloud and one of our data centers. So how do we best account for that when we're architecting what our applications should look like, whether they're brand new ones or whether they're ones we're in the middle of modernizing. >> Great, thanks for sharing, that's great, so yeah, I totally see that same thing. People put, you know, where the best cloud for the app, and if you're Microsoft Shop, you use Azure. If you want to kick the tires on Amazon, there's good roles for that, so we're seeing a lot of those multiple clouds. But while I've got you on the line here, I know you've been an analyst. I want you to just help me define something real quick because there's always kind of confusion between hybrid cloud and multi-cloud. Certainly the multi-cloud, we're getting a lot of hype on that. 
We're seeing with Kubernetes, with stateful applications versus stateless. You're seeing some conversations there. Certainly on Open Source, that's top of the agenda. Donnie, explain for folks watching the difference between hybrid cloud and multi-cloud, because there's some nuances there, and some people have different definitions. How do you guys look at that? Cause you have multiple clouds, but some aren't necessarily running a workload across clouds yet because of latency issues, so define what hybrid means to you guys and what multi-cloud means to you. >> All right, yeah, I think for us, hybrid cloud would be something where it's about integrating an on-prem workload off a more traditional workload with something in a public cloud environment. It's really, hybrid cloud to me is not two different public clouds working together or even the same application in two different public clouds. That's something a little bit different, and that's where you start to get, I think, into a lot of the questions of what is multi-cloud? We've seen that go through a lot of different transitions over the past decade or so. We've seen a lot of different, you know, vendors, going out there thinking they could sell multi-cloud management that, you know, panned out at different levels of success. I think for at least a decade, we've been talking about ideas like can we do cloud bursting? Has that ever really worked in practice? And I think it's almost as rare as a unicorn. You know, on-prem for the cost efficiencies and then we burst the cloud for the workload. Well, you know, to this day, I've never seen anything that gives you 100% functionality and 100% performance comparability between an on-prem workload and public cloud workload. There always seems to be some kind of difference, and this is a conversation that, I think, Randy Bias has actually been a great proponent of it's not just about the API compatibility. It's not just, you know, can I run Azure in their data centers or in mine? It's about what is the performance difference look like? What does the availability difference look like? Can I support that software in my data center as well as the engineers at Microsoft or at Amazon or at Google or wherever else they're supporting it today? Can I keep it up and running as well? Can I keep it performing as well? Can I find problems as quickly? And that's where it comes to the question of how do we focus on our differentiators and let the experts focus on theirs. >> That's a great point about Randy Bias. Love that great API debate. I was looking at some of that footage we had years ago. But this brings up a good point that I want to get your reaction to, because, you know, a lot of vendors going out there, saying, "Oh, our cloud's this. "We've got all this stuff going on," and there's a lot of hype and a lot of posturing and positioning. The great thing about cloud is that you really can't fake it until you make it. It's got to be working, right? So when you get into the kind of buying into the cloud. You say, "Okay, great, we're going to do some cloud," and maybe you get some cloud architects together. They say, "Okay, here's what it means to us. "In each environment, we'll have to, you know, "understand what that means and then go do it." The reality kind of kicks in, and this is what I'd like to get your reaction to. What is the realities when you say, "Okay, "I want to go to cloud," either for pushing the envelope and/or moving solid workloads that are in production into the cloud. 
What is the impact on the network, network security, and application performance? Because at the end of the day, those are going to be impacted. Those three areas come up a lot in conversations when all of the glam and all the bloom is off the rose, those are the things that are impacted. What's your thoughts on how practitioners should prepare for those three areas? The network impact, network security impact, and application performance? >> Yeah, I think preparation is exactly the right word there of how do we get the people we have up to speed? And how do we get more and more out of that kind of project mindset and into much more of the product mindset and whether that product is customer-facing or whether that product is some kind of infrastructure or platform product? That's the kind of thinking we're trying to have going into it of how do we get our people, who, you know, may run a Ci Cd pipeline, may run an on-prem container platform, may even be responsible for virtualization, may be responsible for on-prem networks or firewalls or security. How do we get them up to speed and turn them into real software engineers? That's a multi-year journey. That's not something that happens overnight. You can't bring in a team of consultants to fix that problem for you and say, "Oh, well, we came in and implemented it, "and now it's yours, and we walk out the door." It's no longer that, you know, build and operate mindset that you could take a little bit more with on-prem. Because everything is defined as code. And if you don't know how to deal with code, you're going to be in a real rough spot the next time you have to make a change to that stuff that that team of consultants came in and implemented for you. So I think it's turned into a much more long-term approach, which is very, very healthy for technology and for technology companies as a whole of how do we think about this long-term and in a sustainable way, think about scaling up our people. What do those training paths look like? What do those career paths look like? So we can decide, you know, how many people do we want certified? What kind of certifications should they have or equivalent skill sets? I remember hearing not too long ago that I think it was Capital One had over 10,000 people who were AWS certified, which is an enormously large number to think about, but that's the kind of transitions that we've been making as we become more and more cloud-native and cloud by default, is getting the right people. The people we have today trained up in these new kinds of skill sets instead of assuming that's something we can have some team fly in from magic land and implement and then fly away again afterwards. >> That's great, Don, thanks for sharing that insight. I also want to get your thoughts on the Open Source summit, but before we get there, I've got to ask you a question around some of the trends we've been seeing. Early on at DevOps we saw this together of the folks doing the hard work in the early pioneering days, where you saw the developers really getting closer to the front lines. They were becoming part of the business conversation. In the old world of IT, "Okay, here's our strategy. "Consolidate this, load some virtual machines," you know, "Get all this stuff up and running." 
The business decisions would then trickle down to the tech folks, then with the DevOps revolution, that's now cloud computing and all things, you know, IoT and everything else happening where the developers and the engineering side of it and the applications are on the front lines. They're in more of the business conversations, so I have to ask you. When you're at CWT, what are some of the business drivers and conversations that you guys are having with executive management around choices? Are they business drivers? Do you see an order of preference around agility? The transformation value for either customers or employees, compliance and security, are the top ones that people talk about generally. Of those business drivers, which ones do you guys see the most that are part of iterating through the architecture and ultimately the environment that you deploy? >> Yeah, I think as part of what I mentioned earlier, that we're on this journey we call CWT 3.0, and what's really new about that is bringing in speed and agility into the conversation of if we have something that we imagine as a five year transformation, how do we get to market quickly with new products so that we can start really executing and seeing the outcomes of it? So we've always had the expectations around availability, around security, around all these other factors. Those aren't going away. Instead, we're adding a new one, so we've got new conversations and a new balance to reach at an executive level of we now need a degree of speed that was not the expectation, let's say, a decade ago. It may not even have been the expectation in our industry five years ago, but is today. And so we're now incorporating speed into that balance of maybe we'll decide to very intentionally say, "We're not going to go over quite as many nine's today "so that we can be iterating more quickly on our software." Or, "We're going to invest more "in better release management approaches and tools," right? Like Canary releases, like, you know, Green-Blue releases, all these sorts of new techniques, feature flags, that sort of thing so that we can better deal with speed and better account for the risk and spread it to the smallest surface area possible. >> And you were probably doing those things also to understand the impact and look at kind of what's that's coming in that you're instrumenting in infrastructure because you don't want to have to put it out there and pray and hope that it works. Right, I mean? The old way. >> The product teams that are building it are really great and really quick at understanding about what the user experience looks like. And whether that's their Real User monitoring tools or through, you know, other tools and tricks that we may incorporate to understand what our users are doing on our tools in real time, that's the important part of this, is to shorten the iteration cycle and to understand what things look like in production. You've got to expose that back to the software engineers, to the business analysts, to the product managers who are building it or deciding what should be built in the first place. >> All right, so now that you're on the buyer's side, you've actually got people knocking on your door. "Hey, Donnie, buy my cloud. "Do this, you know, I've got all these solutions. "I've got all these tools. "I've got a toolshed full of," you know, the fool with the tool, as they say. You don't want to be that person, right? So ultimately you've got to pick an environment that's going to scale. 
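(A quick aside on the release techniques Donnie mentions above, canary releases, blue-green deploys, and feature flags: the common building block is a deterministic percentage rollout. A minimal sketch follows; the flag name and percentage are made up, and real systems layer targeting rules, kill switches, and audit trails on top of this idea.)

```python
# Minimal feature-flag / canary-style rollout check.
import hashlib

def is_enabled(flag: str, user_id: str, rollout_percent: int) -> bool:
    """Deterministically bucket a user into 0-99 so the same user always gets
    the same answer, while only `rollout_percent` of users see the feature."""
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < rollout_percent

# Ramp the hypothetical new booking flow to 5% of travelers first.
for uid in ("traveler-001", "traveler-002", "traveler-003"):
    print(uid, is_enabled("new-booking-flow", uid, 5))
```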
When you look at the cloud, how do you evaluate the different clouds? You mentioned gravity, or data gravity, earlier. All kinds of new criteria are up there now in terms of cloud selection. You mentioned best cloud for the job. I get that. Are there certain things that you look for? Is there a list? Is there criteria on cloud selection that goes through your desk? >> Yeah, I think something that's been really healthy for me coming into the enterprise side from the analyst perspective is you get a couple of new criteria that start to rise up real quickly. You start thinking about things like, what's that vendor relationship going to look like? How is the sales force? Are they willing to work with you? Are they willing to adapt to your needs? And then you can adapt back with them so you can build a really strong, healthy relationship with some of your strategic vendors, and to me, a public cloud vendor is absolutely a strategic vendor. That's one where you have to really care a lot and invest in that relationship and make sure things go well when you're sailing together, going in the same direction. And so to me, that's a little bit of a newer factor, because it was easy to sit back and come in as the strategic advisor role and say, "Oh, you should go with this cloud. You should go with that cloud because of reasons X, Y, or Z," but that doesn't really account for a lot of things that happen behind the scenes, right? What's your sourcing and human resources department doing? How do they like to work around contracts, right? Will you negotiate a good MSA? All these sorts of things where you don't think about that when you're only thinking about technology and business value. You also have to think about the other, just the day to day, what does it look like? What's the blocking and tackling working with some of those strategic vendors? So you've got that to incorporate in addition to the other criteria around, do they have great managed services? You know, self-service managed services that will work for your needs? For example, what do they have around databases? What do they have around stream processing? What do they have around serverless platforms, right? Whatever it might be that suits the kinds of needs you have. Like for example, you might think about what does our business look like, and it's a graph, right? It's travelers, it's airports, it's planes, it's hotels. It's a bunch of different graphs all intersecting, and so we might imagine looking for a cloud provider that's really well-suited to processing those sorts of workloads. >> In the old days, the networking guys used to hold the keys to the kingdom. Hey, you know, I'm going to rack and stack servers, I'm going to do all this stuff, but I've got to go talk to the networking guys, make sure all the routes are provisioned and all that's locked down, mainly because that was a perimeter environment then. With cloud now, what's the impact of the networking? What's the role of the network? As we see the DevOps notion of infrastructure as code, you've got compute, networking, storage as three main pillars of all environments. Compute, check. Storage getting better. Networking, can you imagine Randy Bias? This was a big pet peeve for him. What's the role that cloud does? What's the role of the network with your cloud strategy?
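(One more aside on the evaluation criteria discussed above, where "soft" factors like the vendor relationship sit alongside managed services and data gravity: a lightweight way to frame that kind of comparison is a weighted scorecard. This is purely illustrative; the criteria, weights, scores, and provider names are invented.)

```python
# Toy weighted scorecard for comparing cloud providers on mixed criteria.
WEIGHTS = {"managed_services": 0.3, "data_gravity": 0.3,
           "vendor_relationship": 0.2, "commercials": 0.2}

candidates = {
    "provider-a": {"managed_services": 9, "data_gravity": 7,
                   "vendor_relationship": 8, "commercials": 6},
    "provider-b": {"managed_services": 7, "data_gravity": 8,
                   "vendor_relationship": 6, "commercials": 9},
}

def weighted_score(scores: dict) -> float:
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

for name, scores in sorted(candidates.items(),
                           key=lambda kv: weighted_score(kv[1]), reverse=True):
    print(f"{name}: {weighted_score(scores):.1f}")
```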
>> Yeah, I think something that I've seen following DevOps for the past decade or so has been that, you know, it really started as the ops doing development moved more into the developers and the ops working together and in many cases sharing roles in different ways, then incorporated, you know, QA, and incorporated product, to some extent. Most recently it's really been focused on security and how do we have that whole DevSecOps, SecDevOps thing going on. Something that's been trailing behind a little bit was network, absolutely. I had some very close friends about 10 years ago, maybe, who were getting into that, and they were the only people they knew and they only people they'd ever even heard of thinking beyond the level of using some kind of an expect script to automate your network interaction. But now I think networking as code is really starting to pick up. I mean, you look at what people are doing in public cloud environments. You look at what Open Source projects like Ansible are doing or on the new focus on network functionality. They're not alone in that. Many others are investing in that same kind of area. It's finally really starting to get up. Like for example, we have an internal DevOps Day that we run twice a year, and at the most recent one, guess who one of our speakers was? It was a network engineer talking about the kinds of automation they'd been starting to build against our network environments, not just in public cloud, but also on-premise. And so we're really investing in bringing them into our broader DevOps community, even though Net may not be in the name today. I don't think the name can ever extend to include all possible roles. But it is absolutely a big transition that more and more companies, I think, are going to see rolling along, and one that we've seen happening in public cloud externally for many, many years now. It's been inevitable that the network's going to get engaged in that automation piece. And the network teams are going to be more and more thinking about how do we focus our time in automation and on defining policy, and how do we enable the product teams to work in a self-service way, right? We set up the governance, but governance now means they can move at speed. It doesn't mean wait seven to 30 days for us to verify all of the port openings, match our requirements, and so on and so forth. That's defined up front. >> Yeah, and that's awesome, and I think that's the last leg of the stool in my opinion, and I think you nailed it. Making it operationally automation enabled, and then actually automating it. So, okay, before we get to the Open Source, one final question for you. You know, as you look at plan for the technologies around containers and microservices, what sounds a lot like networking constructs, provisioning, services. The role of stateless applications become a big part of that. As you look at those technologies, what are some of the things you're looking for and evaluating containers and microservices? And what role will that play in your environment and your job? >> I think something that we spend a lot of time focusing on is what is the day two experience going to look like? What is it going to be like? Not just to roll it out initially, but to, you know, operate on an ongoing basis, to make upgrades, to monitor it, to understand what's happening when things are going wrong, to understand, you know, the security stance we're at, right? How well are we locked down? Is everything up-to-date? 
How do we know that and verify it on a continuous basis instead of the, you know, older-school approach of hey, we kind of do a PCI survey or an audit, you know, once a year, and that's the day we're in compliance, and then after that, we're not. I was just reading some stories the other day about companies saying, "Hey, there's a large percentage of the time that you're out of compliance, but you make sure to fix it just in time for your quarterly surveys or scans or what have you." And so that's what we spend a lot of our time focusing on: not just the ease of installation, but the ease of ongoing operability and getting really good visibility into the security, into the health, of the underlying platforms that we're running. And in some cases, that may push us to, let's say, a cloud managed service. In some cases, we may say, "Well, that doesn't quite suit our needs." We might have some unique requirements, although I spend a lot of my time personally saying, "In most cases, we are not a snowflake, right?" We should be a snowflake where we differentiate as a company. We should not be a snowflake at the level of our monitoring tools. There's nothing unique we should really be doing in that area. So how can we make sure that we use, whether it's trusted vendors, trusted cloud providers, or trusted Open Source projects with a large and healthy community behind them, to run that stuff instead of building it ourselves, 'cause that's not our forte. >> I love that. That's a great conversation I'd love to have with you another time, around competitive advantage in IT, which is coming back in vogue again. It hasn't been that way in a while because of all the consolidation and outsourcing. You're seeing people really, really ramp up and say, "Wait a minute, we outsourced our core competency in IT," and now with cloud, there's a competitive advantage, so how do you balance the intellectual property that you need to build for the business and then also use the scale and agility of Open Source? So I want to move to that Open Source conversation. I think this is a good transition. Developers at the end of the day still have to build the apps and services they're going to run on these environments to add value. So Open Source has become, I won't say a professional circuit for developers, it really has become the place for developers, because that's where corporations and projects have been successful, and it's going to a whole other level. Talk about how Open Source is changing, and specifically around it becoming a common vehicle for, one, employees of companies to participate in as part of their job, and two, how it's going to a whole other level with all this code that's flying around. You can't, you know, go dig without finding out that, you know, a new TensorFlow library's been donated by Google, big code bases are being rolled in there, and still the same old success formula for Open Source is continuing to work. You're program chair for the Open Source Summit, which is part of the Linux Foundation, which has been very, very successful in this modern era. How has that changed? What's going on in Open Source? And how does that help people who are trying to stand up architecture and build businesses? >> I think Open Source has gone through a lot of transitions over the past decade or so. All right, so it started, and in many ways it was driven by the end users.
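Stepping back to the compliance point at the top of this answer, the "continuous instead of once a year" idea can be made concrete with a small sketch. The desired-state keys and the observed values below are made up for illustration; a real check would pull them from a scanner, a CMDB, or a cloud provider's configuration API.

```python
import time

# Invented policy for the example; the loop, not the keys, is the point.
DESIRED_STATE = {
    "tls_min_version": "1.2",
    "public_storage_buckets": 0,
    "mfa_enforced": True,
}

def fetch_observed_state() -> dict:
    # Placeholder: a real check would query your environment here.
    return {"tls_min_version": "1.0", "public_storage_buckets": 2, "mfa_enforced": True}

def drift_report(desired: dict, observed: dict) -> list:
    findings = []
    for key, want in desired.items():
        have = observed.get(key)
        if have != want:
            findings.append(f"{key}: expected {want!r}, found {have!r}")
    return findings

def run_continuously(interval_seconds: int = 3600) -> None:
    # Compare live state to policy every hour, not once a year before an audit.
    while True:
        findings = drift_report(DESIRED_STATE, fetch_observed_state())
        if findings:
            print("compliance drift detected:")
            for finding in findings:
                print("  -", finding)
        time.sleep(interval_seconds)

if __name__ == "__main__":
    print(drift_report(DESIRED_STATE, fetch_observed_state()))
```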
And now it's come back full circle so that it's again driven more and more by the end users, in a way that, there was a middle term there where Open Source was really heavily dominated by vendors, and it's started to come back around, and you see a lot of the web companies in particular, right? Your sort of Googles and Amazons and LinkedIns and Facebooks and Twitters, they're open sourcing tools on an almost daily basis, it feels like. I just saw another announcement yesterday, maybe the day before, about a whole set of kernel tools that I think it was Facebook had open sourced. And so you're seeing that pace just going so quickly, and you think back to the days of, for example, the Apache web server, right? Where did that come from? It didn't come from a software vendor. It came from a coalition of end users all working together to develop the software that they needed because they felt like there was a big gap there and there was an opportunity to cooperate. So it's been really pleasing for me to see that kind of come back around full circle, to now, when you can hardly turn around and see a company that doesn't have some sort of Open Source program office or something along those lines, where they start to develop a much more healthy approach to it. All right, in the early 2000s, it was really heavy on that fear and uncertainty and doubt around Open Source, in particular by some vendors, but also a lot of uncertainty because it wasn't that common, or maybe it wasn't that visible, inside of these Fortune 500, Global 2000 companies. It may have been common, right? What we used to say back when I worked at RedMonk was, you turned around and you asked the database admins, you know, "Are you running MySQL? Or are you running Postgres?" You asked the infrastructure engineers, "Are you running Linux here?" and you'd get a yes, nine times out of ten, but the CIO was the last to know. Well now, it's started to flip back around because the CIOs are seeing the business value and adopting Open Source and having a really healthy approach to it, and they're trying to kind of normalize the approach to it as a consequence of that, saying, "Look, it's awesome that we're adopting Open Source. We have to use this so that we can get a competitive advantage, because every thousand lines of code we can adopt is a thousand lines of code we don't have to write, and we can focus on our own products instead." And then starting to balance that new model of, it used to be, you know, is it buy versus build? And then SaaS came around, and it's buy versus build versus rent. And now there's Open Source, and it's buy versus build versus rent versus adopt. So every one of these just shifts the conversation a little bit of how do you make the right choice at the right time at the right level of the stack? >> Yeah, that's a great observation, and it's awesome insight. It feels like there's a little bit of dumping, a lot of dumping, going on in Open Source, and you worry that the flood of vendor-contributed code is the new tactic, but if you look at all the major inflection points, from the web, you know, through Bitcoin, which is now 10 years old this year, it all started out as organic community projects or conversations on a message board. So there's still a revolution, and I think you're right. The script is flipping around. I love that comment about the CIOs being last to know about Open Source. I think now that might be flipping around to the CIOs will be last to know about some proprietary advantage that might come out.
So it's interesting to see the trend where you're starting to see smart people look at using Open Source but really identifying how they can use their engineering and their intellectual capital to build something proprietary within Open Source for IT advantage. Are you seeing that same trend? Is that on the radar at all? Or is that just more of a fantasy on my part? >> I think it's always on the radar, and I think especially with Open Source projects that might be just a little bit below the surface of where a company's line of business is, that's where it will happen the most often. And so, you know, if you were building an analytics product, and you decided to build it on top of, you know, maybe there's the ELK Stack or the Elastic Stack, or maybe there's Graylog. There's a bunch of tools in that space, right? Maybe, you know, Solr, that sort of thing. And you're building an analytics tool or some kind of graph tool or whatever it might be, yeah, you might be inclined to say, "Well, the functionality's not quite there. Maybe we need to build a new plugin. Maybe we need to enhance it a little bit." And I think this is the same conversation that a lot of the Linux kernel embedded group went through some number of years ago, which is, it's long-term a higher burden to maintain a lot of those forks in-house and keep updating them forever than it is to bring some of that functionality back upstream. That's a good, healthy dialogue that hopefully will be happening more and more inside a lot of these companies that are taking Open Source and enhancing it for their own purposes: taking the right level of those enhancements, deciding what that right level is, and contributing those back upstream, and building a really healthy upstream participation regardless of whether you're a software vendor or an adopter of that software that uses it as a really critical part of their product stack. >> Awesome, Donnie, thanks for spending the time chatting with me today. Great to see you, great to connect over our remote here in our studio in Palo Alto. A final question for you. Are you having fun these days? And what are you most excited about? Because, again, you've seen, you've been on multiple sides of the table. You've seen what the vendors have. You've actually had the realities of doing your job to build value for Carlson Wagonlit Travel, CWT. What are you excited about right now? What's hot for you? What's jazzing you these days? >> Yeah, I think what's hot for me is, you know, to me there's nothing, or very little, that's revolutionary in technology. A lot of it is evolutionary, right? So you can't say nothing's new. There's always something a little bit different. And so serverless is another example of something that's a little bit different. It's a little bit new. It's similar to some previous takes, but you've got new angles, specifically around the financials and around, you know, how do you pay? How is it priced? How do you get really almost closer to the metal, right? Get the things you need to happen closer to the way you're paying for them or the way they're running. That remains a really exciting area for me. I've been going to Serverlessconf since probably the first or second one now. I haven't been to the most recent one, but you know, there's so much value left in there to be tapped that I'm not yet really ready to say, "What's next? What's next?" I've made myself move out of that analyst world of getting excited about what's next, and for me it's now, "What's ready now?"
Where can I leverage some value today or tomorrow or next week? And not think about what's coming down the pipe. So for me, that's, "Well, what went GA?" Right? What can I pick up? What can I scale inside our company so that we can drive the kinds of change we're looking for? So, you know, you asked me what am I the most excited about right now, and it's being here a year and a half and seeing the culture change that I've been driving since day one start to come back. Seeing teams that have never built automation in their lives independently go and learn it and build some automation and save themselves 80 hours a month. That's one example that just came out of our group a couple months back. That's what's valuable for me. That's what I love to see happen. >> Automation's addictive. It's almost an addictive flywheel. We automate something. Oh, that's awesome. I can move on to something else, something better. That was grunt work. Why do I want to do that again? Donnie, thanks so much, and again, thanks for the insight. I appreciate you taking the time and sharing with theCUBE here in our studio. Donnie Berkholz is the VP of IT at CWT, a great guest. I'm John Furrier here inside theCUBE studio in Palo Alto. Thanks for watching. (lively music)
Saar Gillai, Teridion | CUBEConversation, Sept 2018
(dramatic music) >> Hey, welcome back everybody. Jeff Frick here with theCUBE. We're in our Palo Alto studio for a CUBE Conversation. It's really a great thing that we like to take advantage of, a little less hectic than the show world, and we're right in the middle of all the shows, if you're paying attention. So we're happy to have a CUBE alumni on. He's been on many, many times. Saar Gillai, he's now the CEO of Teridion. And Saar, welcome. I don't think we've talked to you since you've been in this new role. >> Yeah, it's been about a year I think. >> Been about a year. So give us kind of the update on Teridion. What's it all about and, really more importantly, what attracted you to the opportunity? >> Sure. First of all, great to be here. I don't know where John is. I'm looking for him. He ran away. Maybe he knew I was coming. >> Somewhere over the Atlantic I think. 35,000 feet. >> I'll follow up on that later but hey, you're here. So, you know Teridion, let's talk about maybe the challenge that Teridion is addressing first so people will understand that, right. So if you look at what's going on these days with the advent of Cloud and how people are really accessing stuff, things have really moved on from the past. Most of the important services that people access were in a data center and were accessed through the LAN, so the enterprise had control over them, and if you wanted to access an app and it didn't work, somebody went into the LAN, played around with some Cisco router, and things maybe got better. >> But at least you had control. >> You had control. And if you look at what's happened over the last decade, but certainly in the last five years, with SaaS and the Cloud, stating the obvious, more and more of your services now are actually being accessed through your WAN, and in many cases that actually means the internet itself, if you're accessing Salesforce or Box or Egnyte or any of these services. The challenge with that is that now a critical part of your user experience, you don't control. The vendor doesn't control it, because you can make the best SaaS app in the world, but those apps are increasingly very dynamic. Caching doesn't solve this problem, and the problem is now, okay, I'm experiencing it over the internet. And while the internet is a great tool obviously, it's not really built for reliability, consistency, and consistent speed. In reality, if you look at the internet, it was designed to send one packet to NORAD and tell them that some nuclear missile died somewhere. That's what it was designed for, right? So the packet will get there, but the jitter and all these things may vary, and so what happens is that now you have a consistency problem. Historically, people will say, well, that's all been addressed through traditional caching, and that's true. Caching still has its place. The reality is, though, that caching is more for stuff that doesn't change a lot, and now it's all very dynamic. If you're uploading a file, that's not a caching activity. If you're doing something in Salesforce, it's very dynamic. It's not cached. At Teridion, we looked at this problem. Teridion's been around about four years. I've been there for about a year. We felt that the best way to solve this problem was actually to leverage some of the Cloud technology that already exists to solve it. So what we do, actually, is we build an overlay network on top of the public Cloud surface area.
So instead of traditionally, the way people did things is they would build a network themselves but today the public Cloud guys honestly are spending gazillions of dollars building infrastructure. Why not leverage it the same way that you don't buy CPUs, why buy routers? What we do is we create a massive overlay network on demand on the public Cloud surface area. And public Cloud means not just Amazon or Google but also people like AliCloud, DigitalOcean, Vulture, any Cloud provider really, some Russian Cloud providers. And then we monitor the internet conditions and then we build a fast path. If you think about it almost like a ways, a fast path for your packet from wherever the customer is to your service thereby dramatically increasing the speed but also providing much higher reliabilty. >> So, lot of thoughts. If I'm hearing right, you're leveraging the public Cloud infrastructure so they're pipes, if you will. >> And they're CPUs. >> And they're CPUs but then you're putting basically waypoints on that packet's journey to reroute to a different public Cloud infrastructure for that next leg if that's more appropriate. >> Yeah, and basically what I'm doing is I'm basically just saying if there's a, if your server's here whether they're on a public Cloud or somewhere else, it doesn't matter, and a customer is here, through some redirection, I will create a router on a public Cloud so a soft router, somewhere close from a network perspective to a user and somewhere close to the server and then between them, I'll create an overlay fast path. And then, what is goes over will be based on whatever the algorithm figures out. The way we know where to go over is we also have a sensor network distributed throughout the public Cloud surface areas and it's constantly creating a heat map of where there's capacity, where there's problems, where there's jitter and we'll create a fast path. Typically that fast path will give you, one of the challenges, I'll give you an example. So let's say you're on Comcast and let's say you've got 40 meg let's say, your connection at home. And then you connect to some server and theoretically that server has much more, right? But reality is, when you do that connection, it's not going to be 40 meg. Sometimes it's 5 meg, okay? So we'll typically give you almost your full capacity that you have from your first provider all the way there by creating this fast path. >> So how does it compare, we hear things about like Direct Connect between Equinix and Amazon or a lot of peer relationships that get set up. How does what you're doing kind o' compare, contrast, play, compare to those solutions? >> Direct Connect is sort of a static connection. If you have an office and you want to have a Direct Connection, it's got advantages and it's useful in certain areas. Part of the challenge there is that first of all, it has a static capacity. It's static and it has a certain capacity. What we do, because it's completely software oriented, is we'll create a connection and if you want more capacity, we'll just create more routers. So you can have as much capacity as you want from wherever you want where with Direct Connect, you say I want this connection, this connection, this much capacity and it's static. So if you have something very static, then that may be a good solution for you but if you're trying to reach people at other places and it's dynamic, and also you want variable capacities. For example, let's say you say I want to pay for what I use. I don't want to pay for a line. 
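A rough way to picture the routing decision Saar is describing: the sensor network produces latency measurements between candidate relay points, and the platform picks the quickest chain of relays between a router near the user and a router near the server. The sketch below is a generic shortest-path toy with invented relay names and numbers; it is not Teridion's actual algorithm or data.

```python
import heapq

# Measured one-way latencies in milliseconds between overlay relay points
# (all values invented for the example).
LINKS = {
    "user-edge": {"us-west": 12, "us-east": 70},
    "us-west":   {"us-east": 60, "eu-west": 140},
    "us-east":   {"eu-west": 75},
    "eu-west":   {"server-edge": 10},
}

def fastest_path(src: str, dst: str):
    """Dijkstra over the measured overlay graph; returns (latency_ms, hops)."""
    queue = [(0.0, src, [src])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, ms in LINKS.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + ms, nxt, path + [nxt]))
    return float("inf"), []

print(fastest_path("user-edge", "server-edge"))
# -> (155.0, ['user-edge', 'us-east', 'eu-west', 'server-edge'])
```

The interesting part in practice is that the graph is re-measured continuously, so the chosen path can change as congestion moves around the internet.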
Historically, when you're using these things, you say okay, if the maximum I may want is 40 meg, you say okay, give me a 40 meg line. That's expensive. >> Right, right. >> But what if you say I want 40 meg only for a few hours a day right? So in my case, you just say look, I want to do this many terabytes. And if you want to do it at 40 meg, do it at 40 meg. It doesn't matter. So it's much more dynamic and this lends itself more to the modern way of people thinking of things. Like the same way you used to own a server and you had to buy the strongest server you needed for the end of the month because maybe the finance guy needed to run something. Today you don't do that right? You just go to public Cloud and when it's the end of the month, you get more CPUs. We're the same thing. You just set a connection. If you need more capacity, then you'll get more capacity that you need. We had a customer that we were working with that was doing some mobile stuff in China and all of a sudden, they needed to do 600,000 connections a minute from China. And so we just scaled up. You don't have to preconfigure any of this stuff. >> Right, right. So that's really where you make the comparison of public Cloud for networking because you guys are leveraging public Cloud infrastructure, you're software based so that you can flex so you don't have the old model. >> It's completely elastic, like I said. It's very similar. Our view is the compute in the last decade, obviously, compute has moved from a very static I own everything mode to let's use dynamic resources as much as possible. Of course, there's been a lot of advantage to that. Why wouldn't your connectivity, especially your connectivity outside which is increasing your connectivity also use that paradigm. Why do you need to own all this stuff? >> Right, right. As you said before we turned the cameras on the value proposition to your customers who are the people that basically run these big apps, is the fact that they don't have to worry about that but net is just flat out faster to execute the simple operations like uploading or downloading something to BOX. >> And again, you mentioned BOX, they're one of our big customers and we have a massive network if you thing about how much BOX uploads in a given day, right? 'Cause there's a lot of there traffic that goes through us. But if you think about these SAS providers, they really need to focus on making their app as good as possible and advancing it and making it as sophisticated as possible and so, the problem is then there's this last edge which is from their server all the way to the customer, they don't really control. But that is really important to the customer experience, right? If you're trying to upload something to BOX or trying to use some website and it's really slow, your user experience is bad. It doesn't matter if it's the internet's fault. You're still as a customer, So this gives them control. They give us that ability and then we have control that we can give it much faster speed. Typically in the US, it may be two to five times faster. If you're going outside the US, it could be much faster sometimes. In China, we go 15 times faster. But also, it's consistent and if you have issues, we have a knock, we monitor, we can go look at it. If some customer says I have a problem, right? We'll immediately be able to say okay, here's the problem. Maybe there's a server issue and so forth as opposed to them saying I have a problem and the SAS vendor saying well, it's fine on our side. 
>> Right, right. So, I'm curious on your go to market. Obviously, you said BOX is a example of a customer. You've got some other ones on the website. Who are these big application service providers, that term came up the other day, like flashback to 1990. 1998 >> I call them SAS >> It's funny, we were talking about the old days. >> To me, it's all the same, as a service guy. >> But then, as you go to market then going to include going out directly through the public Clouds in some of their exchanges so that basically, I could just buy a faster throughput with the existing service. Where do you go from here? I imagine, who doesn't want faster internet service period? >> Yeah, we started off going to the people who have the biggest challenge and easier to work with a small company right? You want to work with a few big guys. They also help you design your solution, make sure it's good. If you can run BOX and Traffic and Ignite. Traffic can probably handle other things, last year for example. We are looking at potentially providing some of the service, for example, if you're accessing S3 for example, we can access S3 at least three times faster. So we are looking potentially at putting something on the web where you could just go to Amazon and sign up for that. The other thing that we're looking at, which is later in the year, probably is that we haven't gotten a lot of requests from people that said hey, since the WAN is the new LAN, right, and they want to also try to use this technology for their enterprise WAN between branch offices where SD-WAN is sort of playing today, we've gotten a lot of requests to leverage this technology also in SD-WAN and so we're also looking at how that could potentially play out because again, people just say look, why can't I use this for all my WAN connectivity? Why is it only for SAS connectivity? >> Right, right. I mean it makes sense. Again, who doesn't want, the network never goes fast enough, right? Never, never, never. >> It's not only speed. I agree with you but it's not only speed. What you find, what people take for granted in the LAN but they only notice it when now they're running over the LAN is that it's a business critical service. So you want it to be consistent. If it's up, it needs to have latency, jitter, control. It needs to be consistent. It can't be one second it's great, the next second it's bad and you don't know why and visibility. No one's ever had that problem. >> I'm just laughing. I'm thinking of our favorite Comcast here. If they're not a customer, you need to get them on your list. Help make some introductions hopefully. >> So, people take that for granted when they're LAN and then when they move to the Cloud, they just assume that it's going to continue but it doesn't actually work that way. Then they get people from branch offices complaining that they couldn't upload a doc or the sales person was slow and all these problems happen and the bigger issue is, not only is this a problem, you don't have control. As a person providing a service, you want to have control all the way so you can say "yeah, I can see it. "I'm fixing it for you here. "I fixed it for you." And so it's about creating that connection and making it business critical. >> It's just a funny thing that we see over and over and over where cutting edge and brand new quickly becomes expected behavior very, very quickly. 
The best delivery by the best service, suddenly you have an expectation that that's going to be consistent across all your experiences with all your apps. So you got to deliver that QS. >> Yeah, and I think the other thing that we notice, of course, is because of the explosion of data right? It's true that the internet's capacity is growing but data is growing faster because people want to do more because CPUs are stronger, your handset is stronger and so, so much of it is dynamic. Like I said before, historically, some of this was solved by just let's cache everything. But today, everything is dynamic. It's bidirectional and the caching technology doesn't do that. It's not built for that. It's a different type of network. It's not built for this kind of capacity so as more and more stuff is dynamic, it becomes difficult to do these things and that's really where we play. And again, I think the key is that historically, you had to build everything. But the same way that you have all these SAS providers not building everything themselves but just building the app and then running on top of the public Cloud. The same thing is why would I go build a network when the public Cloud is investing a hundred billion dollars a year in building massive infrastructure. >> Yeah, and they are, big infrastructure. Well Saar, thanks for giving us the update and stopping by and we will watch the story unfold. >> Great to be here. >> Alright. And we'll send John a message. >> I'll have to track him down. >> Alright, he's Saar, I'm Jeff. You're watching theCUBE. It's a CUBE conversation at our Palo Alto Studio. Thanks for watching. We'll see you next time. (dramatic music)
Liran Zvibel, WekaIO | CUBEConversation, April 2018
(music) >> Hi, I'm Stu Miniman and this is a CUBE Conversation in SiliconANGLE's Palo Alto office. Happy to welcome back to the program Liran Zvibel, who is the co-founder and CEO of WekaIO. Thanks so much for joining me. >> Thank you for having me over. >> Alright, so on our research side, you know, we've really been saying that data is at the center of everything. It's in the cloud, it's in the network, and of course in the storage industry data has always been there, but I think especially for customers it's been more front and center. So, you know, why is data becoming more important? It's not data growth and some of the other things that we've talked about for decades, but, you know, how is it changing? What are you hearing from customers today? >> So I think the main difference is that organizations are starting to understand that the more data they have, the better service they're going to provide to their customers, and they will be an overall better company than their competitors. So about 10 years ago we started hearing about big data, and other ways that, in a simpler form, just went over, sieved through a lot of data and tried to get some sort of high-level meaning out of it. In the last few years, people are actually employing deep learning and machine learning techniques on their vast amounts of data, and they're getting a much higher level of intelligence out of their huge capacities of data, and actually, with deep learning, the more data you have, the better outputs you get. >> Before we go into kind of the ML and the deep learning piece, just to kind of focus on data itself, there are some that say, you know, digital transformation is this buzzword, but when I talk to users, absolutely they're going through transformations. You know, we're saying everybody's becoming a software company, but how does data specifically help them with that? What is your viewpoint there, and what are you hearing from your customers? >> So if you look at it from the consumer perspective, people now keep a record of their lives at much higher resolution, and I'm not talking about the image resolution, I'm talking about the vast amount of data that they store. So if I look at how many pictures I have of myself as a kid and how many pictures I have of my kids, you could fit all of my pictures into albums; I can probably fit, like, a week's worth of my kids' time into albums. So people keep a lot more data as consumers, and then organizations keep a lot more data on their customers in order to provide better service and a better overall product. >> You know, as an industry we saw a real mixed bag when it came to big data, where I was saying, great, I have lots more volume of data, but that doesn't necessarily mean that I got more value out of it. So what are the trends that you're seeing? Why are things like deep learning, machine learning, AI going to be different, or is this just kind of the next iteration of, well, we tried and maybe we didn't hit it as well with big data, let's see if this does better? >> So I think that big data had its glory days, and now they're coming to the end of that crescendo, because people realized that what they got was a sort of aggregate of things that they couldn't make too much sense of. And then people really understood that for you to make better use of your data, you need to employ, similarly to how the brain works, a way to look at a lot of data and then make some sense out of that data, and once you've made some sense out of that data
we can now get computers to go through way more data and make a similar amount of sense out of that, and actually get much, much better results. So instead of just finding anecdotes, the thing that you were able to do with big data, you're actually now able to generate intelligent systems. >> You know, one of the other things we saw is, it used to be, okay, I have this huge back catalogue, or I'm going to survey all the data I've collected today. You know, it's much more, real time's a word that's been thrown around for many years, whether you say live data, or, you know, if you're at sensors, where I need to have something where I can train models and react immediately. That kind of immediacy is much more important. That's what I'm assuming, that's something that you're seeing from customers too? >> Indeed. So what we see is that customers end up collecting vast amounts of data, and then they train their models on these kinds of data, and then they're pushing these intelligent models to the edges, and then you're going to have edges running inference, and that could be a street camera, it could be a camera in the store, or it could be your car. And then usually you run this inference at the endpoints using all the things you've trained the models on, and you will still keep the data, push it back, and then you still run inference at the data center, sort of doing QA. And now the edges also know to mark where they couldn't make sense of what they saw, so the data center systems know what should we look at first, how do we make our models smarter for the next iteration, because these are closed-loop systems. You train them, you push to the edges, the edges tell you how well they think they understood, you train again, and things improve. We're now at the infancy of a lot of these loops, but I think the following probably two to five years will take us through a very, very fascinating revolution where systems all around us will become way, way more intelligent. >> Yeah, and there's interesting architectural discussions going on. If you talk about this edge environment, if I'm an autonomous vehicle or an airplane, of course I need to react there, I can't go back to the cloud. But, you know, what happens in the cloud versus what happens at the edge? Where does Weka fit into that whole discussion? >> So where we currently are running, we're running at the data centers. So at Weka we created the fastest file system, one that's perfect for AI and machine learning and training, and we make sure that your GPU-filled servers, that are very expensive, never sit idle. The second component of our system is tiering to very effective object storages that can run into exabytes. So we have a system that makes sure you can have as many GPU servers churning all the time, getting the results, getting the new models, while having the ability to read any form of data that was collected over the last several years, really through hundreds of petabytes of data sets, and now we have customers talking about exabytes of data sets representing a single application, not throughout the organization, just for that training application. >> Yeah, so AI and ML, is that the killer use case for your customers today? >> So that's one killer application. Just because of the vast amount of data and the high-performance nature of the clients, we actually show clients that run WekaIO finishing training sessions ten times faster than they would with traditional NFS-based solutions, but
just based on the different way we handle data. Another very strong application for us is around life sciences and genomics, where we show that we're the only storage that lets these processes remain CPU-bound. Any other storage at some point becomes IO-bound, so you couldn't parallelize the processing anymore. With us, it actually doesn't matter how many servers you run as clients: you double the amount of clients, and you either get twice the results in the same amount of time, or you get the same result in half the time. And with genomics nowadays there are applications that are life-saving, so hospitals run these things and they need results as fast as they can, so faster storage means better healthcare. >> Yeah, without getting too deep into it, because, you know, the storage industry has lots of wonkiness and there's so many pieces there, but I hear life sciences, I think object storage; I hear NVMe, I think block storage; you're file storage. When it comes down to it, why is that the right architecture for today, and what advantages does that give you? >> So we are actually the only company that went through the hassles and the hurdles of utilizing NVMe and NVMe over Fabrics for a parallel file system. All other solutions went the easier route and created block. And the reason we've created a file system is that this is what computers understand, this is what the operating system understands. When you go to university and you learn computer science, they teach you how to write programs; they need a file system. Now, if you want to run your program over two servers or ten servers, what you need is a shared file system. Up until we came along, the gold standard was using NFS for sharing files across servers, but NFS was actually created in the '80s, when Ethernet ran at 10 megabit. Currently most of our customers already run 100 gigabit, which is four orders of magnitude faster, so they're seeing that they cannot run a network protocol that was designed for four orders of magnitude less speed with the current demanding workloads. So this explains why we had to go and pick a totally different way of pushing data to the clients. With regard to object storages, object storages are great because they allow customers to aggregate hard drives into inexpensive, large-capacity solutions. The problem with object storages is that the programming model is different from the standard file system that computers can understand, in two ways. One, when you write something, you don't know when it's actually going to get stored. It's called eventual consistency, and it's very difficult for mortal programmers to actually write a system that is sound, that is always correct, when you're writing to eventual-consistency storage. The second thing is that objects cannot change. You cannot modify them. You need to create them, you get them, or you can delete them. They can have versions, but this is also much different from how the average programmer is used to writing their programs. So we are actually tying together the highest-performance NVMe over Fabrics at the first tier, and these object storages that are extremely efficient but very difficult to work with at the back-end tier, into a single solution that is the highest performance and best economics. >> Right. Liran, I want to give you the last word. Give us a little bit of a long view. You talked about where we've gone, how parallel architecture helps now that we're at 100 gig. Look out five years in the future. What's going to happen? You know, blockchain takes over the world,
cloud dominates everything. But from an infrastructure, application, and, you know, storage world, what does Weka think things will look like? >> So one very strong trend that we are seeing is around encryption. It doesn't matter what industry; I think storing things in clear text, for many organizations, just stops making sense, and people will demand more and more of their data to be encrypted, with tighter control around everything. That's one very strong trend that we're seeing. Another very strong trend that we're seeing is enterprises would like to leverage the public cloud, but in an efficient way. So if you were to run the economics, moving all your applications to the public cloud may end up being more expensive than running everything on-prem, and I think a lot of organizations realized that. The trick is going to be, each organization will have to find a balance of what kind of services run on-prem, and these are going to be the services that run around the clock, and what services have more of a bursty nature, and then organizations will learn how to leverage the public cloud for its elasticity, because if you're just running on the cloud, you're not leveraging the elasticity, you're doing it wrong. And we're actually helping a lot of our customers do it with our hybrid cloud ability, to have local workloads and cloud workloads, and getting these whole workflows to actually run is a fascinating process. >> Liran, thank you so much for joining us. Great to hear the update, not only on Weka but really where the industry is going. Dynamic times here in the industry, data at the center of it all. theCUBE's looking to cover it at all the locations, including here in our lovely Palo Alto studio. I'm Stu Miniman. Thanks so much for watching theCUBE. >> Thank you very much. (music)
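Before moving on, the two programming models Liran contrasts above, a shared file system you can modify in place versus an object store where objects are written whole and never edited, are easy to see side by side in code. The object-store client below is a stand-in written for this example; it is not a real SDK and not Weka's API, and it only mimics the "immutable, whole-object" behavior he describes.

```python
# POSIX-style file: modify a few bytes in place inside a large file.
def patch_file(path: str, offset: int, data: bytes) -> None:
    with open(path, "r+b") as f:
        f.seek(offset)
        f.write(data)          # only the touched bytes are rewritten

# Object-style: no partial update; read, change, and re-PUT the whole object.
class FakeObjectStore:
    """Toy stand-in for an object store; objects are replaced, never edited."""
    def __init__(self) -> None:
        self._objects = {}
    def put(self, key: str, body: bytes) -> None:
        self._objects[key] = body      # whole object replaced atomically
    def get(self, key: str) -> bytes:
        return self._objects[key]

def patch_object(store: FakeObjectStore, key: str, offset: int, data: bytes) -> None:
    body = bytearray(store.get(key))           # download everything
    body[offset:offset + len(data)] = data
    store.put(key, bytes(body))                # upload everything again
```

The second path has to move the entire object for a ten-byte change, which is exactly why fronting cheap object capacity with a fast, file-system-style tier is attractive.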
Haseeb Budhani, Rafay Systems | CUBEConversation, April 2018
(light music) >> Hi, I'm Stu Miniman and this is a special CUBE Conversation here in SiliconANGLE Media's Palo Alto studio. Happy to bring back to the program Haseeb Budhani who, last time I talked to him, had worked at a number of interesting startups, been a Chief Product Officer, had many various roles, and today is a founder and CEO. So, we always love to have back CUBE alums, especially doing interesting things, getting out there with that entrepreneurial spirit, so, Haseeb, great to see you. Thanks so much for joining us. >> Great to see you, and the first time you and I met, the stage was not as nice as this. That was many, many, many years ago. >> You know, we've been growing up a bit, just like the ecosystems around us. You and I talked about things like replication, changes with data and storage and everything else in various roles, so, Rafay Systems, tell us a little bit. What was the inspiration? Tell us a little bit about the founding team, the why of the company first. >> Sure. As you know, right before Rafay Systems, I started a company called Soha. Soha was acquired by Akamai 18-odd months ago. I think we all learn by failing. There was one specific thing we did very poorly at Soha, which was how we ran operations, how we thought about getting closer to our users and so on. So my co-founder from Soha and I are doing this company again together; he was our VP of Engineering there, he's our VP of Engineering here. When we left Akamai after our stint there, we spent time thinking about what kind of applications have, when you think in terms of an application stack, some microservices that are always going to need to be as close to the end point as possible. So we were trying to figure out who has that problem and how they solve it. So, here's what we found. Many, many applications have this problem, and nobody knows how to solve it well. I mean, if you think Siri, there's an edge that Apple is running for that. If you think eBay, there's transactions happening in region and so on. Or when you think IoT, there are edges being created in the IoT world, and we wanted to come up with a framework or a platform to solve these problems well for all these different application developers. So we came up with the concept that we call the Programmable Edge. The idea is that we want to help our customers run certain microservices, the ones that are latency sensitive, as close to their end points as possible. And an end point could be a car, it could be a phone, it could be a sensor, it doesn't matter what it is, but we want to help them get their applications out as quickly as possible. >> Yeah. Before we get into some of the technology, Rafay Systems, Soha Systems, where did the names for these come from? >> Soha is my daughter's name. Rafay is my son's name. We have two kids. I don't know what I'm going to do after this. I need a job. I don't know what I'm going to do after this company. But, actually, our VP of Marketing at Soha, he was the one who wanted to use the name. So when we started the previous company, I called it Bubble Wrap, because I thought we were wrapping apps in a bubble. I thought that was really cool. Everybody hated it. (laughs) >> Yeah, there are too many puns on popping the bubble or things like that; it would be challenging. >> I thought it was, I still think it's awesome, but nobody liked it.
So, he was looking for a name and we had hired a new agency; they were ready to roll out a new website, and we didn't have a name. So, in, like, a four-hour window, we had to come up with something. He says, "That's a short enough name and it looks like you own the domain anyway, let's just use that." Of course, my kids love it. Then once we started the second company, it had to be named after my son. >> Your daughter wasn't a little upset that you sold off the company and now have nothing to do with it? >> It was a pretty healthy outcome, so I think she's fine. (both laughing) >> Excellent. Talking about microservices applications around the globe, I was at the Adobe Summit recently and, you're right, it's a very different conversation than, say, CDNs in the past. But it's, "How many instances do I have? How do I manage that? What's their concern?" Networking's always been one of those underlying challenges. Think back to the failed xSPs in the 90s, (Haseeb laughs) and when cloud started 10-plus years ago, it was like, "Oh, are we going to be able to handle that today?" Think back to Citrix, and their NetScaler product is one of those secret sauce things in there that those of us in the networking space really understand, but most people go, "Oh, SaaS is going to be great and things will just work anywhere on any device anywhere." But there's some real challenges there. >> Haseeb: Absolutely. >> What's that big gap in the market and are there other companies that are trying to help solve this? >> I used to work on NetScaler a long time ago. I don't know if you brought it up because of that, but I think it's an incredibly amazing product that became the foundation of many things. I think two things are happening in our industry that allow companies like ours to exist, at least from an applications perspective. One is containers, the fact that we are now able to package things not as big, fat VMs, but smaller, essentially process-level things. And then microservices, the fact that we have this notion of loose coupling between services and you can have certain APIs that expose things to each other. And if you at least thematically think about it, if there's a loose coupling, you can extend them out so long as you get more value out of doing so. And that, fundamentally, is what we think is an interesting thing happening out there. The fact that there are loose couplings, the fact that applications are no longer monolithic, allows us to make better decisions about what needs to run where. The challenge is how do you make that happen? The example I always share with people is, let's imagine for a second that you have access to 100,000 regions all around the world. You have edges everywhere, 100,000 locations where you can run your code. What do you do next? How do you decide which ones you need? Do you need 5,000? Do you need 80,000? That needs to be solved by the platform. We are at a point now, particularly when it comes to locations, that these are no longer decisions that an Ops team can make. That has to be driven by the platform, and the platform that we are envisioning is going to help our customers, basically, in terms of where the code goes, how they think about performance, et cetera. These are things that will be expressed as a policy to our platform, and we help them determine where the locations should be and so on.
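The "100,000 possible locations, which ones do you actually need?" question Haseeb poses is essentially a coverage problem. The sketch below is a toy version of it, with invented locations, invented latency numbers, and a simple greedy heuristic; it is meant only to show the shape of the decision such a platform has to automate and is not Rafay's algorithm.

```python
# Candidate edge location -> measured latency (ms) to each user population.
# All names and numbers are illustrative.
LATENCY_MS = {
    "sfo": {"us-west": 8,   "us-east": 70, "europe": 150},
    "iad": {"us-west": 65,  "us-east": 9,  "europe": 90},
    "fra": {"us-west": 150, "us-east": 95, "europe": 12},
}

def pick_edges(budget_ms: int) -> list:
    """Greedy set cover: repeatedly pick the edge that covers the most
    still-uncovered populations within the latency budget."""
    uncovered = {pop for latencies in LATENCY_MS.values() for pop in latencies}
    chosen = []
    while uncovered:
        def newly_covered(edge):
            return {p for p, ms in LATENCY_MS[edge].items()
                    if ms <= budget_ms and p in uncovered}
        best = max(LATENCY_MS, key=lambda e: len(newly_covered(e)))
        gained = newly_covered(best)
        if not gained:
            break  # remaining populations can't be served within the budget
        chosen.append(best)
        uncovered -= gained
    return chosen

print(pick_edges(budget_ms=20))   # -> ['sfo', 'iad', 'fra']
```

A real platform would also fold in cost, capacity, data-residency policy, and live measurements, but the point stands: this is a computation the platform makes, not something an ops team can eyeball per application.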
>> Alright. Haseeb, I think many of us lost too many hours fighting in the industry over what was cloud, what wasn't cloud, various definitions, those ontological discussions; academically they make sense. Heck, when I talk to customers today it's not like, "Well, I'm figuring out my public cloud strategy," or this and that. They have a cloud strategy because there's various pieces in there to connect. Edge is one of those. I haven't heard that people don't like the term, but if I talk to seven different companies, edge means a very different thing to all of them. You and I reconnected actually when we'd both written similar articles that said, "Well, edge does not kill the public cloud." Peter Levine wrote a very interesting piece with that eye-catching title that was like, "Well, edge is going to have trillions of devices and there'll be more data at the edge than anywhere else." And it's like, okay, yes, yes, yes, but that does not mean that public cloud evaporates tomorrow, right? Nice try, Amazon, good luck on your next business. (laughs) Maybe give us a little bit of your definition of edge, but, more importantly, who are the type of customers that you're talking to, and what is the opportunity and challenges of that edge environment? >> Sure. So let's talk about what edge means. I think we both agree that the word edge is a misnomer; it depends. There are many kinds of edges, if you will. A car for a Tesla, that's an edge, right? Because they are running compute jobs on the car. I use the phrase device edge to describe that thing; the car is a device edge. You're also going to have the car talking to things out there somewhere. If two cars are interacting with each other, you don't want that interaction, or the rendezvous point for that interaction, being very, very far away; you want to be somewhere close by. I call that the infrastructure edge. Now, infrastructure edge, since you asked, I'm going to go down that rabbit hole: you could be running at the edge of the internet, so think Equinix or Digital or anybody who's got massive peering presence and so on. So that's the internet edge, as far as infrastructure is concerned. But if you talk to an AT&T, because you said depending on who you talk to their idea is different, in AT&T's mind or Verizon's mind, maybe the base station is the edge, so I call that the wireless edge. Again, infrastructure. So, at a very high level, there is the device edge, there is the infrastructure edge, and then there's the cloud. Applications will span all of these things. It's not one or the other; that doesn't make any sense. Any application will have workloads that are best run in Amazon, or, of course, now I think we use Amazon like TiVo, Amazon means public cloud.
So we should maybe stop thinking in terms of an edge, it just depends on the application that you're targeting, that application's sub-components may need to run in different places, but that makes it so much harder. We couldn't even figure out how to run things in a single region in Amazon, or two, people still have trouble running across availability zones in Amazon. Now we're saying, "Hey, you're going to have four edges, or five edges, and you're going to have 100 locations," how is this going to work? And that is the challenge. That's, of course, the opportunity as well, because there are applications out there, I talked about the car use case, which seems to be a real use case for many car companies, particularly the ones who are going autonomous with their fleets. They have this challenge. Lots of data being generated and they need to process it as quickly as possible because there's lots of noise on the wire. This data problem, data has gravity, so instead of moving data to a location where there is compute, you want to move compute as close to the data as possible. That's the trend I look for when we're looking for customers. Who has lots of data/traffic being generated at the edge? That could be a sensor company, probably a number of IoT companies that are pushing data up and it turns out that it's a lot of data, or they have compliance challenges and they can't have PII coming out of a region. So these are some of the use cases we were looking at. These use cases are new use cases, but even in older applications, there are needs that can be fulfilled with an edge. Here's an example I tend to use to describe the problem, not that this is a use case. When I talk to a VC and I'm trying to explain to them why an edge matters, at least thematically, I ask the question; if you go to an e-commerce site, how much time do you spend buying versus browsing? What is your answer? >> The buying is a very small piece of it. >> Yeah. >> But it's the most important part. >> 99% of the time is spent looking at read-only stuff. Why do we need to go back to the core if you're not buying? What if the inventory could be pushed to the edge and you can just interact and look at the inventory, and when you make a purchase decision that goes to the core? That's what's possible with the edge. In fact, I believe that some number of years down the line, that's how all applications are going to behave. The things that are read-only, state management, state validation, cookie validation for example, for authentication, these are things that are going to happen at the edge of the internet or wherever the edge happens to be, and then actual purchase decisions or state change decisions will happen in the core. >> Alright. Haseeb, explain to us where in the stack your solution fits. You mentioned everything from the hyper-scale clouds to Equinix out to devices in cars and the like, so where is your layer? Where is your secret sauce? >> So we expect to sit at the internet edge, and once the wireless edge is a real thing and 5G is out there, we expect to sit somewhere there as well, somewhere between the internet edge and the wireless edge. The way we think about this is there are aggregation points, on the internet, in the network, where you need to put compute so you can make aggregate decisions across multiple devices. That's where we are building our company. In terms of the stack, we are essentially helping our customers run their compute. Think of us as a platform where customers can bring their code, if you will.
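The browse-at-the-edge, buy-at-the-core pattern Haseeb describes is easy to sketch. The routing below is a hypothetical illustration, not Rafay's product: read-only requests (catalog lookups, session and cookie validation) are served from an edge replica, while state-changing requests, the actual purchase, are forwarded to the core, which remains the system of record.

```python
# Hypothetical request router for the e-commerce example: reads stay at the
# edge, writes (state changes) go back to the core region.
EDGE_INVENTORY_CACHE = {"sku-123": {"name": "running shoe", "in_stock": 42}}

def handle_request(method, path, payload=None):
    if method == "GET" and path.startswith("/catalog/"):
        sku = path.rsplit("/", 1)[-1]
        # 99% of traffic: read-only browsing served from the edge replica.
        return {"source": "edge", "item": EDGE_INVENTORY_CACHE.get(sku)}
    if method == "POST" and path == "/checkout":
        # The rare state change: forward to the core, the system of record.
        return forward_to_core(payload)
    return {"error": "unsupported route"}

def forward_to_core(payload):
    # Stand-in for an RPC/HTTP call to the core region.
    return {"source": "core", "order_accepted": True, "items": payload["items"]}

print(handle_request("GET", "/catalog/sku-123"))
print(handle_request("POST", "/checkout", {"items": ["sku-123"]}))
```

The design point is that the core keeps authoritative state; the edge only ever holds a replica of read-mostly data, so a stale cache can cost a little freshness but never a lost order.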
Because at the end of the day it's computing. Yes, it's about traffic and data but you still need to run compute somewhere, so we are helping our customers run that compute at the internet edge or the wireless edge. >> Okay. Are your customers some of the telcos, MSPs, cloud providers and the enterprise, or how does that relationship work? >> The ideal customers for us are SaaS companies who are running applications on the internet that generate money. They care about performance. And they will pay money if we can improve their performance by whatever factor it happens to be. Providers, service providers, in our mind, are partners for us. So we're engaged actually with a number of providers out there who are trying to figure out how to, basically, monetize their existing infrastructure investments better. And edge is a new concept that has been introduced to them and, as you know, a lot of providers already have edge strategies, and we're trying to get involved with them to see how we can bring more SaaS companies to engage with service providers. Which is a really hard thing today. >> It sounds like you solve problems for some Fortune 1,000 customers too, though? >> Yes. >> So do they get involved also? >> Yes, look, the best way to build a startup is you come up with a thesis and very quickly go find four or five people who absolutely believe in the same thing, and they work with you. So, we've been fortunate enough to find a few folks who say, "Look, this is a problem we've been thinking about for a while, let's partner together to build a better solution." That's been going really well. >> Great. So, the company itself, I believe you just launched a few months ago, so. >> Haseeb: We started a few months ago. >> Where is the product? What's the state of the funding? >> How many people do you have? >> Sure. >> How many customers? >> We raised a seed round in November. Seed rounds have gotten larger as well these days. They're like the A rounds from 10 years ago. We are at a point now where we are demonstrating our platform to our early customers and by early summer we expect to have people on the platform. So, things are moving fast, but I think this problem is becoming more and more clear to many people. Sometimes people don't call it edge computing, people have all kinds of phrases for it, but when it comes to helping customers get better performance out of their existing stacks, that is a very promising concept to many people running applications on the internet. So we are approaching it from that perspective. Edge happens to be the way we solve the problem, so I guess we're an edge computing company, but at the end of the day we're trying to make applications run faster on the internet. >> Okay. Last thing, give us a viewpoint for the next year or two out, what do you expect to see in this space and how should we be measuring success for your firm? >> Sure. Things always take longer than we think they will. I never want to forget that lesson I learned many years ago. I think, look, it's still early days for edge computing. I think a lot of companies who have been bruised by the problem, in that they've tried to build out PoPs, or tried to get their logic as close to their endpoints as possible, are going to be adopting it sooner than others. I think in terms of broader adoption, where every developer is thinking of core plus edge, that's a five year out thing, and we should, I mean, that's just out there somewhere.
But there's enough companies out there, there's enough new use cases out there in the next couple of years, that allow a company like ours to exist. In fact, I am quite confident that there are probably five other smart people, smarter than me, doing this already. This is a real problem, it needs to be solved. >> Alright, well, Haseeb Budhani, it's great to catch up. Thank you so much for helping us interact with our community and understand where these emerging trends in edge are going and everything that's happening. Distributed architectures are absolutely among the biggest challenges of our time, and I look forward to seeing where you and your customers go in the future. >> Absolutely. Thank you so much, Stu. Appreciate your time. >> Alright. And thank you for joining us. Of course, check out theCUBE.net for all of the videos. Check out wikibon.com, where the team is digging in deep into how edge is impacting architectures. Peter Burris, David Floyer and the team are digging in deep to understand that more, and we always love your feedback, so feel free to give us any comments back. I'm Stu Miniman and thank you for watching theCUBE. (light music)
SUMMARY :
Happy to bring back to the program Haseeb Budhani Great to see you and the first time you and I met, just like the ecosystems around us. The idea is that we want to help our customers Before we get into some of the technology, because I thought we were wrapping apps in a bubble, on popping the bubble or things like that, it had to be named after my son. It was a pretty healthy outcome so I think she's fine. "Oh, SAS is going to be great and the platform that we are envisioning I haven't heard that people don't like the term, I call that the infrastructure edge. (laughs) I ask the question; if you go to an e-commerce site, What if the inventory could be pushed to the edge Haseeb, explain to us where in the stack your solution fits. We are, the way we think about this and the enterprise or how does that relationship work? And edge is a new concept that has been introduced to them is you come up with a thesis So, the company itself, I believe you just launched Edge happens to be the way we solve the problem, and how should we be measuring success for your firm? that allow company like ours to exist. and I look forward to seeing where you Thank you so much, Stu. I'm Stu Miniman and thank you for watching theCUBE.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
David Floyer | PERSON | 0.99+ |
Peter Levine | PERSON | 0.99+ |
Peter Burris | PERSON | 0.99+ |
Soha | ORGANIZATION | 0.99+ |
Rafay Systems | ORGANIZATION | 0.99+ |
Akamai | ORGANIZATION | 0.99+ |
Apple | ORGANIZATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Soha Systems | ORGANIZATION | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Siri | TITLE | 0.99+ |
AT&T | ORGANIZATION | 0.99+ |
two kids | QUANTITY | 0.99+ |
November | DATE | 0.99+ |
5,000 | QUANTITY | 0.99+ |
Verizon | ORGANIZATION | 0.99+ |
Haseeb | PERSON | 0.99+ |
Rafay | PERSON | 0.99+ |
eBay | ORGANIZATION | 0.99+ |
four | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
Haseeb Budhani | PERSON | 0.99+ |
100 locations | QUANTITY | 0.99+ |
two cars | QUANTITY | 0.99+ |
April 2018 | DATE | 0.99+ |
80,000 | QUANTITY | 0.99+ |
Equanex | ORGANIZATION | 0.99+ |
Tesla | ORGANIZATION | 0.99+ |
Telcos | ORGANIZATION | 0.99+ |
five year | QUANTITY | 0.99+ |
Kleenex | ORGANIZATION | 0.99+ |
five people | QUANTITY | 0.99+ |
four hour | QUANTITY | 0.99+ |
two things | QUANTITY | 0.99+ |
Edge | ORGANIZATION | 0.99+ |
100,000 regions | QUANTITY | 0.99+ |
Citrix | ORGANIZATION | 0.99+ |
today | DATE | 0.99+ |
both | QUANTITY | 0.99+ |
10 years ago | DATE | 0.98+ |
tomorrow | DATE | 0.98+ |
next year | DATE | 0.98+ |
100,000 locations | QUANTITY | 0.98+ |
Adobe Summit | EVENT | 0.97+ |
five edges | QUANTITY | 0.97+ |
one | QUANTITY | 0.97+ |
10 plus years ago | DATE | 0.97+ |
One | QUANTITY | 0.97+ |
90s | DATE | 0.96+ |
second company | QUANTITY | 0.96+ |
first | QUANTITY | 0.96+ |
early summer | DATE | 0.96+ |
seven different companies | QUANTITY | 0.96+ |
trillions of devices | QUANTITY | 0.96+ |
TiVo | ORGANIZATION | 0.95+ |
SiliconANGLE Media | ORGANIZATION | 0.94+ |
18 odd months ago | DATE | 0.94+ |
SAS | ORGANIZATION | 0.94+ |
first time | QUANTITY | 0.94+ |
Stu | PERSON | 0.92+ |
few months ago | DATE | 0.92+ |
theCUBE.net | OTHER | 0.91+ |
five other smart people | QUANTITY | 0.9+ |
wikibon.com | OTHER | 0.89+ |
Wikibon Research Meeting | October 20, 2017
(electronic music) >> Hi, I'm Peter Burris and welcome once again to Wikibon's weekly research meeting from the CUBE studios in Palo Alto, California. This week we're going to build upon a conversation we had last week about the idea of different data shapes or data tiers. For those of you who watched last week's meeting, we discussed the idea that data across very complex distributed systems, featuring significant amounts of work associated with the edge, is going to fall into three classifications or tiers. The primary tier is where the sensor data provides direct and specific experience about the things that the sensors are indicating; that data will then signal work or expectations or decisions to a secondary tier that aggregates it. So what is the sensor saying? And then the gateways will provide a modeling capacity, a decision making capacity, but also a signal to tertiary tiers that increasingly look across a system wide perspective on how the overall aggregate system's performing. So very, very local to the edge, gateway at the level of multiple edge devices inside a single business event, and then up to a system wide perspective on how all those business events aggregate and come together. Now what we want to do this week is translate that into what it means for some of the new technologies, new analytics technologies, that are going to provide much of the intelligence against each of these tiers of data. As you can imagine, the characteristics of the data are going to have an impact on the characteristics of the machine intelligence that we can expect to employ. So that's what we want to talk about this week. So Jim Kobielus, with that as a backdrop, why don't you start us off? What are we actually thinking about when we think about machine intelligence at the edge? >> Yeah, Peter, at the edge, the edge device being in the primary tier that acquires fresh environmental data through its sensors, what happens at the edge? In the extreme model, we think about autonomous engines, let me just go there very briefly, basically, it's a number of workloads that take place at the edge, the data workloads. The data is (mumbles) or ingested, it may be persisted locally, and that data then drives local inferences that might be using deep learning chipsets that are embedded in that device. It might also trigger various actions, called actuations. Things, actions are taken at the edge. If it's the self-driving vehicle, for example, an action may be to steer the car or brake the car or turn on the air conditioning or whatever it might be. And then last but not least, there might be some degree of adaptive learning or training of those algorithms at the edge, or the training might be handled more often up at the secondary or tertiary tier. The tertiary tier, at the cloud level, has visibility usually across a broad range of edge devices and is ingesting data that originated from all of the many different edge devices, and it is the focus of modeling, of training, of the whole DevOps process, where teams of skilled professionals make sure that the models are trained to a point where they are highly effective for their intended purposes. Then those models are sent right back down to the secondary and the primary tiers, where inferences are made, you know, 24 by seven, based on those latest and greatest models. That's the broad framework in terms of the workloads that take place in this fabric.
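Jim's description of the primary-tier workload, ingest sensor data, persist it locally, run a local inference, actuate, and periodically pull refreshed models trained upstream, maps onto a simple control loop. The sketch below is an illustrative outline under those assumptions; the function names and the model refresh interval are hypothetical, not tied to any particular IoT stack.

```python
import collections
import time

LOCAL_BUFFER = collections.deque(maxlen=10_000)   # local persistence at the device

def edge_loop(sensor, model, actuator, fetch_model_update):
    """Primary-tier workload: ingest -> persist -> infer -> actuate, with periodic model refresh."""
    last_refresh = time.time()
    while True:
        reading = sensor.read()                   # ingest fresh environmental data
        LOCAL_BUFFER.append(reading)              # persist locally (bounded buffer)
        decision = model.predict(reading)         # local inference, low latency
        actuator.apply(decision)                  # actuation: steer, brake, adjust HVAC, ...
        # Training happens upstream; the edge just pulls the latest model now and then.
        if time.time() - last_refresh > 3600:
            new_model = fetch_model_update()      # from the secondary/tertiary tier
            if new_model is not None:
                model = new_model
            last_refresh = time.time()
```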
>> So Neil, let me talk to you, because we want to make sure that we don't confuse the nature of the data and the nature of the devices, which may be driven by economics or physics or even preferences inside of business. There is a distinction that we have to always keep track of, that some of this may go up to the Cloud, some of it may stay local. What are some of the elements that are going to indicate what types of actual physical architectures or physical infrastructures will be built out as we start to find ways to take advantage of this very worthwhile and valuable data that's going to be created across all of these different tiers? >> Well first of all, we have a long way to go with sensor technology and capability. So when we talk about sensors, we really have to define classes of sensors and what they do. However, I really believe that we'll begin to think in a way that approximates human intelligence, about the same time as airplanes start to flap their wings. (Peter laughs) So, I think, let's have our expectations and our models reflect that, so that they're useful, instead of being, you know hypothetical. >> That's a great point Neil. In fact, I'm glad you said that, because I strongly agree with you. But having said that, the sensors are going to go a long ways, when we... but there is a distinction that needs to be made. I mean, it may be that that some point in time, a lot of data moves up to a gateway, or a lot of data moves up to the Cloud. It may be that a given application demands it. It may be that the data that's being generated at the edge may have a lot of other useful applications we haven't anticipated. So we don't want to presume that there's going to be some hard wiring of infrastructure today. We do want to presume that we better understand the characteristics of the data that's being created and operated on, today. Does that make sense to you? >> Well, there's a lot of data, and we're just going to have to find a way to not touch it or handle it any more times than we have to. We can't be shifting it around from place to place, because it's too much. But I think the market is going to define a lot of that for us. >> So George, if we think about the natural place where the data may reside, the processes may reside, give us a sense of what kinds of machine learning technologies or machine intelligence technologies are likely to be especially attractive at the edge, dealing with this primary information. Okay, I think that's actually a softball which is, we've talked before about bandwidth and latency limitations, meaning we're going to have to do automated decisioning at the edge, because it's got to be fast, low latency. We can't move all the data up to the Cloud for bandwidth limitations. But, by contrast, so that's data intensive and it's fast, but up in the cloud, where we enhance our models, either continual learning of the existing ones or rethinking them entirely, that's actually augmented decisions, and augmented means it's augmenting a human in the process, where, most likely, a human is adding additional contextual data, performing simulations, and optimizing the model for different outcomes or enriching the model. >> It may in fact be a crucial element or crucial feature of the training by in fact, validating that the action taken by the system was appropriate. 
Yes, and I would add to that, actually, you used an analogy, people are going from two extremes where some people say, "Okay, so all the analytics has to be done in the cloud," and Wikibon and David Floyer and Jim Kobielus have been pioneering the notion that we have to do a lot more at the client. But you might look back at client server computing, where the client was focused on presentation and the server was focused on data integrity. Similarly, here, the edge or client is going to be focused on fast inferencing and the server is going to do many of the things that were associated with a DBMS and data integrity, in terms of reproducibility of decisions in the model for auditing, security, versioning, and orchestration in terms of distributing updated models. So we're going to see the roles of the edge and the cloud rhyme with what we saw in client server. Neither one goes away, they augment each other. >> So, Jim Kobielus, one of the key issues there is going to be the gateway, and the role that the gateway plays, and specifically here, we talked about the nature of, again, the machine intelligence that's going to be operating more on the gateway. What are some of the characteristics of the work that's going to be performed at the gateway that kind of has oversight of groupings or collections of sensor and actuator devices? >> Right, good question. So the perfect example that everybody's familiar with now of a gateway in this environment is a smart home hub. A smart home hub, just for the sake of discussion, has visibility across two or more edge devices. It could be a smart speaker, could be the HVAC system that is sensor equipped, and so forth. What it does, the role it performs, a smart hub of any sort, is that it acquires data from the edge devices; the edge devices might report all of their data directly to the hub, or the sensor devices might also do inferences and then pass on the results of the inferences they have made to the hub, regardless. What the hub does is, A, it aggregates the data across those different edge devices over which it has visibility and control, and B, it may perform its own inferences based on models that look out across an entire home in terms of patterns of activity. Then the hub might take various actions autonomously, by itself, without consulting an end user or anything else. It might take action in terms of beefing up the security, adjusting the HVAC, adjusting the lights in the house or whatever it might be, based on all that information streaming in real time. Possibly, its algorithms will allow it to determine what of that data shows an anomalous condition that deviates from historical patterns. Those kinds of determinations, whether it's anomalous or a usual pattern, are often made at the hub level, 'cause it's maintaining sort of a homeostatic environment, as it were, within its own domain, and that hub might also communicate upstream, to a tertiary tier that has oversight, let's say, of a smart city environment, where everybody in that city, or whatever, might have a connection into some broader system that, say, regulates utility usage across the entire region to avoid brownouts and that kind of thing. So that gives you an idea of what the role of a hub is in this kind of environment. It's really a controller.
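Jim's point that the hub decides whether incoming readings are "anomalous or a usual pattern" against historical behavior can be illustrated with a very small sketch. A rolling mean and standard deviation with a z-score threshold is an assumed, deliberately simple stand-in for whatever models a real hub would actually run.

```python
import statistics
from collections import deque

class HubAnomalyDetector:
    """Flags readings that deviate from the recent history seen at the hub."""
    def __init__(self, window=200, threshold=3.0):
        self.history = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, value):
        anomalous = False
        if len(self.history) >= 30:                      # need some history first
            mean = statistics.fmean(self.history)
            stdev = statistics.pstdev(self.history) or 1e-9
            anomalous = abs(value - mean) / stdev > self.threshold
        self.history.append(value)
        return anomalous

detector = HubAnomalyDetector()
readings = [21.0 + 0.1 * (i % 5) for i in range(100)] + [35.0]
for t, temp in enumerate(readings):
    if detector.observe(temp):
        print(f"reading {t}: {temp} C deviates from the home's usual pattern")
```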
So, Neil, if we think about some of the issues that people really have to consider as they start to architect what some of these systems are going to look like, we need to factor in both what the data is doing now, but also ensure that we build into the entire system enough of a buffer so that we can anticipate and take advantage of future ways of using that data. Where do we draw that fine line between "we only need this data for this purpose now" and "geez, let's ensure that we keep our options open so that we can use as much data as we want at some point in time in the future"? >> Well, that's a hard question, Peter, but I would say that it may turn out that for this detailed data coming from sensors, the historical aspect of it isn't really that important. If the things you might be using that data for are more current, then you probably don't need to capture all that. On the other hand, there have been many, many occasions historically where data has been used for something other than its original purpose. My favorite example was scanners in grocery stores, where it was meant to improve the checkout process, not have to put price stickers on everything, manage inventory and so forth. It turned out that some smart people like IRI and some other companies said, "We'll buy that data from you, and we're going to sell it to advertisers," and all sorts of things. We don't know the value of this data yet, it's too new. So I would err on the side of being conservative and capturing and saving as much as I could. >> So what we need to do is marry, or do an optimization of some form about, how much is it going to cost to transmit the data versus what kind of future value or what kinds of options on future value might there be in that data. That is, as you said, a hard problem, but we can start to conceive of an approach to characterizing that ratio, can't we? >> I hope so. I know that, personally, when I download 10 gigabytes of data, I pay for 10 gigabytes of data, and it doesn't matter if it came from a mile away or 10,000 miles away. So there have to be adjustments for that. There are also ways of compressing data, because this sensor data I'm sure is going to be fairly sparse, can be compressed, is redundant; you can do things like run-length encoding, which takes all the zeroes out, and that sort of thing. There are going to be a million practices that we'll figure out. >> So as we imagine ourselves in this schema of edge, hub, and cloud, or primary, secondary and tertiary data, and we start to envision the role that data's going to play and how we build these architectures and these infrastructures, it does raise an interesting question, and that is, from an economic standpoint, what do we anticipate is going to be the classes of devices that are going to exploit this data? David Floyer, who's not here today, hope you're feeling better David, has argued pretty forcibly that over the next few years we'll see a lot of advances made in microprocessor technology. Jim, I know you've been thinking about this a fair amount. What types of functions >> Jim: Right. >> might we actually see being embedded in some of these chips that software developers are going to utilize to actually build some of these more complex and interesting systems?
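Before the chipset discussion, Neil's aside about compressing sparse, redundant sensor streams, run-length encoding "which takes all the zeroes out," is worth a concrete illustration. This is generic textbook RLE over a list of samples, not a claim about any particular telemetry format.

```python
def rle_encode(samples):
    """Collapse runs of repeated values (e.g. long stretches of zeroes) into (value, count) pairs."""
    encoded = []
    for value in samples:
        if encoded and encoded[-1][0] == value:
            encoded[-1][1] += 1
        else:
            encoded.append([value, 1])
    return encoded

def rle_decode(encoded):
    return [value for value, count in encoded for _ in range(count)]

# A sparse sensor stream: mostly zeroes, occasional events.
stream = [0, 0, 0, 0, 0, 7, 0, 0, 0, 0, 0, 0, 3, 3, 0, 0]
packed = rle_encode(stream)
print(packed)                      # [[0, 5], [7, 1], [0, 6], [3, 2], [0, 2]]
assert rle_decode(packed) == stream
```

A real deployment would likely pair this with delta encoding or a general-purpose compressor; RLE is just the simplest way to squeeze out the redundancy Neil describes.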
Yeah, first of all, one of the trends we're seeing in the chipset market for deep learning, just to go there for a moment, is that deep learning chipsets traditionally, when I say traditionally, the last several years, the market has been dominated by GPUs, graphics processing units. Nvidia, of course, is the primary provider of those. Of course, Nvidia has been around for a long time as a gaming solution provider. Now, what's happening with GPU technology, in fact, the latest generation of Nvidia's architecture shows where it's going. The trend is toward more deep learning optimized capabilities at the chipset level. They're called tensor cores, and I don't want to bore you with all the technical details, but the whole notion of-- >> Peter: Oh, no, Jim, do bore us. What is it? (Jim laughs) >> Basically deep learning is based on doing high speed, fast matrix math. So fundamentally, tensor cores do high velocity, fast matrix math, and the industry as a whole is moving toward embedding more tensor cores directly into the chipset, higher densities of tensor cores. Nvidia in its latest generation of chips has done that. They haven't totally taken out the gaming oriented GPU capabilities, but there are competitors, and it's a growing list, more than a dozen competitors on the chipset side now. They're all going down a road of embedding far more tensor processing units into every chip. Google is well known for something called the TPU, tensor processing units, their chip architecture. But they're one of many vendors that are going down that road. The bottom line is the chipset itself is being dedicated to and optimized for the core function that CPUs, and really GPU technology, and even ASICs and FPGAs were not traditionally geared to do, which is just deep learning at a high speed, many cores, to do things like face recognition and video and voice recognition freakishly fast, and really, that's where the market is going in terms of enabling underlying chipset technology. What we're seeing is that, what's likely to happen in the chipsets of the year 2020 and beyond, is they'll be predominantly tensor core processing units, but they'll be systems on a chip that, and I'm just talking about the future, not saying it's here now, systems on a chip that include a CPU to manage a real time OS, like a real time Linux or whatnot, along with highly dense tensor core processing units. And in this capability, these will be low power chips, and low cost commodity chips that'll be embedded in everything. Everything from your smart phone, to your smart appliances in your home, to your smart cars and so forth. Everything will have these commodity chips. 'Cause suddenly every edge device, everything will be an edge device, and will be able to provide more than augmentation, automation, all these things we've been talking about, in ways that are not necessarily autonomous, but can operate with a great degree of autonomy to help us human beings live our lives in an environmentally contextual way at all points in time. >> Alright, Jim, let me cut you off there, because you said something interesting, a lot more autonomy. George, what does it mean that we're going to dramatically expand the number of devices that we're using, but not expand the number of people that are going to be in place to manage those devices? When we think about applying software technologies to these different classes of data, we also have to figure out how we're going to manage those devices and that data.
What are we looking at from an overall IT operations management approach to handling a geometrically greater increase in the number of devices and the amount of data that's being generated? (Jim starts speaking) >> Peter: Hold on, hold on, George? >> There are a couple of dimensions to that. Let me start at the modeling side, which is, we need to make data scientists more productive, or we need to push out to a greater, we need to democratize the ability to build models, and again, going back to the notion of simulation, there's this merging of machine learning and simulation where machine learning tells you correlations in factors that influence an answer, whereas the simulation actually lets you play around with those correlations to find the causations, and by merging them, we make it much, much more productive to find the models that are both accurate and to optimize them for different outcomes. >> So that's the modeling issue. >> Yes. >> Which is great. Now as we think about some of the data management elements, what are we looking at from a data management standpoint? >> Well, and this is something Jim has talked about, but, you know, we had DevOps for joining the, essentially merging the skills of the developers with the operations folks, so that there's joint responsibility for keeping stuff live. >> Well what about things like digital twins, automated processes, we've talked a little bit about breadth versus depth, ITOM. What do you think? Are we going to build out, are all these devices going to reveal themselves, or are we going to have to put in place a capacity for handling all of these things in some consistent, coherent way? >> Oh, okay, in terms of managing. >> In terms of managing. >> Okay. So, digital twins were interesting because they pioneered, or they made well known, a concept called, essentially, a semantic network, or a knowledge graph, which is just a way of abstracting what is a whole bunch of data models and machine learning models that represent the structure and behavior of a device. In IIoT terminology, it was like an industrial device, like a jet engine. But that same construct, the knowledge graph and the digital twin, can be used to describe the application software and the infrastructure, both middleware and hardware, that makes up this increasingly sophisticated network of learning and inferencing applications. And the reason this is important, it sounds arcane, the reason it's important is we're building now vastly more sophisticated applications over great distances, and the only way we can manage them is to make the administrators far more productive. The state of the art today is alerts on the performance of the applications, and alerts on, essentially, the resource intensity of the infrastructure. By combining that type of monitoring with the digital twin, we can get an essentially much higher fidelity reading on when something goes wrong. We don't get false positives. In other words, if something goes wrong, it's like the fairy tale of the pea underneath the mattress, all the way up, 10 mattresses, you know it's uncomfortable. Here, it'll pinpoint exactly what goes wrong, rather than cascading all sorts of alerts, and that is the key to productivity in managing this new infrastructure. >> Alright guys, so let's go into the action item around here.
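One small illustration of George's digital twin point before the action items: representing the application, its middleware, and its hardware as a graph lets an alert be traced to the one underlying component at fault instead of cascading upward. The topology below is a made-up toy, not any vendor's twin model.

```python
# Toy knowledge graph: each component lists what it depends on.
DEPENDS_ON = {
    "checkout-service": ["payments-api", "app-server-7"],
    "payments-api": ["db-cluster-2"],
    "app-server-7": ["rack-12-switch"],
    "db-cluster-2": ["rack-12-switch"],
    "rack-12-switch": [],
}

def root_causes(alerting_component, unhealthy):
    """Walk the twin graph from an alerting component down to the deepest unhealthy dependencies."""
    causes = set()
    stack = [alerting_component]
    while stack:
        node = stack.pop()
        bad_deps = [d for d in DEPENDS_ON.get(node, []) if d in unhealthy]
        if not bad_deps and node in unhealthy:
            causes.add(node)          # nothing below it is unhealthy: likely the real culprit
        stack.extend(bad_deps)
    return causes

# One switch failure makes several layers look sick; the graph pinpoints the pea under the mattresses.
unhealthy = {"checkout-service", "app-server-7", "db-cluster-2", "rack-12-switch"}
print(root_causes("checkout-service", unhealthy))   # {'rack-12-switch'}
```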
What I'd like to do now is ask each of you for the action item that you think users are going to have to apply or employ to actually get some value and start down this path of utilizing machine intelligence across these different tiers of data to build more complex, manageable application infrastructures. So, Jim, I'd like to start with you, what's your action item? >> My action item is related to what George just said: machine learning modeled centrally, deployed in a decentralized fashion, and use digital twin technology to do your modeling against device classes in a more coherent way. There's not one model that will fit all of the devices. Use digital twin technology to structure the modeling process to be able to tune a model to each class of device out there. >> George, action item. >> Okay, recognize that there's a big difference between edge and cloud, as Jim said. But I would elaborate, edge is automated, low latency decision making, extremely data intensive. Recognize that the cloud is not just where you trickle up a little bit of data, this is where you're going to use simulations, with a human in the loop, to augment-- >> System wide, system wide. >> System wide, with a human in the loop, to augment how you evaluate new models. >> Excellent. Neil, action item. >> I would have people start on the right side of the diagram and start to think about what their strategy is and where they fit into these technologies. Be realistic about what they think they can accomplish and do the homework. >> Alright, great. So let me summarize our meeting this week. This week we talked about the role that the three tiers of data that we've described will play in the use of machine intelligence technologies as we build increasingly complex and sophisticated applications. We've talked about the difference between primary, secondary, and tertiary data. Primary data being the immediate experience of sensors, analog being translated into digital, about a particular thing or set of things. Secondary being the data that is then aggregated off of those sensors for business event purposes, so that we can make a business decision, often automatically, down at an edge scenario, as a consequence of signals that we're getting from multiple sensors. And then finally, tertiary data, that looks at a range of gateways and a range of systems, and is considering things at a system wide level, for modeling, simulation and integration purposes. Now, what's important about this is that it's not just better understanding the data and not just understanding the classes of technologies that we use, though that will remain important. For example, we'll see increasingly powerful, low cost, device specific, ARM-like processors pushed into the edge, and a lot of competition at the gateway, or at the secondary data tier. It's also important, however, to think about the nature of the allocations and where the work is going to be performed across those different classifications, especially as we think about machine learning, machine intelligence, and deep learning. Our expectation is that we will see machine learning being used on all three levels, where machine intelligence is being used against all forms of data to perform a variety of different work, but the work that will be performed will be naturally associated and related to the characteristics of the data that's being aggregated at that point. In other words, we won't see simulations, which are characteristics of tertiary data, George, at the edge itself.
We will, however, see edge devices often reduce significant amounts of data from perhaps a video camera or something else to make relatively simple decisions that may involve complex technologies, to allow a person into a building, for example. So our expectation is that over the next five years we're going to see significant new approaches to applying increasingly complex machine intelligence technologies across all different classes of data, but we're going to see them applied in ways that fit the patterns associated with that data, because it's the patterns that drive the applications. So our overall action item: it's absolutely essential that businesses consider and conceptualize what machine intelligence can do, but be careful about drawing huge generalizations about what the future of machine intelligence is. The first step is to parse out the characteristics of the data, driven by the devices that are going to generate it and the applications that are going to use it, and understand the relationship between the characteristics of that data and the types of machine intelligence work that can be performed. What is likely is that an impedance mismatch between data and expectations of machine intelligence will generate a significant number of failures that often will put businesses back years in taking full advantage of some of these rich technologies. So, once again, we want to thank you this week for joining us here on the Wikibon weekly research meeting. I want to thank George Gilbert, who is here in the CUBE Studio in Palo Alto, and Jim Kobielus and Neil Raden, who were both on the phone. And we want to thank you very much for joining us here today, and we look forward to talking to you again in the future. So this is Peter Burris, from the CUBE's Palo Alto Studio. Thanks again for watching Wikibon's weekly research meeting. (electronic music)
SUMMARY :
the characteristics of the data is going to have an impact that take place at the edge, the data workloads. that are going to indicate what types about the same time as airplanes start to flap their wings. It may be that the data that's being generated at the edge to not touch it or handle it any more times than we have to. and optimizing the model for different outcomes or crucial feature of the training and the server is going to do many of the things and the role that the gateway plays, is that it acquires data from the edge devices, and geez, let's ensure that we keep our options open that the historical aspect of it or we need to do an optimization of some form So there has to be adjustments for that. has argued pretty forcibly, that over the next few years in fact, the latest generation of Invidia's architecture What is it? in the chipsets of the year 2020 and beyond, that are going to be in place to manage those devices. that are both accurate and to optimize them Now as we think about some of the data management elements, essentially merging the skills of the developers and that is the key to productivity in managing the action item that you think to structure the modeling process to be able to tune a model Recognize that the cloud is not just where you trickle up to augment how you evaluate new models. Neil, action item. and do the homework. So our expectation is that over the next five years
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Jim | PERSON | 0.99+ |
George Gilbert | PERSON | 0.99+ |
Jim Kobielus | PERSON | 0.99+ |
Peter Burris | PERSON | 0.99+ |
Neil | PERSON | 0.99+ |
George | PERSON | 0.99+ |
Neil Raden | PERSON | 0.99+ |
Peter | PERSON | 0.99+ |
David Floyer | PERSON | 0.99+ |
David | PERSON | 0.99+ |
Jim Kovielus | PERSON | 0.99+ |
David Foyer | PERSON | 0.99+ |
October 20, 2017 | DATE | 0.99+ |
10 gigabytes | QUANTITY | 0.99+ |
last week | DATE | 0.99+ |
10 mattresses | QUANTITY | 0.99+ |
10,000 miles | QUANTITY | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
CUBE | ORGANIZATION | 0.99+ |
This week | DATE | 0.99+ |
Invidia | ORGANIZATION | 0.99+ |
Wikibon | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
Palo Alto, California | LOCATION | 0.99+ |
second | QUANTITY | 0.99+ |
two extremes | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
two | QUANTITY | 0.99+ |
Linux | TITLE | 0.99+ |
this week | DATE | 0.99+ |
first step | QUANTITY | 0.99+ |
both | QUANTITY | 0.98+ |
one model | QUANTITY | 0.98+ |
each class | QUANTITY | 0.98+ |
three tiers | QUANTITY | 0.98+ |
each | QUANTITY | 0.98+ |
24 | QUANTITY | 0.98+ |
one | QUANTITY | 0.96+ |
a mile | QUANTITY | 0.96+ |
more than a dozen competitors | QUANTITY | 0.95+ |
IRI | ORGANIZATION | 0.95+ |
Wikibon | PERSON | 0.94+ |
seven | QUANTITY | 0.94+ |
first | QUANTITY | 0.92+ |
CUBE Studio | ORGANIZATION | 0.86+ |
2020 | DATE | 0.85+ |
couple dimensions | QUANTITY | 0.79+ |
Palo Alto Studio | LOCATION | 0.78+ |
single business event | QUANTITY | 0.75+ |
tertiary tier | QUANTITY | 0.74+ |
last several years | DATE | 0.71+ |
years | DATE | 0.7+ |
twin | QUANTITY | 0.64+ |
Matt Zilli, Marketo - CUBEConversation - #theCUBE
(upbeat music) >> Hey welcome back everybody. Jeff Frick here with theCUBE. We are at the Palo Alto Studio to have a Cube conversation today. Conference season's slowing down a little bit so we're going to do Cube conversations in the studio so it's a little bit different format, not in the crazy madness of a conference but we're really excited to have our next guest on, he's Matt Zili, he's a VP of Product Marketing at Marketo, I think first time on theCUBE, so welcome Matt. >> Thank you very much, great to be here. >> Absolutely, so you're Marketo, Marketo's been in the marketing, automation, digital, engagement space for a long time but you guys are really starting to change the way you think about things, about engagement. Engagement is this elusive, I don't want to say Unicorn 'cause the term is overused but it is right so how do we get that deep engagement with customers, and how the companies really establish that, and that's something you guys are trying to do more work with, and help enable to do better, so how do you think about when you think about engagement? >> Yeah, absolutely, I think it's what we realized these days is the currency that every company needs to have, has to have, because if you look back over the last 10 or 15 years, what the digital world has gotten us, as companies, is volume. We can go blast a message out all over the world, and just hope that one small percentage point of those folks will actually engage with us, and that's just not going to work anymore, we're all too familiar, we kind of black out, how to ignore all of those messages, and so the real key movement forward is how the companies really deeply engage with their audience, with their customers, with their potential customers, and we're trying to help companies do that. >> The other thing that's really changed is the avenues, the venues, the potential touch points have grown so much, where there used to be in the mailbox outside your house then there was email was dominant for so long, but now it's Snapchat and Facebook and Linkedin and Twitter and all these things, which A, are so omni-channel but B, give a level of measurement opportunity that you never really had before. So how is that changing the way that marketeers think about, knowing, connecting with touching regarding a campaign etc. but again, engaging. >> Yeah, it's a great awesome opportunity on one hand, and this unbelievably significant challenge on the other, where on the positive side, we have all of these avenues for engagement, where we can try to have a connection with somebody when they're on Facebook, when they're thinking more about how a brand might intercept with their personal life, than they ever do elsewhere and that's a great opportunity for marketers 'cause in the digital world, you can measure the effectiveness of all of these things, but on the flip side, as you alluded to, we've got a new opportunity to do that everyday, some new channel, some new touch point comes up and so, as organizations, we have to get really good at managing this complexity, one, just to make sure we're in the right places, and two, to make sure that we've got a uniform, consistent, story that's being told, across all these places and we're not sending a message out on Snapchat that misaligns with what's on our website or who our brand is. 
>> And it's interesting we had Karen on the other day, talking about the concept of adapted engagement, she had her three As of engagememt and really, this concept of the context matters, context has always mattered, not only the channel, but the timing and just because you can, and just because you've got this massive kind of technology, can and if you will, that you can send a lot of things, a lot of ways, a lot of time, you can't, just 'cause you can, doesn't mean, you can, so when you think about adaptive and trying to be kind of responsive and in sync with the opportunity and the offer as well as the appropriateness of the timing, as well as the match with the individual at the other end of that channel, what are some of the factors that people should think about, that companies should think about is their weighing, you know I cannot just spray and pray, 24/7, that's going to just saturate and kill people. >> I think I'm going to steal your technology, cannon analogy 'cause that's right, it's a cannon that could end up killing people or certainly, killing a brand, and so the way we encourage companies to think about it today, is you have to bring together all of these different insights you might have about a potential person, customer, potential customer, and figure out how to use that, to provide something of value, to them and that's going to adapt over time, we don't have perfect visibility yet into everyone we might want to engage, we learn over time and 10, 20 years ago, the best we could do was try and understand someone's demographics and used that to make a best educated guess, a guess about what they might want. That doesn't really work all that well anymore, when we can now think about all this behavioral data, and when we learn what somebody's looking at on the Web, or on Facebook, or what they're engaging with on Snapchat, that's way more insightful for us to make an educated guess about what might provide somebody value, as the next thing we put in front of them and so, it really is this concept of just constantly adapting the experience, we're delivering to people and it should just get better and better and better over time, as we learn more about what they want and what they're looking for. >> Right, one of the interesting things we see over and over a lot of tech events, is the concept that you have your data in-house, but then there's all this public data and other sources of data, and really to grab that competitive advantage, you need to combine the proprietary data, as well as the public data, and then combine them using the algorithms to get the insight, that maybe your competitor doesn't have, how are you seeing this actually executed in the field, is it easy to do, hard to do, still early innings for people trying to figure it out or is it relatively mature in terms of people using all this different data? 
>> Yeah, I think there's no question people are using data, more effectively and using just more data now than ever before but it hasn't yet manifested itself in a way where they're using it to deliver the best perfect thing to every single customer, so there's still a long way to go, and some of the things that we see, hold people back is, you mentioned we've got all these different touch points and channels popping up, the scope of how data is expanding is still going far faster than we can even keep up with, and in many organizations, all of those pieces of data will sit in different silos, so even for the companies that have managed to bring it all in-house and trying to get it at least inside the walls of their company, it still probably doesn't sit in one place that will allow somebody to actually gain insights from it and then use those insights to do something and so, I think that's where we see the next few years are going to take us with a combination of AI technologies that can do a lot of the heavy lifting, of looking at the data and gleaning insights from it, to getting them at somebody's fingertip whether it's a marketer or whether it's somebody driving customer experience so they naturally use it to do something informed for a specific customer. >> Right, the other kind of concept we hear over and over and over is kind of the segmentation of one, and the industry that I think is the most interesting to watch on this is insurance, car insurance, 'cause it's easy right, 'cause it used to be your age, your sex and if you were married, and the maybe did you have a red car, and maybe did you travel more than X number of miles, but now, you know with the progressive thing you stick in the dashboard, or let's face it, your cellphone, they can know a lot more, if you roll through red lights, do you spend too much time on your couch, do you tend to drive at 2.30 a.m. on Saturday night and see other things that can really determine ultimately what your rate is. On the other hand, at some point in time, if you're a big company, the overhead of managing to that level and to segment your offerings to that level maybe exceed the value of doing that, so as you see kind of people narrowing in, honing in on their segmentation and execution, what are some of the lessons learnt about, how tight can you get that, can you have infinite number of skews to provide a slightly different flavor of your service to any number of consumers or is there some kind of happy balance that you see the world kind of moving towards? >> Yeah, I think the biggest point we make is there's no excuse for not thinking that way today, there's no excuse for making strides towards delivering on an audience of one, or customer of one. I think it varies pretty wildly by business whether you can do that in your core operations, whether an insurance company can really come up with the right package price, product etc. for that audience of one, that's a big problem certainly, but at least when we think about how we engage with our customers, there's no excuse not to think about it that way today 'cause the very least we all have at our fingertips, the technologies that will let us choose how to engage with someone, what channel to engage with them, what timing, cadence to engage with them and so we can make progress even if we're not necessarily at the point of using all this information to deliver one perfect message to one person at that exact moment. There's a lot of work to be done today to get there. 
>> Right, the other piece that's interesting is advocacy, and again Karen talked about that as well in her three As of measuring engagement and it's a really different type of relationship to have with a customer that's not necessarily so transactional but much more relationship much less about this transaction and much more about the lifetime value the customer and again an example we shared with Karen, is Harley Davidson is just an iconic brand that people have such a connection to, that they will tattoo it on their body which if you're a brand manager, you're going to say, well, you know that's phenomenal, so you would see advocacy in companies wanting to change the nature of the relationship with the people that buy and use their services, what are some of the best practices you see, what are some of the ways people are trying to flip the bit if you will from a transactional to a relationship type of engagement. >> Yeah, I think there's certainly those iconic brands and products that do a lot of the heavy lifting for companies to do that effectively. Harley Davidson starts with the product, starts with the motorcycle and people love that but for a lot of companies that maybe don't drive that level of passion around the product itself, that's marketing's opportunity to go in and capture that and so I think what we see the most successful and forward thinking organizations do today is think about the entire life cycle that way with an eye towards advocacy because the thing that not everybody has capitalized on today is whether we like it or not, all of our customers have a megaphone and that we know and in a lot of ways, we try to manage the negative sides of that to make sure that the negative messages aren't getting out there and avoid that but we haven't used it enough to make sure we use that to drive the positive messages out in the market and so when companies kind of shift from the transactional approach from the, I just got to acquire new customers or I've got to get these customers to buy more, to a world where they're really thinking about a group of people that could really be advocates, almost on behalf of the brand, almost like their working for the brand to do that and set up a set of initiatives to drive that, it leads to 10, 20, 30 X yields down the road, an ROI down the road because everybody does have that opportunity to be an influencer today and brands can really harness that. >> So do you think the essence of that is brands finally figuring out that they no longer have exclusive rights to control the message, I forget the tweeter, of a meme somewhere you know that your brand is no longer what you're telling people it is, your brand is now what people are telling you what it is and as you said, people didn't have the giant microphones right, they had letter to the editor, who sees it compared to literally worldwide casting ability of a message and if you create it craftily and with a little bit of humor it might pick up and go viral so is it a reaction to that or do people finally figure out that it's seems so stupid to me, obviously it's always easier to sell more chiggers than clients than to get new so why suddenly is advocacy getting the bright spotlight when this should have been something that people were executing all along? 
>> Yeah, I think it's like most things, it's not just marketers, customer success, everyone's understood the problem and the opportunity for a long time and social is an area that has been around long enough that I think everybody understands it's really a question of what can be done to execute on it and if 80% of marketing budgets were self spent on acquiring new customers, it's no surprise that they're not executing on it, all that effectively and so I think the transition we're going through right now is brands are starting to re-align their dollars, leverage the new technologies and point them at this area of advocacy, as much as their pointing them to other areas, versus maybe they were just of lesser importance years ago so I think everybody's known it for awhile, but they're now just finally acting on it. >> And of course the other thing now that's so different than it used to be in the past especially in large broadcast media you know people measured audience but you really couldn't measure uptake and write the classical saying, I know I'm wasting 50% of my marketing budget, I just don't know which 50% it is. The ability to measure now is higher than we've ever had, the ability to A-B task or A-B-C-D-E-F-G task is like never before at the same time again referencing our conversation before you still have to have a narrative, you still have to be kind of a personality as a brand, or else you'll just get wiped out right so it's a weird dichotomy of the soft and kind of the hard elements of going to market. >> It's exactly it, if you look at what a lot of companies have done in the last 10, 12 years is digital has exploded and certainly beyond even 12 years, there's been a shift from the emotive storytelling side of marketing over to the data driven operational side of marketing, the idea of I can send out a million emails and I know 100,000 people will open them and some subset of those will click on them and that's an important piece of marketing today certainly but I already know the needle has swung too far. When we think about the engagement economy, we think about the core of this is being able, a brand being able to engage deeply on an emotional level with their customer and audience, it requires a brand narrative, a brand story that's relevant to them, it's rolled out appropriately to them that's shared across all these channels with them 'cause if you don't have that and all you have is the operational side, you'll never be the Harley Davidson iconic brand and has that emotional connection with their customers. >> Right, okay so before I let you go, I know you guys have been doing, did a research study that's going to be coming out shortly, I wonder if you can share, preview any kind of the highlights, in terms of what was the purpose of that first off and what are some of the preliminary findings that you could share before the actual data comes out. 
>> Yeah, absolutely, it's been really insightful for us, what we did, is we went out and surveyed a bunch of consumers and buyers, and a bunch of marketers, and we tried to understand is the story the same across both sides, what people value, what they want, what marketers were delivering, what they think they're delivering so it's been really insightful to understand what the world looks like today when it comes to engagement and while there are a lot of insights, I think the thing that everybody has acknowledged is how important this is, how critical it is in this economy to make sure that you do have that emotional connection with the brand, if you're a consumer, somebody you want to do business with and marketers and brands acknowledge how important it is to have that with their customers, where the gaps are, is how it's being actualized, is it actually happening, the beliefs from some companies is that they're doing this incredibly effectively, and yet the feedback from the customers is that they're not, and so that divide is what we have to resolve, as brands and as companies over the next few years, otherwise someone will come in and disrupt us and take advantage of that. >> So have you found any good objective measurements that people should, I mean obviously, there's not one golden metric, we would already have known it, but what are some of the things that marketers or companies should be looking at, to see if they're doing a better job or doing a good job? >> Yeah, I think that you know without question, looking at your competitive landscape and talking to your customers in a way that can really get you that feedback, you have to seek the answers to find out how good of a job you're doing versus looking at the efforts you're putting in place, and I think that even in itself can be a challenge for a lot of companies is to really get out there and try and get an objective understanding of whether they're doing it or not, and I think when you start there, in almost every brands' case, they're going to find surprises about how their customers really feel about them, how their potential customers really feel about them and identify the opportunities to close that gap. And then of course wandering through the crazy landscape of technology to try and figure out the right things that allow them to close that gap. The good news is there's shortage of options to do that today. >> And don't send 100 question questionnaire, oh my god, I just got one from JetBlue, was happy to fill it out, I lasted I dunno, a lot of questions, I thought and then I just ran out of gas, and c'mon, it's a new world order. >> And so you wouldn't put that in a list of engaging tactics >> A little trade right, give me a little value, I'll give you a little info, value-trade, value-trade, don't get to the whole multi-variate -- >> People have seconds, you can get them for seconds, you're not going to get them for minutes or hours. >> Alright, Matt we look forward to the research coming out and again, thanks for taking a few minutes out of your day. >> Appreciate it, thanks for having me. >> A pleasure, Matt Zili, he's from Marketo, I'm Jeff Frick, you're watching theCUBE, thanks for watching, we'll see you next time.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Jeff Frick | PERSON | 0.99+ |
Matt Zili | PERSON | 0.99+ |
Karen | PERSON | 0.99+ |
Matt | PERSON | 0.99+ |
Matt Zilli | PERSON | 0.99+ |
Harley Davidson | ORGANIZATION | 0.99+ |
80% | QUANTITY | 0.99+ |
50% | QUANTITY | 0.99+ |
JetBlue | ORGANIZATION | 0.99+ |
three | QUANTITY | 0.99+ |
100,000 people | QUANTITY | 0.99+ |
10 | QUANTITY | 0.99+ |
one person | QUANTITY | 0.99+ |
12 years | QUANTITY | 0.99+ |
100 question questionnaire | QUANTITY | 0.99+ |
Marketo | ORGANIZATION | 0.99+ |
20 | QUANTITY | 0.98+ |
both sides | QUANTITY | 0.98+ |
Marketo | PERSON | 0.98+ |
today | DATE | 0.98+ |
a million emails | QUANTITY | 0.97+ |
two | QUANTITY | 0.97+ |
first time | QUANTITY | 0.97+ |
ORGANIZATION | 0.97+ | |
one golden metric | QUANTITY | 0.96+ |
first | QUANTITY | 0.96+ |
one | QUANTITY | 0.94+ |
30 X | QUANTITY | 0.94+ |
ORGANIZATION | 0.93+ | |
Palo Alto Studio | LOCATION | 0.93+ |
ORGANIZATION | 0.92+ | |
years | DATE | 0.92+ |
one place | QUANTITY | 0.91+ |
Snapchat | ORGANIZATION | 0.9+ |
Saturday night | DATE | 0.87+ |
theCUBE | ORGANIZATION | 0.85+ |
every single customer | QUANTITY | 0.82+ |
lot | QUANTITY | 0.8+ |
next few years | DATE | 0.8+ |
2.30 a.m. on | DATE | 0.79+ |
Cube | COMMERCIAL_ITEM | 0.79+ |
one perfect message | QUANTITY | 0.78+ |
one small percentage | QUANTITY | 0.78+ |
bunch | QUANTITY | 0.77+ |
a lot of ways | QUANTITY | 0.77+ |
minutes | QUANTITY | 0.72+ |
bunch of marketers | QUANTITY | 0.71+ |
10, 20 years | DATE | 0.71+ |
things | QUANTITY | 0.7+ |
15 years | QUANTITY | 0.69+ |
lot of insights | QUANTITY | 0.67+ |
time | QUANTITY | 0.6+ |
last 10 | DATE | 0.59+ |
a lot | QUANTITY | 0.59+ |
seconds | QUANTITY | 0.54+ |
tech | QUANTITY | 0.53+ |
Tyler Bell - Google Next 2017 - #GoogleNext17 - #theCUBE
[Narrator] - You are a CUBE Alumni. (cheerful music) Live, from Silicon Valley, it's theCUBE. Covering Google Cloud Next '17. (rhythmic electronic music) >> Welcome back everyone. We're live here in the Palo Alto Studio for theCUBE, our new 4500 square foot studio we just moved into a month and a half ago. I'm John Furrier here, breaking down two days of live coverage in-studio of Google Next 2017. We have reporters and analysts in San Francisco on the ground getting all the details, we've had some call-ins, and we're also going to call in at the end of the day to find out what the reaction is to the news, the keynotes, and all the great stuff on Day one and certainly Day two, tomorrow, here in the studio as well as in San Francisco. My next guest is Tyler Bell, good friend, industry guru, IOT expert. He's been doing a lot of work with IOT but also has a big data background, and he's been on theCUBE before. Tyler, great to see you and thanks for coming in today. >> Thanks, great to be here. >> So, data has been in your wheelhouse for a long time. You're a product guy, and the cloud is the future, it's happening big-time. Data, the Edge, with IOT, is certainly part of this network transformation trend. And certainly now, machine-learning and AI is the big buzzword. AI, kind of a mental model. Machine-learning, using the data. You've been at the front end of this for years, with data at Factual and Mapbox, the other companies you've worked for. Now you have data sets. So before it was like a ton of data, and now it's data sets. And then you've got the IOT Edge, a car, a smart city, a device. What's your take on the data intersecting with the cloud? What are the key paradigms that are colliding together? >> Yeah, I mean, the reason IOT is so hot right now is really 'cause it's connecting a number of things that are also hot. So, together, you get this sort of conflagration of fires, technology fires. So, on one side you've got massive data sets, just huge data sets about people, places and things that allow systems to learn. On the other end, you've got, basically, large-scale computation, which isn't only just available, it's actually accessible and it's affordable. Then, on the other end, you've got massive data collection mechanisms. So, this is anything from the mobile phone that you hold in your pocket, to a LIDAR, a laser-based sensor, on a car. So, this combination of massive, hardware-derived data collection mechanisms, combined with a place to process it on the cloud, and to do so affordably, in addition to all the data, means that you get this wonderful combination of the advent of AI and machine-learning, and basically the development of smart systems. And that's really what everybody's excited about. >> It's kind of intoxicating to think about. From a computer science standpoint, this is the nirvana we've been thinking about for generations, and with the compute now available, it's just kind of coming together. What are the key things that are merging in your mind? 'Cause you've been doing a lot of this big data stuff. When I say big, I mean large amounts, large-scale data. But as it comes in, as they say, the future's here, but it's not evenly distributed. You could make that same argument for data. Data's everywhere, but it's not evenly distributed. So, what are some of the key things that you see happening that are important for people to understand with data, in terms of using it, applying it, commercializing it, leveraging it?
>> Yeah, what you see, or what you have seen previously, is the idea of data, in many people's minds, has been a database, or it's been sort of a CSV file of rows and columns, and it's been this sort of fixed entity. That's known as structured data. What you're seeing now is the advent of data analytics that allow people to understand and analyze loose collections of data, and begin to sort of categorize and classify content in ways that people haven't been able to previously. And so, whereas you used to have just a database of sort of all the places on the globe, or a whole bunch of people, right now you can have information about, say, the images that the camera sensors on your car see. And because the systems have been trained on how to identify objects or street signs or certain behaviors and actions, it means that your systems are getting smarter. And so what's happening here is that data itself is driving this trend, where hardware and sensors, even though they're getting cheaper and they're getting increasingly commoditized, are getting more intelligent. And that intelligence is really driven, fundamentally, by data. >> I was having a conversation yesterday at Stanford, there was a conference going on around bias and data. Algorithms now have bias, gender bias, male bias, but it brings up this notion of programmability, and one of the things that some of the early thinkers around data, including yourself, and also if we extend that out to IOT, is how do you make data available for software programs, for the learning piece? Because that means that data's now an input into the software development process, whether that's algorithms being developed on the fly in the future, or data being part of the software development kit, if you will. Is that a fantasy or is that gettable, is that in reach? Is it happening? Making data part of that agile process, not just a call to a database? >> Exactly. A lot of the most valuable assets now are basically what are called labeled data sets, where you can say that this event or this photo or this sound has been classified as such. And so it's the bark of a dog or the ring of a gunshot. And those labeled data sets are hugely valuable in actually training systems to learn. The other thing is, if you look at it from, say, AV, autonomous vehicles, which have a lot in common with IOT, the data set is less about a specific sort of structured or labeled event or entity. Instead, it's doing something like, there's one company where you can put your camera on the dashboard of your car and then you drive around, and all this does is record the images and record which way your car goes, and that's actually collecting and learning data. And so, that kind of information is being used to teach cars how to drive and how to react in different circumstances. And so, on one hand, you've got this highly structured, labeled data; on the other hand, it's almost machine behavioral data, where to teach a car how to drive, cars need to understand what that actually entails. >> Yeah, one of the things we talked about on Google Next earlier in the day, in a couple of earlier segments, I was talking about, and I didn't mean this as a criticism of the enterprise, but I was just saying, Google might want to throttle back their messaging or their concepts, because the enterprise kind of works at a different pace.
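Bell's "labeled data set" point, an example paired with a human-assigned label that a system learns from, can be pictured with a toy sketch. The feature values, labels, and the deliberately naive nearest-neighbor classifier below are all invented for illustration:

```python
# A toy illustration of a labeled data set: each example is a feature vector
# plus a label a person assigned, and a model learns from that pairing.
# The features and labels here are made up.

import math

labeled_data = [
    # (feature vector, label) -- e.g. crude audio features and what a person said the sound was
    ((0.9, 0.1), "dog_bark"),
    ((0.8, 0.2), "dog_bark"),
    ((0.1, 0.9), "gunshot"),
    ((0.2, 0.8), "gunshot"),
]

def classify(sample):
    """Label a new sample by its nearest labeled neighbor (1-NN)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(labeled_data, key=lambda pair: dist(pair[0], sample))[1]

print(classify((0.85, 0.15)))  # -> "dog_bark"
```

Real training pipelines replace the hand-rolled neighbor search with proper models, but the shape of the asset is the same: many (example, label) pairs.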
Google is just this high-energy, I won't say academic, but they're working on cutting-edge stuff. They have things like Maps, and they're doing things that are just really off the charts, technically. It's just great technical prowess. So, there's a disconnect between enterprise stuff and what I call 'pure' Google Cloud. The question that's now on the table is, with the advent of IOT, industrial IOT in particular, enterprises now have to be smarter about analog data, meaning, like, the real world. How do you get the data into the cloud from a real-world perspective? Do you have any insight on that? It's something that's hard to kind of get, but you mentioned that camera on the car, you're essentially recording the world, so that's the sky, that's not digitized. You're digitizing an analog signal. >> Yeah, that's right. I think I'd have two notes there. The first is that everything that's going on that's exciting is really at this nexus between the real world that you and I operate in, and how that's captured and digitized and actually collected online, so it can be analyzed and processed and then effected back in the real world. And so, when you hear about IOT and cars, of course there are sensors, which basically do a read-type analysis of the real world, but you also have effectors, which change it, and servos, which turn your tires or affect the acceleration or the braking of a vehicle. And so, all these interesting things that are happening now, and it really kicked off, of course, with the mobile phone, is how the online, data-centric, electric world connects with the real world. And really, all that information is being collected through an explosion of sensors. Because the mobile phone supply chains are making cameras, and barometers, and magnetometers, and all of these things are now so increasingly inexpensive that when people talk about sensors, they don't talk about a one-thousand-dollar sensor that's designed to do one thing; instead, there's thousands of $1 sensors. >> So, you've been doing a lot of work with IOT, almost the past year, you've been out in the IOT world. Thoughts on how the cloud should be enabled or set up for ingesting data, or be architected properly for IOT-related activities, whether it's edge data stores or edge data. I mean, we have little things as boring as backup and recovery that are impacted by the cloud. I can imagine that the IOT world, as it collides with IT, is going to have some reinvention and reconstruction. Thoughts on what the cloud needs to do to be truly IOT-ready? >> Yeah, there's some very interesting things that are happening here, and some of them seem to be in conflict with each other. So, the cloud is a critical part of the entire IOT stack, and it really goes from the device, the sensor, all the way to the cloud. And what you're getting is providers, including Google and Amazon and SAP, and there's over 370, at last count, IOT platform providers, which have basically taken their particular skill set, adjusted it and tweaked it, and now say that they have an IOT platform. And in traditional cloud services, the distinguishing features are things like being able to record the digital state of sensors and devices, sort of 'shadow' states, and an increased focus on streaming technology over MapReduce batch technology, which you got in the last 10 years through the big data movement, and the conversations that you and I have had previously.
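The "shadow state" Bell mentions is essentially a cloud-side record of each device's last reported state alongside the state you want it in, with the difference telling the device what to change. A minimal, vendor-neutral sketch, with made-up field names and values rather than any particular provider's API:

```python
# Toy dict-based device shadow: "reported" is what the device last said about
# itself, "desired" is what the cloud wants, and the delta is the work to do.

def shadow_delta(desired, reported):
    """Return the keys where the desired state differs from what the device last reported."""
    return {k: v for k, v in desired.items() if reported.get(k) != v}

shadow = {
    "desired":  {"firmware": "2.4.1", "sample_rate_hz": 10, "led": "off"},
    "reported": {"firmware": "2.3.0", "sample_rate_hz": 10, "led": "off"},
}

print(shadow_delta(shadow["desired"], shadow["reported"]))
# -> {'firmware': '2.4.1'}  (the device should upgrade next time it connects)
```

In a real platform that delta would be delivered to the device over something like MQTT the next time it connects; here it is just printed.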
So, there is that focus on streaming, there is an IOT-specific feature stack. But what's happening is that so much data is being collected. Let's imagine that you and I are doing something where we're monitoring the environment using cameras, and we have 10,000 cameras out there. And this could be within a vehicle, it could be in a building, or a smart city, or in a smart building. The cloud traditionally accepts data from all these different sources, be it mobile phones or terminals, and collects it, analyzes it, and spits it back out in some kind of consumable format. But what's happening now is that IOT and the availability of these sensors is generating so much data that it's inefficient and very expensive to send it all back to the cloud. And so all of these-- >> And it's physics, too. There's a lot of physics, right? >> Exactly, and with all these cameras sending full raster images and videos back to the cloud for analysis, basically the whole idea of real time goes away; if you have that much data, you can't analyze it. So, instead of the cameras just sending a single dumb raster image back, you teach the camera to recognize something, so it could say "I recognize a vehicle in this picture" or "I recognize a stop sign" or a street light. And instead of sending that image back to be analyzed on the cloud, the analysis is done on the device and then that entity is sent back. And so, the sensor says, "I saw this stop sign at this point, at this time, in my process." >> So this cuts back to the earlier point you were making about the learning piece, and the libraries, and these data sets. Is that kind of where that thread connects? >> Exactly, so to build the intelligence on the device, that intelligence happens on the cloud. And so, you need to have the training sets, and you need to have massive GPUs and huge computational power to instruct. >> Thanks Intel and NVIDIA, we need more of those, right? >> Indeed, and so that's what's happening on the cloud, and then those learnings are basically consolidated and then put onto the device. And the device doesn't need the GPUs, but the device does need to be smart. And so, in IOT, look for companies, especially hardware companies, that understand that the product, as such, is no longer just a device, it's no longer just a sensor, it's an integral combination of device, intelligence platform in the cloud, and data. >> So, let's talk about the reconstruction of some of the value creation or value opportunities with what you just talked about, 'cause if you believe what you just said, which I do believe is right on the money, this new functionality, vis-a-vis the cloud, and the smart apps and learning apps, and software, is going to change the nature of the apps. So, if I'm a cloud provider, like Google or Amazon, I have to then have the power in the cloud, but it's really the app game, it's the software game that we're talking about here. It's the apps themselves. So, yeah, you might have an Atom processor with two cores versus 72 cores and Xeon in the cloud. Okay, that's a device thing, but the software itself, at the app level, changes. Is that kind of what's happening? Where's the real disruption? I guess what I'm trying to get at is, is it still about the apps? >> Yeah, so, I tend not to think about apps much anymore, and I guess, if you talk to some VCs, they won't think about apps much anymore either.
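The pattern Bell sketches, train in the cloud, push the consolidated model down, and have the device ship back compact entities instead of raw frames, can be outlined in a few lines. Everything below is a placeholder: capture_frame(), detect_objects(), the device ID, and the publish() stand-in are invented for the sketch, not a real camera or messaging API:

```python
# Sketch of on-device filtering: run a (cloud-trained) detector locally and
# publish small, structured events rather than streaming full raster images.
# capture_frame() and detect_objects() stand in for real camera and model calls;
# publish() just prints instead of talking to an actual message broker.

import json
import time

DEVICE_ID = "camera-0042"  # hypothetical device name

def capture_frame():
    return b"<raw image bytes>"  # placeholder for a camera read

def detect_objects(frame):
    # Placeholder for an on-device model produced by cloud-side training.
    return [{"label": "stop_sign", "confidence": 0.97}]

def publish(event):
    print(json.dumps(event))  # stand-in for an MQTT/HTTP publish

for _ in range(3):  # a real device would loop continuously
    frame = capture_frame()
    for detection in detect_objects(frame):
        if detection["confidence"] > 0.9:  # only ship high-confidence entities
            publish({"device": DEVICE_ID, "ts": time.time(), **detection})
    time.sleep(0.5)
```

The bandwidth math is the whole point: a few hundred bytes of JSON per event versus megabytes per frame, multiplied across 10,000 cameras.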
Rather, it tends to, you and I still think, and I think so many of us in Silicon Valley still think, of mobile phones as being the end point for both data collection and data effusion. But really, one of the exciting things about IOT now is that it's moving away from the phone. So, it's vehicles, it's the sensors in the vehicles, it's factories and the sensors in the factories, and smart cities. And so, what that means is you're collecting so much more data, but also, you're being more intelligent about how you collect it. And so, it's less about the app and it's much more about the actual intelligence that's baked into the silicon layer, or the firmware, of the device. >> Yeah, I tried to get you on our Mobile World Congress special last week and we were just booked out. But I know you go to Mobile World Congress, you've been there a lot. 5G was certainly a big story there. They had the new devices, the new LG phones, all the sexy glam. But 5G and the network transformation become about more than the device, so you're getting at the point, which is it's not about the device anymore, it's beyond the device, more about the interplay with the network behind it. >> It is, it's the full stack, but also it's not just from one device, like the phone is one human, one device, and then that pipeline goes into the cloud, usually. The exciting thing about IOT, and the general direction that things are moving now, is what can thousands of sensors tell us? What can millions of mobile phones, driven over a 100 million miles of road surface, tell us about traffic patterns or our cities? So, the general trend that you're seeing here is that it's less about two eyeballs and one phone and much more about thousands and millions of sensors, and then how you can develop data-centric products built on that conflagration of all of that data coming in, and how quickly you can build them. >> We're here with Tyler Bell, IOT expert, but also data expert, good friend. We both have kids who play lacrosse together, who are growing up in front of our eyes, so let's talk about them for a second, Tyler. Because they're going to grow up in a world that's going to be completely different, so kind of knowing what we know, and as we tease out the future and connect the dots, what are you excited about in this next generation's shift that's happening? If you could tease out some of the highlights in your mind: as our kids grow up, right, you've got to start thinking about the societal impact of algorithms that might have gender bias, or smart cities that need to start thinking about services for residents, that will require certain lanes for autonomous vehicles, or will cargo (mumbles). Certainly, car buying might shift. They're cloud-native, they're digital-native. What are you excited about, about this future? >> Yeah, I think the thing that's so huge that I have difficulty looking away from it is just the impact, the societal impact, that autonomous vehicles are going to have. And so, really, not only as our children grow up, but certainly their children, our grandchildren, will wonder how in the heck we were allowed to drive massive metal machines, and just anywhere-- >> John: With no software. >> Yeah, with really just our eyeballs and our hands, and no guidance and no safety. Safety's going to be such a critical part of this.
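Bell's earlier point in this exchange, millions of mobile phones driven over 100 million miles of road, is at its core an aggregation job: many small speed reports, grouped by road segment, become a traffic picture. A minimal sketch with invented segment IDs, speeds, and a crude threshold:

```python
# Aggregate many individual speed samples, keyed by road segment, into a
# rough traffic picture. The segment IDs and samples are made up.

from collections import defaultdict
from statistics import mean

# (road_segment_id, observed_speed_kmh) -- in practice, streamed from phones or vehicles
samples = [
    ("segment-101", 62), ("segment-101", 58), ("segment-101", 12),
    ("segment-207", 95), ("segment-207", 91),
]

by_segment = defaultdict(list)
for segment, speed in samples:
    by_segment[segment].append(speed)

for segment, speeds in by_segment.items():
    avg = mean(speeds)
    status = "congested" if avg < 30 else "flowing"  # illustrative threshold
    print(f"{segment}: avg {avg:.1f} km/h ({status}) from {len(speeds)} reports")
```

At real scale this would run as a streaming job over live reports rather than a fixed list, which is the streaming-over-batch shift described above.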
But it's not just the vehicle, although that's what's getting everybody's attention right now. It's really, what's going to happen to parking lots in the cities? How are parking lots and curbsides going to be reclaimed by cities? How will accessibility and safety within cities be affected by the ability to, at least in principle, just call an autonomous vehicle at any time, have it arrive at your doorstep, and take you where you need to go? What does that look like? It's going to change how cars are bought and sold, how they're leased. It's going to change the impact of brands, the significance of, are these things going to be commoditized? But, ultimately, I think, in terms of societal impact, we have, for generations, grown up in an automotive world, and our grandchildren will grow up in an automotive world, but it will be so changed, 'cause it will impact entirely what our cities and our urban spaces look like. >> The good news is, when they take our driver's licenses away when we're 90, we'll at least be able to still get into a car. >> There's places we can go. >> We can still drive. (laughs) >> Exactly, exactly, the time is right. We may not have immortality, but we will be able to get from one place to another in our senility. >> We might be a demographic to buy a self-driving car. Hey, you're over 90, you should buy a self-driving car. >> Well, it'll be more like a consortium. Like you, me, and maybe 30 other people. We'd have access to a car or a fleet. >> A whole new man cave definition to bring to the auto. Tyler, thanks for sharing the insight, really appreciate the color commentary on the cloud, the impact of data. We're here for two days of coverage of Google Next here inside theCUBE. I'm John Furrier, thanks for watching. More coverage coming up after this short break. (cheerful music) (rhythmic electronic music) >> I'm George--
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Amazon | ORGANIZATION | 0.99+ |
Tyler | PERSON | 0.99+ |
Tyler Bell | PERSON | 0.99+ |
San Francisco | LOCATION | 0.99+ |
John Furrier | PERSON | 0.99+ |
John | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
72 cores | QUANTITY | 0.99+ |
two cores | QUANTITY | 0.99+ |
NVIDIA | ORGANIZATION | 0.99+ |
Silicon Valley | LOCATION | 0.99+ |
10,000 cameras | QUANTITY | 0.99+ |
yesterday | DATE | 0.99+ |
two days | QUANTITY | 0.99+ |
LG | ORGANIZATION | 0.99+ |
90 | QUANTITY | 0.99+ |
tomorrow | DATE | 0.99+ |
4500 square foot | QUANTITY | 0.99+ |
Mobile World Congress | EVENT | 0.99+ |
first | QUANTITY | 0.99+ |
one device | QUANTITY | 0.99+ |
one thousand dollar | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
one phone | QUANTITY | 0.99+ |
millions of mobile phones | QUANTITY | 0.99+ |
thousands of sensors | QUANTITY | 0.99+ |
last week | DATE | 0.99+ |
two notes | QUANTITY | 0.99+ |
a month and a half ago | DATE | 0.98+ |
both | QUANTITY | 0.98+ |
Silicone Valley | LOCATION | 0.98+ |
SAP | ORGANIZATION | 0.97+ |
past year | DATE | 0.97+ |
one | QUANTITY | 0.97+ |
Google Next | TITLE | 0.97+ |
George | PERSON | 0.97+ |
30 other people | QUANTITY | 0.96+ |
Day one | QUANTITY | 0.96+ |
Intel | ORGANIZATION | 0.96+ |
one company | QUANTITY | 0.96+ |
IOT | ORGANIZATION | 0.95+ |
Palo Alto Studio | LOCATION | 0.95+ |
over 370 | QUANTITY | 0.95+ |
one thing | QUANTITY | 0.94+ |
one human | QUANTITY | 0.94+ |
over a 100 million miles | QUANTITY | 0.94+ |
single | QUANTITY | 0.94+ |
Day two | QUANTITY | 0.93+ |
millions of sensors | QUANTITY | 0.92+ |
CUBE | ORGANIZATION | 0.88+ |
Edge | TITLE | 0.87+ |
Edge Data | TITLE | 0.87+ |
thousands | QUANTITY | 0.86+ |
Google cloud | TITLE | 0.86+ |
IOT Expert | ORGANIZATION | 0.81+ |
over 90 | QUANTITY | 0.8+ |
one side | QUANTITY | 0.8+ |
agile | TITLE | 0.79+ |
$1 sensors | QUANTITY | 0.76+ |
Maps | TITLE | 0.75+ |
second | QUANTITY | 0.75+ |
Mapbox | ORGANIZATION | 0.74+ |
Factual | ORGANIZATION | 0.74+ |
about two eyeballs | QUANTITY | 0.74+ |
2017 | DATE | 0.7+ |
last 10 years | DATE | 0.68+ |
Next 2017 | TITLE | 0.67+ |
Stanford | LOCATION | 0.64+ |
years | QUANTITY | 0.6+ |
things | QUANTITY | 0.6+ |
IOT | TITLE | 0.57+ |
couple | QUANTITY | 0.56+ |
5G | ORGANIZATION | 0.53+ |