
Breaking Analysis: Supercloud2 Explores Cloud Practitioner Realities & the Future of Data Apps


 

>> Narrator: From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> Enterprise tech practitioners, like most of us, want to make their lives easier so they can focus on delivering more value to their businesses. And to do so, they want to tap best-of-breed services in the public cloud, but at the same time connect their on-prem intellectual property to emerging applications which drive top-line revenue and bottom-line profits. But creating a consistent experience across clouds and on-prem estates has been an elusive capability for most organizations, forcing trade-offs and injecting friction into the system. The need to create seamless experiences is clear, and the technology industry is starting to respond with platforms, architectures, and visions of what we've called the Supercloud. Hello and welcome to this week's Wikibon Cube Insights powered by ETR. In this Breaking Analysis we give you a preview of Supercloud 2, the second event of its kind that we've had on the topic. Yes, folks, that's right, Supercloud 2 is here. As of this recording, it's just about four days away: 33 guests, 21 sessions, combining live discussions and fireside chats from theCUBE's Palo Alto studio with prerecorded conversations on the future of cloud and data. You can register for free at supercloud.world. And we are super excited about the Supercloud 2 lineup of guests. Whereas Supercloud22 in August was all about refining the definition of Supercloud, testing its technical feasibility, and understanding various deployment models, Supercloud 2 features practitioners, technologists and analysts discussing what customers need, with real-world examples of Supercloud, and will expose thinking around a new breed of cross-cloud apps, data apps, if you will, that change the way machines and humans interact with each other. 
Now here's the example we'd use. If you think about applications today, say a CRM system: sales reps, what are they doing? They're entering data into opportunities, they're choosing products, they're importing contacts, et cetera. And sure, the machine can then take all that data and spit out a forecast by rep, by region, by product, et cetera. But today's applications are largely about filling in forms and/or codifying processes. In the future, the Supercloud community sees a new breed of applications emerging where data resides on different clouds, in different data stores: databases, lakehouses, et cetera. And the machine uses AI to inspect the e-commerce system, the inventory data, supply chain information and other systems, and puts together a plan without any human intervention whatsoever. Think about a system that orchestrates people, places and things, like an Uber for business. So at Supercloud 2, you'll hear about this vision along with some of today's challenges facing practitioners. Zhamak Dehghani, the creator of data mesh, is a headliner. Kit Colbert also is headlining. He laid out at the first Supercloud an initial architecture for what that's going to look like. That was last August. And he's going to present his most current thinking on the topic. Veronika Durgin of Saks will be featured and talk about data sharing across clouds and, you know, what she needs in the future. One of the main highlights of Supercloud 2 is a dive into Walmart's Supercloud. Other featured practitioners include Western Union, Ionis Pharmaceuticals, and Warner Media. We've got deep, deep technology dives with folks like Bob Muglia, David Flynn, Tristan Handy of dbt Labs, Nir Zuk, the founder of Palo Alto Networks, focused on security, and Thomas Hazel, who's going to talk about a new type of database for Supercloud. There are several analysts, including Keith Townsend, Maribel Lopez, George Gilbert, Sanjeev Mohan and so many more guests, we don't have time to list them all. 
They're all up on supercloud.world with a full agenda, so you can check that out. Now let's take a look at some of the things that we're exploring in more detail, starting with the Walmart Cloud Native Platform, they call it WCNP. We definitely see this as a Supercloud and we dig into it with Jack Greenfield. He's the head of architecture at Walmart. Here's a quote from Jack. "WCNP is an implementation of Kubernetes for the Walmart ecosystem. We've taken Kubernetes off the shelf as open source." By the way, they do the same thing with OpenStack. "And we have integrated it with a number of foundational services that provide other aspects of our computational environment. Kubernetes off the shelf doesn't do everything." And so Walmart chose to take a do-it-yourself approach to build a Supercloud, for a variety of reasons that Jack will explain, along with Walmart's so-called triplet architecture connecting on-prem, Azure and GCP. No surprise, there's no Amazon at Walmart, for obvious reasons. And what they do is they create a common experience for devs across clouds. Jack is going to talk about how Walmart is evolving its Supercloud in the future. You don't want to miss that. Now, next, let's take a look at how Veronika Durgin of Saks thinks about data sharing across clouds. Data sharing, we think, is a potential killer use case for Supercloud. In fact, let's hear it in Veronika's own words. Please play the clip. >> How do we talk to each other? And more importantly, how do we data share? You know, I work with data, you know, this is what I do. So if, you know, I want to get data from a company that's using, say, Google, how do we share it in a smooth way where it doesn't have to be this crazy, I don't know, SFTP file moving? So that's where I think Supercloud comes to me in my mind, is like practical applications. How do we create that mesh, that network that we can easily share data with each other? 
>> Now data mesh is a possible architectural approach that will enable more facile data sharing and the monetization of data products. You'll hear Zhamak Dehghani live in studio talking about what standards are missing to make this vision a reality across the Supercloud. Now one of the other things that we're really excited about is digging deeper into the right approach for Supercloud adoption. And we're going to share a preview of a debate that's going on right now in the community. Bob Muglia, former CEO of Snowflake and Microsoft exec, was kind enough to spend some time looking at the community's Supercloud definition and he felt that it needed to be simplified. So in near real time he came up with the following definition that we're showing here. I'll read it. "A Supercloud is a platform that provides programmatically consistent services hosted on heterogeneous cloud providers." So not only did Bob simplify the initial definition, he stressed that the Supercloud is a platform versus an architecture, implying that the platform provider, e.g. Snowflake, VMware, Databricks, Cohesity, et cetera, is responsible for determining the architecture. Now interestingly, in the shared Google doc that the working group uses to collaborate on the Supercloud definition, Dr. Nelu Mihai, who is actually building a Supercloud, responded as follows to Bob's assertion: "We need to avoid creating many Supercloud platforms with their own architectures. If we do that, then we create other proprietary clouds on top of existing ones. We need to define an architecture of how Supercloud interfaces with all other clouds. What is the information model? What is the execution model and how will users interact with Supercloud?" What does this seemingly nuanced point tell us and why does it matter? 
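To make Bob's phrase "programmatically consistent services hosted on heterogeneous cloud providers" concrete, here's a minimal sketch. Every name in it (ObjectStore, InMemoryStore, replicate) is our own hypothetical illustration, not any vendor's actual API; the point is only that application code targets one interface while each heterogeneous cloud would get its own adapter behind it:

```python
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    """One programmatically consistent interface; each cloud gets its own adapter."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(ObjectStore):
    """Stand-in adapter so the sketch runs anywhere; a real platform would ship
    one adapter per provider (S3, GCS, Azure Blob) behind the same interface."""

    def __init__(self) -> None:
        self._blobs: dict = {}

    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = data

    def get(self, key: str) -> bytes:
        return self._blobs[key]

def replicate(src: ObjectStore, dst: ObjectStore, key: str) -> None:
    """Application code sees one API regardless of which cloud backs each store."""
    dst.put(key, src.get(key))
```

In this framing, the platform provider owns everything below the `ObjectStore` line, which is exactly the platform-versus-architecture question Nelu Mihai raises next.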
Well, history suggests that de facto standards will emerge more quickly to resolve real-world practitioner problems and catch on more quickly than consensus-based architectures and standards-based architectures. But in the long run, the latter may serve customers better. So we'll be exploring this topic in more detail at Supercloud 2, and of course we'd love to hear what you think: platform, architecture, both? Now one of the real technical gurus that we'll have in studio at Supercloud 2 is David Flynn. He's one of the people behind the movement that enabled enterprise flash adoption, that craze. And he did that with Fusion-io, and he is now working on a system to enable read-write data access to any user in any application in any data center or on any cloud anywhere. So think of this company as a Supercloud enabler. Allow me to share an excerpt from a conversation David Floyer and I had with David Flynn last year. He as well gave a lot of thought to the Supercloud definition and was really helpful with an opinionated point of view. He said something to us that was, we thought, relevant. "What is the operating system for a decentralized cloud? The main two functions of an operating system or an operating environment are, one, the process scheduler and, two, the file system. The strongest argument for Supercloud is made when you go down to the platform layer and talk about it as an operating environment on which you can run all forms of applications." So a couple of implications here that we'll be exploring with David Flynn in studio. First, we're inferring from his comment that he's in the platform camp, where the platform owner is responsible for the architecture, and there are obviously trade-offs there and benefits, but we'll have to clarify that with him. And second, he's basically saying you kill the concept the further you move up the stack: the further up you go, the weaker the Supercloud argument becomes, because it's just becoming SaaS. 
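Flynn's two operating-system functions, the process scheduler and the file system, can be illustrated with a toy placement policy. To be clear, this is our own sketch of the idea, not Hammerspace's actual implementation; the hypothetical `schedule` function just captures the notion of moving compute to wherever the data already lives:

```python
def schedule(job_dataset, sites):
    """Toy data-locality scheduler: run a job at whichever site already holds its
    dataset, falling back to the first site. `sites` maps site name -> set of datasets."""
    for name, datasets in sites.items():
        if job_dataset in datasets:
            return name  # move the compute to the data, not the data to the compute
    # Naive fallback; a real system would weigh egress cost, latency, capacity, etc.
    return next(iter(sites))
```

The file-system half of Flynn's pairing is the harder part: the scheduler only works if every site sees a common namespace for the data in the first place.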
Now this is something we're going to explore to better understand his thinking on this, but also whether the existing notion of SaaS is changing and whether or not a new breed of Supercloud apps will emerge. Which brings us to this really interesting fellow that George Gilbert and I riffed with ahead of Supercloud 2: Tristan Handy. He's the founder and CEO of dbt Labs, and he has a highly opinionated and technical mind. Here's what he said: "One of the things that we still don't know how to API-ify is concepts that live inside of your data warehouse, inside of your data lake. These are core concepts that the business should be able to create applications around very easily. In fact, that's not the case because it involves a lot of data engineering pipeline and other work to make these available. So if you really want to make it easy to create these data experiences for users, you need to have an ability to describe these metrics and then to turn them into APIs to make them accessible to application developers who have literally no idea how they're calculated behind the scenes, and they don't need to." A lot of implications to this statement that we'll explore at Supercloud 2. Zhamak Dehghani's data mesh comes into play here, with her critique of hyper-specialized data pipeline experts with little or no domain knowledge. Also the need for simplified self-service infrastructure, which Kit Colbert is likely going to touch upon, and Veronika Durgin of Saks and her ideal state for data sharing, along with Harveer Singh of Western Union. They've got to deal with 200 locations around the world, data privacy issues, data sovereignty: how do you share data safely? Same with Nick Taylor of Ionis Pharmaceuticals. And not to blow your mind, but Thomas Hazel and Bob Muglia posit that to make data apps a reality across the Supercloud you have to rethink everything. You can't just let in-memory databases and caching architectures take care of everything in a brute-force manner. 
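Tristan's point about describing metrics and turning them into APIs can be sketched in a few lines. This is a deliberately tiny illustration of the pattern, not dbt's actual semantic layer; the `METRICS` registry and `metric_api` function are hypothetical names we've made up:

```python
# A metric is described as data (name, source column, aggregation),
# not hand-built as bespoke pipeline code.
METRICS = {
    "total_revenue": {"column": "amount", "agg": sum},
    "order_count": {"column": "amount", "agg": len},
}

def metric_api(name, rows):
    """Hypothetical endpoint: callers request a metric by name with no idea
    how it's calculated behind the scenes, which is exactly Tristan's point."""
    spec = METRICS[name]
    values = [row[spec["column"]] for row in rows]
    return spec["agg"](values)
```

The application developer only ever sees the metric name; the definition can move or change underneath without touching the callers.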
Rather, you have to get down to really detailed levels, even things like how data is laid out on disk, i.e. flash, and think about rewriting applications for the Supercloud and the ML/AI era. All of this and more at Supercloud 2, which wouldn't be complete without some data. So we pinged our friends from ETR, Eric Bradley and Darren Bramberm, to see if they had any data on Supercloud that we could tap. And so we're going to be analyzing a number of the players as well at Supercloud 2. Now, many of you are familiar with this graphic here. We show some of the players involved in delivering or enabling Supercloud-like capabilities. On the Y axis is spending momentum and on the horizontal axis is market presence, or pervasiveness in the data. So Net Score versus what they call overlap, or N, in the data. And the table insert shows how the dots are plotted. Now, not to steal ETR's thunder, but the first point is you really can't have Supercloud without the hyperscale cloud platforms, which is shown on this graphic. But the exciting aspect of Supercloud is the opportunity to build value on top of that hyperscale infrastructure. Snowflake here continues to show strong spending velocity, as do Databricks, HashiCorp, and Rubrik. VMware Tanzu, which we all put under the magnifying glass after the Broadcom announcements, is also showing momentum. Unfortunately, due to a scheduling conflict we weren't able to get Red Hat on the program, but they're clearly a player here. And we've put Cohesity and Veeam on the chart as well because backup is a likely use case across clouds and on-premises. And now one other callout that we drill down on at Supercloud 2 is Cloudflare, which actually uses the term Supercloud, maybe in a different way. They look at Supercloud really as, you know, serverless on steroids. And so the data brains at ETR will have more to say on this topic at Supercloud 2, along with many others. Okay, so why should you attend Supercloud 2? What's in it for me, kind of thing? 
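For those unfamiliar with the chart's Y axis, Net Score is, as we understand ETR's methodology, roughly the share of survey accounts spending more on a vendor minus the share spending less. A hedged sketch of that arithmetic:

```python
def net_score(responses):
    """ETR-style Net Score (our simplified reading of the methodology): the share
    of accounts increasing spend minus the share decreasing it. 'flat' answers
    count only in the denominator. The X axis, overlap or N, is just how many
    accounts in the survey cite the vendor at all."""
    up = sum(r in ("adopt", "increase") for r in responses)
    down = sum(r in ("decrease", "replace") for r in responses)
    return (up - down) / len(responses)
```

So a vendor plotted high on the chart has far more customers expanding spend than cutting it, regardless of how big its footprint (N) is.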
So first of all, if you're a practitioner and you want to understand what the possibilities are for doing cross-cloud services, for monetizing data, how your peers are doing data sharing, how some of your peers are actually building out a Supercloud, you're going to get real-world input from practitioners. If you're a technologist trying to figure out various ways to solve problems around data, data sharing, and cross-cloud service deployment, there's going to be a number of deep technology experts who are going to share how they're doing it. We're also going to drill down with Walmart into a practical example of Supercloud, with some other examples of how practitioners are dealing with cross-cloud complexity. Some of them, by the way, have kind of thrown up their hands and said, hey, we're going mono-cloud. And we'll talk about the potential implications, dangers and risks of doing that, and also some of the benefits. You know, there's a question, right? Is Supercloud the same wine, new bottle, or is it truly something different that can drive substantive business value? So look, go to supercloud.world. It's January 17th at 9:00 AM Pacific. You can register for free and participate directly in the program. Okay, that's a wrap. I want to give a shout-out to the Supercloud supporters. VMware has been a great partner as our anchor sponsor, with ChaosSearch, Prosimo, and Alkira contributing to the effort as well. I want to thank Alex Myerson, who's on production and manages the podcast. Ken Schiffman is part of the supporting cast as well. Kristen Martin and Cheryl Knight help get the word out on social media and in our newsletters. And Rob Hof is our editor-in-chief over at SiliconANGLE. Thank you all. Remember, these episodes are all available as podcasts. Wherever you listen, we really appreciate the support that you've given. We just saw some stats from Buzzsprout: we hit the top 25%, and we're almost at 400,000 downloads last year. So really appreciate your participation. 
All you've got to do is search "Breaking Analysis podcast" and you'll find those. I publish each week on wikibon.com and siliconangle.com. Or if you want to get a hold of me, you can email me directly at David.Vellante@siliconangle.com, or DM me @DVellante, or comment on our LinkedIn posts. Also, I want you to check out etr.ai. They've got the best survey data in the enterprise tech business. This is Dave Vellante for theCUBE Insights, powered by ETR. Thanks for watching. We'll see you next week at Supercloud 2, or next time on Breaking Analysis. (light music)

Published Date : Jan 14 2023


David Flynn Supercloud Audio


 

>> From every ISV to solve the problems. You want there to be tools in place that you can use, either open source tools or whatever it is, that help you build it. And slowly over time, that building will become easier and easier. So my question to you was, where do you see yourself playing? Do you see yourself playing to ISVs as a set of tools, which will make their life a lot easier and provide that work? >> Absolutely. >> If they don't have, so they don't have to do it. Or you're providing this for the end users? Or both? >> So it's a progression. If you go to the ISVs first, you're doomed to starve before you have time for that other option. >> Yeah. >> Right? So it's a question of phase, the phasing of it. And also if you go directly to end users, you can demonstrate the power of it and get the attention of the ISVs. I believe that the ISVs, especially those with the biggest footprints and the most, you know, coveted estates, have already made massive investments at trying to solve decentralization of their software stack. And I believe that they have used it as a hook to try to move to a software-as-a-service model and rope people into leasing their infrastructure. So if you look at the clouds that have been propped up by Autodesk or by Adobe, or you name the company, they are building proprietary makeshift solutions for decentralizing or hybrid clouding. Or maybe they're not even doing that at all, and all they're saying is, hey, if you want to get location agnosticness, then what you should do is just move into our cloud. >> Right. >> And then they try to solve on the background how to decentralize it between different regions so they can have decent offerings in each region. 
But those who are more advanced have already made larger investments and will be more averse to, you know, throwing that stuff away, all of their makeshift machinery away, and using a platform that gives them high-performance parallel, low-level file system access, while at the same time having metadata-driven, you know, policy-based, intent-based orchestration to manage the diffusion of data across a decentralized infrastructure. They are not going to be as open because they've made such an investment, and they're going to look at how they monetize it. So what we have found with, like, the movie studios who are using us already, many of the apps they're using, many of those software offerings, the ISVs have their own cloud that offers that software for the cloud. But what we got when I asked about this, 'cause I dug into this question specifically because I'm very interested to know how we're going to make that leap from end user upstream into the ISVs, where I believe we need to, and they said, look, we cannot use these software ISV-specific SaaS clouds for two reasons. Number one is we lose control of the data. We're giving it to them. That's security and other issues. And here you're talking about, we're doing work for Disney, we're doing work for Netflix, and they're not going to let us put our data on those software clouds, on those SaaS clouds. Secondly, in any reasonable pipeline, the data is shared by many different applications. We need to be agnostic as to the application. 'Cause the inputs to one application, you know, the output from one application provides the input to the next, and it's not necessarily from the same vendor. So they need to have a data platform that lets them, you know, go from one software stack, and you know, to run it on another. Because they might do the rendering with this and yet they do the editing with that, and you know, et cetera, et cetera. 
So I think the further you go up the stack in the structured data and dedicated applications for specific functions in specific verticals, the harder it is to justify a SaaS offering where you're basically telling the end users, you need to park all your data with us and then you can run your application in our cloud and get this. That ultimately is a dead-end path, versus having the data be open and available to many applications across this supercloud layer. >> Okay, so-- >> Is that making any sense? >> Yes, so if I could just ask a clarifying question. So, if I had to take Snowflake as an example, I think they're doing exactly what you're saying is a dead end: put everything into our proprietary system and then we'll figure out how to distribute it. >> Yeah. >> And I think if you're familiar with Zhamak Dehghani's data mesh concept. Are you? >> A little bit, yeah. >> But in her model, Snowflake, a Snowflake warehouse, is just a node on the mesh, and that mesh is-- >> That's right. >> Ultimately the supercloud, and you're an enabler of that, is what I'm hearing. >> That's right. What they're doing up at the structured level and what they're talking about at the structured level, we're doing at the underlying, unstructured level, which by the way has implications for how you implement those distributed database things. In other words, implementing a Snowflake on top of Hammerspace would have made building stuff like that in the first place easier. It would allow you to easily shift and run the database engine anywhere. You still have to solve how to shard and distribute at the transaction layer above, so I'm not saying we're a substitute for what you need to do at the app layer. By the way, there is another example of that, and that's Microsoft Office, right? It's one thing to share that, to have a file share where you can share all the docs. 
It's something else to have Word and PowerPoint, Excel know how to allow people to be simultaneously editing the same doc. That's always going to happen in the app layer. But not all applications need that level of, you know, in-app decentralization. You know, many of them, many workflows, are pipelined, especially the ones that are very data intensive, where you're doing drug discovery or you're doing rendering, or you're doing machine learning training. These things are human-in-the-loop with large stages of processing across tens of thousands of cores. And I think that kind of data processing pipeline is what we're focusing on first. Not so much the Microsoft Office or the Snowflake, you know, parking a relational database, because that takes a lot of application-layer stuff, and that's what they're good at. >> Right. >> But I think... >> Go ahead, sorry. >> Later entrants in these markets will find Hammerspace as a way to accelerate their work so they can focus more narrowly on just the stuff that's app-specific, higher-level sharing in the app. >> Yes, Snowflake founders-- >> I think it might be worth mentioning also, just keep this confidential guys, but one of our customers is Blue Origin. And one of the things that we have found is kind of the point of what you're talking about with our customers. They're needing to build this, and since it's not commercially available, or they don't know where to look for it to be commercially available, they're all building it themselves. So this layer is needed. And Blue is just one of the examples of quite a few we're now talking to. And like manufacturing, HPC, research, where they're out trying to solve this problem with their own scripting tools and things like that. And I just, I don't know if there's anything you want to add, David, but you know, there's definitely a demand here and customers are trying to figure out how to solve it beyond what Hammerspace is doing. 
Like the need is so great that they're just putting developers on trying to do it themselves. >> Well, and you know, Snowflake founders, they didn't have a Hammerspace to lean on. But one of the things that's interesting about Supercloud is we feel as though industry clouds will emerge, that as part of companies' digital transformations, they will, you know, every company's a software company, they'll begin to build their own clouds, and they will be able to use a Hammerspace to do that. >> A super-PaaS layer. >> Yes. It's really, I don't know if David's speaking, I don't want to speak over him, but we can't hear you. May be going through a bad... >> Well, regional, regional clouds that make that possible. And so they're doing these render farms and editing farms, and it's a cloud specific to the types of workflows in the media and entertainment world. Or clouds specific to workflows in the chip design world, or in the drug and bio and life sciences exploration world. There are large organizations that are kind of a blend of end users, like the Broad, which has their own kind of cloud where they're asking collaborators to come in and work with them. So it starts to even blur who's an end user versus an ISV. >> Yes. >> Right? When you start talking about massive data, the main gravity is having lots of people participate. >> Yep, and that's where the value is. And that's where the value is. And this is a megatrend that we see. And so it's really important for us to get to the point of what is and what is not a Supercloud and, you know, that's where we're trying to evolve. >> Let's talk about this for a second, 'cause I want to, I want to challenge you on something, and it's something that I got challenged on, and it has led me to thinking differently than I did at first, which Molly can attest to. Okay? 
So, we have been looking for a way to talk about the concept of cloud, of utility computing, run anything anywhere, that isn't addressed in today's realization of cloud. 'Cause today's cloud is not run anything anywhere, it's quite the opposite. You park your data in AWS and that's where you run stuff. And you pretty much have to. Same with Azure. They're using data gravity to keep you captive there, just like the old infrastructure guys did. But now it's even worse because it's coupled back with the software to some degree as well. And you have to use their storage, networking, and compute. It's not, I mean, it fell back to the mainframe era. Anyhow, so I love the concept of Supercloud. By the way, I was going to suggest that a better term might be hypercloud, since hyper speaks to the multidimensionality of it and the ability to be in a, you know, be in a different dimension, a different plane of existence, kind of thing, like hyperspace. But super and hyper are somewhat synonyms. I mean, you have hypercars and you have supercars and blah, blah, blah. I happen to like hyper maybe also because it ties into the whole Hammerspace notion of a hyper-dimensional, you know, reality, having your data centers connected by a wormhole that is Hammerspace. But regardless, what I got challenged on is calling it something different at all versus simply saying, this is what cloud has always meant to be. This is the true cloud, this is real cloud, this is cloud. And I think back to what happened, you'll remember, at Fusion-io we talked about IO memory, and we did that because people had a conceptualization of what an SSD was. And an SSD back then was low capacity, low endurance, made to go into military and aerospace, where things needed to be rugged but was completely useless in the data center. And we needed people to imagine this thing as being able to displace entire SANs, with the kind of capacity density, performance density, endurance. 
And so we talked IO memory. We could have said enterprise SSD, and that's what the industry now refers to for that concept. What will people be saying five and 10 years from now? Will they simply say, well, this is cloud as it was always meant to be, where you are truly able to run anything anywhere and have not only the same APIs, but your same data available with high-performance access, all forms of access, block, file and object, everywhere? So yeah. And I wonder, and this is just me throwing it out there, I wonder if, well, there's trade-offs, right? Giving it a new moniker, supercloud, versus simply talking about how cloud was always intended to be and what it was meant to be, you know, the real cloud or true cloud, there are trade-offs. By putting a name on it and branding it, that lets people talk about it and understand they're talking about something different. But is it also an affront to people who thought that that's what they already had? >> What's different, what's new? Yes, and so we've given a lot of thought to this. >> Right, it's like you. >> And it's because we've been asked why the industry needs a new term, and we've tried to address some of that. But some of the inside baseball that we haven't shared is, you remember Web 2.0, back then? >> Yep. >> Web 2.0 was the same thing. And I remember Tim Berners-Lee saying, "Why do we need Web 2.0? This is what the Web was always supposed to be." But the truth is-- >> I know, that was another perfect-- >> But the truth is it wasn't, number one. Number two, everybody hated the Web 2.0 term. John Furrier was actually in the middle of it all. And then it created this groundswell. So one of the things we wrote about is that supercloud is an evocative term that catalyzes debate and conversation, which is what we like, of course. And maybe that's self-serving. But yeah, HyperCloud, Metacloud, super, meaning, it's funny because super came from the Latin supra, above; it was never the superlative. 
But the superlative was a convenient byproduct that caused a lot of friction and flak, which again, in the media business is like a perfect storm brewing. >> That's not a bad thing to have to do, and I think you do need to shake people out of the complacency of the limitations that they're used to. And I'll tell you what, the fact that you even have the terms hybrid cloud, multi-cloud, private cloud, edge computing, those are all just referring to the different boundaries that isolate the silo that is the current limited cloud. >> Right. >> So if I heard correctly, in terms of us defining what is and what isn't in supercloud, you would say traditional applications, which have to run in a certain place, in a certain cloud, and can't run anywhere else, would be the stuff that you would not put in as being addressed by supercloud. And over time, you would want to be able to run the data where you want to and in any of those concepts. >> Or even modern apps, right? Or even modern apps that are siloed in SaaS within an individual cloud, right? >> So yeah, I guess it's twofold. Number one, if you're going at the high application layers, there's lots of ways that you can give the appearance of anything running anywhere. The ISV, the SaaS vendor, can engineer stuff to have the ability to serve with low enough latency to different geographies, right? So if you go too high up the stack, it kind of loses its meaning, because there's lots of different ways to make do and give the appearance of omnipresence of the service. Okay? As you come down more towards the platform layer, it gets harder and harder to mask the fact that supercloud is something entirely different than just a good regionally-distributed SaaS service. So I don't think you, I don't think you can distinguish supercloud if you go too high up the stack, because it's just SaaS, it's just a good SaaS service where the SaaS vendor has done the hard work to give you low latency access from different geographic regions. 
>> Yeah, so this is one of the hardest things, David. >> Common among them. >> Yeah, this is really an important point. This is one of the things I've had the most trouble with is why is this not just SaaS? >> So you dilute your message when you go up to the SaaS layer. If you were to focus most of this around the super PaaS layer, the how can you host applications and run them anywhere, and not host this, not run a service, not have a service available everywhere. So how can you take any application, even applications that are written, you know, in a traditional legacy data center fashion, and be able to run them anywhere and have them have their binaries and their datasets and the runtime environment and the infrastructure to start them and stop them? You know, the jobs, the, what the Kubernetes, the job scheduler? What we're really talking about here, what I think we're really talking about here is building the operating system for a decentralized cloud. What is the operating system, the operating environment for a decentralized cloud? Where you can, and that the main two functions of an operating system or an operating environment are the process scheduler, the thing that's scheduling what is running where and when and so forth, and the file system, right? The thing that's supplying a common view and access to data. So when we talk about this, I think that the strongest argument for supercloud is made when you go down to the platform layer and talk of it, talk about it as an operating environment on which you can run all forms of applications. >> Would you exclude--? >> Not a specific application that's been engineered as a SaaS. (audio distortion) >> He'll come back. >> Are you there? >> Yeah, yeah, you just cut out for a minute. >> I lost your last statement when you broke up. >> We heard you, you said that not the specific application. So would you exclude Snowflake from supercloud? >> Frankly, I would. I would.
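The two operating-environment functions described above, a process scheduler that decides where work runs and a file system that gives every site the same view of the data, can be sketched as a toy model. This is purely illustrative Python; every name is invented for the example and none of it is Hammerspace's actual design or API.

```python
# Toy model of the two functions attributed to a decentralized-cloud
# "operating environment": a scheduler (what runs where) and a common
# namespace (same data visible from every site). Illustrative only.

class DecentralizedOS:
    def __init__(self, sites):
        self.sites = sites        # e.g. ["on-prem", "aws", "azure"]
        self.namespace = {}       # one logical namespace, visible everywhere
        self.placement = {}       # path -> site where the bytes currently live

    def write(self, path, data, site):
        """A write at any site becomes visible at every site."""
        self.namespace[path] = data
        self.placement[path] = site

    def read(self, path, from_site):
        """The same data is readable from any site, wherever it lives."""
        assert from_site in self.sites
        return self.namespace[path]

    def schedule(self, job_path):
        """Scheduler function: run the job where its data already is."""
        return self.placement.get(job_path, self.sites[0])

# Demo: write at one site, read from another, schedule near the data.
dos = DecentralizedOS(["on-prem", "aws", "azure"])
dos.write("/data/model.bin", b"weights", "aws")
same_everywhere = dos.read("/data/model.bin", "on-prem")
run_site = dos.schedule("/data/model.bin")
```

The point of the sketch is only the division of labor described in the conversation: one function decides placement of work, the other makes a single namespace omnipresent.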
Because, well, and this is kind of hard to do because Snowflake doesn't like to, Frank doesn't like to talk about Snowflake as a SaaS service. It has a negative connotation. >> But it is. >> I know, we all know it is. We all know it is and because it is, yes, I would exclude them. >> I think I actually have him on camera. >> There's nothing in common. >> I think I have him on camera or maybe Benoit, saying, "Well, we are a SaaS." I think it's Slootman. I think I said to Slootman, "I know you don't like to say you're a SaaS." And I think he said, "Well, we are a SaaS." >> Because again, if you go to the top of the application stack, there's any number of ways you can give it location-agnostic function or, you know, regional, local stuff. It's like let's solve the location problem by having me be your one location. How can it be decentralized if you're centralizing on (audio distortion)? >> Well, it's more decentralized than if it's all in one cloud. So let me actually, so the spectrum. So again, in the spirit of what is and what isn't, I think it's safe to say Hammerspace is supercloud. I think there's no debate there, right? Certainly among this crowd. And I think we can all agree that Dell, Dell Storage is not supercloud. Where it gets fuzzy is this Snowflake example or even, how about a, how about a Cohesity that instantiates its stack in different cloud regions in different clouds, and synchronizes, whatever magic sauce it does that. Is that a supercloud? I mean, so I'm cautious about having too strict of a definition 'cause then only-- >> Fair enough, fair enough. >> But I could use your help and thoughts on that. >> So I think we're talking about two different spectrums here. One is the spectrum of platform to application-specific. As you go up the application stack and it becomes this specific thing. Or you go up to the more and more structured where it's serving a specific application function where it's more of a SaaS thing.
I think it's harder to call a SaaS service a supercloud. And I would argue that the reason there, and what you're lacking in the definition, is to talk about it as general purpose. Okay? Now, that said, a data warehouse is general purpose at the structured data level. So you could make the argument for why Snowflake is a supercloud by saying that it is a general purpose platform for doing lots of different things. It's just one at a higher level up, at the structured data level. So one spectrum is the high level going from platform to, you know, unstructured data to structured data to very application-specific, right? Like a specific, you know, CAD/CAM mechanical design cloud, like an Autodesk would want to give you their cloud for running, you know, and sharing CAD/CAM designs, doing your CAD/CAM anywhere stuff. Well, the other spectrum is how well does the purported supercloud technology actually live up to allowing you to run anything anywhere with not just the same APIs but with the local presence of data, with the exact same runtime environment everywhere, and to be able to correctly manage how to get that runtime environment anywhere. So a Cohesity has some means of running things in different places and some means of coordinating what's where and of serving diff, you know, things in different places. I would argue that it is a very poor approximation of what Hammerspace does in providing the exact same file system with local high-performance access everywhere, with the metadata ability to control where the data is actually instantiated so that you don't have to wait for it to get orchestrated. But even then when you do have to wait for it, it happens automatically and so it's still only a matter of, well, how quick is it? And on the other end of the spectrum is you could look at NetApp with FlexCache and say, "Is that supercloud?" And I would argue, well kind of, because it allows you to run things in different places because it's a cache.
But you know, it really isn't, because it presumes some central silo from which you're caching stuff. So, you know, is it or isn't it? Well, it's on a spectrum of exactly how fully is it decoupling a runtime environment from specific locality? And I think a cache doesn't, it stretches a specific silo and makes it have some semblance of similar access in other places. But there's still a very big difference to the central silo, right? You can't turn off that central silo, for example. >> So it comes down to how specific you make the definition. And this is where it gets kind of really interesting. It's like cloud. Does IBM have a cloud? >> Exactly. >> I would say yes. Does it have the kind of quality that you would expect from a hyper-scale cloud? No. Or see if you could say the same thing about-- >> But that's a problem with choosing a name. That's the problem with choosing a name, supercloud, versus talking about the concept of cloud and how true up you are to that concept. >> For sure. >> Right? Because without getting a name, you don't have to draw, yeah. >> I'd like to explore one particular or bring them together. You made a very interesting observation that from an enterprise point of view, they want to safeguard their store, their data, and they want to make sure that they can have that data running in their own workflows, as well as other service providers providing services to them for that data. So, in particular, if you go back to, you go back to Snowflake. If Snowflake could provide the ability for you to have your data where you wanted, you were in charge of that, would that make Snowflake a supercloud? >> I'll tell you, in my mind, they would be closer to my conceptualization of supercloud if you can instantiate Snowflake as software on your own infrastructure, and pump your own data to Snowflake that's instantiated on your own infrastructure.
The fact that it has to be on their infrastructure, or that it's on their, that it's on their account in the cloud, that you're giving them the data and they're, that fundamentally goes against it to me. If they, you know, they would be a pure, a pure play if they were a software-defined thing where you could instantiate Snowflake machinery on the infrastructure of your choice and then put your data into that machinery and get all the benefits of Snowflake. >> So did you see--? >> In other words, if they were not a SaaS service, but offered all of the similar benefits of being, you know, if it were a service that you could run on your own infrastructure. >> So did you see what they announced, that--? >> I hope that's making sense. >> It does, did you see what they announced at Dell? They basically announced the ability to take non-native Snowflake data, read it in from an object store on-prem, like a Dell object store. They do the same thing with Pure, read it in, running it in the cloud, and then push it back out. And I was saying to Dell, look, that's fine. Okay, that's interesting. You're taking a materialized view or an extended table, whatever you're doing, wouldn't it be more interesting if you could actually run the query locally with your compute? That would be an extension that would actually get my attention and extend that. >> That is what I'm talking about. That's what I'm talking about. And that's why I'm saying I think Hammerspace is more progressive on that front because with our technology, anybody who can instantiate a service can make a service. And so I, so MSPs can use Hammerspace as a way to build a super PaaS layer and host their clients on their infrastructure in a cloud-like fashion.
And their clients can have their own private data centers and the MSP or the public clouds, and Hammerspace can be instantiated, get this, by different parties in these different pieces of infrastructure and yet linked together to make a common file system across all of it. >> But this is data mesh. If I were HPE and Dell, it's exactly what I'd be doing. I'd be working with Hammerspace to create my own data. I'd work with Databricks, Snowflake, and any other-- >> Data mesh is a good way to put it. Data mesh is a good way to put it. And this is at the lowest level of, you know, the underlying file system that's mountable by the operating system, consumed as a real file system. You can't get lower level than that. That's why this is the foundation for all of the other apps and structured data systems, because you need to have a data mesh that can at least mesh the binary blob. >> Okay. >> That holds the binaries and that holds the datasets that those applications are running. >> So David, in the third week of January, we're doing Supercloud 2 and I'm trying to convince John Furrier to make it a data slash data mesh edition. I'm slowly getting him to the knothole. I would very much, I mean you're in the Bay Area, I'd very much like you to be one of the headliners. Zhamak Dehghani is going to speak, she's the creator of Data Mesh. >> Sure. >> I'd love to have you come into our studio as well, for the live session. If you can't make it, we can pre-record. But you're right there, so I'll get you the dates. >> We'd love to, yeah. No, you can count on it. No, definitely. And you know, we don't typically talk about what we do as Data Mesh. We've been, you know, using global data environment. But, you know, under the covers, that's what the thing is. And so yeah, I think we can frame the discussion like that to line up with other, you know, with the other discussions.
>> Yeah, and Data Mesh, of course, is one of those evocative names, but she has come up with some very well-defined principles around decentralized data, data as products, self-serve infrastructure, automated governance, and so forth, which I think your vision plugs right into. And she's brilliant. You'll love meeting her. >> Well, you know, and I think... Oh, go ahead. Go ahead, Peter. >> Just like to explore one other interface which I think is important. How do you see yourself and the open source? You talked about having an operating system. Obviously, Linux is the operating system at one level. How are you imagining that you would interface with the open source community as part of this development? >> Well, it's funny you ask 'cause my CTO is the kernel maintainer of the storage networking stack. So how the Linux operating system perceives and consumes networked data at the file system level, the network file system stack, is his purview. He owns that, he wrote most of it over the last decade that he's been the maintainer, but he's the gatekeeper of what goes in. And we have leveraged his abilities to enhance Linux to be able to use this decentralized data, in particular with decoupling the control plane, driven by metadata, from the data access path and the many storage systems on which the data gets accessed. So this factoring, this splitting of control plane from data path, metadata from data, was absolutely necessary to create a data mesh like we're talking about. And to be able to build this supercloud concept. And the highways on which the data runs and the client which knows how to talk to it is all open source. And we have, we've driven the NFS 4.2 spec. The newest NFS spec came from my team. And it was specifically the enhancements needed to be able to build a spanning file system, a data mesh at a file system level.
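The control-plane/data-path split described here can be reduced to a minimal sketch: a metadata service that only answers "where do these bytes live," and storage nodes that only serve bytes. This is a hedged illustration of the general pattern, not the NFS 4.2 layouts or Hammerspace's implementation; every class and name below is made up for the example.

```python
# Sketch of decoupling a metadata-driven control plane from the data path.
# Clients ask the metadata service for a location, then read bytes directly
# from the storage node. Moving data is a metadata-only change for readers.

class MetadataService:
    """Control plane: knows where every file's bytes currently live."""
    def __init__(self):
        self.layout = {}                  # path -> storage node name

    def place(self, path, node_name):
        self.layout[path] = node_name

    def locate(self, path):
        return self.layout[path]

class StorageNode:
    """Data path: serves bytes, no placement logic of its own."""
    def __init__(self, name):
        self.name = name
        self.blocks = {}

class Client:
    def __init__(self, mds, nodes):
        self.mds = mds
        self.nodes = {n.name: n for n in nodes}

    def write(self, path, data, node_name):
        self.nodes[node_name].blocks[path] = data
        self.mds.place(path, node_name)   # only metadata crosses the control plane

    def read(self, path):
        node = self.nodes[self.mds.locate(path)]  # ask the control plane once...
        return node.blocks[path]                  # ...then hit the data path directly

# Demo: write on one node, then "migrate" by copying bytes and flipping metadata.
mds = MetadataService()
nodes = [StorageNode("on-prem"), StorageNode("cloud-east")]
client = Client(mds, nodes)
client.write("/projects/render.dat", b"frames", "on-prem")
before = client.read("/projects/render.dat")
client.nodes["cloud-east"].blocks["/projects/render.dat"] = b"frames"
mds.place("/projects/render.dat", "cloud-east")
after = client.read("/projects/render.dat")
```

Because readers resolve location through the metadata service on every access, relocating the bytes never requires reconfiguring a client, which is the property the factoring above is meant to show.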
Now that said, our file system itself and our server, our file server, our data orchestration, our data management stuff, that's all closed source, proprietary Hammerspace tech. But the highways on which the mesh connects are actually all open source, and the client that knows how to consume it. So we would, honestly, I would welcome competitors using those same highways. They would be at a major disadvantage because we kind of built them, but it would still be very validating and I think only increase the potential adoption rate by more than whatever they might take of the market. So it'd actually be good to split the market with somebody else to come in and share those now super highways for how to mesh data at the file system level, you know, in here. So yeah, hopefully that answered your question. Does that answer the question about how we embrace the open source? >> Right, and there was one other, just that my last one is how do you enable something to run in every environment? And if we take the edge, for example, as being, as an environment which is much very, very compute heavy, but having a lot less capability, how do you do a hold? >> Perfect question. Perfect question. What we do today is a software appliance. We are using a Linux RHEL 8, RHEL 8 equivalent or a CentOS 8, or it's, you know, they're all roughly equivalent. But we have bundled it in a software appliance which can be instantiated on bare metal hardware, on any type of VM system from VMware to all of the different hypervisors in the Linux world, to even Nutanix and such. So it can run in any virtualized environment and it can run on any cloud instance, server instance in the cloud. And we have it packaged and deployable from the marketplaces within the different clouds. So you can literally spin it up at the click of an API in the cloud on instances in the cloud. So with all of these together, you can basically instantiate a Hammerspace set of machinery that can offer up this file system mesh.
like we've been using the terminology we've been using now, anywhere. So it's like being able to take and spin up Snowflake and then just be able to install and run some VMs anywhere you want and boom, now you have a Snowflake service. And by the way, it is so complete that some of our customers, I would argue many, aren't even using public clouds at all, they're using this just to run their own data centers in a cloud-like fashion, you know, where they have a data service that can span it all. >> Yeah, and to Molly's first point, we would consider that, you know, cloud. Let me put you on the spot. If you had to describe conceptually, without a chalkboard, what an architectural diagram would look like for supercloud, what would you say? >> I would say it's to have the same runtime environment within every data center, and defining that runtime environment as what it takes to schedule the execution of applications, so job scheduling, runtime stuff, and here we're talking Kubernetes, Slurm, other things that do job scheduling. We're talking about having a common way to, you know, instantiate compute resources. So a global compute environment, having a common compute environment where you can instantiate things that need computing. Okay? So that's the first part. And then the second is the data platform, where you can have file, block and object volumes, and have them available with the same APIs in each of these distributed data centers, and have the exact same data omnipresent with the ability to control where the data is from one moment to the next, local, where all the data is instantiated. So my definition would be a common runtime environment that's bifurcate-- >> Oh. (attendees chuckling) We just lost them at the money slide. >> That's part of the magic makes people listen. We keep someone on pins and needles waiting. (attendees chuckling) >> That's good. >> Are you back, David? >> I'm on the edge of my seat. Common runtime environment. It was like...
>> And just wait, there's more. >> But see, I'm maybe hyper-focused on the lower level of what it takes to host and run applications. And that's the stuff to schedule what resources they need to run and to get them going and to get them connected through to their persistence, you know, and their data. And to have that data available in all forms and have it be the same data everywhere. On top of that, you could then instantiate applications of different types, including relational databases, and data warehouses and such. And then you could say, now I've got, you know, now I've got these more application-level or structured data-level things. I tend to focus less on that structured data level and the application level and am more focused on what it takes to host any of them generically on that super PaaS layer. And I'll admit, I'm maybe hyper-focused on the PaaS layer and I think it's valid to include, you know, higher levels up the stack like the structured data level. But as soon as you go all the way up to like, you know, a very specific SaaS service, I don't know that you would call that supercloud. >> Well, and that's the question, is there value? And Marianna Tessel from Intuit said, you know, we looked at it, we did it, and it just, it was actually negative value for us because connecting to all these separate clouds was a real pain in the neck. Didn't bring us any additional-- >> Well, that's 'cause they don't have this PaaS layer underneath it so they can't even shop around, which actually makes it hard to stand up your own SaaS service. And ultimately they end up having to build their own infrastructure. Like, you know, I think there's been examples like Netflix moving away from the cloud to their own infrastructure. Basically, if you're going to rent it for more than a few months, it makes sense to build it yourself, if it's at any kind of scale. >> Yeah, for certain components of that cloud.
But if the Goldman Sachs came to you, David, and said, "Hey, we want to collaborate and we want to build "out a cloud and essentially build our SaaS system "and we want to do that with Hammerspace, "and we want to tap the physical infrastructure "of not only our data centers but all the clouds," then that essentially would be a SaaS, would it not? And wouldn't that be a super SaaS or a supercloud? >> Well, you know, what they may be using to build their service is a supercloud, but their service at the end of the day is just a SaaS service with global reach. Right? >> Yeah. >> You know, look at, oh shoot. What's the name of the company that does? It has a cloud for doing bookkeeping and accounting. I forget their name, net something. NetSuite. >> NetSuite. NetSuite, yeah, Oracle. >> Yeah. >> Yep. >> Oracle acquired them, right? Is NetSuite a supercloud or is it just a SaaS service? You know? I think under the covers you might ask, are they using supercloud under the covers so that they can run their SaaS service anywhere and be able to shop the venue, get elasticity, get all the benefits of cloud, to the benefit of their service that they're offering? But you know, folks who consume the service, they don't care, because to them they're just connecting to some endpoint somewhere and they don't have to care. So the further up the stack you go, the more location-agnostic it is inherently anyway. >> And I think PaaS is really the critical layer. We thought about IaaS-plus and we thought about SaaS-minus, you know, Heroku and hence, that's why we kind of got caught up and included it. But SaaS, I admit, is the hardest one to crack. And so maybe we exclude that as a deployment model. >> That's right, and maybe coming down a level to saying, but you can have a structured data supercloud, so you could still include, say, Snowflake. Because what Snowflake is doing is more general purpose. So it's about how general purpose it is.
Is it hosting lots of other applications or is it the end application? Right? >> Yeah. >> So I would argue general purpose nature forces you to go further towards platform, down-stack. And you really need that general purpose, or else there is no real distinguishing. So if you want defensible turf to say supercloud is something different, I think it's important to not try to wrap your arms around SaaS in the general sense. >> Yeah, and we've kind of not really gone, leaned hard into SaaS, we've just included it as a deployment model, which, given the constraints that you just described, for structured data would apply if it's general purpose. So David, super helpful. >> Had it sign. Define the SaaS as including the hybrid model, hold SaaS. >> Yep. >> Okay, so with your permission, I'm going to add you to the list of contributors to the definition. I'm going to add-- >> Absolutely. >> I'm going to add this in. I'll share with Molly. >> Absolutely. >> We'll get on the calendar for the date. >> If Molly can share some specific language that we've been putting in that kind of goes to stuff we've been talking about, so. >> Oh, great. >> I think we can, we can share some written kind of concrete recommendations around this stuff, around the general purpose nature, the common data thing and yeah. >> Okay. >> Really look forward to it and would be glad to be part of this thing. You said it's in February? >> It's in January, I'll let Molly know. >> Oh, January. >> What the date is. >> Excellent. >> Yeah, third week of January. Third week of January on a Tuesday, whatever that is. So yeah, we would welcome you in. But like I said, if it doesn't work for your schedule, we can prerecord something. But it would be awesome to have you in studio. >> I'm sure with this much notice we'll be able to get something. Let's make sure we have the dates communicated to Molly and she'll get my admin to set it up outside so that we have it. >> I'll get those today to you, Molly. Thank you.
>> By the way, I am so, so pleased with being able to work with you guys on this. I think the industry needs it very bad. They need something to break them out of the box of their own mental constraints of what the cloud is versus what it's supposed to be. And obviously, the more we get people to question their reality and what is real, what are we really capable of today, then the more business that we're going to get. So we're excited to lend a hand behind this notion of supercloud and a super PaaS layer in whatever way we can. >> Awesome. >> Can I ask you whether your platforms include ARM as well as x86? >> So we have not done an ARM port yet. It has been entertained and won't be much of a stretch. >> Yeah, it's just a matter of time. >> Actually, entertained doing it on behalf of NVIDIA, but it will absolutely happen because ARM in the data center I think is a foregone conclusion. Well, it's already there in some cases, but not quite at volume. So definitely will be the case. And I'll tell you where this gets really interesting, discussion for another time, is back to my old friend, the SSD, and having SSDs that have enough brains on them to be part of that fabric. Directly. >> Interesting. Interesting. >> Very interesting. >> Directly attached to ethernet and able to create a data mesh global file system, that's going to be really fascinating. Got to run now. >> All right, hey, thanks, you guys. Thanks David, thanks Molly. Great to catch up. Bye-bye. >> Bye >> Talk to you soon.

Published Date : Oct 5 2022



Thomas Stocker, UiPath & Neeraj Mathur, VMware | UiPath FORWARD5


 

>> Narrator: TheCUBE presents UiPath Forward Five, brought to you by UiPath. >> Welcome back to UiPath Forward Five. You're watching theCUBE's wall-to-wall coverage. This is day one, Dave Vellante, with my co-host Dave Nicholson. We're taking RPA to intelligent automation. We're going from point tools to platforms. Neeraj Mathur is here. He's the director of Intelligent Automation at VMware. Yes, VMware. We're not going to talk about vSphere or Aria, or maybe we are, (Neeraj chuckles) but he's joined by Thomas Stocker, who's a principal product manager at UiPath. And we're going to talk about testing automation, automating the testing process. It's a new sort of big vector in the whole RPA automation space. Gentlemen, welcome to theCUBE. Good to see you. >> Neeraj: Thank you very much. >> Thomas: Thank you. >> So Neeraj, as we were saying, Dave and I, you know, really, like, VMware was half our lives for a long time, but we're going to flip it a little bit. >> Neeraj: Absolutely. >> And talk about sort of some of the inside baseball. Talk about your role and how you're applying automation at VMware. >> Absolutely. So, so as part of us really running the intelligent automation program at VMware, we have a quite matured COE for the last, you know, four to five years, we've been doing this automation across the enterprise. So what we have really done is, you know, over 45 different business functions where we really automated quite a lot of different processes and tasks on that. So as part of my role, I'm really responsible for making sure that we are, you know, bringing in the best practices, making sure that we are ready to scale across the enterprise but at the same time, how, you know, quickly we are able to deliver the value of this automation to our businesses as well. >> Thomas, as a product manager, you know the product and the market inside and out, you know the competition, you know the pricing, you know how customers are using it, you know all the features.
What's your area of, main area of focus? >> The main area is the UiPath Test Suite... >> For your role, I mean? >> For my role is the RPA testing. So meaning testing RPA workflows themselves. And the reason is RPA has matured over the last few years. We see that, and it has adopted a lot of best practices from the software development area. So what we see is RPA now becomes business critical. It's part of the main core business processes in corporations, and testing it just makes sense. You have to continuously monitor and continuously test your automation to make sure it does not break in production. >> Okay. And you have a specific product for this? Is it a feature or is it a module? >> So RPA testing, or the UiPath Test Suite, as the name suggests, it's a suite of products. It's actually part of the existing platform. So we use Orchestrator, which is the distribution engine. We use Studio, which is our IDE to create automation. And on top of that, we build a new component, which is called the UiPath Test Manager. And this is a kind of analytics and management platform where you have an oversight on what happened, what went wrong, and what is the reason for automation to break. >> Okay. And so Neeraj, you're testing your robot code? >> Neeraj: Correct. >> Right. And you're looking for what? Governance, security, quality, efficiency, what are the things you're looking for? >> It's actually all of those, but our main goal to really start this was twofold, right? So we were really looking at how do we, you know, deliver at a speed with the quality which we can really maintain and sustain for a longer period, right? So to improve our quality of delivery at a speed of delivery, which we can do it. So the way we look at testing automation is not just as an independent entity. We look at this as a pipeline of a continuous improvement for us, right? So, what the industry calls a CI/CD pipeline. So testing automation is one of the key components of that.
But the way we were able to deliver on the speed is to really have that end-to-end automation done for us, from developers to production, using that pipeline, and our testing is one piece of that. And the way we were able to also improve the quality of our delivery is to really have an automated way of doing the code reviews and an automated way of doing the testing using this platform as well, and then, you know, how you go through end to end for that purpose. >> Thomas, when I hear testing robots, (Thomas chuckles) I don't care if it's code or actual robots, it's terrifying. >> It's terrifying, yeah. >> It's terrifying. Okay, great. You have some test suite that says, look, yeah, we've looked at- >> Why is that terrifying? >> It's terrifying because you have to let it interact with actual live systems in some way. Yeah. The only way to know if it's going to break something is either you let it loose, or you have some sort of sandbox where, I mean, what do you do? Are you taking clones of environments and running actual tests against them? >> It's like testing disaster recovery in the old days. Imagine. >> So we are actually not running any testing in the production live environment, right? The way we built this is actually to do the testing in a separate test environment, by using very specific test data from the business, which, you know, we call a golden copy of that test data, because we want to use that data for months and years to come. Okay. Right? Yeah. So, not touching any production environment at all. >> Yeah. All right. 'Cause you can imagine- >> Absolutely. >> It's like, oh yeah, we've created a robot that changes baby diapers, let's go ahead and test it on these babies. [Collective Laughter] Yeah. >> I don't think so. No, no. >> But does it matter if there's a delta between the test data and the production data? How big is that delta? How do you manage that?
>> It does matter. And that's where, actually, that whole, you know, angle of how much you can test against real life comes in, right? So there are cases, even in our case, where, you know, the production data might be slightly different than the test data itself. So the whole effort goes into making sure that the test data which we are preparing here is as close to the production data itself, right? It may not be a hundred percent close, but that's the sort of, you know, boundary or risk you may have to take. >> Okay. So you're snapshotting that, moving it over, a little vMotion? >> Neeraj: Yeah. >> Okay. So do you do this for citizen developers as well? Or are you guys pretty much a center of excellence writing all the bots? >> No, right now we are doing this only for the unattended, COE-driven bots, only at this point of time. >> What are your thoughts on the future? Because I can see some really sloppy citizen coders. >> Yeah. So as part of our governance, which we are trying to build for our citizen developers as well, there is a really similar consideration for that as well. But for us, we have really not gone that far to build that sort of automation right now. >> Narrowly, just if we talk about testing, what's the business impact been? I'm interested in the overall platform, but specifically for the testing: when did you start implementing that, and what has been the business benefit? >> So the benefit is really on the speed of the delivery, which means that we are able to actually deliver more projects and more automations as well. So since we adopted this, we have seen, you know, improvement: our speed is around 15% better, right? So, you know, 15% better speed than previously.
What we have also seen is that the success rate of our transactions in the production environment has gone to 96%, which again has a direct implication for the business, in that, you know, no more manual exception handling or manual interaction is required for those failure scenarios. >> So, 15% better speed at what? At implementing the bots? At actually writing code? Or... >> End to end, yes. So from building the code, to testing that code, being able to approve it, and then deploying it into the production environment after testing it, this has really improved by 15%. >> Okay. And what business processes outside of sort of testing have you attacked with the platform? Can you talk to that? >> The business processes outside of testing? >> Dave: Yeah. >> You mean the ones which we are not testing ourselves? >> Yeah, no. So just the UiPath platform, is it exclusively for testing? >> This testing is exclusively for the UiPath bots which we have built, right? So we have some 400-plus automations of UiPath bots. So it's meant exclusively- >> But are you using UiPath in any other ways? >> No, not at this time. >> Okay, okay. Interesting. So you started with testing? >> No, we started by building the bots. So we already had roughly 400 bots in production. When we came to the testing automation, that's when we started looking at it. >> Dave: Okay. And then now building that whole testing-- >> Dave: What are those other bots doing? Let me ask it that way. >> Oh, there's quite a lot. I mean, we have many bots. >> Dave: Paint a picture if you want. >> Yeah. In finance, in order management, HR, legal, IT, there's a lot of automations which are there. As I'm saying, there's more than 400 automations out there. Yeah. So it's across the, you know, enterprise. >> Thomas, so, you know, both of you have a view on this, but Thomas's view is probably wider, across other instances.
What are the most common things that are revealed in tests that indicate something needs to be fixed? Think of a test failure, an error. What are the most common things that happen? >> So when we started building our product, we conducted a survey among our customers. And, without surprise, the main reason why automation breaks is change. >> David: Sure. >> And the problem here is RPA is a controlled process, a controlled workflow, but it runs in an uncontrollable environment. So typically RPA is developed by a COE. Those are business and automation experts, but they operate in an environment that's driven by new patches, new application changes rolled out by IT. And that's the main challenge here. You cannot control that. And so far, if you do not proactively test, what happens is you catch an issue in production when it already breaks, right? That's reactive; that leads to maintenance, to unplanned maintenance actually. And that was the goal right from the start for the Test Suite: to support our customers here and move over to proactive maintenance, meaning testing beforehand, and finding those issues before they hit production. >> Yeah, yeah. So I'm still not clear on... so you just gave a perfect example: changes in the environment. >> Yeah. >> So those changes are happening in the production environment. >> Thomas: Yeah. >> The robot that was happily doing its automation stuff before? >> Thomas: Yeah. >> Everyone was happy with it. Change happens. Robot breaks. >> Thomas: Yeah. >> Okay. You're saying you test before changes are implemented? To see if those changes will break the robot? >> Thomas: Yeah. >> Okay. How do you expose those changes that are going to be in a production environment to the robot? You must have a... is that part of the test environment? Does that mean that you have to have, what, fully running instances of, like, an ERP system? >> Thomas: Yeah.
You know, a clone of an environment. How do you test that without having the live robot against the production environment? >> I think there's no big difference to standard software testing. Okay. The interesting thing is, the change actually happens earlier. You are affected on the production side by it, but the change happens on the IT side or on the DevOps side. So you typically will test in a test environment that's similar to your production environment, or probably in a pre-production environment. And the test itself is simply running the workflow that you want to test, but mocking away any dependencies you don't want to invoke. You don't want to send a letter to a customer in a test environment, right? And then you verify that the result is what you actually expect, right? And as soon as this is not the case, you will be notified, you will have a result, the fail result, and you can act before it breaks. So you can fix it, redeploy to production, and you should be good. >> But the main emphasis at VMware is testing your bots, correct? >> Neeraj: Testing your bots, yes. >> Can I apply this to testing other software code? >> Yeah, yeah. You can, technically actually, and Thomas can speak better than me on that, to any software for that matter, but we have really not explored that aspect of it. >> David: You guys have pretty good coders, good engineers at VMware. But no, seriously, Thomas, what's that market looking like? Is that taking off? Are you applying this capability, or are customers applying it, for just more broadly testing software? >> Absolutely. So our goal was: we want to test RPA and the applications it relies on, so that includes RPA testing as well as application testing. The main difference is typical functional application testing is black box testing. So you don't know the inner implementation of that application. And it works out pretty well.
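The recipe Thomas describes, run the workflow in a test environment with side-effecting dependencies mocked away (like sending a letter), then verify the result, is the same pattern standard software testing uses. Here is a minimal Python sketch of that pattern under those assumptions; the invoice workflow and mailer are hypothetical stand-ins for illustration, not UiPath APIs.

```python
from unittest import mock

def process_invoice(order, mailer):
    # The "workflow" under test: compute a result, then trigger a side
    # effect that we must not invoke for real in a test environment.
    total = order["qty"] * order["unit_price"]
    mailer.send_letter(order["customer"], f"Invoice total: {total}")
    return total

# Mock away the dependency (no letter actually goes out), run the
# workflow against golden-copy test data, then verify the result.
mailer = mock.Mock()
result = process_invoice({"customer": "ACME", "qty": 3, "unit_price": 10}, mailer)

assert result == 30                                                # expected result
mailer.send_letter.assert_called_once_with("ACME", "Invoice total: 30")  # side effect invoked, not sent
print("workflow test passed")
```

If the verification fails, you are notified before the change ever reaches production, which is the proactive maintenance Thomas contrasts with reacting to breakage in production.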
The big opportunity that we have is not isolated testing, not isolated RPA, but what we talk about is convergence of automation. So what we offer our customers is one automation platform. You create automation once, not redundantly in different departments; you create it once, probably for testing, and then you reuse it for RPA. So that suddenly helps your test engineers move from a pure cost center to a value center. >> How unique is this capability in the industry, relative to your competition, and what capabilities do you have that are differentiators from the folks that we all know you're competing with? >> So the big advantage is the power of the entire platform that we have with UiPath. We didn't start from scratch. We have that great automation layer. We have that great distribution layer. We have all the AI capabilities that so far were used for RPA. We can reuse them, repurpose them for testing. And that really differentiates us from the competition. >> Thomas, I detect a hint of an accent. Is it German, or- >> It's actually Austrian. >> Austrian. Well- >> You know, don't compare us with Germans. >> I understand. High German. Is that the proper... is that what's spoken in Austria? >> Yes, it is. >> So- >> Point being? >> Point being, exactly, as I drift off, point being: generally, German is considered to be a very precise language with very specific words. It's very easy to be confused about the difference between two things: automation testing and automating testing. >> Thomas: Yes. >> Because in this case, what you are testing are automations. >> Thomas: Yes. >> That's what you're talking about. >> Thomas: Yes. >> You're not talking about the automation of testing. Correct? >> Well, we talk about- >> And that's got to be confusing when you go to translate that into- >> Dave: But isn't it both? >> 50 other languages? >> Dave: It's both. >> Is it both?
>> Thomas: It actually is both. >> Okay. >> And there's something we are exploring right now, which is even the next step, the next layer, which is autonomous testing. So far you had an expert, an automation expert, creating the automation once, and it would be rerun over and over again. What we are now exploring, together with a university, is autonomous testing, meaning a bot explores your application under test and finds issues completely autonomously. >> Dave: So autonomous testing of automation? >> It's getting more and more complicated. >> It's getting clearer by the minute. >> Sorry for that. >> All right, Neeraj, last question is: Where do you want to take this? What's your vision for VMware in the context of automation? >> Sure. So I think the first and foremost thing for us is to really make it more mainstream for our automation delivery at scale, right? What I mean by that is, there is a shift now in how we engage with our business users and SMEs. As I said previously, they used to actually test manually. Now the conversation changes to: hey, can you tell us what test cases you want, what you want us to test in an automated manner? Can you give us the test data for that, so that we can keep on testing in a continuous manner for the months and years to come? Right? The other part of the conversation that changes is: hey, it used to take eight weeks for us to build, but now it's going to take nine weeks, because we're going to spend an extra week just to automate that testing as well. But it's going to help you in the long run, and that's the conversation. So, really make it much more mainstream. And then, out of all these kinds of automations and bots which we are building, we are not looking to have test automation for every single bot. So we need to have a way to choose where the value is. Is it the quarter-end processing one?
Is it the most business-critical one, or is it the one where we are expecting frequent changes, right? That's where the value of the testing is. So, really bring that in as a part of our whole process, and then, you know- >> That's great. Guys, thanks so much. This has been a really interesting conversation. I've been waiting to talk to a real-life customer about testing and automating testing. Appreciate your time. >> Thank you very much. >> Thanks for everything. >> All right. Thank you for watching, keep it right there. Dave Nicholson and I will be back right after this short break. This is day one of theCUBE's coverage of UiPath Forward Five. Be right back after this short break.

Published Date : Sep 29 2022



Sumit Dhawan, VMware | VMware Explore 2022


 

(upbeat music) >> Welcome back, everyone, to theCUBE's coverage of VMware Explore '22, formerly VMworld. This is our 12th year covering it. I'm John Furrier with Dave Vellante. Two sets, three days of wall-to-wall coverage. We're starting to get the execs rolling in from VMware. Sumit Dhawan, president of VMware, is here. Great to see you. Great keynote, day one. >> Great to be here, John. Great to see you, Dave. Day one, super exciting. We're pumped. >> And you had no problem with the keynotes. We're back in person. Smooth as silk up there. >> We were talking about it. We had to, like, dust off a cobweb to make some of these inputs. >> It's not like riding a bike. >> No, it's not. We had about 40% of our agencies that we had to change out, because they're no longer in business. So I have to give kudos to the team who pulled it together. They did a fabulous job. >> You gave a great presentation. I know you had a lot to cram in there. Raghu set the table. I know this was a big moment for him, to lay out the narrative, address the Broadcom thing right out of the gate, wave from Hock Tan in the audience, and then get into the top big news. Still a lot of meat on the bone. You get up there, you've got to talk about the use cases, vSphere 8, big release, a lot of stuff. Take us through the keynote. What were the important highlights for you to share, for the folks watching that didn't see the keynote or who want to get your perspective? >> Well, first of all, did any of you notice that Raghu was running onto the stage? He did not do that in rehearsal. (John chuckles) I was a little bit worried, but he really did it. >> I said, I betcha that was real. (everyone chuckles) >> Anyway, jokes aside, he did fabulously. He lays out the strategy.
My thinking, as you said, was first of all to speak to the customers and explain how every enterprise is facing this concept of cloud chaos that Raghu laid out, and the CVS Health story sort of exemplifies the situation that every customer is facing. They go in, they start with cloud first, which is needed; I think that's absolutely the right approach. Very quickly they build out a model of a cloud ops team and a platform engineering team, which is oftentimes a parallel work stream to a private cloud infrastructure. Great start. But as Roshan, the CIO at CVS Health, laid out, there's an inflection point. And that's when you have to converge these, because for the use cases, the stakeholders, that is, the lines of business, app developers, finance teams, and security teams, don't need this stovepiped information coming at them. And the converged model is how he opted to organize his team. So we called it a multi-cloud team, just like a workspace team. And listen, our commitment and innovations are to solve the problems of those teams so that the stakeholders get what they need. That's the rest of the keynote. >> Yeah, first of all, great point. I want to call out that inflection point comment, because coming into VMworld we've been reporting on Supercloud and other things, across open source and down into the weeds and under the hood. The chaos is real. So, good call. I love how you guys brought that up there. At all industry inflection points, if you go back in the history of the tech industry, at every single major inflection point there was chaos, complexity, or an enemy: proprietary. However you want to look at it, there was a situation where you needed to kind of rein in the chaos, as Andy Grove would say. So we're at that inflection point; I think that's consistent. And also the ecosystem floor yesterday, the expo floor here in San Francisco with your partners, it was vibrant. They're all on this wave. There is a wave and an inflection point. So, okay.
I buy that. So, if you buy the inflection point, what has to happen next? Because this is where we're at. People are feeling it. Some say, "I don't have a problem," but the chaos is the problem. So, where do you see that? How is VMware organizing, in the industry and for customers specifically, to solve the chaos, to rein it in and cross over? >> Yeah, you're 100% right. Every inflection point is associated with some kind of chaos that has to be reined in. So we are focused on two major things right now, which we have made progress in. And maybe a third where we are still a work in progress. Number one is technology. Today's technology announcements directly address how that streamlining of chaos can be done through the cloud-smart approach that we laid out: our Aria, a brand new solution for management; significant enhancements to Tanzu; all of these for public cloud-based workloads that also extend to private cloud. And then our cloud infrastructure, with newer capabilities with AWS and Azure, as well as new innovations in vSphere 8 and vSAN 8. And then, last but not least, our continuous automation to enable anywhere workspace. All these are innovations that we have to deliver, because without those innovations, the problem is that the chaos oftentimes is created by a lack of technology, and as a result structure has to be put in place because the tooling and technology is not there. So, the number one goal we see is providing that. Second is we have to be independent, provide support for every possible cloud, but not without being a partner of theirs. That's not an easy thing to do, but we have the DNA as a company: we have done that with data centers in the past, even while being part of Dell we did that in the data center, and we have done that in mobility. And so we have taken on the challenge of doing that with the cloud.
So we are continually building newer innovations and stronger and stronger partnerships with the cloud providers, which is the basis of our commercial relationship with Microsoft Azure too, where we have brought Azure VMware Solution into VMware Cloud Universal. Again, that strengthens the value of us being neutral, because it's very important to have a Switzerland party that can provide these multi-cloud solutions, one that doesn't have an agenda for a specific cloud, yet has an ecosystem, or at least influence with the ecosystem, that it can bring going forward. >> Okay, so technology, I get that. Open, not going to be too competitive, but more open. So the question I've got to ask you is: what is the disruptive enabler to make that happen? 'Cause you've got customers, partners, and the team at VMware. What's the disruptive enabler that's going to get you to that level? >> Over the hump. I mean, listen, our value is this community. All this community has one of two paths to go. Either they become stovepiped into just the public-private cloud infrastructure, or they step up, as this convergence happens around them, to say, "You know what? I have the solution to tame this multi-cloud complexity, to rein in the chaos," as you mentioned, because the tooling and technologies are available. And I know they work with the ecosystem. And our objective is to bring this community to that point. And to me, that is the best path to overcome it. >> You are the connective tissue. I was able to sit in on the analyst meeting today. You were sort of the proxy for CVS Health, where you talked about the private cloud, that's where you started, and the public cloud ops team, bringing those together. The platform is the glue. That is the connective tissue. That's where Tanzu comes in. That's where Aria comes in. And that is the disruptive technology, which is hard to build.
>> From a technology perspective, it's an enabler of something that has never been done before at that level of comprehensiveness, from more of an infrastructure-side thinking perspective. Yes, infrastructure teams have enabled self-service portals. Yes, infrastructure teams have given APIs to developers. But what we are enabling through Tanzu is a completely next level, where you have a much richer experience for developers, so that they never, ever have to think about the infrastructure at all. Because even when you enable infrastructure as API, that's still an API of the infrastructure. We go straight to the application tier, where they're just thinking about an authorized set of microservices. Containers can be orchestrated and built automatically, shifting security left, where we're truly checking them, or enabling them to check for security vulnerabilities as they're developing the application, not once it's going into production, when they have to touch the infrastructure. To me, that's an enabler of a special power that this new multi-cloud team can have across clouds, which they haven't had in the past. >> Yeah, it's funny, John, I'd say very challenging technically. The challenge in 2010 was the software mainframe; remember, the marketing people killed that term. >> Yeah, exactly. >> But you think about that. We're going to make virtualization and the overhead associated with it irrelevant. We're going to be able to run any workload, and VMware achieved that. Now you're saying we run anything anywhere: any Kubernetes, any container. >> That's the reality. That's the chaos. >> And the cloud. And that's a new, real problem. A real challenging problem that requires serious engineering. >> Well, I mean, it's aspirational, right? Let's get to the reality, right? So a true spanning cloud is not yet there. You guys, I think your vision is definitely right on, in the sense that the chaos is real and multi-cloud's a reality.
The question is, AWS, Azure, Google Cloud, and the other clouds are not going to sit still. No one's going to let VMware just come up and take everything. You've got to enable, so the market- >> True, true. I don't think this is a case of us versus them, because there is so much that they have to express in terms of the value of every cloud. And this has happened, by the way, whether you go into infrastructure or even workspace solutions: as long as the richest experience and the richest controls are provided for their cloud to the developers, that makes the adoption of their cloud simpler. It's a win-win for every party. >> That's the key. I think the simplest wins. So, I want to ask you, this comes up a lot, and I love that you brought it up: simple and self-service has proven out. Developers are driving the change, cloud DevOps developers. They're driving the change. They're in charge more than ever. They want self-service, easier to deploy. I want to test; if I don't like it, I want to throw it away. But if I like something, I want to stick with it. So it's got to be self-service. Now, that's antithetical to the old enterprise model of solving complexity with more complexity. >> Yeah, yeah. >> So the question for you is, as the president of VMware, do you feel good that you guys are looking out over the landscape, riding into the valley of the future, with the demand being automation, completely invisible, an abstraction layer, new use case scenarios for IT and whatever IT becomes? Take us through your mindset there, because I think that's what I'm hearing here this year at VMware Explore: you guys have recognized the shift in demographics on the developer side, but ops isn't going away either. They're connecting. >> They're connected. Yeah. So our vision is, if you think about the role of developers, they have a huge influence.
And most importantly they're the ones who are driving innovation, just the amount of application development, the number of developers that have emerged, yet remains the scarcest resource for the enterprise are critical. So developers often time have taken control over decision on infrastructure and ops. Why? Because infrastructure and ops haven't shown up. Not because they like it. In fact, they hate it. (John chuckles) Developers like being developers. They like writing code. They don't really want to get into the day to day operations. In fact, here's what we see with almost all our customers. They start taking control of the ops until they go into production. And at that point in time, they start requesting one by one functions of ops, move to ops because they don't like it. So with our approach and this sort of, as we are driving into the beautiful valley of multi-cloud like you laid out, in our approach with the cross cloud services, what we are saying is that why don't we enable this new team which is a reformatted version of the traditional ops, it has the platform engineering in it, the key skill that enables the developer in it, through a platform that becomes an interface to the developers. It creates that secure workflows that developers need. So that developers think and do what they really love. And the infrastructure is seamless and invisible. It's bound to happen, John. Think about it this way. >> Infrastructure is code. >> Infrastructure has code, and even next year, it's invisible because they're just dealing with the services that they need. >> So it's self-service infrastructure. And then you've got to have that capability to simplified, I'll even say automated or computational governance and security. So Chris Wolf is coming on Thursday. >> Yeah. >> Unfortunately I won't be here. And he's going to talk about all the future projects. 'Cause you're not done yet. The project narrows, it's kind of one of these boring, but important. 
>> Yeah, there's a lot of stuff in the oven coming out. >> There's really critical projects coming down the pipeline that support this multi-cloud vision, is it's early days. >> Well, this is the thing that we were talking about. I want to get your thoughts on. And we were commenting on the keynote review, Hock Tan bought VMware. He's a lot more there than he thought. I mean, I got to imagine him sitting in the front row going there's some stuff coming out of the oven. I didn't even, might not have known. >> He'd be like, "Hmm, this extra value." (everyone chuckles) >> He's got to be pretty stoked, don't you think? >> He is, he is. >> There's a lot of headroom on the margin. >> I mean, independent to that, I think the strategy that he sees is something that's compelling to customers which is what, in my assessment, speaking with him, he bought VMware because it's strategic to customers and the strategic value of VMware becomes even higher as we take our multi-cloud portfolio. So it's all great. >> Well, plus the ecosystem is now re-energize. It's always been energized, but energized cuz it's sort of had to be, cuz it's such a strong- >> And there was the Dell history there too. >> But, yeah it was always EMC, and then Dell, and now it's like, wow, the ecosystem's- >> Really it's released almost. I like this new team, we've been calling this new ops kind of vibe going refactored ops, as you said, that's where the action's happening because the developers want to go faster. >> They want to go faster. >> They want to go fast cuz the velocity's paying off of them. They don't want to have to wait. They don't want security reviews. They want policy. They want some guardrails. Show me the track. >> That's it. >> And let me drive this car. >> That's it because I mean think about it, if you were a developer, listen, I've been a developer. I never really wanted to see how to operate the code in production because it took time away for developing. 
I like developing, and I like to spend my time building the applications, and that's the goal of Aria and Tanzu. >> And then I got to mention, the props of seeing Project Monterey actually come out to fruition is huge, because that's the future of computing architecture. >> I mean at this stage, if a customer from here on is modernizing their infrastructure and they're not investing in a holistic new infrastructure from a hardware and software perspective, they're missing out on an opportunity to leverage the numbers that we were showing, 20% increase in calls. Why would you not just make that investment on both the hardware and the software layer now to get the benefits for the next five-six years? >> You would, and if I don't have to make any changes and I get 20% automatically. And the other thing, I don't know if people really appreciate the new curve that the silicon industry is on. It blows away the history of Moore's law, which was whatever, 35-40% a year; we're talking about 100% a year price performance or performance improvements. >> I think when you have an inflection point as we said earlier, there's going to be some things that you know are going to happen, but I think there's going to be a lot that's going to surprise people. New brands will emerge, new startups, new talent, new functionality, new use cases. So, we're going to watch that carefully. And for the folks watching that know that theCUBE's been 12 years covering VMware's VMworld, now VMware Explore, we've kind of met everybody over the years, but I want to point out a little nuance, a Raghu thing in the keynote. During the end, before the collective responsibility sustainability commitment he had, he made a comment, "As proud as we are," which is a word he used; there's a lot of pride here at VMware. Raghu kind of weaved that in there, I noticed that, I want to call that out there because Raghu's proud. He's a proud product guy. He said, "I'm a product guy." He's delivering the keynote. 
>> Almost 20 years. >> As proud as we are, there's a lot of pride at VMware, Sumit, talk about that dynamic, because you mentioned customers, your customers, there's not a lot of churn. They've been there for a long time. They're embedded in every single company out there, pretty much VMware is in every enterprise, if not all, I mean 99%, whatever percentage it is, it's huge penetration. >> We are proud of three things. It comes down to, number one, we are proud of our innovations. You can see it, you can see the tone from Raghu or myself, or other executives, changes with excitement when we're talking about our technologies, we're just proud. We're just proud of it. We are a technology and product centric company. The second thing that sort of gets us excited and proud is exactly what you mentioned, which is the customers. The customers like us. It's a pleasure when I bring Roshan on stage and he talks about the relationship he's expecting and how he's viewing VMware in this new world of multi-cloud, that makes us proud. And then third, we're proud of our talent. I mean, I was jokingly talking to just the events team alone. Of course our engineers do an amazing job, our sellers do an amazing job, our support teams do an amazing job, but we brought this team and we said, "We are going to get you to run an event after three years of not doing one, we're going to change the name on you, we're going to change the attendees you're going to invite, we're going to change the fact that it's going to be new speakers who have never been on the stage and done that kind of presentation." 
The big thing I want to highlight as we end this day, the segment, and I'll get your thoughts and reactions, Sumit, is again, you guys were early on hybrid. We have theCUBE tape to go back into the video data lake and find the word hybrid mentioned in 2013, 2014, 2015. Even when nobody was talking about hybrid. >> Yeah, yeah. >> Multicloud, Raghu, I talked to Raghu in 2016 when he did the Pat Gelsinger, I mean Raghu, Pat and Andy Jassy. >> Yeah. >> When that cloud thing got cleared up, he cleared that up. He mentioned multicloud, even then in 2016, so this is not new. >> Yeah. >> You had the vision, there's a lot of stuff in the oven. You guys make announcements directionally, and then start chipping away at it. Now you've got Broadcom buying VMware, what's in the oven? How much goodness is coming out? It's like the fruit is just starting to bear on the tree. There's a lot of good stuff, so contextualize and scale that for us. What's in the oven? >> First of all, I think the vision, you have to be early to be first, and we believe in it. Okay, so that's number one. Now having said that, what's in the oven? You would see us actually do more controls across cloud. We are not done on the networking side. Okay, we announced something as Project Northstar with the networking portfolio; that's not generally available. That's in the oven. We are going to come up with more capability on supporting any Kubernetes on any cloud. We did some previews of supporting, for example, EKS. You're going to see more of those cluster controls across any Kubernetes. We have more work happening with our telco partners for enablement of O-RAN, as well as our edge solutions, along with the ecosystem. So more to come on those fronts. But they're all aligned with enabling customers' multi-cloud through these five cross-cloud services. 
They're all really, some of them where we have put a big sort of version one of the solution out there, such as the Aria continuation, some of them where even the version one's not out, and you're going to see that very soon. >> All right. Sumit, what's next for you as the president? You're proud of your team, we got that. Great oven description of what's coming out for the next meal. What's next for you guys, the team? >> I think for us, two things. First of all, this is our momentum season, as we call it. So for the first time, after three years, we are now back in person. I think we've expanded Explore to five cities. So getting this orchestrated properly, we are expecting nearly 50,000 customers to be engaging in person, and maybe the same number virtually. So a significant touchpoint, cuz we have been missing that. Our customers have departed their strategy formulation and we have departed our strategy formulation. Getting them connected together is our number one priority. And number two, we are focused on getting better and better at making customers successful. There is work needed for us. We learn, then we code it, and then we repeat it. And to me, those are the two key things here in the next six months. >> Sumit, thank you for coming on theCUBE. Thanks for your valuable time, sharing what's going on. Appreciate it. >> Always great to chat. >> Here with the president, the CEO's coming up next in theCUBE. Of course, we're John and Dave. More coverage after this short break, stay with us. (upbeat music)
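As a back-of-the-envelope check on the price-performance point made in the conversation above (roughly 35-40% a year for the historical Moore's-law curve versus the ~100% a year claimed for the new silicon curve), here is the compounding worked out. The rates are the speakers' figures, not measured data:

```javascript
// Compound annual improvement: total factor after `years` at `rate` per year.
function compound(rate, years) {
  return Math.pow(1 + rate, years);
}

// Historical Moore's-law-style curve (~40%/yr) vs. the ~100%/yr
// price-performance curve the conversation describes, over five years.
const mooresLawFactor = compound(0.40, 5); // about 5.4x
const newCurveFactor = compound(1.00, 5);  // exactly 32x

console.log(mooresLawFactor.toFixed(1)); // 5.4
console.log(newCurveFactor.toFixed(0));  // 32
```

Compounded over five years, the old curve yields roughly a 5x improvement while a doubling-per-year curve yields 32x, which is why the speakers treat this as a new curve rather than an incremental one.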

Published Date : Aug 30 2022



Sahir Azam & Guillermo Rauch | MongoDB World 2022


 

>> We're back at the Big Apple, theCUBE's coverage of MongoDB World 2022. Sahir Azam is here, he's the Chief Product Officer of MongoDB, and Guillermo Rauch who's the CEO of Vercel. Hot off the keynotes from this morning guys, good job. >> Thank you. >> Thank you. >> Thank you for joining us here. Thanks for having us. Guillermo when it comes to modern web development, you know the back-end, the cloud guys got it kind of sewn up, >> you know- >> Guillermo: Forget about it. >> But all the action's in the front end, and that's where you are. Explain Vercel. >> Yeah so Vercel is the company that pioneers front-end development as serverless infrastructure. So we built Next.js, which is the most popular React framework in the world. This is what front-end engineers choose to build innovative UIs, beautiful websites. Companies like Dior and GitHub and TikTok and Twitch, which we mentioned in the keynote, are powering their entire dot-coms, or all of the new parts of their dot-coms, with Next.js. And Vercel is the serverless platform where you can deploy frameworks like Next.js, and others like Svelte and Vue, to create really fast experiences on the web. >> So I hear, so serverless, I hear that's the hot trend. You guys made some announcements today. I mean when you look at the, we have spending data with our friends at ETR right down the street. I mean it's just off the charts, whether it's Amazon, Google, Azure Functions, I mean it's just exploding. >> Sahir: Yeah, I think in many ways it's a natural trend. You know, we talk a lot about, whether it be today's keynote or other industry talks you see around our industry, that developers are constantly looking for ways to focus on innovation and the business logic that defines their application, as opposed to managing the plumbing and management of infrastructure. And we've seen this happen over and over again across every layer of the stack. 
And so for us, you know MongoDB, we have a bit of, you know sort of a lens of a broad spectrum of the market. We certainly have you know, large enterprises that are modernizing existing kind of core systems, then we have developers all over the world who are building the next big best thing. And that's what led us to partner with Vercel is just the bleeding edge of developers building in a new way, in a much more efficient way. And we wanted to make sure we provide a data platform that fits naturally in the way they want to work. >> So explain to our audience the trade-offs of serverless, and I want to get into sort of how you've resolved that. And then I want to hear from Guillermo, what that means for developers. >> Sahir: Yeah in our case, we don't view it as an either or, there are certain workloads and definitely certain companies that will gravitate towards a more traditional database infrastructure where they're choosing the configuration of their cluster. They want full control over it. And that provides, you know, certain benefits around cost predictability or isolation or perceived benefits at least of those things. And customers will gravitate towards that. Now on the flip side, if you're building a new application or you want the ability to scale seamlessly and not have to worry about any of the plumbing, serverless is clearly the easier model. So over the long term, we certainly expect to see as a mix of things, more and more serverless workloads being built on our platform and just generally in the industry, which is why we leaned in so heavily on investing in Atlas serverless. But the flexibility to not be forced into a particular model, but to get the same database experience across your application and even switch between them is an important characteristic for us as we build going forward. >> And you stressed the cost efficiency, and not having to worry about, you know, starting cold. 
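Sahir's point about focusing on the business logic rather than the plumbing is easiest to see in code: a serverless function is just a request-to-response mapping, and scaling, routing, and infrastructure belong to the platform. A minimal generic sketch (the handler shape below is illustrative, not Vercel's or MongoDB's exact API):

```javascript
// The entire deployable unit is the business logic: a pure handler
// that turns a request into a response. No servers, no capacity
// planning -- the platform invokes it per request and scales it.
function handler(req) {
  const name = (req.query && req.query.name) || 'world';
  return {
    status: 200,
    body: JSON.stringify({ greeting: `Hello, ${name}` }),
  };
}

// What the platform effectively does on each incoming request:
const res = handler({ query: { name: 'Vercel' } });
console.log(res.status, res.body); // 200 {"greeting":"Hello, Vercel"}
```

Everything outside that function, such as provisioning, concurrency, and global routing, is what the "plumbing" the speakers mention refers to.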
You've architected around that, and what does that mean for a developer? >> Guillermo: For a developer it means that you kind of get the best of both worlds, right? Like you get the best possible performance. Front-end developers are extremely sensitive to this. That's why us pioneering this concept, serverless front-end, has put us in a very privileged position, because we have to deliver that really quick time to first byte, that really quick first paint. So any of the old trade-offs of serverless are not accepted by the market. You have to be extremely fast. You have to be instant to deliver that front-end content. So what we talked about today for example, with the Vercel Edge network, we're removing all of the cost of that first hit. That cold start doesn't really exist. And now we're seeing it all across the board, going into the back-end, where Mongo has also gotten rid of it. >> Dave: How do you guys collaborate? What's the focus of integration, specifically from, you know, an engineering resource standpoint? >> Yeah the main idea is, idea to global app in seconds, right? You have your idea. We give you the framework. We don't give you infrastructure primitives. We give you all the necessary tools to start your application. In practice this means you host it in a Git repo. You import it onto Vercel. You install the Mongo integration. Now your front-end and your data back-end are connected. And then your application just goes global in seconds. 
We give you performance analytics in real time to see how your front-end is performing. We give you statistics about how often you're querying your back-end and so on, and your cache hit ratios. So what I talked about today in the keynote is, it's not just about throwing more compute at the problem, but the ability to use the edge to your advantage to memoize computation and reuse it across different visits. >> When we think of mission critical historically, you know, you think about going to the ATM, right? I mean a financial transaction. But Mongo is positioning for mission critical applications across a variety of industries. Do we need to rethink what mission critical means? >> I think it's all in the eye of the beholder so to speak. If you're a new business starting up, your software and your application is your entire business. So if you have a cold start latency or God forbid something actually goes down, you don't have a business. So it's just as mission critical to that founder of a new business and new technology as it is, you know, an established enterprise that's running sort of a more, you know, day-to-day application that we may all interact with. So we treat all of those scenarios with equal fervor and importance right? And many times, it's a lot of those new experiences that the become the day-to-day experiences for us globally, and are super important. And we power all of those, whether it be an established enterprise all the way to the next big startup. >> I often talk about COVID as the forced march to digital. >> Sahir: Mm-Hmm. >> Which was obviously a little bit rushed, but if you weren't in digital business, you were out of business. And so now you're seeing people step back and say, "All right, let's be more thoughtful about our digital transformation. We've got some time, we've obviously learned some things made some mistakes." It's all about the customer experience though. And that becomes mission critical right? 
What are you seeing Guillermo, in terms of the patterns in digital transformation now that we're sort of exiting the isolation economy? >> One thing that comes to mind is, we're seeing that it's not always predictable how fast you're going to grow in this digital economy. So we have customers in the ecommerce space, they do a drop and they're piggybacking on serverless to give them that ability to instantly scale. And they couldn't even prepare for some of these events. We see that a lot with the Web3 space and NFT drops, where they're building in such a way that they're not sensitive to these massive fluctuations in traffic. They're taking it for granted. We've put in so much work together behind the scenes to support it. But the digital native creator just, "Oh things are scaling from one second to the next like I'm hitting like 20,000 requests per second, no problem Vercel is handling it." But the amount of infrastructural work that's gone behind the scenes in support has been incredible. >> We see that in gaming all the time, you know it's really hard for a gaming company to necessarily predict where in the globe a game's going to be particularly hot. Games get super popular super fast if they're successful, it's really hard to predict. It's another vertical that's got a similar dynamic. >> So gaming, crypto, so you're saying that you're able to assist your customers in architecting so that the website doesn't crash. >> Guillermo: Absolutely. >> But at the same time, if the business dynamic changes, they can dial down. >> Yeah. >> Right and in many ways, slow is the new down, right? And if somebody has a slow experience they're going to leave your site just as much as if it's- >> I'm out of here- >> You were down. So you know, it's really maintaining that really fast performance, that amazing customer experience. Because this is all measured, it's scientific. Like anytime there's friction in the process, you're going to lose customers. 
>> So obviously people are excited about your keynote, but what have they been saying? Any specific comments you can share, or questions that you got that were really interesting or? >> I'm already getting links to the apps that people are deploying. So the whole idea- >> Come on! >> All over the world. Yeah so it's already working I'm excited. >> So they were show they were showing off, "Look what I did" Really? >> Yeah on Twitter. >> That's amazing. >> I think from my standpoint, I got a question earlier, we were with a bunch of financial analysts and investors, and they said they've been talking to a lot of the customers in the halls. And just to see, you know, from the last time we were all in person, the number of our customers that are using multiple capabilities across this idea of a developer data platform, you know, certainly MongoDB's been a popular core database open source for a long time. But the new capabilities around search, analytics, mobile being adopted much more broadly to power these experiences is the most exciting thing from our side. >> So from 2019 to now, you're saying substantial uptick in adoption for these features? >> Yeah. And many of them are new. >> Time series as well, that's pretty new, so yeah. >> Yeah and you know, our philosophy of development at MongoDB is to get capabilities in the hands of customers early. Get that feedback to enrich and drive that product-market fit. And over the last three years especially, we've been transitioning from a single product kind of core, you know, non relational modern database to a data platform, a developer data platform that adds more and more capabilities to power these modern applications. And a lot of those were released during the pandemic. Certainly we talked about them in our virtual conferences and all the zoom meetings we had over the years. 
But to actually go talk to all these customers, this is the largest conference we've ever put on, and to get a sense of, wow, all the amazing things they're doing with them, it's definitely a different feeling when we're all together. >> So that's interesting, when you have such a hot product, product-led growth which is what Mongo has been in, and you add these new features. They're coming from the developers who are saying, "Hey, we need this." >> Yip. >> Okay so you have a pretty high degree of confidence, but how do you know when you have product-market fit? I mean, is it adoption, usage, renewals? What's your metric? >> Yeah I think it's a mix of quantitative measures, you know, around conversion rates, the size of your funnel, the retention rate, NPS which obviously can be measured, but also just qualitative. You know when you're talking to a developer or a technology executive around what their needs are, and then you see how they actually apply it to solve a problem, it's that balance between the qualitative and the quantitative measurement of things. And you can just sort of, frankly you can feel it. You can see it in the numbers sure, but you can kind of feel that excitement, you can see that adoption and what it empowers people to do. And so to me, as a product leader, it's always a blend of those things. If you get too obsessed with purely the metrics, you can always over-optimize something for the wrong reason. So you have to bring in that qualitative feedback to balance yourself out. >> Right. >> Guillermo, what's next? What do you not have that you want from Sahir and Mongo? >> So the natural next step for serverless computing is the Edge. So we have to auto-scale, we have to tolerate failures. We have to be available. We have to be easy, but we have to be global. And right now we've been doing this by using a lot of techniques like caching and replication and things like this. 
But the future's about personalizing even more to each visitor depending on where they are. So if I'm in New York, I want to get the latest offers for New York on demand, just for me, and using AI to continue to personalize that experience. So giving the developer these tools in a way where it feels natural to build an application like this. It doesn't feel like, "Oh, I'm going to do this in year 10 if I make it"; I'm going to do it from the very beginning. >> Dave: Okay interesting. So that says to me that I'm not going to make a round trip to the cloud necessarily for that experience. So I'm going to have some kind, Apple today, at the Worldwide Developer Conference announced the M2, right. I've been looking at the M1 Ultra, and I'm going wow, look at that! And so- >> Sahir: You were talking about that new one backstage. >> I mean it's this amazing pace of silicon development, and they're focusing on the NPU, and you look at what Tesla's doing. I mean it's just incredible. So you're going to have some new hardware architecture that emerges. Most of the AI that's done today is modeling in the cloud. You're going to have real-time inferencing at the Edge. So that's not going to do the round trip. There's going to be a data store there, I think it has to be. You're going to persist some of the data, maybe not all of it. So it's a whole new architecture- >> Sahir: Absolutely. >> That's developing. That sounds very disruptive. >> Sahir: Yeah. >> How do you think about that, and how does Mongo play there? Guillermo first. >> What I spend a lot of time thinking about is obviously the developer experience: giving the programmer a programming model that is natural, intuitive, and produces great results. So if they have to think about data that's local because of regulatory reasons, for example, how can we let the framework guide them to success? I'm just writing an application, I deploy it to the cloud, and then everything else is figured out. 
>> Yeah, or the speed of light is another challenge. (Sahir and Guillermo laugh) >> How can we overcome the speed of light is our next task, for sure. >> Well you're working on that, aren't you? You've got the best engineers on that one. (Sahir and Guillermo laugh) >> We can solve a lot of problems, I'm not sure about that one. >> So Mongo plays in that scenario or? >> Yeah so I think, absolutely you know, we've been focused heavily on becoming the globally distributed cloud data layer. The back-end data layer that allows you to persist data to align with performance and move data where it needs to be globally, or deal with data sovereignty, data nationalism that's starting to rise. But absolutely there is more data being pushed out to the Edge, to your point around processing or inference happening at the Edge. And there's going to be a globally distributed front-end layer as well, where data and processing take part. And so we're focused on, one, making sure the data connectivity and the layer is all connected into one unified architecture. We do that in combination with technologies that we have that deal with mobility or edge distribution and synchronization of data with Realm. And we do it with partnerships. We have edge partnerships with AWS and Verizon. We have partnerships with a lot of CDN players who are building out that Edge platform, and making sure that MongoDB is either connected to it or just driving that synchronization back and forth. >> I call that unified experience supercloud; Robbie Belson from Verizon calls it the cloud continuum. But that consistent experience for developers, whether you're on-prem, whether you're in, you know, Azure, Google, AWS, and ultimately the Edge. That's the big- >> That's where it's going. >> White space right now I'm hearing, Guillermo, right? >> I think it'll define the next generation of how software is built. 
And we're seeing this almost like a collision course between some of the ideas that the Web3 developers are excited about, which is like decentralization almost to the extreme, but the Web2 also needs more decentralization, because we're seeing it with, like, the data needs to be local to me, I need more privacy. I was looking at the latest encryption features in Mongo. I think Web2 needs to incorporate more of the ideas of Web3, and vice versa, to create the best possible consumer experience. Privacy matters more than ever before. Latency for conversion matters more than ever before. And regulations are changing. >> Sahir: Yeah. >> And you talked about Web3 earlier, talked about new protocols, a new distributed, you know, decentralized system emerging, new hardware architectures. I really believe, we really think, that new economics are going to bleed back into the data center, and yeah, every 15 years or so this industry gets disrupted. >> Sahir: Yeah. >> Guillermo: Absolutely. >> You know you ain't seen nothing yet, guys. >> We all talked about hardware becoming commoditized 10, 15 years ago- >> Yeah of course. >> We got the virtualization, and it's like, nope, not at all. It's actually a lot of invention happening. >> The lower the price, the more the consumption. So guys thanks so much. Great conversation. >> Thank you. >> Really appreciate your time. >> Really appreciate it, I enjoyed the conversation. >> All right and thanks for watching. Keep it right there. We'll be back with our next segment right after this short break. Dave Vellante for theCUBE's coverage of MongoDB World 2022. >> Man Offscreen: Clear. (clapping) >> All right wow. Don't get up. >> Sahir: Okay. >> Is that a Moonwatch? >> Sahir: It is a Speedmaster but it's that the-
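Earlier in the conversation Guillermo described using the edge to memoize computation and reuse it across visits, with cache-hit ratios reported in real time. A toy sketch of that idea, deliberately simplified (a real edge network shards this per region and revalidates entries):

```javascript
// A toy edge cache: memoize rendered responses per URL with a TTL,
// and track the hit ratio. Compute once, serve repeat visits from cache.
function createEdgeCache(ttlMs, now = Date.now) {
  const store = new Map();
  let hits = 0;
  let misses = 0;
  return {
    fetch(url, render) {
      const entry = store.get(url);
      if (entry && now() - entry.at < ttlMs) {
        hits += 1;
        return entry.value; // served from cache, no recompute
      }
      misses += 1;
      const value = render(url);            // compute once...
      store.set(url, { value, at: now() }); // ...reuse across visits
      return value;
    },
    hitRatio() {
      const total = hits + misses;
      return total === 0 ? 0 : hits / total;
    },
  };
}

// Three visits to the same page: one render, two cache hits.
const edge = createEdgeCache(60000);
const render = (url) => `<h1>rendered ${url}</h1>`;
edge.fetch('/home', render);
edge.fetch('/home', render);
edge.fetch('/home', render);
console.log(edge.hitRatio()); // 2 hits out of 3 requests
```

The hit ratio is the same statistic the speakers mention platforms exposing in real time; the higher it is, the less origin compute a repeat visit costs.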

Published Date : Jun 8 2022



Sahir Azam & Guillermo Rauch | MongoDB World 2022


 

>> Man Offscreen: Standby. Dave is coming to you in 5, 4, 3, 2.
>> We're back at the Big Apple, theCUBE's coverage of MongoDB World 2022. Sahir Azam is here, he's the Chief Product Officer of MongoDB, and Guillermo Rauch who's the CEO of Vercel. Hot off the keynotes from this morning guys, good job.
>> Thank you.
>> Thank you.
>> Thank you for joining us here.
>> Thanks for having us.
>> Guillermo, when it comes to modern web development, you know the back-end, the cloud guys got to it kind of sewn up, you know-
>> Guillermo: Forget about it.
>> But all the action's in the front end, and that's where you are. Explain Vercel.
>> Yeah, so Vercel is the company that pioneers front-end development as serverless infrastructure. So we built Next.js, which is the most popular React framework in the world. This is what front-end engineers choose to build innovative UIs, beautiful websites. Companies like Dior and GitHub and TikTok and Twitch, which we mentioned in the keynote, are powering their entire dot-coms or all of the new parts of their dot-coms with Next.js. And Vercel is the serverless platform where you can deploy frameworks like Next.js and others like Svelte and Vue to create really fast experiences on the web.
>> So serverless, I hear that's the hot trend. You guys made some announcements today. When you look at the spending data we have with our friends at ETR right down the street, it's just off the charts, whether it's Amazon, Google, Azure Functions, it's just exploding.
>> Sahir: Yeah, I think in many ways it's a natural trend. You know, we talk a lot about, whether it be today's keynote or other industry talks you see around our industry, that developers are constantly looking for ways to focus on innovation and the business logic that defines their application, as opposed to managing the plumbing and management of infrastructure. And we've seen this happen over and over again across every layer of the stack.
And so for us, you know MongoDB, we have a bit of a lens on a broad spectrum of the market. We certainly have, you know, large enterprises that are modernizing existing kind of core systems, and then we have developers all over the world who are building the next big best thing. And that's what led us to partner with Vercel, which is just at the bleeding edge of developers building in a new way, in a much more efficient way. And we wanted to make sure we provide a data platform that fits naturally in the way they want to work.
>> So explain to our audience the trade-offs of serverless, and I want to get into sort of how you've resolved that. And then I want to hear from Guillermo what that means for developers.
>> Sahir: Yeah, in our case we don't view it as an either/or. There are certain workloads and definitely certain companies that will gravitate towards a more traditional database infrastructure where they're choosing the configuration of their cluster. They want full control over it. And that provides, you know, certain benefits around cost predictability or isolation, or perceived benefits at least of those things. And customers will gravitate towards that. Now on the flip side, if you're building a new application, or you want the ability to scale seamlessly and not have to worry about any of the plumbing, serverless is clearly the easier model. So over the long term, we certainly expect to see a mix of things, with more and more serverless workloads being built on our platform and just generally in the industry, which is why we leaned in so heavily on investing in Atlas serverless. But the flexibility to not be forced into a particular model, but to get the same database experience across your application and even switch between them, is an important characteristic for us as we build going forward.
>> And you stressed the cost efficiency, and not having to worry about, you know, starting cold.
You've architected around that, and what does that mean for a developer?
>> Guillermo: For a developer it means that you kind of get the best of both worlds, right? Like you get the best possible performance. Front-end developers are extremely sensitive to this. That's why us pioneering this concept, serverless front-end, has put us in a very privileged position, because we have to deliver that really quick time to first byte, that really quick paint. So any of the old trade-offs of serverless are not accepted by the market. You have to be extremely fast. You have to be instant to deliver that front-end content. So what we talked about today, for example, with the Vercel Edge network, we're removing all of the cost of that first hit. That cold start doesn't really exist. And now we're seeing it all across the board, going into the back-end, where Mongo has also gotten rid of it.
>> Dave: How do you guys collaborate? What's the focus of integration, specifically from, you know, an engineering resource standpoint?
>> Yeah, the main idea is, idea to global app in seconds, right? You have your idea. We give you the framework. We don't give you infrastructure primitives. We give you all the necessary tools to start your application. In practice this means you host it in a Git repo. You import it onto Vercel. You install the Mongo integration. Now your front-end and your data back-end are connected. And then your application just goes global in seconds.
>> So, okay. So you've abstracted away the complexity of those primitives, is that correct?
>> Guillermo: Absolutely.
>> Do developers ever say, "That's awesome, but I'd like to get to them every now and then." Or do you not allow that?
>> Definitely. We expose all the underlying APIs, and the key thing we hear is that, especially with the push for usage-based billing models, observability is of the essence. So at any time you have to be able to query, in real time, every data point that the platform is observing.
We give you performance analytics in real time to see how your front-end is performing. We give you statistics about how often you're querying your back-end and so on, and your cache hit ratios. So what I talked about today in the keynote is, it's not just about throwing more compute at the problem, but the ability to use the edge to your advantage to memoize computation and reuse it across different visits.
>> When we think of mission critical historically, you know, you think about going to the ATM, right? I mean a financial transaction. But Mongo is positioning for mission critical applications across a variety of industries. Do we need to rethink what mission critical means?
>> I think it's all in the eye of the beholder, so to speak. If you're a new business starting up, your software and your application is your entire business. So if you have a cold start latency or, God forbid, something actually goes down, you don't have a business. So it's just as mission critical to that founder of a new business and new technology as it is to, you know, an established enterprise that's running sort of a more, you know, day-to-day application that we may all interact with. So we treat all of those scenarios with equal fervor and importance, right? And many times, it's a lot of those new experiences that become the day-to-day experiences for us globally, and are super important. And we power all of those, whether it be an established enterprise all the way to the next big startup.
>> I often talk about COVID as the forced march to digital.
>> Sahir: Mm-Hmm.
>> Which was obviously a little bit rushed, but if you weren't in digital business, you were out of business. And so now you're seeing people step back and say, "All right, let's be more thoughtful about our digital transformation. We've got some time, and we've obviously learned some things and made some mistakes." It's all about the customer experience though. And that becomes mission critical, right?
What are you seeing Guillermo, in terms of the patterns in digital transformation now that we're sort of exiting the isolation economy? >> One thing that comes to mind is, we're seeing that it's not always predictable how fast you're going to grow in this digital economy. So we have customers in the ecommerce space, they do a drop and they're piggybacking on serverless to give them that ability to instantly scale. And they couldn't even prepare for some of these events. We see that a lot with the Web3 space and NFT drops, where they're building in such a way that they're not sensitive to this massive fluctuations in traffic. They're taking it for granted. We've put in so much work together behind the scenes to support it. But the digital native creator just, "Oh things are scaling from one second to the next like I'm hitting like 20,000 requests per second, no problem Vercel is handling it." But the amount of infrastructural work that's gone behind the scenes in support has been incredible. >> We see that in gaming all the time, you know it's really hard for a gaming company to necessarily predict where in the globe a game's going to be particularly hot. Games get super popular super fast if they're successful, it's really hard to predict. It's another vertical that's got a similar dynamic. >> So gaming, crypto, so you're saying that you're able to assist your customers in architecting so that the website doesn't crash. >> Guillermo: Absolutely. >> But at the same time, if the the business dynamic changes, they can dial down. >> Yeah. >> Right and in many ways, slow is the new down, right? And if somebody has a slow experience they're going to leave your site just as much as if it's- >> I'm out of here- >> You were down. So you know, it's really maintaining that really fast performance, that amazing customer experience. Because this is all measured, it's scientific. Like anytime there's friction in the process, you're going to lose customers. 
>> So obviously people are excited about your keynote, but what have they been saying? Any specific comments you can share, or questions that you got that were really interesting?
>> I'm already getting links to the apps that people are deploying. So the whole idea-
>> Come on!
>> All over the world. Yeah, so it's already working, I'm excited.
>> So they were showing off, "Look what I did." Really?
>> Yeah, on Twitter.
>> That's amazing.
>> I think from my standpoint, I got a question earlier, we were with a bunch of financial analysts and investors, and they said they've been talking to a lot of the customers in the halls. And just to see, you know, from the last time we were all in person, the number of our customers that are using multiple capabilities across this idea of a developer data platform, you know, certainly MongoDB's been a popular open source core database for a long time. But the new capabilities around search, analytics, mobile being adopted much more broadly to power these experiences is the most exciting thing from our side.
>> So from 2019 to now, you're saying substantial uptick in adoption for these features?
>> Yeah. And many of them are new.
>> Time series as well, that's pretty new, so yeah.
>> Yeah, and you know, our philosophy of development at MongoDB is to get capabilities in the hands of customers early. Get that feedback to enrich and drive that product-market fit. And over the last three years especially, we've been transitioning from a single product kind of core, you know, non-relational modern database to a data platform, a developer data platform that adds more and more capabilities to power these modern applications. And a lot of those were released during the pandemic. Certainly we talked about them in our virtual conferences and all the Zoom meetings we had over the years.
But to actually go talk to all these customers, this is the largest conference we've ever put on, and to get a sense of, wow, all the amazing things they're doing with them, it's definitely a different feeling when we're all together.
>> So that's interesting, when you have such a hot product, product-led growth, which is what Mongo has been in, and you add these new features. They're coming from the developers who are saying, "Hey, we need this."
>> Yep.
>> Okay, so you have a pretty high degree of confidence, but how do you know when you have product-market fit? I mean, is it adoption, usage, renewals? What's your metric?
>> Yeah, I think it's a mix of quantitative measures, you know, around conversion rates, the size of your funnel, the retention rate, NPS, which obviously can be measured, but also just qualitative. You know, when you're talking to a developer or a technology executive around what their needs are, and then you see how they actually apply it to solve a problem, it's that balance between the qualitative and the quantitative measurement of things. And you can just sort of, frankly, you can feel it. You can see it in the numbers, sure, but you can kind of feel that excitement, you can see that adoption and what it empowers people to do. And so to me, as a product leader, it's always a blend of those things. If you get too obsessed with purely the metrics, you can always over optimize something for the wrong reason. So you have to bring in that qualitative feedback to balance yourself out.
>> Right.
>> Guillermo, what's next? What do you not have that you want from Sahir and Mongo?
>> So the natural next step for serverless computing is the Edge. So we have to auto-scale, we have to tolerate failures. We have to be available. We have to be easy, but we have to be global. And right now we've been doing this by using a lot of techniques like caching and replication and things like this.
But the future's about personalizing even more to each visitor depending on where they are. So if I'm in New York, I want to get the latest offers for New York on demand, just for me, and using AI to continue to personalize that experience. So giving the developer these tools in a way where it feels natural to build an application like this. It doesn't feel like, "Oh, I'm going to do this in year 10 if I make it," I'm going to do it since the very beginning.
>> Dave: Okay, interesting. So that says to me that I'm not going to make a round trip to the cloud necessarily for that experience. So I'm going to have some kind, Apple today, at the Worldwide Developer Conference, announced the M2, right. I've been looking at the M1 Ultra, and I'm going wow, look at that! And so-
>> Sahir: You were talking about that new one backstage.
>> I mean it's this amazing pace of silicon development, and they're focusing on the NPU, and you look at what Tesla's doing. I mean it's just incredible. So you're going to have some new hardware architecture that emerges. Most of the AI that's done today is modeling in the cloud. You're going to have real-time inferencing at the Edge. So that's not going to do the round trip. There's going to be a data store there, I think it has to be. You're going to persist some of the data, maybe not all of it. So it's a whole new architecture-
>> Sahir: Absolutely.
>> That's developing. That sounds very disruptive.
>> Sahir: Yeah.
>> How do you think about that, and how does Mongo play there? Guillermo first.
>> What I spend a lot of time thinking about is obviously the developer experience, giving the programmer a programming model that is natural, intuitive, and produces great results. So if they have to think about data that's local because of regulatory reasons, for example, how can we let the framework guide them to success? I'm just writing an application, I deploy it to the cloud, and then everything else is figured out.
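The idea of serving each visitor from the nearest region that still satisfies data-residency rules can be sketched as a pure routing function. This is an illustrative toy under stated assumptions, the region names, jurisdiction labels, and pre-measured latencies are all invented here; it is not how Vercel or MongoDB actually route traffic.

```typescript
// Hypothetical sketch: pick the lowest-latency region for a visitor,
// optionally restricted to a jurisdiction for regulatory (residency) reasons.

interface Region {
  name: string;
  jurisdiction: string; // e.g. "US", "EU" (invented labels)
  latencyMs: number;    // latency as measured from this visitor
}

function pickRegion(
  regions: Region[],
  requiredJurisdiction?: string
): Region | undefined {
  const eligible = requiredJurisdiction
    ? regions.filter((r) => r.jurisdiction === requiredJurisdiction)
    : regions;
  // Among eligible regions, choose the lowest-latency one.
  return eligible.reduce<Region | undefined>(
    (best, r) => (best === undefined || r.latencyMs < best.latencyMs ? r : best),
    undefined
  );
}
```

The interesting property is the interplay Guillermo describes: without a constraint the visitor simply gets the fastest region, but a residency requirement can force a slower, compliant one, and a framework can make that trade-off without the developer hand-writing routing logic.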
>> Yeah, or the speed of light is another challenge. (Sahir and Guillermo laugh)
>> How can we overcome the speed of light is our next task, for sure.
>> Well, you're working on that, aren't you? You've got the best engineers on that one. (Sahir and Guillermo laugh)
>> We can solve a lot of problems, I'm not sure about that one.
>> So Mongo plays in that scenario or?
>> Yeah, so I think, absolutely, you know, we've been focused heavily on becoming the globally distributed cloud data layer. The back-end data layer that allows you to persist data to align with performance and move data where it needs to be globally, or deal with data sovereignty and the data nationalism that's starting to rise. But absolutely there is more data being pushed out to the Edge, to your point around processing or inference happening at the Edge. And there's going to be a globally distributed front-end layer as well, where data and processing take part. And so we're focused on, one, making sure the data connectivity and the layer is all connected into one unified architecture. We do that in combination with technologies that we have that deal with mobility or edge distribution and synchronization of data, with Realm. And we do it with partnerships. We have edge partnerships with AWS and Verizon. We have partnerships with a lot of CDN players who are building out that Edge platform, and making sure that MongoDB is either connected to it or just driving that synchronization back and forth.
>> I call that unified experience Supercloud, Robbie Belson from Verizon calls it the cloud continuum, but it's that consistent experience for developers whether you're on-prem, whether you're in, you know, Azure, Google, AWS, and ultimately the Edge. That's the big-
>> That's where it's going.
>> White space right now I'm hearing, Guillermo, right?
>> I think it'll define the next generation of how software is built.
And we're seeing this almost like a collision course between some of the ideas that the Web3 developers are excited about, which is like decentralization almost to the extreme. But Web2 also needs more decentralization, because we're seeing it with, like, the data needs to be local to me, I need more privacy. I was looking at the latest encryption features in Mongo. I think Web2 needs to incorporate more of the ideas of Web3, and vice versa, to create the best possible consumer experience. Privacy matters more than ever before. Latency for conversion matters more than ever before. And regulations are changing.
>> Sahir: Yeah.
>> And you talked about Web3 earlier, talked about new protocols, a new distributed, you know, decentralized system emerging, new hardware architectures. I really believe that new economics are going to bleed back into the data center, and yeah, every 15 years or so this industry gets disrupted.
>> Sahir: Yeah.
>> Guillermo: Absolutely.
>> You ain't seen nothing yet, guys.
>> We all talked about hardware becoming commoditized 10, 15 years ago-
>> Yeah, of course.
>> We got the virtualization, and it's like nope, not at all. There's actually a lot of invention happening.
>> The lower the price, the more the consumption. So guys, thanks so much. Great conversation.
>> Thank you.
>> Really appreciate your time.
>> Really appreciate it, I enjoyed the conversation.
>> All right, and thanks for watching. Keep it right there. We'll be back with our next segment right after this short break. Dave Vellante for theCUBE's coverage of MongoDB World 2022.
>> Man Offscreen: Clear. (clapping)
>> All right, wow. Don't get up.
>> Sahir: Okay.
>> Is that a Moonwatch?
>> Sahir: It is a Speedmaster but it's that the-

Published Date : Jun 7 2022



Douglas Ko, Cohesity & Sabina Joseph | AWS Partner Showcase S1E2


 

(upbeat music)
>> Hello everyone, welcome to the special CUBE presentation of the AWS Partner Showcase, season one, episode two. I'm John Furrier, your host of theCUBE. We've got two great guests here: Douglas Ko, Director of Product Marketing at Cohesity, and Sabina Joseph, General Manager at AWS, Amazon Web Services. Welcome to the show.
>> Thank you for having us.
>> Great to see you, Sabina and Douglas. Great to see you, congratulations at Cohesity. Loved the shirt, got the colors you're wearing there on Cohesity. Always good, I can't miss your booth at the shows, can't wait to get back in person, but thanks for coming in remotely. I got to say it's super exciting to chat with you, appreciate it.
>> Yeah, pleasure to be here.
>> What are the trends you're seeing in the market when it comes to ransomware threats right now? You guys are in the middle of it right now more than ever. I was hearing more and more about security, cloud scale, cloud refactoring. You guys are in the middle of it. What's the latest trend in ransomware?
>> Yeah, I have to say John, it's a pleasure to be here, but on the other hand, when you ask me about ransomware, right, the data and the statistics are pretty sobering right now. If we look at what just happened from 2020 to 2021, we saw a tenfold increase in ransomware attacks. We also saw the prediction of a ransomware attack happening every 11 seconds, meaning by the time I finish this sentence there's going to be another company falling victim to ransomware. And it's also expected by 2031 that the global impact of ransomware across businesses will be over $260 billion, right? So that's huge. And even at Cohesity, right, what we saw, we did our own survey, and this one actually went directly to end users and consumers. And what we found was over 70% of them would reconsider doing business with a company that paid a ransom. So all these things are pretty alarming and pretty big problems that we face today in our industry.
>> Yeah, there's so many dimensions to it. I mean, you guys at Cohesity have been doing this a while. It's been baked in from day one, security in the cloud and backup recovery, it's all kind of one thing now. So protecting against ransomware and other threats is huge. Sabina, I got to ask you, Amazon's view of ransomware is serious. You guys take it very seriously. What's the posture, and specifically, what is AWS doing to protect customers from this threat?
>> Yeah, so as Doug mentioned, right, there's no industry that's immune to ransomware attacks. And just so we all level set, right, what it means is somebody taking control over and locking your data, as an individual or as a company, and then demanding a ransom for it, right? According to the NIST, the National Institute of Standards and Technology, cybersecurity framework, there are basically five main functions which are needed in order to plan and manage these kinds of cybersecurity ransomware attacks. They go across identifying what you need to protect, actually implementing the things that you need in order to protect yourself, detecting things if there is an attack that's going on, then also responding, how do you get out of this attack? And then recovery, bringing things back to where they were before the attack. As we all know, AWS takes security very seriously. We want to make sure that our customers' data is always protected. We have a number of native security solutions, but we are also looking to see how we can work with partners. And this is in fact why, in the fall of 2019, the Cohesity CEO Mohit Aron, myself, and a couple of us, we met and we brainstormed: what could we do that is differentiated in the market? When we built this data management as a service native solution on top of AWS, it's a first-of-its-kind solution, John. It doesn't exist anywhere else in the market, even today.
And we really focused on using the Well-Architected review, the five pillars of security, reliability, operational excellence, performance, and cost optimization. And we built this differentiated solution together, and it was launched in April 2020. And then of course from a customer viewpoint, they should use a comprehensive set of solutions. And going back to that cybersecurity framework that I mentioned, the Cohesity data management as a service solution really falls into that recovery, that last area that I mentioned. The solution actually provides granular management of data, protection of data. Customers can spin up things very quickly and really scale their solution across the globe, and ensure that there is compliance, no matter how many times we do data changes, adds and so on across the world.
>> Yeah, Sabina, that's a great point, because a lot of ransomware is actually bad actors, but customers can also misconfigure things. They don't follow the best practices. So having those native solutions is super important. So that's a great call out. Douglas, I got to go back to you because you're on the Cohesity side and a partner of AWS. They have all these best practices that the good actors have got to pay attention to, and the bad actors are also trying to get in, which creates two things, a challenge and an opportunity. So how do organizations protect their data against these attacks? And also, how do they maintain their best practices? Because that's half the battle too, is the best practices to make sure you're following the guidelines on the AWS side, as well as protecting against the attacks. What's your thoughts?
>> Yeah, absolutely. First and foremost, right, as an organization, you need to understand how ransomware operates and how it's evolved over the years. And when you first look at it, Sabina already mentioned it, they started with consumers, small businesses, attacking their data, right?
And some of these consumers or businesses didn't have any backup. So the first step is just to make sure your data is backed up. But then the criminals kind of went up market, right? They understood that big organizations had big pockets and purses. So they went after them, and the larger organizations do have backup and recovery solutions in place. So the criminals knew that they had to go deeper, right? And what they did was they went after the backup systems themselves and went to attack, delete, tamper with those backup systems and make it difficult or impossible to recover. And that really highlighted that some solutions out there had some vulnerabilities with their data immutability and capabilities around WORM. And those are areas we suggest customers look at, that have immutability and WORM. And more recently again, given the way attacks have happened, it's really to add another layer of defense and protection. And that includes what we traditionally used to call the 3-2-1 rule. And that basically means three copies of data, on two different sets of media, with one piece of that data offsite, right? And in today's world and the cloud, right, that's a great opportunity to kind of modernize your environment. I wish that was all the ransomware guys and the criminals were doing right now, but unfortunately that's not the case. And what we've seen over the past two years specifically is a huge increase in what you would call data theft or data exfiltration. And that essentially is them taking that data, a sensitive set of the data, and threatening to expose it to the dark web or sell it to the highest bidder. So in this situation it's honestly very difficult to manage. And the biggest thing you could do is obviously harden your security systems, but also you need a good understanding of your data, right?
Where all that sensitive information is, who has access to it, and what are the potential risks of that data being exposed. So that takes another step in terms of leveraging a bunch of technologies to help with that problem set.
>> What can businesses do from an architectural standpoint and platform standpoint? Do you see key guiding principles around how their mindset should be? What are examples of approaches-
>> Yeah.
>> Approaches here?
>> No, I think both of us, at Cohesity and, I'll speak for Sabina, AWS, believe in a platform approach. And the reason for that is this is a very complicated problem, and the more tools and more things you have in there, you add the risk of complexity, even potential new attack surfaces that the criminals can go after. So we believe the architecture approach should have some key elements. One is around data resiliency, right? And that again comes from things like data encryption, your own data is encrypted by your own keys, and that the data is immutable and has that write once, read many, or WORM, capability, so the bad guys can't tamper with your data, right? That's just step one. Step two is really understanding and having the right access controls within your environment, right? And that means having multi-factor authentication, quorum, meaning having two keys for the closet before you can actually have access to it. But it's got to go beyond there as well too. We've got to leverage some newer technologies like AI and machine learning. And that can help you with detection and analysis, both of where all your sensitive information is, right, as well as understanding potential anomalies that could signify an attack or threat in progress. So those are all key elements. And the last one of course is, I think it takes a village, right, to fight the ransomware war. So we know we can't do it alone, and that's why we partner with people like AWS. That's why we also partner with other people in the security space, to ensure you really have a full ecosystem of support to manage all those things around that framework.
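The write once, read many idea can be illustrated with a toy store whose API simply refuses overwrites and deletes. This is a sketch of the semantics only, with an invented `WormStore` class; in a real product like Cohesity's, immutability is enforced at the storage and platform layer, not by application code like this.

```typescript
// Toy WORM (write once, read many) store: a backup object can be written
// exactly once; later overwrites and all deletes are rejected, which is what
// keeps attackers from tampering with or erasing backups. Illustrative only.

class WormStore {
  private objects = new Map<string, string>();

  write(id: string, data: string): boolean {
    if (this.objects.has(id)) return false; // immutable: no overwrite
    this.objects.set(id, data);
    return true;
  }

  read(id: string): string | undefined {
    return this.objects.get(id); // reads are unrestricted
  }

  delete(_id: string): boolean {
    return false; // WORM: deletes are always refused
  }
}
```

The design point is that tamper-resistance comes from the interface itself: even an attacker who gains credentials to call `write` or `delete` cannot alter what was already stored, which is why backup vendors pair WORM with access controls like MFA and quorum rather than relying on either alone.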
That's why we also partner with other people in the security space, to ensure you really have a full ecosystem of support to manage all those things around that framework. >> That's awesome. Before I get to Sabina, I want to get into the relationship real quick, but I want to come back and highlight what you said about data management as a service. This is a joint collaboration; this is some of the innovation that Cohesity and AWS are bringing to the market to combat ransomware. Can you elaborate more on that piece, because this is important. It's a collaboration, so it's a partnership, and you two are going to take us through what that means for the customer and for you. I mean, that's a compelling offering. >> So when we start to work with partners, we want to make sure that we are solving a customer problem. That's the whole working backwards from the customer: we are adding something more that the customer could not do before. That's why, when either my team or I start to work on a new partnership or a new solution, it's always focused on whether the solution enables our customer to do something they couldn't do before. And this approach has really helped us, John, in enabling the majority of the Fortune 500 companies and 90% of the Fortune 100 companies to use partner solutions successfully. But it's not just focused on innovation and technology; it's also focused on the business side. How are we helping partners grow their business? We've been scaling our field teams, our AWS sales teams, globally. But what we realized through partner feedback was that we were not doing a great job of helping our partners close those opportunities and also bring in net new opportunities. So in our field we actually introduced a new role called the ISV Success Manager, ISMs that are embedded in our field to help partners close existing opportunities and also bring net new opportunities to them.
And then at re:Invent 2020, we also launched the ISV Accelerate program, which enables the AWS field teams to get incentives to work with our partners. Cohesity, of course, participates in all of these programs and has access to all of these resources. And they've done a great job in leveraging them and bringing our field teams together, which has resulted in hundreds of wins for the data management as a service solution that was launched. >> So you're bringing customers to Cohesity. >> Absolutely. >> Okay, I've got to get the other side. So they're helping you; how's this relationship going? Could you talk about the relationship on the customer side? How's that going? Douglas, what's your take on that? >> Yeah, absolutely. I mean, it's going great. That's why we chose to partner with AWS. And to be quite honest, as Sabina mentioned, we really only launched data management as a service back in late 2020. At that time we launched with just one service, when we first launched with backup as a service. Now, about 15 months later, we're on the brink of launching four services that are running on the AWS cloud. So without the level of support, both from a go-to-market standpoint, as Sabina mentioned, as well as the engineering and the available technology services that are on the AWS Cloud, there's no way we would have been able to spin up new services in such a short period of time. >> Is that Fort Knox and DataGovern, are those the services you're talking about? Or is that- >> Yeah, so let me walk you through it. We have Cohesity DataProtect, which is our backup as a service solution. That helps customers back up their data, on-prem, SaaS, and cloud data like AWS, all in a single service, and allows you to recover from ransomware. But a couple of months ago we also announced a couple of new services that you're alluding to, John. And that is around Fort Knox and DataGovern.
And basically Fort Knox is our SaaS solution for data isolation to a vaulted copy in the AWS cloud. The goal of that is to make it very simple for customers not only to get data immutability, but also that extra layer of protection from moving the data offsite and keeping it secure and vaulted away from cyber criminals and ransomware. What we're doing is simplifying a whole process that is normally manual: you either do it manually with tapes, or you manually replicate data to another data center or even to the cloud. We're providing it as a service model, basically a modern 3-2-1 approach for the cloud era. So that's what's cool about Fort Knox. DataGovern is also a new service that we announced a few months ago, and it provides data governance and user behavior analytics services, leveraging a lot of that AI and machine learning that everybody's so excited about. But really the application of it is to automate the discovery of sensitive data. That could be credit card numbers, healthcare records, or personal information of customers. Understanding where all that data is, is very important, because that's the data the criminals are going to go after and hold you hostage with. So that's step one. And step two, again leveraging machine learning, is actually looking at how users are accessing and managing that data. That's also super important, because it helps you identify potential anomalies, such as people sharing that data externally, which could be a threat; improper vault permissions; or other suspicious behaviors that could potentially signify data exfiltration or a ransomware attack in progress. >> That's some great innovation. You've got the data resiliency, of course, and the control mechanism, but the AI and machine learning piece is awesome. So congratulations on that innovation. Sabina, I'm listening to the conversation and hearing you talk.
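The user-behavior analytics just described, spotting access that deviates sharply from a user's own baseline, can be sketched with a crude statistical check. This is purely illustrative; it is not how DataGovern's models actually work, and all names and numbers are invented:

```python
# Flag users whose data-access volume today deviates far (in standard
# deviations) from their own historical baseline.

from statistics import mean, stdev

def flag_anomalies(history, today, threshold=3.0):
    """history: {user: [GB read per day, ...]}; today: {user: GB read today}."""
    flagged = []
    for user, series in history.items():
        mu, sigma = mean(series), stdev(series)
        if sigma == 0:
            continue  # no variation on record, skip
        z = (today.get(user, 0.0) - mu) / sigma
        if z > threshold:
            flagged.append(user)  # possible exfiltration in progress
    return flagged

history = {"alice": [1.0, 1.2, 0.9, 1.1], "bob": [2.0, 2.1, 1.9, 2.0]}
today = {"alice": 1.1, "bob": 40.0}  # bob suddenly reads 40 GB
print(flag_anomalies(history, today))  # ['bob']
```

A real system would of course model many more signals (permissions changes, external sharing, time of day), but the shape of the problem, baseline plus deviation, is the same.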
And it reminds me of our chat at re:Invent. The whole theme of the conference was innovation, rapid innovation, and how companies are refactoring with the cloud on this next-gen kind of journey. This is a fundamental pillar of AWS's rapid innovation concept with your partners. I won't say it's new, but it's highly accelerated. How are you helping partners with this rapid innovation, because benefits can come faster now, Agile is here. What are some of the programs you're doing? How are you helping customers take advantage of the rapid innovation with the secret sauce of AWS? >> Yeah, so we have a number of leadership principles, John, and one of them, of course, is customer obsession. We are very focused on making sure we are developing things that our customers need, and we look for these very same qualities when we work with partners such as Cohesity. We want to make sure it's a win-win approach for both sides, because that's what will make the partnership durable over time. And John, our leadership team at AWS, right from our CEO down, believes that partners are critical to our success, and as partners lean in, we lean in further. That's why we signed the strategic collaboration agreement with Cohesity in April 2020, under which the data management as a service solution was launched. For us, having launched this solution, the question, as Doug said, is what the next things are we could be doing. And just to go back a little bit: when Cohesity was developing this solution with us, they used a number of our programs. Especially on the technical side, they used our SaaS Factory program, which really helped them build this differentiated solution, especially focused around security, compliance, and cost-optimizing the solution. Now that we've launched this solution, just like Doug mentioned, we are focused on leveraging other services like security, AI/ML, and also our analytics services.
And the reason for that is that Cohesity, as we all know, protects and manages this data for the customer, but we want to make sure that the customer is extracting value from this data. That is why we continue to look at what we can do to further differentiate this solution in the market. >> That's awesome. You guys did a great job. I've got to say, as this gets more scale, there's more need for this rapid, I won't say prototyping, but rapid innovation, and on the Cohesity side you've always been on point on backup and recovery, and now, with security and modern application development, you're in the front row seats of all the action. So I'll give you the final word: what's going on at Cohesity? Give an update on what you're doing. What's it like over there these days? Give a quick plug for Cohesity. >> Yeah, Cohesity is doing great. We're always adding folks to the team. We have a few open recs, both on the marketing side as well as the technology advocacy side, and of course in some of our other departments too: engineering, sales, and our partner teams as well, working with AWS partners. So in our mind the data deluge and growth is not going to slow down. In this case a rising tide raises all the boats, and we're glad to be an innovative leader in this space, really looking to be the new wave of next-gen data management providers out there that leverage things like AI, that have cybersecurity at the core, and that have an ecosystem of partners we're working with, like AWS, that we're building out to help customers better manage their data. >> It's all great. Data is at the center of the value proposition. Sabina, great to see you again, thanks for sharing. And Douglas, great to see you too. Thanks for sharing this experience here in theCUBE. >> Thanks, John.
>> Okay, this is theCUBE's AWS Partner Showcase special presentation, speeding innovation with AWS. I'm John Furrier, your host of theCUBE. Thanks for watching. (upbeat music)

Published Date : Mar 2 2022

Ravi Mayuram, Couchbase | Couchbase ConnectONLINE 2021


 

>> Welcome back to theCUBE's coverage of Couchbase ConnectONLINE, where the theme of the event is Modernize Now. Yes, let's talk about that. And with me is Ravi, who's the senior vice president of engineering and the CTO at Couchbase. Ravi, welcome. Great to see you. >> Thank you so much. I'm so glad to be here with you. >> Let me ask you what the new requirements are around modern applications. I've seen some of your comments: you've got to be flexible, distributed, multimodal, mobile, edge. Those are all the very cool buzzwords, smart applications. What does that all mean, and how do you put it into a product and make it real? >> Yeah, I think what has basically happened is that so far it's been a transition of sorts, and now we have come to a tipping point, and the tipping point has come largely because of COVID. COVID has pushed us to a world where we are living in an occasionally connected manner, where our digital interactions precede our physical interactions in one sense. It's a world where we do a lot more in a digital manner, as opposed to making more direct human contact, and that has really been the accelerant to this Modernize Now theme. In this process, what has happened is that all the databases and all the data infrastructure we have built historically are very centralized. They used to be in mainframes, from where they came to your own data centers, where we used to run hundreds of servers, to where they're going now, which is the marvelous change of computing to consumption-based computing, which is all cloud oriented. But they are all still centralized, whereas our engagement with the data happens at the edge, at your point of convenience, at your point of consumption, not where the data is actually sitting.
So this has led to, you know, all those buzzwords, as you said: we need a distributed data infrastructure, where is the edge? But it basically comes down to the fact that the data needs to be where you are engaging with it. That means whether you are on your mobile phone, sitting somewhere, or traveling, whether you are in a subway, on a plane, or on a ship, the data needs to come to you and be available, as opposed to you going every time to the data, which is sitting centrally in some place. And that is the fundamental shift in how a modern architecture needs to think when it comes to digital transformation and transitioning old applications to modern infrastructure, because that's what's going to define your customer experiences and your personalized experiences. Otherwise people are left waiting for that spinning circle of death we all know, and blaming the network and other pieces. The problem is actually that the data is not where you are engaging with it; it has got to be fetched, you know, from seven seas away. And that is the problem we are basically solving in this modernization of the data infrastructure. >> I love this conversation, and I love the fact that there's a technical person who can educate us on this, because data by its very nature is distributed. It's always been distributed, but a distributed database has always been incredibly challenging, whether it was a global Sysplex or the eventual consistency and recovery of a distributed architecture; it has been extremely difficult. You know, I hate that this is a terrible term, lots of ways to skin a cat, but you've been the visionary behind this notion of optionality, how to solve technical problems in different ways.
So how do you solve that problem of a super rock-solid database that can handle, you know, distributed data? >> Yes.
And what that helps you is, is, uh, basically, uh, look at the data and any common tutorial, uh, any, uh, any which way you look at the data. All it will come, uh, the data in a format that you can consume. That's the guarantee sort of gives you in one sense. And because of that, you can now do some really complex in the database signs, what we call us, predicate logic on top of that. And that gives you the ability to do the classic relational type queries, select star from where Canada stuff, because it's at an English level, it becomes easy to, so the same data, you didn't have to go move it to another database, do your, uh, sort of transformation of the data and all this stuff. Same day that you do this. >>Now, that's where the optionality comes in. Now you can do another piece of logic on top of this, which we call search. This is built on this concept of inverted index and TF IDF, the classic Google in a very simple terms, but Google tokenized search, you can do that in the same data without you ever having to move the data to a different format. And then on top of it, they can do what is known as a eventing or your own custom logic, which we all which we do on a, on programming language called Java script. And finally analytics and analytics is the ability to query the operational data in a different way. I'll talk budding. What was my sales of this widget year over year on December 1st week, that's a very complex question to ask, and it takes a lot of different types of processing. >>So these are different types of that's optionality with different types of processing on the same data without you having to go to five different systems without you having to recast the data in five different ways and find different application logic. So you put them in one place. Now is your second question. 
Now this has got to be distributed and made available in multiple cloud in your data center, all the way to the edge, which is the operational side of the, uh, the database management system. And that's where the distributed, uh, platform that we have built enables us to get it to where you need the data to be, you know, in a classic way, we call it CDN in the data as in like content delivery networks. So far do static, uh, uh, sort of moving of static content to the edges. Now we can actually dynamically move the data. Now imagine the richness of applications you can develop. >>The first part of the, the answer to my question, are you saying you could do this without skiing with a no schema on, right? And then you can apply those techniques. >>Uh, fantastic question. Yes. That's the brilliance of this database is that so far classically databases have always demanded that you first define a schema before you can write a single byte of data. Couchbase is one of the rare databases. I, for one don't know any other one, but there could be, let's give the benefit of doubt. It's a database which writes data first and then late binds to schema as we call it. It's a schema on read things. So because there is no schema, it is just a on document that is sitting inside. And Jason is the lingua franca of the web, as you very well know by now. So it just Jason that we manage, you can do key lookups of the Jason. You can do full credit capability, like a classic relational database. We even have cost-based optimizers and the other sophisticated pieces of technology behind it. >>You can do searching on it, using the, um, the full textual analysis pipeline. You can do ad hoc wedding on the analytic side, and you can write your own custom logic on it using our eventing capabilities. So that's, that's what it allows because we keep the data in the native form of Jason. It's not a data structure or a data schema imposed by a database. It is how the data is produced. 
And on top of it, we bring different types of logic, five different types of it's like the philosophy is bringing logic to data as opposed to moving data to logic. This is what we have been doing, uh, in the last 40 years because we developed various, uh, database systems and data processing systems of various points. In time in our history, we had key value stores. We had relational systems, we had search systems, we had analytical systems. >>We had queuing systems, all the systems, if you want to use any one of them, our answer has always been, just move the data to that system. Versus we are saying that do not move the data as we get bigger and bigger and data just moving this data is going to be a humongous problem. If you're going to be moving petabytes of data for this is not one to fly instead, bring the logic to the data. So you can now apply different types of logic to the data. I think that's what, in one sense, the optionality piece of this, >>As you know, there's plenty of schema-less data stores. They're just, they're called data swamps. I mean, that's what they, that's what they became, right? I mean, so this is some, some interesting magic that you're applying here. >>Yes. I mean, the one problem with the data swamps as you call them is that that was a little too open-ended because the data format itself could change. And then you do your, then everything became like a game data casting because it required you to have it in seven schema in one sense at the end of the day, for certain types of processing. So in that where a lot of gaps it's probably flooded, but it not really, uh, how do you say, um, keep to the promise that it actually meant to be? So that's why it was a swamp I need, because it was fundamentally not managing the data. 
The data was sitting in some file system, and then you are doing something, this is a classic database where the data is managed and you create indexes to manage it, and you create different types of indexes to manage it. You distribute the index, you distribute the data you have, um, like we were discussing, you have acid semantics on top of, and when you, when you put all these things together, uh, it's, it's, it's a tough proposition, but they have solved some really tough problems, which are good computer science stuff, computer science problems that we have to solve to bring this, to bring this, to bear, to bring this to the market. >>So you predicted the trend around multimodal and converged, uh, databases. Um, you kind of led Couchbase through that. I want to, I always ask this question because it's clearly a trend in the industry and it, it definitely makes sense from a simplification standpoint. And, and, and so that I don't have to keep switching databases or the flip side of that though, Ravi. And I wonder if you could give me your opinion on this is kind of the right tool for the right job. So I often say isn't that the Swiss army knife approach, we have a little teeny scissors and a knife. That's not that sharp. How do you respond to that? Uh, >>A great one. Um, my answer is always, I use another analogy to tackle that, but is that, have you ever accused a smartphone of being a Swiss army knife? No. No. Nobody does that because it's actually 40 functions in one is what a smartphone becomes. You never call your iPhone or your Android phone, a Swiss army knife, because here's the reason is that you can use that same device in the full capacity. That's what optionality is. It's not, I'm not, it's not like your good old one where there's a keyboard hiding half the screen, and you can do everything only through the keyboard without touching and stuff like that. That's not the whole devices available to you to do one type of processing when you want it. 
When you're done with that, it can do another completely different types of processing. Like as in a moment, it could be a Tom, Tom telling you all the directions, the next one, it's your PDA. >>Third one, it's a fantastic phone. Uh, four, it's a beautiful camera, which can do your f-stop management and give you a nice SLR quality picture. Right? So next moment is a video camera. People are shooting movies with this thing in Hollywood, these days for God's sake. So it gives you the full power of what you want to do when you want it. And now, if you just taught that iPhone is a great device or any smartphone is a great device, because you can do five things in one or 50 things in one, and at a certain level, they missed the point because what that device really enabled is not just these five things in one place. It becomes easy to consume and easy to operate. It actually started the app is the economy. That's the brilliance of bringing so many things in one place, because in the morning, you know, I get the alert saying that today you got to leave home at eight 15 for your nine o'clock meeting. >>And the next day it might actually say 8 45 is good enough because it knows where the phone is sitting. The geo position of it. It knows from my calendar where the meeting is actually happening. It can do a traffic calculation because it's got my map and all of the routes. And then it's gone there's notification system, which eventually pops up on my phone to say, Hey, you got to leave at this time. Now five different systems have to come together and they can because the data is in one place without that, you couldn't even do this simple function, uh, in a, in a sort of predictable manner in a, in a, in a manner that's useful to you. 
So I believe a database which gives you this optionality of doing multiple data processing on the same set of data allows you will allow you to build a class of products, which you are so far been able to struggling to build, because half the time you're running sideline to sideline, just, you know, um, integrating data from one system to the other. >>So I love the analogy with the smartphone. I w I want to, I want to continue it and double click on it. So I use this camera. I used to, you know, my kid had a game. I would bring the, the, the big camera, the 35 millimeter. So I don't use that anymore no way, but my wife does, she still uses the DSLR. So is, is there a similar analogy here? That those, and by the way, the camera, the camera shop in my town went out of business, you know? And so, so, but, but is there, is that a fair, where, in other words, those specialized databases, they say there still is a place for them, but they're getting >>Absolutely, absolutely great analogy and a great extension to the question. That's, that's the contrarian side of it in one sense is that, Hey, if everything can just be done in one, do you have a need for the other things? I mean, you gave a camera example where it is sort of, it's a, it's a slippery slope. Let me give you another one, which is actually less straight to the point better. I've been just because my, I, I listened to half of the music on the iPhone. Doesn't stop me from having my full digital receiver. And, you know, my Harman Kardon speakers at home because they haven't, they produce a kind of sounded immersive experience. This teeny little speaker has never in its lifetime intended to produce, right? It's the convenience. Yes. It's the convenience of convergence that I can put my earphones on and listen to all the great music. >>Yes, it's 90% there or 80% there. It depends on your audio file mess of your, uh, I mean, you don't experience the super specialized ones do not go away. 
You know, there are, there are places where, uh, the specialized use cases will demand a separate system to exist, but even there that has got to be very closed. Um, how do you say close, binding or late binding? I should be able to stream that song from my phone to that receiver so I can get it from those speakers. You can say that, oh, there's a digital divide between these two things done, and I can only play CDs on that one. That's not how it's going to work going forward. It's going to be, this is the connected world, right? As in, if I'm listening to the song in my car and then step off the car and walk into my living room, that's same songs should continue and play in my living room speakers. Then it's a world because it knows my preference and what I'm doing that all happened only because of this data flowing between all these systems. >>I love, I love that example too. When I was a kid, we used to go to Twitter, et cetera. And we'd to play around with, we take off the big four foot speakers. Those stores are out of business too. Absolutely. Um, now we just plug into Sonos. So that is the debate between relational and non-relational databases over Ravi. >>I believe so. Uh, because I think, uh, what had happened was the relational systems. Uh, I've been where the norm, they rule the roost, if you will, for the last 40 odd years, and then gain this no sequel movement, which was almost as though a rebellion from the relational world, we all inhibited, uh, uh, because we, it was very restrictive. It, it had the schema definition and the schema evolution as we call it, all those things, they were like, they required a committee, they required your DBA and your data architect. And you have to call them just to add one column and stuff like that. And the world had moved on. 
This was the world of blogs and tweets and, uh, you know, um, mashups and, um, uh, uh, a different generation of digital behavior, digital, native people now, um, who are operating in these and the, the applications, the, the consumer facing applications. >>We are living in this world. And yet the enterprise ones were still living in the, um, in the other, the other side of the divide. So all came this solution to say that we don't need SQL. Actually, the problem was never sequel. No sequel was, you know, best approximation, good marketing name, but from a technologist perspective, the problem was never the query language, no SQL was not the problem, the schema limitations, and the inability for these, the system to scale, the relational systems were built like, uh, airplanes, which is that if, uh, San Francisco Boston, there is a flight route, it's so popular that if you want to add 50 more seats to it, the only way you can do that is to go back to Boeing and ask them to get you a set in from 7 3 7 2 7 7 7, or whatever it is. And they'll stick you with a billion dollar bill on the alarm to somehow pay that by, you know, either flying more people or raising the rates or whatever you have to do. >>These are called vertically scaling systems. So relational systems are vertically scaling. They are expensive. Versus what we have done in this modern world, uh, is make the system how it is only scaling, which is more like the same thing. If it's a train that is going from San Francisco to Boston, you need 50 more people be my guests. I'll add one more coach to it, one more car to it. And the better part of the way we have done this year is that, and we have super specialized on that. This route actually requires three, three dining cars and only 10 sort of sleeper cars or whatever. Then just pick those and attach the next route. You can choose to have ID only one dining car. That's good enough. 
So the way you scale the plane is also can be customized based on the route along the route, more, more dining capabilities, shorter route, not an abandoned capability. >>You can attach the kind of coaches we call this multi-dimensional scaling. Not only do we scale horizontally, we can scale to different types of workloads by adding different types of coaches to it quite. So that's the beauty of this architecture. Now, why is that important? Is that where we land eventually is the ability to do operational and analytical in the same place. This is another thing which doesn't happen in the past, because you would say that I cannot run this analytical Barre because then my operational workload will suffer. Then my friend, then we'll slow down millions of customers that impacted that problem. We will solve the same data in which you can do analytical buddy, an operational query because they're separated by these cars, right? As in like we, we fence the, the, the resources, so that one doesn't impede the other. So you can, at the same time, have a microsecond 10 million ops per second, happening of a key value or equity. >>And then yet you can run this analytical body, which will take a couple of minutes to run one, not impeding the other. So that's in one sense, sort of the, part of the, um, uh, problems that we have solved here is that relational versus, uh, uh, the no SQL portion of it. These are the kinds of problems we have to solve. We solve those. And then we yet put back the same quality language on top. Y it's like Tesla in one sense, right underneath the surface is where all the stuff that had to be changed had to change, which is like the gasoline, uh, the internal combustion engine, uh, I think gas, uh, you says, these are the issues we really wanted to solve. Um, so solve that, change the engine out, you don't need to change the steering wheel or the gas pedal or the, you know, the battle shifters or whatever else you need, or that are for your shifters. 
>>Those need to remain in the same place. Otherwise people won't buy it. Otherwise it does not even look like a car to people. So, uh, even when you feed people the most advanced technology, it's got to be accessible to them in the manner that people can consume. Only in software, we forget this first design principle, and we go and say that, well, I got a car here, you got the blue harder to go fast and lean back for, for it to, you know, uh, to apply a break that's, that's how we seem to define, uh, design software. Instead, we should be designing them in a manner that it is easiest for our audience, which is developers to consume. And they've been using SQL for 40 years or 30 years. And so we give them the steering wheel on the, uh, and the gas bottle and the, um, and the gear shifter is by putting cul back on underneath the surface, we have completely solved, uh, the relational, uh, uh, limitations of schema, as well as scalability. >>So in, in, in that way, and by bringing back the classic acid capabilities, which is what relational systems, uh, we accounted on and being able to do that with the sequel programming language, we call it like multi-state SQL transaction. So to say, which is what a classic way all the enterprise software was built by putting that back. Now, I can say that that debate between relational and non-relational is over because this has truly extended the database to solve the problems that the relational systems had to grow up the salt in the modern times, but rather than get, um, sort of pedantic about whether it's, we have no SQL or sequel or new sequel, or, uh, you know, any of that sort of, uh, jargon, oriented debate, uh, this, these are the debates of computer science that they are actually, uh, and they were the solve and they have solved them with, uh, the latest release of $7, which we released a few months ago. >>Right, right. Last July, Ravi, we got to leave it there. I, I love the examples and the analogies. 
I can't wait to be face to face with you. I want to hang with you at the cocktail party because I've learned so much and really appreciate your time. Thanks for coming to the cube. >>Fantastic. Thanks for the time. And the Aboriginal Dan was, I mean, very insightful questions really appreciate it. Thank you. >>Okay. This is Dave Volante. We're covering Couchbase connect online, keep it right there for more great content on the cube.

Published Date : Oct 26 2021

SUMMARY :

Welcome back to the cubes coverage of Couchbase connect online, where the theme of the event Thank you so much. And how do you put that into a product and all the data infrastructure that we have built historically, are all very Uh, but it just basically comes down to the fact that the data needs to be where you And that is the fundamental shift in terms of how the modern architecture needs to think, So how do you solve that, of it, which is that same data that you have that requires different give him a password kind of scenarios, which is like, you know, there are customers of ours who have And that gives you the ability to do the classic relational you can do that in the same data without you ever having to move the data to a different format. platform that we have built enables us to get it to where you need the data to be, The first part of the, the answer to my question, are you saying you could So it just Jason that we manage, you can do key lookups of the Jason. You can do ad hoc wedding on the analytic side, and you can write your own custom logic on it using our We had queuing systems, all the systems, if you want to use any one of them, our answer has always been, As you know, there's plenty of schema-less data stores. You distribute the index, you distribute the data you have, um, So I often say isn't that the Swiss army knife approach, we have a little teeny scissors and That's not the whole devices available to you to do one type of processing when you want it. because in the morning, you know, I get the alert saying that today you got to leave home at multiple data processing on the same set of data allows you will allow you to build a class the camera shop in my town went out of business, you know? in one, do you have a need for the other things? Um, how do you say close, binding or late binding? is the debate between relational and non-relational databases over Ravi. And you have to call them just to add one column and stuff like that. 
to add 50 more seats to it, the only way you can do that is to go back to Boeing and So the way you scale the plane is also can be customized based on So you can, at the same time, so solve that, change the engine out, you don't need to change the steering wheel or the gas pedal or you got the blue harder to go fast and lean back for, for it to, you know, you know, any of that sort of, uh, jargon, oriented debate, I want to hang with you at the cocktail party because I've learned so much And the Aboriginal Dan was, I mean, very insightful questions really appreciate more great content on the cube.

Ravi Mayuram, Senior Vice President of Engineering and CTO, Couchbase


 

>> Welcome back to theCUBE's coverage of Couchbase Connect online, where the theme of the event is: modernize now. Yes, let's talk about that. And with me is Ravi Mayuram, who's the senior vice president of engineering and the CTO at Couchbase. Ravi, welcome. Great to see you. >> Thank you so much. I'm so glad to be here with you. >> I want to ask you what the new requirements are around modern applications. I've seen some of your comments: you've got to be flexible, distributed, multimodal, mobile, edge. Those are all the very cool buzzwords, smart applications. What does that all mean? And how do you put that into a product and make it real? >> Yeah, I think what has basically happened is that so far it's been a transition of sorts, and now we have come to a tipping point, and that tipping point has come largely because of COVID. COVID has pushed us to a world where we are living in a sort of occasionally connected manner, where our digital interactions precede our physical interactions in one sense. It's a world where we do a lot more of our transactions in a digital manner, as opposed to making more specific human contact. That has really been the accelerant to this "modernize now" theme. In this process, what has happened is that all the databases and all the data infrastructure that we have built historically are very centralized. They're all sitting behind. They used to be in mainframes, from where they came to your own data centers, where we used to run hundreds of servers, to where they're going now; the computing model has changed to consumption-based computing, which is all cloud oriented now. But they are all centralized still, whereas our engagement with the data happens at the edge: at your point of convenience, at your point of consumption, not where the data is actually sitting.
So this has led to, you know, all those buzzwords, as you said, like, well, we need a distributed data infrastructure that reaches the edge. But it basically comes down to the fact that the data needs to be there if you are engaging with it. That means if you are doing it on your mobile phone, or while you're traveling, whether you're in a subway, in a plane, or on a ship, the data needs to come to you and be available, as opposed to you going every time to the data, which is sitting centrally in some place. And that is the fundamental shift in how modern architecture needs to think when it comes to digital transformation and transitioning old applications to the modern infrastructure, because that's what's going to define your customer experiences and your personalized experiences. Otherwise, people are basically waiting for that circle of death we all know, and blaming the networks and other pieces, when the problem is actually that the data is not where you are engaging with it. It's got to be fetched, you know, seven seas away. And that is the problem we are basically solving in this modernization of the data infrastructure. >> I love this conversation, and I love the fact that there's a technical person who can educate us on this, because data by its very nature is distributed. It's always been distributed. But a distributed database has always been incredibly challenging: whether it was a global Sysplex or eventual consistency, getting recovery right for a distributed architecture has been extremely difficult. You know, I hate that it's a terrible term, but there are lots of ways to skin a cat, and you've been the visionary behind this notion of optionality: how to solve technical problems in different ways.
So how do you solve that problem of a super rock-solid database that can handle, you know, distributed data? >> Yes. So there are two issues that you alluded to there. The first is the optionality piece of it, which is that the same data that you have requires different types of processing on it. It's almost like fractional distillation. It is like crude flowing through the system: you start with petrol and you can end up with Vaseline and rayon at the other end, but the raw material, that's our data, in one sense. So far we never treated the data that way. That's part of the problem. It has always been very purpose-built, cast for the first problem at hand, and so you basically have to recast it every time you want to look at the data differently. The first thing that we have done is make the data that fluid. When you have the data, you can first look at it to perform, let's say, a simple operation that we call a key-value store operation: given my ID, give me my password kind of scenarios. You know, there are customers of ours who have billions of user IDs under management. So things get slower. How do you make it fast and easily available? Login should not take more than five milliseconds. This is a class of problem that we solve. That same data, now, without you ever having to recast it into a different database, you can run queries on: our classic SQL queries, which is our next magic. We are a NoSQL database, but we have fully functional SQL. SQL has been the language that has talked to data for 40-odd years, successfully. Every other database has come and tried to implement its own query language, but they've all failed; only SQL has stood the test of time of 40-odd years. Why? Because there's solid mathematics behind it. It's called relational calculus.
And what that gives you is basically that you can look at the data any which way, and it will come back in a format that you can consume. That's the guarantee it gives you, in one sense. And because of that, you can now do some really complex, in-database science, what we call predicate logic, on top of that. And that gives you the ability to do the classic relational-type queries, select star from where kind of stuff, because at an English level it becomes easy. So the same data, without you having to move it to another database and do your transformation of the data and all that stuff, the same data lets you do this. That's where the optionality comes in. Now you can do another piece of logic on top of this, which we call search. This is built on the concept of an inverted index and TF-IDF, the classic Google, in very simple terms: what Google does with tokenized search, you can do on the same data without ever having to move the data to a different format. And then on top of it, you can do what is known as eventing, or your own custom logic, which we do in a programming language called JavaScript. And finally analytics, and analytics is your ability to query the operational data in a different way, ad hoc querying: what were my sales of this widget, year over year, in the first week of December? That's a very complex question to ask, and it takes a lot of different types of processing. So that's optionality: different types of processing on the same data, without you having to go to five different systems, without you having to recast the data in five different ways and apply different application logic. You put them in one place. Now, your second question: this has got to be distributed and made available in multiple clouds, in your data center, all the way to the edge, which is the operational side of the database management system.
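The tokenized-search idea Ravi describes, an inverted index scored with TF-IDF, can be illustrated with a toy index. This is a hypothetical sketch for intuition only; it is not Couchbase's Full-Text Search API, and the class and method names are made up.

```python
import math
from collections import defaultdict

class TinySearchIndex:
    """Toy inverted index with TF-IDF scoring (illustrative only)."""

    def __init__(self):
        self.postings = defaultdict(dict)  # term -> {doc_id: term count}
        self.doc_lens = {}                 # doc_id -> total token count

    def add(self, doc_id, text):
        tokens = text.lower().split()
        self.doc_lens[doc_id] = len(tokens)
        for t in tokens:
            self.postings[t][doc_id] = self.postings[t].get(doc_id, 0) + 1

    def search(self, query):
        n_docs = len(self.doc_lens)
        scores = defaultdict(float)
        for t in query.lower().split():
            docs = self.postings.get(t, {})
            if not docs:
                continue
            idf = math.log(n_docs / len(docs))  # rarer terms weigh more
            for doc_id, count in docs.items():
                tf = count / self.doc_lens[doc_id]  # term frequency in doc
                scores[doc_id] += tf * idf
        return sorted(scores, key=scores.get, reverse=True)

idx = TinySearchIndex()
idx.add("d1", "distributed database for json documents")
idx.add("d2", "json documents and sql queries")
idx.add("d3", "train routes from boston to san francisco")
results = idx.search("json sql")  # d2 matches both terms, so it ranks first
```

The point of the sketch is that the index is just another piece of logic layered over the same documents; nothing about the data had to be recast to make it searchable.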
And that's where the distributed platform that we have built enables us to get the data to where you need it to be. We call it CDN-ing the data, as in content delivery networks. CDNs so far do static movement, moving static content to the edges. Now we can actually move the data dynamically. Imagine the richness of applications you can develop. >> And on the first part of the answer to my question, are you saying you could do this without schema, with no schema, right? And then you can apply those techniques? >> Fantastic question. Yes. That's the brilliance of this database. Classically, databases have always demanded that you first define a schema before you can write a single byte of data. Couchbase is one of the rare databases, I for one don't know of any other, but there could be, let's give the benefit of the doubt, which writes data first and then late-binds to schema, as we call it. It's a schema-on-read thing. Because there is no schema, it is just a JSON document that is sitting inside, and JSON is the lingua franca of the web, as you very well know by now. So it is JSON that we manage. You can do key-value lookups of the JSON. You can do full query capability, like a classic relational database; we even have cost-based optimizers and other sophisticated pieces of technology behind it. You can do searching on it using the full textual analysis pipeline. You can do ad hoc querying on the analytics side, and you can write your own custom logic on it using our eventing capabilities. That's what it allows, because we keep the data in the native form of JSON. It's not a data structure or a data schema imposed by a database; it is how the data is produced. And on top of it, we bring different types of logic, five different types of it. The philosophy is bringing logic to data, as opposed to moving data to logic.
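The "same data, many access patterns" point can be sketched in a few lines. This toy model is purely illustrative, not the Couchbase SDK; `kv_get` and `query` are hypothetical helpers standing in for the key-value and query services. The same JSON documents serve both a constant-time lookup by ID and a schema-on-read scan, with no schema declared up front.

```python
# The same JSON documents, accessed two different ways.
docs = {
    "user::1": {"name": "Maya", "city": "Boston"},
    "user::2": {"name": "Ravi", "city": "San Francisco", "plan": "pro"},
    "order::9": {"user": "user::2", "total": 42.5},
}

def kv_get(key):
    """Constant-time lookup by document ID -- the login-path access."""
    return docs[key]

def query(predicate):
    """Schema-on-read scan: fields are inspected at read time, so the
    documents never had to declare a schema before being written."""
    return [d for d in docs.values() if predicate(d)]

# Key-value access: fast path, by ID.
ravi = kv_get("user::2")

# Ad hoc query on the very same documents: note that only some
# documents even carry a "plan" field -- the schema is late-bound.
pros = query(lambda d: d.get("plan") == "pro")
```

The design point is that neither access path required moving or recasting the documents; each is just different logic brought to the same data.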
This is what we have been doing for the last 40 years, because we developed various database systems and data processing systems at various points in our history. We had key-value stores. We had relational systems, search systems, analytical systems, queuing systems. With all these systems, if you want to use any one of them, our answer has always been: just move the data to that system. Versus, we are saying, do not move the data. As we get bigger and bigger, just moving this data is going to be a humongous problem. If you're going to be moving petabytes of data for this, it's not going to fly. Instead, bring the logic to the data, so you can apply different types of logic to the data. That's, in one sense, the optionality piece of this. >> But as you know, there's plenty of schema-less data stores. They're just, they're called data swamps. I mean, that's what they became, right? So this is some interesting magic that you're applying here. >> Yes. The one problem with the data swamps, as you call them, is that that was a little too open-ended, because the data format itself could change, and then everything became a game of data recasting, because it still required you to have a set schema, in one sense, at the end of the day, for certain types of processing. So there were a lot of gaps; it never really, how do you say, kept the promise it was meant to keep. That's why it was a swamp: it was fundamentally not managing the data. The data was sitting in some file system, and then you are doing something with it. This is a classic database, where the data is managed and you create indexes to manage it, different types of indexes to manage it.
You distribute the index, you distribute the data, and like we were discussing, you have ACID semantics on top. When you put all these things together, it's a tough proposition, but we have solved some really tough problems, good computer science problems, to bring this to bear, to bring this to the market. >> So you predicted the trend around multimodal and converged databases. You kind of led Couchbase through that. I always ask this question because it's clearly a trend in the industry, and it definitely makes sense from a simplification standpoint, so that I don't have to keep switching databases. But the flip side of that, Ravi, and I wonder if you could give me your opinion on this, is the right tool for the right job. So I often say, isn't that the Swiss army knife approach, where you have a little teeny scissors and a knife that's not that sharp? How do you respond to that? >> A great one. My answer is always, I use another analogy to tackle that: have you ever accused a smartphone of being a Swiss army knife? - No. No. >> Nobody does that, because it is actually 40 functions in one, is what a smartphone becomes. You never call your iPhone or your Android phone a Swiss army knife, and here's the reason: you can use that same device in its full capacity. That's what optionality is. It's not like your good old one where there's a keyboard hiding half the screen, and you can do everything only through the keyboard without touching and stuff like that. No, the whole device is available to you to do one type of processing when you want it. When you're done with that, it can do a completely different type of processing. One moment it could be a TomTom, telling you all the directions; the next, it's your PDA. Third one:
It's a fantastic phone. Four: it's a beautiful camera, which can do your f-stop management and give you a nice SLR-quality picture. Next moment, it's a video camera; people are shooting movies with this thing in Hollywood these days, for God's sake. So it gives you the full power of what you want to do, when you want it. And if you just thought that a smartphone is a great device because you can do five things in one, or 50 things in one, at a certain level you missed the point, because what that device really enabled is not just these five things in one place, becoming easy to consume and easy to operate. It actually started the app-based economy. That's the brilliance of bringing so many things into one place. In the morning, I get an alert saying that today you've got to leave home at 8:15 for your nine o'clock meeting. And the next day it might actually say 8:45 is good enough, because it knows where the phone is sitting, the geo position of it. It knows from my calendar where the meeting is actually happening. It can do a traffic calculation because it's got my map and all of the routes. And then it's got this notification system, which eventually pops up on my phone to say, hey, you've got to leave at this time. Now, five different systems have to come together, and they can, because the data is in one place. Without that, you couldn't even do this simple function in a predictable manner, in a manner that's useful to you. So I believe a database which gives you this optionality of doing multiple types of data processing on the same set of data will allow you to build a class of products which you have so far been struggling to build, because half the time you're running sideline to sideline, just integrating data from one system to the other. >> So I love the analogy with the smartphone. I want to continue it and double click on it.
So I use this camera. When my kid had a game, I would bring the big camera, the 35 millimeter. I don't use that anymore, no way, but my wife does; she still uses the DSLR. So is there a similar analogy here? And by the way, the camera shop in my town went out of business, you know? So is that fair? In other words, those specialized databases, there still is a place for them, but they're getting... >> Absolutely, and a great extension to the question. That's the contrarian side of it, in one sense: hey, if everything can just be done in one, do you have a need for the other things? I mean, you gave a camera example, which is a bit of a slippery slope. Let me give you another one, which is actually closer to the point. Just because I listen to half of my music on the iPhone doesn't stop me from having my full digital receiver and, you know, my Harman Kardon speakers at home, because they produce a kind of sound, an immersive experience, that this teeny little speaker was never in its lifetime intended to produce, right? It's the convenience of convergence: I can put my earphones on and listen to all the great music. It's 90% there, or 80% there; it depends on the audiophile in you. The super-specialized ones do not go away. There are places where the specialized use cases will demand a separate system to exist. But even there, that has got to be, how do you say, late binding. I should be able to stream that song from my phone to that receiver so I can get it from those speakers. You can't say that, oh, there's a digital divide between these two things, and I can only play CDs on that one. That's not how it's going to work going forward.
It's going to be a connected world, right? If I'm listening to a song in my car and then step out of the car and walk into my living room, that same song should continue and play on my living room speakers. It's a connected world because it knows my preference and what I'm doing, and that all happens only because of this data flowing between all these systems. >> I love that example too. When I was a kid, we used to go to Tweeter Etc., and we used to play around with, we'd take home the big four-foot speakers. Those stores are out of business too. - Absolutely. - And now we just plug into Sonos. So, is the debate between relational and non-relational databases over, Ravi? >> I believe so, because I think what had happened was the relational systems had been the norm; they ruled the roost, if you will, for the last 40-odd years. And then came this NoSQL movement, which was almost a rebellion against the relational world we all inhabited, because it was very restrictive. It had the schema definition and the schema evolution, as we call it, all those things. They required a committee; they required your DBA and your data architect, and you had to call them just to add one column and stuff like that. And the world had moved on. This was a world of blogs and tweets and mashups, a different generation of digital behavior. There are digital-native people now who are operating in these consumer-facing applications. We are living in this world, and yet the enterprise ones were still living on the other side of the divide. So out came this solution to say that we don't need SQL. Actually, the problem was never SQL.
NoSQL was, you know, a best approximation, a good marketing name, but from a technologist's perspective, the problem was never the query language. NoSQL was not the problem; it was the schema limitations and the inability of these systems to scale. The relational systems were built like airplanes, which is: if between San Francisco and Boston there is a flight route so popular that you want to add 50 more seats to it, the only way you can do that is to go back to Boeing and ask them to upgrade you from a 737 to a 777, or whatever it is. And they'll stick you with a billion-dollar bill, on the assumption that you'll somehow pay it back by, you know, flying more people or raising the fares or whatever you have to do. These are vertically scaling systems. So relational systems are vertically scaling, and they are expensive. Versus what we have done in this modern world is make the system horizontally scaling, which is more like a train. If it's a train going from San Francisco to Boston and you need 50 more people, be my guest: I'll add one more coach to it, one more car to it. And the better part of the way we have done this is that we have super-specialized it. This route actually requires three dining cars and only 10 sleeper cars, or whatever; then just pick those and attach them. On the next route, you can choose to have only one dining car, that's good enough. So the way you scale the train can also be customized based on the route: a longer route, more dining capability; a shorter route, no dining capability. You can attach the kinds of coaches you need. We call this multi-dimensional scaling: not only do we scale horizontally, we can scale to different types of workloads by adding different types of coaches to it. So that's the beauty of this architecture. Now, why is that architecture important? Because where we land eventually is the ability to do operational and analytical in the same place.
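The train-coach analogy can be modeled as a toy cluster where each node is tagged with a single service and each workload scales by attaching nodes of just that type. This is a hypothetical sketch of the multi-dimensional scaling idea, not Couchbase's actual implementation; all names are illustrative.

```python
from collections import Counter

class Cluster:
    """Toy cluster: each node (coach) runs exactly one service type."""

    def __init__(self):
        self.nodes = []  # one service name per node

    def add_node(self, service):
        """Attach one more coach of the given type to the train."""
        self.nodes.append(service)

    def capacity(self):
        """How many coaches of each type the route currently has."""
        return Counter(self.nodes)

# A route that needs mostly key-value capacity, plus some query
# and analytics capacity.
cluster = Cluster()
for svc in ["kv", "kv", "kv", "query", "analytics"]:
    cluster.add_node(svc)

# Query traffic grows: attach one more "query" coach only, without
# touching the key-value or analytics tiers.
cluster.add_node("query")
```

The contrast with vertical scaling is that growing one workload never means replacing the whole "airplane"; you only add coaches of the type that is saturated.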
This is another thing which didn't happen in the past, because you would say, I cannot run this analytical query, because then my operational workload will suffer, then my front end will slow down, and millions of customers get impacted. We solved that problem: on the same data, you can do an analytical query and an operational query, because they're separated by these cars, right? We fence the resources so that one doesn't impede the other. So you can, at the same time, have microsecond-level, 10 million ops per second happening on a key-value lookup or a query, and yet you can run this analytical query, which will take a couple of minutes, one not impeding the other. So that's, in one sense, part of the problem we have solved here, the relational versus the NoSQL portion of it. These are the kinds of problems we had to solve. We solved those, and then we put the same query language back on top. Why? It's like Tesla, in one sense. Underneath the surface is where all the stuff that had to be changed, changed: the gasoline, the internal combustion engine, the emissions. These were the issues we really wanted to solve. So solve that, change the engine out; you don't need to change the steering wheel or the gas pedal or, you know, the paddle shifters or whatever else you have there, your gear shifters. Those need to remain in the same place. Otherwise people won't buy it; otherwise it does not even look like a car to people. So even when you give people the most advanced technology, it's got to be accessible to them in a manner that people can consume. Only in software do we forget this first design principle, and we go and say, well, I've got a car here: you blow harder to go fast, and you lean back to apply the brake. That's how we seem to design software.
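The multi-statement transaction idea that follows, several mutations that either all commit or all roll back, can be sketched with an in-memory store. This is a toy model of ACID atomicity for intuition only, not Couchbase's transaction API; the class and function names are made up.

```python
class Store:
    """Toy store where a transaction stages all statements on a
    private copy and swaps it in only if every statement succeeds."""

    def __init__(self):
        self.data = {}

    def transact(self, statements):
        staged = dict(self.data)      # work on a private copy
        try:
            for stmt in statements:
                stmt(staged)          # each statement mutates the copy
        except Exception:
            return False              # roll back: the copy is discarded
        self.data = staged            # commit: swap in atomically
        return True

store = Store()
store.data = {"alice": 100, "bob": 0}

def debit(d):  d["alice"] -= 30
def credit(d): d["bob"] += 30

ok = store.transact([debit, credit])   # both apply together

def bad(d): raise RuntimeError("simulated conflict")

failed = store.transact([debit, bad])  # nothing applies at all
```

The point is the all-or-nothing guarantee: the failed transaction leaves the store exactly as the committed one left it, which is what "multi-statement SQL transactions" promise at database scale.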
Instead, we should be designing them in the manner that is easiest for our audience, which is developers, to consume. And they've been using SQL for 30 or 40 years. So we give them the steering wheel and the gas pedal and the gear shifters by putting SQL back on top; underneath the surface, we have completely solved the relational limitations of schema as well as scalability. And by bringing back the classic ACID capabilities that relational systems were counted on for, and being able to do that with the SQL programming language, we call it multi-statement SQL transactions, so to say, which is the classic way all enterprise software was built. By putting that back, I can now say that the debate between relational and non-relational is over, because this has truly extended the database to solve the problems that relational systems had to grow up to solve in modern times, rather than getting pedantic about whether we have NoSQL or SQL or NewSQL or, you know, any of that jargon-oriented debate. These are the debates of computer science, and we have solved them with the latest release, 7.0, which we released a few months ago. >> Right, right, last July. Ravi, we've got to leave it there. I love the examples and the analogies. I can't wait to be face-to-face with you. I want to hang with you at the cocktail party, because I've learned so much and really appreciate your time. Thanks for coming to theCUBE. >> Fantastic. Thanks for the time and the opportunity. I mean, very insightful questions, really appreciate it. Thank you. >> Okay, this is Dave Vellante. We're covering Couchbase Connect online. Keep it right there for more great content on theCUBE.

Published Date : Oct 1 2021



VeeamON Power Panel | VeeamON 2021


 

>> Hello everyone, and welcome to VeeamON 2021. My name is Dave Vellante, and you're watching theCUBE's continuous coverage of the event. You know, Veeam is a company that made its mark riding the virtualization wave, but quite amazingly has continued to extend its product portfolio and catch the other major waves of the industry. Of course, we're talking about cloud backup; SaaS data protection, where it was one of the early players; and making moves in containers. And this is the VeeamON power panel. With me are Danny Allan, who is the CTO and Senior Vice President of Product Strategy at Veeam; Dave Russell, the Vice President of Enterprise Strategy, also at Veeam; and Rick Vanover, Senior Director of Product Strategy at Veeam. It's great to see you again. Welcome back to theCUBE. >> Good to be here. >> Glad to be here. >> Yeah, let's do it. >> Let's do this. So Danny, you know, we heard your keynotes, and we saw the general sessions, and we've been diving into the breakouts. But the thing that jumps out to me is this growth rate that you're on. You know, many companies, and we've seen this throughout the industry, have really struggled moving from the traditional on-prem model to an ARR model; they've had challenges doing so. I mean, you're not a public company, but you're quite transparent with a lot of your numbers: 25% ARR growth year over year in the last quarter, 400,000-plus customers, huge numbers of downloads of Backup & Replication. Danny, what are your big takeaways from the last 6-12 months? I know it was a strange year, obviously, but you guys just keep cranking. >> Yeah, so we're obviously hugely excited by this, and it really is a confluence of various things. It's our partners, it's the channel, it's our customers, frankly, that guide us and give us direction on what to do.
But I always focus in on the product, because, you know, we run product strategy here, this group, and we're very focused on building good products. And I would say there are three product areas on maximum thrust right now. One is the data center. We built a billion-dollar business on being the very best in the data center, for vSphere, Hyper-V, for Nutanix AHV, and, as we announced, also for Red Hat Virtualization. So the data center is obviously a huge thrust for us going forward. The second is SaaS. Office 365 is exploding; we already announced we're protecting 5.8 million users right now with Veeam Backup for Office 365, and there's a lot of room to grow there. There are 145 million daily users of Microsoft Teams, so a lot of room to grow. And then the third area is cloud. We moved over 100 petabytes of data into the public cloud in Q1, and there's a lot of opportunity there as well. So those three things are driving the growth: the data center, SaaS, and cloud. >> Dave, I want to get your kind of former-analyst perspective on this. You know, I know it's kind of become cliche, but you've still got that DNA, and I'm going to tap it. So think back to when you were following Veeam, of course very closely, during its ascendancy with virtualization. Back then, you wouldn't just take your existing approaches to backup and your processes and slap them onto virtualization; that wouldn't have worked. You had to rethink your backup. And it seems like the same applies now, so I want to ask you about cloud, because people talk about lift and shift, and what I hear from customers is, you know, if I just lift and shift to the cloud, it's okay, but if I don't have a plan to change my operating model, I don't get the real benefit out of it. And so I would think backup, data protection, data management, et cetera, are a key part of that. So how are you thinking about cloud and the opportunity there?
You know, I think the key area right there is it's important to protect the workload of the environment. The way that that environment is naturally is best suited to be protected and also to interact in a way that the administrator doesn't have to rethink, doesn't have to change their process so early on. Um I think it was very successful because the interface is the work experience looked like what an active directory administrator was used to, seeing if they went to go and protect something with me where to go recover an item. Same is true in the cloud, You don't want to just take what's working well in one area and just force it, you know, around round peg into a square hole. This doesn't work well. So you've got to think about the environment and you've got to think about what's gonna be the real use case for getting access to this data. So you want to really tune things and there's obviously commonality involved, but from a workflow perspective, from an application perspective and then a delivery model perspective, Now, when it comes to hybrid cloud multi cloud, it's important to look like that you belong there, not a fish out of water. >>Well, so of course, Danny you were talking to talking about you guys have product first, Right? And so rick your your key product guy here. What's interesting to me is when you look at the history of the technology industry and disruption, it's it's so often that the the incumbent, which you knew now an incumbent, you know, you're not the startup anymore, but the incumbent has challenges riding these these new waves because you've got to serve the existing customer base, but you gotta ride the new momentum as well. So how rick do you approach that from a product standpoint? Because based on the numbers that we see it doesn't you seem to be winning in both the traditional business and the new business. So how do you adapt from a product standpoint? >>Well, Dave, that's a good question. And Danny set it up? 
Well, it's really the birth of the Veeam platform and its relevance in the market. In my 11th year here at Veeam, I've had all kinds of conversations, right? You know, the perception was that this was an SMB toy for one hypervisor; those days are long gone. We can check the boxes across the data center and cloud and even cloud-native apps. You know, one of the things that my team has done is invest heavily in both people and staff on Kubernetes, which aligns with our Kasten acquisition, which was featured heavily here at VeeamON. So I think that being able to have that complete platform conversation, Dave, has really given us incredible momentum, but also credibility with the customers, because more than ever, this fundamental promise of having data backed up and being able to drive a recovery from whatever may happen to data nowadays, you know, that's a real, emotionally important thing for people. And being able to bring that kind of outcome across the data center, across the cloud, across changes in what they do, Kubernetes, that's aligned really well with our success. And, you know, I love talking to customers now. It's a heck of a lot easier when you can say yes to so many things and get the technical win. So that kind of drives a lot of the momentum, Dave, but it's really the platform. >> So let's talk about the future of IT, and I want all you guys to chime in here. Danny, you start us off. How do you see it? I mean, I always say the next 10 years ain't gonna be like the last 10 years, whether it's in cloud or hybrid, et cetera. So how, Danny, do you see the future of IT? Where do you see Veeam fitting in? How does that inform your roadmap, your product strategy? Maybe you could kick that segment off. >> Yeah, I think of the two past eras that we've gone through. Starting back in 2000, we had a lot of digital services built for end users, and it was built on physical infrastructure, and that was fantastic.
Obviously we could buy things online, we could order clothes, we could order food, we could interact as end users. The second era, about a decade later, was based on virtualization. Now, that wasn't a benefit so much to the end user as a benefit to the business, and the why is because you could put 10 servers on a single physical server and be a lot more flexible in terms of delivery. I really think this next era that we're going into is actually based on containers. That's why the Kasten acquisition is so strategic to us. Because the unique thing about containers is they're designed to be consumption friendly. You spin them up, you spin them down, you provision them, you deprovision them, and they're completely portable. You can move from on-premises, if you're running OpenShift, to EKS, AKS, GKE. And so I think the next big era that we're going to go through is this movement towards containerized infrastructure. Now, if you ask me who's running that, I still think there's going to be a data center operations team, platform ops is the way that I think about them, who run that, because who's going to take the call in the middle of the night? But it is interesting that we're going through this transformation, and I think we're in the very early stages of this radical transformation to a more consumption-based model. Dave, I don't know what you think about that. >> Yeah, I would say something pretty similar, Danny. It sounds cliche, Dave Vellante, but I take everything back to digital transformation. And the reason I say that is, to me, digital transformation is about improving customer intimacy so that you can deliver goods and services that better resonate, and you can deliver them in a better time frame. So exactly what Danny said, you know, I think about the siloed approaches of the past, where we built very hardened environments, and we were willing to take a long time to stand those up, and then we had very tight change control.
I feel like 2020 is sort of a metaphor for where the data center is going: throw all that out the window. We're compiling today, we're shipping today, we're going to get experience today, and we're going to refine it and do it again tomorrow. But that's the environment we live in, and that's, to Danny's point, why containers are so important. That notion of shift left, meaning experience things earlier in the cycle, that is going to be the reality of the data center, regardless of whether the data center is on-prem, hybrid cloud, multi-cloud, or, for some of us, potentially completely in the cloud. >> So Rick, when you think about some of your peeps, like the backup admin, right, and how that role is changing: there's a big discussion in the economy now about the sort of skills gap; we've got all these jobs, and yet there's still all this unemployment. Now, you know, there's debate about the reasons why, but there's a transition in roles in terms of how people are using products, and obviously containers bring that. What are you seeing when you talk to, as I called them, your peeps? >> Yeah, it's
And if you're in these communities you can find these individuals, you can talk their language, you can resonate with their needs, right? So that's something uh you know, everything from Levin marketing strategy to the community strategy to even just seating products in the market, That's a recipe that beam does really well. So yeah, it's a moving target for sure. >>Dave you were talking about the cliche of digital transformation and I'll say this may be pre Covid, I really felt like it was a cliche, there was a lot of, you know, complacency, I'll call it, but then the force marks the digital change that uh and now we kind of understand if you're not a digital business, you're in trouble. Uh And so my question is how it relates to some of the trends that we've been talking about in terms of cloud containers, We've seen the SAs ification for the better part of a decade now, but specifically as it relates to migration, it's hard for customers to just migrate their application portfolio to the cloud. Uh It's hard to fund it. It takes a long time. It's complex. Um how do you see that cloud migration evolving? Maybe that's where hybrid comes in And again, I'm interested in how you guys think about it and how it affects your strategy. >>Yeah. Well it's a complex answer as you might imagine because 400,000 customers, we take the exact same code. The exact same ice so that I run on my laptop is the exact same being backup and replication image that a major bank protects almost 20,000 machines and a petabytes of data. And so what that means is that you have to look at things on a case by case basis for some of us continuing to operate proprietary systems on prem might be the best choice for a certain workload. But for many of us the Genie is kind of out of the bottle with 2020 we have to move faster. It's less about safety and a lot more about speed and favorable outcome. We'll fix it if it's broken but let's get going. 
So for organizations struggling with how to move to the cloud, believe it or not, backup and recovery is an excellent way to start to venture into that, because backup is, at its heart, a data movement engine; you can start to seed data there where it makes sense. But Rick would be quick to point out that we want to offer a safe return. We have instances where people want to repatriate data back, and having a portable data format is key to that. Rick? >> Yeah, I had a conversation recently with an organization managing cloud sprawl. They decided to consolidate: we're going to use this cloud. So it was removing a presence from one cloud that starts with an A and migrating it to the other cloud that starts with an A, you know? So yeah, we've seen that need for portability, repatriation, on-prem; a classic example is going from on-prem apps to software-as-a-service models for critical apps. So data mobility is at the heart of Veeam, and with all the different platforms, Kubernetes comes into play as well. It's definitely aligned with the needs that we're seeing in the market, for sure. >> So repatriation, I want to stay on that for a second, because you're an arms dealer: you don't care if they're in the cloud or on-prem, and, I don't know, maybe you make more money in one or the other, but you're going to ride whatever waves the market gives you. So repatriation to me implies, or maybe I'm just inferring, that somebody's moved to the cloud and they feel like, wow, we've made a mistake: it was too fast, too expensive, it didn't work for us, so now we're going to bring it back on-prem. Is that what you're saying? Or are you saying they actually want their data in both places, as another layer of data protection? Danny, I wonder if you could address that. What are you seeing?
>> Well, one of the interesting things that we saw recently, and Dave Russell actually did the survey on this, is that customers will actually build their workloads in the cloud with the intent to bring them back on premises. So that repatriation is real; customers don't just accidentally fall into it, they intend to do it. And the thing about Veeam: everyone says, hey, we're disrupting the market, we're helping you go through this transformation, we're helping you go forward. I actually take a slightly different view of this. Veeam gives customers the confidence that they can move forward if they want to, but if they don't like it, then they can move back. So we give them stability through this incredible pace of change and innovation. We're moving forward so quickly, but we give them the ability to move forward if they want, and then to recover, to repatriate, if that's what they need to do, in a very effective way. And Dave, maybe you can touch on that study, because I know that you talked to a lot of customers who do repatriate workloads after moving them to the cloud. >> Yeah, it's kind of funny, Dave. I'm not in the analyst business right now, but thanks to Danny and our Chief Marketing Officer, we've now got half a dozen different research surveys that have either just completed or are in flight, including the largest in the data protection industry's history. And in the survey that Danny alluded to, what we're finding is that people are learning as they're going, and in some cases what they thought would happen when they went to the cloud, they did not experience. The kind of funny slide we discovered: when we asked people what they liked most about going to the cloud, and then what they liked least, the two lists looked very similar. In some cases people said, oh, it was more stable; in other cases people said, no, it was actually unstable. So, Rick, I would suggest that it really depends on the practice that you bring to it.
It's like moving from a smaller house to a larger house and hoping that it won't be messy again. Well, if you don't change your habits, it's eventually going to end up in the same situation. >> Well, there's still door number three, and that's data reuse and analytics. I've found a lot of organizations love the idea of at least manipulating data, running test/dev scenarios on yesterday's production cloud workload completely removed from the cloud, or even just analytics: I need this file. You know, those types of scenarios are very easy to do today with Veeam. And, you know, sometimes those repatriations, those portable recoveries, sometimes people do that intentionally, but sometimes they have to do it, whether it's fire, flood, and blood, and, you know, oh, it looks like today we're moving to the cloud, because I've lost my data center, right? Those are scenarios that that portable data format really allows organizations to handle pretty easily with Veeam. >> It's a good discussion, because to me, repatriation has this negative connotation, the zero-sum game, and it's not; what you describe, Danny, and Rick as well, is kind of an experimentation, purposeful: we're going to do it in the cloud because we can, and it's cheap and low-risk to spin it up, and then we're going to move it, because we've always thought we'd have it on-prem. So, you know, there is some zero-sum game between the cloud and on-prem, clearly, no question about it. But there's also this rising-tide-lifts-all-ships dynamic. I want to change the subject to something that's super important and top of mind; it's in the press, and it ain't going away, and that is cyber, and specifically ransomware. I mean, the SolarWinds hack: it seems to me that was a new milestone in the capabilities and aggressiveness of the adversary, who is very well funded and quite capable.
And what we're seeing is this idea of tucking into the supply chain, so-called island hopping. You're seeing malware that's self-forming and takes different signatures, very stealthy. And the big trend that we've seen in the last six months or so is that the bad guys will lurk and steal all kinds of sensitive data, and then, when you have an incident response, they will punish you for responding. They'll say, okay, fine, you want to do that? We're going to hold you ransom, we're going to encrypt your data. And, oh, by the way, we stole this list of positive COVID test results, with names, from your website, and we're going to release it if you don't pay. I mean, so you have to be stealthy in your incident response. And this is a huge problem; we're talking about trillions of dollars lost each year to cybercrime. And so, you know, again, the bad news is good news for companies like you. But how do you help customers deal with this problem? What are you seeing, Danny? Maybe you can chime in, and others who have thoughts. >> Well, we're certainly seeing the rise of cyber threats like crazy right now, and we've had a focus on this for a while, because if you think about the last line of defense for customers, especially with ransomware, it is having secure backups. So whether it be, you know, hardened Linux repositories, or making sure that you can store the data, have it offline, have it encrypted, immutable, those are things that we've been focused on for a long while. It's more than that, though: it's detection and monitoring of the environment, which is certainly something we do with our monitoring tools, and then also secure recovery. The last thing that you want to do, of course, is bring your backups or bring your data back online only to be hit again. And so we've had a number of capabilities across our portfolio to help in all of these.
But I think what's interesting is where it's going. If you think about unleashing a world where we're continuously delivering, I look at things like containers, where you have continuous delivery, and I think: every time you run that Helm command, every time you run that Terraform command, wouldn't that be a great time to do a backup, to capture your data so that you don't have an issue once it goes into production? So I think we're going towards a world where security and the protection against these cyber threats is built into the supply chain, rather than happening on just a time-based schedule. And I know, Rick, you're pretty involved on the cyber side as well. Would you agree with that? >> I would. And, you know, for organizations that are concerned about ransomware, this is something that is taken very seriously, and what Danny explained maps, for those who are familiar with security, onto a universally accepted framework, the NIST cybersecurity framework. There are five functions that are a really good recipe for how you can go about this. And my advice to IT professionals and decision makers across the board is to really align everything you do to that framework. Backup is a part of it; security monitoring and user training and all those other things are areas that need to follow that wheel of functions. And my little tip here, and this is where I think we can introduce some differentiation, is around detection and response. A lot of people think a backup product would shine in both protection and recovery, which it does, Veeam does, but especially on response and detection, you know, we have a lot of capabilities that become real opportunities for organizations to drive successful outcomes through the other functions. So it's something we've worked on a lot. In fact, we've covered it here at the event, and I'm pretty sure the updated white paper will be on replay.
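Danny's idea of building protection into the delivery pipeline, rather than running it on a nightly schedule, can be sketched as a simple wrapper. This is an illustrative sketch only, with hypothetical function names (not a real Veeam, Helm, or Terraform API): every release step is preceded by a backup capture, so there is always a restore point from just before the change.

```python
# Illustrative sketch only -- protection built into the supply chain: wrap any
# release step (a helm upgrade, a terraform apply) so a backup snapshot is
# captured first, instead of waiting for the next scheduled backup window.

from datetime import datetime, timezone

backup_log = []  # stand-in for a real backup catalog

def take_backup(app: str) -> str:
    """Stand-in for a real pre-deploy backup call (e.g. triggering a policy run)."""
    snapshot_id = f"{app}-{datetime.now(timezone.utc):%Y%m%dT%H%M%S}"
    backup_log.append(snapshot_id)
    return snapshot_id

def deploy(app: str, release_fn):
    """Capture state before the change, then run the release step."""
    snapshot = take_backup(app)
    try:
        release_fn()
    except Exception:
        # The pre-change snapshot is the obvious restore candidate.
        print(f"deploy of {app} failed; restore candidate: {snapshot}")
        raise

deploy("billing", lambda: print("helm upgrade billing ..."))
```

The point of the sketch is the ordering guarantee: the backup is not a separate cron job that might or might not have run recently; it is a precondition of the deploy itself.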
All those other resources for different levels can definitely guide them through. >> So to follow up on the detection: is that analytics that help you identify, say, lateral movement, or people going places they shouldn't go? I mean, the hard part is, you know, the bad guys are living off the land, meaning they're using your own tooling to hack you. So it's not like they're introducing something new that shouldn't be there; they're just making judo moves against you. So specifically, talk a little bit more about your detection, because that's critical. >> Sure. So I'll give you one example. Imagine we capture some data in the form of a backup. Now, we have existing advice that says: don't give your backup infrastructure internet connectivity, use explicit, minimal permissions, and keep it up to date. Those things right there will hedge off a lot of the different threat vectors to the backup data. Couple that with some of the immutability, offline, or air-gapped capabilities that Danny mentioned, and you have an additional level of resiliency that can really ensure you can drive a recovery. From an analytics standpoint, we have an API that allows organizations to look into the backup data: do more aggressive scanning, without any exclusions, with different tools, on a flat file system, where the threats can't jump around in memory. Couple that with secure restore: when you reintroduce things into the environment from a recovery standpoint, you don't want to reintroduce threats. So there are protections, there are confidence-building steps along the way with Veeam, and these are all generally available technologies. So again, I've got this white paper, I think we're up to 50 pages now, but it's very thorough and goes through a couple of those scenarios. But, you know, it gets quickly into things that you wouldn't expect from a backup product.
Please send me a copy, if you don't mind. This is a huge problem, and you guys are a global company. I admittedly have a bit of a US bias, but I was interviewing Robert Gates one time, the former defense secretary, and we were talking about cyber war, and I said, don't we have the best cyber capabilities? Can't we go on the offense? He goes, yeah, we can, but we've got the most to lose. So this is really a huge problem for organizations. All right, guys, last question I've got to ask you: what's life like under private equity, inside Insight's portfolio? What's changed? What's the same? Do you hear from our good friend Ratmir at all? Give us the update there. >> Yes, oh, absolutely fantastic. You know, it's interesting. We were obviously acquired by Insight Partners in February of 2020, right when the pandemic was hitting, but they essentially said, light the fuse, keep the engines going. And we've certainly been doing that. They haven't held us back. We've been hiring like crazy; we're up to, I don't know what the count is now, I think 4,600 employees. But, you know, people think of private equity and they think of cost optimizations, of optimizing the business. That's not the case here. This is a growth opportunity, and it's a growth opportunity simply because of the technology opportunity in front of us, to keep the engines going. So we hear from Ratmir, you know, on and off. But the new executive team at Veeam is very passionate about driving success in the industry and keeping abreast of all the technology changes. It's been fantastic. Nothing but good things to say. >> Yes, Insight Partners, they're players; we watch their moves. And so, you know, I heard Bill McDermott, the CEO of ServiceNow, the other day talking about what he called the rule of 60, where, you know, I always thought it was EBITDA plus growth, add that up, but he was talking about free cash flow.
He's sort of changing the definition a little bit. But so, what are you guys optimizing for? Are you optimizing for growth? Are you optimizing for EBITDA? Are you optimizing for free cash flow? I mean, you can't do all three, right? How do you think about that? >> Well, we're definitely optimizing for growth, no question. And one of the things that we've actually done in the past 12 to 18 months is begin to focus on annual recurring revenue. You see this in our statements. I know we're not public, but we talk about the growth in ARR. So we're certainly focused on that growth in annual recurring revenue, and that's really what we track to. And it aligns well with the cloud: if you look at the areas where we're investing, cloud-native, the cloud, and SaaS applications, it's very clear that that recurring revenue model is beneficial. Now, we've been lucky; I think we're at 13 straight quarters of double-digit growth, and obviously they don't want to see that dip, they want to see that growth continue. But we are optimizing on the growth trajectory. >> Okay, and you clearly had 25% ARR growth last quarter, and if I recall correctly, the valuation was $5 billion last January. So given that strategy, Dave Russell, that says your TAM is a lot bigger than just the traditional backup world. So how do you think about TAM? We'll close there.
However, go back to what we've been talking around digital transformation, Danny talking about containers in the environment, deployment models changing. At the heart of backup and recovery, we're a data capture, data management, data movement engine. We envision being able to do that not only for availability, but to be able to drive the business forward, to be able to drive economies of scale faster for the organizations that we serve. I think the trick is continuing to do more of the same. Danny mentioned the list we've got. We haven't stopped doing anything. In fact, Danny, I think we're doing like 10 times more of everything that we used to be doing prior to the pandemic. >> All right, Danny, we'll give you the last word, bring it home. >> So our goal has always been to be the most trusted provider of backup solutions that deliver modern data protection. And I think folks have seen at VeeamON this year that we're very focused on that modern data protection. Yes, we want to be the best in the data center, but we also want to be the best in the next generation, the next generation of IT. So whether it be SaaS, whether it be cloud, Veeam is very committed to making sure that our customers have the confidence that they need to move forward through this digital transformation era. >> Guys, I miss flying. I mean, I don't miss flying, but I miss hanging with you all. We'll see you, for sure. VeeamON 2022 will be belly to belly, but thanks so much for coming on the virtual edition. >> Thanks for having us. >> Thank you. >> All right. And thank you for watching, everybody. This is theCUBE's continuous coverage of VeeamON 2021, the virtual edition. Keep it right there for more great coverage.

Published Date : May 26 2021



Breaking Analysis: Best of theCUBE on Cloud


 

>> Narrator: From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is "Breaking Analysis" with Dave Vellante. >> The next 10 years of cloud are going to differ dramatically from the past decade. The early days of cloud deployed virtualization of standard off-the-shelf components, x86 microprocessors, disk drives, et cetera, to then scale out and build large distributed systems. The coming decade is going to see a much more data-centric, real-time, intelligent, call it even hyper-decentralized cloud that will comprise on-prem, hybrid, cross-cloud and edge workloads, with a services layer that will abstract the underlying complexity of the infrastructure, which will also comprise much more custom and varied components. This was a key takeaway of the guests from theCUBE on Cloud, an event hosted by SiliconANGLE on theCUBE. Welcome to this week's Wikibon CUBE Insights Powered by ETR. In this episode, we'll summarize the findings of our recent event and extract the signal from our great guests with a series of comments and clips from the show. CUBE on Cloud is our very first virtual editorial event. It was designed to bring together our community in an open forum. We ran the day on our 365 software platform and had a great lineup of CEOs, CIOs, data practitioners and technologists. We had cloud experts, analysts and many opinion leaders, all brought together in a day-long series of sessions that we developed in order to unpack the future of cloud computing in the coming decade. Let me briefly frame up the conversation and then turn it over to some of our guests. First, we put forth our view of how modern cloud has evolved and where it's headed. This graphic that we're showing here talks about the progression of cloud innovation over time. Cloud, like many innovations, started as a novelty. 
When AWS announced S3 in March of 2006, nobody in the vendor or user communities, really even in the trade press, paid too much attention to it. Then later that year, Amazon announced EC2 and people started to think about a new model of computing. But it was largely tire kickers and bleeding-edge developers that took notice and really leaned in. Now, the financial crisis of 2007 to 2009 really created what we call a cloud awakening, and it put cloud on the radar of many CFOs. Shadow IT emerged within departments that wanted to take IT in bite-sized chunks, and along with the CFO wanted to take it as OPEX versus CAPEX. And then IT transformation really took hold. We came out of the financial crisis and we've been on an 11-year cloud boom. And it doesn't look like it's going to stop anytime soon; cloud has really disrupted the on-prem model as we've reported and completely transformed IT. Ironically, the pandemic hit at the beginning of this decade and created a mandate to go digital. And so it accelerated the industry transformation that we're highlighting here, which probably would have taken several more years to mature, but overnight the forced march to digital happened. And it looks like it's here to stay. Now the next wave, we think, will be much more about business or industry transformation. We're seeing the first glimpses of that. Holger Mueller of Constellation Research summed it up at our event very well, I thought. He basically said the cloud is the big winner of COVID. Normally we talk about seven-year economic cycles; he was talking about planning and investment cycles. Now we operate in seven-day cycles. The examples he gave: where do we open or close the store? How do we pivot to support remote workers without the burden of CAPEX? 
And we think that the things listed on this chart are going to be front and center in the coming years: data, AI, a fully digitized and intelligent stack that will support next-gen disruptions in autos, manufacturing, finance, farming and virtually every industry, where the system will expand to the edge. And the underlying infrastructure across physical locations will be hidden. Many issues remain, not the least of which is latency, which we talked about at the event in quite some detail. So let's talk about how the Big 3 cloud players are going to participate in this next era. Well, in short, the consensus from the event was that the rich get richer. Let's take a look at some data. This chart shows our most recent estimates of IaaS and PaaS spending for the Big 3. And we're going to update this after earnings season, but a couple of points stand out. First, we want to make the point that combined, the Big 3 now account for almost $80 billion of infrastructure spend last year. That $80 billion was not all incremental (laughs). No, it has caused consolidation and disruption in the on-prem data center business, and within IT shops, companies like Dell, HPE, IBM, Oracle and many others have felt the heat and have had to respond with hybrid and cross-cloud strategies. Second, while it's true that Azure and GCP appear to be growing faster than AWS, we don't know the exact numbers, of course, because only AWS provides a clean view of IaaS and PaaS; Microsoft and Google kind of hide the ball in their numbers, which by the way I don't blame them for, but they do leave breadcrumbs and clues on growth rates. And we have other means of estimating through surveys and the like, but it's undeniable Azure is closing the revenue gap on AWS. Third, even though Azure and Google are growing faster, AWS is the only company by our estimates to grow its business sequentially last quarter. 
And in and of itself, that's not the important part. What is significant is that because AWS is so large now, at $45 billion, even at its slower growth rate it grows much more in absolute terms than its competitors. So we think AWS is going to keep its lead for some time. We think Microsoft and AWS will continue to lead the pack. They might converge; maybe it will be a two-horse race in terms of who's first and who's second in cloud revenue, depending on what they count in their numbers. And Google, with its balance sheet and global network, is going to play the long game. And virtually everyone else, with the exception of perhaps Alibaba, is going to be a secondary player on these platforms. Now, this next graphic underscores that reality and kind of lays out the competitive landscape. What we're showing here is survey data from ETR of more than 1400 CIOs and IT buyers. On the vertical axis is Net Score, which measures spending momentum; on the horizontal axis is so-called Market Share, which is a measure of pervasiveness in the data set. The key points are AWS and Microsoft: look at it, they stand alone, so far ahead of the pack that they would literally have to fall down to lose their lead. High spending velocity and a large share of the market are the hallmarks of these two companies. And we don't think that's going to change anytime soon. Now, Google, even though it's far behind, has the financial strength to continue to position itself as an alternative to AWS, and of course as an analytics specialist. So it will continue to grow, but it will be challenged, we think, to catch up to the leaders. Now take a look at the hybrid zone, where the field is playing. These are companies that have a large on-prem presence and have been forced to initiate a coherent cloud strategy, of course including multicloud. 
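The absolute-versus-relative growth point rests on simple arithmetic: a large base growing slowly can add more dollars than a smaller base growing fast. In the sketch below, only the roughly $45 billion AWS run rate comes from the analysis; the growth rates and the competitor's base are invented round numbers for illustration:

```python
# Hypothetical figures illustrating absolute vs. relative growth.
# Only the $45B base is from the analysis; the rest are assumptions.
def absolute_growth(revenue_b: float, growth_rate: float) -> float:
    """Dollars (in $B) of revenue added over one period at the given rate."""
    return revenue_b * growth_rate

aws_added   = absolute_growth(45.0, 0.28)  # $45B base, assumed 28% growth
rival_added = absolute_growth(25.0, 0.48)  # assumed smaller, faster rival
print(f"large base adds ${aws_added:.1f}B, small base adds ${rival_added:.1f}B")
```

Even though 48% is well above 28%, the larger base still adds more absolute revenue, which is the sense in which the rich get richer.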
And we include Google in this pack because they're behind and they have to take a differentiated approach relative to AWS, and maybe cozy up to some of these traditional enterprise vendors to help Google get to the enterprise. And you can see from the on-prem crowd, VMware Cloud on AWS stands out as having some momentum, as does Red Hat OpenShift, which is cloudy but is really sort of an ingredient; it's not broad IaaS specifically, but a component of cloud, as is VMware Cloud, which includes VCF or VMware Cloud Foundation, and even Dell's cloud. We would expect HPE, with its GreenLake strategy and as it shores up its financials, to pick up momentum in the future in terms of what the customers in this survey consider cloud. And then of course you can see IBM and Oracle. They're in the game, but they don't have the spending momentum and they don't have the CAPEX chops to compete with the hyperscalers. IBM's cloud revenue actually dropped 7% last quarter, so that highlights the challenges that company is facing. Oracle's cloud business is growing in the single digits. It's kind of up and down, but again it underscores that these two companies are really about migrating their software installed bases to their captive clouds. As well, IBM, for example, has launched a financial cloud as a way to differentiate and not take AWS head-on in infrastructure as a service. The bottom line is that other than the Big 3 and Alibaba, the rest of the pack will be plugging into, hybridizing and cross-clouding those platforms. And there are definitely opportunities there, specifically related to creating that abstraction layer that we talked about earlier, hiding that underlying complexity and, importantly, creating incremental value. Good examples: what Snowflake is doing with its data cloud, and what the data protection guys are doing. A company like Clumio is headed in that direction, as are others. 
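The chart positions discussed above are plotted on ETR's Net Score, described here as a measure of spending momentum. A rough sketch of how such a metric could be computed from survey responses follows; this is our simplified reading of the approach, not ETR's exact methodology, and the sample responses are invented:

```python
# Simplified Net Score-style metric: share of respondents increasing
# spend minus share decreasing. Our reading, not ETR's exact method.
from collections import Counter

def net_score(responses: list[str]) -> float:
    """responses: each one of 'adopt', 'increase', 'flat', 'decrease', 'replace'."""
    c = Counter(responses)
    positive = c["adopt"] + c["increase"]
    negative = c["decrease"] + c["replace"]
    return 100.0 * (positive - negative) / len(responses)

sample = ["adopt"] * 2 + ["increase"] * 4 + ["flat"] * 2 + ["decrease"] + ["replace"]
print(net_score(sample))  # (6 - 2) / 10 -> 40.0
```

A vendor where most customers are adding or increasing spend scores high even if its absolute market share is small, which is why the vertical and horizontal axes tell different stories.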
So, you keep an eye on that and think about where the white space is and where the value can be across clouds. That's where the opportunity is. So let's see, what is this all going to look like? How does theCUBE community think it's going to unfold? Let's hear from theCUBE guests and theCUBE on Cloud speakers in some of those highlights. Now, unfortunately we don't have time to show you clips from every speaker. We have like 10-plus hours of video content, but we've tried to pull together some comments that summarize the sentiment from the community. So I'm going to have John Furrier briefly explain what theCUBE on Cloud is all about and then let the guests speak for themselves. After John, Pradeep Sindhu is going to give a nice technical overview of how the cloud was built out and what's changing in the future. I'll give you a hint: it has to do with data. And then, speaking of data, Mai-Lan Bukovec, who heads up AWS's storage portfolio, will explain how she views the coming changes in cloud and how they look at storage. Again, no surprise, it's all about data. Now, one of the themes that you'll hear from guests is the notion of a distributed cloud model. And Zhamak Dehghani, who is a data architect, will explain her view of the future of data architectures. We also have thoughts from analysts like Zeus Kerravala and Maribel Lopez, and some comments from both Microsoft and Google to complement AWS's view of the world. In fact, we asked JG Chirapurath from Microsoft to comment on the common narrative that Microsoft products are not best-of-breed, that they put out a 1.0 and then they get better, or sometimes people say, well, they're just good enough. So we'll see what his response is to that. And Paul Gillin asks Amit Zavery of Google for his thoughts on the cloud leaderboard and how Google thinks about its third-place position. Dheeraj Pandey gives his perspective on how technology has progressed and been miniaturized over time, and what's coming in the future. 
And then Simon Crosby gives us a framework to think about the edge as the most logical opportunity to process data, not necessarily a physical place. And this was echoed by John Roese and Chris Wolf, two experienced CTOs who went into some great depth on this topic. Unfortunately, I don't have the clips of those two, but their comments can be found on the CTO power panel, "The Technical Edge"; that's the segment at theCUBE on Cloud event site, which we'll share the URL to later. Now, the highlight reel ends with CEO Joni Klippert, who talks about the changes in securing the cloud from a developer angle. And finally, we wrap up with a CIO perspective, Dan Sheehan. He provides some practical advice, building on his experience as a CIO, COO and CTO: specifically, how do you as a business technology leader deal with the rapid pace of change and still be able to drive business results? Okay, so let's now hear from the community. Please run the highlights. >> Well, I think one of the things we talked about COVID is the personal impact, to me but other people as well. One of the things that people are craving right now is information, factual information, truth, textures that we call it. But this event for us, Dave, is our first inaugural editorial event. Rob, Kristen Nicole, the entire CUBE team, SiliconANGLE, on theCUBE, we're really trying to put together more of a cadence. We're going to do more of these events where we can put out and feature the best people in our community that have great fresh voices. You know, we do interview the big names, Andy Jassy, Michael Dell, the billionaires, the people making things happen, but it's often the people under them that are the real newsmakers. >> If you look at the architecture of cloud data centers, the single most important invention was scale-out: scale-out of identical or near-identical servers, all connected to a standard IP Ethernet network. That's the architecture. 
Now, the building blocks of this architecture are Ethernet switches, which make up the network, IP Ethernet switches, and then the servers, all built using general purpose x86 CPUs with DRAM, with SSDs, with hard drives, all connected to the CPU. Now, the fact that you scale these server nodes, as they're called, out was very, very important in addressing the problem of how you build very large scale infrastructure using general purpose compute. But this architecture, Dave, is a compute-centric architecture. And the reason it's a compute-centric architecture is that if you open a server node, what you see is a connection to the network, typically with a simple network interface card, and then you have CPUs which are in the middle of the action. Not only are the CPUs processing the application workload, but they're processing all of the IO workload, what we call data-centric workload. And so when you connect SSDs and hard drives and GPUs, everything, to the CPU as well as to the network, you can now imagine that the CPU is doing two functions. It's running the applications, but it's also playing traffic cop for the IO. So every IO has to go to the CPU, and you're executing instructions, typically in the operating system, and you're interrupting the CPU many, many millions of times a second. Now, general purpose CPUs and the architecture of CPUs were never designed to play traffic cop, because the traffic cop function is a function that requires you to be interrupted very, very frequently. So it's critical in this new architecture, where there's a lot of data, a lot of east-west traffic: the percentage of workload which is data-centric has gone from maybe one to 2% to 30 to 40%. >> The path to innovation is paved by data. If you don't have data, you don't have machine learning, you don't have the next generation of analytics applications that helps you chart a path forward into a world that seems to be changing every week. 
And so in order to have that insight, in order to have that predictive forecasting that every company needs, regardless of what industry you're in today, it all starts from data. And I think the key shift that I've seen is how customers are thinking about that data, about it being instantly usable. Whereas in the past it might've been a backup, now it's part of a data lake. And if you can bring that data into a data lake, you can have not just analytics or machine learning or auditing applications; it's really, what does your application do for your business, and how can it take advantage of that vast shared data set in your business? >> We are actually moving towards decentralization. If we think today, let's move data aside, if we said the only way the web would work, the only way we get access to various applications on the web or pages, was to centralize it, we would laugh at that idea. But for some reason we don't question that when it comes to data, right? So I think it's time to embrace the complexity that comes with the growth of the number of sources, the proliferation of sources and consumption models, embrace the distribution of sources of data that are not just within one part of the organization, that are not even just within the bounds of the organization, that are beyond the bounds of the organization. And then look back and say, okay, if that's the trend of our industry in general, given the fabric of computation and data that we have put in place globally, then how do the architecture and technology and organizational structure and incentives need to move to embrace that complexity? And to me that requires a paradigm shift, a full stack, from how we organize our organizations, how we organize our teams, how we put technology in place, to look at it from a decentralized angle. 
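Sindhu's earlier traffic-cop observation can be put in back-of-the-envelope terms: if every IO interrupts the CPU and each interrupt burns a fixed number of cycles, the fraction of the core lost to IO handling is just a ratio. All the numbers below are illustrative assumptions, not measurements:

```python
# Back-of-the-envelope model of the CPU "traffic cop" cost: cycles spent
# servicing IO interrupts are cycles not spent on the application.
# Interrupt rate, cycles per interrupt, and clock speed are all assumed.
def interrupt_overhead(interrupts_per_sec: float,
                       cycles_per_interrupt: float,
                       cpu_hz: float) -> float:
    """Fraction of CPU cycles consumed by interrupt handling."""
    return (interrupts_per_sec * cycles_per_interrupt) / cpu_hz

# e.g. 1 million interrupts/s at ~1,000 cycles each on a 3 GHz core
frac = interrupt_overhead(1e6, 1000, 3e9)
print(f"{frac:.0%} of the core is playing traffic cop")  # 33%
```

With assumptions in this range, a third of the core goes to IO housekeeping, which is the scale of the 30-40% data-centric workload share Sindhu cites and the motivation for offloading it.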
>> I actually think we're in the midst of the transition to what's called a distributed cloud, where if you look at modernized cloud apps today, they're actually made up of services from different clouds and also distributed edge locations. And that's going to have a pretty profound impact on the way we go vast. >> We wake up every day worrying about our customer, worrying about the customer condition, and to absolutely make sure we serve them the best in the first attempt that we make. So when you take the plethora of products we've delivered in Azure, be it Azure SQL, be it Azure Cosmos DB, Synapse, Azure Databricks, which we did in partnership with Databricks, Azure Machine Learning, and recently, when we offered the world's first comprehensive data governance solution in Azure Purview, I would humbly submit to you that we are leading the way. >> How important are rankings within the Google cloud team, or are you focused mainly more on growth and just consistency? >> No, I don't think, again, we are not focused on ranking or any of that stuff. Typically, I think we are worried about making sure customers are satisfied and adding more and more customers. So if you look at the volume of customers we are signing up and a lot of the large deals we are doing, if you look at the announcements we've made over the last year, there has been tremendous momentum around that. >> The thing that is really interesting about where we have been versus where we're going is we've spent a lot of time talking about virtualizing hardware and moving that around, and what does that look like, and creating that as more of a software paradigm. And the thing we're talking about now is, what does cloud as an operating model look like? What is the manageability of that? What is the security of that? You know, we've talked a lot about containers and moving into different DevSecOps and all those different trends that we've been talking about. 
Like, now we're doing them. So we've only gotten to the first crank of that. And I think every technology vendor we talk to now has to address how they are going to do a highly distributed management and security landscape. Like, what are they going to layer on top of that? Because it's not just about, oh, I've taken a rack of something, server, storage, compute, and virtualized it. I now have to create a new operating model around it. In a way, we're almost redoing what the OSI stack looks like and what the software and solutions are for that. >> And the whole idea is that in every recession we make things smaller. You know, in '91 we said we're going to go away from mainframes to Unix servers, and we made the unit of compute smaller. Then in the year 2000, when the dot-com bubble burst and in the recession afterwards, we moved from Unix servers to Wintel, Windows and Intel x86, and eventually Linux as well. Again, we made things smaller, going from million-dollar servers to $5,000 servers. And that's what we did in 2008, 2009. We said, look, we don't even need to buy servers. We can do things with virtual machines, which are servers that are an incarnation in the digital world; there's nothing in the physical world where they actually live. But we made it even smaller. And now with cloud, in the last three, four years and what will happen in this coming decade, they're going to make it even smaller, not just in space, which is size, with functions and containers and virtual machines, but also in time. >> So I think the right way to think about edge is, where can you reasonably process the data? And it obviously makes sense to process data at the first opportunity you have, but much data is encrypted between the original device, say, and the application. And so edge as a place doesn't make as much sense as edge as an opportunity to decrypt and analyze it in the clear. 
>> When I think of shift-left, I think of that Mobius loop that we all look at all of the time, and how we plan, write code, deliver software, and then manage it and monitor it, right, like that entire DevOps workflow. And today, when we think about where security lives, it either is a blocker to deploying to production, or most commonly it lives long after code has been deployed to production. And there's a security team constantly playing catch-up, trying to ensure that the development team, whose job is to deliver value to their customers quickly, right, deploy as fast as we can as many great customer-facing features, they're then looking at it months after software has been deployed, and then hurrying and trying to assess where the bugs are and trying to get that information back to software developers so that they can fix those issues. Shifting left to me means software engineers are finding those bugs as they're writing code, or in the CI/CD pipeline, long before code has been deployed to production. 
>> Okay, so look, there's so much other content on theCUBE on Cloud events site we'll put the link in the description below. We have other CEOs like Kathy Southwick and Ellen Nance. We have the CIO of UI path. Daniel Dienes talks about automation in the cloud and Appenzell from Anaplan. And a plan is not her company. By the way, Dave Humphrey from Bain also talks about his $750 million investment in Nutanix. Interesting, Rachel Stevens from red monk talks about the future of software development in the cloud and CTO, Hillary Hunter talks about the cloud going vertical into financial services. And of course, John Furrier and I along with special guests like Sergeant Joe Hall share our take on key trends, data and perspectives. So right here, you see the coupon cloud. There's a URL, check it out again. We'll, we'll pop this URL in the description of the video. So there's some great content there. I want to thank everybody who participated and thank you for watching this special episode of theCUBE Insights Powered by ETR. This is Dave Vellante and I'd appreciate any feedback you might have on how we can deliver better event content for you in the future. We'll be doing a number of these and we look forward to your participation and feedback. Thank you, all right, take care, we'll see you next time. (upbeat music)

Published Date : Jan 22 2021



Pradeep Sindhu


 

>> As I've said many times on theCUBE, for years, decades even, we've marched to the cadence of Moore's law, relying on the doubling of performance every 18 months or so. But no longer is this the mainspring of innovation for technology. Rather, it's the combination of data, applying machine intelligence and the cloud, supported by the relentless reduction of the cost of compute and storage and the build-out of a massively distributed computer network. Very importantly, in the last several years alternative processors have emerged to support offloading work and performing specific tasks. GPUs are the most widely known example of this trend, with the ascendancy of Nvidia for certain applications like gaming and crypto mining and, more recently, machine learning. But in the middle of the last decade we saw early development focused on the DPU, the data processing unit, which is projected to make a huge impact on data centers in the coming years as we move into the next era of cloud. And with me is Pradeep Sindhu, who's the co-founder and CEO of Fungible, a company specializing in the design and development of DPUs. Pradeep, welcome to theCUBE. Great to see you. >> Thank you, Dave, and thank you for having me. >> You're very welcome. So okay, my first question is, don't CPUs and GPUs process data already? Why do we need a DPU? >> That is a natural question to ask. CPUs have been around in one form or another for almost 55, maybe 60 years. This is when general purpose computing was invented, and essentially all CPUs went to the x86 architecture by and large. Arm, of course, is used very heavily in mobile computing, but x86 is primarily used in the data center, which is our focus. Now, you can understand that the architecture of the general purpose CPU has been refined heavily by some of the smartest people on the planet.
And for the longest time, the improvements you referred to, Moore's law, which is really the improvement of the price-performance of silicon over time, combined with architectural improvements, were the thing that was pushing us forward. Well, what has happened is that the architectural refinements are more or less done. You're not going to get very much more, you're not going to squeeze more blood out of that stone from the general purpose computer architecture. What has also happened over the last decade is that Moore's law, which is essentially the doubling of the number of transistors on a chip, has slowed down considerably, to the point where you're only getting maybe 10 to 20% improvements every generation in the speed of the transistor, if that. And what's happening also is that the spacing between successive generations of technology is actually increasing, from two, two and a half years to now three, maybe even four years. And this is because we are reaching some physical limits in CMOS. These limits are well recognized. And we have to understand that these limits apply not just to general purpose CPUs but they also apply to GPUs. Now, general purpose CPUs do one kind of computation. They're really general and they can do lots and lots of different things. It is actually a very, very powerful engine. But the problem is it's not powerful enough to handle all computations. So this is why you ended up having a different kind of processor called the GPU, which specializes in executing vector floating-point arithmetic operations much, much better than a CPU, maybe 20, 30, 40 times better. Well, GPUs have now been around for probably 15, 20 years, mostly addressing graphics computations, but in the last decade or so they have been used heavily for AI and analytics computations. So now the question is, well, why do you need another specialized engine called the DPU?
Well, I started down this journey almost eight years ago, and I recognized it while I was still at Juniper Networks, which is another company that I founded. I recognized that in the data center, as the workload changes to addressing more and more, larger and larger corpuses of data, number one, and as people use scale-out as the standard technique for building applications, what happens is that the amount of east-west traffic increases greatly. And what happens is that you now have a new type of workload which is coming. And today probably 30% of the workload in a data center is what we call data-centric. I want to give you some examples of what is a data-centric workload. >> Well, I wonder if I could interrupt you for a second. >> Of course. >> Because I want those examples and I want you to tie it into the cloud, 'cause that's kind of the topic that we're talking about today and how you see that evolving. I mean, it's a key question that we're trying to answer in this program. Of course, early cloud was about infrastructure, little compute, little storage, little networking, and now we have to get, to your point, all this data in the cloud. And we're seeing, by the way, the definition of cloud expand into this distributed or, I think a term you use is, disaggregated network of computers. So you're a technology visionary and I wonder how you see that evolving, and then please work in your examples of that critical workload, that data-centric workload. >> Absolutely happy to do that. So if you look at the architecture of our cloud data centers, the single most important invention was scale-out of identical or near identical servers, all connected to a standard IP ethernet network. That's the architecture. Now, the building blocks of this architecture are ethernet switches, which make up the network, IP ethernet switches. And then the servers are all built using general purpose x86 CPUs, with DRAM, with SSDs, with hard drives, all connected to the CPU.
Now, the fact that you scale these server nodes, as they're called, out was very, very important in addressing the problem of how do you build very large scale infrastructure using general purpose compute. But this architecture is a compute centric architecture, and the reason it's a compute centric architecture is, if you open this server node, what you see is a connection to the network, typically with a simple network interface card. And then you have CPUs which are in the middle of the action. Not only are the CPUs processing the application workload, but they're processing all of the IO workload, what we call the data-centric workload. And so when you connect SSDs, and hard drives, and GPUs, and everything to the CPU, as well as to the network, you can now imagine the CPU is doing two functions. It's running the applications but it's also playing traffic cop for the IO. So every IO has to go through the CPU, and you're executing instructions, typically in the operating system, and you're interrupting the CPU many, many millions of times a second. Now, general purpose CPUs and the architecture of CPUs was never designed to play traffic cop, because the traffic cop function is a function that requires you to be interrupted very, very frequently. So it's critical that in this new architecture, where there's a lot of data, a lot of this east-west traffic, the percentage of workload which is data-centric has gone from maybe one to 2% to 30 to 40%. I'll give you some numbers which are absolutely stunning. If you go back to, say, 1987, which is the year in which I bought my first personal computer, the network was some 30 times slower than the CPU. The CPU was running at 15 megahertz, the network was running at three megabits per second. Today the network runs at 100 gigabits per second and the CPU clock speed of a single core is about 2.3 gigahertz. So you've seen that there's a 600X change in the ratio of IO to compute, just on the raw clock speed.
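Pradeep's back-of-the-envelope arithmetic can be reproduced directly from the figures he quotes. This is only a quick sketch; the exact multiplier depends on which clock and bandwidth numbers you assume, and with these particular figures the raw single-core shift works out to roughly 200X rather than 600X:

```python
# Sketch of the IO-to-compute shift using the figures quoted in the
# conversation. The precise multiplier depends on which generation's
# numbers you plug in.
net_1987_bps = 3e6    # network: ~3 megabits per second in 1987
cpu_1987_hz = 15e6    # CPU: 15 megahertz in 1987
net_now_bps = 100e9   # network: 100 gigabits per second today
cpu_now_hz = 2.3e9    # CPU: ~2.3 gigahertz single core today

net_speedup = net_now_bps / net_1987_bps  # how much faster networks became
cpu_speedup = cpu_now_hz / cpu_1987_hz    # how much faster a single core became
ratio_shift = net_speedup / cpu_speedup   # shift of IO relative to compute

print(f"network speedup: {net_speedup:,.0f}x")
print(f"single-core speedup: {cpu_speedup:,.0f}x")
print(f"IO-to-compute shift: {ratio_shift:,.0f}x")
```

However you slice the assumptions, the shift is on the order of two orders of magnitude, which is the figure Pradeep himself lands on once core counts are factored in.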
Now, you can tell me that, hey, typical CPUs have lots and lots of cores, but even when you factor that in there's been close to two orders of magnitude change in the amount of IO to compute. There is no way to address that without changing the architecture, and this is where the DPU comes in. And the DPU actually solves two fundamental problems in cloud data centers. And these are fundamental, there's no escaping it. No amount of clever marketing is going to get around these problems. Problem number one is that in a compute centric cloud architecture the interactions between server nodes are very inefficient. That's number one, problem number one. Problem number two is that these data-centric computations, and I'll give you those four examples, the network stack, the storage stack, the virtualization stack, and the security stack, those four examples are executed very inefficiently by CPUs. Needless to say, if you try to execute these on GPUs you will run into the same problem, probably even worse, because GPUs are not good at executing these data-centric computations. So what we were looking to do at Fungible is to solve these two basic problems. And you don't solve them by just taking older architectures off the shelf and applying them to these problems, because this is what people have been doing for the last 40 years. So what we did was we created this new microprocessor that we call the DPU from the ground up. It's a clean sheet design and it solves those two problems fundamentally. >> So I want to get into that. And I just want to stop you for a second and just ask you a basic question, which is, if I understand it correctly, if I just took the traditional scale-out, if I scale out compute and storage, you're saying I'm going to hit diminishing returns. Not only is it not going to scale linearly, I'm going to get inefficiencies. And that's really the problem that you're solving. Is that correct? >> That is correct.
And the workloads that we have today are very data-heavy. You take AI for example, you take analytics for example, it's well known that for AI training the larger the corpus of relevant data that you're training on, the better the result. So you can imagine where this is going to go. >> Right. >> Especially when people have figured out a formula that, hey, the more data I collect I can use those insights to make money- >> Yeah, this is why I wanted to talk to you, because the last 10 years we've been collecting all this data. Now, I want to bring in some other data that you actually shared with me beforehand, some market trends that you guys cited in your research. And the first thing people said is they want to improve their infrastructure and they want to do that by moving to the cloud. And there was a security angle there as well. That's a whole other topic we could discuss. The other stat that jumped out at me: 80% of the customers that you surveyed said they'll be augmenting their x86 CPU with alternative processing technology. So that's sort of, I know it's self-serving, but it's right on the conversation we're having. So I want to understand the architecture. >> Sure. >> And how you've approached this. You've clearly laid out that x86 is not going to solve this problem. And even GPUs are not going to solve the problem. >> They're not going to solve the problem. >> So help us understand the architecture and how you do solve this problem. >> I'll be very happy to. Remember I used this term traffic cop. I used this term very specifically because, first let me define what I mean by a data-centric computation, because that's the essence of the problem we're solving. Remember I said two problems. One is we execute data-centric workloads at least an order of magnitude more efficiently than CPUs or GPUs, probably 30 times more efficiently.
And the second thing is that we allow nodes to interact with each other over the network much, much more efficiently. Okay, so let's keep those two things in mind. So first let's look at the data-centric piece. For a workload to qualify as being data-centric, four things have to be true. First of all, it needs to come over the network in the form of packets. Well, this is all workloads, so I'm not saying anything new. Secondly, this workload is heavily multiplexed, in that there are many, many, many computations that are happening concurrently, thousands of them, okay? That's number two. So a lot of multiplexing. Number three is that this workload is stateful. In other words, you can't process packets out of order. You have to do them in order because you're terminating network sessions. And the last one is that when you look at the actual computation, the ratio of IO to arithmetic is medium to high. When you put all four of them together you actually have a data-centric workload, right? And this workload is terrible for general purpose CPUs. Not only is it not executed properly by the general purpose CPU, the application that is running on the CPU also suffers, because data-centric workloads are interfering workloads. So unless you design specifically for them you're going to be in trouble. So what did we do? Well, what we did was our architecture consists of very, very heavily multi-threaded general purpose CPUs combined with very heavily threaded specific accelerators. I'll give you examples of some of those accelerators: DMA accelerators, erasure coding accelerators, compression accelerators, crypto accelerators. These are just some, and then lookup accelerators. These are functions that if you do not specialize you're not going to execute efficiently. But you cannot just put accelerators in there; these accelerators have to be multi-threaded to handle the concurrency.
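The four tests Pradeep enumerates for a data-centric workload can be captured as a simple predicate. This is only an illustrative sketch; the field names and the IO-ratio threshold are invented here for clarity and are not part of any Fungible API:

```python
from dataclasses import dataclass

@dataclass
class Workload:
    arrives_as_packets: bool  # 1. comes over the network in the form of packets
    concurrent_tasks: int     # 2. heavily multiplexed: thousands of concurrent computations
    stateful: bool            # 3. packets must be processed in order (terminating sessions)
    io_to_arithmetic: float   # 4. ratio of IO to arithmetic is medium to high

def is_data_centric(w: Workload) -> bool:
    """All four criteria must hold for a workload to qualify as data-centric."""
    return (w.arrives_as_packets
            and w.concurrent_tasks >= 1000   # "thousands of them"
            and w.stateful
            and w.io_to_arithmetic >= 0.5)   # threshold chosen purely for illustration

# The storage stack is one of the four canonical examples Pradeep gives.
storage_stack = Workload(arrives_as_packets=True, concurrent_tasks=5000,
                         stateful=True, io_to_arithmetic=2.0)
print(is_data_centric(storage_stack))
```

A compute-heavy batch job with little IO would fail the last test and, on this model, stay on the general purpose cores rather than the DPU.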
We have something like 1,000 different threads inside our DPU to address these many, many, many computations that are happening concurrently, and handle them efficiently. Now, the thing that is very important to understand is that even given the abundance of transistors, I know that we have hundreds of billions of transistors on a chip, the problem is that those transistors are used very inefficiently today in the architecture of a CPU or a GPU. What we have done is we've improved the efficiency of those transistors by 30 times, okay? >> So you can use the real estate much more effectively? >> Much more effectively, because we were not trying to solve a general purpose computing problem. Because if you do that we're going to end up in the same bucket where general purpose CPUs are today. We were trying to solve a specific problem of data-centric computations and of improving the node-to-node efficiency. So let me go to point number two, because that's equally important. Because in a scale-out architecture the whole idea is that I have many, many nodes and they're connected over a high performance network. It might be shocking for your listeners to hear that these networks today run at a utilization of no more than 20 to 25%. The question is why? Well, the reason is that if I try to run them faster than that you start to get packet drops, because there are some fundamental problems caused by congestion on the network which are unsolved as we speak today. There is only one solution, which is to use TCP. Well, TCP is well known, it's part of the TCP/IP suite. TCP was never designed to handle the latencies and speeds inside a data center. It's a wonderful protocol but it was invented 43 years ago now. >> Yeah, very reliable and tested and proven. It's got a good track record but you're right.
Very good track record, unfortunately it eats a lot of CPU cycles. So if you take the idea behind TCP and you say, okay, what's the essence of TCP? How would you apply it to the data center? That's what we've done with what we call FCP, which is a fabric control protocol, which we intend to open. We intend to publish the standards and make it open. And when you do that and you embed FCP in hardware on top of this standard IP ethernet network, you end up with the ability to run very large-scale networks where the utilization of the network is 90 to 95%, not 20 to 25%. >> Wow, okay. >> And you end up solving problems of congestion at the same time. Now, why is this important today? That's all geek speak so far. The reason this stuff is important is that such a network allows you to disaggregate, pool and then virtualize the most important and expensive resources in the data center. What are those? It's compute on one side, storage on the other side. And increasingly even things like DRAM want to be disaggregated. Well, if I put everything inside a general purpose server, the problem is that those resources get stranded because they're stuck behind a CPU. Once you disaggregate those resources, and we're saying hyper disaggregate, and hyper disaggregate simply means that you can disaggregate almost all the resources. >> And then you're going to reaggregate them, right? I mean, that's obviously- >> Exactly, and the network is the key in helping. >> Okay. >> So the reason the company is called Fungible is because we are able to disaggregate, virtualize and then pool those resources. And the large scale-out companies, AWS, Google, et cetera, have been doing this disaggregation and pooling for some time, but because they've been using a compute centric architecture their disaggregation is not nearly as efficient as we can make it. And they're off by about a factor of three. When you look at enterprise companies, they are off by another factor of four, because the utilization in the enterprise is typically around 8% of overall infrastructure.
The utilization in the cloud for AWS, and GCP, and Microsoft is closer to 35 to 40%. So there is a factor of almost four to eight which you can gain by disaggregating and pooling. >> Okay, so I want to interrupt you again. So these hyperscalers are smart. They have a lot of engineers and we've seen them, yeah, you're right, they're using a lot of general purpose, but we've seen them make moves toward GPUs and embrace things like Arm. So I know you can't name names, but you would think that with all the data that's in the cloud, again, our topic today, the hyperscalers would be all over this. >> Well, the hyperscalers recognize that the problems that we have articulated are important ones, and they're trying to solve them with the resources that they have and all the clever people that they have. So these are recognized problems. However, please note that each of these hyperscalers has their own legacy now. They've been around for 10, 15 years. And so they're not in a position to all of a sudden turn on a dime. This is what happens to all companies at some point. >> They have technical debt, you mean? (laughs) >> I'm not going to say they have technical debt, but they have a certain way of doing things and they are in love with the compute centric way of doing things. And eventually it will be understood that you need a third element called the DPU to address these problems. Now, of course, you've heard the term SmartNIC. >> Yeah, right. >> Or your listeners must've heard that term. Well, a SmartNIC is not a DPU. What a SmartNIC is, is simply taking general purpose Arm cores, putting the network interface and a PCI interface, and integrating them all on the same chip, and separating them from the CPU. So this does solve a problem. It solves the problem of the data center workload interfering with the application workload. Good job, but it does not address the architectural problem of how to execute data center workloads efficiently.
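The utilization figures quoted at the top of this exchange (enterprise around 8%, hyperscale cloud 35 to 40%) can be checked quickly. A minimal sketch, using only the numbers as quoted:

```python
# Utilization gap between enterprise and hyperscale cloud, per the figures
# quoted in the conversation.
enterprise_util = 0.08                        # ~8% enterprise utilization
cloud_util_low, cloud_util_high = 0.35, 0.40  # 35-40% hyperscale utilization

gap_low = cloud_util_low / enterprise_util    # lower bound of the gap
gap_high = cloud_util_high / enterprise_util  # upper bound of the gap
print(f"cloud runs {gap_low:.1f}x to {gap_high:.1f}x hotter than enterprise")
```

The "factor of almost four to eight" Pradeep cites stacks this gap with the additional factor of three he attributes to compute centric disaggregation; the sketch above only checks the utilization piece.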
>> Yeah, so it reminds me of, I understand what you're saying, I was going to ask you about SmartNICs. It's almost like a bridge or a band-aid. >> Band-aid? >> It almost reminds me of throwing high-performance flash storage onto a disk system that was designed for spinning disk. Gave you something, but it doesn't solve the fundamental problem. I don't know if it's a valid analogy but we've seen this in computing for a long time. >> Yeah, this analogy is close. Because okay, so let's take a hyperscaler X, okay? We won't name names. You find that half my CPUs are twiddling their thumbs because they're executing this data-centric workload. Well, what are you going to do? All your code is written in C++ on x86. Well, the easiest thing to do is to separate the cores that run this workload. Put it on a different, let's say we use Arm, simply because x86 licenses are not available to people to build their own CPUs, so Arm was available. So they put a bunch of Arm cores, they stick a PCI express and a network interface, and you port that code from x86 to Arm. Not difficult to do, and it does get you results. And by the way, if for example this hyperscaler X, shall we call them, if they're able to remove 20% of the workload from general purpose CPUs, that's worth billions of dollars. So of course you're going to do that. It requires relatively little innovation other than to port code from one place to another place. >> Pradeep, that's what I'm saying. I mean, I would think, again, the hyperscalers, why can't they just do some work and do some engineering and then give you a call and say, okay, we're going to attack these workloads together? That's similar to how they brought in GPUs. And you're right, it's worth billions of dollars. You could see when the hyperscalers Microsoft Azure and AWS both announced, I think, that they depreciate servers now over five years instead of four years. And it dropped like a billion dollars to their bottom line. But why not just work directly with you guys?
I mean, it seems like the logical play. >> Some of them are working with us. So that's not to say that they're not working with us. All of the hyperscalers, they recognize that the technology that we're building is fundamental, that we have something really special, and moreover it's fully programmable. So the whole trick is, you can actually build a lump of hardware that is fixed function. But the difficulty is that in the place where the DPU would sit, which is on the boundary of a server and the network, literally on that boundary, that place the functionality needs to be programmable. And so the whole trick is how do you come up with an architecture where the functionality is programmable but it is also very high speed for this particular set of applications. So the analogy with GPUs is nearly perfect, because GPUs, and particularly Nvidia, implemented, or they invented, CUDA, which is the programming language for GPUs. And it made them easy to use, made them fully programmable without compromising performance. Well, this is what we're doing with DPUs. We've invented a new architecture, we've made them very easy to program. And these computations that I talked about, which are security, virtualization, storage and then network, those four are quintessential examples of data center workloads, and they're not going away. In fact, they're becoming more, and more, and more important over time. >> I'm very excited for you guys, and I really appreciate it Pradeep. We'll have you back, because I really want to get into some of the secret sauce. You talked about these accelerators, erasure code and crypto accelerators, and I want to understand that. I know there's NVMe in here, there's a lot of hardware and software and intellectual property, but we're seeing this notion of programmable infrastructure extending now into this domain, this build-out of this, I like this term, disaggregated, massive disaggregated network. >> Hyper disaggregated.
>> Hyper disaggregated, even better. And I would say this and then I've got to go. But what got us here the last decade is not the same as what's going to take us through the next decade. >> That's correct. >> Pradeep, thanks so much for coming on theCUBE. It's been a great conversation. >> Thank you for having me, it's really a pleasure to speak with you and get the message of Fungible out there. >> Yeah, I promise we'll have you back. And keep it right there everybody, we've got more great content coming your way on theCUBE on cloud. This is Dave Vellante. Stay right there. >> Thank you, Dave.

Published Date : Jan 4 2021



Evolving Your Analytics Center of Excellence | Beyond.2020 Digital


 

>> Hello, everyone, and welcome to track three of Beyond. My name is Bina and I am an account executive here at ThoughtSpot, based out of our London office. If the accent's throwing you off, I don't quite sound as British as you're expecting because the background's Australian, so you can look forward to seeing my face as we go through these next few sessions. I'm gonna be introducing the guests as well as facilitating some of the Q&A, so make sure you come and say hi in the chat with any comments, questions, thoughts that you have. So with that, I mean, this whole track, as the title somewhat gives away, is really about everything that you need to know and all the tips and tricks when it comes to adoption and making sure that your ThoughtSpot deployment is really, really successful. We're gonna be covering everything from user training, onboarding, new use cases and picking the right use cases, as well as hearing from our customers who have been really successful in doing this before. So with that, though, I'm really excited to introduce our first guest, Kathleen Maley. She is a senior analytics executive with over 15 years of experience in the space. And she's going to be talking to us about all her tips and tricks when it comes to making the most out of your center of excellence from, obviously, an analytics perspective. So with that, I'm going to pass the mic to her. But look forward to continuing the chat with you all in the chat. Come say hi. >> Thank you so much, Bina. And it is really exciting to be here today, thanks to everyone for joining. Um, I'll jump right into it. The topic of evolving your analytics center of excellence is a particular passion of mine and I'm looking forward to sharing some of my best practices with you. I started my career as a member of an analytics COE at Bank of America, where I was actually, ah, a model developer. Um, in my most recent role at a regional bank in the Midwest, I ran an entire analytics center of excellence.
Um, but I've also been on the business side running my own P&L. So I think through this combination of experiences, I really developed a unique perspective on how to most effectively establish and work with an analytics COE. Um, this opportunity is really a two-sided opportunity, creating value from analytics. Uh, and it really requires the analytics group and the line of business to come together. Each has a very specific role to play in making that happen. So that's a lot of what I'll talk about today. Um, I started out just like most analysts do, formally trained in statistics. So whether you're a data analyst or a business leader who taps into analytical talent, I want you to leave this talk today knowing the modern definition of analytics, the purpose of a modern COE, some best practices for a modern COE, and then the role that each of you plays in bringing this COE to life. So with that said, let me start by level setting on the definition of analytics that aligns with where the discipline is headed, um, versus where it's been historically. Analytics is the discovery, interpretation and communication of meaningful patterns in data, the connective tissue between data and effective decision making within an organization. And this is a definition that I've been working under for the last, you know, 7 to 10 years of my career. Notice there is nothing in there about getting the data. We're at this amazing intersection of statistics and technology that effectively eliminates getting the data as a competitive advantage, and this is just as true for analysts who are thinking in terms of career progression as it is for business leaders who have to deliver results for clients and shareholders. So the definition is action oriented. It's purposeful. It's not about getting the data. It's about influencing and enabling effective decision making.
Now, if you're an analyst, this can be scary, because it's likely what you spend a huge amount of your time doing, so much so that it probably feels like getting the data is your job. If that's the case, then the emergence of these new automated tools might feel like your job is at risk of becoming obsolete. If you're a business leader, this should be scary because it means that other companies are shooting out in front of you, not because they have better ideas necessarily, but because they can move so much faster. According to new research from Harvard Business Review, nearly 90% of businesses say they're more successful when they equip those at the front lines with the ability to make decisions in the moment, and organizations who are leading their industries in embracing these decision makers are delivering substantial business value, nearly 50% reporting increased customer satisfaction, employee engagement, improved product and service quality. So, you know, there is no doubt that speed matters, and it matters more and more. Um, but if you're feeling a little bit nervous, I want you to think of it a little differently. Um, think about the movie Hidden Figures. The job of the women in Hidden Figures was to calculate orbital trajectories, uh, to get men into space and then get them home again. And at the start of the movie, they did all the required mathematical calculations by hand. At the end of the movie, when technology eliminated the need to do those calculations by hand, the hidden figures faced essentially the same decision many of you are facing now. Do I become obsolete, or do I develop a new set of, in their case, computer science skills required to keep doing the job of getting them into space and getting them home again? The hidden figures embraced the latter. They stayed relevant and they increased their value because they were able to do more of what really mattered.
So what we're talking about here is: how do we embrace the new technology that unburdens us? And how do we upskill and change our ways of working to create a step-function increase in data-enabled value? And the first step, really, in evolving your analytics COE is redefining the role of analytics from getting the data to influencing and enabling effective decision making. So if this is the role of the modern analyst, a strategic thought partner who harnesses the power of data and directs it toward achieving specific business outcomes, then let's talk about how the COE in which they operate needs to change to support this new purpose. First, historical COEs have primarily been about fulfilling data requests. In this scenario, COEs were often formed primarily as an efficiency measure. This efficiency might have come in the form of consistency, fungibility of resources, breaking down silos, creating and building multipurpose data assets. And under the getting-the-data scenario, that actually made a lot of sense. For modern COEs, however, the objective is to create an organization that supports strategic business decisioning, for individuals and for the enterprise as a whole. So let's talk about how we do that while maintaining the progress made by historical COEs. It's really about extending what we've already done, the progress we've already made. So here I'll cover six primary best practices. None is a silver bullet. Each needs to fit within your own company culture. But these are major areas to consider as you evolve your analytics capabilities. First and foremost, always agree on the purpose and approach of your COE. Successfully evolving your COE starts with developing strategic partnerships with the business leaders that the analytics COE will support. Both parties need to explicitly buy in to the objective and agree on a set of operating principles.
I think the only way to do that is just bringing people to the table, having an open and honest conversation about where you are today, where you want to be, and then agreeing on how you will move forward together. It's not about your organization or my organization; how do we help the business solve problems that go beyond what we've been able to do today? So, moving on. While there's no single organizational model that works for everyone, I generally favor a hybrid model that includes some level of fully dedicated support. This is where I distinguish between to whom the analyst reports and for whom the analyst works. It's another concept that is important to embrace in spirit, because all of the work the analyst does actually comes from the business partner, not from (or at least it shouldn't come from) the head of the analytics center of excellence. And analysts who are fully dedicated to a line of business have the time and the practice to develop stronger partnerships, to develop domain knowledge and history, and those are key ingredients to effectively solving business problems. You know, how can you solve a problem when you don't really understand what it is? So as the head of an analytics COE, I'm responsible for making sure that I hire the right mix of skills, that I can effectively manage the quality of my team's work product (I've got a specialized skill set that allows me to do that), that there's a career path that matters to analysts, and all of the other things that go along with talent management. But when it comes to doing the work, the analysts who report to me actually work for the business, and creating some consistency and stability there will make them much more productive. Okay, so getting a bit more tactical: the engagement model answers the question, who do I go to, when? And this is often a question that business partners ask of a centralized analytics function, or even the hybrid model. Who do I go to, when? My recommendation:
Make it easy for them. Create a single primary point of contact whose job is to build relationships with a specific partner or set of partners and become deeply embedded in their business and strategies, so they know why the business is solving the problems it needs to solve; to manage the portfolio of analytical work that's being done on behalf of the partner; and, again, to make it easy for the partner to access the entire analytics ecosystem. Think about the growing complexity of the current analytics ecosystem. We've got automated insights, business analytics, predictive modeling, machine learning, and sometimes AI is emerging. You also have the functional business questions to contend with. This was a big one for me in my experience in retail banking. If I'm a deposits pricing executive, which was the line-of-business role that I ran, and I had a question about acquisitions through the digital channel: do I talk to the checking analyst, or do I talk to the digital analyst? Who owns that question? Who do I go to? So having dedicated POCs, on the flip side, also helps the head of the center of excellence actually manage the team holistically. It reduces the number of entry points and the complexity coming in, so that there is some efficiency. So it really is a win-win; it helps on both sides significantly. There are several specific operating rhythms I recommend, each acting as a different gear in an integrated system, and this is important: it's an integrated decision system. All four of these operating rhythms serve a specific purpose and work together. So I recommend a business strategy session, first; a portfolio management routine; an internal portfolio review; and periodic leadership updates. And I'll say a little bit more about each of those. So the business strategy session is used to set top-level priorities on an annual or semiannual basis.
I've typically done this by running half-day sessions that would include a business-led deep dive on their strategy and current priorities, always remembering that if I'm going to try and help solve the business's problems, I need to know what the business is trying to achieve. Sometimes new requests are added through this process; oftentimes, previous requests are deprioritized or dropped from the list entirely. One thing I want to point out, however, is that it's the partner who decides priorities. The analyst or I can guide and make recommendations, but at the end of the day, it's up to the business leader to decide what his or her short-term and long-term needs and priorities are. The portfolio management routine is run by the POC, generally on a biweekly or possibly monthly basis. This is where new requests are prioritized. It's great, it's critical, if we come together once or twice a year to really think about the big rocks, but then we all go back to work, and every day new requests are coming up. That pipeline has to be managed in an intelligent way. So this is where the key people, both the analyst and the business partners, come together to sort of manage what's coming in, checking it against top priorities: are priorities changing? It's important to recognize that this routine is not a report-out. This routine is really for the POC, who uses it to clarify questions, raise risks, and facilitate decisions with his or her partner so that the work continues. So it should be exactly as long as it needs to be, and, you know, as soon as the POC has the information he or she needs to get back to work, that's what happens. The internal portfolio review is a little bit different. This review is internal to the analytics team and has two main functions.
First, it's where the analytics team can continue to break down silos for themselves and for their partners by talking to each other about the questions they're getting and the work that they're doing. But it's also the forum in which I start to challenge my team to develop a new approach of asking why the request was made. So we're evolving from getting the data to enabling effective business decisioning. And that's new. That's new for a lot of analysts. So the internal portfolio review is a safe space to ask the people who report to me why the partner made this request. What is the partner trying to solve? Okay, senior leadership updates, the last of these four routines: less important for the day to day, but significantly important for maintaining the overall health of the COE. I've usually done this through some combination of email summaries, but also standing agenda items on a leadership routine. For me, it is always a shared update that my partner and I present together. We both have our names on it. I typically talk briefly about what we learned in the data; my partner will talk about what she is going to do with it, and, very importantly, what it is worth. Okay, a couple more here. Prioritization happens at several levels, and I've alluded to this. It happens within a business unit, in the internal portfolio review. It has to happen at times across business units. It also can and should happen enterprise-wide on some frequency. Within business units is the easiest; it happens most frequently. Across business units, it usually comes up as a need when one business leader has a significant opportunity but no available baseline analytical support, for whatever reason. In that case, we might jointly approach another business leader and have an ROI-based discussion about maybe borrowing a resource for some period of time. Again, it's not my decision.
I don't, in isolation, say, "Oh, project A is worth more than project B, so, owner of project B, sorry, you lose, I'm taking those resources." That's not good practice. It's not a good way of building partnerships. That collaboration, what is really best for the business, what is best for the enterprise, is an enterprise decision. It's not a me decision. Lastly, enterprise-level prioritization is probably the least frequent, and is aided significantly by the semiannual business strategy sessions. This is the time to look enterprise-wide at all of the business opportunities at play and the potential ROI of each, and jointly decide where to align resources on a more permanent basis, if you will, to make sure that the most important initiatives are properly staffed with analytical support. Okay, on funding, briefly: I favor a hybrid model, which I don't hear talked about in a lot of other places. So first, I think it's really critical to provide each business unit with some baseline level of analytical support that is centrally funded as part of a shared-service center of excellence. And if a business leader needs additional support that can't otherwise be provided, that leader can absolutely choose to fund an incremental resource from her own budget that is fully dedicated to the initiative that is important to her business. There are times when that prioritization happens at an enterprise level, and the collective decision is: we are not going to staff this potentially worthwhile initiative, even though we know it's worthwhile. And a business leader might say, "You know what? I get it. I want to do it anyway, and I'm going to find budget to make that happen." And we create that position, still reporting to the center of excellence for all of the other reasons: the right hire, managing the work product. But that resource, as all resources do, works for the business leader.
So, thinking again about the value of having these resources report centrally but work for the business leader: it's very common to hear from a business leader, "I can't get what I need from the analytics team. They're too busy. My work falls by the wayside. So I have to hire my own people." My first response is: have we tried putting some of these routines into place? And my second is: you might be right. So fund a resource that's 100% dedicated to you, but let me use my expertise to help you find the right person and manage that person successfully. So at this point, I hope you see, or are starting to see, how these routines and principles really work together to create a higher level of operational partnership. We collectively know the purpose of a centralized COE. Everyone knows his or her role in doing the work, managing the work, and prioritizing the use of this very valuable analytical talent. And we know where higher-order trade-offs need to be made across the enterprise, and we make sure that those decision makers have the information and connectivity to the work, and to each other, to make those trade-offs. All right, now that we've established the purpose of the modern analyst and the functional framework in which they operate, I want to talk a little bit about the hard part: getting from where many individual analysts and business leaders are today to where we have the opportunity to grow, in order to maintain and/or regain that competitive advantage. There's no judgment here. How we operate today is simply an artifact of our historical training, the technology constraints we've been under, and the overall newness of applied analytics as a distinct discipline. But now is the time to start breaking away from some of that and really upping our game.
It is hard not because any of these new skills is particularly difficult in and of itself, but because any time you do something for the first time, it's uncomfortable, and you're probably not going to be great at it the first time or the second time you try. Keep practicing. And again, this is for the analyst and for the business leader: to think differently. It gets easier. So as a business leader, when you're tempted to say, "Hey, so-and-so, I just need this data real quick," and you shoot off that email: pause. You know it's going to help them, and they'll get you the answer quicker, if you give them a little context and you have a 10-minute conversation. If you start practicing these things, I promise you will not look back. It makes a huge difference. For the analyst: become a consultant. This is the new set of skills. It isn't as simple as using layman's terms. You have to have a different conversation. You have to be willing to meet your business partner as an equal at the table. So when they say, "Hey, so-and-so, can you get me this data?" you're not allowed to say yes. You're definitely not allowed to say no. Your reply has to be, "Help me understand what you're trying to achieve, so I can better meet your needs." And if you don't know what the business is trying to achieve, you will never be able to help them get there. This is a must-have. Develop project management skills. All of a sudden, you're a POC. You're in charge of keeping track of everything that's coming in. You're in charge of understanding why it's happening. You're responsible for making sure that your partner is connected across the rest of the analytics team and ecosystem. That takes some project management skills. Be business-focused, not data-focused. Nobody cares what your algorithm is. I hate to break it to you. We love that stuff. We love talking about: oh my gosh, look, I did this analysis, and I didn't think this was the way I was going to approach it, and I did.
I found this thing, isn't it amazing? Those are the things you talk about internally with your team, because when you're doing that, what you're doing is justifying and sort of proving the rightness of your answer. It's not valuable to your business partner. They're not going to know what you're talking about anyway. Your job is to tell them what you found. Draw conclusions. Historically, analysts spent so much of their time just getting data into a PowerPoint: 50 pages of summarized data. Now the job is to study that summarized data and draw a conclusion. Summarized data doesn't explain what's happening; it's just clues to what's happening, and it's your job as the analyst to puzzle out that mystery. If a partner asks you a question stated in words, your answer should be stated in words, not summarized data. That is a new skill for some; again, it takes practice, but it changes your ability to create value. So think about that. Your job is to put the answer on the page with supporting evidence. Everything else falls on the cutting room floor. Everything. Everything has to be tied to ROI. You're a cost center, and, you know, once you become integrated with your business partner, once you're working on business initiatives, all of a sudden this actually becomes very easy to do, because you will know the business case that was put forth for that business initiative. You're part of that business case. So with these routines in place, with this new way of working, with this new way of thinking, it's actually pretty easy to justify and to demonstrate the value that analytics brings to an organization. And I think that's important whether or not the organization is asking for it through a formalized reporting routine. Now, for the business partner: understand that this is a transformation, and be prepared to support it.
It's ultimately about providing a higher level of support to you, but the analysts can't do it unless you agree to this new way of working. So include your partner as a member of your team. Talk to them about the problems you're trying to solve. Go beyond asking for the data. Be willing and able to tie every request to an overarching business initiative, and be poised for action before a solution is commissioned. This is about preserving the precious resources you have at your disposal. Often, an exploratory analysis is required to determine the value of a solution, but the solution itself should only be built if there's a plan, staffing, and funding in place to implement it. So in closing: transformation is hard. It requires learning new things. It also requires overriding deeply embedded muscle memory. The more you can approach these changes as a team, knowing you won't always get it right and that you'll have to hold each other accountable for growth, the better off you'll be and the faster you will make progress together. Thanks. >> Thank you so much, Kathleen, for that great content, and thank you all for joining us. Let's take a quick stretch and get ready for the next session. Starting in a few minutes, you'll be hearing from ThoughtSpot's David Coby, director of Business Value Consulting, and Blake Daniel, customer success manager, as they discuss putting use cases to work for your business.

Published Date : Dec 10 2020


António Alegria, Outsystems | Outsystems NextStep 2020


 

>> (Narrator) From around the globe, it's theCUBE, with digital coverage of OutSystems NextStep 2020. Brought to you by OutSystems. >> I'm Stu Miniman, and welcome back to theCUBE's coverage of OutSystems NextStep. Of course, one of the items we've been talking about a lot in the industry is how artificial intelligence and machine learning are helping people go beyond what human scale can do; we need to be able to do things at machine scale. To help us dig into this topic, I'm happy to welcome to the program first-time guest Antonio Alegria. He is the head of artificial intelligence at OutSystems. Antonio, thanks so much for joining us. >> Thank you, Stu. I'm really happy to be here and to talk a little bit about what we're doing at OutSystems to help our customers, and how we're leveraging AI to get to those goals. >> (Stu) Wonderful. So I saw, ahead of the event, a short video you did that talked about extreme agility with no limits. So before we dig into the product itself, how should we be thinking about AI? You know, there's a broad spectrum: there's machine learning, there's various components in there. You listen to the big analyst firms, and the journey is big steps and something that is pretty broad. So when we're talking about AI, what does that mean to you? What does that mean to your customers? >> Yeah, so AI at OutSystems really speaks to the vision and the core strategy we have for our product. Which is, if you saw the keynote, you know, really enabling every company, even those that have existed for decades and perhaps have a lot of legacy, to become leading, elite cloud software development companies that can develop digital solutions at scale really easily. But one thing we see, and this is a big statistic:
One of the things that limits CIOs the most nowadays is really the lack of talent: the lack of engineering and software engineering ability, and people that can do that. And there's a statistic that was reported by The Wall Street Journal, I saw it recently, perhaps last year, that said that according to federal jobs data in the U.S., by the end of 2020 there would be about a million unfilled IT and software development jobs available, right? So there's this big problem. All of these companies really need to scale, really need to invest in digital systems. And so, our belief at OutSystems: we've already been abstracting, and we've been focusing on automating as much as possible, the software development tools and applications that we use. We've already seen amazing stories of people coming from different backgrounds really starting to develop leading-edge applications. And we want to take this to the next level. And we believe that artificial intelligence, with machine learning, but also with other AI technologies that we're taking advantage of, can really help us get to a next stage of productivity: from 10x productivity to 100x productivity. And we believe AI plays a role in three ways. We believe that AI, by learning from all of this data that we now collect in terms of the projects that are being developed, lets us essentially embed a tech lead, so to speak, inside the product. A tech lead that can help developers by guiding them, guiding the most junior ones; by automating some of the boring, repetitive tasks; and by validating their work, making sure that they are using the best practices, making sure that it helps them as they scale to refactor their code, to automatically design their architectures, things like that. >> (Stu) Wonderful, Antonio. Goncalo stated it quite clearly in the interview that I had with him: it's really about enabling that next 10 million developers.
We know that there is that skill gap, as you said, and, you know, everybody right now is asking: how can I do more? How can I react faster? So that's where machine learning and artificial intelligence should be able to help. So bring us inside. I know the platform itself has had guidance, and the whole movement, you know, what we used to call low code, was about simplifying things and allowing people to build faster. So bring us inside the product: what are the enhancements, what are the new pieces, some of the key items? >> Yeah. So one interesting thing, and something I think OutSystems is really proud of being able to achieve: if you look at how OutSystems has been using AI within the platform, we started by introducing AI assistance within our software development environment, Service Studio, right? And so we've been iterating this capability a lot. We've been evolving it, and now it's really able to significantly accelerate and guide novices, but also help pros, through the software development process and coding. It does this by trying to infer and understand their context, trying to infer their intent, and then automating the steps afterwards. And we do this by suggesting the most likely, let's say, function or code piece that you will need. And then the next step, which we're introducing this year, is even better: we're trying to autofill most of, let's say, the variables and the data flow that you need to collect. And so you get a very delightful, frictionless experience as you are coding. So you're even closer to the business value than before. Now, this was just the first step. What you're seeing now, and what we're announcing and showing at this NextStep, as we showed at the keynote, is that we are starting to fuse AI across the OutSystems products and across the software development life cycle.
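In its simplest form, the kind of likelihood-based "suggest the next step" assistance described above can be sketched with frequencies mined from past flows. To be clear, this is a toy illustration, not how OutSystems' suggestion engine actually works (which is not public); the flow data and node names below are invented.

```python
# Toy sketch: rank which node a developer is most likely to add next,
# based on bigram frequencies counted over historical flows.
from collections import Counter, defaultdict

# Hypothetical action flows mined from past projects.
past_flows = [
    ["FetchCustomer", "ValidateInput", "SaveCustomer", "SendEmail"],
    ["FetchCustomer", "ValidateInput", "SaveCustomer", "LogAudit"],
    ["FetchOrder", "ValidateInput", "SaveOrder", "SendEmail"],
]

# Count, for each node, which node follows it across all flows.
follows = defaultdict(Counter)
for flow in past_flows:
    for current, nxt in zip(flow, flow[1:]):
        follows[current][nxt] += 1

def suggest(current_node: str, top_n: int = 2) -> list:
    """Return the most likely next nodes after current_node."""
    return [node for node, _ in follows[current_node].most_common(top_n)]

print(suggest("ValidateInput"))  # ranked by how often each node followed
```

A real engine would condition on far richer context (data types in scope, the surrounding flow, developer intent), but the ranking-by-observed-likelihood idea is the same.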
So we took this core technology that we use to guide developers and assist and automate their work, and we used the same capability to help developers, tech leads, and architects analyze the code: learning from the bad patterns that exist, and learning from and receiving runtime information about crashes and performance. And inside the product that we call Architecture Dashboard, we're really able to give recommendations to these architects and tech leads on where they should evolve and improve their code. And we're fusing AI into this product in two very specific ways that we are releasing today. One is to automatically collect, design, and define the architecture. We call this automated architecture discovery. If you have a very large factory, you can imagine, you have lots of different modules, lots of different applications, and if you had to go and manually label everything (this is a front end, this is a back end), that would take a lot of time. So we use machine learning, learning from what architects have already done in the past in classifying their architecture, and we can map out your architecture completely automatically, which is really powerful. Then we also use our AI engine to analyze your factory, and we can detect the best opportunities for refactoring. Refactoring is one of the top problems, and duplicated logic is one of the top code smells and technical debt problems that large factories have, right? So we can completely identify and pinpoint what these opportunities for refactoring are, and we guide you through it. We tell you: okay, all of these hundreds of functions and logic patterns that we see in your code, you could refactor into a single function, and you can save lots and lots of code. Because, you know, the best code, the fastest code, the easiest to maintain, is the code you don't write, you don't have. So we're trying to really eliminate cruft from these factories with these capabilities.
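To make the refactoring-detection idea concrete, here is a deliberately tiny sketch of one classic approach: hash a normalized form of each function body (whitespace stripped, identifiers renamed) and group the collisions, so structurally identical logic surfaces as a refactoring candidate. The normalization scheme and sample functions are invented for illustration; this is not how Architecture Dashboard is actually implemented, which OutSystems has not published.

```python
# Toy clone detection: group functions whose normalized bodies match.
import hashlib
import re
from collections import defaultdict

def normalize(body: str) -> str:
    """Strip whitespace and rename identifiers so structurally
    identical logic produces the same string."""
    tokens = re.findall(r"\w+|\S", body)
    names, out = {}, []
    for tok in tokens:
        if tok.isidentifier() and tok not in {"if", "return", "for"}:
            # Rename each distinct identifier to v0, v1, ... in order seen.
            out.append(names.setdefault(tok, f"v{len(names)}"))
        else:
            out.append(tok)
    return " ".join(out)

def duplicate_groups(functions: dict) -> list:
    """Return groups of function names whose normalized bodies collide."""
    buckets = defaultdict(list)
    for name, body in functions.items():
        digest = hashlib.sha1(normalize(body).encode()).hexdigest()
        buckets[digest].append(name)
    return [group for group in buckets.values() if len(group) > 1]

# Two functions with the same shape but different names: a candidate
# for consolidation into a single shared function.
functions = {
    "GetCustomerName": "if customer: return customer.name",
    "GetOrderName": "if order: return order.name",
    "DeleteOrder": "for o in orders: o.delete()",
}
print(duplicate_groups(functions))  # the two structurally identical getters
```

Production-grade engines work on parsed syntax trees rather than raw text, and also find near-duplicates, but the group-by-normalized-structure principle is the same.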
>> (Stu) It's fascinating. You're absolutely right. I'm curious: I think back to some of the earliest interactions I had with things that give you guidance, you know, spellcheckers, grammar check. How much does the AI that you work on learn what's specific to my organization and my preferences? Is there any community learning over time? Because there are industry best practices out there that are super valuable, but, you know, we saw in the SaaS wave, when I can customize things myself, we learn over time. So how does that play into today, and the roadmap for the AI that you're building? >> That's a great question. So our AI technology actually uses two different, big kinds of AI. We use machine learning, definitely, to learn from the community what the best practices are and what the most common patterns are that people use. So we use that to guide developers, but also to validate and analyze their code. But then we also use automated reasoning. This is more logic-based, reasoning-based AI. And we pair these two technologies to really create a system that is able to learn from data, but also able to reason at a higher order about what good practices are, to reach conclusions and learn new things from there. Now, we started by applying these technologies to community data and kind of standard best practices, but our vision is to more and more start learning specifically, and allowing tech leads and architects, in the future, to tailor these AI engines: perhaps to suggest, these are the best practices for my factory; these patterns, perhaps, are good best practices in general, but in my factory I do not want to use them, because I have some specificities for compliance or something like that. And our vision is that architects and tech leads can just provide a few examples of what they like and what they don't like.
And the engine just automatically learns and gets tailored to their own environment. >> (Stu) It's important that customers are able to move things forward in the direction that makes sense on their end. I'm also curious what partnerships OutSystems has out there, you know, being able to tie into things like what the public cloud is doing, lots of industry collaboration. So how does OutSystems fit into the broader AI ecosystem? >> Yeah. So one thing I did not mention, and to your point: we have two complementary visions and strategies for AI. One of them is that we really want to improve our own product, improve the automation and the abstraction in the product by using AI, together with a great user experience and the best programming language for software automation, right? So that's one: that's what we generally call AI-assisted development, infusing AI across the software development life cycle. The other one is that we also believe that the true elite cloud software companies that create frictionless experiences, one of the things they use to be super competitive and create those frictionless experiences is that they can themselves use AI and machine learning to automate processes and create really, really delightful experiences. So we're also investing there, and we're launching and announcing at NextStep, we've just shown this at the keynote, one tool that we call the machine learning builder, ML Builder. This essentially speaks to the fact that a lot of companies do not have access to data science talent. They really struggle to adopt machine learning. Just one out of 10 companies are able to go and put AI in production. So we're essentially abstracting that as well. We're also increasing the productivity for customers to implement AI and machine learning.
We use partners behind the scenes and cloud providers for the core technology, with automated machine learning and all of that. But we abstract all of the experience. So developers can essentially just pick the data they already have inside the OutSystems platform, and they just select, I want to train this machine learning model to predict this field, just click, click, click. And it runs dozens of experiments, selects the best algorithms, transforms the data for you, without you needing to have a lot of data science experience. And then you can just drag and drop it into the platform, integrate it in your application, and you're good to go. >> (Stu) Well, sounds, you know, phenomenal. You mentioned data scientists, we talked about the skill gap. Do you have any statistics, you know, is this helping people, you know, hire faster, lower the bar to entry for people to get on board, you know, increase productivity? What kind of hero numbers do your customers typically, you know, how do they measure success? >> Yeah. So we know that for machine learning adoption at companies, this is one of the top challenges that they have, right? So it's not only that companies do not have the expertise to implement machine learning in their products and their applications. They don't even have a good understanding of what the use cases and the technology opportunities are for them to apply. Right? So lots of different surveys have listed these as two of the top problems that companies have to adopt AI: access to data science skill, and understanding of the use cases. And that's exactly what we're trying to kind of package up in a very easy to use product, where you can see the use cases you have available. You just select your data, you just click train, you do not need to know the nitty gritty details.
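The click-to-train loop Antonio describes, running several experiments and keeping the best model, is essentially automated model selection. A minimal stdlib-only sketch of that idea (the candidate models and data are made up; a real AutoML service would try far more algorithms and do proper cross-validation):

```python
# Toy "ML builder" loop: try candidate models on a train/validation
# split, score each on the validation set, and keep the best one.

def mean_model(train_y):
    m = sum(train_y) / len(train_y)
    return lambda x: m                      # always predicts the mean

def linear_model(train_x, train_y):
    # Least-squares fit of y = a*x + b.
    n = len(train_x)
    mx, my = sum(train_x) / n, sum(train_y) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(train_x, train_y))
         / sum((x - mx) ** 2 for x in train_x))
    b = my - a * mx
    return lambda x: a * x + b

def mse(model, xs, ys):
    return sum((model(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def auto_train(train_x, train_y, val_x, val_y):
    candidates = {
        "mean": mean_model(train_y),
        "linear": linear_model(train_x, train_y),
    }
    # "Dozens of experiments" reduced to two: pick the lowest validation error.
    return min(candidates.items(), key=lambda kv: mse(kv[1], val_x, val_y))

train_x, train_y = [1, 2, 3, 4], [2, 4, 6, 8]   # data follows y = 2x
name, model = auto_train(train_x, train_y, [5, 6], [10, 12])
print(name)  # on this perfectly linear data, the linear model wins
```

The abstraction being sold is exactly this loop: the user only chooses the data and the field to predict, and the experiment-and-select machinery is hidden.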
And for us, a measure of success is that we've seen customers that are starting to experiment with ML Builder, is that in just a day or a few days, they can iterate over several machine learning models and put them in production. We have customers that have, you know, no machine learning models in production ever, and they just now have two. And they're starting to automate processes, they're starting to innovate with the business. And that for us is what we've seen as kind of the measure of success for businesses. Initially, what they want to do is POCs, and they want to experiment, and they want to get to production, start getting a feel for it and iterate. >> (Stu) From a product standpoint, is the AI just infused in, or is there additional licensing, you know, how do customers, you know, take advantage of it? What's the impact on that from the relationship with OutSystems? >> Yeah. So AI and machine learning for automation, validation and guidance is infused into our product, and there are, you know, no extra charges. It's just part of the product, and what we believe is kind of a core building block and a core service for everything we do in our product. For machine learning services and components that customers can use in their own applications, we allow you to integrate with cloud providers, and the billing is done separately. And that's something that we're working towards, building great technical partnerships and exploring other avenues for deeper integration, so that developers and customers do not really have to worry about those things as well. >> (Stu) Yeah. Well, it's such a great way to really democratize the use of this technology, a platform that they're used to. They start doing it. What's general feedback from your customers? Do they just like, "Oh, it's there." "I started playing with it." "It's super easy, it makes it better." Are there any concerns or pushback? Have we gotten beyond that? What do you hear?
Any good customer examples you can share as to general adoption? >> Yeah. So as I said, as we reduce the friction for adopting these technologies, we've seen one thing that's very interesting. So we have a few customers that are, for example, more in the logistics side of industry and vertical. And so they have a more conservative management, like they take time to adopt, they're more of a laggard in adopting these kinds of technologies. The business is more skeptical, doesn't want to spend a lot of time playing around, right. And once they saw what they could do with the platform, they quickly did a proof of concept. They showed it to the business, and the business had lots of ideas. So they just started interacting a lot more with IT. Which is something we see with the OutSystems platform, not just for AI and machine learning, but generally in digital transformation: when IT can start really being very agile and iterating and innovating, they start collaborating a lot with the business. And so what we see is customers asking us for even more. So customers want more use cases to be supported like this. Customers, also the ones that are more mature, that already have their centers of excellence and they have their data scientists, for example, want to understand how they can also bring in, perhaps, their use of very specialized tools. How can they integrate that into the platform so that, you know, for certain use cases, developers can very quickly train their own models, but specialized data science teams can also bring in their models, and developers can integrate those models easily and put them into production. Which is one of the big barriers: we see in a lot of companies people working on year-long projects, they develop the models, but they struggle to get them to production. And so we really want to focus on the whole end to end journey, either you're building everything within the OutSystems platform, or you're bringing it from a specialized pro tool.
We want to make that whole journey frictionless and smooth. >> (Stu) Antonio, final question I have for you. Of course, this space we're seeing maturing, you know, rapid new technologies out there. Give us a little look forward. What should we be expecting to see from OutSystems, or things even a little broader as you look at your partner ecosystem, over kind of the next six, 12, 18 months? >> Yeah. So we're going to continue to see a trend, I think, from the cloud service providers of democratization of the AI services. This is just starting to advance and accelerate, as these providers start packaging, like what OutSystems is also doing, some specific well-defined use cases, and then making the journey for training these models and deploying them super simple. That's one thing that will continue to ramp up. And we're going to move from AI services more focused on cognitive pre-trained models, which is kind of the status quo, to custom AI models based on your data. That's kind of the trend we're going to start seeing, and that's where OutSystems is also pushing forward. Generally, from the AI and machine learning application and technology side of things, I think one thing that we are leading on is that, you know, machine learning and deep learning is definitely one of the big drivers for the innovation that we're seeing in AI. But you'll start seeing more and more what is called hybrid AI, which is pairing machine learning and data-based artificial intelligence with more logic-based automated reasoning techniques, to really create systems that are able to operate at a really higher level, a higher cognitive level. Which is what OutSystems is investing in internally, in terms of research and development, and with partnerships with institutions like Carnegie Mellon University. >> (Stu) Wonderful.
Antonio, who doesn't want, you know, a tech expert sitting next to them, helping get rid of some of the repetitive, boring things or challenges. Thank you so much for sharing the updates. Congratulations on your progress, and we definitely look forward to hearing more in the future. >> (Antonio) Thank you, Stu. Have a good day. >> (Stu) All right. Stay tuned for more from OutSystems NextStep. I'm Stu Miniman. And thank you for watching theCUBE.

Published Date : Sep 14 2020



Jared Bell T-Rex Solutions & Michael Thieme US Census Bureau | AWS Public Sector Partner Awards 2020


 

>> Narrator: From around the globe, it's theCUBE with digital coverage of AWS Public Sector Partner Awards, brought to you by Amazon Web Services. >> Hi, and welcome back, I'm Stu Miniman, and we're here at the AWS Public Sector Partner Awards, really enjoying this. We get to talk to some of the diverse ecosystem, as well as they've all brought on their customers, some really phenomenal case studies. Happy to welcome to the program two first time guests. First of all, we have Jared Bell, he's the Chief Engineer of self response operational readiness at T-Rex Solutions, and T-Rex is the award winner for the most customer obsessed mission-based win in Fed Civ. So Jared, congratulations to you and the T-Rex team, and also joining him, his customer Michael Thieme, he's the Assistant Director for the Decennial Census Program systems and contracts for the US Census Bureau. Thank you so much both for joining us. >> Good to be here. >> All right, Jared, if we could start with you, as I said, you're an award winner, you sit in the Fed Civ space, you've brought us to the Census Bureau, which most people understand the importance of, that government program coming up on that, you know, every 10 year cycle we've been hearing, you know, TV and radio ads talking about. But Jared, if you could just give us a thumbnail of T-Rex and what you do in the AWS ecosystem. >> So yeah again, my name's Jared Bell and I work for T-Rex Solutions. T-Rex is a mid tier IT federal contracting company in Southern Maryland, recently graduated from HUBZone status, and so T-Rex really focuses on four key areas: infrastructure and Cloud modernization, cybersecurity and active cyber defense, big data management and analytics, and then overall enterprise system integration. And so we've been, you know, an AWS partner for quite some time now, and with decennial, you know, we got to really exercise a lot of the bells and whistles that are out there and really put it all to the test.
>> All right, well, Michael, you know, so many people in IT, we talk about the peaks and valleys that we have. Not too many companies or organizations say, well, we know exactly, you know, that 10 year spike of activity that we're going to have. I know there's lots of work that goes on beyond that, but tell us a little bit about your role inside the Census Bureau and what's under your purview. >> Yes, the Census Bureau actually does hundreds of surveys every year, but the decennial census is sort of our main flagship activity. And I am the Assistant Director under our Associate Director for the IT and for the contracts for the decennial census. >> Wonderful, and if you could tell us a little bit about the project that you're working on, that eventually pulled T-Rex in. >> Sure. This is the 2020 census, and the challenge of the 2020 census is, we've done the census since 1790 in the United States. It's a pillar, a foundation of our democracy, and this was the most technologically advanced census we've ever done. Actually up until 2020, we have done our censuses mostly by pen, paper, and pencil. And this is a census where we opened up the internet for people to respond from home. We can have people respond on the phone, people can respond with an iPhone or an Android device. We tried to make it as easy as possible and as secure as possible for people to respond to the census where they were, and we wanted to meet the respondent where they were. >> All right. So Jared, I'd love you to chime in here, 'cause I'm hearing talk about, you know, the technology adoption. You know, how much was already in plans there? Where did T-Rex intersect with this census activity? >> Yeah. So, you know, census deserves a lot of credit for their kind of innovative approach with this technical integrator contract, which T-Rex was fortunate enough to win. When we came in, you know, we were just wrapping up the 2018 test.
We really only had 18 months to go from start to, you know, a live operational test to prepare for 2020. And it was really exciting to be brought in on such a large mission critical project, and this is one of the largest federal IT projects in the Cloud to date. And so, you know, when we came in, we had to really, you know, bring together a whole lot of solutions. I mean, the internet self response, which is what we're going to talk about today, was one of the major components. But we really had a lot of other activities that we had to engage in. You know, we had to design and prepare an IT solution to support 260 field offices, 16,000 field staff, 400,000 mobile devices and users that were going to go out and knock on doors for enumeration. So it was really a big effort that we were honored to be a part of, you know. And on top of that, T-Rex actually brought to the table a lot of its past experience with cybersecurity and active cyber defense. Also, you know, because of the importance of all this data, you know, we had a role in security all throughout, and I think T-Rex was prepared for that and did a great job. And then, you know, overall I think that, not necessarily directly to your question, but I think, you know, one of the things that we were able to do to make ourselves successful, and to really engage with the Census Bureau and be effective with our stakeholders, was that we really built a culture of decennial within the technical integrator. You know, we had brown bags and working sessions to really teach the team the importance of the decennial, you know, not just as a career move, but also as an important activity for our country.
And so I think that really helped the team, you know, internalize that mission and really drove kind of our dedication to the census mission, and really made us effective. And again, a lot of the T-Rex leadership had a lot of experience there from past decennials, and so they really brought that mindset to the team, and I think it really paid off. >> Michael, if you could bring us inside the project a little bit, you know, 18 months, obviously you have a specific deadline you need to hit. Help us understand kind of the architectural considerations that you had there, any concerns that you had. And I have to imagine that just the global activities, the impacts of COVID-19, have impacted some of the end stage, if you will, activities here in 2020. >> Absolutely. Yeah. The decennial census is, I believe, a very unique IT problem. We have essentially 10 months out of the decade that we have to scale up to gigantic and then scale back down to run the rest of the Census Bureau's activities. But our project, you know, every year ending in zero, April 1st is census day. Now April 1st continued to be census day in 2020, but we also had COVID essentially taking over virtually everything in this country, and in fact in the world. So, the way that we set up to do the census, with the Cloud and with the IT approach and modernization that we took, actually, frankly, very luckily enabled us to kind of get through this whole thing. Now, Jared discussed a little bit the fact that we're here to talk about our internet self response; we haven't had one second of downtime for our response. We've taken 77 million, I think even more than 78 million responses from households, out of the 140 million households in the United States. We've gotten 77 million people to respond on our internet site without one second of downtime, a good user experience, good supportability. But the project has always been the same.
It's just this time, we're actually doing it with much more technology, and hopefully the way that the Cloud has supported us will prove to be really effective for the COVID-19 situation. Because we've had changes in our plans, differences in timeframes. We are actually not even going into the field, or we're just starting to go into the field these next few weeks, where we would have almost been coming out of the field at this time. So that flexibility, that expandability, that elasticity that being in the Cloud gives all of our IT capabilities was really valuable this time. >> Well, Jared, I'm wondering if you can comment on that. All of the things that Michael just said, you know, seem like, you know, they are just the spotlight pieces that I look at Cloud for. You know, being able to scale on demand, being able to use what I need when I need it, and then dial things down when I don't, and especially, you know, I want to limit how much people actually need to get involved. So help us understand a little bit, you know, what AWS services underneath were supporting this, and anything else around the Cloud deployment. >> Sure, yeah. Michael is spot on. I mean, the Cloud is tailor made for our operation and activity here. You know, I think all told, we used over 30 of the AWS FedRAMP solutions in standing up our environment across all those 52 systems of systems that we were working with. You know, just to name a few, I mean, for internet self response alone, we're relying heavily on auto scaling groups and elastic load balancers. You know, we relied a lot on Lambda functions and DynamoDB. We were one of the first adopters of DynamoDB global tables, which we used for session persistence across regions. And then on top of that, you know, the data was all flowing down into RDS databases, and then from there to, you know, the census data lake, which was built on EMR and Elasticsearch capabilities. And that's just to name a couple.
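The session-persistence pattern Jared mentions, DynamoDB global tables replicating state across regions, can be illustrated with a toy in-memory analogue. This is purely a sketch of the idea (the class, region names, and session data are illustrative; a real deployment would use DynamoDB itself, where replication is asynchronous rather than the synchronous copy shown here):

```python
# Toy stand-in for cross-region session replication. Each "region" holds
# a copy of the session table; a write to any region propagates to all
# replicas, so a user rerouted to another region keeps their session.
class GlobalSessionTable:
    def __init__(self, regions):
        self.replicas = {r: {} for r in regions}

    def put(self, region, session_id, state):
        # DynamoDB global tables replicate asynchronously; for simplicity
        # this sketch copies to every replica synchronously.
        for replica in self.replicas.values():
            replica[session_id] = dict(state)

    def get(self, region, session_id):
        return self.replicas[region].get(session_id)

table = GlobalSessionTable(["us-east-1", "us-west-2"])
table.put("us-east-1", "resp-42", {"page": 3})
# Failover: the same session is visible from the other region.
print(table.get("us-west-2", "resp-42"))
```

The payoff is exactly the resilience Michael describes later: if one region goes down, traffic can shift and in-flight respondents do not lose their place.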
I mean, you know, we ran the gamut of AWS services to make all this work, and they really helped us accelerate. And as Michael said, you know, we stood this up expecting to be working together in a war room, watching everything hand in hand, and because of the way we were able to architect it in partnership with AWS, when we all had to go out and stay at home, you know, the infrastructure remained rock solid. We didn't have to worry about, you know, being hands on with the equipment, and, you know, again, the ability to automate and integrate with those solutions, CloudFormation and things like that, really let us keep a small agile team of, you know, DevSecOps there to handle the deployments. And we were doing full scale deployments with, you know, one or two people in the middle of the night without any problems. So it really streamlined things for us and helped us keep a tight ship, for sure. >> Michael, I'm curious about what kind of training your team needed to go through to take advantage of this solution. So from bringing it up to the ripple effect, as you said, you're only now starting to look at who would go into the field, who uses devices and the like. So help us understand really the human aspect of undergoing this technology. >> Sure. Now, the census always has to ramp up this sort of immediate workforce. We hire, we actually processed over 3 million people; I think 3.9 million people applied to work for the Census Bureau. And each decade we have to come up with a training program, and actually training sites all over the country, and the IT to support those. Now, again, modernization for the 2020 census didn't only involve things like our internet self response, it also involved our training.
We have all online training now. We used to have what we called verbatim training, where we had individual teachers all over the country, in places like libraries, essentially reading text exactly the same way, over and over again, to the people that we trained. But now it's all electronic. It allows us to, and this goes to the COVID situation as well, it allows us to bring only three people in at a time to do training, essentially get them started with the device that we have them use when they're knocking on doors, and then go home and do the training, and then come back to work with us, all with a minimal human contact sort of a model. And even though we designed it differently, the way that we set up the technology this time allowed us to change that design very quickly, get people trained, and not essentially stop the census. We essentially had to slow it down, because we weren't sure exactly when it was going to be safe to go knocking door to door, but we were able to do the training, and all of that worked and continues to work phenomenally. >> Wonderful. Jared, I wonder if you've got any lessons learned from working with the census group that might be applicable to kind of the broader customers out there? >> Oh, sure. Well, working with the census, you know, it was really a great group to work with. I mean, one of the few groups I've worked with who have such a clear vision and understanding of what they want their final outcome to be. I think again, you know, for us the internalization of the decennial mission, right? It's so big, it's so important. I think that because we adopted it early on, we felt that we were true partners with census, we had a lot of credibility with our counterparts, and I think that they understood that we were in it with them together, and that was really important.
I would also say that, you know, because we're talking about the Cloud solutions that we worked on, you know, we also engaged heavily with the AWS engineering group, and in partnership with them, you know, we relied on the infrastructure event management services they offer, which was able to give us a lot of great insight into our architecture and our systems and monitoring, to really make us feel like we were ready for the big show when the time came. So, you know, I think for me, another lesson learned there was that, you know, the Cloud providers like AWS, they're not just a vendor, they're a partner, and I think that now going forward, we'll continue to engage with those partners early and often. >> Michael, the question I have for you is, you know, what would you say to your peers? What lessons have you learned, and how much of what you've done for the census do you think will be applicable to all those other surveys that you do in between the big 10 year surveys? >> All right. I think we have actually set a good milestone for the rest of the Census Bureau. The modernization that the 2020 census has allowed, since it is our flagship, really is something that we hope we can continue through the decade and into the next census, as a matter of fact. But I think one of the big lessons learned I wanted to talk about was, we have always struggled with disaster recovery. And one of the things that having the Cloud and our partners in the Cloud has helped us do is essentially take advantage of the resilience of the Cloud. So there are data centers all over the country. If we ever had downtime somewhere, we knew that we were going to be able to stay up. For the decennial census, we've never had the budget to pay for persistent disaster recovery, and the Cloud essentially gives us that kind of capability. Jared talked a lot about security.
I think we have taken our security posture to a whole different level, something that allowed us, as I said before, to keep our internet self response free of hacks and breaches through this whole process, and through a much longer process than we even intended to keep it open. So, there's a lot here that I think we want to bring into the next decade, a lot that we want to continue, and we want the census to essentially stay as modern as it has become for 2020. >> Well, I will tell you personally, Michael, I did take the census online, it was really easy to do, and I'll definitely recommend, if they haven't already, everybody listening out there, it's so important that you participate in the census so that they have complete data. So, Michael, Jared, thank you so much. Jared, congratulations to your team for winning the award, and you know, such a great customer. Michael, thank you so much for what you and your team are doing. We appreciate all that's being done, especially in these challenging times. >> Thank you, and thanks for doing the census. >> All right, and stay tuned for more coverage of the AWS Public Sector Partner Awards. I'm Stu Miniman, and thank you for watching theCUBE. (upbeat music)

Published Date : Aug 6 2020



Vertica Database Designer - Today and Tomorrow


 

>> Jeff: Hello everybody, and thank you for joining us today for the Virtual VERTICA BDC 2020. Today's breakout session is titled "VERTICA Database Designer Today and Tomorrow." I'm Jeff Healey, VERTICA Product Marketing, and I'll be your host for this breakout session. Joining me today is Yuanzhe Bei, Senior Technical Manager from VERTICA Engineering. But before we begin, (clearing throat) I encourage you to submit questions or comments during the virtual session. You don't have to wait, just type your question or comment in the question box below the slides and click Submit. As always, there will be a Q&A session at the end of the presentation. We'll answer as many questions as we're able to during that time; any questions we don't address, we'll do our best to answer offline. Alternatively, visit the VERTICA forums at forum.vertica.com to post your questions there after the session. Our engineering team is planning to join the forums to keep the conversation going. Also, a reminder that you can maximize your screen by clicking the double arrow button at the lower right corner of the slides. And yes, this virtual session is being recorded and will be available to view on demand this week. We will send you a notification as soon as it's ready. Now let's get started. Over to you, Yuanzhe. >> Yuanzhe: Thanks Jeff. Hi everyone, my name is Yuanzhe Bei, I'm a Senior Technical Manager in the VERTICA Server RND Group. I run the query optimizer, catalog and the disaggregated engine team. Very glad to be here today to talk about "VERTICA Database Designer Today and Tomorrow". This presentation will be organized as follows: I will first refresh some knowledge about VERTICA fundamentals, such as Tables and Projections, which will bring us to the questions, "What is Database Designer?" and "Why do we need this tool?".
Then I will take you through a deep dive into Database Designer, or as we call it, DBD, and see how DBD's internals work. After that I'll show you some exciting DBD improvements we have planned for the 10.0 release, and lastly, I will share with you some of the DBD future roadmap we have planned next. As most of you should already know, VERTICA is built on a columnar architecture. That means data is stored column-wise. Here we can see a very simple example of a table with four columns, and as many of you may also know, a table in VERTICA is a virtual concept. It's just a logical representation of data, which means users can write SQL queries that reference the table names and columns, just like in other relational database management systems, but the actual physical storage of data is called a Projection. A Projection can reference a subset, or all, of the columns of its anchor table, and must be sorted by at least one column. Each table needs at least one superprojection, which references all the columns of the table. If you load data into a table with no projection, an automatic superprojection will be created, which will be arbitrarily sorted by the first couple of columns in the table. As you can imagine, even though such an auto projection can be used to answer any query, the performance is not optimized in most cases. A common practice in VERTICA is to create multiple projections containing different subsets of columns, sorted in different ways, on the same table. When a query is sent to the server, the optimizer will pick the projection that can answer the query in the most efficient way. For example, let's say you have a query that selects columns B, D, C and sorts by B and D; the third projection would be ideal, because the data is already sorted, so you can save the sorting cost while executing the query. Basically, when you choose the design of a projection, you need to consider four things. First and foremost, of course, the sort order.
Data that is already sorted in the right way can benefit quite a lot of query operations: Order By, Group By, analytics, merge join, predicates and so on. The selected column group is also important, because the projection must contain all the columns referenced by your workload queries. If the projection is missing even one column, it cannot be used for that particular query. In addition, VERTICA is a distributed database and allows projections to be segmented based on the hash of a set of columns, which is beneficial if the segmentation matches the join keys or group keys. And finally, the encoding of each column is also part of the design, because data sorted in a different way may completely change the optimal encoding for each column. This example only shows the benefit of the first two, but you can imagine the rest are also important. But even so, it doesn't sound that hard, right? Well, I hope you change your mind when you see this; at least I do. These machine-generated queries really beat me. It would probably take an experienced DBA hours to figure out which projections could benefit these queries, not to mention that there could be hundreds of such queries in the regular workload logs in the real world. So what can we do? That's why we need DBD. DBD is a tool integrated into the VERTICA server that can help DBAs perform an analysis of their workload queries, table schemas and data, and then automatically figure out the most optimized projection design for their workload. In addition, DBD is also a sophisticated tool that can be customized by the user through a lot of parameters, objectives and so on. And lastly, DBD has access to the optimizer, so DBD knows what kind of attributes a projection needs to have in order for the optimizer to benefit from it.
DBD has been there for years, and I'm sure there are plenty of materials available online to show you how DBD can be used in different scenarios: whether to achieve a query-optimized or load-optimized design, whether it's a comprehensive design or an incremental design, whether to dump the deployment script and deploy manually later or let DBD do the deployment automatically, and many other options. I'm not planning to talk about those today; instead, I will take the opportunity to open this black box called DBD and show you exactly what hides inside. DBD is a complex tool, and I have tried my best to summarize the DBD design process into seven steps: Extract, Permute, Prune, Build, Score, Identify and Encode. What do they mean? Don't worry, I will show you step by step. The first step is Extract: extract interesting columns. In this step, DBD parses the design queries, figures out the operations that could benefit from a potential projection design, and extracts the corresponding columns as interesting columns. So predicates, Group By, Order By, join conditions and analytics are all interesting columns to DBD. As you can see from these three simple sample queries, DBD extracts the interesting column sets on the right. Some of these column sets are unordered. For example, for the green one, Group By a1 and b1, DBD extracts the interesting column set and puts it in an unordered set, because data sorted by either a1 first or b1 first can benefit from this Group By operation. Some of the other sets are ordered, and the best example is here, the Order By clause on a2 and b2: obviously you cannot sort by b2 and then a2. These interesting column sets will then be used to extend the actual projection sort order candidates. The next step is Permute. Once DBD extracts all the interesting column sets, it will enumerate sort orders using them. How does DBD do that? Let me start with a very simple example.
Here you can see DBD can enumerate two sort orders by extending d1 with the unordered set a1, b1, deriving two sort order candidates: d1, a1, b1 and d1, b1, a1. These sort orders can benefit queries with a predicate on d1, and also benefit queries with Group By a1, b1 when d1 is constant. With the same idea, DBD will try to extend the other sets with each other and populate more sort order permutations. You can imagine there could be many of these candidates, depending on how many queries you have in the design, more than can reasonably be handled. That brings us to the third step, which is Pruning. This step limits the candidate sort orders so that the design won't run forever. DBD uses a very simple capping mechanism: it ranks all the candidates by length, and only a certain number of the sort orders with the longest length will move forward to the next step. Now we have all the sort order candidates we want to try, but to know whether a sort order candidate will actually benefit from the optimizer, DBD needs to ask the optimizer. So before that happens, this step has to build those projection candidates in the catalog. This step generates the projection DDLs around the sort orders and creates these projections in the catalog. These projections won't be loaded with real data, because that takes a lot of time; instead, DBD will copy over the statistics of existing projections to these projection candidates, so that the optimizer can use them. The next step is Score: scoring with the optimizer. Now that projection candidates are built in the catalog, DBD can send the workload queries to the optimizer to generate query plans. The optimizer will return the query plans, and DBD will go through each plan and investigate whether certain benefits are being achieved.
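The Permute and Prune steps just described can be sketched as a toy in Python; the column names match the example above, and the capping rule (keep only the longest candidates) is a simplification of what DBD actually does:

```python
from itertools import permutations

# Toy Permute step: extend an ordered prefix with every permutation of an
# unordered interesting set; then a toy Prune step that keeps only the
# longest candidates so the search space stays bounded.
def permute(prefix, unordered_set):
    return [list(prefix) + list(p) for p in permutations(unordered_set)]

def prune(candidates, cap):
    return sorted(candidates, key=len, reverse=True)[:cap]

# Extending d1 with the unordered set {a1, b1} yields two candidates:
cands = permute(["d1"], ["a1", "b1"])  # [['d1','a1','b1'], ['d1','b1','a1']]

# The short lone ['d1'] candidate is pruned away by the length cap.
kept = prune(cands + [["d1"]], cap=2)
```

The real enumeration extends many sets against each other, which is exactly why the pruning cap is needed: the number of permutations grows combinatorially with the number of interesting column sets.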
The benefits list has been growing over time as the optimizer adds more optimizations. Let's say in this case, because the projection candidate can be sorted by b1 and a1, it is eligible for the Group By Pipelined benefit. Each benefit has a preset score. The overall benefit score across all design queries will be aggregated and then recorded for each projection candidate. We are almost there. Now we have the total benefit scores for the projection candidates we derived from the workload queries, and the job is easy: just pick the sort order with the highest score as the winner. Here we have the winner d1, b1 and a1. Sometimes you need to find more winners, because the chosen winner may only benefit a subset of the workload queries you provided to DBD. So in order for the rest of the queries to also benefit, you need more projections. In this case, DBD will go to the next iteration, and let's say it finds another winner, d1, c1, to benefit the workload queries that cannot be benefited by d1, b1 and a1. The number of iterations, and thus the number of winners, really depends on the design objective the user sets. It can be load-optimized, which means only one superprojection winner will be selected; or query-optimized, where DBD tries to create as many projections as needed to cover most of the workload queries; or a balanced objective somewhere in the middle. The last step is to decide the encoding for each projection column for the projection winners. Because the data is sorted differently, the encoding benefits can be very different from the existing projection, so choosing the right projection encoding design can reduce the disk footprint by a significant factor. It's worth the effort to find the best encoding. DBD picks the encoding based on actually sampling the data and measuring the storage footprint. For example, in this case the projection winner has three columns, and say each column has a few encoding options.
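The Score and Identify iteration described above is essentially a greedy loop: score every candidate against the still-uncovered queries, crown a winner, and repeat. A toy sketch, with invented candidate names and preset scores matching the d1,b1,a1 and d1,c1 example:

```python
# Toy Score/Identify loop: each candidate earns a preset score from every
# workload query it benefits; the top scorer wins, then the next iteration
# repeats over only the queries the winners so far do not cover.
def pick_winners(candidates, queries, benefits):
    # benefits[(candidate, query)] -> preset benefit score, 0 if no benefit
    remaining, winners = set(queries), []
    while remaining:
        scores = {c: sum(benefits.get((c, q), 0) for q in remaining)
                  for c in candidates}
        winner = max(scores, key=scores.get)
        if scores[winner] == 0:
            break  # nothing helps the leftover queries; stop iterating
        winners.append(winner)
        remaining -= {q for q in remaining if benefits.get((winner, q), 0) > 0}
    return winners

benefits = {("d1,b1,a1", "q1"): 4, ("d1,b1,a1", "q2"): 3, ("d1,c1", "q3"): 2}
winners = pick_winners(["d1,b1,a1", "d1,c1"], ["q1", "q2", "q3"], benefits)
```

A load-optimized objective would stop after the first iteration (one superprojection winner); a query-optimized objective keeps iterating until the workload is covered, as this loop does.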
DBD will write the sample data in the way this projection is sorted, and then you can see that with different encodings, the disk footprint is different. DBD will then compare the disk footprints of the different options for each column, and pick the best encoding option, the one with the smallest storage footprint. Nothing magical here, but it just works pretty well. And that's basically how DBD works internally. Of course, I've simplified quite a lot; for example, I didn't mention how DBD handles segmentation, but the idea is similar to analyzing the sort order. I hope this section gave you some basic idea about DBD for today. So now let's talk about tomorrow, and here comes the exciting part. In version 10.0, we significantly improved DBD in many ways. In this talk I will highlight four issues in the old DBD and describe how the new DBD in 10.0 addresses those issues. The first issue is that the DBD API is too complex. In most situations, what the user really wants is very simple: my queries were slow yesterday; would a new or different projection help speed them up? However, to answer a simple question like this using DBD, the user will very likely have the documentation open on the side, because they have to go through the whole complex flow: creating a design, running the design, getting the output and then deploying the design at the end. And that's not all; for each step, there are several functions the user needs to call in order. Adding these up, the user needs to write quite a long script with dozens of function calls. It's just too complicated, and most of you may find it annoying. Users either manually tune the projections themselves, or simply live with the performance and come back when it gets really slow again, and of course in most situations they never come back to use DBD. In 10.0, VERTICA supports a new simplified API to run DBD easily.
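The encoding selection described above (sort the sample, "encode" it each way, keep the option with the smallest footprint) can be sketched as a toy. The two stand-in encoders and their cost units are invented for illustration; real VERTICA encoders are far more elaborate:

```python
# Toy Encode step: sample the column in projection sort order, size each
# encoding option, and keep whichever yields the smallest footprint.
def rle_size(values):
    # Run-length encoding thrives on sorted data: one (value, count) pair
    # per run of equal values; pretend each pair costs 2 units.
    runs = 1 + sum(1 for a, b in zip(values, values[1:]) if a != b)
    return runs * 2

def raw_size(values):
    return len(values)  # one unit per value, no compression

def best_encoding(sample):
    options = {"RLE": rle_size(sample), "NONE": raw_size(sample)}
    return min(options, key=options.get)

# Sorting the sample first collapses it into few runs, so RLE wins here.
sorted_sample = sorted(["us", "us", "us", "eu", "eu", "apac", "us", "us"])
choice = best_encoding(sorted_sample)
```

This also shows why the encoding decision must come after the sort order is chosen: the same values in a different order produce different run lengths, and therefore a different best encoding.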
There will be just one function, designer_single_run, with one argument: the interval in which you think your queries were slow. In this case, the user complained about it yesterday, so all the user needs to do is specify one day as the argument and run it. The user doesn't need to provide anything else, because DBD will look up the query history within that time window and automatically populate the design, run the design, export the projection design and clean up; no user intervention needed. No need to have the documentation on the side and carefully write and debug a script; just one function call. That's it. Very simple. So that must be pretty impressive, right? Now here comes another issue. To fully utilize this single-run function, users are encouraged to run DBD on the production cluster. However, VERTICA used to recommend against running a design on a production cluster. One of the reasons is that DBD takes massive locks, both table locks and catalog locks, which badly interfere with the running workload on a production cluster. As of 10.0, we eliminated all the table and catalog locks from DBD. Yes, we eliminated 100% of them; a simple improvement, a clear win. The third issue, which users may not be aware of, is that DBD writes intermediate results into real VERTICA tables. The reason DBD has to do that is that DBD is a background task, so some users need the intermediate results to monitor the progress of DBD in a concurrent session. For a complex design, the intermediate results can be quite massive, and as a result many ROS files will be created and written to disk, which stresses both the catalog and the disk and can slow down the design. For Eon mode, it's even worse, because the tables are shared on communal storage. So writing to a regular table means DBD has to upload the data to the communal storage, which is even more expensive and disruptive.
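The single-argument flow described above, where designer_single_run only needs the slow interval, boils down to filtering recorded query history by a time window. A toy Python sketch of that first step; the history entries and field names are invented for illustration (in VERTICA the function itself is invoked from SQL):

```python
from datetime import datetime, timedelta

# Toy sketch of what a "single run" entry point does first: select the
# design queries from query history that fall inside the requested interval
# (e.g. the last day), so the user supplies nothing but that one interval.
def queries_in_window(history, now, interval):
    cutoff = now - interval
    return [q["sql"] for q in history if q["ran_at"] >= cutoff]

now = datetime(2020, 3, 30, 12, 0)
history = [
    {"sql": "slow_report_query", "ran_at": now - timedelta(hours=5)},
    {"sql": "old_batch_query",   "ran_at": now - timedelta(days=3)},
]
# "My queries were slow yesterday" -> a one-day window.
design_queries = queries_in_window(history, now, timedelta(days=1))
```

Everything downstream (populate, run, export, clean up) then operates on that selected query set with no further user input.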
In 10.0, we significantly restructured the intermediate results buffer and made it a shared in-memory data structure. Monitoring queries will look up the in-memory data structure directly, through a system table, and return the results. No intermediate result files will be written anymore. Another expensive usage of local disk by DBD is encoding design. As I mentioned earlier in the deep dive, to determine which encoding works best for the new projection design, there's no magic way; DBD needs to actually write the sample data to disk using the different encoding options, find out which one has the smallest footprint, and pick it as the best choice. This written sample data is useless afterwards and will be wiped out right away, and you can imagine this is a huge waste of system resources. In 10.0 we improved this process: instead of writing the differently encoded data to disk and then reading the file sizes, DBD aggregates the data block sizes on the fly. The data blocks are not written to disk, so the overall encoding design is more efficient and non-disruptive. Of course, this is just the start. The reason we put a significant amount of resources into improving DBD in 10.0 is that the VERTICA DBD is an essential component of the out-of-the-box performance design campaign. To simply illustrate the timeline: we are now at the second step, where we significantly reduced the running overhead of DBD, so that users will no longer fear running DBD on their production cluster. Please note that as of 10.0, we haven't really started changing how the DBD design algorithm works, so what we discussed in the deep dive today still holds.
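The on-the-fly aggregation idea mentioned above can be sketched in miniature: rather than materializing each encoded file and then measuring it, just accumulate the would-be block sizes while streaming over the sample. The pretend encoder and its fixed block size are invented for illustration:

```python
# Toy contrast between the old and new approaches to encoding sizing:
# old: materialize all encoded blocks, then measure them;
# new: stream over the sample and only accumulate a size counter.
def encoded_blocks(sample):
    # Pretend-encoder: emits one fixed-size block per run of equal values.
    prev = object()
    for v in sample:
        if v != prev:
            yield 8  # 8 cost units per new run
            prev = v

def size_by_materializing(sample):
    blocks = list(encoded_blocks(sample))  # old way: everything retained
    return sum(blocks)

def size_on_the_fly(sample):
    return sum(encoded_blocks(sample))     # new way: nothing retained

sample = ["a", "a", "b", "b", "b", "c"]
old = size_by_materializing(sample)
new = size_on_the_fly(sample)
```

Both paths compute the same footprint; the point is that the streaming path never holds (or writes) the encoded data, which is exactly the resource saving being described.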
For the next phase of DBD, we will make the design process smarter, and this will include a better enumeration mechanism, so that the pruning is more intelligent rather than brute force, which will result in better design quality and also faster design. The longer-term goal is for DBD to achieve automation. What I really mean by automation is that, instead of having the user decide when to use DBD, when their queries are slow, VERTICA will detect this event and have DBD run automatically for users, and suggest better projection designs if the existing projections are not good enough. Of course, there is a lot of work that needs to be done before we can fully achieve that automation, but we are working on it. At the end of the day, what the user really wants is a fast database, right? Thank you for listening to my presentation; I hope you found it useful. Now let's get ready for the Q&A.

Published Date : Mar 30 2020



Sizzle Reel | UiPath Forward 2019


 

It's going to come from the expansion potential, right? None of our customers are more than one percent automated from an RPA perspective, so that shows you the massive opportunity. But back to the market size data: Craig and I and the other analysts talk often, because I think the TAM views are very low. You look at our market share, let's just get some real data out there. Our market share in 2017 was 5%; let's use Craig's linear data for now. You know, our market share this year is over 20%. I don't have the exact numbers, and we don't provide guidance anymore, but we're substantially gaining share. Now, I believe that's the reality of the market. We know Blue Prism's numbers; we grow four times faster than them every quarter. Automation Anywhere won't share their numbers, but you know, I can make some guesses, and either way I think we're gaining share on them significantly. I think, you know, Craig's not going to want us to be 50% of the market in two years, he's just not, and so he's going to have to figure out how to think more broadly about that market trend. He talked about it on stage today: how does he calculate the AI impact? And the other piece is now the process mining. Now that we are integrating process mining into RPA, right, as a strategic component of that, how does that also evolve the market? So I think you have both the expansion in the product portfolio, and then you have the fact that customers are going to add more automation at a faster pace, with more robots, and that's where the expansion really kicks in. We often say, you know, look, there's a company that one day will be a public company; our ARR number is very important, and we do openly and transparently share that. But the other big metric will be, you know, dollar-based net expansion rate, which shows really how customers are expanding. I know what our number is; we haven't shared it
yet. I know all the SaaS companies in the top 10, and I can tell you, you know, we're higher than all of them; the market projections are low. And I think, you know, what you were just saying too is that the company's pitch is that we are freeing people, we are liberating them from the mundane, from the drudgery, from the data entry. And as you pointed out rightfully, a lot of the customers are saying, oh no, it's giving our employees time back to focus on the higher-level tasks, the more creative aspects of their job. But I wonder what it really is doing to jobs. I mean, I think there's a really telling line in that Forbes profile of Daniel Dines, who is the CEO and founder of this company, the newly minted billionaire, the first-ever bot billionaire, where an MIT professor is quoted saying, you know, we always say to the companies, give us your data and we'll tell you if it is in fact having this job-killing effect, and he said the companies don't want to give that up. At Accelirate, we're one of the largest niche providers; process automation is the only thing that we do, we're a process automation and AI company, and our sole focus has been process automation since our inception. In our past lives we were generalists; we did well and wanted to do it again, so when we started Accelirate we wanted to make sure that we focused on a very specific vertical niche, and process automation was just starting to pick up about mid-2016-ish. I think one of the big trends that's out there, I mean RPA has come onto the scene, and I like how you phrased it, Dave, because you referred to it, rightly so, as automation is not new. And so we sort of say the big question out there is: is RPA the flavor of the month? It's definitely not. I come from a firm that put out a blog earlier this year called "RPA is dead, long live automation," and that's because when we look at RPA and when we think about what its
impact is in the marketplace, to us the whole point of automation in any form, regardless of whether it's RPA or good old-school BPM, whatever it may be, is that its mission is to drive transformation. And so the HFS perspective, and what all of our research shows, is that the goal everyone is striving towards is to get to that transformation. And so the reason we put out that piece, "RPA is dead, long live integrated automation platforms," is to make the point that RPA affords an opportunity for change, to drive transformation. So if you are not actually looking at the processes within your company and taking this opportunity to say, what can I change, what processes are just bad and we've been doing them for so long, I'm not even sure why, what can we transform, what can we optimize, what can we invent, if you're not taking that opportunity as an enterprise to truly embrace the change and move towards transformation, that's a missed opportunity. So I always say, RPA, you can kind of couch it as one of many technologies, but what RPA has really done for the marketplace today is it's given business users, the leaders, the realization that they can have a role in their own transformation, and that's one of the reasons why it's actually become very important. But a single tool in its own right will never be the holistic answer. That's a very good question; I think it's a question that has been very common throughout this entire conference. I would say, you know, when I think about scaling, what I've noticed over the past few years is that the actual bot development is about 25 percent of the work that you need to do, right? When it comes to scale, everything outside of the actual development is the important part. So how are you funneling opportunities into a pipeline? How are you streamlining the entire process reengineering of, you know, fitting an RPA into an existing process? You know, what are the
governance practices you have in place to make sure that the code being developed is clean and can be maintained long-term? And then more importantly, I think people overlook this: people think of scale as being able to develop a lot of bots, but I think more importantly, scale is being able to efficiently maintain a large portfolio of bots, and that's what I've realized this year. We've got now about 300 automations in production, and you know, your reputation as an organization really rests on how well you maintain those bots, because if your bots are consistently failing and you're not fixing them quickly enough for your functional users to leverage them, then you lose a lot of credibility. So I think that's been a big learning for us as we reach scale. How are you guys thinking about the way in which a user, a worker, interacts with that bot? I think it's more like a dance and less like a task manager, right? So you might think, in classic automation, you know, click a button and go do this thing, click a button and go do that thing; the automation is happening when you want it to. The way that our platform is written, the robot can listen to what you're doing; it can monitor for when you click on a specific button or for when you move files to a folder. So think about it less like a conscious effort to guide the robot, and more as a collaborative effort, where the robot is seeing what you're doing and taking action to help you, doing things on your behalf and then letting you know when they're done. So the paradigm is changing for work, and when you have a robot on your computer, it's going to open up a new way of doing your daily work. And the enabler there is what, machine learning, machine intelligence? It's a combination of things. So think about machine learning and AI as just one tool that the robot has to use, OCR as well. You know, we did a demo earlier this week where we took receipts, moved them to a folder; the robot sees that
you've moved receipts into a folder, can bounce them off an endpoint that can break apart those receipts using OCR, load that all into Excel and help you with your expense report. So think about things like this: you do what you would normally do, put receipts in a folder, and the robot takes care of the rest. The most fascinating thing about RPA right now is that it's really highlighting the problems that organizations have; all their accidents of history are really being brought up by RPA. And then you've got these digital darlings that they're trying to compete with, the greenfield-site kind of people, and some of those don't have beautiful back offices, but let's not go there for a minute. So RPA is an opportunity for companies to link their digital dreams with their existing legacy nightmares. I definitely think we're seeing less tech spending expected for Q4, and I think that will spill into 2020, based on the ETR, Enterprise Technology Research, data that we see. But I think it's actually a healthy pullback; I kind of agree with Guy on that front, and I actually think it is good for RPA. I think RPA is one of those sectors that you see in the ETR surveys that is gaining share relative to other tech spending, and I think that will continue in any downturn. So I expect softness, you know, however you define downturn; I don't think it's going to be falling off a cliff or a disaster, but I definitely think spending will be more tepid. One of the nice things about RPA is you can take your software robots and apply them to an existing process, and a lot of times, almost always, changing processes is painful. However, we've talked to some customers that have said, by applying RPA to our business, it's exposed some really bad processes. Have you experienced that, and can you maybe share that experience with us? Absolutely. So for us, one of the initial robots we applied was to a customer-facing process; it was our field team
trying to get back to our customer with some information, and we realized that the cycle time was very long, and the reason is there were four functions involved in answering the question and seven different applications being touched, all the way from Excel to ERP to CRM. So obviously, bringing a strategic solution to fix the cycle time and streamline the process was going to take us long, so RPA was a great help. We reduced the cycle time by putting in a robot, and we were able to get back to our sales team in the field in a matter of minutes; what used to take hours was now being responded to in minutes. Now, that doesn't mean the process is perfect, but the unacceptable delay the team faced in the field is gone, pending, you know, streamlining and going into a bigger initiative. Anything you could share, Christine? Coming from a software engineering background, I at least had the tendency to not give enough credit to sales, to marketing, and not even to the customers; we didn't understand the customers, and so we built technology for the sake of technology. We were really fortunate to have some early customers, but I didn't understand how, because I thought that customers should go test and find the best technology out there themselves and just go with it. I had a lot of blind spots on how this world operates. But after I started to visit customers and understand their pain points and their requests, actually watching them use our own technology, because they use it in the real world, that completely transformed my thinking. So I went back to my engineering teams that night, and I told the guys, from this day, I don't ever want to hear "we don't fix bugs" or "we don't do features"; when the customers say you do this, you say thank you, thank you for showing me the light, I will do this. That's what makes us create a better product. [Music]

Published Date : Feb 24 2020



Breaking Analysis: re:Invent 2019: AWS Gears up for Cloud 2.0


 

>> From the SiliconANGLE Media office in Boston, Massachusetts, it's theCUBE. Now here's your host, Dave Vellante. >> Hello everyone and welcome to this week's episode of theCUBE insights, powered by ETR. In this Breaking Analysis, we're going deep into AWS. In a couple of weeks, theCUBE is going to be at the eighth AWS Reinvent, which will be our seventh year of having theCUBE at that show. You know, Reinvent has really become the Super Bowl for enterprise tech innovation. And ahead of the event, what I want to do is talk about the revolution of cloud and the impact that it's having on the industry. And of course, I want to dig into some of the data using the ETR data set. Before I do that, let me first say that cloud 2.0, which is a term that we've been using, is becoming a reality. This is something that John Furrier and I talk about a lot here at theCUBE and at SiliconANGLE. The cloud is not about an incremental transition, it's really about transformation. We're talking here about the end-to-end modernization of the enterprise. The game is changing, and the engine of innovation is really being driven by new architectures, and these architectures are built around a few things: data, machine intelligence, and of course cloud, for scale. We feel like what we are witnessing is the build-out of a massively scalable distributed system. And this system is transforming businesses, and really enabling entirely new companies and business models to emerge. The cloud is the underpinning of this digital revolution, and virtually every industry is going to be disrupted; no industry is safe. All right, let's get right to it. So, the key questions that I want to explore in this session: let's start with the spending patterns. We're going to look at the ETR survey data, and what services are attracting the most action inside cloud, and which vendors are winning? I then want to look at the market share data from a couple of angles.
I'll look at ETR data, and I'll talk about some other market data. Then we're going to drill into some of the services that are critical to innovation; I specifically want to look at databases, in particular analytic data stores. Then I want to look at the data and analytics services, at AI and machine intelligence, and then I want to look at the data around containers and functions, like Lambda, which are very hot right now. Then we're going to share some data on how the cloud is impacting the so-called "old guard." This is a pejorative term that Andy Jassy coined to refer to the legacy enterprise tech providers. Then I want to make some comments about the AWS ecosystem, which is getting a lot of chatter lately. And then I want to share some thoughts on what you can expect this year at Reinvent, and then I'll wrap. So the first data point that I want to show you here really draws on ETR's latest survey of 1,336 respondents. What this chart does is cut the data to show just the cloud sector ranked by net score. Now remember, net score is a measure of spending momentum, okay? So you can see where the action is. At the top, you see Azure Functions and AWS Lambda popping right up. Look at their net scores: they've got net scores of 74% and 71% respectively. You can see Azure overall, this is the overall Azure business, right up there as well, and of course AWS overall, so the AWS responses are right there. Very, very high, but it's dropped a little bit below Azure. We'll talk about that more in a moment. Then you can see VMware Cloud on AWS, which has strong momentum, which is a real positive. You've got Google Cloud Functions, again, functions are very hot right now. OpenShift from Red Hat, GCP is up there, VMware Cloud. Then you've got Alibaba. Alibaba's only got 18 mentions, whereas the others have a much higher shared N, so I'm not going to put too much weight on that. And you can see the other folks as well on that chart.
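Net score, the spending-momentum measure used above, is roughly the share of survey respondents spending more on a vendor minus the share spending less. A sketch of that idea in Python; this is a simplification of ETR's full methodology, and the survey counts are invented for illustration:

```python
# Simplified net score: percentage of respondents increasing spend on a
# vendor minus the percentage decreasing it. ETR's actual methodology has
# more response buckets (adoption, replacement, etc.); this is the rough idea.
def net_score(increasing, flat, decreasing):
    n = increasing + flat + decreasing
    return round(100 * (increasing - decreasing) / n)

# Invented counts: 78 of 100 respondents spending more, 4 spending less.
score = net_score(increasing=78, flat=18, decreasing=4)
```

With numbers like these, a vendor lands in the 70s, which is the neighborhood where Azure Functions and Lambda sit in the survey above; a vendor with nearly as many decliners as expanders would fall toward the red zone.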
But also you can point out the functions. Azure Functions, and services like Lambda, are gaining really a lot of momentum in the marketplace, and I think point to a new mode of compute. What I want to do now is I want to isolate in this chart the big three in cloud, and put them into context with a legacy player, you know, namely IBM. I'm not trying to pick on the legacy guys, but I think it's good for context. So as you can see here, Azure and AWS, they've been neck and neck, battling it out in the last 10 surveys or so. And you can see even Google, somewhat behind, but it's still got pretty strong spending momentum. Now, these figures overall are trending down relative to the expectations earlier in the year. This is something that we've talked about, that spending is reverting back to pre-'18 levels, not falling off a cliff, still solid in the grand scheme of things. So you can see, you know, net scores here are well above 50% for AWS and Azure. Now take a look at IBM. The ETR data shows them in the red zone, with a net score of 16%. That is not a surprise, that they're behind the big three. And I've said many, many times, here's the thing, IBM and Oracle, I'm not showing Oracle here, they're at least in the cloud game. Think about it, HP had a public cloud, they had to tap out. Cisco, they don't have a public cloud. Dell EMC, even VMware, they don't have a public cloud. So at least IBM and Oracle have a cloud play, where they can take their SaaS business and run it, and get vertically integrated and some operating leverage. Okay, I'm going to switch gears a little bit and talk about market share. And we want to focus here on the battle between Azure and AWS. We all know Microsoft is growing faster, but AWS is much larger. And this is something that AWS CEO Andy Jassy takes a lot of time to explain to the analysts, and to the crowd at re:Invent. Let's take a look at what Jassy said last year at re:Invent on this topic, and then we'll come back. 
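The "new mode of compute" that functions services point to is event-driven: instead of provisioning a server, you supply a single handler that the platform invokes per event. A minimal Lambda-style handler in Python might look like the sketch below; the event shape is a hypothetical example, and a real deployment would also need packaging, IAM permissions, and a trigger configured.

```python
# Minimal AWS Lambda-style handler sketch: the platform calls this function
# once per event, so there is no server to provision or manage.
# The event fields below are hypothetical, for illustration only.

def handler(event, context=None):
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"hello, {name}"}

# Locally, the handler can be exercised directly, which is one reason
# functions are easy to test compared with server-based deployments:
print(handler({"name": "cloud 2.0"}))  # {'statusCode': 200, 'body': 'hello, cloud 2.0'}
```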
>> So if you look at the provider who most people think is the second place provider in this space, in their last financials they grew 76% year over year. And you can look at that and say, "Oh, 76% is more than 46%." But if you look at it in reality, that 76% represents about a billion dollars of growth year over year. If you look at the 46% growth of AWS on that much larger base, that represents $2.1 billion of growth year over year. So more than double that. So AWS not only has a significant market segment leadership position in share, but also on an absolute revenue basis is growing meaningfully faster than anybody else. >> Okay so, think about what Jassy said. He was using Q3 data and he said that AWS had a $27 billion run rate business. And if you look at those charts that he showed, it looked like the yellow bar, which was Microsoft, even though they didn't say, you know, "the company that shall not be named," it was about one third the size of AWS. So where would that put Microsoft? Somewhere around $9 billion last year, on kind of an apples-to-apples run rate basis, using that extrapolated market data that Jassy showed. By the way, ironically, this is about what AWS did last quarter, which you can see here on this chart that I'm showing you. You might remember, I showed you this chart in a previous episode of Breaking Analysis. And what it shows is AWS' quarterly revenue on the blue bars, and the growth rate on the right hand axis, that's the red line. And you can see Jassy talked about 46% growth. And you can see that in Q3 last year, and then look how it's moderated. It's 35% in Q3 2019, the last quarter that they announced. So Jassy is right. AWS was growing slower than Microsoft last year, which was growing in the mid-70s. But Microsoft was 59% last quarter, so that trend has continued. That's if you believe Microsoft's numbers, which are really not clean. 
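Jassy's point about absolute versus percentage growth can be checked with back-of-the-envelope arithmetic. The figures below are rough extrapolations implied by the quotes above, not reported financials.

```python
# Back-of-the-envelope check of the growth comparison quoted above:
# 46% growth on AWS's larger base produced about $2.1B of year-over-year
# growth, while 76% growth for the "second place provider" produced ~$1B.

aws_growth_pct = 0.46
aws_growth_abs = 2.1   # $B of year-over-year growth, per the quote
msft_growth_pct = 0.76
msft_growth_abs = 1.0  # $B, "about a billion dollars of growth"

# Implied base a year earlier: absolute growth / growth rate
aws_prior_base = aws_growth_abs / aws_growth_pct    # ~$4.6B
msft_prior_base = msft_growth_abs / msft_growth_pct  # ~$1.3B

print(round(aws_prior_base, 2), round(msft_prior_base, 2))  # 4.57 1.32

# Same logic on annual run rates: a ~$27B AWS run rate with Microsoft at
# roughly one third implies ~$9B, matching the figure cited in the analysis.
print(round(27 / 3))  # 9
```

So the bases implied by the two quotes differ by roughly the same one-third ratio as the run-rate extrapolation, which is why the "$9 billion" estimate hangs together.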
It's hard to say sometimes with all the SaaS in there, and Office 365, LinkedIn, I don't know what else is in there, but we try to parse that out. Regardless, Jassy's point that size matters is still correct. But Microsoft is closing the gap. I talked to the Wikibon team recently, and they think that AWS is going to come in at $35 billion in revenue this year. And they have Microsoft's IaaS business at around $15 billion. So that's 43% of AWS' business, versus 33% at this time last year. So you can see that Microsoft is closing that gap. AWS is still adding $8 billion a year in growth, but Microsoft is definitely catching up. So what does the spending data show? Let's take a look here at the ETR data, and see what they say about market share. Now, remember, in the ETR parlance, market share is a measure of how pervasive a vendor is within the data set. And as you can see here, it maps pretty well to the market estimates that I was just talking about. Although it actually appears in these lines that AWS is widening that lead. But you can see in the net scores, by the way, this is net scores across all sectors, not just cloud computing, so it pulls in the other segments. But nonetheless, you can see Azure has a somewhat higher net score, which indicates stronger spending intentions. So that pretty much fits what we see in the market for the most part. Now it's not all rosy for Microsoft. You know, they are super strong in the ETR data set across the board, and specifically in cloud. So that's important, I don't want to lose sight of that, but I want to share something that Gartner said recently, in its 2019 Magic Quadrant on cloud computing: Microsoft Azure's reliability issues continue to be a challenge for customers, largely as a result of Azure's growing pains. Since September of 2018, Azure has had multiple service-impacting incidents, including significant outages involving Azure Active Directory. 
The nature of many of these outages is such that customers had no controls in order to mitigate the downtime. So, caution is what Gartner said. So despite the great numbers and the fact that Azure is gaining, it's having growing pains. For years I've talked about the economies of scale for AWS due to its automation. I've talked about the company's marginal economics at volume, and you can see it in the firm's operating margins. The question to ask is, is Microsoft running into diseconomies of scale, due to its large installed base, and does it have technical debt? Because it's jamming a large software estate into Azure, and having to preserve the past while trying to innovate for the future. I don't know, and it's hard to tell because Microsoft is so big and so profitable, but it's something that CIOs definitely should keep an eye on. Now, I want to look at some key sectors here and evaluate how AWS is doing in some of the areas where we see real innovation. And I want to start in the all-important database area. Now I'm going to focus here on analytic databases and data warehouses, and I think there are some interesting trends going on here. So this is a cut of the ETR data warehouse segment. Now I've talked about Snowflake in previous episodes of Breaking Analysis, and you can see why. Snowflake has a net score of 71%. They're one of the highest and most interesting newer companies in this space and in the ETR data set. You can see AWS doing very well, and I want to make some comments on both Snowflake and AWS Redshift. But before I do that, look at Oracle and Teradata on this chart. What you see here is the classic innovator's dilemma at play, where AWS and Snowflake, you can see them, they're solidly in the green, and you've got the two legacy players firmly in the red. So I include them as reference points. But I want to come back to Redshift and Snowflake, because I feel like there's something new going on in cloud. 
Where cloud 1.0 was all about IaaS and compute and storage, and throw in some database, there's this new trend emerging that's really driving new workloads. And this data that now sits in the cloud, it's maybe stored in S3, and customers are using data stores like Redshift and Snowflake to get more insights out of that data. They're bringing tools like Databricks into the equation, and really driving a whole new set of workloads that are not just about provisioning infrastructure, but really extracting insights much more quickly from the data and applying it to your business. And for AWS, it's driving tons of compute sales, and customers are getting more value out of their data. Now, here's the interesting thing. Redshift and Snowflake are both best-in-class modern data warehouses, they seem to be coexisting, they're both thriving, you know, why is that? They're both MPP columnar stores, so they've got many similar attributes, but I think what it comes down to really is what I call horses for courses. I don't have time to dig into it today, but when you peel back the onion, what you find is different approaches to things like architecture, security, scaling, different philosophies, pricing, different feature sets. So it really comes down to the best strategic fit, and for now it looks to me like there's room for both platforms. They're both doing very well from a spending momentum perspective. We'll see how that plays out over time. Let's now take a look at the analytics sector. Now here, we're talking about things like Amazon's query services, Elastic MapReduce, Elasticsearch, Kinesis, QuickSight, Glue, streaming, those kinds of tooling. You can see in this chart that AWS is very strong, and it leads Microsoft by a small margin in the ETR data set. Now for comparison, and again, I'm not trying to pick on the legacy players, but I think it's important to provide context, and when it comes to spending momentum, the data doesn't lie. 
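The "columnar" part of the MPP columnar stores mentioned above can be illustrated with a toy sketch: when rows are decomposed into per-column arrays, an analytic aggregate only has to scan the one column it needs. This is a conceptual illustration only, not how Redshift or Snowflake actually implement their storage engines.

```python
# Toy illustration of columnar storage: rows are decomposed into per-column
# arrays, so an aggregate over one column never reads the others.
# Conceptual sketch only; not either product's actual storage format.

rows = [
    {"region": "EMEA", "revenue": 120},
    {"region": "APAC", "revenue": 95},
    {"region": "EMEA", "revenue": 80},
]

# Row layout -> column layout
columns = {key: [r[key] for r in rows] for key in rows[0]}

# An aggregate like SUM(revenue) scans a single contiguous array,
# which is what makes columnar layouts fast for analytic workloads.
total = sum(columns["revenue"])
print(total)  # 295
```

The MPP ("massively parallel processing") part is the other half: each node holds a slice of those column arrays and scans its slice in parallel.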
You can see here, IBM, they've had a sizeable and very impressive set of capabilities in the analytics space, but you can see where the buyers are placing their bets. Now, what I'm showing you in this next chart is a similar view, but this time I'm showing ETR market share for the AI and machine learning segment. So for context I've added IBM Watson. Remember, market share for ETR is a measure of pervasiveness. Not only are AWS and Microsoft battling it out for the top spot, but they've got stronger spending momentum, as you can see by the net scores. Look at Watson, I mean, it's respectable in the ETR data, but it just doesn't have the scale of the top two players. Okay, finally, I want to look at the container space. It's hot, and I want to focus on Lambda from AWS. So what we're showing here is the net scores for Lambda and Amazon's Elastic Container Service. And you can see Lambda, very, very strong. ECS is tapering a little bit, it's showing less momentum over time, but still well over 50% net score. But look at Pivotal Cloud Foundry, they've shown a steady downtrend over time. This underscores the work that VMware and Pat Gelsinger have to do with one of their newest acquisitions. As an aside, this is an opportunity for VMware, which in my opinion, I've said, really needs to get its developer act together, really to drive new innovation. And by the way, Pivotal just had some layoffs, but my understanding is that it was not in engineering, but rather folks that VMware saw as redundant, roles that they already had in place. The bottom line is, Pivotal has been steadily losing momentum in the ETR surveys. But look, a 27% net score is not a disaster by any means. I said on my last Breaking Analysis that if I were Michael Dell, I'd dedicate a thousand engineers to open source, using Pivotal to really appeal to developers, and make his hardware run better on the open source tooling and apps that these thousand build. 
And make his infrastructure programmable. This is how the edge is going to be won. It's not going to be by throwing boxes over the top of the fence, but really bottoms-up by devs. I digress. The last data point that I want to share here is really designed to address the question, how is the cloud impacting what Jassy calls the "old guard?" So this view shows market share, which again is defined by ETR, remember they do the math to measure the pervasiveness of a vendor in their data set, and they call that market share. And I've cut that data by just the cloud spenders. So those buyers spending heavily, and I've isolated on AWS, Azure, and Google Cloud, and how their spending on traditional vendors has changed over time. And I'm picking out Cisco, HPE, Dell EMC, and Oracle. And the story you can see is clear. They came out of the downturn in 2010, the big guys who were holding their breath, and they came up for air and they saw lots of pent-up demand, so they did pretty well. But the cloud has continued to slowly eat away at their share, and their spending momentum, as seen by the net scores in this table, has been affected. But look at Cisco. They actually have quite a strong net score, it's 37%. So to me, by the way, this makes sense. And I think Cisco is in a good position to connect clouds and secure data moving across clouds. But the cloud, its long steady march continues. And we are entering a new era that I think is only going to see greater share gains for the cloud, in my view. By the way, going back to my rant about the edge and programmable infrastructure, and how developers are going to win the edge, Cisco with DevNet is actually in a pretty good position here, and has done a good job. And I think they're one of the few, if not the only, legacy players that is going to figure this out. Now before I close, I want to make a few comments on the ecosystem, and give a glimpse as to what to expect at re:Invent 2019. All right, first the ecosystem. 
There's a lot of chatter, and griping, and concerns around AWS cannibalizing its ecosystem partners. And I think frankly, that concern has merit. You know, when AWS has this insane customer focus, you can pretty much take that to the bank. If a customer wants something and expresses that to AWS, and they see a space to fill where they can leverage that flywheel that they always talk about in adjacent services, AWS is not going to suboptimize its portfolio to protect its partners. It's going to go hard after it. So as a partner of AWS, it's up to you to keep innovating and moving fast. And that's hard, because AWS is probably moving faster than you are. But you know, you can still specialize as a partner, and thrive as a best-of-breed player. I mean, look at the Snowflake example. There are plenty of opportunities out there in security, backup, governance, machine intelligence, workflow, edge, and of course, there's the infamous multi-cloud opportunity. And I say infamous because AWS doesn't use that term. You're not going to see it on the floor of re:Invent this year because it's frankly not allowed. AWS is very controlling over the messaging that it puts out to its customers, and that includes the rules of the ecosystem if you want to go to their show. But you'll hear plenty of side conversations about multi-cloud, and we're certainly going to be talking about it on theCUBE. Is multi-cloud a symptom of multi-vendor? You know what I think on this topic. I think it's more that than it is a strategy. But CIOs are now being asked to clean up the multi-cloud mess, so it does have merit. But it creates complexity, and that means opportunity for partners. So multi-cloud is white space for the ecosystem, as is hybrid, and on-prem connectivity, so partners are hedging their bets, they're supporting multiple clouds, and they're partnering with Azure and Google, and it makes sense to do so. But here's the thing. Cloud 2.0 is getting more complex. 
AI, new workloads, edge, new use cases, machine learning, more APIs, more services, more complicated pricing. These are confusing matters for customers, and partners can help simplify this. As well, thinking about competition with Microsoft, Microsoft is kind of an easy choice if you're already a Microsoft software customer. (murmurs) So partners need AWS, and AWS needs partners to help them deliver solutions, to go to market, and keep it simple. John Furrier says this a lot, that winning in the enterprise requires salesmanship, and AWS partners are a powerful channel, so AWS has to lean on this channel to really create solutions for customers and simplify. Okay, let's talk about what to expect at re:Invent 2019, and I want to start with storage. Jeff Barr put out a blog post announcing a series of new storage offers around block store, new gateways, S3 replication, new Windows file server capabilities, and a stronger emphasis on file storage. Now, most of the world's data is stored on file, and AWS is expanding its portfolio. It started with S3 object back in 2006, and then EBS block store, and now a big emphasis on file services. So I expect to hear a lot about that, and as well, we're going to hear about Outposts. What progress has Amazon made with Outposts? What's the status? What's the vision for Outposts? How does it fit in at the edge? You know, as I just said in my rant earlier, the edge is all about developers, and I like AWS' edge approach. I think AWS has the right perspective. It's very dev-centric. It's bottoms-up from devs, not over the top like many of the box sellers. Now at re:Invent, you're probably going to hear more about unplugging Oracle databases, certainly security is going to be a big focus, as will AI and machine learning. I also expect a lot on transformation of industries. As Microsoft continues to grow in IaaS, expect AWS to somehow try to change the game again. And I'm not sure AWS can win the battle head-on with CIOs. 
Rather, I think AWS is really going to focus on this dual disruption agenda, both within the horizontal technology stack but also within industries. In other words, expect AWS to increasingly focus on enabling industry transformation in different segments, like media, health care, financial services, manufacturing, government, automobiles, telco, virtually every vertical. This dual disruption agenda, both in the tech stack and within industries, is in AWS' DNA because it's in Amazon's DNA. It's driven by Jeff Bezos at the top. Now in closing, I want to stress again, cloud 2.0 is here, and it's getting more complex. The so-called "old guard" is hanging on to its installed base, but in many ways, it's working hard to get simpler. Now are these two domains going to collide together and create an equilibrium between the cloud native wannabes and the cloud native guys? Probably not functionally, but there are a lot of opportunities for the big whales to capitalize on this industry consolidation, and compete by simplifying their experience enough to keep customers hanging around. You know, don't forget, the enterprise business for years has relied on high-touch specials and unique requirements, and that's the wheelhouse for the legacy players, it's not AWS'. And maybe this approach is going to continue to pick away at those opportunities with repeatable and automatable solutions. So this should be really interesting to watch. Stop by theCUBE, come see us at re:Invent, we've got two sets there. This is Dave Vellante, signing out from this episode of CUBE Insights powered by ETR. Thanks for watching, we'll see you next time. (upbeat music)

Published Date : Nov 22 2019

Cristina Pirola, Generali Assicurazioni & Leyla Delic, Coca Cola İçecek | UiPath FORWARD III 2019


 

>> Live from Las Vegas, it's theCUBE, covering UiPath FORWARD Americas 2019. Brought to you by UiPath. >> Hello everyone and welcome to theCUBE's live coverage of UiPath FORWARD. I'm your host, Rebecca Knight, co-hosting alongside Dave Vellante. We are joined by Leyla Delic. She is the chief information and digital officer at Coca-Cola İçecek. Thanks so much for coming on the show. Thank you. Great to be here. Very exciting. And also Cristina Pirola, she is the group RPA lead at Generali. Thank you so much for coming, for inviting me. Thank you. So I want to hear from you both about what your industry is and what your role is. Leyla, let's start with you. Okay, great. Um, so we are one of the largest bottlers within the Coca-Cola system. We produce, distribute and sell Coca-Cola company products. We're operating in around 10 countries across the Middle East and Central Asia, including Pakistan, Syria and Turkey. We are actually born out of Turkey and that's where our central office is. We operate with 26 plants, around 8,500 employees. 
So we have to keep on talking about AI, machine learning algorithms to enrich, uh, uh, the capabilities of basic robotic sell, hand reach, also the Antwerp and automation of processes. You're the CIO and the CDO. Yes. Yes. That's unique. First of all, there's one that's unique too. It's even more unique than a woman has both roles. So what's the reason behind it? So, um, there's definitely a reason behind it. I joined the Coca Cola >>system about a year ago, so I'm just a over a year in the company. The reason actually I wanted to make sure that we highlight the CIO and CTO CDO role together is, um, I want to advocate for all the it organizations to transform and really get into the digital world and get into the world of advanced technologies, become strategic business partners. Get out of the kitchen, I call it kitchen kitchen, it, you know, get out of the managing of data centers or cloud and um, just the core foundational systems and applications. Get into the advanced technology, understand the business, gain business acumen and deliver solutions based on business needs. So to highlight that, I want to make sure that I hold the role of both and I'm able to be advocate of both worlds. Cause digital without it support is not able to accomplish what they need to accomplish and it needs to get into more of the digital space. And Christina, as the RPA, you write bots, you evangelize the organization. >>Um, mostly the second. So in generally we have a, a very, uh, so, uh, sort of ivory the organization. So for something we are very decentralized, for example, for the developing of robots or the deploying for the action, the operational stuff and so on. Uh, but uh, for some stuff like a guidelines, uh, uh, risk framework to ensure that robots can do their work in the right way with notice to all for the business processes, uh, for this stuff before guidelines, framework, best practice sharing. We are a central centralized, we, we try to be centralized. 
So, uh, my role is to try to collect is to collect and not try and super lat, uh, best practices and share with you in the companies chair, uh, um, the best use cases. And, uh, also tried to gather what are the main concerns, what are the difficulties in order to a facilitator and to boost smarter process automation of the option. So >>Laila, you are up on the main stage this morning. You, I Pat highlighted Coca Cola itchy as a, as a customer that is embraced automation, embrace the UI pass solution. So tell us a little bit about the challenges you are facing and then why you chose I a UI path. So as I joined the company, uh, I introduced a very strong digital strategy that required a lot of change and it's within a company that has been very successfully operating all these years and doing pretty much know what to do very well. And all of a sudden with digital we are starting to disrupt the, are trying to say, Hey, we've got to change the way, do some of the things. Um, so belief in digital and belief that it can really bring efficiency and outcomes was very important. And I needed a quick win. I needed to have a technology or a solution or an outcome that I would generate very quickly and show to the whole organization that this can be done and we can do this as Coca-Cola. TJ. >>So that was, that was RPA, that was our PA for this fascinates me because you're an incumbent business, been around for a long time. you're a bottler and distributor, right? So yeah, processes are around the bottling plants and the distribution system. Yes. And now you're transforming into a digital business. Yes. I'll put data at your core. Totally not start his daytime customer. Okay. So describe the difference between the traditional business and what it looks like when you've transformed, particularly from a data perspective. And then I want to understand what role RPA plays. 
So we are definitely a very data rich company, however, to call ourselves data rich and to call it a strategic asset, I first need to capture and control my data and I have to treat it like a strategic asset. So that is a huge transformation. The second, once you treat it as an asset, how do you generate more insights? >>And I call this augmenting the gut feeling. I have an amazing gut feeling in the company. How do I augment that with data and provide our, this is partners and then our customers and our suppliers and some of the information. And then obviously future maturity level is, you know, shared economy and data monetization, et cetera. So that's how I describe within the company. And then assets, other assets like our plants and coolers cooler, we call it cooler, you know, where do you actually see all our products? They are called, they are visible and they are available, but they are also in that set where I can turn them into a digital cooler and I can do so much more with the cooler that standing. And I recently, in one of our leadership meetings I said we have as many coolers as the um, population on the fishy Island, which is close to 1 million. >>So just imagine in this new world, in this digital era, everything that you can do by just having a cooler, 1 million coolers present out there on the street, I can serve the consumers, I can serve customers with very different information. So that's kind of what I mean by turning the business into a digital business. So that's an awesome story. By the way, how does RPA fit into that vision? RPA is everywhere in division. So I said when I started the journey, uh, any digital journey has some Muslim battles for me. There are four must win battles. I need to get certain things right in it, in the, and that was one, one of the Mustin battles was alteration. So we have to create efficiency, we have to optimize, we have to streamline. And we said automation first. 
Um, and we started with, I call it robotics and automation. >>And I agree with what you said, Christina. It's more than just robots. It's actually a strategic application. It could be a good old ERP. It's the RPA, it's AI, it's all the other technologies that are out there that they bring the two of them brings. So how do you create this end to end solution using all the trends, technologies to create optimization? Uh, our goal was how do we get back to our customer much faster. We had so many customer facing processes and they're going to be there forever. They are a very customer centric customer into company obviously. So how do I get back to my customer faster? How do I make my employees just happy? They were working on so many things would be until midnight over time during weekends. How do I take that away from them? So we called it lifting the weight of the shoulders and giving you a new capabilities. So again, augmentation and then giving them that space. So we had uh, three of my employees upskilled and reskilled themselves. They became a developers in the robotics space, a couple of fire functional, um, colleagues are now reskilling themselves because now they have the time to reskill. More importantly, they have the time to actually leverage their expertise and they are so much more motivated. The engagement, the employee engagement is increasing. So that's how we are positioning RPA. Pristina ICU >>nodding a lot, your head too. A lot of what Layla is saying. I'm wondering if you can talk to about any best practices that have emerged as you've implemented RPA at Generali to what you've learned. Yes, for sure. Um, we have a lot of processes automated, uh, all around the group. Uh, but we are not, we have not reached our maximum or, uh, benefits, uh, gaining. So what we need to do right now is to try to boost the smart process automation, uh, via analyzing the issue around value, Cena. 
So each business area of the value chain because currently we have countries that has, that have a different level of maturity. So, so some countries are at the very beginning and we have to help them with best practice sharings with a huge case, successful use cases. And we are, uh, we have a lot of help from parts into, in this because locally and who I Potter as a, a very strong presence and is very powerful in doing that. >>And, uh, now, uh, our next mouth are very focused on try to, um, uh, deep dive, the vertical, our area of the issue around value chain and identify which are the processes inside them are best to automated. Uh, uh, Basinger. Uh, these activities are not so you, I part, we'd, his experience has created a heat mapper, value chain Heath mapper. And so it's given up as some advice where to focus our strengths, our hand energy in automating. And I think that this is a very huge, uh, uh, support that you are UI parties given us. So it's not just a matter of, okay, let's start, uh, uh, do some, uh, process assessment in order to identify which processes are the best candidates to be automated. But, uh, we have, uh, how our back, uh, us. So we, we are, uh, we have the backing of UI pass saying it's better to do that and automate in depth, uh, processes of that, but Oh, the value chain. So we are starting a program to do that with all the countries or the vertical area of the country. So, and I think that this could really bring a, uh, high benefits and can, uh, uh, drive us to, uh, really having a scaling up in using a smart process, automation and UI. But you a bot ecosystem not only are, so >>one of the nice things about RPA is you can take the software robots and apply them to an existing process. A lot of times changing processes and a lot of times almost always changing processes is painful. However, we've talked to some customers that have said by applying RPA to our business, it's exposed some really bad processes. 
Have you experienced that, and can you maybe share that experience? >> Absolutely. For us, one of the initial robots was applied to a customer-facing process: our field team trying to get back to a customer with some information. We realized that the cycle time was very long, and the reason is that four functions were involved in answering the question, and seven different applications were being touched, all the way from Excel to ERP to CRM. Bringing in a strategic solution to fix the cycle time and streamline the process was going to take us a long time, so RPA was a great help. We reduced the cycle time by putting in a robot, and we were able to get back to our sales team in the field in a matter of minutes; what used to take hours was now being responded to in minutes. Now, that doesn't mean the process is perfect, but that's our next step. So we created value for our customer and our sales team in the field before streamlining and going into bigger initiatives. >> Anything you could share, Cristina? >> Yes. It is not always necessary to optimize a process before automating it; sometimes it's better to automate it as-is, because even a non-optimized process can bring value once automated. Let me share an example: if you have to migrate some data, that is obviously a one-shot activity, and with a robot you can do it in a very short time. Maybe it's not the best process to automate, but it can be useful all the same. So it's always a matter of understanding the costs and the benefits. And sometimes RPA is very quick to implement and can also bring a lot of savings, instead of integrating, instead of doing more complex things.
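Cristina's cost/benefit point, that automating a process as-is can pay off sooner than a full integration project, reduces to a payback comparison. All figures in this sketch are invented for illustration, not numbers from either company:

```python
# Compare "automate as-is with RPA" against "integrate and streamline first"
# on simple payback time. The costs and savings are invented illustrative
# numbers.

def payback_months(build_cost, monthly_saving):
    """Months until cumulative savings cover the build cost."""
    return build_cost / monthly_saving

options = {
    "RPA on the as-is process": (20_000, 4_000),   # cheap, quick, partial win
    "full system integration": (250_000, 9_000),   # bigger win, long payback
}

for name, (cost, saving) in options.items():
    print(f"{name}: payback in {payback_months(cost, saving):.1f} months")
```

The robot is not the end state; as Leyla notes, it buys time and value until the streamlined process arrives.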
>> Another thing that's important to take into account is that after having automated all the low-hanging fruit, the processes with low cost, low complexity, and high benefits, a phase starts where it's necessary to understand how to automate the end-to-end processes. It happens in some of our countries that this second phase is very difficult, because the situation is that you have a lot of very fragmented processes. So before automating, it is necessary to apply operational-efficiency methodologies, Lean Six Sigma or business process re-engineering, and then automate. It's a longer trip, and our aim as group head office in Generali is to provide these kinds of methodologies and best practices to countries at every level of maturity. >> So finally, what has the employee response been? You're talking a lot about streamlining, getting rid of these tedious tasks that took forever; how are employees reacting to the implementation? >> We actually launched and announced RPA, robotics and automation, with a hackathon in our company. We invited 40 colleagues from various functions; everybody from the business was there, and they participated in gathering ideas and prioritizing what matters most to the company. We looked at the customer, we looked at compliance, we looked at the employee, and during the hackathon the UiPath team helped us go live with one of the robots. They were mesmerized; they couldn't believe that this could happen.
I think that's where we engaged them, and going forward, everyone who generated an idea was part of building the robots, so they continued to be engaged. We allowed them to name the robots, so they started naming them, and once the robots went live we literally had some of our teams dancing from happiness; I think that said it all. That was the strongest voice of our business partners, and we published that video. So our business partners became our advocates, and that's really how we launched robotics and automation within CCI. We have so many advocates right now; our business partners are coming to us with more use cases, and they are sharing their stories with the rest of the system within Coca-Cola and with the group we are part of locally in Turkey. So now we have a buzz going on in the system. >> Yes. And in Generali, at the beginning we faced some fears in our employees: fear of losing their jobs, fear of not being able to use this kind of technology. But with the help of HR, which is driving a huge program of upskilling and reskilling of people, nowadays end users are very happy to use robotics, because when they realize it can really help with their very boring, low-value activities, they are very happy to join the program. It was a journey with the employees, though, to make them understand that it's not something that threatens their jobs. So, at least in the Generali group, we are designing this employee journey to give people more awareness about robotics, so they're not scared of it. >> Leyla and Cristina, thank you both so much for coming on theCUBE. It was wonderful. >> Thank you very much. >> I'm Rebecca Knight, for Dave Vellante.
Please stay tuned for more of theCUBE's live coverage of UiPath Forward III.

Published Date : Oct 15 2019


Shekar Ayyar, VMware | VMworld 2019


 

>> Live from San Francisco, celebrating 10 years of high-tech coverage, it's theCUBE, covering VMworld 2019. Brought to you by VMware and its ecosystem partners. >> Hey, welcome back, we're live here in Moscone North for theCUBE's VMworld 2019 coverage. Shekar Ayyar is here, EVP and general manager of Telco and Edge Cloud at VMware. Thanks for coming on. >> Thanks for having me. >> I know you're super busy and we don't have a lot of time, so let's get right to it. 5G is a big part of the keynote discussion; it's going to enable a whole bunch of new capabilities, though we're still in the pregame show. Also telco and edge computing; Michael Dell said edge is the future. These are two emerging areas for you guys. What's the positioning? What's the update? >> No, absolutely. If you look at telecom infrastructure, for the longest time telcos have played a role just as pure, basic connectivity providers. With 5G coming on board, they finally have an opportunity to break out of that and redefine the cloud of the future. So for us, the big opportunity around 5G is not just the better provisioning of higher-bandwidth services to consumers for voice and data, but the whole set of new enterprise services that can be provided on top of this 5G network. And in order to do that, you really need to go in with a virtualized telco cloud architecture underneath. So we are working with carriers globally now, preparing them for 5G with an architecture that's going to help them deploy new services faster, for both their consumer and their enterprise businesses. >> You're going to be the white knight, so to speak, for these telcos, because they've been struggling for years with over-the-top providers and any kind of differentiated services, even in the network layer. >> Exactly. >> They have tons of rack-and-stack machinery, so they're well stacked up in terms of compute and storage. Also connectivity to the edge.
That's the backhaul. So you have backhaul, which is connectivity; companies that have massive expertise and scale, but that fumble in operating cloud-native... >> Right, and not just that; I also think that having the idea of an application platform allows them to go and deploy services faster, and then decide whether they're just going to play at the network connectivity level, at the application tier, or at a full SaaS tier. These are all options that are open to them now with this notion of telco 5G coupled with an NFV and telco cloud infrastructure underneath. Never before have they had the option of doing that, and this is now open to them. >> And cloud native is their greenfield for apps, for having applications on top of it. >> Exactly; icing on the cake, right? Exactly. And so they're all looking at their core architectures, and then potentially their radio architectures, now all being opened up to deploying new services that are much faster to provision, and then extending that to edge. >> 5G is deploying, so we know it's out there, but it's pregame, as Pat Gelsinger said; not even an inning yet, in the metaphor of baseball innings. I've got to ask: I get 5G on my phone, but that's fake, I know it; they did that with 4G too. >> The E stands for evolution, which is coming soon. >> That's vaporware in telco language. The surface area is going to get radically bigger with this capability, and security is going to have to be baked in. This is the number-one concern for IoT, and more importantly industrial IoT; we've been reporting on it at siliconangle.com. This is a national security issue, because we're under cyber attack: towns getting locked out with ransomware, critical infrastructure exposed. We're a free country and we want to be free; we don't lock down. So how do you build security into this new promiscuous landscape that is the IoT edge? Because you want to have no perimeter.
You want the benefits of cloud, but one hole and malware is in there; one taken-over physical device could cost lives. >> Yeah, it's a big concern. >> What are your thoughts? >> So I think there are two ways of looking at it. One is the way you looked at it, in terms of the security perimeter expanding, and then us making sure that we have the right level of infrastructure security baked in, to enable this to be an easier, more manageable security architecture. This is sort of the pitch you've heard from VMware, even in the context of our acquisition of Carbon Black, and how we're thinking about baking security into the infrastructure. The other way of looking at it is this: if you think about some of the concerns around providers of telecom infrastructure today, and how there might or might not be security back doors, that is happening in today's hardware infrastructure. So, in fact, I would argue that a software-defined architecture actually ends up providing you greater levels of security, because what you now have is the option of running all of these network functions as secured software workloads, in a policy envelope that you can introspect. And then you can decide what kind of security you want to deploy on what kind of workload. >> That's an innovative approach, but it doesn't change much, really, from an infrastructure standpoint, does it? Or does it? >> No, it does. Because now, instead of having a hardware box where you have to worry; I mean, if it's a closed hardware box and you don't quite know what is happening in there, the question is: is that more secure than infrastructure running software that you can actually introspect? I would argue that the software-defined approach is more secure than a hardware box that you don't know. >> I would buy that premise. Certainly we know the supply chain is a concern; you know, the speculation around Super Micro, which never was proved.
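The "policy envelope" idea, network functions running as introspectable software workloads with security decided per workload, can be sketched in a few lines. The workload attributes and control names below are invented for illustration and are not VMware's actual policy model:

```python
# Illustrative sketch of per-workload security policy: network functions run
# as software workloads carrying declared attributes, so an operator can
# introspect them and derive what security controls to apply to each one.
# All attribute and control names here are hypothetical.

WORKLOADS = [
    {"name": "packet-core", "exposure": "internal", "handles_pii": True},
    {"name": "edge-cdn",    "exposure": "public",   "handles_pii": False},
    {"name": "billing-vnf", "exposure": "internal", "handles_pii": True},
]

def security_policy(workload):
    """Derive a security posture from a workload's declared attributes."""
    controls = ["signed-image-only"]          # baseline for every workload
    if workload["exposure"] == "public":
        controls.append("inline-ids")         # extra scrutiny at the edge
    if workload["handles_pii"]:
        controls.append("encrypt-at-rest")    # protect sensitive data
    return controls

for w in WORKLOADS:
    print(w["name"], "->", security_policy(w))
```

The point of the software-defined argument is that this decision is made over declared, inspectable metadata rather than inside a closed hardware box.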
>> It doesn't matter who the vendor is or what the country is; it really is a concern in terms of not being able to introspect what is going on inside. >> If I'm an IT shop, I'm running VMware and I want developers. So now you're going to telcos and revitalizing their business model; they can roll out apps. What do you see as the connective tissue between them? >> Think about it: if you go to a telco, we look at really three stakeholders in there. One is IT; the second is their B2B, or enterprise-facing, business; and the third is their core and access network, under the CTO. We now have a value proposition of a uniform architecture across all three stakeholders, with a uniform ability to create applications and drop them on top of each of these infrastructures, and with the ability to manage and secure them, again, in a uniform way. Not just that, but also to make this work well with other cloud infrastructures: private, hyperscale public, as well as edge. >> That's table stakes. You have to do that. These guys have to operate whatever... >> Well, it is, but it's not; if you think about what the infrastructure of a telco today is, it's far from that, because it's sort of a closed environment. You can't access anything in a telco environment in order to go build an application to it, and it does not resemble anything like a cloud. >> You could enable telcos; I'm just connecting the dots here in real time on theCUBE. If I'm a telco, hell, I'll take that VMware-on-Dell-EMC model: make me a cloud, and I'll sell cloud services to markets. >> It is, actually; it's a very important part of our business model, because most telcos would not move their own infrastructure, from a network standpoint, onto a public cloud.
But they are eagerly awaiting the ability to operate their own network as a cloud, and if they can have somebody manage that for them, then that is very much within the... >> You're enabling an increase in the number of cloud service providers, potentially changing the makeup of the telco tier-one, tier-two, tier-three landscape, pretty much. >> Potentially. I mean, it's taking an existing operator, having them operate in a more agile way, and potentially creating a new form of cloud service provider. >> Telcos wouldn't move into the public cloud because they want control. And the cost; is that right, or is it... >> Mostly control; it's not about cost. It's about keeping control of what is your core network, for a packet core or for a radio network. And there is also an angle around competition: I think telcos are worried about the Amazons of the world and the Azures of the world potentially becoming service providers themselves. >> And that's what I wanted to ask you about: the business impact of all this. The cost per bit is coming down, and the amount of data is increasing faster. You've got over-the-top providers just picking off the telcos; telcos can't compete, their infrastructures are so hardened. Will this all change that? >> Absolutely. I think it has the potential to change all that. I don't think all the telcos will take advantage of it; some of them might end up being more traditional and sticking to where they were. But for those that are willing to make the leap: as an example, Vodafone is a customer that has actually gone in with this architecture with us. AT&T is working with us with the VeloCloud software from VMware, bringing a new form of branch connectivity through SD-WAN. So these are all examples of telcos that are actually leading the
Well, it's either because they're protected by their local government or they're going to go out of business. No, I would >> agree. I mean, it's sort of silly from our standpoint to be talking about five G and not thinking about this as the architecture for five, right? I mean, if you only focus on radio waves and your wireless network that's like a part of the problem, but you really need to have the ability to deploy these agile service's. Otherwise, you could get killed by >> the O. T. T. So how do you compete against the competition? What's the business plan that you have? C. Five G? We see that in the horizon that's evolving its evolution, so to speak. Pun intended on edge is certainly very relevant for enterprises, whether it's manufacturing or industrial or just people. Yeah, >> I'd say there are two things. One is a CZ. I'm sure you heard from folks at GM, where our vision is this notion of any any anywhere. We've talked about any cloud at any application that any device. So that becomes one of the strongest different chaining factors in terms of what V Amir can bring. Tow any of our customers compared to the competition, right? Nobody can actually make it really across these dimensions. If you then take that architecture and use that to deploy a telco cloud, we're now making investments that are telco specific that allow the tailcoat than take this and make the most out of it. As an example, we're investing in open stack we're investing in container ization. We just bought a company called Johanna and Johanna essentially allows the operator to go and provide metrics from their radio access networks. Use at that to train a learning engine and then feed that back so that the operator can tune their network to get like fewer dropped calls in the region. So if you combined technology like that with this, any cloud infrastructure that we have underneath that that's the best in class deployment methodology for any. Tell Cho to deploy >> five. 
Your business model metrics for you internally is get Maur deployments. What stage of development five G certainly is in a certain stage, but you know, edges there. Where is the Progress bar? If you're the kind of oh, >> it's actually mold phenomenally. I mean, every time we have conversations like this, we're moving about further in terms off. How many carriers are deploying on via mare on a telco cloud Architecture? How many subscribers are basically being serviced by an architectural like this? And then how many network functions are being deployed? Two of'em air architecture. So we are over 100 carriers now we are over. We have about 800 million subscribers, or so that about globally are being serviced by a V M Air supported network. On then, we have essentially over 120 network functions that >> are operating on top of you. Usually bring in all the same stuff that's announced that the show that stuff's gonna fold into the operating platform or Joe Chuckles have different requirements. Off course. It's >> both. We take the best of what is there from the sort of overall vehement factory and then as a team. My team then builds other widgets on top that are telco specific. >> How big is your your tam up Terry for you? >> Well, so the best way to look at it as telcos globally spend about a trillion dollars in capital investment and then probably to X that in terms of their operating expenditure over the course off all of the things that they do right? And out of that, I would say probably a tent off that. So if you take about $100 billion opportunity, opens itself up toe infrastructure investment in terms off the kinds of things that we're talking about now, they're not gonna move from like 0 200 of course. So if you take some period of time, I would say good subset off that $100 billion opportunity is gonna open itself up >> to it. This kind of business cases, eliminating that two x factor, at least reducing it. Is that exactly? 
That's not just that Service is that's, >> ah, cost reduction alternative. But then you have the ability to go deploy. Service is faster, so it's really a combination off both sort of carrot and the stick, right? I mean, the character here is the ability to go monetize More new service is with five G faster. The stick is that if you don't do it, Ortiz will get there faster and your costs off. Deploying your simple service is will increase his >> telcos, in your opinion, have what they have to do to get the DNA chops to actually be able to compete with the over to top OT T providers and be more agile. I mean, it's obviously sort of new skills that they have to bring in a new talent. Yeah, >> well, first and foremost, they need toe get to a point where their infrastructure is agile and they get into a business model off knowing how to monetize that agile infrastructure. So, for example, they could offer network as a service on a consume as you go basis. They could offer a platform as a service on top off that network in order for or titties to go build applications so they can do Rev shares with the forties. Or they could have offer. Full service is where they could go in and say, We are the conferencing provider for videoconferencing for enterprises. I mean, these are all models that >> the great conversation love to do. Your Palo Alto? Yes. Have you in our studio want to do more of a deep dive? We love the serious, super provocative, and it's important Final question for you. Though Pat Sr here on the Cube, lay asked him, Look back in the past 10 years. Yeah, look back in the next 10 years. What waves should everyone be riding? He said three things that working security and kubernetes humans being number one actually promoting convinced everyone for the ride, for obvious reasons, clouded. I get that, but networking Yeah, that's your world. That's changing. 
Which which events do you go to where you meet your audience out there in the telco because networking is a telco fundamental thing. Sure moving packets around. This is a big thing, >> eh? So far, operator networking related stuff, I would say. I mean the biggest shows that for us would be Mobile World Congress as an example, right? It's where many operators are. But I would also say that when we do our own events like this is the ember. But the movie forums in in Asia packers an example. A lot of the telco conversations I find they are best done one on one before. Yeah, the forums are our forums, but we will goto have one on one conversations or small group conversations >> with our telco customers. Locals Shakaar Thanks for spending. You get a hard stop. Very busy. >> Thank you. Thanks for having me >> here, Sugar Yaar, Who's here inside the Cube bringing down five G, which is still pregame. A few winning something first thing is gonna come up soon, but edges super hot. A lot of telco customers be back with more live coverage of the emerald after this short break

Published Date : Aug 27 2019


Mark Clare, AstraZeneca & Glenn Finch, IBM | IBM CDO Summit 2019


 

>> Live from San Francisco, California, it's theCUBE, covering the IBM Chief Data Officer Summit. Brought to you by IBM. >> We're back at the IBM CDO conference at Fisherman's Wharf in San Francisco. You're watching theCUBE, the leader in live tech coverage. My name is Dave Vellante. Glenn Finch is here, the global leader of big data and analytics at IBM, and we're pleased to have Mark Clare, the head of data enablement at AstraZeneca. Gentlemen, welcome to theCUBE, thanks for coming on. Mark, I'm going to start with this title: head of data enablement. That's a title I've never heard before, and I've heard many thousands of titles on theCUBE. What is that all about? >> Well, I think the credit goes to some of the executives at AstraZeneca when they recruited me. I've been a chief data officer at several major financial institutions, both in the U.S. and in Europe. AstraZeneca wanted to focus on how we actually enable our science areas and our businesses, so it's not unlike a traditional CDO role, but we focus a lot more on what the enabling functions or processes would be. >> So it sounds like driving business value is really the main thrust. >> I've always looked at this role as three functions: value, risk, and cost. In any CDO role you have to look at all three; you'd be slighting it if you didn't. With this title, obviously, we're looking quite a bit at the value we can drive across the firm and how to leverage our data in a different way. >> I love that, because you can quantify all three. All right, Glenn, you're the host of this event, and I loved that little presentation you gave. For those who didn't see it: you gave us pay stubs, then you gave us a website and said, take a picture of the pay stub and upload it, and then you showed how you're working with your clients to actually digitize that and compress all kinds of things.
Time to mortgage origination, time to decision. So explain that a little bit. What's the tech behind that, and how are people using it? >> You know, for three decades we've had this OCR technology, where you take a piece of paper, you tell the machine what's on the paper and what the zone coordinates are, and you feed it in and hope and pray that it didn't read anything wrong, that the form didn't change, anything like that. That's the way we've lived for three decades. With cognitive and AI, the machine reads things like the human eye reads things. So you put the page in, and the machine comes back and says: hey, is this an invoice number? Hey, is this a Social Security number? That's how you train it, as compared to telling it, here's what it is. So we use this cognitive digitization capability to grab data that's locked in documents, and then you bring it back to the process so that you can digitally re-imagine the process. Now, there's been a lot of use of robotics and things like that: taking existing processes and making them incrementally better. This says: look, you now have the data of the process, you can re-imagine it. In fact, the CEO of our client ADP said, look, I want you to make me a Netflix, not a Blockbuster, right? So it's a mind shift to say: we'll use this data, we'll read it with AI, we'll digitally re-imagine the process. And it usually cuts like 70 or 80% of the cycle time and 50 to 75% of the cost. I mean, it's pretty groundbreaking when you see it. >> So, Mark, as head of data enablement, you hear something like that, and you're not myopically focused on one little use case. You're taking a big-picture view, doing strategy and trying to develop broader business cases for the organization. But when you see an example like that, and the many examples out there, I'm sure the light bulbs go off.
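The contrast Glenn draws, zonal OCR versus a model that reads labels the way a person does, can be illustrated with a deliberately simple label-driven extractor. Real cognitive extraction uses trained models over page images; this regex version over already-recognized text only shows the label-based idea, and the field patterns are assumptions, not IBM's actual stack:

```python
# Minimal sketch of label-driven field extraction, in contrast to zonal OCR:
# instead of fixed page coordinates, find a value by the label next to it,
# so a moved or redesigned form still parses. Patterns are illustrative.
import re

LABELS = {
    "invoice_number": r"invoice\s*(?:no\.?|number)\s*[:#]?\s*(\S+)",
    "ssn": r"(?:ssn|social security)\s*[:#]?\s*(\d{3}-\d{2}-\d{4})",
}

def extract_fields(text):
    """Return {field: value} for every label found anywhere in the text."""
    found = {}
    for field, pattern in LABELS.items():
        m = re.search(pattern, text, flags=re.IGNORECASE)
        if m:
            found[field] = m.group(1)
    return found

page = "ACME Corp\nInvoice Number: INV-4421\nSSN: 123-45-6789\nTotal: $98.10"
print(extract_fields(page))
```

Because the extractor keys on labels rather than coordinates, the "form changed" failure mode of three decades of zonal OCR largely disappears; a trained model generalizes this same idea beyond exact label strings.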
So... >> I wrote probably ten use cases down while Glenn was talking. >> So you do get tactical. Okay, but where do you start when you're trying to solve these problems? >> Well, I look at Glenn's example. About five and a half years ago, I had gone to a global financial services firm, obviously having scale across dozens of countries, and I had one simple request to Glenn's team, as well as a number of other technology companies: I want cognitive intelligence for our data, because the processes we'd run for 20 years just wouldn't scale, not at speed, across many different languages and cultures. And now, five and a half years later, we have the beginnings of what I would call real technology opportunities. When I asked Glenn that question, he was probably the only one that didn't think I had horns coming out of my head, that I was crazy. I mean, some of the leading technology firms thought I was crazy asking for cognitive data management capabilities, and here we are five and a half years later, and we're seeing AI applied not just on the front end of analytics, but back in the back end of the data management processes themselves, starting to automate. So look, there's a concept now coming out, DataOps. Think of what DevOps is; within our data management processes, DataOps is bringing cognitive capabilities to every process step, and the question is what level of automation we can achieve. Because, you know, for a typical data science experiment, 80 to 90% of the work is data engineering. If I can automate that through a DataOps process, then I can get to insight much faster, and I can scale a lot more opportunities than if I had to do it manually. So I look at presentations like that and I think, you know, in every aspect of our business, where else could we apply it?
You talk about data scientist spending his or her time just cleaning the wrangling data, All the all the not fun stuff exactly plugging in cables back in the infrastructure date. >> You're seeing horror stories right now. I heard from a major academic institution. A client came to them and their data scientists. They had spent several years building. We're spending 99% of their time trying to cleanse and prep data. They were spend 90% cleansing and prepping, and of the remaining 10% 90% of that fixing it where they fix it wrong and the first time so they had 1% of their job doing their job. So this is a huge opportunity. You can start automating more of that and actually refocusing data science on data >> science. So you've been a chief data officer number of financial institutions. You've got this kind of cool title now, which touches on some of the things a CDO might do and your technical. We got a technical background. So when you look a lot of the what Ginny Rometty calls incumbents, call them incumbent Disruptors two years ago at Ivy and think they've got data that has been hardened, you know, in all these projects and use cases and it's locked and people talk about the silos, part of your role is to figure out Okay, how do we get that data out? Leverage. It put it at the core. Is that is that fair? >> Well, and I'm gonna stay away from the word core cause to make core Kenan for kind of legacy processes of building a single repositories single warehouse, which is very time consuming. So I think I can I leave it where it is, but find a wayto to unify it. >> Not physically, exactly what I say. Corny, but actually the court, that's what we need >> to think about is how to do this logically and cream or of Ah unification approach that has speed and agility with it versus the old physical approaches, which took time. And resource is >> so That's a that's a computer science problem that people have been trying to solve for years. 
Decentralized, distributed data architectures, right? And why is it that we're now able to tap it? >> I think it's a perfect storm of AI, of cloud and cloud native, of IoT, because for IoT to be successful you need a fabric that can connect millions of devices or millions of sensors. So pair those three with the investment big data brought in the last seven or eight years. And big data, to me: initially, when I started talking to companies in the Valley ten years ago, in the early days of Hadoop, what I saw were companies, and this could be almost any of the digital companies in the Valley, using technology to be more agile. They were doing agile data science before we called it data science; MapReduce and Hadoop were almost an afterthought, just a mechanism to facilitate agility and speed. And so if you look at how we built all the way up to today and the convergence of all these new technologies, it's a perfect storm to actually innovate differently. >> Well, what was profound about MapReduce and Hadoop was: leave the data where it is and ship five megabytes of code to a petabyte of data. And you bring up a good point: we've now spent ten years leveraging that at a much lower cost, and you've got the cloud now for scale, and now machine intelligence comes in that you can apply to the data. Because, as Bob Picciano once told me, data's plentiful, insights aren't. Amen to that. So okay, this is a really interesting discussion. You guys have known each other for a couple of decades. How do you work together to solve problems? What is that conversation like? Do you want to start? >> So, first of all, we've never worked together on solving small problems, commodity problems. We would usually tackle something that someone would say would not be possible. Normally Mark is a change agent wherever he goes.
And so he usually goes to a place that wants to fix something or change something in an abnormally short amount of time for an abnormally small amount of money, right? So it's strange, but we always find that space together. Mark is very judicious about using us as a services firm to help accelerate those things, but then also we build in a plan to transition us away and transition him into full ownership. So we usually work together to jump-start one of these wicked-hard, wicked-cool things that nobody else has done. >> People hate you at first, then they love you. >> At one institution, I said, okay, we're going to do a four-step plan. I'm going to bring the consultants in day one while we find talent internally and recruit talent externally; those are phases one and two, in parallel. And then we're going to train our talent as we find them, and Glenn's team will do knowledge transfer, and by phase four we're on our own. And you know, that's a model I've done successfully in several organizations. People hated it at first, because they're not doing it themselves, but they may not have the experience and the skills, and I think as soon as you show your staff you're willing to invest in them and give them the time and exposure, the conversation changes. But it's always a little awkward at first; I've run into heavy attrition at some organizations early on as we build the organization. But in the one instance that Glenn was referring to, we came in and they had a 12-to-15-year plan, and the CIO looked at me and said, I'll give you two years. I'm a bad negotiator; I got three years out of it, and I got a business case approved by the CEO a week later. It was a significant-sized business case, approved in five minutes; I didn't have to go back a second or third time. But we said, we're going to do it in three years; here's how we're going to scale an organization.
We scaled a more-than-1,000-person organization in three years of talent, but we did it in a planned way, and in that particular organization, probably a year and a half in, I had a global map of every data and analytics role I needed, and I could tell you where in the US they sat, with what competitors, in what industry, and where in India they sat and in what industry, and when we needed them, we went out and recruited. But it takes time to build that. And you know, in any period I've worked, because I've done this 20-plus years, the talent changes, the locations change somewhat, but it's always been a challenge to find it. >> I guess it's good to have a deadline. Now, you did not take the chief data officer title in your current position. Explain that. What's your point of view on that role, how it's evolved, and how it's maybe being used in ways that don't quite fit? >> I mean, I think that with the CDO, in the early days there wasn't a definition. As a matter of fact, every time I get a recruiter calling me, "we have a great CDO role for you," the first thing I ask them is, how would you define what you mean by CDO? Because I've never seen it defined the same way in two companies; it's just that way. But I think that the CDO, regardless of institution, has the responsibility, end to end, to make sure there's an end-to-end framework from strategy to execution, including all of the governance and compliance components, and that you have ownership of each piece in the organization. The CDO in most companies doesn't own all of that, but I think they have a responsibility for it, and in too many organizations that hasn't occurred. So you always find gaps in each organization, somewhere between risk, cost, and value, in terms of how the organization's driving data. And in my current role, like I said, I wanted to focus.
We want the focus to really be on how we're enabling, and I may be enabling from a risk and compliance standpoint just as much as I'm enabling a growth perspective on the business, or cost management and cost reduction. We have been successful in several programs self-funding data programs for multiple years by finding cost savings. I've gone into several organizations that had a decade of merger after merger, and data's an afterthought in almost any merger. I mean, there's a data silos session tomorrow; it'd be interesting to sit through that, because I've found that data is the afterthought in a lot of mergers. But yet I knew of one large healthcare company that made data core to all of their acquisitions, and it was one of the first places they consolidated, and they grew faster by acquisition than any of their competitors. So I think there's a way to do it correctly, but in most companies you go in, you'll find all kinds of legacy silos and duplication, and those are opportunities to really reduce costs and self-fund all the improvements, all the strategic programs you want. >> And what I'm inferring from the end-to-end is that the data role overlaps, or maybe bridges the gaps, and data is that thread between cost, risk, and value. >> It is. And I've been lucky in my career: I've reported to CFOs, I've reported to CIOs, and I've reported to CEOs, so I've reported in three different ways, and each of those executives really looked at it a little bit differently. Value obviously is in a CEO's office, compliance more in a CFO's office, and cost was more in the CIO domain, but you know, we had to build a program looking at all three. >> You know, I think this topic, though, that we were just talking about, how these roles are evolving, I think it's natural, because we're about five
to seven years into the evolution of the CDO; it might be time for a CDO 2.0. And you see more CDOs moving away from pure policy and compliance to more value enablement. It's a really hard change, and that's why you're starting to see more turnover of some of the CDOs, because people who are really good CDOs at policy and risk and things like that might not be the best enablers, right? So I think it's a pretty natural evolution. >> Great discussion, guys. We've got to leave it there. They say data is the new oil. Data is more valuable than oil, because you can use the same data to reduce cost, to reduce risk, and to drive revenue, and you can't put a gallon of oil in your car and also use a quart of that same oil in your house. With data, you can; we think it's even more valuable. Gentlemen, thank you so much for coming on theCUBE. >> Thanks so much. Lots of fun. >> Thanks. >> All right, keep it right there, everybody. We'll be back with our next guest. You're watching theCUBE from the IBM CDO Summit 2019. Right back.
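The cognitive digitization Glenn describes in the interview above, training extraction on what a field looks like rather than where it sits on the form, can be sketched roughly like this. This is a toy illustration, not IBM's actual system; the field patterns and token format are invented for the example:

```python
import re

# Toy contrast with template OCR: a template ties each field to fixed
# coordinates, so a changed layout silently breaks extraction. The
# "cognitive" style classifies each token by what it looks like instead.
PATTERNS = {
    "ssn": re.compile(r"^\d{3}-\d{2}-\d{4}$"),
    "invoice_number": re.compile(r"^INV-\d+$"),
    "amount": re.compile(r"^\$\d+\.\d{2}$"),
}

def cognitive_extract(tokens):
    """Label tokens by pattern, independent of where they appear on the page."""
    found = {}
    for tok in tokens:
        for field, pat in PATTERNS.items():
            if pat.match(tok):
                found[field] = tok
    return found

# The same fields are recognized even after the layout is shuffled.
page = ["INV-1041", "123-45-6789", "$99.50"]
shuffled = ["$99.50", "123-45-6789", "INV-1041"]
assert cognitive_extract(page) == cognitive_extract(shuffled)
```

A real system would learn these classifiers from labeled examples rather than hand-written patterns, which is the "that's how you train it" point in the conversation.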

Published Date : Jun 24 2019
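Dave's "ship five megabytes of code to a petabyte of data" remark in the interview above is the core MapReduce idea: the data shards stay put and only a small function travels to them. A minimal sketch, using toy in-memory shards and only the standard library:

```python
from collections import Counter
from functools import reduce

# Each shard stays where it is; only the small map function is shipped to it.
shards = [
    "data is plentiful insights are not",
    "leave the data where it is",
    "ship the code to the data",
]

def map_phase(shard):
    # Runs locally on each node against its own shard of the data.
    return Counter(shard.split())

def reduce_phase(left, right):
    # Merges the small partial counts shipped back from the nodes.
    return left + right

counts = reduce(reduce_phase, (map_phase(s) for s in shards))
assert counts["data"] == 3
assert counts["the"] == 3
```

Only the per-shard `Counter` objects cross the network in a real cluster, which is why the pattern scales when moving the data itself would not.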



Ronen Schwartz, Informatica & Daniel Jewett, Tableau Software | Informatica World 2019


 

(upbeat music) >> Live from Las Vegas, it's theCUBE, covering Informatica World 2019. Brought to you by Informatica. >> Welcome back everyone to theCUBE's live coverage of Informatica World. I'm your host, Rebecca Knight. We have two guests for this segment. We have Ronen Schwartz. He is the senior vice president and general manager, Big Data, Cloud and Data Integration at Informatica. Welcome back, Ronen. >> Yes, pleasure to be here. Welcome to Informatica World. >> Thank you. And we have Daniel Jewett, VP of Product Management at Tableau. Thank you so much for coming on theCUBE. >> Thank you for the welcome, Rebecca. Happy to be here. >> So there's some big news that's going to be announced later today. Tell us about the partnership between Tableau and Informatica. I want to start with you, Ronen. >> Yes. So Tableau has been an amazing innovator in the area of data visualization and analytics. I think more than all, they've actually opened the ability for more people to use data. And Informatica has been very excited to partner with Tableau on this journey of how do we empower more users, more companies, to become data driven. So I think it's a very exciting partnership. A lot of innovation, a lot of great capabilities. >> So we hear so much about the explosion of data and how much it's being used across the enterprise. More and more functions are using data to make their decisions. How does this impact the strategic importance of data? >> Yeah, absolutely. Well, the relationship with Informatica for us has become important over the years as that data has exploded. Right, it used to start off, you had a spreadsheet of some numbers and you wanted to try and understand what was in there, and Tableau helped you with that.
But then as data lake started coming on the scene and not just a single data lake but multiple feeds of data and streaming data and data's here, and data all over in Europe, and data's wherever it happens to be, that becomes a real challenge for the individuals who have some questions about data. So Tableau's only as good as the data that we can get our hands on. So to have a great partner like Informatica, who can marshal and rationalize where all that data is is a valuable partnership for us to have. >> And it's really about data governance but then also about democratization of data and analytics. Want to talk about that a little bit, Ronen? >> Yes, so I think democratization of data actually depends on your ability to have built-in governance. So that the users are using the right data at the right time. And the organization actually understands what is available where. I think this is actually one of the sweet spots for the partnership. >> Right. >> Actually, the ability of Tableau with a very easy interface to allow everybody to really work with data and the ability of Informatica to enable everybody to get the data in a governed way when you can actually control the quality and the availability of the data is actually our sweet spot as partners. >> There's some real tension there between the democratization and the governance side, right? So from a business user's perspective, democratization means, I want to use that data and I want to start working with it. From a business user's perspective governance, typically means no. IT says you can't use that data or you can't have it or it's too complicated for you. So to be able to break that down and say no. Data catalog and some of the tools from Informatica make the data available in an accessible and friendly manner and understandable manner, is what enables the democratization to happen. So it's kind of turning that "no" into a "yes, let me help you", which is a big difference. 
>> And how has that relationship between IT and the business side evolved in recent years, as there is more of a push and pull between these two functions? >> Yes, it's definitely evolved over the years. As Ronen said, we have been working together for a long time; I think we officially became partners back in 2011. There was probably some tension in a lot of accounts between the IT camp and the business camp, and we were always the flag bearers for the business users. As we've seen over the years, business users get frustrated by untrusted data and not being able to find data. So as the IT organizations have helped bridge that gap, I would like to think we're helping put that olive branch in between the two. The two camps now have companies with the products working together. >> I think, imagine that instead of IT actually being in the way of people using data, IT is really giving the power to find the right data to the business users. And instead of the user working really, really hard to get the data, now it's at their fingertips. They can find it, and when they find it, they can use it all the way from the source into Tableau in a very, very easy way. >> And trust it. >> And trust it. >> The value add. >> The veracity, exactly. >> I can find a lot of data easily, but most of it is not trustworthy, and I don't know if I want to do my analysis on untrustworthy data. So to be able to trust the data that I've come across is really important. >> We're talking a lot about AI and machine learning here. How do those two concepts, ideas, approaches, methodologies play into Tableau's vision? >> For Tableau, we've always been the company that wants the human as part of the process, right? We think people are curious and we want them to explore that data and work with it. So at first glance you might think AI and machine learning don't fit in with that, but we think there's really a powerful way for them to do it.
Instead of a machine learning solution handing you the answer, we want the machine solution to say, we think there's something interesting here that you should go explore more. So that's the angle that we're putting our investment in. >> So putting the human into this tech. >> The human still needs to be in the loop. >> Human-centered machine learning. >> And the machine can help coach you along the right way to make those inferences around the data. >> Final question. We're talking a lot about the skills gap. It is a pressing problem in the technology industry. Ronen, I'm going to start with you. How much does this keep you up at night? And what are you doing to ensure that you have the right technical and business talent to fill the open roles you have on your team? >> I'll probably answer it in a relatively unique way. I think one of our jobs as a vendor is actually to empower more users to do more complex tasks without the necessity to build a huge skill set. And I think today, especially at this event, a lot of the CLAIRE AI technologies are really coming to give users that are less skilled a lot of power. And this is actually a critical thing in order to address the new needs, right? So the needs will continue to grow; the demand is going to continue to grow. We believe that a big part of answering the demand-versus-supply gap is by empowering new users to participate in an effective way within the integration, data management, and analytics space. So we're making a major, major effort there. But we're also adding a lot of guidance, a lot of advice, a lot of optimization that is done for the users automatically, so the users are more effective. I still think that the need for talent is only going to grow. It's not just the growth in the data; it's the growth in the demand for data and the growth in the demand for good data. So I think a lot of enablement, a lot of investment in people, and the technology to actually empower more users. >> Daniel?
>> Yeah so for us part of the onus is on us to make the software easy enough to use and understandable for the audiences that are coming across it. So there's really no reason why everybody can't be an analyst. They might be afraid of that title but you're all working with data. You're looking at your phone, You're looking at your steps, You're looking at everything. Data. It's as simple as that. But data comes across your landscape in a lot of ways. So it's up to us to make the analytic flow as easy as we can and understandable as we can. But it's also up to us to help grow the skills. You can only make it so easy 'cause sometimes doing analytic task and working with data is just hard. There are complicated things. So what can we do to uplift the skills? We do a lot with Tableau for teaching and trying to nurture education programs all the way from K to 12, and up in universities to try and seed the universities' and elementary school instructors to start introducing the concepts of working with data at early ages. And then in college, there's whole classes that people use Tableau in to help understand the analytic process. So it's a little step and it's a forward looking step. The payoff won't be for many years until those people get into the workforce. >> We're starting them young. (laughing) >> But you have to. >> Mommas, teach your babies data science. >> Absolutely. (laughing) >> Daniel, Ronen, Thank you both so much for coming on theCUBE. It's been a great conversation. >> Excellent, >> Thank you. >> thank you, Rebecca. >> I'm Rebecca Knight, we will have much more of theCUBE's live coverage of Informatica World 2019. Stay tuned. (upbeat music)
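The "turning that 'no' into a 'yes, let me help you'" idea from the discussion above, searching everything but handing back only governed, certified datasets by default, can be sketched as a toy catalog. The dataset names and the `certified` flag are invented for illustration; this is not Informatica's Enterprise Data Catalog API:

```python
# A toy data catalog: democratization with governance means users can
# search everything, but by default only certified datasets come back.
CATALOG = [
    {"name": "sales_2019", "owner": "finance", "certified": True},
    {"name": "sales_scratch", "owner": "unknown", "certified": False},
    {"name": "customers", "owner": "crm", "certified": True},
]

def search(term, trusted_only=True):
    """Return matching dataset names, filtered to certified ones by default."""
    hits = [d for d in CATALOG if term in d["name"]]
    if trusted_only:
        hits = [d for d in hits if d["certified"]]
    return [d["name"] for d in hits]

assert search("sales") == ["sales_2019"]                               # the governed answer
assert search("sales", trusted_only=False) == ["sales_2019", "sales_scratch"]
```

The design point is that governance is a default filter on discovery, not a wall in front of it: the untrusted data is still findable when a user explicitly asks, which is what makes the experience feel like "yes" rather than "no."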

Published Date : May 22 2019



#DatriumCrowdChat


 

>> Hi, I'm Peter Burris, and welcome to another CUBE Conversation. This one is part of a very, very special digital community event sponsored by Datrium. What are we going to be talking about today? Well, Datrium comes here with a special product announcement that's intended to help customers do a better job of matching their technology needs with the speed and opportunities to use their data differently within their business. This is a problem that every single customer faces, every single enterprise faces, and it's one that's become especially acute as those digital natives increasingly hunt down and take out some of those traditional businesses that are trying to better understand how to use their data. Now, as we have with all digital community events, at the end of this one we're going to be running a CrowdChat, so stay with us. We'll go through a couple of Datrium and Datrium-customer conversations, and then it'll be your turn to weigh in on what you think is important and ask the questions of Datrium and others in the community that you think need to be addressed. Let's hear what you have to say about this increasingly special relationship between data, technology, and storage services. So without further ado, let's get it kicked off. Tim Page is the CEO of Datrium. Tim, welcome to theCUBE. >> Thank you, Peter. >> So, give us a quick take on where you guys are. >> Yeah, Datrium was formulated as a software-defined converged infrastructure company that takes convergence to the next level. And the purpose is to give the user the same experience whether you're working on-prem or across multiple clouds. >> Great. So that's the vision, but you've been talking to a lot of customers. What's the problem that you keep hearing over and over that you're pointing towards? >> Yeah, it's funny.
I've met with a number of CIOs over the years, and specifically as it relates to Datrium, they'll tell you we're in an on-demand economy that expects instant outcomes, which means you have to digitally transform. And to do that, you've got to transform IT, which means it's got to be easy, it's got to be consistent, you've got to get rid of a lot of the management issues, and it's got to take advantage of the services that cloud has to offer. >> All right, so that's the nature of the problem. You've also done a fair amount of research looking into the specifics of what they're asking for. Give us some insight into what Datrium is discovering as you talk to customers about what the solutions are going to look like. >> It's interesting. If you look at how to resolve that, you've got to converge to transform in some form or fashion. The first level of convergence a lot of people have done has been directly related to the hardware architecture. We've taken that to a whole new level, to the point where we're saying, how do you actually automate those mundane tasks that take multiple groups to solve, specifically primary storage, backup, and disaster recovery? With all the policies involved in that, there's a lot of work that goes into it across multiple groups, and we set out to solve those issues. >> So there's still a need for performance, there's still the need for capacity, to reduce management time and overhead, et cetera. But Tim, as we move forward, how are customers responding to this? You're getting some sense of what percentage of them are going to say, yeah, that's it. >> It's interesting. We conducted a survey and got over five hundred IT leaders to respond to it. What's interesting is they talk about performance, management, security, but they're also talking about consistency of that experience. And specifically, we asked how important it is for your platform to have built-in backup and policy services, with encryption built in, et cetera.
We got seventy percent of those people interviewed saying it's really important for that to be part of the platform. >> So it sounds like you're really talking about something more than just a couple of products. You're talking about, I won't say forcing customers, but customers starting the process of rethinking their data infrastructure. Do I have that right? >> That's right. Look at how infrastructure has grown in the last twenty years, right? Twenty years ago, SAN technology arrived, and every time you stood up an app, you had to put different policies on that app, different LUN management, deciding how much of my resources go to certain things. We set out to actually automate that, which is why it took us four years to build this platform with a hundred programmers: well, how do we actually make you not think about how you're going to back up? How do you set a policy and know disaster recovery is going to run? And to do that, you've got to have it in one code base. And we know we're on to something, even based on our survey, because the old array vendors are all buying bolt-ons, because they know users want an experience. But you can't have that experience with a bolt-on; you have to have it in your fundamental platform. >> Well, let me step in here. I've been around for a long time, Tim, and heard a lot of people talk about platforms. And if I have one rule, it's that companies that introduce platforms that just expand typically fail, and companies that bring an opinion and converge more things so it's simpler tend to be more successful. Which direction's Datrium going? >> So we definitely... that's why we took the time, right? If you want to be an enterprise-class company, you can't build a cheap platform in eighteen months and hit the market, because how you architect is how you stay.
So our purpose from the beginning was purposefully to spend four years building an enterprise-class platform that did away with a lot of the mundane tasks. SAN management, that's twenty-year-old technology, right? LUN management. So if you're buying your multi-cloud-type technology experience in cages, you're just buying old stuff. We took an approach saying, we want that consistent approach, so that whether you're running your services on-prem or in any type of cloud, you can instantly take advantage of it, and it feels the same. That's a big task, because you're looking to run at the speed of storage with the resiliency of backup, right, which is a whole different type of technology. Which is how our founders, who have built the first versions of this, went to the second, almost third version of that type of instantiation of a platform. >> All right, so we know what the solution is going to look like. It's going to look like a data platform that's rethought to support the needs of data assets, and introduces a set of converged services that really focus the value proposition on what the enterprise needs. So what are you guys announcing? >> That's exactly right. So we've finalized what we call our AutoMatrix platform. AutoMatrix inherently will have primary storage, backup, disaster recovery, a DR solution, all the policies within that, and encryption built in from the very beginning. So to have those five things, we believe, is to actually have the next-generation experience across true multi-cloud. You're not bolting on hardware technologies; you're bolting on software technologies that operate in the same manner. Those five things have to be inherent, or you're a bolt-on-type company. >> So you're not building a platform out by acquisition. You build a platform out by architecture and development. >> That's right. And we took four years to do it, with one hundred guys building this thing out. It's released, it's out, and it's ready to go.
So the first thing we're announcing is the first instantiation of that as a product we're calling Control Shift, which is really a data mobility orchestrator, true SaaS-based. You can orchestrate from prem to prem, prem to cloud, and cloud to cloud, and our first generation of that is disaster recovery. So, truly to be able to set up your policies, check those policies, and make sure you're going to have true disaster recovery with an RTO of zero. It's a tough thing. We've done it. >> That's outstanding. Great to hear. Tim Page, CEO of Datrium, talking about some of the announcements that we'll hear more about in a second. Let's now turn our attention to a short video. Let's hear more about it. >> Lead Bank is focused on small businesses and helping them achieve their success. We went through and redesigned the customer engagement in defining the bank of the future. This office is our first implementation of that concept. As you can see, it's a much more open floor plan design that increases the interaction between our Lead Bank associates and our clients. With Datrium's split provisioning, all of our data is now on the host, so we have seen eighty times lower application latency. This gives our associates instant responses to their queries, so they can answer client questions in real time. Downtime is always expensive in our business. In the past we had a forty-eight-hour recovery plan, but with Datrium we were able to far exceed that plan; we've been able to recover systems in minutes now. Instead of backing up once per day, with that backup time taking eighteen hours, now we're doing full system snapshots hourly, and we're replicating those offsite. Datrium is the only vendor I know of that can provide this end-to-end encryption, so any cyber attacks that get into our system are neutralized. With the Datrium solution, we don't have to have storage consultants anymore. We don't have to be storage experts. We're able to manage everything from a storage perspective through vCenter, obviously spending less time and money on infrastructure. We continue to leverage new technologies to improve application performance and lower costs. We also want to automate our DR failover, so we're looking forward to implementing Datrium's product to orchestrate and automate our DR failover process. >> It is always great to hear from a customer. Now I want to get into the announcements. I'm Peter Burris, and this is a CUBE Conversation, part of a digital community event sponsored by Datrium. We were talking about how new digital business outcomes are highly dependent upon data, and how the mismatch of technology able to support those new classes of outcomes is causing problems in so many different enterprises. So let's dig a little bit more deeply into some of Datrium's announcements to try to find ways to close those gaps. We've got Sazzala Reddy, who's the CTO of Datrium, with us today. Sazzala, welcome to theCUBE. >> Thank you. It's good to see you again. >> So, AutoMatrix: give us a little bit more detail on how it's creating value for customers. >> So if you go to any data center today, you notice that for the amount of data they have, there are five different vendors and five different products to manage the data. There is the primary storage, there is the backup, and there is the DR, and then there's mobility, and then there is the security to think about. So these five different products are kind of causing friction for you. If you want to be in the on-demand economy and move fast in your business, these things are causing friction; you cannot move that fast. And so what we have done is we took a step back and built this AutoMatrix platform. It provides these data services; we call them autonomous data services.
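The Lead Bank figures quoted in the video (daily backups taking eighteen hours, replaced by hourly replicated snapshots) imply a concrete recovery-point improvement. The back-of-the-envelope model below is a generic sketch, not Datrium's published math; only the input figures come from the video.

```python
# Worst-case data-loss window (RPO) is roughly the interval between
# backups plus the time a backup takes to complete: a change made just
# after one backup starts isn't protected until the next one finishes.

def worst_case_rpo_hours(interval_hours: float, duration_hours: float) -> float:
    """Upper bound on the age of the newest recoverable data, in hours."""
    return interval_hours + duration_hours

# Before: one backup per day, each taking 18 hours to complete.
before = worst_case_rpo_hours(interval_hours=24, duration_hours=18)  # 42 hours

# After: hourly snapshots; snapshot creation time is negligible here.
after = worst_case_rpo_hours(interval_hours=1, duration_hours=0)     # 1 hour

print(f"worst-case RPO before: {before:.0f}h, after: {after:.0f}h")
```

By this rough model the worst-case window shrinks from roughly forty-two hours to about one hour, which is consistent with the customer's move from a two-day recovery plan to minutes-scale recovery.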
The idea is that you don't have to really do much for it. By converging all these functions into one simple platform, we eliminate all the friction you need to manage all your data, and that's kind of what we call the AutoMatrix platform. >> So as a consequence, I've got to believe then that your customers are discovering that not only is it simpler and easier to use, with perhaps a little bit less expertise required, but they are also more likely to be operationally successful with some of the core functions, like DR, that they have to work with. >> Yeah. So the other thing about these five different functions and products you need is that if you want to imagine a future where you're going to, you know, leverage the cloud for a simple thing like DR, for example, if you want to move this data to a different place with five different products, how does it move? Because, you know, all these five products must move together to some other place; that's not going to work for you. So by having these five different functions converged into one platform, when the data moves to another place, the functions move with it, giving the same exact, consistent view of your data. That's kind of what we have built. And on top of all this, we have these global data management applications to control all the data you have in your enterprise. >> So how are customers responding to this new architecture of AutoMatrix converged services and a platform for building data applications? >> Yeah, so our customers consistently tell us one simple thing: it's the easiest platform they have ever used in their entire enterprise life. So that's what we aimed for, simplicity of the customer experience. Autonomous data services give you exactly that experience.
So as an example, last quarter we had about forty proofs of concept out in the field. Out of them, about thirty have adopted already, and we're waiting for ten of the results to come out this quarter. So generally we found that proofs of concept don't come back, because once you touch it and experience the simplicity offered, and how you get all these services simply, people don't tend to send it back. They like to keep it and kind of operate that way. >> So you mentioned earlier, and I'm kind of summarizing, a notion of applications, data services applications. Tell us a little bit about those and how they relate to AutoMatrix. >> Right. So once you have data in multiple places, and people are not only on-prem, they're also going to be in all these different clouds, then to provide that uniform experience you need these global data management applications to extract value out of your data. And that's kind of the reason why we built some global data management applications. They're SaaS products; there's nothing to install, nothing to manage. They sit outside, and then they help you manage globally all the data you have. >> So as a result, the I&O people, the infrastructure and operations administrators, can think in terms of the AutoMatrix platform, which the rest of the business can look at in terms of the services and applications that they're using and supporting. >> That's exactly right. So you get a single dashboard to manage all the data you have in the enterprise. >> Now, I know you're introducing some of these applications today. Can you give us a little peek into them? >> Yeah. Firstly, our AutoMatrix platform is available on-prem as software-defined converged infrastructure, and we call it DVX. And then we also offer our services in the cloud, called Cloud DVX. You can get these, and we're also announcing the release of Control Shift. It's one of our first data management applications, which helps you manage data in two different locations. >> So go into more specifics and detail on Control Shift. Specifically, which of those five data services you talked about is Control Shift most clearly associated with? >> Right. So if you go again back to this question about the five different services: if you have to think about DR, DR is a necessity for every business. It's essential protection; you need it. But there are three or four challenges you generally run into. The most common ones people talk about are, one, that you have to have a plan, a proper plan, and it's challenging to plan something. Then there's the fire drill you have to run when there's a problem. And then lastly, when you actually push the button to fail over, does it really work for you, and how fast is it going to come up? So those are the three problems we wanted to solve really solidly, so we call our DR service fail-proof DR. It actually takes a little courage to say fail-proof. Control Shift is our service which actually does this. It does the orchestration, does mobility across two different places, which could be on-prem to on-prem, or on-prem to the cloud. And because we have these end-to-end data services ourselves, it's easy to run compliance checks all the time, so we can do compliance checks every few minutes. What that gives you is the confidence that your DR plan is going to work for you when you need it. Then secondly, when you push the button, because we also provide primary storage and backup, it's easy to bring up all your services at once. And the last one is that because we are able to work across the clouds and provide a seamless experience, when you move the data to the cloud, you have some backups there.
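The "compliance checks every few minutes" idea can be sketched as a routine that re-validates a DR plan against its recovery-point target. Everything below, the plan fields, the check names, is hypothetical illustration, not Datrium's actual API:

```python
from dataclasses import dataclass

# Hypothetical DR plan record; these field names are invented for illustration.
@dataclass
class DRPlan:
    name: str
    rpo_seconds: int          # maximum tolerated data-loss window
    replica_age_seconds: int  # age of the newest replica at the DR site
    target_reachable: bool    # can the failover site be contacted?

def compliance_check(plan: DRPlan) -> list:
    """Return a list of violations; an empty list means the plan is compliant."""
    violations = []
    if not plan.target_reachable:
        violations.append("DR target unreachable")
    if plan.replica_age_seconds > plan.rpo_seconds:
        violations.append(
            f"replica is {plan.replica_age_seconds}s old, "
            f"exceeding the RPO of {plan.rpo_seconds}s"
        )
    return violations

plan = DRPlan("erp-failover", rpo_seconds=300,
              replica_age_seconds=120, target_reachable=True)
print(compliance_check(plan))  # an empty list: the plan is compliant
```

Running a check like this on a short timer is what turns a DR plan from a document you hope is current into something continuously verified, which is the confidence the interview describes.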
When you push the button to fail over, we'll bring up your services in VMware Cloud, so the idea is that it looks exactly the same no matter where you are, whether you're in DR or not in DR. And then, you know, watch the video, watch the demos; I think as you look at them you won't be able to tell the difference. >> Well, that's great. So give us a little bit of visibility into how Datrium intends to extend these capabilities. Give us a little visibility into the road map next. >> So we are already on Amazon with the cloud. The next one we're going to be delivering is Azure; that's the next step. But if I step back a little bit, how do we think about ourselves? Look at the example of Google: Google, you know, federates all the Internet data and provides that instant search, that instant access to all the data at your fingertips. So we want to do something similar for enterprise data. How do we federate it? How do we aggregate data and provide the customer the instant management they can get from all the data they have, and extract value from the data? Those are the set of applications we're building towards, organically. Some examples: we're building deep search, so how do you find the things you want to find in a very nice, intuitive way? How do you do compliance, GDPR? And also, how do you think about, you know, some data analytics on the data? And we'll also extend our Control Shift to not just manage the data on our platforms, but to manage data across different platforms. So those are the kinds of things we're thinking about as the future. >> Excellent. Sazzala Reddy, CTO of Datrium. Thanks very much for talking to us about AutoMatrix, Control Shift, and the direction that you're taking, with this very, very striking new vision about how data and business can more easily be brought together. So, you know, I'll tell you what, let's take a look at a demo. >> In today's enterprise data centers,
you want a simple way to deal with your data, whether in the private or public cloud, and to ensure that disaster recovery is easy to set up, always compliant and in sync with the sites it addresses, and ready to run as the situation requires, built on consistent backups, allowing you to leverage any current or previous recovery point in time with near-zero RTO, as the data does not have to be moved in order to use it. Automated orchestration lets you easily test or execute the recovery plans you have constructed, with greater confidence, all while monitoring actions and progress from a central resource, along with maintaining comprehensive runbooks of these actions automatically from the orchestration framework managing your systems. Datrium AutoMatrix provides this solution. Run on local host flash and get the benefits of better performance and lower latencies. Back up and protect your data on the same converged platform, without extracting it to another system, while securing the data in your enterprise with end-to-end encryption. Automate the SLAs desired for your business needs with policy-driven methods that capture the what, when, and where aspects of protecting your data, keeping copies locally or at other sites. Efficiently move the data from one location to another, whether in your private or public cloud. This is the power of software-defined converged infrastructure with Cloud DR from Datrium, which we call AutoMatrix. >> Hi, and welcome back to another CUBE Conversation. Once again, I'm Peter Burris, and one of the biggest challenges that every user faces is how to get more out of their technology suppliers, especially during periods of significant transformation. To have that conversation, we've got Brian Bond, who's the director of infrastructure at eMeter, a Siemens business. Brian, welcome to theCUBE. >> Thanks for having me. >> So tell us a little bit about eMeter and what you do there.
>> So eMeter is a developer and supplier of smart grid infrastructure software for enterprise-level clients: utilities, water, power, energy. And my team is charged with managing infrastructure for that entire business unit, everything from dev, test, QA, and sales. >> Well, you know, intelligent infrastructure as it pertains to the electrical grid, that's not a small set of applications or a small set of use cases. What kinds of pressure is that putting on your infrastructure? >> A lot of it is the typical pressures that you would see: do more with less, do more faster. But a lot of it is wrapped around our customers and our other end users needing more storage, needing more performance, and needing things delivered faster. On a daily basis things change, and keeping up with the Joneses gets harder and harder to do as time moves on. >> So as you think about Datrium's AutoMatrix, how is it creating value for you today? Give us kind of a peek into what it's doing to alleviate some of these scaling and related resource pressures. >> So the first thing it does is allow us to do a lot more with less. We get two times the performance, five times the capacity, and we spend zero time managing our storage infrastructure. And when I say zero time, I mean zero time: we do not manage storage anymore. With the Datrium product we can deploy things faster, we can recover things faster. Our RTO and RPO metrics are down to seconds instead of minutes or hours, and those types of things really allow us to provide a much better level of service to our customers. >> And with especially critical infrastructure like the electrical grid, it's good to hear that the RTO and RPO are getting as close to zero as possible. But that's the baseline today. Looking out, as you envision where the needs of these technologies are going, for improving protection, consolidating converged data services, and overall providing a better experience for how a business uses data.
How do you anticipate that you're going to evolve your use of AutoMatrix and related Datrium technologies? >> Well, we fully intend to expand our use of the existing piece that we have, but then this new AutoMatrix piece is going to help us not just with deployments. It's also going to help us with compliance testing, data recovery, disaster recovery, and also being able to deploy into any type of cloud or any type of location without having to change what we do on the back end, being able to use one tool across the entire set of the infrastructure we're using. >> So what about the toolset? You're using the whole thing consistently, but how easy was the toolset to bring up within your shop? >> Installing the infrastructure pieces themselves, in their entirety, was very, very easy. So putting that into what we had already, and where we were headed, was very, very simple. We were able to do that on the fly, in production, and not have to make a whole lot of changes to the environments that we were running at the time. The operational pieces within the DVX, which is the storage part of the platform, were seamless as far as vCenter and the other tools that we're using went, and allowed us to just extend what we were doing already and apply that as we went forward. And we immediately found that, again, we just didn't manage storage anymore. And that wasn't something we were anticipating, and that made our ROI just go through the roof. >> So it sounds like time to value for the platform was very quick, and also it fit into your overall operational practices, so you didn't have to do a whole bunch of unnatural acts to get it right. >> We did not have to change a lot of policies. We didn't have to change a lot of procedures. In a lot of cases we just shortened them; we took a few steps out. >> So how is it changing, being able to do things like that, the conversation with the communities that you're serving?
They ask for more storage, or they ask for more capabilities. >> First off, it's making me say no a lot less, and that makes them very, very happy; the answer usually is yes. And then the answer to the question of how long will it take changes from, oh, we can get that done in a couple of days, or, oh, we can get that done in a couple of hours, to, I did that while I was sitting here in the meeting with you, and it's been handled and you're off to the races. >> So it sounds like you're placing a pretty big bet on Datrium. What's it like working with them as a company? >> It's been a great experience from the start, in the initial piece of talking to them and going through the POC process. They were very helpful, very knowledgeable SEs, and since then they've been very, very helpful in allowing us to tell them what our needs are, rather than them telling us what our needs are, and in working through the new processes and the new procedures within our environments. They've been very instrumental in performance testing and deployment testing, things that a lot of other storage providers didn't have any interest in talking with us about. So they've been very, very helpful with that, and the people there are actually really smart, which is not surprising. But the fact that they can relay that into solutions to what my actual problems are, and give me something that I can push forward to my business and have a positive impact from day one, has been, without question, one of the better things. >> Well, it's always one of the biggest challenges when working with a company that's just getting going: how do you get the smarts of that organization into the business outcomes that you're seeking? Sounds like it's working well. Absolutely. All right, Brian Bond, director of infrastructure at eMeter, a Siemens business. Thanks again for being on theCUBE. >> It's been great.
And once again, this has been a CUBE Conversation. Now, don't forget, this is your opportunity to participate in the CrowdChat immediately after this video ends, and let's hear your thoughts. What's important in your world as you think about new classes of data platforms, new roles for data, new approaches to taking greater advantage of the data assets that are differentiating your business? Have those conversations, make those comments, ask those questions. We're here to help. Once again, I'm Peter Burris. Let's CrowdChat.

Published Date : May 15 2019



Datrium V2


 

(light music) >> Hi, I'm Peter Burris and welcome to another CUBE Conversation. This one is part of a very, very special digital community event sponsored by Datrium. What are we gonna be talking about today? Well, Datrium's here with a special product announcement that's intended to help customers do a better job at matching their technology needs with the speed and opportunities to use their data differently within their business. This is a problem that every single customer faces, every single enterprise faces and it's one that's become especially acute as those digital natives increasingly hunt down and take out some of those traditional businesses that are trying to better understand how to use their data. Now, as we have with all digital community events, at the end of this one, we're gonna be running a crowd chat, so stay with us. We'll go through a couple of Datrium and Datrium customer conversations and then it'll be your turn to weigh in on what you think is important, ask the questions of Datrium and others in the community that you think need to be addressed. Let's hear what you have to say about this increasingly special relationship between data, technology and storage services. So, without further ado, let's get it kicked off. Tim Page is the CEO of Datrium. Tim, welcome to theCUBE. >> Thank you, Peter. >> So, Datrium, give us a quick take on where you guys are. >> Yeah, Datrium's formulated as a software defined converged infrastructure company that takes that convergence to the next level, and the purpose of us is to give the user the same experience whether you're working on-prem or across multicloud. >> Great, so let's start by saying that's the vision, but you've been talking to a lot of customers. What's the problem that you keep hearing over and over that you're pointing towards? 
>> Yeah, it's funny, meeting with a number of CIOs over the years and specifically as related to Datrium, they'll tell you we're on an on-demand economy that expects instant outcomes, which means you have to digitally transform and to do that, you've gotta transform IT, which means it's gotta be easy, it's gotta be consistent. You've gotta get rid of a lot of the management issues and it's gotta feel or take advantage of the services that cloud has to offer. >> All right, so that's the nature of the problem. You've also done a fair amount of research looking into the specifics of what they're asking for. Give us some insight into what Datrium's discovering as you talk to customers about what the solutions are gonna look like. >> It's interesting, if you look at how to resolve that, you've gotta converge to transform in some form or fashion. If you look at the first level of convergence a lot of people have done, it's been directly as it relates to hardware architecture. We've taken that to a whole new level to a point where we're saying how do you actually automate those mundane tasks that take multiple groups to solve. Specifically, primary, backup, disaster recovery, all the policies involved in that. There's a lot of work that goes into that across multiple groups and we set out to solve those issues. >> So, there's still a need for performance, there's still the need for capacity, to reduce management time and overhead, et cetera, but, Tim, as we move forward, how are customers responding to this? Are you getting some sense of what percentage of them are going to say, yeah, that's it? >> Yeah, so interesting, we just ran a survey and got over 500 people, IT leaders to respond to it and it's interesting 'cause they talk about performance, management, security, but they're also talking about consistency of that experience. 
Specifically, we asked how many of you is it important to have your platform have built-in backup and policy services with encryption built-in, et cetera and we got a 70% rate of those applicants, of those people interviewed saying it's really important for that to be part of a platform. >> Now, it sounds like you're really talking about something more than just a couple of products. You're really talking about forcing customers or you're not forcing, but customers are starting the process of rethinking their data infrastructure. Have I got that right? >> That's right. If you look at how infrastructure's grown over the last 20 years, 20 years ago, SAN technology was related and every time you threw up an app, you had to put different policies to that app or put different LUN type management to how much of my resources can go to certain things. We set out to actually automate that, which is why it took us four years to build this platform with 100 programmers is, well, how do we actually make you not think about how you're gonna back up. How do you set a policy and know disaster recovery is gonna run? And to do that, you gotta have it in one code base. And we know we're on to something even based on our survey because the old array vendors are all buying bolt-ons because they know users want an experience, but you can't have that experience with a bolt-on. You have to have it in your fundamental platform. >> Well, let me step in here. I've been around for a long time, Tim and heard a lot of people talk about platforms and if I have one rule, companies that introduce platforms that just expand typically fail. Companies that bring an opinion and converge more things so it's simpler, tend to be more successful. Which direction is Datrium going? >> Yeah, definitely, that's why we took time. If you wanna be an enterprise class company, you can't build a cheap platform in 18 months and hit the market, 'cause where you architect, you stay. 
Our purpose from the beginning was purposefully to spend four years building an enterprise platform that did away with a lot of the mundane tasks, SAN management. That's 20 years old technology, LUN management. If you're buying your multi-cloud type technology experience in cages, you're just buying old stuff. We took an approach saying we want that consistent approach that whether you're running your services on prem or in any type of cloud, you could instantly take advantage of that and it feels the same. That's a big task 'cause you're looking to run the speed of storage with the resiliency of backup, which is a whole different type of technology, which is how our founders who have built the first version of this went to the second and almost third version of that type of instantiation of a platform. >> All right, so we know what the solution's gonna look like. It's gonna look like a data platform that's rethought to support the needs of data assets and introduces a set of converged services that really focus the value proposition to what the enterprise needs. So, what are you guys announcing? >> That's exactly right. So, we've finalized what we call our AutoMatrix platform. AutoMatrix inherently in it will have primary backup, disaster recovery, DR solution, all the policies within that and encryption built-in from the very beginning. To have those five things, we believe to actually have the next generation experience across true multicloud, you're not bolting on hardware technologies, you're bolting on software technologies that operate in the same manner. Those five things have to be inherent or you're a bolt-on type company. >> So, you're not building a platform out by acquisition. You're building a platform out by architecture and development. >> That's right and we took four years to do it with 100 guys building this thing out. It's released, it's out and it's ready to go. 
So the first thing we're announcing, the first instantiation of that, is a product we're calling Control Shift, which is really a data mobility orchestrator, true SaaS based. You can orchestrate prem to prem, prem to cloud, cloud to cloud and our first iteration of that is disaster recovery. So, truly, to be able to set up your policies, check those policies and make sure you're gonna have true disaster recovery with an RTO of zero. It's a tough thing. We've done it. >> That's outstanding. Great to hear, Tim Page, CEO of Datrium, talking about some of the announcements that we're gonna hear more about in a second. Let's now turn our attention to a short video. Let's hear more about it. (light music) >> Lead Bank is focused on small businesses and helping them achieve their success. We went through and redesigned the customer engagement, defining the bank of the future. This office is our first implementation of that concept. As you can see, it's a much more open floor plan design that increases the interaction between our Lead Bank associates and our clients. With Datrium's split provisioning, all of our data is now on the host. So, we have seen 80 times lower application latency. This gives our associates instant responses to their queries, so they can answer client questions in real-time. Downtime is always expensive in our business. In the past, we had a 48 hour recovery plan, but with Datrium, we were able to far exceed that plan. We've been able to recover systems in minutes now. Instead of backing up once per day, with that backup time taking 18 hours, now we're doing full system snapshots hourly and we're replicating those offsite. Datrium is the only vendor I know of that can provide this end-to-end encryption. So, any cyber attacks that get into our system are neutralized. With the Datrium solution, we don't have to have storage consultants anymore. We don't have to be storage experts.
We're able to manage everything from a storage perspective through vCenter, obviously spending less time and money on infrastructure. We continue to leverage new technologies to improve application performance and lower costs. We also wanna automate our DR failover, so we're looking forward to implementing Datrium's product that'll allow us to orchestrate and automate our DR failover process. (light music) >> It is always great to hear from a customer. Once again, I'm Peter Burris, this a CUBE Conversation, part of a digital community event sponsored by Datrium. We've been talking about how the relationship between the new digital business outcomes highly dependent upon data and the mismatch of technology to be able to support those new classes of outcomes. It's causing problems in so many different enterprises. So, let's dig a little bit more deeply into some of Datrium's announcements to try to find ways to close those gaps. We've got Sazzala Reddy, who's the CTO of Datrium with us today. Sazzala, welcome to theCUBE. >> Hey Peter, good to see you again. >> So, AutoMatrix, give us a little bit more detail and how it's creating value for customers. >> Yeah, if you go to any data center today, you notice that for the amount of data they have, they have five different vendors and five different products to manage that data. There is the primary storage, there is the backup and there is the DR and then there's mobility and then there is the security you have to think about. So, these five different products are causing friction for you. If you wanna be in the on-demand economy and move fast in your business, these things are causing friction. You cannot move that fast. What we have done is we took a step back and we built this Automatrix platform. It has this data services which is gonna provide autonomous data services. The idea is that you don't have to do much for it. 
Converging all these functions into one simple platform removes all the friction you need to manage all your data, and that's what we call the Automatrix platform. >> As a consequence, I gotta believe then, your customers are discovering that not only is it super easy to use, perhaps a little bit less expertise required, but they also are more likely to be operationally successful with some of the core functions like DR that they have to work with. >> Yeah, so the other thing about these five different functions and products you need is that if you wanna imagine a future where you're gonna leverage the cloud for a simple thing like DR, for example, the thing is that if you wanna move this data to a different place, with five different products, how does it move? 'Cause all these five products must move together to some other place. That's not how it's gonna operate for you. So, by having these five different functions converged into one platform, when the data moves to any other place, the functions move with it, giving you the same exact consistent view for your data. That's what we have built, and on top of all this stuff are these global data management applications to control all the data you have in your enterprise. >> So, how are customers responding to this new architecture of AutoMatrix, converged services and a platform for building data applications? >> Yeah, our customers consistently tell us one simple thing: it's the easiest platform they've ever used in their entire enterprise life. So, that's what we aimed for, simplicity of the customer experience. Autonomous data services give you exactly that experience. So, as an example, last quarter, we had about 40 proof of concepts out in the field. Out of them, about 30 have adopted it already and we're waiting on the other 10 for results to come out this quarter.
So, generally we found that our proof of concepts don't come back, because once you touch it, you experience the simplicity of it and how you get all this service and support, and people don't tend to send it back. They like to keep it and operate it that way. >> So, you mentioned earlier and I summarized the notion of applications, data services applications. Tell us a little bit about those and how they relate to AutoMatrix. >> Right, so once you have data in multiple places, people are adopting multi-cloud and we are going to also be in all these different clouds and provide that uniform experience, you need these global data management applications to extract value out of your data, and that's the reason why we built some global data management applications as SaaS products. Nothing to install, nothing to manage, they sit outside and they help you manage globally all the data you have. >> So, as a result, the I&O people, the infrastructure and operations administrators, do things in terms of AutoMatrix's platform, and the rest of the business can look at it in terms of services and applications that you're using in support. >> That's exactly right, so you get a single dashboard to manage all the data you have in your enterprise. >> Now, I know you're introducing some of these applications today. Can you give us a little peek into those? >> Yeah, firstly, our AutoMatrix platform is available on-prem as a software-defined converged infrastructure and you can get that. We call it DVX. And then we also offer our services in the cloud. It's called Cloud DVX. You can get these. And we're also announcing the release of Control Shift. It's one of our first data management applications, which helps you manage data in two different locations. >> So, go a little more specific into, or detail into, Control Shift. Specifically, which of those five data services you talk about is Control Shift most clearly associated with?
>> Right, so to go back again to this question about having five different services: think about DR. DR is a necessity for every business. It's digital protection, you need it, but there are three or four challenges you generally run into, the most common ones people talk about. One is that you have to plan. You have to have a proper plan. It's challenging to plan something, and then you have to think about the fire drill you have to run when there's a problem. And then lastly, when you eventually push the button to fail over, does it really work for you? How fast is it gonna come up? Those are three problems we wanted to solve really solidly, so we call our DR services failproof DR. It actually takes a little courage to say failproof. ControlShift is our service which actually does this DR orchestration. It does mobility across two different places. It could be on-prem to on-prem, on-prem to the cloud, and because we have these end-to-end data services ourselves, it's easy to then do compliance checks all the time. So, we do compliance checks every few minutes. What that gives you is the confidence that your DR plan's gonna work for you when you need it. And then secondly, when you push the button, because you have both primary storage and backup, it's then easy to bring up all your services at once like that. And the last one is that because we're able to work across the clouds and provide a seamless experience, when you move the data to the cloud and have some backups there and you push a button to fail over, we'll bring up your services in VMware Cloud, so the idea is that it looks exactly the same no matter where you are, in DR or not in DR. Watch the video, watch some demos. I think you can see that you can't tell the difference.
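Sazzala's "compliance checks every few minutes" can be sketched as a small scheduled loop. This is a hypothetical illustration of the idea, not Datrium's implementation: the check names, thresholds, and stubbed probe values are invented for the example.

```python
# Hypothetical DR runbook: each step names a check and the state it expects.
# A real orchestrator would query live infrastructure; the probe is stubbed
# here so the control flow is easy to follow.
DR_PLAN = [
    {"check": "replica_lag_seconds", "max": 300},
    {"check": "target_capacity_free_pct", "min": 20},
    {"check": "failover_network_reachable", "expect": True},
]

def probe(check_name):
    """Stand-in for querying the real replication and failover targets."""
    fake_state = {
        "replica_lag_seconds": 45,
        "target_capacity_free_pct": 35,
        "failover_network_reachable": True,
    }
    return fake_state[check_name]

def run_compliance_check(plan):
    """Evaluate every step of the DR plan; return the list of violations."""
    violations = []
    for step in plan:
        value = probe(step["check"])
        if "max" in step and value > step["max"]:
            violations.append(step["check"])
        if "min" in step and value < step["min"]:
            violations.append(step["check"])
        if "expect" in step and value != step["expect"]:
            violations.append(step["check"])
    return violations

if __name__ == "__main__":
    # In practice this would run on a schedule ("every few minutes");
    # a single pass is shown. An empty list means the plan is compliant.
    print(run_compliance_check(DR_PLAN))
```

The point of checking continuously rather than at failover time is that drift (a stale replica, a full target) is surfaced while it is still cheap to fix.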
>> Well, that's great, so give us a little bit of visibility into how Datrium intends to extend these capabilities, give us a little visibility on your roadmap. What's up next? >> We are already on Amazon with the cloud. The next thing we're gonna be delivering is Azure, that's the next step, but if you step back a little bit, how do we think about ourselves? If you look at Google as an example, Google federates all the data, the internet data, and provides instant search, that instant click and access to all the data at your fingertips. So, we wanna do something similar for enterprise data. How do we federate, how do we aggregate data and provide the customer that instant management they can get from all the data they have? How do you extract value from the data? That's the set of applications we're building towards. Some examples: we're building deep search. How do you find the things you want to find in a very nice, intuitive way? And how do you do compliance, GDPR, and also how do you think about some deep analytics on your data? So, we also wanna extend our Control Shift not just to manage the data on our platform, but also to manage data across different platforms. So, those are the kinds of things we're thinking about for the future. >> Excellent stuff. Sazzala Reddy, CTO of Datrium, thanks so much for talking with us about AutoMatrix, Control Shift and the direction that you're taking with this. Very, very interesting new vision about how data and business can more easily be brought together. You know, I'll tell you what, let's take a look at a demo. Hi and welcome back to another CUBE Conversation. Once again, I'm Peter Burris and one of the biggest challenges that every user faces is how do they get more out of their technology suppliers, especially during periods of significant transformation. So, to have that conversation, we've got Bryan Bond who is Director of IT Infrastructure at eMeter, A Siemens Business. Bryan, welcome to theCUBE.
>> Thanks for having me. >> So, tell us a little bit about eMeter and what you do there. >> So, eMeter is a developer and supplier of smart grid infrastructure software for enterprise level clients, utilities, water, power, energy. My team is charged with managing infrastructure for that entire business units, everything from dev tests, QA and sales. >> Well, the intelligent infrastructure as it pertains to the electronic grid, that's not a small set of applications, a small set of use cases. What kinds of pressure is that putting on your IT infrastructure? >> A lot of it is the typical pressures that you would see with do more with less, do more faster. But a lot of it is wrapped around our customers and our other end users in needing more storage, needing more app performance and needing things delivered faster. On a daily basis, things change and keeping up with the Jones' gets harder and harder to do as time moves on. >> So, as you think about Datrium's AutoMatrix, how is it creating value for you today? Give us a peek into what it's doing to alleviate some of these scaling and other sorts of pressures. >> So, the first thing it does is it does allow us to do a lot more with less. We get two times the performance, five times the capacity and we spend zero time managing our storage infrastructure. And when I say zero time, I mean zero time. We do not manage storage anymore with the Datrium product. We can deploy things faster, we can recover things faster. Our RTO and our RPO matrix is down to seconds instead of minutes or hours. And those types of things really allow us to provide a much better level of service to our customers. >> And it's especially for infrastructure like the electronic grid, it's good to hear that the RTO, RPO is getting as close to zero as possible, but that's the baseline today. 
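The improvement Bond describes, from a once-a-day backup with an 18-hour run time to hourly snapshots replicated offsite, maps directly onto recovery point objective (RPO): worst-case data loss is roughly the snapshot interval plus the time for the copy to land offsite. A back-of-the-envelope sketch; the five-minute replication delay is an assumed figure for illustration, not a number from the interview:

```python
def worst_case_rpo_minutes(snapshot_interval_min, replication_delay_min):
    """Worst-case data loss window: a failure strikes just before the
    next snapshot would have finished replicating offsite."""
    return snapshot_interval_min + replication_delay_min

# Old regime: one backup per day, and the backup itself took 18 hours.
old_rpo = worst_case_rpo_minutes(24 * 60, 18 * 60)   # 2520 minutes (42 hours)
# New regime: hourly snapshots, assuming ~5 minutes to replicate offsite.
new_rpo = worst_case_rpo_minutes(60, 5)              # 65 minutes

print(old_rpo, new_rpo)
```

Under these assumptions the exposure window shrinks by more than a factor of thirty, which is why the RTO/RPO conversation in the interview moves from hours to minutes.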
Look out and as you envision where the needs are of these technologies are going for improving protection, consolidating, converging data services and overall providing a better experience for how a business uses data, how do you anticipate that you're going to evolve your use of AutoMatrix and relate it to Datrium technologies? >> Well, we fully intend to expand our use of the existing piece that we have, but then this new AutoMatrix piece is going to help us not with just deployments, but it's also gonna help us with compliance testing, data recovery, disaster recovery and also being able to deploy into any type of cloud or any type of location without having to change what we do in the back end, being able to use one tool across the entire set of the infrastructure that we're using. >> So, what about the tool set, you're using the whole thing consistently, but what about the tool set went in easiest for you within your shop? >> Installing the infrastructure pieces themselves in its entirety were very, very easy. So, putting that into what we had already and where we were headed was very, very simple. We were able to do that on the fly in production and not have to do a whole lot of changes with the environments that we were doing at the time. The operational pieces within the DVX, which is the storage part of the platform, were seamless as far as vCenter and other tools that we were using went and allowed us to just extend what we were doing already and be able to just apply that as we went forward. And we immediately found that again, we just didn't manage storage anymore and that wasn't something we were intending and that made our ROI just go through the roof. >> So, it sounds like time value for the platform was very, very quick and also it fit into your overall operational practices. You didn't have to do a whole bunch of unnatural acts to get there. >> Right, we did not have to change a lot of policies, we did not have to change a lot of procedures. 
A lot of times, we just shortened them, we took a few steps out in a lot of cases. >> So, how is it changing, being able to do things like that, changing your conversation with your communities that you're serving as they ask for more capabilities? >> First off, it's making me say no a lot less and that makes them very, very happy. The answer usually is less and the answer to the question of how long will it take changes from oh, we can get that done in a couple of days or oh, we can get that done in a couple hours to I did that while I was sitting here in the meeting with you and it's been handled and you're off to the races. >> So, it sounds like you're placing a pretty big bet on Datrium. What's it like working with them as a company? >> It's been a great experience. From the start in the initial piece of talking to them and going through the POC process, they were very helpful, very knowledgeable SCs and since then, they've been very, very helpful in allowing us to tell them what our needs are rather than them telling us what our needs are and going through and working through the new processes and the new procedures within our own environments. They've been very instrumental in performance testing and deployment testing with things that a lot of other storage providers didn't have any interest in talking with us about, so they've been very, very helpful with that and very, very knowledgeable. The people that are there are actually really smart, which is not surprising, but the fact that they can relay that into solutions to what my actual problems are and give me something that I can push forward onto my business and have a positive impact from day one has been absolutely without question one of the better things. >> Well, that's always one of the biggest challenge when working with a company that's just getting going is how do you get the smarts of that organization into the business outcomes and really succeed. It sounds like it's working well. 
>> Absolutely. >> All right, Bryan Bond, Director of IT Infrastructure at eMeter, A Siemens Business. Thanks again for being on theCUBE. >> Bryan: It's been great. >> And once again, this has been a CUBE Conversation. Now, what we'd like to do is don't forget this is your opportunity to participate in the crowd chat immediately after this video ends and let's hear your thoughts. What's important in your world as you think about new classes of data platforms, new roles of data, new approaches to taking greater advantage of the data assets that are differentiating your business. Have those conversations, make those comments, ask those questions. We're here to help. Once again, Peter Burris, let's crowd chat. (light music)

Published Date : May 7 2019



Mike Evans, Red Hat | Google Cloud Next 2019


 

>> reply from San Francisco. It's the Cube covering Google Club next nineteen Tio by Google Cloud and its ecosystem partners. >> We're back at Google Cloud next twenty nineteen. You're watching the Cube, the leader in live tech coverage on Dave a lot with my co host to minimum John Farriers. Also here this day. Two of our coverage. Hash tag. Google Next nineteen. Mike Evans is here. He's the vice president of technical business development at Red Hat. Mike, good to see you. Thanks for coming back in the Cube. >> Right to be here. >> So, you know, we're talking hybrid cloud multi cloud. You guys have been on this open shift for half a decade. You know, there were a lot of deniers, and now it's a real tail one for you in the whole world is jumping on. That bandwagon is gonna make you feel good. >> Yeah. No, it's nice to see everybody echoing a similar message, which we believe is what the customers demand and interest is. So that's a great validation. >> So how does that tie into what's happening here? What's going on with the show? It's >> interesting. And let me take a step back for us because I've been working with Google on their cloud efforts for almost ten years now. And it started back when Google, when they were about to get in the cloud business, they had to decide where they're going to use caveat present as their hyper visor. And that was a time when we had just switched to made a big bet on K V M because of its alignment with the Lenox Colonel. But it was controversial and and we help them do that. And I look back on my email recently and that was two thousand nine. That was ten years ago, and that was that was early stages on DH then, since that time, you know, it's just, you know, cloud market is obviously boomed. 
I again I was sort of looking back ahead of this discussion and saying, you know, in two thousand six and two thousand seven is when we started working with Amazon with rail on their cloud and back when everyone thought there's no way of booksellers goingto make an impact in the world, etcetera. And as I just play sort of forward to today and looking at thirty thousand people here on DH you know what sort of evolved? Just fascinated by, you know, sort of that open sources now obviously fully mainstream. And there's no more doubters. And it's the engine for everything. >> Like maybe, you know, bring us inside. So you know KK Veum Thie underpinning we know well is, you know, core to the multi clouds tragedy Red hat. And there's a lot that you've built on top of it. Speak, speak a little bit of some of the engineering relationships going on joint customers that you have. Ah, and kind of the value of supposed to, you know, write Hatton. General is your agnostic toe where lives, but there's got to be special work that gets done in a lot of places. >> Ralph has a Google. Yeah, yeah, yeah. >> Through the years, >> we've really done a lot of work to make sure that relative foundation works really well on G C P. So that's been a that's been a really consistent effort and whether it's around optimization for performance security element so that that provides a nice base for anybody who wants to move any work loader application from on crime over there from another cloud. And that's been great. And then the other maid, You know, we've also worked with them. Obviously, the upstream community dynamics have been really productive between Red Hat and Google, and Google has been one of the most productive and positive contributors and participants and open source. And so we worked together on probably ten or fifteen different projects, and it's a constant interaction between our upstream developers where we share ideas. 
And do you agree with this kind of >> S O Obviously, Cooper Netease is a big one. You know, when you see the list, it's it's Google and Red Hat right there. Give us a couple of examples of some of the other ones. I >> mean again, it's K B M is also a foundation on one that people kind of forget about that these days. But it still is a very pervasive technology and continuing to gain ground. You know, there's all there's the native stuff. There's the studio stuff in the AML, which is a whole fascinating category in my mind as well. >> I like history of kind of a real student of industry history, and so I like that you talk to folks who have been there and try to get it right. But there was a sort of this gestation period from two thousand six to two thousand nine and cloud Yeah, well, like you said, it's a book seller. And then even in the down turn, a lot of CFO said, Hey, cap backstop ex boom! And then come out of the downturn. And it was shadow I t around that two thousand nine time frame. But it was like, you say, a hyper visor discussion, you know, we're going to put VM where in in In our cloud and homogeneity had a lot of a lot of traditional companies fumbling with their cloud strategies. And and And he had the big data craze. And obviously open source was a huge part of that. And then containers, which, of course, have been around since Lennox. Yeah, yeah, and I guess Doctor Boom started go crazy. And now it's like this curve is reshaping with a I and sort of a new era of data thoughts on sort of the accuracy of that little historical narrative and and why that big uptick with containers? >> Well, a couple of things there won the data, the whole data evolution and this is a fascinating one. For many, many years. I'm gonna be there right after nineteen years. So I've seen a lot of the elements of that history and one of the constant questions we would always get sometimes from investor. Why don't you guys buy a database company? 
You know, years ago and we would, you know, we didn't always look at it. Or why aren't you guys doing a dupe distribution When that became more spark, etcetera. And we always looked at it and said, You know, we're a platform company and if we were to pick anyone database, it would only cover some percentage and there's so many, and then it just kind of upsets the other. So we've we've decided we're going to focus, not on the data layer. We're going to focus on the infrastructure and the application layer and work down from it and support the things underneath. So it's consistent now with the AML explosion, which, you know, we're who was a pioneer of AML. They've got some of the best services and then we've been doing a lot of work within video in the last two years to make sure that all the GP use wherever they're run. Hybrid private cloud on multiple clouds that those air enabled and Raylan enabled in open shift. Because what we see happening and in video does also is right now all the applications being developed by free mlr are written by extremely technical people. When you write to tense airflow and things like that, you kind of got to be able to write a C compiler level, but so were working with them to bring open shift to become the sort of more mass mainstream tool to develop. A I aml enable app because the value of having rail underneath open shift and is every piece of hardware in the world is supported right for when that every cloud And then when we had that GPU enablement open shift and middleware and our storage, everything inherits it. So that's the That's the most valuable to me. That's the most valuable piece of ah, real estate that we own in the industry is actually Ralph and then everything build upon that and >> its interest. What you said about the database, Of course, we're a long discussion about that this morning. You're right, though. Mike, you either have to be, like, really good at one thing, like a data stacks or Cassandra or a mongo. 
And there's a zillion others that I'm not mentioning or you got to do everything you know, like the cloud guys were doing out there. You know, every one of them's an operational, you know, uh, analytics already of s no sequel. I mean, one of each, you know, and then you have to partner with them. So I would imagine you looked at that as well. I said, How're we going to do all that >> right? And there's only, you know, there's so many competitive dynamics coming at us and, you know, for we've always been in the mode where we've been the little guy battling against the big guys, whoever, maybe whether it was or, you know, son, IBM and HP. Unix is in the early days. Oracle was our friend for a while. Then they became. Then they became a nen ime, you know, are not enemy but a competitor on the Lennox side. And the Amazon was early friend, and then, though they did their own limits. So there's a competitive, so that's that's normal operating model for us to us to have this, you know, big competitive dynamic with a partnering >> dynamic. You gotta win it in the marketplace that the customers say. Come on, guys. >> Right. We'Ll figure it out >> together, Figured out we talked earlier about hybrid cloud. We talked about multi cloud and some people those of the same thing. But I think they actually you know, different. Yeah, hybrid. You think of, you know, on prim and public and and hopefully some kind of level of integration and common data. Plain and control plan and multi cloud is sort of evolved from multi vendor. How do you guys look at it? Is multi cloud a strategy? How do you look at hybrid? >> Yeah, I mean, it's it's it's a simple It's simple in my mind, but I know the words. The terms get used by a lot of different people in different ways. You know, hybrid Cloud to me is just is just that straightforward. 
Being able to run something on premises, being able to run something in any public cloud, and have it be somewhat consistent or shareable or movable. And then multi-cloud is being able to do that same thing with multiple public clouds. And then there's a third variation on that, which is wanting to do an application that runs in both and shares information, which I think, you know, you saw in the Google Anthos announcement, where they're talking about their service running on the other two major public clouds. That's the first from any sizable company, and I think that's going to become the norm, because wherever the infrastructure is that a customer's using, if Google has a great service, they want to be able to tell the user to run it at the data center of their choice. >> So, yeah, you brought up Anthos, and at the core it's GKE. So it's the Kubernetes we've been talking about, and, as you said, it works with AWS, works with Azure. But it's GKE on top of those public clouds. Maybe give us a little bit of a compare and contrast with OpenShift. OpenShift lives in all of these environments too, but they're not fully compatible. How does that work? >> So, on Anthos, which was announced yesterday, two high-level comments. I guess one is, as we talked about at the beginning, it's a validation of what our message has been: that hybrid cloud is of value, multi-cloud is of value. So that's a productive element, to help promote that vision and that concept. Also, at a macro level, it puts us in a more competitive environment with Google than we were in yesterday or two days ago. But again, that's our normal world. We partnered with IBM and HP and competed against them on Unix. We partnered with Microsoft and compete with them. So that's normal.
That said, you know, we believe, with OpenShift having five-plus years in market and over a thousand customers and very wide deployments, already running in Google's, Amazon's, and Microsoft's clouds, already there and solid, with people doing real things with it, plus being in the position of an independent software vendor, that's a more valuable position for multi-cloud than a single cloud vendor's. So, you know, we say welcome to the party, in a sense. And going on-prem, I say welcome to the jungle for all these public cloud companies. Going on-prem is, you know, a lot of complexity, when you have to deal with, you know, American Express's infrastructure, Bank of Hong Kong's infrastructure, Ford Motors' infrastructure, and it's a... >> Right, right. You know, Google before only had to run on Google servers in a Google data center. Everything's a very clean environment, one temperature. >> And enterprise customers have a little different demands in terms of versions and when they upgrade and how long they let things sit, so there's a lot of differences. >> Actually, that was one of the things Cory Quinn was doing some analysis with us on. And Google, for the most part, is, if we decide to pull something, you've got kind of a one-year window, you know? How does Red Hat look at that? >> I mean, my guess is they'll evolve over time as they get deeper into it. Or maybe they won't. Maybe they have a model where they think they will gain enough share with theirs. But, I mean, we were built on enterprise DNA, and we've evolved to cloud and hybrid multi-cloud DNA. We love, again, we love when people say I'm going to the cloud, because when they say they're going to the cloud, it means they're doing new apps or they're modifying old apps. And we have a great shot at landing that business when they say we're doing something new.
>> Well, right, right. Whether it's on-prem or in the public cloud, right? When they say they'll go to the cloud, they talk about the cloud experience, right? And that's really what your strategy is: to bring that cloud experience to wherever your data lives. >> Exactly. >> So talking about that multi-cloud or omni-cloud, when we sort of look at the horses on the track, you say, okay, you've got VMware going after that, you've got, you know, IBM and Red Hat going after that, and now Google, a huge cloud provider, you know, doing that. Wherever you look, there's Red Hat. Now, of course, I know you can't talk much about the IBM, you know, integration, but an IBM executive once said to me, Stu, that we're like a recovering alcoholic: we learned our lesson from mainframe, we are open, we're committed to open. So we'll see. But Red Hat is everywhere, and your strategy presumably has to stay sort of open and neutral going forward. >> I'll give you a couple examples from long ago, probably five, six years ago, when the cloud stuff was still more early. I had two CEO conference calls in one day, and one was with a big graphics, you know, Hollywood graphics company's CEO. After we explained all of our cloud stuff, you know, we had nine people on the call explaining all our cloud, and the guy said, okay, let me just tell you something. The biggest value you bring to me is having RHEL as my single point of sanity, so that I can move this stuff wherever I want. I just attach all my applications, I attach third-party apps and everything, and then I can move it wherever I want. So that's your big value. And I still think that's true. And then there was another large gaming company that was trying to decide how to move forty thousand servers from their own cloud to a public cloud. And they had...
They had, you know, the head of servers, the head of security, the head of databases, the head of networking, the heads of nine different functions there, and they were all in disagreement at the end. And the CEO said at the end of the day, Mike, I've got, like, a headache. I need some vodka and Tylenol now. So give me one simple piece of advice: how do I navigate this? I said, just write every app to RHEL and JBoss, and this was before OpenShift, and no matter >> where you want >> to run them, RHEL and JBoss will be there. And he said, excellent advice. That's what we're doing. So there's something really beautiful about the simplicity of that that a lot of people overlook, with all the hand-waving of Kubernetes and containers and fifty versions of Kubernetes certified and, you know, etcetera. So I think there's something really beautiful about that. We see a lot of value in that single point of sanity and allowing people flexibility at, you know, a pretty low cost, using RHEL as your foundation. >> Open source, hybrid cloud, multi-cloud, omni-cloud, all tailwinds for Red Hat. Mike, we'll give you the final word, your bumper sticker on Google Cloud Next or any other final thoughts. >> To me, it's great to see thirty thousand people at this event. It's great to see Google getting more and more invested in the cloud and more and more invested in the enterprise. I think they've had great success in a lot of non-enterprise accounts, probably more so than the other clouds, and now they're coming this way. They've got great technology. Our engineers love working with their engineers, and now we've got a more competitive dynamic. And like I said, welcome to the jungle. >> We've got Red Hat Summit coming up, Stu. Early May, is it? >> Absolutely, back in Beantown, Dave. >> Nice. Okay, I'll be in London then, >> right at Summit in Boston in May. >> Good deal. Mike, thanks very much for coming. Thank you.
It's great to see you. >> Good to see you. >> All right, everybody, keep it right there. Stu and I will be back. John Furrier is also in the house watching theCUBE. Google Cloud Next 2019, we'll be right back.

Published Date : Apr 10 2019

