
Search Results for gilbert:

Bob Muglia, George Gilbert & Tristan Handy | How Supercloud will Support a new Class of Data Apps

(upbeat music) >> Hello, everybody. This is Dave Vellante. Welcome back to Supercloud2, where we're exploring the intersection of data analytics and the future of cloud. In this segment, we're going to look at how the Supercloud will support a new class of applications, not just workloads that run on multiple clouds, but rather a new breed of apps that can orchestrate things in the real world. Think Uber for many types of businesses. These applications, they're not about codifying forms or business processes. They're about orchestrating people, places, and things in a business ecosystem. And I'm pleased to welcome my colleague and friend, George Gilbert, former Gartner analyst, Wikibon market analyst, former equities analyst, as my co-host. And we're thrilled to have Tristan Handy, who's the founder and CEO of DBT Labs, and Bob Muglia, who's the former President of Microsoft's Enterprise business and former CEO of Snowflake. Welcome all, gentlemen. Thank you for coming on the program. >> Good to be here. >> Thanks for having us. >> Hey, look, I'm going to start actually with the SuperCloud because both Tristan and Bob, you've read the definition. Thank you for doing that. And Bob, you have some really good input, some thoughts on maybe some of the drawbacks and how we can advance this. So what are your thoughts in reading that definition around SuperCloud? >> Well, I thought first of all that you did a very good job of laying out all of the characteristics of it and helping to define it overall. But I do think it can be tightened a bit, and I think it's helpful to do it in as short a way as possible. And so in the last day I've spent a little time thinking about how to take it and write a crisp definition. And here's my go at it. This is one day old, so gimme a break if it's going to change. And of course we have to follow the industry, and so that, and whatever the industry decides, but let's give this a try. So in the way I think you're defining it, what I would say is a SuperCloud is a platform that provides programmatically consistent services hosted on heterogeneous cloud providers. >> Boom. Nice. Okay, great. I'm going to go back and read the script on that one and tighten that up a bit. Thank you for spending the time thinking about that. Tristan, would you add anything to that or what are your thoughts on the whole SuperCloud concept? >> So as I read through this, I fully realize that we need a word for this thing because I have experienced the inability to talk about it as well. But for many of us who have been living in the Confluent, Snowflake, you know, this world of like new infrastructure, this seems fairly uncontroversial. Like I read through this, and I'm just like, yeah, this is like the world I've been living in for years now. And I noticed that you called out Snowflake for being an example of this, but I think that there are like many folks, myself included, for whom this world like fully exists today. >> Yeah, I think that's a fair, I dunno if it's criticism, but people observe, well, what's the big deal here? It's just kind of what we're living in today. It reminds me of, you know, Tim Berners-Lee saying, well, this is what the internet was supposed to be. It was supposed to be Web 2.0, so maybe this is what multi-cloud was supposed to be. Let's turn our attention to apps. Bob first and then go to Tristan. Bob, what are data apps to you? When people talk about data products, is that what they mean? Are we talking about something more, different? What are data apps to you?
>> Well, to understand data apps, it's useful to contrast them to something, and I just use the simple term people apps. I know that's a little bit awkward, but it's clear. And almost everything we work with, almost every application that we're familiar with, be it email or Salesforce or any consumer app, those are applications that are targeted at responding to people. You know, in contrast, a data application reacts to changes in data and uses some set of analytic services to autonomously take action. So where applications that we're familiar with respond to people, data apps respond to changes in data. And they both do something, but they do it for different reasons. >> Got it. You know, George, you and I were talking about, you know, it comes back to SuperCloud, broad definition, narrow definition. Tristan, how do you see it? Do you see it the same way? Do you have a different take on data apps? >> Oh, geez. This is like a conversation that I don't know has an end. It's like been, I write a substack, and there's like this little community of people who all write substacks. We argue with each other about these kinds of things. Like, you know, as many different takes on this question as you can find, but the way that I think about it is that data products are atomic units of functionality that are fundamentally data driven in nature. So a data product can be as simple as an interactive dashboard that has like actually had design thinking put into it and serves a particular user group and has like actually gone through kind of a product development life cycle. And then a data app or data application is a kind of cohesive end-to-end experience that often encompasses like many different data products. So from my perspective there, this is very, very related to the way that these things are produced, the kinds of experiences that they're provided, that like data innovates every product that we've been building in, you know, software engineering for, you know, as long as there have been computers. >> You know, Zhamak Dehghani oftentimes uses the, you know, she doesn't name Spotify, but I think it's Spotify as that kind of example she uses. But I wonder if we can maybe try to take some examples. If you take, like George, if you take a CRM system today, you're inputting leads, you got opportunities, it's driven by humans, they're really inputting the data, and then you got this system that kind of orchestrates the business process, like runs a forecast. But in this data driven future, are we talking about the app itself pulling data in and automatically looking at data from the transaction systems, the call center, the supply chain and then actually building a plan? George, is that how you see it? >> I go back to the example of Uber, may not be the most sophisticated data app that we build now, but it was like one of the first where you do have users interacting with their devices as riders trying to call a car or driver. But the app then looks at the location of all the drivers in proximity, and it matches a driver to a rider. It calculates an ETA to the rider. It calculates an ETA then to the destination, and it calculates a price. Those are all activities that are done sort of autonomously that don't require a human to type something into a form. The application is using changes in data to calculate an analytic product and then to operationalize that, to assign the driver, to, you know, calculate a price. Those are, that's an example of what I would think of as a data app.
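The Uber example is concrete enough to sketch in code. What follows is a hypothetical, much-simplified version of that matching loop in Python: the trigger is a change in data (a new ride request plus current driver locations), not a person filling in a form, and the outputs (a match, ETAs, a price) are computed autonomously. All names, coordinates, and pricing constants here are invented for illustration.

```python
import math
from dataclasses import dataclass

@dataclass
class Driver:
    driver_id: str
    lat: float
    lon: float

def distance_km(lat1, lon1, lat2, lon2):
    # Crude equirectangular approximation; good enough for a toy example.
    dx = (lon2 - lon1) * 111.3 * math.cos(math.radians((lat1 + lat2) / 2))
    dy = (lat2 - lat1) * 111.3
    return math.hypot(dx, dy)

def on_ride_requested(rider, dest, drivers, speed_kmh=30.0,
                      base_fare=2.50, per_km=1.20):
    """React to a data change (a new ride request) with no human in the
    loop: match the nearest driver, then derive ETAs and a price."""
    driver = min(drivers, key=lambda d: distance_km(d.lat, d.lon, *rider))
    pickup_km = distance_km(driver.lat, driver.lon, *rider)
    trip_km = distance_km(*rider, *dest)
    return {
        "driver": driver.driver_id,
        "eta_to_rider_min": round(60 * pickup_km / speed_kmh, 1),
        "eta_to_dest_min": round(60 * (pickup_km + trip_km) / speed_kmh, 1),
        "price": round(base_fare + per_km * trip_km, 2),
    }

drivers = [Driver("d1", 37.77, -122.42), Driver("d2", 37.80, -122.27)]
print(on_ride_requested((37.78, -122.41), (37.76, -122.39), drivers))
```

The point is the shape, not the math: a data app subscribes to data and emits decisions, where a people app would render a form and wait.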
And my question then I guess for Tristan is if we don't have all the pieces in place for sort of mainstream companies to build those sorts of apps easily yet, like how would we get started? What's the role of a semantic layer in making that easier for mainstream companies to build? And how do we get started, you know, say with metrics? How does that, how does that take us down that path? >> So what we've seen in the past, I dunno, decade or so, is that one of the most successful business models in infrastructure is taking hard things and rolling 'em up behind APIs. You take messaging, you take payments, and you all of a sudden increase the capability of kind of your median application developer. And you say, you know, previously you were spending all your time being focused on how do you accept credit cards, how do you send SMS messages, and now you can focus on your business logic, and just create the thing. One of, interestingly, one of the things that we still don't know how to API-ify is concepts that live inside of your data warehouse, inside of your data lake. These are core concepts that, you know, you would imagine that the business would be able to create applications around very easily, but in fact that's not the case. It's actually quite challenging, and involves a lot of data engineering and pipeline work to make these available. And so if you really want to make it very easy to create some of these data experiences for users, you need to have an ability to describe these metrics and then to turn them into APIs to make them accessible to application developers who have literally no idea how they're calculated behind the scenes, and they don't need to. >> So how rich can that API layer grow if you start with metric definitions that you've defined? And DBT has, you know, the metric, the dimensions, the time grain, things like that, that's a well scoped sort of API that people can work within. How much can you extend that to say non-calculated business rules or governance information like data reliability rules, things like that, or even, you know, features for an AI/ML feature store. In other words, it starts, you started pragmatically, but how far can you grow? >> Bob is waiting with bated breath to answer this question. I'm, just really quickly, I think that we as a company and DBT as a product tend to be very pragmatic. We try to release the simplest possible version of a thing, get it out there, and see if people use it. But the idea that, the concept of a metric is really just a first landing pad. Really, there is a physical manifestation of the data and then there's a logical manifestation of the data. And what we're trying to do here is make it very easy to access the logical manifestation of the data, and a metric is one way to look at that. Maybe an entity, a customer, a user is another way to look at that. And I'm sure that there will be more kinds of logical structures as well. >> So, Bob, chime in on this. You know, what's your thoughts on the right architecture behind this, and how do we get there? >> Yeah, well first of all, I think one of the ways we get there is by what companies like DBT Labs and Tristan are doing, which is incrementally taking and building on the modern data stack and extending that to add a semantic layer that describes the data. Now the way I tend to think about this is as a fairly major shift in the way we think about writing applications, which is, today, a code-first approach, moving to a world that is model driven.
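A rough sketch of the metric-to-API idea Tristan describes. To be clear, this is not dbt's actual semantic-layer spec or code; it is a hypothetical, minimal illustration of how a declared metric (name, aggregation, dimensions, time grain) can compile to SQL behind an API, so the application developer asking for revenue by week never sees how it is calculated. The table and field names are invented.

```python
from dataclasses import dataclass, field

@dataclass
class Metric:
    """A logical metric declared once, queried many times."""
    name: str
    expr: str               # expression over the underlying model
    model: str              # table or view the metric is defined on
    agg: str                # e.g. "sum", "count", "avg"
    time_column: str
    dimensions: list = field(default_factory=list)

def compile_metric(metric, grain="day", dims=(), where=None):
    """Turn a logical request into physical SQL; warehouse details stay hidden."""
    unknown = [d for d in dims if d not in metric.dimensions]
    if unknown:
        raise ValueError(f"undeclared dimensions: {unknown}")
    select = [f"date_trunc('{grain}', {metric.time_column}) AS period"]
    select += list(dims)
    select.append(f"{metric.agg}({metric.expr}) AS {metric.name}")
    sql = f"SELECT {', '.join(select)} FROM {metric.model}"
    if where:
        sql += f" WHERE {where}"
    group_by = ", ".join(str(i) for i in range(1, len(dims) + 2))
    return f"{sql} GROUP BY {group_by} ORDER BY period"

revenue = Metric(name="revenue", expr="amount", model="analytics.orders",
                 agg="sum", time_column="ordered_at",
                 dimensions=["region", "plan"])

# An application asks a question, not a query:
print(compile_metric(revenue, grain="week", dims=["region"]))
```

Wrap a function like this behind an HTTP endpoint and you have the same take-something-hard-and-roll-it-up-behind-an-API pattern Tristan describes for messaging and payments.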
And I think that's what the big change will be, is that where today we think about data, we think about writing code, and we use that to produce APIs as Tristan said, which encapsulates those things together in some form of services that are useful for organizations. And that idea of that encapsulation is never going to go away. It's very, that concept of an API is incredibly useful and will exist well into the future. But what I think will happen is that in the next 10 years, we're going to move to a world where organizations are defining models first of their data, but then ultimately of their business process, their entire business process. Now the concept of a model driven world is a very old concept. I mean, I first started thinking about this and playing around with some early model driven tools, probably before Tristan was born, in the early 1980s. And those tools didn't work because the semantics associated with executing the model were too complex to be written in anything other than a procedural language. We're now reaching a time where that is changing, and you see it everywhere. You see it first of all in the world of machine learning and machine learning models, which are taking over more and more of what applications are doing. And I think that's an incredibly important step. And learned models are an important part of what people will do. But if you look at the world today, I will claim that we've always been modeling. Modeling has existed in computers since there have been integrated circuits and any form of computers. But what we do is what I would call implicit modeling, which means that the model is written on a whiteboard. It's in a bunch of Slack messages. It's on a set of napkins, in conversations that happen during Zoom. That's where the model gets defined today. It's implicit. There is one in the system. It is hard coded inside application logic that exists across many applications with humans being the glue that connects those models together. And really there is no central place you can go to understand the full attributes of the business, all of the business rules, all of the business logic, the business data. That's going to change in the next 10 years. And we'll start to have a world where we can define models about what we're doing. Now in the short run, the most important models to build are data models, and to describe all of the attributes of the data and their relationships. And that's work that DBT Labs is doing. A number of other companies are doing that. We're taking steps along that way with catalogs. People are trying to build more complete ontologies associated with that. The underlying infrastructure is still super, super nascent. But what I think we'll see is this infrastructure that exists today that's building learned models in the form of machine learning programs. You know, some of these incredible machine learning programs in foundation models like GPT and DALL-E and all of the things that are happening in these global scale models, but also all of that needs to get applied to the domains that are appropriate for a business. And I think we'll see the infrastructure developing for that, that can take this concept of learned models and put it together with more explicitly defined models. And this is where the concept of knowledge graphs comes in, and then the technology that underlies that to actually implement and execute that, which I believe are relational knowledge graphs. >> Oh, oh wow. There's a lot to unpack there.
So let me ask the Columbo question, Tristan, we've been making fun of your youth. We're just, we're just jealous. Columbo, I'll explain it offline maybe. >> I watch Columbo. >> Okay. All right, good. So but today if you think about the application stack and the data stack, which is largely an analytics pipeline, they're separate. Do they, those worlds, do they have to come together in order to achieve Bob's vision? When I talk to practitioners about that, they're like, well, I don't want to complexify the application stack 'cause the data stack today is so, you know, hard to manage. But, but do those worlds have to come together? And you know, through that model, I guess abstraction or translation that Bob was just describing, how do you guys think about that? Who wants to take that? >> I think it's inevitable that data and AI are going to become closer together. I think that the infrastructure there has been moving in that direction for a long time, whether you want to use the Lakehouse portmanteau or not. There's also, there's a next generation of data tech that is still in the like early stage of being developed. There's a company that I love that is essentially Cross Cloud Lambda, and it's just a wonderful abstraction for computing. So I think that, you know, people have been predicting that these worlds are going to come together for a while. A16Z wrote a great post on this back in, I think, 2020, predicting this, and I've been predicting this since 2020. But what's not clear is the timeline, but I think that this is still just as inevitable as it's been. >> Who's that that does Cross Cloud? >> Let me follow up on. >> Who's that, Tristan, that does Cross Cloud Lambda? Can you name names? >> Oh, they're called Modal Labs. >> Modal Labs, yeah, of course. All right, go ahead, George. >> Let me ask about this vision of trying to put the semantics or the code that represents the business with the data. It gets us to a world that's sort of more data centric, where data's not locked inside or behind the APIs of different applications so that we don't have silos. But at the same time, Bob, I've heard you talk about building the semantics gradually on top of, into a knowledge graph that maybe grows out of a data catalog. And in the vision of getting to that point, essentially the enterprise's metadata and then the semantics you're going to add onto it are really stored in something that's separate from the underlying operational and analytic data. So at the same time then, why couldn't we gradually build semantics beyond the metric definitions that DBT has today? In other words, you build more and more of the semantics in some layer that DBT defines and that sits above the data management layer, but any requests for data have to go through the DBT layer. Is that a workable alternative? Or where, what type of limitations would you face? >> Well, I think the way the world will evolve is to start with the modern data stack and, you know, which is operational applications going through a data pipeline into some form of data lake, data warehouse, the Lakehouse, whatever you want to call it. And then, you know, this wide variety of analytics services that are built together. To the point that Tristan made about machine learning and data coming together, you see that in every major data cloud provider. Snowflake certainly now supports Python and Java. Databricks is of course building their data warehouse.
Certainly Google, Microsoft and Amazon are doing very, very similar things in terms of building complete solutions that bring together an analytics stack that typically supports languages like Python together with the data stack and the data warehouse. I mean, all of those things are going to evolve, and they're not going to go away because that infrastructure is relatively new. It's just being deployed by companies, and it solves the problem of working with petabytes of data if you need to work with petabytes of data, and nothing will do that for a long time. What's missing is a layer that understands and can model the semantics of all of this. And if you need to, if you want to model all, if you want to talk about all the semantics of even data, you need to think about all of the relationships. You need to think about how these things connect together. And unfortunately, there really is no platform today. None of our existing platforms are ultimately sufficient for this. It was interesting, I was just talking to a customer yesterday, you know, a large financial organization that is building out these semantic layers. They're further along than many companies are. And you know, I asked what they're building it on, and you know, it's not surprising they're using a, they're using combinations of some form of search together with, you know, textual-based search, together with a document oriented database. In this case it was Cosmos. And that really is kind of the state of the art right now. And yet those products were not built for this. They don't really, they can't manage the complicated relationships that are required. They can't issue the queries that are required. And so a new generation of database needs to be developed. And fortunately, you know, that is happening. The world is developing a new set of relational algorithms that will be able to work with hundreds of different relations. If you look at a SQL database like Snowflake or BigQuery, you know, you get tens of different joins coming together, and that query is going to take a really long time. Well, fortunately, technology is evolving, and it's possible with new join algorithms, worst-case optimal join algorithms, they're called, where you can join hundreds of different relations together and run semantic queries that you simply couldn't run. Now that technology is nascent, but it's really important, and I think that will be a requirement to have this semantic layer reach its full potential. In the meantime, Tristan can do a lot of great things by building up on what he's got today and solve some problems that are very real. But in the long run I think we'll see a new set of databases to support these models. >> So Tristan, you got to respond to that, right? You got to, so take the example of Snowflake. We know it doesn't deal well with complex joins, but they're, they've got big aspirations. They're building an ecosystem to really solve some of these problems. Tristan, you guys are part of that ecosystem, and others, but please, your thoughts on what Bob just shared. >> Bob, I'm curious if, I would have no idea what you were talking about except that you introduced me to somebody who gave me a demo of a thing, and do you not want to go there right now? >> No, I can talk about it. I mean, we can talk about it.
Look, the company I've been working with is Relational AI, and they're doing this work to actually first of all work across the industry with academics and research, you know, across many, many different, over 20 different research institutions across the world to develop this new set of algorithms. They're all fully published, just like SQL, the underlying algorithms that are used by SQL databases are. If you look today, every single SQL database uses a similar set of relational algorithms underneath that. And those algorithms actually go back to System R and what IBM developed in the 1970s. We're just, there's an opportunity for us to build something new that allows you to take, for example, instead of taking data and grouping it together in tables, treat all data as individual relations, you know, a key and a set of values, and then be able to perform purely relational operations on it. If you go back to what, to Codd, and what he wrote, he defined two things. He defined a relational calculus and a relational algebra. And essentially SQL is a query language that is translated by the query processor into relational algebra. However, the calculus of SQL is not even close to the full semantics of the relational mathematics. And it's possible to have systems that can do everything and that can store all of the attributes of the data model or ultimately the business model in a form that is much more natural to work with. >> So here's like my short answer to this. I think that we're dealing in different time scales. I think that there is actually a tremendous amount of work to do in the semantic layer using the kind of technology that we have on the ground today. And I think that there's, I don't know, let's say five years of like really solid work that there is to do for the entire industry, if not more. But the wonderful thing about DBT is that it's independent of what the compute substrate is beneath it. And so if we develop new platforms, new capabilities to describe semantic models in more fine-grained detail, more procedural, then we're going to support that too. And so I'm excited about all of it. >> Yeah, so interpreting that short answer, you're basically saying, 'cause Bob was just kind of pointing to you as incremental, but you're saying, yeah, okay, we're applying it for incremental use cases today, but we can accommodate a much broader set of examples in the future. Is that correct, Tristan? >> I think you're using the word incremental as if it's not good, but I think that incremental is great. We have always been about applying incremental improvement on top of what exists today, but allowing practitioners to like use different workflows to actually make use of that technology. So yeah, yeah, we are a very incremental company. We're going to continue being that way. >> Well, I think Bob was using incremental as a pejorative. I mean, I, but to your point, a lot. >> No, I don't think so. I want to stop that. No, I don't think it's pejorative at all. I think incremental, incremental is usually the most successful path. >> Yes, of course. >> In my experience. >> We agree, we agree on that. >> Having tried many, many moonshot things in my Microsoft days, I can tell you that being incremental is a good thing. And I'm a very big believer that that's the way the world's going to go. I just think that there is a need for us to build something new and that ultimately that will be the solution.
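The join discussion here is concrete enough to sketch. Worst-case optimal join algorithms (the family Bob mentions; published examples include Leapfrog Triejoin and "generic join") bind one query variable at a time, intersecting what every relation allows, instead of joining relations pairwise. The toy below runs the classic triangle query over three tiny binary relations. The data is invented, and this is an illustration of the variable-at-a-time idea only, not Relational AI's implementation.

```python
from collections import defaultdict

# Triangle query: find (a, b, c) with R(a, b), S(b, c), T(a, c).
R = {(0, 1), (0, 2), (1, 2)}
S = {(1, 2), (2, 3), (2, 0)}
T = {(0, 2), (0, 3), (1, 3)}

def index(rel):
    """Index a binary relation by its first attribute."""
    idx = defaultdict(set)
    for x, y in rel:
        idx[x].add(y)
    return idx

def triangles(R, S, T):
    R_a, S_b, T_a = index(R), index(S), index(T)
    out = []
    # Bind variables one at a time, intersecting candidate sets.
    for a in R_a.keys() & T_a.keys():      # a must occur in R and T
        for b in R_a[a] & S_b.keys():      # b consistent with R(a,b) and S(b,.)
            for c in S_b[b] & T_a[a]:      # c consistent with S(b,c) and T(a,c)
                out.append((a, b, c))
    return out

print(triangles(R, S, T))  # -> [(0, 1, 2), (0, 2, 3), (1, 2, 3)] in some order
```

The win is asymptotic: pairwise join plans can build intermediate results far larger than the final answer, while the variable-at-a-time strategy is provably bounded by the worst-case output size, which is what makes queries over hundreds of relations thinkable.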
Now you can argue whether it's two years, three years, five years, or 10 years, but I'd be shocked if it didn't happen in 10 years. >> Yeah, so we all agree that incremental is less disruptive. Boom, but Tristan, you're, I think I'm inferring that you believe you have the architecture to accommodate Bob's vision, and then Bob, and I'm inferring from Bob's comments that maybe you don't think that's the case, but please. >> No, no, no. I think that, so Bob, let me put words into your mouth and you tell me if you disagree, DBT is completely useless in a world where a large scale cloud data warehouse doesn't exist. We were not able to bring the power of Python to our users until these platforms started supporting Python. Like DBT is a layer on top of large scale computing platforms. And to the extent that those platforms extend their functionality to bring more capabilities, we will also service those capabilities. >> Let me try and bridge the two. >> Yeah, yeah, so Bob, Bob, Bob, do you concur with what Tristan just said? >> Absolutely, I mean there's nothing to argue with in what Tristan just said. >> I wanted. >> And it's what he's doing. It'll continue to, I believe he'll continue to do it, and I think it's a very good thing for the industry. You know, I'm just simply saying that on top of that, I would like to provide Tristan and all of those who are following similar paths to him with a new type of database that can actually solve these problems in a much more architected way. And when I talk about Cosmos with something like Mongo, or Cosmos together with Elastic, you're using Elastic as the join engine, okay. That's the purpose of it. It becomes a poor man's join engine. And I kind of go, I know there's a better answer than that. I know there is, but that's kind of where we are, state of the art, right now. >> George, we got to wrap it. So give us the last word here. Go ahead, George. >> Okay, I just, I think there's a way to tie together what Tristan and Bob are both talking about, and I want them to validate it, which is for five years, or some number of years, we're going to be adding more and more semantics to the operational and analytic data that we have, starting with metric definitions. My question is for Bob, as DBT accumulates more and more of those semantics for different enterprises, can that layer not run on top of a relational knowledge graph? And what would we lose by having the knowledge graph store sort of the joins, all the complex relationships among the data, but having the semantics in the DBT layer? >> Well, I think this, okay, I think first of all that DBT will be an environment where many of these semantics are defined. The question we're asking is how are they stored and how are they processed? And what I predict will happen is that over time, as companies like DBT begin to build more and more richness into their semantic layer, they will begin to experience challenges, that customers want to run queries, they want to ask questions, they want to use this for things where the underlying infrastructure becomes an obstacle. I mean, this has happened always in history, right? I mean, you see major advances in computer science when the data model changes. And I think we're on the verge of a very significant change in the way data is stored and structured, or at least metadata is stored and structured. Again, I'm not saying that anytime in the next 10 years, SQL is going to go away.
In fact, more SQL will be written in the future than has been written in the past. And those platforms will mature to become the engines, the slicer-dicers of data. I mean that's what they are today. They're incredibly powerful at working with large amounts of data, and that infrastructure is maturing very rapidly. What is not maturing is the infrastructure to handle all of the metadata and the semantics that that requires. And that's where I say knowledge graphs are what I believe will be the solution to that. >> But Tristan, bring us home here. It sounds like, let me posit this: whatever happens in the future, we're going to leverage the vast system that has become cloud, that we're talking about as a supercloud, sort of where data lives irrespective of physical location. We're going to have to tap that data. It's not necessarily going to be in one place, but give us your final thoughts, please. >> 100% agree. I think that the data is going to live everywhere. It is the responsibility of both the metadata systems and the data processing engines themselves to make sure that we can join data across cloud providers, that we can join data across different physical regions, and that we as practitioners are going to kind of start forgetting about details like that. And we're going to start thinking more about how we want to arrange our teams, how does the tooling that we use support our team structures? And that's when data mesh, I think, really starts to get very, very critical as a concept. >> Guys, great conversation. It was really awesome to have you. I can't thank you enough for spending time with us. Really appreciate it. >> Thanks a lot. >> All right. This is Dave Vellante for George Gilbert, John Furrier, and the entire Cube community. Keep it right there for more content. You're watching SuperCloud2. (upbeat music)

Published Date : Jan 4 2023



Steve Kenniston, The Storage Alchemist & Tony Bryson, Town of Gilbert | Dell Technologies World 2022

>> theCUBE presents Dell Technologies World, brought to you by Dell. >> Welcome back to Dell Technologies World 2022. We're live in Vegas. Very happy to be here. Uh, this is theCUBE's multi-year coverage. This is year 13 of covering either, you know, EMC World or, uh, Dell World. And now of course, Dell Tech World. My name is Dave Vellante and I'm here with longtime CUBE alum and CUBE guest, Steve Kenniston, the Storage Alchemist, who's, uh, back at Dell, uh, in his data protection role. And Tony Bryson is the chief information security officer of the town of Gilbert in Arizona. Most, most towns don't have a CISO, but Tony, we're thrilled you're here to tell us that story. How did you become a CISO and how does the town of Gilbert have a CISO? >> Well, thank you for having me here. Uh, believe it or not, the town of Gilbert is actually the fourth largest municipality in Arizona. We service 281,000 citizens. So it's a fairly large enterprise. We're a billion dollar enterprise. And it got to the point where the, uh, cybersecurity concerns were at such a point that they elected to bring in their first chief information security officer. And I managed to, uh, be the lucky gentleman that got that particular position. >> That's awesome. And there's a, is there a CIO as well? Are you guys peers? How, what's the reporting structure look like? >> We have a chief technology officer. Okay. I report through his office, mm-hmm <affirmative>, and then he reports, uh, directly to the town executive. >> So you guys talk a lot, you, I'm sure you present a lot to the, to the board or wherever the governance structure is. >> Yeah, we do. I, I do quarterly report-outs to the, I report through to the town council. Uh, let them know exactly what our cybersecurity posture is like, the type of threats that we're facing. As a matter of fact, I have to do one when I return to, uh, Gilbert from this particular conference. So really looking forward to that one, cuz this is an interesting time to be in cybersecurity. >> So obviously a CISO, so Steve, is gonna say cyber's the number one priority, but I would say the CTO is gonna say the, say the same thing, I would say the board is gonna say the same thing. I would also say, Steve, that, uh, cyber and cyber resilience is probably the number one topic here at the show. When you walk around and you see the cyber demonstrations, the security demonstrations, they're packed. It's kind of your focus. Um, it's a good call. >> Yeah. <laugh> I'm the luckiest guy in storage, right? <laugh> Um, yeah, in the last 24 months, I don't think that there's been a, a meeting that I've been to with a customer, no matter who's in the room, where, uh, cyber resiliency, cybersecurity hasn't come up. I mean, it is, it is one of the hot topics. And last night, I mean, Michael was just here. Uh, Michael Dell was just here last night. He came onto the showroom floor, he came back, he took a look at what we were offering for cyber capabilities and was impressed. And, and so, so that's really good. >> Yeah. So I noticed, you know, when I talked to a lot of CIOs in particular, they would tell me that pre-pandemic, their cyber resiliency was very DR-focused, right? It really wasn't an organizational resilience. It was a, if there's an oh-crap moment, they could get it back in theory. And they sort of rethought that. Do you see that amongst your peers, Tony? >> I think so. I think that people are quickly starting to understand that you just can't focus on, on protecting yourself from something that you think may never happen. The reality is that you're likely to see some type of cyber event, so you better be prepared for it. And you protect yourself against that. So plan for resiliency, plan with making sure that you have the right people in place that can take that challenge on, because it's not a matter of if, it's a matter of when. >> I would imagine. Well, Steve, you and I have talked about this, that, you know, the data protection business used to be, we used to call it backup and recovery, and security, which is a whole different animal, but they're really starting to come together. It's kind of an adjacency. I, I know you've got this, uh, Maverick report that, that you want to talk about. What, what is that, new Gartner research? I, I'm not familiar with it. >> Yeah. So it's some very interesting Gartner research, and what I think, and I'd be curious to get Tony's take on it, especially after that last question, is, you know, a lot of people are, are spending a lot of money to keep the bad actors out. Right? And Gartner's philosophy on this whole, um, it's, it's, you're going to get hacked. So embrace the breach, that's their report. Right? So what they're suggesting is you're spending a lot of money, but, but we're witnessing a lot of attacks still coming in. Are you prepared to recover when it happens? Right? And so their philosophy is it's time to start thinking about the recovery aspects of, you know, if, if they're gonna get through, how do you handle that? Right? >> Well, so you got announcements this week, big one of the big four, I guess, or big five, cyber recovery vault. You're enhancing that, and you guys are talking about things like, you know, air gaps and so forth. Give us the overview of the news there. >> Yeah. So there's, uh, cyber recovery vault for AWS, for the cloud. There is, uh, a lot of stuff we're doing with, uh, cyber recovery vault for, uh, Azure also, right along with the CyberSense technology, which is the technology that scans the data once it comes in from the backup to ensure that it's clean and can be recovered, and you can feel confident that your recoveries look good, right? So now, now you can do that on-prem, or you can do it through a colo. You can do it in the cloud, or you can, uh, ask Dell Technologies with our APEX business services to help provide cyber recovery services wherever, for you at your colo, at your on-prem, or for you from the cloud. So it's kind of giving the customer, allowing them to keep that freedom of choice of how they want to operate, but provide them those same recovery capabilities. >> So Tony, give us, paint us a picture without giving away too much for the bad guys. How, how do you approach this? Maybe, are you using some of these products? What's your sort of infrastructure look like? >> Yeah. Without giving away the state secrets, um, we are heavily invested in the cyber recovery vault and CyberSense. Uh, it plays heavily in our strategy. We wanna make sure we have a safe harbor for our data. And that's something that, that the Dell PowerProtect cyber recovery vault provides to us. Uh, we're exceptionally excited about the, the development that's going on, especially with APEX. We're looking at that, and that has really captured our imagination. It could be a game changer for us as a town, because we're, we're a small organization transitioning to a midsize organization, and what APEX provides and what the Dell cyber recovery vault provides to us, putting those two together, gives us the elasticity we need as a small organization to expand quickly and deal with our internal data concerns. >> So cyber recovery as a service is what you're interested in. Let me ask you a question. Are you interested in a managed service, or are you interested in managing it yourself? >> That's a great question. Personally, I would prefer that we went with managed services. I think that from a manager's perspective, you get a bigger bang for the buck going with managed services. You have people that work with that technology all the time. You don't have to ramp people up and develop that expertise in house. You also then have that peace of mind that you have more people that are doing the services, and it acts as a force multiplier for you. So from a dollars and cents perspective, it's the way that you want to go. When I start talking to my internal people, of course, there's that, that sense of fear that comes with the unknown, and especially outsourcing that type of critical infrastructure, there's some concern there, but I think that with education, with exposure to some of the things that we get from the managed service, it makes sense for everybody to go that route. >> And, and you can, I presume, sort of POC it and then expand it and then get more comfortable with it, and then say, okay, when it's hardened and ready, now this is the, the de facto standard across the organization. >> I suspect we'll end up in a hybrid environment to begin with, where we'll have some assets on site, and then we'll have some assets in the cloud. And that's, again, where APEX will be that, that big linchpin for us and really make it all work. >> How important are air gaps? >> Oh, they're incredibly, incredibly, uh, needed right now. You cannot have true data security without having an air gap. A lot of the ransomware that we see moves laterally through your organization. So if you have, uh, all your data backed up in the same data center that your, your backups and your primary data sources are in, odds are they're all gonna get owned at the same time. So having that air gap solution in there is critical to having the peace of mind that allows the CISO to sleep at night. >> I always tell my crypto and NFT readers, this doesn't apply to data centers. You gotta air gap, air gap your crypto, you know, and your NFTs. So how do you guys, Steve, deal with, with air gap? Can you explain the solutions? >> So in the, in the cyber recovery vault itself, it is driven through, uh, you've got one, uh, PowerProtect, uh, appliance on one, one side in your data center, and then wherever your, your, your vaulted area is, whether it be a colo, whether it be on-prem, wherever it might be. Uh, we create a connection between, between the two that is one-directional, right? So we send the data to that vault. We call it the vault, and, you know, we replicate a copy of your backup data. Once it lives over there, we make a copy of that data. And then what we do is, with the CyberSense technology that Tony was talking about, we scan that data and we validate it, and the whole CyberSense is built on AI and machine learning. We look at a couple hundred different kinds of profiles that come through and compare it to the, to the day before's backup, and the day before that, and understand kind of what's changing. And is it changing the right way? Right? Like, there might be some reasons it's supposed to change that way. Right. But things that look anomalous, we send up a warning and we let the people know that, you know, whoever's monitoring, something's going on, you might want to take a look. And then based on that, whatever's happening in the environment, we have the ability to then recover that data back to the, to the original system. You can use the vault as a, as a clean room area, if you want to send people to it, depending on kind of what's going on in, in, in your main data center. So there's a lot of things we do to protect that. >> Do you recommend, like, changing the timing of when you take, you know, snapshots, or do you do the same time every day? It's gotta create different patterns, or... >> I'll tell you, that's, that's one thing to keep the, keep the hackers on their toes, right? It's tough to do operationally, right? Because you've kind of got to change processes. But, but the reality is, if you really are that, uh, concerned about attacks, that makes a lot of sense. >> Tony, what's the CISO's number one challenge today? >> Uh, it has to be resilience. It has to be making sure your organization, that if or when they get hit, that you're able to pick the pieces back up and get the operation back up as quickly and efficiently as possible. Making sure that the, the mission-critical data is immediately, uh, recoverable and able to be put back into play. >> And, and what's the biggest challenge or best practice in terms of doing that? Obviously the technology, the people, the process? >> Right now, I would probably say it's, it's people. Uh, we're going through, um, a period of, of uncertainty in the marketplace when it comes to trying to find people. So it is difficult to find the right people to do certain things, which is why managed services is so important to an organization of our size, and what we're trying to do, where we are, are incorporating such big ideas. We need those managed services because we just can't find the bodies that can do some of this work. >> You got an interesting background. You've got a PhD in psychology, you're an educator, you're a golf pro, and you're a CISO. I, I've never met anybody like you, Tony. <laugh> So, thanks for coming on. Steve, I'll give you the last word. >> Well, I think, I think one of the things that Tony said, and I wanted to parlay this a little bit, uh, from that Gartner report, it even talked about how people are so critical when it comes to cyber resiliency and that sort of thing. And one of the things I talked about in that embrace-the-breach report is, as you're looking to hire staff for your environment, right, you know, a lot of people might shy away from hiring that CISO that got fired because they had a cyber event. Right, right. Oh, maybe they didn't do their job. But the reality is, is those folks, because this is very new, I mean, of course we've been talking about cyber for a couple of years, but, but getting that experience under your belt and understanding what happens in the event, I mean, there are a lot of companies that run things like cyber ranges, resiliency ranges, to put people through the paces of, hey, this is what happens when an event happens, and are you prepared to respond? I think there's a big set of learning lessons that happens when you go through one of those events, and it helps kind of educate the people about what's needed. >> It's a great point. Failure used to mean fire, right, in this industry. And, and today it's different. The adversary is very well armed and quite capable and motivated. That learning, even when you fail, can be applied to succeed in the future, or not fail, I guess; there's no such thing as success in your business. Guys, thanks so much for coming on theCUBE. Really appreciate your time. >> Thank you. >> Thanks very much. >> All right. And thank you for watching theCUBE's coverage of Dell Tech World 2022. This is Dave Vellante. We'll be back with John Furrier, Lisa Martin and David Nicholson. Two days of wall-to-wall coverage left. Keep it with us.

Published Date : May 3 2022



Mazin Gilbert, AT&T | AT&T Spark 2018

>> From the Palace of Fine Arts in San Francisco, it's theCUBE! Covering AT&T Spark. (bubbly music) >> Hello! I'm Maribel Lopez, the Founder of Lopez Research, and I am here today at the AT&T Spark event in San Francisco, and I have the great pleasure and honor of interviewing Mazin Gilbert, who is the VP of Advanced Technology and Systems at AT&T. We've been talking a lot today, and welcome Mazin. >> Thank you, Maribel, for having us. >> We've been talking a lot today about 5G. 5G is like the first and foremost topic on a lot of people's minds that came to the event today, but I thought we might step back for those that aren't as familiar with 5G, and maybe we could do a little 5G 101 with Mazin. What's going on with 5G? Tell us about what 5G is and why it's so important to our future. >> 5G is not another G. (Maribel laughs) It really is transformational, and a revolution, really, not just to what we're doing as a company, but to society and humanity in general. It would really free us to be mobile, untethered, and to explore new experiences that we've never had before. >> Do I think of this as just faster 4G? Because we had 2G, then 3G, then 4G, is 5G something different? When you say it allows us to be mobile and untethered, don't we already have that? >> No we don't. There are a lot of experiences that are not possible to do today. So imagine having multiple teenagers experiencing virtual reality, augmented reality, all mobile, while they are in the car, all in different countries; we can't have that kind of an experience today. Imagine cars: as we move towards autonomous cars, we cannot do autonomous cars today without the intelligence, the speeds and the latency of 5G, so that all cars connect and talk to each other in a split of a second. >> See, I think that's one of the real benefits of this concept of 5G. So when you talk about 5G, 5G is yes more bandwidth, but also lower latency, and that's going to allow the things that you're talking about. I know that you also mention things such as telemedicine, and the FirstNet network, any other examples that you're seeing that you think are really going to add a difference to people's lives going forward as they look at 5G?
We've talked about it in the next generation of telemedicine, you mentioned some of the dangerous jobs that we'd be able to use drones for, not just for sort of hovering over peoples gorgeous monuments or other things that we've seen as the initial deployments, but something that's really meaningful. Now I know the other topic that has come up quite a bit, is this topic of opensource, and you're in the advanced technology group, and sometimes I think that people don't equate the concept of opensource with large established organizations, like an AT&T, but yet, you made the case that this was foundational and critical for your innovation, can you tell us a little bit more about that? >> Opensource is really part of our DNA. If you look at the inventions of the Unix, C, C++, all originated from AT&T Labs and Bell Labs, we've always been part of that opensource community. But really in the past five years, I think opensource has moved to a completely another level. Because now we're not just talking about opensource, we're talking about open platforms, we're talking about open APIs. What that means is that, we're now into-- >> A lot of open here. (laughs) >> Everything open in here. And what that really means is that we no longer as one company, no one company in the world can make it on their own. The world-- >> K, this is a big difference. >> It's a big difference. The world is getting smaller, and companies together, for us to really drive these transformational experiences companies need to collaborate and work with each other. And this is really what opensource is, is that, think of what we've done with our software-defined network, what we called ONAP in the opensource, we started as a one company, and there was another, one of the Chinese mobile companies also had a source code in there. In the past one year, we now have a hundred companies, some of the biggest brand companies, all collaborating to building open APIs. But why the opensource and open API is important, enables collaboration, expedite innovation, we've done more in the past one year than what we could've done alone for 10 years, and that's really the power of opensource and open platforms >> I totally agree with you on this one. One of the things that we've really seen happen is as newer companies, these theoretically innovative companies have come online, cloud native companies, they've been very big on this open proponent, but we're also seeing large established companies move in the same direction, and it's allowing every organization to have that deeply innovative, flexible architecture that allows them to build new services without things breaking, so I think it's very exciting to see the breadth of companies that you had on stage talking about this, and the breadth of companies that are now in that. And the other thing that's interesting about it is they're competitors as well, right? So, there's that little bit of a edgy coopetition that's happening, but it's interesting to see that everybody feels that there's room for intense innovation in that space as well. So we've talked a little bit about opensource, we've talked about 5G, you are in advanced technology, and I think we'd be remiss to not talk about the big two letter acronym that's in the room that's not 5G, which would be AI. Tell me what's going on with AI, how are you guys thinking about it, what advice do you have for other organizations that are approaching it? 
Because you are actually a huge developer of AI across your entire organization, so maybe you could tee up a little bit about how that works. >> AI is transformational and fundamental for AT&T. We have always developed AI solutions, and we were the first to deploy AI in call centers, 20 years ago. >> 20 years ago, really? >> 20 years ago. >> You were doing AI 20 years ago? >> 20 years ago. >> See, just goes to show. >> 20 years ago. I mean, AI really, if you go to the source of it, goes back to the '40s and '50s, with pioneers like Shannon and others. But the first deployment in a commercial call center, not a pilot, was really by AT&T. >> An actual implementation, yeah. >> With a service we called How May I Help You. And the reason we put that out was that our customers were annoyed with "press one for this" and "press two for billing"; they wanted to speak naturally. So we put out a system that says "How may I help you?", and it allowed customers to tell us in their own language, in their own words, what it is that they want from us, as opposed to us dictating to them what they have to say. Now today, it's very hard for you to call any company in the world without getting a service that uses some form of speech recognition or speech understanding. >> Thankfully. (laughs)

>> But where we're applying it today, and have been for the past two, three years, we're finding some really amazing opportunities that we had never imagined before. AI, in its essence, is nothing more than automation leveraging data. It's using your data as the oil, as the foundation, and driving automation, and that automation could be complete automation of a service, or it could be helping the human do their job better and faster. It could be helping a doctor find information about patients that they couldn't have found by themselves, by processing a million records all together. We're doing the same thing at AT&T. The network is the most complex project ever created on the planet. And it's a complex project that changes every second of the day, as people move around and try different devices. To be able to optimize that experience is really an AI problem. So we apply it today to everything from identifying where to build the next cell sites, all the way to choosing the right ad to show a customer, or making your life easier with our services without you having to call our call center: how do I diagnose and repair your set-top box before you call us? All of that foundation is really starting to be driven by AI technologies. Very exciting.

>> Well, I'm actually very excited to see where AI takes us, and I'm excited to hear about what you're doing in the future. Thanks for takin' the time to come here today. >> It's my pleasure. >> And be with us on theCUBE. Thank you. >> It's always a pleasure talking to you, thank you very much. >> I'm Maribel Lopez, closin' out at AT&T Spark. Thank you. (bubbly music)

Published Date : Sep 10 2018


Action Item Quick Take | George Gilbert - Feb 2018


 

(upbeat music) >> Hi, this is Peter Burris with another Wikibon Action Item Quick Take. George Gilbert, everybody's talking about AI, ML, DL, as though, like always, the stack's going to be highly disaggregated. What's really happening? >> Well, the key to going really mainstream is that we're going to have these embedded in the fabric of applications, high-volume applications. Right now, because it's so bespoke, it can only really be justified in strategic applications. And the key scarce resource is the data scientists. We can look at them as a new class of developer, but they're very different from the old class of developer; they need entirely different schooling and training and tools. So the closest we can get to the completely bespoke apps is at the big tech companies for the most part, or at tech-centric companies in adtech and fintech. Beyond that, when you're trying to get these out to more mainstream, but still sophisticated, customers, we have platforms like C3 or IBM's Watson IoT, where there are templates, but the vendor and the customer have to work together intensively. It's going to be a while before we see these as widespread components of legacy enterprise apps, because those apps are keeping the vendors and the customers busy trying to move them to the cloud. They were heavily customized, and you can't really embed machine learning in them while they're so customized, because the machine learning apps need a certain amount of data and evolve very quickly, whereas the systems of record are very, very rigid. >> All right, once again, thank you very much, George. This has been Peter Burris with another Wikibon Action Item Quick Take. (upbeat music)

Published Date : Feb 23 2018


Seth Myers, Demandbase | George Gilbert at HQ


 

>> This is George Gilbert. We're on the ground at Demandbase, the B2B CRM company based on AI, a very special company that's got some really unique technology. We have the privilege to be with Seth Myers today, Senior Data Scientist and resident wizard, who's going to take us on a journey through some of the technology Demandbase is built on, and some of the technology coming down the road. So Seth, welcome. >> Thank you very much for having me. >> So, we talked earlier with Aman Naimat, Senior VP of Technology, about some of the functionality in Demandbase, and how it's very flexible, reactive, and adaptive in helping guide, or react to, a customer's journey through the buying process. Tell us what that journey might look like, how it's different, the touchpoints, the participants, and then how your technology rationalizes it, because as we know, old CRM packages were really just lists of contact points. So this is something very different. How does it work?

>> Yeah, absolutely. At the highest level, each customer is going to be different; each customer is going to make decisions, look at different marketing collateral, and respond to different marketing collateral, in different ways. As companies get bigger and the products they're offering become more sophisticated, that's certainly the case, and also sales cycles take a long time. You're engaged with an opportunity over many months, so there are a lot of touchpoints, and there's a lot of planning that has to be done. That actually offers a huge opportunity to be solved with AI, especially in light of recent developments in this thing called reinforcement learning. Reinforcement learning is basically machine learning that can think strategically; it can plan ahead over a series of decisions, and it's the technology behind AlphaGo, the Google system that beat the best Go players in the world. What we basically do is say, "Okay, if we understand your customer, we understand the company they work at, and we understand the things they've been researching elsewhere on third-party sites, then we can start to predict content they will be likely to engage with." But more importantly, we can start to predict content they're likely to engage with next, and after that, and after that. So our technology looks at all possible paths your potential customer can take, all the different content you could ever suggest to them, all the different routes they might follow, and it looks at the ones they're likely to follow, but also the ones likely to turn them into an opportunity. In the same way Google Maps considers all possible routes to get you from your office to home, we do the same, and we choose the one that's most likely to convert the opportunity, the same way Google chooses the quickest road home. >> Okay, nice. That's a great example, because people can picture that. But how do you know what's the best path? Is it based on learning from previous customers' journeys? >> Yes. >> And then, if you make a wrong guess, you sort of penalize the engine and say, "Pick the next best, what you thought was the next best path."
>> Absolutely. The nuts and bolts of how it works is that we start working with our clients, who have all this data about different customers and how they've engaged with different pieces of content throughout their journeys. What the machine learning model is really doing at any moment in time, given any customer at any stage of the opportunity they find themselves in, is asking: what piece of content are they likely to engage with next? That's based on historical training data, if you will. And once we make that decision on a step-by-step basis, we extrapolate, and we basically say, "Okay, if we showed them this page, or if they engaged with this material, what situation would we find them in at the next step, and then what would we recommend from there, and from there, and from there?" So it's really learning the right move to make at each step, and then extrapolating that all the way to the opportunity being closed. >> The picture in my mind is like Deep Blue, I think it was chess, where it would map out all the potential moves. >> Very similar, yeah. >> To the end game. >> Very similar idea. >> So, what about if you're trying to engage with a customer across different channels, and it's not just web content? How is that done? >> Well, that's something we're very excited about, and something we're currently starting to devote real resources to. Right now, we already have a product live that's focused on web content specifically, but yeah, we're working on a multi-channel solution, and we're all pretty excited about it. >> Okay, so obviously you can't talk too much about it. Can you tell us what channels it might touch? >> I might have to play my cards a little close to my chest on this one, but I'll just say we're excited. >> Alright. Well, I guess that means I'll have to come back. >> Please, please. >> So, tell us about the personalized conversations. Is the conversation just another way of saying, this is how we're personalizing the journey? Or is there more to it than that? >> Yeah, it really is about personalizing the journey, right? A lot of our clients now have very sophisticated marketing collateral, and a lot of time and energy has gone into developing content that different people find engaging, that positions products toward pain points, and all that. So there's so much low-hanging fruit in just organizing and leveraging all of this material, and actually forming the conversation through a series of journeys through that material.
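To make the step-by-step journey model Seth walked through concrete, here is a minimal sketch of next-best-content selection. The content names, the toy journeys, and the simple count-based transition model are all hypothetical stand-ins; Demandbase's production models are proprietary and far more sophisticated.

```python
# A minimal sketch of next-best-content selection over historical journeys.
from collections import defaultdict

# Hypothetical journeys: sequences of content IDs ending in "won" or "lost".
journeys = [
    ["whitepaper", "webinar", "pricing", "won"],
    ["whitepaper", "case_study", "lost"],
    ["case_study", "pricing", "won"],
]

# Count observed transitions to estimate P(next step | current step).
transitions = defaultdict(lambda: defaultdict(int))
for path in journeys:
    for cur, nxt in zip(path, path[1:]):
        transitions[cur][nxt] += 1

def win_probability(content, depth=3):
    """Score content by how often journeys passing through it end in a win."""
    if content == "won":
        return 1.0
    if content == "lost" or depth == 0:
        return 0.0
    nxts = transitions[content]
    total = sum(nxts.values())
    if total == 0:
        return 0.0
    # Expected value over observed continuations: the "all possible routes" idea.
    return sum(count / total * win_probability(step, depth - 1)
               for step, count in nxts.items())

def next_best_content(current):
    """Pick the next piece of content most likely to lead to a closed deal."""
    candidates = transitions[current]
    return max(candidates, key=win_probability) if candidates else None

print(next_best_content("whitepaper"))  # "webinar" on this toy data
```

On this toy data the engine prefers the webinar over the case study, because the only observed path through the case study from this point loses half the time; that is the "penalize and pick the next best path" behavior in miniature.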
>> Okay, so, Aman was telling us earlier that we have so many algorithms, and they're all open source or published, and they're only as good as the data you can apply them to. So tell us: where do companies, startups, you know, not the Googles, Microsofts, and Amazons, where do they get their proprietary information? Is it that you have algorithms so advanced that you can refine raw information into proprietary information that others don't have? >> Really, I think it comes down to the source of our data; that's largely where our competitive advantage is. Yes, you can build more and more sophisticated algorithms, but if you're starting with a public data set, you'll be able to derive some insights, yet there will always be a path to those datasets for, say, a competitor. For example, we're currently tracking about 700 billion web interactions a year, and we're able to attribute those web interactions to companies, meaning the employees at those companies involved in those interactions. That gives us an insight that no amount of public data or processing would ever be able to achieve. >> Aman started to talk to us about how there were reverse DNS registries. >> Reverse IP lookups, yes. >> Yeah, so if they're individuals within companies, and then the companies themselves, how do you identify them reliably? >> Right, so reverse IP lookup is something we've been doing for years now, and we've developed a multi-source solution; reverse IP lookups are a big part of it. There's also machine learning: you can look at the traffic coming from an IP address and start to make some very informed decisions about what that IP address is actually doing and who they are. If you're looking at the account level, which is the level we track at, there's a lot of information to be gleaned from that kind of data. >> Sort of the way, and this may be a weird-sounding analogy, but the way a virus or some piece of malware has a signature in terms of its behavior, you find signatures in terms of users associated with an IP address. >> And we certainly don't de-anonymize individual users, but if we're looking at things at the account level, then the bigger the data, the more signal you can infer. So if we're looking at company-wide usage of an IP address, you can start to make some very educated guesses about who that company is, the things they're researching, and what they're in market for. >> And how do you find out, if they're not coming to your site, and they're not coming to one of your customers' sites, how do you find out what they're touching? >> Right, I can't really go into too much detail, but a lot of it comes from working with publishers, and a lot of this data is just raw. It's only because we can identify the companies behind these IP addresses that we're able to turn these web interactions into insights about specific companies. >> George: Sort of like how advertisers or publishers would track visitors across many, many sites, by having agreements. >> Yes, along those lines, yeah.
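One of the building blocks Seth mentions, the reverse IP lookup, can be illustrated in a few lines. This is only the naive public mechanism; Demandbase's actual IP-to-account mapping is proprietary and multi-source, and as Seth notes, many ISPs mask this signal in practice.

```python
# The naive public building block: reverse DNS on an IP address.
import socket

def company_hint_from_ip(ip_address):
    """Try a reverse DNS (PTR) lookup; the hostname sometimes hints at the owner."""
    try:
        hostname, _aliases, _addresses = socket.gethostbyaddr(ip_address)
        return hostname  # e.g. a gateway name containing a company domain
    except (socket.herror, socket.gaierror):
        return None  # no PTR record or lookup failure: fall back to other signals

print(company_hint_from_ip("8.8.8.8"))  # typically "dns.google"
```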
>> Okay. So, tell us a little more about natural language processing. I think where most people have become familiar with it is the B2C capabilities of the big internet giants, where they're trying to understand all language. You have a more well-scoped problem; tell us how that changes your approach. >> A lot of really exciting things are happening in natural language processing research in general, and right now it's being measured against this yardstick of: can it understand language as well as a human can? Obviously we're not there yet, but that doesn't necessarily mean you can't derive a lot of meaningful insights from it. The way we're able to do that is, instead of trying to understand all of human language, we understand the very specific language associated with the things we're trying to learn. Obviously we're a B2B marketing company, so it's very important for us to understand which companies are investing in other companies, which companies are buying from other companies, and which companies are suing other companies. So if we say, okay, we only want to be able to infer a competitive relationship between two businesses in an actual document, that becomes a much more solvable and manageable problem, as opposed to, let's understand all of human language. We actually started off with some of these kinds of open source solutions, and with some proprietary solutions that we paid for, and they didn't work, because their scope was too broad. So we said, okay, we can do better by focusing in on the types of insights we're trying to learn, and then working backwards from them. >> So tell us, how much of the algorithms that we would call building blocks for what you're doing, and others, how much of those are published or open source, and then how much is your secret sauce? Because we talked about data being a key part of the secret sauce; what about the algorithms? >> I mean, yeah, you can treat the algorithms as tools, but you know, a bag of tools a product does not make, right? So our secret sauce is how we use these tools, how we deploy them, and the datasets we put them against. As mentioned before, we're not trying to understand all of human language; actually the exact opposite. We have a single machine learning algorithm whose entire job is to learn to recognize when Amazon, the company, is being mentioned in a document. If you see the word Amazon, is it talking about the river, or is it talking about the company? We have a classifier that fires whenever Amazon the company is mentioned in a document. And that's a much easier problem to solve than understanding everything, than Siri, basically.
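A toy version of that single-purpose classifier might look like the following. The training sentences and the choice of a bag-of-words logistic regression are illustrative assumptions, not Demandbase's actual implementation; the point is only that a narrow yes-or-no question is far more tractable than general language understanding.

```python
# A single-purpose disambiguation classifier: Amazon the company vs. anything else.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

sentences = [
    "Amazon reported strong quarterly earnings",      # company
    "Amazon launched a new cloud service",            # company
    "the Amazon river flows through the rainforest",  # not the company
    "deforestation threatens the Amazon basin",       # not the company
]
labels = [1, 1, 0, 0]  # 1 = company mention

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(sentences)
clf = LogisticRegression().fit(X, labels)

test = ["Amazon launched a new service"]
print(clf.predict(vectorizer.transform(test)))  # [1] on this toy data
```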
>> Okay. I still get rather irritated with Siri. So let's talk broadly about this topic that sort of everyone lays claim to as their great higher calling, which is democratizing machine learning and AI, and opening it up to a much greater audience. Help set some context, just the way you did by saying, "Hey, if we narrow the scope of a problem, it's easier to solve." What are some of the different approaches people are taking to that problem, and what are their sweet spots? >> Right, so the talk of the data science community right now is some of the work coming out of DeepMind, which is a subsidiary of Google. They just built AlphaGo, which solved a strategy game we thought we were decades away from actually solving, and their approach of restricting the problem to a game, with well-defined rules and a limited scope, I think that's how they're able to propel the field forward so significantly. They started off by playing Atari games, then they moved to long-term strategy games, and now they're doing video strategy games. And I think the idea of, again, narrowing the scope to well-defined rules and well-defined, limited settings is how they're actually able to advance the field. >> Let me ask just about playing the video games. I can't remember, Star... >> Starcraft. >> Starcraft. Would you call that, like, where the video game is a model, and you're training a model against that other model, so it's almost like they're interacting with each other? >> Right, so it really comes down to, you can think of it as pulling levers. You have a very complex machine, and there are certain levers you can pull, and the machine will respond in different ways. If you're trying to, for example, build a robot that can walk around a factory and pick out boxes, then how you move each joint, where you look, all the different things you can see and sense, those are all levers to pull, and that gets very complicated very quickly. But if you narrow it down to, okay, there are certain places on the screen I can click, certain things I can do, certain inputs I can provide in the video game, you basically limit the number of levers, and then optimizing and learning how to work those levers is a much more scoped and reasonable problem, as opposed to learning everything all at once. >> Okay, that's interesting. Now, let me switch gears a little bit. We've done a lot of work at Wikibon about IoT and increasingly edge-based intelligence, because you can't go back to the cloud for your analytics for everything. One of the things that's becoming apparent is that it's not just the training that might go on in a cloud; there might be simulations, and then the sort of low-latency response is based on a model that's at the edge. Help elaborate where that applies and how that works. >> Well, in general, when you're working with machine learning, in almost every situation, training the model is the data-intensive process that requires a lot of extensive computation, and that's something that makes sense to have localized in a single location, where you can leverage resources and optimize it. Then you can say, alright, now that I have this model that understands the problem and is trained, it becomes a much simpler endeavor to put it as close to the device as possible. That's how they're able to take a really complicated, billion-parameter neural network that took days and weeks to train, and actually derive insights right at the device level. Recent technology, though, like the deep learning I mentioned, creates new challenges just in deploying it, to the point that Google actually invented a new type of chip just to run it... >> The tensor processing. >> Yeah, the TPU. The tensor processing unit, just to handle what is now a machine learning algorithm so sophisticated that even deploying it after it's been trained is still a challenge. >> Is there a difference in the hardware that you need for training vs. inferencing? >> They initially deployed the TPU just for the sake of inference. In general, the way it works is that when you're building a neural network, there is one type of mathematical operation you do a whole bunch of, based on the idea of working with matrices. That's absolutely the case with training as well as with inference, which is actually querying the model. So if you can solve that one mathematical operation, then you can deploy it everywhere.
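That "one mathematical operation" point can be seen in a few lines of code: inference through a simple feed-forward network is little more than repeated matrix multiplication. The layer sizes and random weights below are stand-ins for a trained model, not anything Google or Demandbase ships.

```python
# Inference in a small feed-forward net: essentially two matrix multiplications.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((128, 64)), rng.standard_normal(64)  # layer 1 weights
W2, b2 = rng.standard_normal((64, 10)), rng.standard_normal(10)   # layer 2 weights

def infer(x):
    """Forward pass: matrix multiply, add bias, nonlinearity, repeat."""
    h = np.maximum(x @ W1 + b1, 0.0)  # ReLU hidden layer
    return h @ W2 + b2                # output scores

x = rng.standard_normal(128)          # a stand-in input vector
print(infer(x).shape)                 # (10,): one score per class
```

Accelerating that single matmul primitive in hardware is exactly why a chip like the TPU pays off for both training and inference.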
>> Okay. So, one of our CTOs was talking about how, in his view, what's going to happen in the cloud is richer and richer simulations, and as you say, querying the model, getting an answer in realtime or near-realtime, is out on the edge. What exactly is the role of the simulation? Is that just a model that understands time, and not just time, but the many parameters that it's playing with? >> Right, so simulations are particularly important, taking us back to reinforcement learning, where you basically have many decisions to make before you see some desirable or undesirable outcome. For example, the way AlphaGo trained itself is basically by running simulations of the game being played against itself, and what those simulations are really doing is allowing the artificial intelligence to explore the entire space of possible games. >> Sort of like WarGames, if you remember that movie. >> Yes, with, uh... >> Matthew Broderick. And it actually showed all the war game scenarios on the screen, and then figured out you couldn't really win. >> Right, yes, it's a similar idea. In Go, for example, there are more board configurations than there are atoms in the observable universe. The way Deep Blue won at chess was basically to explore more or less the vast majority of chess moves; that's really not an option in Go, you can't play that same strategy. So this constant simulation is how AlphaGo explores the meaningful game configurations that it needs to win. >> So in other words, they were scoped down, so the problem space was smaller. >> Right, and in fact, AlphaGo was really two different artificial intelligences working together: one that decided which solutions to explore, which possibilities it should pursue more and which ones to ignore, and a second piece that said, okay, given a certain board configuration, what's the likely outcome? Those two working in concert, one that narrows and focuses, and one that comes up with the answer given that focus, is how it was actually able to work so well. >> Okay. Seth, on that note, that was a very, very enlightening 20 minutes. >> Okay, I'm glad to hear that. >> We'll have to come back and get an update from you soon. >> Alright, absolutely. >> This is George Gilbert, I'm with Seth Myers, Senior Data Scientist at Demandbase, a company I expect we'll be hearing a lot more about. We're on the ground, and we'll be back shortly.

Published Date : Nov 2 2017


Aman Naimat, Demandbase, Chapter 2 | George Gilbert at HQ


 

>> And we're back. This is George Gilbert from Wikibon, and I'm here with Aman Naimat at Demandbase, the pioneers of the next-gen, AI-based generation of CRM. So Aman, let's continue where we left off. We were talking about natural language processing, and I think most people are familiar with it more from the B2C technology, where the big internet providers have accumulated a lot of voice data and have learned how to process it and convert it into text. So tell us how B2B NLP is different, to use a lot of acronyms. In other words, how you're using it to build up a map of relationships between businesses. >> Right, yeah, we call it the demand graph. It's an interesting question, because firstly, it turns out that, while very different, B2B language is also quite boring. It doesn't evolve as fast as consumer concepts, and that makes the problem much more approachable from a language-understanding point of view. Natural language processing, or natural language understanding, is all about how machines can understand, store, and take action on language. While we were working on this four or five years ago, and that's my background as well, it turned out the problem was simpler, because human language is very rich, and converting voice to text is trivial compared to understanding the meaning of things and words, which is much more difficult. Or even the sense of a word: apparently in English each word has six meanings, right? We call them word senses. So the problem was only simpler because B2B language doesn't tend to evolve as fast as regular language; terms stick in an industry. The challenge with B2B, and why it was different, is that each industry or sub-industry has a very specific language, jargon, and acronyms. To really understand that industry, you need to come from that industry. If you go back to the CRM example of what happened 10, 20 years ago, you would have a salesperson who came from an industry if you wanted to sell into it, and that still happens in some traditional companies, right? So the idea was to replicate the knowledge they would have as if they came from that industry: the language, the vocabularies, and then ultimately a way of storing and taking action on it. It's very analogous to what Google has done with the Knowledge Graph. >> Alright, so two questions, I guess. First is, it sounds almost like a translation problem, in the sense that you have some base language primitives, like partner, supplier, competitor, customer, but the language in each industry is different, and so you have to map those down to those primitives. So tell us the process. You don't have on staff people who translate from every industry. >> I mean, the old way was writing logical rules or expressions for language, using conventional, good old-fashioned AI. >> You mean the rules-based knowledge engineering? >> That's right. And that clearly did not succeed, because it is impossible to do. >> The old quip, which one researcher put as, "Every time I fired a rules engineer, my accuracy score would go up." (chuckles) >> That's right. And now the problem is that language is evolving, and the context is so different. Even pharmaceutical companies in the US or the Bay Area use different language than pharma in Europe or Switzerland. So it's just impossible to quantify all the variations. >> George: To do it manually.
>> To do it manually, it's impossible. It's certainly not possible for a small startup. And we did try having it be generated: in the early days we used to have crowdsourced workers validate the machine. But it turned out they couldn't do it either, because they didn't understand the pharmaceutical language either, right? So in the end, the only way to do it was to have some sort of model and some seed data to validate against, or to hire experts and validate small samples of data. So going back to the graph: it turns out that where we have seen sophisticated AI work on complex problems, for example predicting your next connection on LinkedIn, or your next friend, or which ads you should see on Facebook, it has used network-based data, social graph data, or in the case of Google, the Knowledge Graph of how things are connected. And somehow, machine learning and AI systems based on network data tend to be more powerful and more intuitive than other types of models. >> So okay, when you say model, help us with an example: you're representing a business, who it's connected to, and its place in the world. >> So the demand graph is basically, as Demandbase: who are our customers, who are their partners, who are their suppliers, who are their competitors? We utilize that network of companies in the same manner that we have networks of friends on LinkedIn or Facebook. It turns out that businesses are extremely social in nature. In fact, we found that the connections between companies carry more signal, and are more predictive of an acquisition or of the next customer, than even the Facebook social graph. It's much easier to use the B2B business graph to predict your next customer than to, say, predict your next friend on Facebook. >> Okay, so that's a perfect analogy. So tell us about the raw material you churn through on the web, and then how you learn what the terminology might be. You've boot-strapped a little bit; now you have all this data, and you have to make sense of new terms, and then you build this graph of who a business is related to. >> That's right, and the hardest part is to be able to handle rumors and jokes, like, "Isn't it time for Microsoft to just buy Salesforce?" Question mark, smiley face. You know, so it's a challenging problem. But we were lucky that business language and the business press are definitely more boring than, you know, people talking about movies. >> George: Or Reddit. >> Or Reddit, right. So the way we work is we process the entire business internet, or really the entire internet. Initially we used to crawl it ourselves, but we soon realized that Common Crawl, an open source foundation that has crawled the internet and published at least a large chunk of it, could free us from doing the crawling. We read the entire internet, and ultimately we're interested in businesses, 'cause that's the world we're in: B2B marketing and B2B sales. We look at wherever a company, a business person, or a business title is mentioned, and ignore everything else. 'Cause if it doesn't have a company, a business person, or a business product, we don't care. So we read the entire internet, and when, say, Amazon is mentioned, we then try to figure out: is it Amazon the company, or is it Amazon the river? That's problem number one. We call it the entity linking problem.
Then we try to understand and piece together the various expressions of relationships between companies expressed in text. It could be a press release, a competitive analysis, or the announcement of a new product. It could be a supply chain relationship. It could be a rumor. And then it also turns out the internet is very noisy, so we look at corroboration across multiple disparate sources-- >> George: Interesting, to decide-- >> Is it true? >> George: To signal whether it's real. >> Right, yeah, 'cause there's a lot of fake news out there. (George laughs) So we look at corroboration and the sources to infer whether we can have confidence in it. >> I can imagine this could be applied to-- >> A lot of other problems. >> Political issues. So okay, you've got all these sources. Give us some specific examples of feeds, of sources, and then help us understand, 'cause I don't think we've heard a lot about the notion of boot-strapping, and it sounds like you're generalizing, which is not something most of us with a surface-level familiarity with machine learning are used to. >> There was a lot of research on this; not to credit Google too much, but boot-strapping methods were used by Sergey Brin, I think he was the first, and then he gave up 'cause they founded Google and moved on. Since then, in 2003, 2004, there was a lot of research around this topic. It's in the genre of unsupervised machine learning models. And in the real world, because there's less labeled data, we tend to find it an extremely effective method for learning language, and obviously now with deep learning, unsupervised methods are being utilized even more. This was around five years ago, when we started building this graph, and I obviously don't know how the Google Knowledge Graph is built, but I can assume it's a similar technique; we don't tend to talk about how commercial products work that much. But the idea is basically to generalize models, or learn from a small seed. Let's say I put in a seed like Nike and Adidas, and say they compete, right? Then if you look at the entire internet at all the expressions of how Nike and Adidas appear together in language, it could be, you know, "I think Nike shoes are better than Adidas." >> Ah, so it's not just that you find an opinion that one is better than the other; you find all the expressions that indicate they're different and they're in competition. >> That's right. But we also find cases where somebody is saying, "I bought Nike and Adidas," or, "Nike and Adidas shoes are sold here." So we have to be smart enough to discern when it's something else and not competition.
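A stripped-down sketch of that seed bootstrapping might look like the following. The corpus, the regex-based pattern harvesting, and the seed pair are toy assumptions; note how the bare co-occurrence pattern "and" immediately reproduces the noise problem Aman describes, which is why corroboration and filtering matter.

```python
# Toy seed bootstrapping: learn patterns from known pairs, then find new pairs.
import re

seeds = {("nike", "adidas")}
corpus = [
    "I think Nike shoes are better than Adidas.",
    "Coke competes fiercely with Pepsi in every market.",
    "Nike competes fiercely with Adidas in every market.",
    "I bought Nike and Adidas shoes yesterday.",  # co-occurrence, not competition
]

# Step 1: harvest the text between each seed pair as a candidate pattern.
patterns = set()
for a, b in seeds:
    for doc in corpus:
        m = re.search(rf"{a}\s+(.+?)\s+{b}", doc, re.IGNORECASE)
        if m:
            patterns.add(m.group(1).lower())

# Step 2: apply each learned pattern to propose new entity pairs.
new_pairs = set()
for p in patterns:
    for doc in corpus:
        m = re.search(rf"(\w+)\s+{re.escape(p)}\s+(\w+)", doc, re.IGNORECASE)
        if m:
            pair = (m.group(1).lower(), m.group(2).lower())
            if pair not in seeds:
                new_pairs.add(pair)

print(patterns)   # includes noisy patterns like "and"; corroboration must filter these
print(new_pairs)  # {("coke", "pepsi")} via "competes fiercely with"
```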
>> Okay, so you've told us how this graph gets built out. So the suppliers, the partners, the customers, the competitors, now you've got this foundation-- >> And people and products as well. >> Okay, people, products. You've got this really rich foundation. Now you build an application on top of it. Tell us about CRM with that foundation. >> Yeah, I mean, we have the demand graph, into which we also tie things like the basic data you could get from firmographics, and the intent data that we've also built. But it also turns out that with the knowledge graph itself, our initial intuition was that we'd just expose it to end users and they'd figure it out. It was just too complicated. It really needed another level of machinery and AI on top to take advantage of the graph and to build prescriptive actions, or to solve a business problem. A problem could be: I'm an IoT startup, and I'm looking for manufacturing companies who will buy my product. Or: I'm a venture capital firm, and I want to understand what other venture capital firms are investing in. Or: hey, I'm Tesla, and I'm looking for a new supplier for the new Tesla screen. Things of that nature. So then we build and apply specific models, more machine learning, or layers of machine learning, to solve specific business problems, like the reinforcement learning to understand the next best action. >> And are these models associated with one of your customers? >> No, they're general purpose; they're packaged applications. >> Okay, tell us more. What was the base-level technology that you started with, in terms of being able to manage a customer conversation, a marketing conversation, and then how did that get richer over time? >> Yeah, we take the proprietary data sets that we've accumulated and manufactured over the years, and then co-mingle them with customer data, which we keep private, 'cause they own that data. The technology is generic, but you're right, the model being generated by the machine is specific to every customer. So obviously the next-best-action model for a pharmaceutical company is based on which doctors are visiting, whether this person is an oncologist, or what they're researching online. And that model is very different from a model for, say, Demandbase itself, or Salesforce. >> Is it that the algorithm is different, or is it trained on different data? >> It's trained on different data. It's the same code; I mean, we only have 20, 30 data scientists, so we're obviously not going to build custom code for every client. So the idea is that the same meta-model is trained on different data: public data, but also each customer's private data.
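The "same meta-model, different data" idea is simple to render in code: one training function, many fitted instances. Everything below, the features, the synthetic data, and the choice of a logistic regression pipeline, is a hypothetical stand-in for Demandbase's actual next-best-action models.

```python
# One code path, many models: identical pipeline, per-client training data.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

def train_next_best_action_model(X, y):
    """The same 'meta model' for every client; only the data differs."""
    return make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)

rng = np.random.default_rng(1)
# Pharma client: features might encode oncologist visits, topics researched...
X_pharma, y_pharma = rng.standard_normal((200, 5)), rng.integers(0, 2, 200)
# Software client: same shape of problem, entirely different distribution.
X_saas, y_saas = rng.standard_normal((200, 5)), rng.integers(0, 2, 200)

pharma_model = train_next_best_action_model(X_pharma, y_pharma)
saas_model = train_next_best_action_model(X_saas, y_saas)
# Same code, different learned weights per client.
print(pharma_model.predict(X_pharma[:3]), saas_model.predict(X_saas[:3]))
```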
>> And how much does the customer, let's say your customer is Tesla, how much of it is them running some of their data through this boot-strapping process, versus how much of it is your model, once it's set up and boot-strapped, automatically learning from the interactions Tesla has with all its different partners and customers? >> Right. We have found that most startups are just learning over small, customer-centric data sets. What we have found is that real magic happens when you take private data and combine it with large amounts of public data. At Demandbase, we have massive amounts of public and proprietary data. Then we plug in, and we have to tell the system that our client is Tesla, so it understands the localized graph and knows the Tesla ecosystem, based on our public data sets and proprietary data. Then we also bring in your private slice whenever possible. >> George: Private...? >> Slice of data. So we have code that can plug into your website and start understanding the interactions your customers are having. Based on that, we're able to train our models. As much as possible, we try to automate the data-capture process, in essence using a sensor, a pixel, on your website; we take that private stream of data and merge it into our graph. And that's where we find that our data by itself is not as powerful as our data mixed with your private data. >> So I guess one way to think about it would be: there's a skeletal graph, and that may sound too minimalistic, but there's a graph. Let's say you take Tesla as the example: you tell them what data you need from them, that trains the meta-models, and then it fleshes out the graph of the Tesla ecosystem. >> Right, with whatever data we couldn't get or infer from the outside. And we have a lot of proprietary data, where we see online traffic, business traffic, what people are reading, who's interested in what, for hundreds of millions of people. We have developed that technology, so we know a lot without actually getting people's private slice. But whenever possible, we want the maximum impact. >> So... >> It's actually simple; let's divorce the word graph for a second. It's really about this: let's say that I know you, right, and there's some information you can tell me about yourself. But imagine if I googled your name and read every document about you, every video you have produced, every blog you have written; then I have the best of both kinds of knowledge, right? Your private data, maybe your social graph on Facebook, and your public data. And then if I partnered with Forbes, and they told me you logged in and read something on Forbes, they'd give me that data, so now I really have a deep understanding of what you're interested in, who you are, and what your language is. It's that, sort of simplified, but similar, at a much larger scale. >> Alright, let's take a pause at this point, and then we'll come back with part three. >> Excellent.

Published Date : Nov 2 2017


Aman Naimat, Demandbase, Chapter 1 | George Gilbert at HQ


 

>> Hi, this is George Gilbert. We have an extra-special guest today on our CUBEcast: Aman Naimat, Senior Vice President and CTO of Demandbase, who started with a five-person startup, Spiderbook. Almost like a reverse IPO: Demandbase bought Spiderbook, but it sounds like Spiderbook took over Demandbase. So Aman, welcome. >> Thank you, excited to be here. Always good to see you. >> So, Demandbase is a next-gen CRM program. Let's talk about that, just to set some context. >> Yes. >> For those who aren't intimately familiar with traditional CRM, what problems did it solve? And how did it start, and how did it evolve? >> Right, that's a really good question. So, for the audience, CRM really started as a contact manager, right? It was replicating what a salesperson did in their own private notebook, writing down contact phone numbers, in an electronic version. So you had products that were really built for salespeople on an individual basis. But it slowly evolved, particularly with Siebel, with a different twist: it evolved into more of a management tool, a reporting tool, because Tom Siebel was himself a sales manager who ran a sales team at Oracle. So it actually turned from an individual-focused product into an organization-level management and reporting product. And I've been building this stuff since I was 19. So it's interesting that with the products today, we're actually pivoting back to products that help salespeople, or help individual marketers, and add value, not just focus on management reporting. >> That's an interesting perspective. So it's now more empowering as opposed to, sort of, reporting. >> Right, and I think some of it is cultural influence. Over the last decade, we have seen consumer apps take a much more predominant position. In the traditional era, back in the 80s and 90s, the advanced applications were corporate applications, on big computers at big companies. But in recent years, as consumer technology has taken off, and actually, I would argue, has advanced more than even enterprise technology, that in essence is influencing the business world. >> So, even ERP was a system of record, which captures the state of the enterprise. And this is much more an organizational productivity tool. >> Right. >> So tell us now about the mental leap, the conceptual leap, that Demandbase made in terms of trying to solve a different problem. >> Right. So, you know, Demandbase started around marketing automation, a marketing application, which was about identifying who you are. As we moved towards more digital transactions, and the Web was becoming the predominant way of doing business, people say 70 to 80 percent of all business buying starts with online digital research, there was no way to know who you were dealing with, right? The majority of the Internet is this dark, unknown place. You don't know who's on your website. >> You're referring to the anonymity. >> Exactly. >> And not knowing who is interacting with you until very late. >> Exactly, and you can't do anything intelligent if you don't know somebody, right? If you didn't know me, what would you do? You'd ask me stupid questions about the weather. Really, as humans, we can only communicate if we know somebody. So the innovation behind Demandbase was, and continues to be, to actually identify who you're talking to, be it online on your website, and now even off your website.
And that allows you to have a much more personalized conversation, because ultimately in marketing, and perhaps even in sales, it comes down to having a personal conversation. If you could have a billion people talking to every person coming to your website in a personalized manner, that would be fantastic. But that's just not possible. >> So, how do you identify a person before they even get to a vendor's website, so that you can start on a personalized level? >> Right, so Demandbase has been building this for a long time, but really, it's a hard problem. And it's harder now than ever before, because of security and privacy, and lots of hackers out there; people are actually trying to hide, or at least prevent this information from leaking out. Eight, nine years ago, we could buy registries or do reverse DNS. But now, with ISPs, we are behind, probably, Comcast or Level 3, so how do you even know who an IP address is registered to? About eight years ago, we started mapping IP addresses, 'cause that's how you browse the Internet, to the companies they belong to, right? But it turned out that was no longer effective on its own, so over the last eight years we have built proprietary methods that know how companies relate to the IP addresses they have. We have also done partnerships: when you log into certain websites, we partner with them to identify you if you self-identify, at Forbes.com, for example. So when you log in, we do a deal. And we have hundreds of partners and data providers. But now, the state of the art for us is looking at behavioral signals to identify who you are. >> In other words, not just touchpoints with partners where they collect an identity. >> Right. >> You have a signature of behavior. >> That's right. It's really interesting: humans are very unique. Based on what they're reading online and what they're reading about, you can actually identify a person, or certainly identify enough about them to know that this is an executive at Tesla who's interested in IoT manufacturing. >> Ah, so you don't need to resolve down to the name level. >> No. >> You need to know sort of the profile. >> Persona, exactly. >> The persona. >> The persona, and that's enough for marketing. If I know this is a C-level supply chain executive from Tesla who lives in Palo Alto and has interests in these areas or problems, that's enough for Siemens to have an intelligent conversation with this person, even if they're anonymous on their website, or if they call on the phone, or anything else. >> So, okay, tell us the next step. Once you have a persona, is it Demandbase that helps them put together a personalized? >> Profile. >> Profile, and lead it through the conversation? >> Yeah, so until very recently, building this technology was just a very hard problem: identifying hundreds of millions of people, I think around 700 million businesspeople globally, which is the majority of the business world. But we realized that in AI, making recommendations or handing you data and advanced analytics is just not good enough, because you need a way to actually take action and have a personalized conversation, and there are 100 thousand people on your website. Making recommendations is just overwhelming; it's too much data for humans to absorb.
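To illustrate the persona idea in the simplest possible terms, here is a toy sketch that maps an anonymous visitor's reading behavior onto a persona. The page-to-topic mapping, the persona profiles, and the scoring are all invented for illustration; the production systems Aman describes work from vastly richer behavioral data.

```python
# Toy persona inference: aggregate topics from pages read, match to a profile.
from collections import Counter

page_topics = {
    "/blog/iot-factory-floor": ["iot", "manufacturing"],
    "/whitepapers/supply-chain-automation": ["supply_chain", "manufacturing"],
    "/pricing/enterprise": ["buying_intent"],
}

personas = {
    "supply-chain executive": {"supply_chain", "manufacturing", "buying_intent"},
    "developer": {"api", "sdk", "docs"},
}

def infer_persona(pages_visited):
    """Pick the persona whose interests best overlap the visitor's topics."""
    topics = Counter(t for page in pages_visited for t in page_topics.get(page, []))
    scores = {name: sum(topics[t] for t in interests)
              for name, interests in personas.items()}
    return max(scores, key=scores.get)

visit = ["/blog/iot-factory-floor", "/whitepapers/supply-chain-automation",
         "/pricing/enterprise"]
print(infer_persona(visit))  # "supply-chain executive"
```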
So the better idea, which we're now working on, is to just take the action. If somebody from Tesla visits your website, and they're an executive who will buy your product, take them to the right application. If they leave your website, then display the right message to them in a personalized ad. It's all about taking actions, and then obviously, whenever possible, guiding humans towards the personalized conversation that will maximize the relationship. >> So it sounds like sometimes it's anticipating and recommending a next best action. >> Yeah. >> And sometimes it's your program taking the next best action. >> That's right, because it's just not possible to scale people to take all the actions. I mean, we have 30, 40 sales reps at Demandbase; we can't handle the volume. And it's difficult to create that personalized letter, right? So we make recommendations, but we've found that it's just too overwhelming. >> Ah, so in other words, when you're talking about recommendations, you're talking about recommendations for Demandbase's...? >> For our clients' employees, or salespeople, right? >> Okay. >> But whenever possible, we are looking to build systems that are, in essence, in autopilot mode, and they take the action. They drive themselves. >> Give us some examples of the actions. >> That's right, so one action could be: if you know that a qualified person came to your website, notify the salesperson and open a chat window saying, "This is an executive. This is similar to a person who will buy a product from you. They're looking for this thing. Do you want to connect with a salesperson?" And obviously, only for the people who will buy from you. Or the action could be: send them an email automatically, based on something they will be interested in, and in essence have a conversation, right? So it's all about conversation. An ad, an email, or a person are just different channels for having a conversation. >> So it sounds like there was an intermediate marketing automation generation. >> Right. >> After traditional CRM, which was reporting. >> Right, that's true. >> Where basically it didn't work until you registered on the website. >> That's right. >> And then they could email you. They could call you, the inside sales reps. >> That's right. >> You know, if you took a demo, >> That's right. >> you had to put an ID in there. >> And that's still, you know, when Demandbase came around, that was the predominant model in between the CRM generations we were talking about. >> George: Right. >> There was a gap. There was a generation that started to be marketing-driven; it was all about form fills. >> George: Yeah. >> And it was all about nurturing, but I think that's just spam. And today, its effectiveness is close to nothing. >> Because it's basically email or outbound calls. >> Yeah, it's email spam. You know we all have inboxes filled with this stuff. And why doesn't it work? It's becoming ineffective, that's one reason, but really because they don't know me, right? It boils down to this: if the email were really good, and it related to what you're looking for or who you are, then it would be effective. But spam, generic email, is just not effective. So to some extent, we lost the intimacy. And with the new generation of what we call account-based marketing, we are trying to build intimacy at scale.
So if you walk into a corporation, they have these really sophisticated salespeople who understand their clients, and they focus on one-on-one, and it's very effective. So if you had Google as a client or Tesla as a client, and you are Siemens, you have two people working and keeping that relationship working, 'cause you make millions of dollars. But that's not a scalable model. It's certainly not scalable for startups here to work with, or to scale your organization, be more effective. So really, the idea behind account-based marketing is to scale that same efficacy, that same personalized conversation, but at higher volume, right? And maximize, and the only way to really do that is using artificial intelligence. Because in essence, we are trying to replicate human behavior, human knowledge at scale. Right? And to be able to harvest and know what somebody who knows about pharma would know. >> So give me an example of, let's stay in pharma for a sec. >> Sure. >> And what are the decision points where based on what a customer does or responds to, you determine the next step or Demandbase determines what next step to take? >> Right. >> What are some of those options? Like a decision tree maybe? >> You can think of it, it's quite faddish in our industry now. It's reinforcement learning, which is what Google used in the Go system. >> George: Yeah, AlphaGo. >> AlphaGo, right, and we were inspired by that. And in essence, what we are trying to do is predict not only what will keep you going but where you will win. So we give rewards at each point. And the ultimate goal is to convert you to a customer. So it looks at all your possible futures, and then it figures out in what possible futures you will be a customer. And then it works backwards to figure out where it should take you next. >> Wow, okay, so this is very different from >> It plans six months ahead. So it's a planning system. >> Okay. >> 'Cause your sales cycles are six months long. >> So help us understand the difference between the traditional statistical machine learning that is a little more mainstream now. >> Sure. >> Then the deep learning, the neural nets, and then reinforcement learning. >> Right. >> Where are the sweet spots? What are the sweet spots for the problems they solve? >> Yeah, I mean, you know, there's a lot of fad and things out there. In my opinion, you can achieve a lot and solve real-world problems with simpler machine learning algorithms. In fact, for the data science team that I run, I always say, "Start with, like, the simplest algorithm." Because if the data is there and you have the intuition, you can get to a 60% F-score or quality with the most naive implementation. >> George: 60% meaning? >> Like accuracy of the model. >> Confidence. >> Confidence. Sure, how good the model is, how precise it is. >> Okay. >> And sure, then you can make it better by using more advanced algorithms. The reinforcement learning, the interesting thing is its ability to plan ahead. Most machine learning can only make a decision. They are classifiers of sorts, right? They say, is this good or bad? Or, is this blue? Or, is this a cat or not? They're mostly Boolean in nature, or you can simulate that in multi-class classifiers. But reinforcement learning allows you to sort of plan ahead. And in CRM, or as humans, we're always planning ahead.
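To make the planning idea concrete, here is a minimal sketch of value iteration over a toy sales-funnel decision process. Every state, action, probability, and reward below is invented for illustration; the production system described in this interview is far richer.

```python
# Toy sales-funnel MDP: the planner looks at possible futures, rewards
# only an eventual conversion, and works backwards to score each action.
STATES = ["anonymous", "engaged", "qualified", "customer"]
REWARD = {"customer": 100.0}  # the only payoff: becoming a customer
GAMMA = 0.9  # a conversion months from now is worth less than one today

# P[(state, action)] -> list of (next_state, probability); all invented.
P = {
    ("anonymous", "show_ad"):          [("engaged", 0.3), ("anonymous", 0.7)],
    ("anonymous", "send_email"):       [("engaged", 0.1), ("anonymous", 0.9)],
    ("engaged", "send_email"):         [("qualified", 0.2), ("engaged", 0.8)],
    ("engaged", "invite_to_dinner"):   [("qualified", 0.4), ("engaged", 0.6)],
    ("qualified", "invite_to_dinner"): [("customer", 0.5), ("qualified", 0.5)],
}

def value_iteration(sweeps=100):
    V = {s: REWARD.get(s, 0.0) for s in STATES}
    for _ in range(sweeps):
        for s in STATES:
            if s == "customer":
                continue  # terminal state keeps its reward
            qs = [sum(p * GAMMA * V[nxt] for nxt, p in outcomes)
                  for (st, _), outcomes in P.items() if st == s]
            if qs:
                V[s] = max(qs)
    return V

def best_action(state, V):
    # Work backwards: pick the action with the highest expected value.
    q = {a: sum(p * GAMMA * V[nxt] for nxt, p in outcomes)
         for (st, a), outcomes in P.items() if st == state}
    return max(q, key=q.get)

V = value_iteration()
print(best_action("engaged", V))  # -> 'invite_to_dinner' in this toy setup
```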
You know, a really good salesperson knows that for this stage opportunity or this person in pharma, I need to invite them to the dinner 'cause their friends are coming, and they know that last year when they did that, then in the future, that person converted. Right, if they go to the next stage and they, so it plans ahead the possible futures and figures out what to do next. >> So, for those who are familiar with the term AB testing. >> Sure. >> And who are familiar with the notion that most machine learning models have to be trained on data where the answer exists, and they test it out, train it on one set of data >> Sure. >> Where they know the answers, then they hold some back and test it and see if it works. So, how does reinforcement learning change that? >> I mean, it's still testing on supervised models to know. It can be used to derive. You still need data to understand what the reward function would be. Right? And you still need to have historical data to understand what you should give it. And sure, have humans influence it as well, right? At some point, we always need data. Right? If you don't have the data, you're nowhere. And if you don't have, but it also turns out that most of the time, there is a way to either derive the data from some unsupervised method or have a proxy for the data that you really need. >> So pick a key feature in Demandbase and then where you can derive the data you need to make a decision, just as an example. >> Yeah, that's a really good question. We derive data all the time, right? So, let me use something quite, quite interesting that I wish more companies and people used, which is Internet data, right? The Internet today is the largest source of human knowledge, and it actually knows more than you could imagine. And even simple queries, so we use the Bing API a lot. And to know, so one of the simple problems we ran into many years ago, and that's when we realized how we should be using Internet data, which has been used in academia, but not as much as it should be. So you know, you can buy APIs from Bing. And I wish Google would give their API, but they don't. So, that's our next best choice. We wanted to understand who people are. So there are common names, right? So, George Gilbert is a common name, or Alan Fletcher, who's my co-founder. And, you know, is that a common name? And if you search that, just that name, you get that name in various contexts. Or co-occurring with other words, you can see that there are many Alan Fletchers, right? Or if you get, versus if you type in my name, Aman Naimat, you will always find the same kind of context. So you will know it's one person, or it's a unique name. >> So, it sounds to me that reinforcement learning is online learning where you're using context. It's not perfectly labeled data. >> Right. I think there is no perfectly labeled data. So there's a misunderstanding of data scientists coming out of perfectly labeled data courses from Stanford, or whatever machine learning program. And we realized very quickly that the world doesn't have any perfectly labeled data. We think we are going to crowdsource that data. And it turns out, we've tried it multiple times, and after a year, we realized that it's just a waste of time. You can't pay, you know, 20 cents or 25 cents per item to a worker somewhere, wherever, and get labeled data of any quality back. So, it's much more effective to, and we were a startup, so we didn't have money like Google to pay. And even if you had the money, it generally never works out.
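As a rough illustration of the name-commonality heuristic just described, here is a sketch that scores how consistent the contexts around a name are. The `search_snippets` function is a stand-in for a real search API such as Bing's; the canned snippets and every number are hypothetical.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def search_snippets(name):
    # Stand-in for a web search API call (e.g. the Bing API mentioned
    # above); a real version would return snippets mentioning `name`.
    return [
        f"{name} spoke about machine learning at a marketing conference",
        f"{name}, co-founder and CTO, on account-based marketing and AI",
        f"an interview with {name} about AI in B2B sales",
    ]

def context_consistency(name):
    """Average pairwise similarity of the contexts a name appears in.
    High similarity suggests one unique person; low similarity suggests
    a common name shared by many different people."""
    snippets = search_snippets(name)
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(snippets)
    sims = cosine_similarity(tfidf)
    n = sims.shape[0]
    return (sims.sum() - n) / (n * (n - 1))  # exclude the diagonal

print(context_consistency("Aman Naimat"))  # higher score -> likely unique
```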
We find it more effective to bootstrap or reuse unsupervised models to actually create data. >> Help us. Elaborate on that, the unsupervised and the bootstrapping, where maybe it's sort of like a lawnmower where you give it that first. >> That's right. >> You know, tug. >> I mean, we've used it extensively. So let me give you an example. Let's say you wanted to create a list of cities, right? Or a list of, the classic example actually was a paper written by Sergey Brin. I think he was trying to figure out the names of all authors in the world, and this is 1998. And basically if you search on Google, the term "has written the book," just the term "has written the book," these are called patterns, or Hearst patterns, I think. Then you can imagine that it's also always preceded by the name of a person who's an author. So, "George Gilbert has written the book," and then the name of the book, right? Or "William Shakespeare has written the book X." And you seed it with William Shakespeare, and you get some books. Or you put Shakespeare and you get some authors, right? And then, you use it to learn other patterns that also co-occur between William Shakespeare and the book. >> George: Ah. >> And then you learn more patterns, and you use it to extract more authors. >> And in the case of Demandbase, that's how you go from learning, starting bootstrapping within, say, pharma terminology. >> Yes. >> And learning the rest of pharma terminology. >> And then, using generic terminology to enter an industry, and then learning terminology that we ourselves don't yet understand the meaning of. For example, I always use this example where if we read a sentence like "Takeda has in-licensed "a molecule from Roche," it may mean nothing to us, but it means that they're partnered and bought a product, in pharma lingo. So we use it to learn new language. And it's a common technique. We use it extensively. So it goes down to, while we do use highly sophisticated algorithms for some problems, I think most problems can be solved with simple models and thinking through how to apply domain expertise and data intuition, and having the data to do it. >> Okay, let's pause on that point and come back to it. >> Sure. >> Because that sounds like a rich vein to explore. So this is George Gilbert on the ground at Demandbase. We'll be right back in a few minutes.
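A minimal sketch of the bootstrapping loop described in this segment: seed entities yield surface patterns, and the patterns yield new entities. The tiny corpus, seed set, and regular expressions below are all invented for illustration.

```python
import re

# Tiny stand-in corpus; in practice this would be web-scale text.
CORPUS = [
    "William Shakespeare has written the book Hamlet.",
    "Jane Austen has written the book Emma.",
    "Emma is a novel authored by Jane Austen.",
    "Hamlet is a novel authored by William Shakespeare.",
]
SEEDS = {"William Shakespeare"}  # one seed author to start from

def find_patterns(corpus, seeds):
    """Learn surface patterns that follow known seed entities."""
    patterns = set()
    for text in corpus:
        for seed in seeds:
            if seed in text:
                tail = text.split(seed, 1)[1]
                m = re.match(r"\s+(\w+(?:\s+\w+){0,3})", tail)
                if m:
                    patterns.add(m.group(1))  # e.g. "has written the book"
    return patterns

def extract_entities(corpus, patterns):
    """Use the learned patterns to pull out new candidate entities."""
    found = set()
    for text in corpus:
        for pat in patterns:
            idx = text.find(pat)
            if idx > 0:
                head = text[:idx].strip()
                # Crude heuristic: a capitalized two-word name before it.
                m = re.search(r"([A-Z]\w+\s[A-Z]\w+)\s*$", head)
                if m:
                    found.add(m.group(1))
    return found

# One bootstrapping round: seeds -> patterns -> new entities -> repeat.
patterns = find_patterns(CORPUS, SEEDS)
authors = SEEDS | extract_entities(CORPUS, patterns)
print(authors)  # {'William Shakespeare', 'Jane Austen'}
```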

Published Date : Nov 2 2017


Aman Naimat, Demandbase, Chapter 3 | George Gilbert at HQ


 

>> This is George Gilbert from Wikibon. We're back on the ground with Aman Naimat at Demandbase. >> Hey. >> And, we're having a really interesting conversation about building next-gen enterprise applications. >> It's getting really deep. (laughing) >> So, so let's look ahead a little bit. >> Sure. >> We've talked in some detail about the foundation technologies. >> Right. >> And you told me before that we have so much technology, you know, still to work with. >> Yeah. >> That is unexploited. That we don't need, you know, a whole lot of breakthroughs, but we should focus on customer needs that are unmet. >> Yeah. >> Let's talk about some problems yet to be solved, but that are customer facing with, as you have told me, existing technology. >> Right, can solve. >> Yes. >> Absolutely, I mean, there's a lot of focus in Silicon Valley about, like, scaling machine learning and investing in, you know, GPUs and what have you. But I think there's enough technology there. So where's the gap? The real gap is in understanding how to build AI applications, and how to monetize them, because it is quite different than building traditional applications. It has different characteristics. You know, so it's much more experimental in nature. Although, you know, with lean engineering, we've moved towards iterative to (mumbles) development, for example. Like, for example, 90% of the time, I, you know, after 20 years of building software, I'm quite confident I can build software. It turns out, in the world of data science and AI-driven applications, you can't have that much confidence. It's a lot more like discovering molecules in pharma. So you have to experiment more often, and methods have to be discovered, there's more discovery and less engineering in the early stages. >> Is the discovery centered on do you have the right data? >> Yeah, or are you measuring the right thing, right? If you thought you were going to maximize, work the model to maximize revenue, but really, maybe the end function should be increasing engagement with the customer. So, often, we don't know the end objective function, or incorrectly guess the right or wrong objective function. The only way to do that is to be able to build an end-to-end system in days, and then iterate through the different models in hours and days as quickly as possible with the end goal and customer in mind. >> This is really fascinating because we were, some of the research we're doing is on the really primitive capabilities of the, sort of, analytic data pipeline. >> Yes. >> Where, you know, all the work that has to do with coming up with the features. >> Yeah. >> And then plugging that into a model, and then managing the model's life cycle. That whole process is so fragmented. >> Yeah. >> And it's, you know, chewing gum and baling wire. >> Sure. >> And I imagine that that slows that experimentation process dramatically. >> I mean, it slows it down, but it's also mindset, right? >> Okay. >> So, now that we have built, you know, we probably have a hundred machine learning models now at Demandbase, that I've contributed to building with our data scientists, and in the end we've found out that you can actually do something in a day or two with an extremely small amount of data, using Python and SKLearn, today, very quickly, that will give you, and then, you know, build some simple UI that a human can evaluate and give feedback, or whatever action you're trying to get to.
And get to that as quickly as possible, rather than worrying about the pipelines, rather than worry about everything else, because in 80% of the cases, it will fail anyways. Or you will realize that either you don't have the right data, or nobody wants it, or it can never be done, or you need to approach it completely different, from a completely different objective function. >> Let me parse what you've said in a different way. >> Sure. >> And see if I understand it. Traditional model building is based, not on sampling, but on the full data set. >> That's right. >> And what you're saying, in terms of experimentation. >> Start doing that, yes. >> Is to go back to samples. >> That's right. Go back to, there's a misunderstanding that we need, you know, while Demandbase processes close to a trillion rows of data today, we found that almost all big data, AI solutions, can be initially proven with very small amounts of data, and a small number of features. And if it doesn't work there, if you cannot take a hundred rows of data and have a human look at some rows and make a judgment, then it's most likely not possible with one billion, or with ten billion. So, if you cannot make it work, now there are exceptions to this, but in 90% of the cases, if the solution doesn't work at, you know, a few thousand or a million rows of data, it won't work at scale. Now the problem is that all the easy, you know, libraries and open-source stuff that's out there, it's all designed to be workable on small amounts of data. So, what we don't want to do is build this whole massive infrastructure, which is getting easier, and worry about data pipelines and putting it all together, only to realize that this is not going to work. Or, more often, it doesn't solve any problem. >> So, if I were to sort of boil that down into terms of product terms. >> Yeah. >> The notion that you could have something like Spark running on your laptop. >> Yeah. >> And scaling out to a big cluster. >> Yeah, just run it on a laptop. >> That, yeah. >> In fact, you don't even need Spark. >> Or, I was going to say, not even Spark. >> No. >> Just use Python. >> Just scikit-learn is much better for something like this.
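A sketch of the kind of day-one prototype being described here: a few hundred rows, plain Python and scikit-learn, the simplest model first, so a human can eyeball the results before any pipeline is built. The CSV file and column names are hypothetical stand-ins.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# A few hundred labeled rows are enough to learn whether the problem
# is solvable at all, before any infrastructure gets built.
df = pd.read_csv("sample_accounts.csv")  # hypothetical small extract
X = df[["page_views", "visits_last_30d", "is_target_industry"]]
y = df["converted"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

model = LogisticRegression().fit(X_train, y_train)  # simplest model first
print("F1:", f1_score(y_test, model.predict(X_test)))

# Let a human eyeball a handful of predictions; if this can't reach a
# useful score on a hundred rows, a billion rows likely won't save it.
sample = X_test.head(5)
for features, pred in zip(sample.to_dict("records"), model.predict(sample)):
    print(features, "->", pred)
```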
So I guess it's no different than any other entrepreneurial endeavor. But it's more true in data science projects, firstly, because they're more likely to be wrong; I think we have learned by now how to build good software. >> Imperative software. >> The imperative software. And data science is called data science for a reason. It's much more experimental, right? Like, in science, you don't know. A negative experiment is a fine experiment. >> This is actually, of all that we've been talking about, it might sound the most abstract, but it's also the most profound, because what you're saying is this elaborate process and the technology to support it, you know, this whole pipeline, that it's like you only do that once you've proven the prototype. >> That's right. And get the prototype in a day. >> You don't want that elaborate structure and process when you're testing something out. >> No, yeah, exactly. And, you know, like when we build our own machine learning models, obviously coming out of academia, you know, there was a class project that it took us a year or six months to really design the best models, and test it, and prove it out with intrinsic testing, and we knew it was working. But what we should really have done, what we should do now, is we build models, we do experiments daily. And get to, in essence, the patient with our molecule every day, so, you know, we have the advantage, given that we are in marketing, that we can get to test our molecules or drugs on a daily basis. And we have enough data to test it, and we have enough customers, thankfully, to test it. And some of them are collaborating with us. So, we get to an end solution on a daily basis. >> So, now I understand why you said, we don't need these radical algorithmic breakthroughs or, you know, new super, turbo-charged processors. So, with this approach of really fast prototyping, what are some of the unmet needs in, you know, it's just a matter of cycling through these experiments? >> Yeah, so I think one of the biggest unmet needs today, we're able to understand language, we're able to predict who you should do business with and what you should talk about, but I think natural language generation, or creating a personalized email, really personalized and really beautifully written, is still something that we don't quite, you know, have a full grasp on. And to be able to communicate at human-level personalization, to be able to talk, you know, we can generate ads today, but that's not really, you know, language, right? It is language, but not as sophisticated as what we're talking here. Or to be able to generate text or have a bot speak to you, right? We can have a bot, we can now understand and respond in text, but really speaking to you fluently with context about you is definitely an area we're heavily investing in, or looking to invest in in the near future. >> And with existing technology. >> With existing technology. I think, we think if you can narrow it down, we can generate emails that are much better than what a salesperson would write. In fact, we already have a product that can personalize a website, automatically, using AI, reinforcement learning, all the data we have. And it can rewrite a website to be customized for each visitor, personalized to each visitor, >> Give us an example of what. >> So, you know, for example if you go to Siemens or SAP and you come from pharma, it will take you and surface different content about pharmaceuticals.
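A toy sketch of that content-routing idea: an inferred persona keys into page variants. All names here are invented, and the production system described uses reinforcement learning over behavioral data rather than a static lookup table.

```python
# Hypothetical persona-to-content routing for a B2B site: the visitor's
# inferred industry and role select which page variant to serve.
CONTENT = {
    ("pharma", "cfo"): "roi_and_compliance_page",
    ("pharma", "it"):  "integration_and_security_page",
    ("auto", "cfo"):   "fleet_cost_page",
}
DEFAULT = "generic_landing_page"

def personalize(persona):
    key = (persona.get("industry"), persona.get("role"))
    return CONTENT.get(key, DEFAULT)

# e.g. a persona inferred upstream from IP mapping plus behavioral signals
visitor = {"industry": "pharma", "role": "cfo"}
print(personalize(visitor))  # -> roi_and_compliance_page
```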
And, you know, in essence, at some point you can generate a whole page that's personalized: if somebody comes from pharma, a CFO versus an IT person, it will change the entire page content, right? To that, to, in essence, the entire buyer journey could be personalized. Because, you know, today buying in B2B, it's quite jarring, it's filled with spam, it's, you know, it's not a pleasant experience. It's not a concierge-level experience. And really, in an ideal world, you want B2B or marketing to be personalized. You want it to be like you're being, you know, guided through, if you need something, you can ask a question and you have a personalized assistant talking to you about it. >> So that there's, the journey is not coded in. >> It isn't, yeah. >> The journey, or the conversation response reacts to the >> To the customer. >> To the customer. >> Right, and B2B buyers want, you know, they want something like that. They don't have time to waste on it. Who wants to be lost on a website? >> Right. >> You know, you go to any Fortune 500 company's website and you, it's a mess. >> Okay, so let's back up to Demandbase in the Bay Area software ecosystem. >> Sure. >> So, SalesForce is a big company. >> Yes. >> Marketing is one of their pillars. >> Yes. >> Tell us, what is it about this next gen technology that is so, we touched on this before, but so anathema to the way traditional software companies build their products? >> Yeah, I mean, SalesForce is a very close partner, they're a customer, we work with them very closely. I think they're also an investor, a small investor, in Demandbase. We have a deep relationship with them. And I, myself, come from the traditional software background, you know, I've been building CRM, so I'll talk about myself, because I've seen how different and, you know, I had to sort of transition at a very early stage from a human centric CRM to a data driven CRM, or a human driven versus data driven. And it's, you have to think about things differently. So, one difference is that, when you look at data in human driven CRM, you trust it implicitly because somebody in your org put it in. You may challenge it, it's old, it's stale, but there's no fear that it's a machine recommending you and driving you. And it requires the interfaces to be much different. You have to think about how do you build trust between the person, you know, who's being driven in a Tesla, also, similar problem. And, you know, how do you give them the controls so they can turn off the autopilot, right? And how do you, you know, take feedback from humans to improve the models? So, it's a different way; the human interface even becomes more different, and simpler. The other interesting thing is that if you look at traditional applications, they're quite complicated. They have all these fields because, you know, just enter all this data and you type it in. But the way you interact with our application is that we already know everything, or a lot. So, why bother asking you? We already know where you are, who you are, what you should do, so we are in essence guiding you. More of a, using the Tesla autopilot example, it already knows where you are. It knows you're sitting in the car and it knows that you need to brake because, you know, you're going to crash, so it'll just brake by itself. So, you know, the interface is. >> That's really an interesting analogy. Tesla is a data driven piece of software. >> It is. >> Whereas, you know, my old BMW or whatever is a human driven piece of software.
>> And there's some things in the middle. So, I recently, I mean, looking at cars, I just had a baby, and Volvo is something in the middle. Where, if you're going to have an accident or somebody comes close, it blinks. So, it's like advanced analytics, right? Which is analogous to that. Tesla just stops if you're going to have an accident. And that's the right idea, because if I'm going to have an accident, you don't want to rely on me to look at some light, what if I'm talking on the phone or looking at my kid? You know, some blinking light over there. Which is why advanced analytics hasn't been as successful as it should be. >> Because the hand off between the data driven and the human driven is a very difficult hand off. >> It's a very difficult hand off. And whenever possible, the right answer for us today is if you know everything, and you can take the action, like if you're going to have an accident, just stop. Or, if you need to go, go, right? So if you come out in the morning, you know, and you go to work at 9 am, it should just pull itself out, like, you know, why wait for a human, to, you know, get rid of all the monotonous problems that we ourselves have, right? >> That's a great example. On that note, let's break. This is George Gilbert, having a great conversation with Aman Naimat, Senior VP and CTO of Demandbase, and we will be back shortly with a member of the data science team. >> Thank you, George.

Published Date : Nov 2 2017


Wikibon Conversation with John Furrier and George Gilbert


 

(upbeat electronic music) >> Hello, everyone. Welcome to the Cube Studios in Palo Alto, California. I'm John Furrier, the co-host of the Cube and co-founder of SiliconANGLE Media Inc. I'm here with George Gilbert for a Wikibon conversation on the state of big data. George Gilbert is the analyst at Wikibon covering big data. George, great to see you. Looking good. (laughing) >> Good to see you, John. >> So George, you're obviously covering big data. Everyone knows you. You always ask the tough questions, you're always drilling down, going under the hood, and really inspecting all the trends, and also looking at the technology. What are you working on these days as the big data analyst? What's the hot thing that you're covering? >> OK, so, what's really interesting is we've got this emerging class of applications. The name that we've used so far is modern operational analytic applications. Operational in the sense that they help drive business operations, but analytical in the sense that the analytics either inform or drive transactions, or anticipate and inform interactions with people. That's the core of this class of apps. And then there are some sort of big challenges that customers are having in trying to build, and deploy, and operate these things. That's what I want to go through. >> George, you know, this is a great piece. I can't wait to (mumbling) some of these questions and ask you some pointed questions. But I would agree with you that to me, the number one thing I see customers either fumbling with or accelerating value with is how to operationalize some of the data in a way that they've never done before. So you start to see disciplines come together. You're starting to see people with a notion of digital business being something that's not a department, it's not a marketing department. Data is everywhere, it's horizontally scalable, and the smart executives are really looking at new operational tactics to do that. With that, let me kick off the first question to you. People are trying to balance the cloud, on-premises, and the edge, OK. And that's classic, you're seeing that now. I've got a data center, I have to go to the cloud, a hybrid cloud. And now the edge of the network. We were just talking about blockchain today, there's this huge problem. They've got to balance that, but they've got to balance it versus leveraging specialized services. How do you respond to that? What is your reaction? What is your presentation? >> OK, so let's turn it into something really concrete that everyone can relate to, and then I'll generalize it. The concrete version is, for a number of years, everyone associated Hadoop with big data. And Hadoop, you tried to stand up a cluster on your own premises, for the most part. Amazon had EMR, but sort of the big company activity outside, even including the big tech companies, was to stand up a Hadoop cluster as a pilot and start building a data lake. Then see what you could do with sort of huge amounts of data that you couldn't normally sort of collect and analyze. The operational challenges of standing up that sort of cluster were rather overwhelming, and I'll explain that later, so sort of park that thought. Because of that complexity, more and more customers, all but the most sophisticated, are saying we need a cloud strategy for that. But once you start taking Hadoop into the cloud, the components of this big data analytic system, you have tons more alternatives.
So whereas in Cloudera's version of Hadoop you had Impala as your MPP SQL database. On Amazon, you've got Amazon Redshift, you've got Snowflake, you've got dozens of MPP SQL databases. And so the whole playing field shifts. And not only that, Amazon has instrumented their, in that particular case, their application, to be more of a managed service, so there's a whole lot less for admins to do. And you take that on sort of, if you look at the slides, you take every step in that pipeline. And when you put it on a different cloud, it's got different competitors. And even if you take the same step in a pipeline, let's say Spark on HDFS to do your ETL, and your analysis, and your shaping of data, and even some of the machine learning, you put that on Azure and on Amazon, it's actually on a different storage foundation. So even if you're using the same component, it's different. There's a lot of complexity and a lot of trade offs that you've got to make. >> Is that a problem for customers? >> Yes, because all of a sudden, they have to evaluate what those trade offs are. They have to evaluate the trade off between specialization. Do I use the best-of-breed thing on one platform? And if I do, it's not compatible with what I might be running on prem. >> That'll slow a lot of things down. I can tell you right now, people want to have the same code base on all environments, and then just have the same seamless operational model. OK, that's a great point, George. Thanks for sharing that. The second point here is harmonizing and simplifying management across hybrid clouds. Again, back to your point. You set that up beautifully. Great example, open source innovation hits a roadblock. And the roadblock is incompatible components in multiple clouds. That's a problem. It's a management nightmare. How does harmonization across hybrid clouds work? >> You couldn't have asked it better. Let me put it up in terms of an X Y chart where on the x-axis, you have the components of an analytic pipeline. Ingest, process, analyze, predict, serve. But then on the y-axis, this is for an admin, not a developer. These are just some of the tasks they have to worry about. Data governance, performance monitoring, scheduling and orchestration, availability and recovery, that whole list. Now, if you have a different product for each step in that pipeline, and each product has a different way of handling all those admin tasks, you're basically taking all the unique activities on the y-axis, multiplying them by all the unique products on the x-axis, and you have overwhelming complexity, even if these are managed services on the cloud. Here now you've got several trade offs. Do I use the specialized products that you would call best of breed? Do I try and do end to end integration so I get simplification across the pipeline? Or do I use products that I had on-prem, like you were saying, so that I have seamless compatibility? Or do I use the cloud vendors? That's a tough trade off. There's another similar one for developers. Again, on the y-axis, for all the things that a developer would have to deal with, not all of them, just a sample. The data model and the data itself, how to address it, the programming model, the persistence. So on that y-axis, you multiply all those different things you have to master for each product. And then on the x-axis, all the different products and the pipeline. And you have that same trade off, again. >> Complexity is off the charts. >> Right.
And you can trade end to end integration to simplify the complexity, but we don't really have products that are fully fleshed out and mature that stretch from one end of the pipeline to the other, so that's a challenge. Alright. Let's talk about another way of looking at management. This was looking at the administrators and the developers. Now, we're getting better and better software for monitoring performance and operations, and trying to diagnose root cause when something goes wrong and then remediate it. There are two real approaches. One is you go really deep, but on a narrow part of your application and infrastructure landscape. And that narrow part might be, you know, your analytic pipeline, your big data. The broad approach is to get end to end visibility across the edge with your IOT devices, across on-prem, perhaps even across multiple clouds. That's the breadth approach, end to end visibility. Now, there's a trade off here too, as in all technology choices. When you go deep, you have bounded visibility, but that bounded visibility allows you to understand exactly what is in that set of services, how they fit together, how they work. Because the vendor, knowing that they're only giving you management of your big data pipeline, they can train their models, their machine learning models, so that whenever something goes wrong, they know exactly what caused it and they can filter out all the false positives, the scattered errors that can confuse administrators. Whereas if you want breadth, you want to see end to end your entire landscape so that you can do capacity planning, and see if there was an error way upstream, something might be triggered way downstream or a bunch of things downstream. So the best way to understand this is how much knowledge you have of how all the pieces work together, and how all the software pieces fit together. >> This is actually an interesting point. So if I kind of connect the dots for you here: the bounded root cause analysis is where we see a lot of machine learning, that's where the automation is. >> George: Yeah. >> The unbounded, the breadth, that's where the data volume is. But they can work together, that's what you're saying. >> Yes. And actually, I hadn't even got to that, so thanks for teeing it up. >> John: Did I jump ahead on that one? (laughing) >> No, no, you teed it up. (laughing) Because ultimately-- >> Well a lot of people want to know what's going to be automated away. All the undifferentiated labor and scale can be automated. >> Well, when you talk about them working together. So for the deep, depth-first, there's a small company called Unravel Data that sort of modeled eight million jobs or workloads, big data workloads, from high tech companies, so they know how all that fits together and they can tell you when something goes wrong exactly what goes wrong and how to remediate it. So take something like Rocana or Splunk, they look end to end. The interesting thing that you brought up is at some point, that end to end product is going to be like a data warehouse and the depth products are going to sit on top of it. So you'll have all the contextual data of your end to end landscape, but you'll have the deep knowledge of how things work and what goes wrong sitting on it. >> So just before we jump to the machine learning question which I want to ask you, what you're saying is the industry is evolving to almost looking like a data warehouse model, but in a completely different way. >> Yeah.
Think of it as, another cue. (laughing) >> John: That's what I do, George. I help you out with the cues. (laughing) No, but I mean the data warehouse, everyone knows what that was. A huge industry, created a lot of value, but then the world got rocked by unstructured data. And then their bounded, if you will, view got democratized. So creative destruction happened, which is another word for new entrants came in and incumbents got rattled. But now it's kind of going back to what looks like a data warehouse, but it's completely distributed around. >> Yes. And I was going to do one of my movie references, but-- >> No, don't do it. Save us the judge. >> If you look at this starting in the upper right, that's the data lake where you're collecting all the data and it's for search, it's exploratory. As you get more structure, you get to the descriptive place where you can build dashboards to monitor what's going on. And you get really deep, that's when you have the machine learning. >> Well, the machine learning is hitting the low hanging fruit, and that's where I want to get to next to move it along. Sourcing machine learning capabilities, let's discuss that. >> OK, alright. Just to set context before we get there, notice that when you do end to end visibility, you're really seeing across a broad landscape. And when I'm showing my public cloud big data, that would be depth first just for that component. But you would do breadth first, you could do like a Rocana or a Splunk that then sees across everything. The point I wanted to make was when you said we're reverting back to data warehouses and revisiting that dream again, the management applications started out as saying we know how to look inside machine data and tell you what's going on with your landscape. It turns out that machine data and business operations data, your application data, are really becoming one and the same. So what used to be a transaction, there was one transaction. And that, when you summarized them, that went into the data warehouse. Then we had, with systems of engagement, you had about 100 interaction events that you tracked or sort of stored for every business transaction. And then when we went out to the big data world, it's so resource intensive that we actually had 1,000 to 10,000 infrastructure events for every business transaction. So that's why the data volumes have grown so much and why we had to go back first to the data lake, and then curate it into the warehouse. >> Classic innovation story, great. Machine learning. Sourcing machine learning capabilities, 'cause that's where the rubber starts hitting the road. You're starting to see clear skies when it comes to where machine learning is starting to fit in. Sourcing machine learning capabilities. >> You know, even though we sort of didn't really rehearse this, you're helping cue me up perfectly. Let me make the assertion that with machine learning, we have the same shortage of really trained data scientists that we had when we were trying to stand up Hadoop clusters and do big data analytics. We did not have enough administrators because these were open source components built from essentially different projects, and putting them all together required a huge amount of skills. Data science requires, really, knowledge of algorithms that even really sophisticated programmers will tell you, "Jeez, now I need a PhD "to really understand how this stuff works."
So the shortage, that means we're not going to get a lot of hand-built machine learning applications for a while. >> John: There are a lot of libraries out there right now; you see TensorFlow from Google. Big traction with that application. >> George: But for PhDs, for PhDs. My contention is-- >> John: Well developers too, you could argue developers, but I'm just putting it out there. >> George: I will get to that, actually. A slide just on that. Let me do this one first, because my contention is the first big application, widespread application of machine learning, is going to be the depth-first management, because it comes with a model built in of how all the big data workloads, services, and infrastructure fit together and work together. And if you look at how the machine learning model operates, when it knows something goes wrong, let's say an analytic job takes 17 hours and then just falls over and crashes, the model can actually look at the data layout and say we have way too much on one node, and it can change the settings and change the layout of the data because it knows how all the stuff works. The point about this is the vendor. In this particular example, Unravel Data, they built into their model an understanding of how to keep a big data workload running, as opposed to telling the customer, "You have to program it." So that fits into the question you were just asking, which is where do you get this talent. When you were talking about like TensorFlow, and Caffe, and Torch, and MXNet, those are all like assembly language. Yes, those are the most powerful places you could go to program machine learning. But the number of people is inversely proportional to the power of those. >> John: Yeah, those are like really unique specialty people. High, you know, the top guys. >> George: Lab coats, rocket scientists. >> John: Well yeah, just high end tier one coders, tier one brains coding away, AI gurus. This is not your working developer. >> George: But if you go up two levels. So go up one level is Amazon machine learning, Spark machine learning. Go up another level, and I'm using Amazon as an example here. Amazon has a vision service called Rekognition. They have a speech generation service, natural language. Those are developer ready. And when I say developer ready, I mean a developer just uses an API, you know, passes in the data, and the result comes out. He doesn't have to know how the model works. >> John: It's kind of like what DevOps was for cloud at the end of the day. This slide is completely accurate in my opinion. And we're at the early days and you're starting to see the platforms develop. It's the classic abstraction layer. Whoever can abstract away the complexity as AI and machine learning grows is going to be the winning platform, no doubt about it. Amazon is showing some good moves there. >> George: And you know how they abstracted away. In traditional programming, it was just building higher and higher APIs, more accessible. In machine learning, you can't do that. You have to actually train the models, which means you need data. So if you look at the big cloud vendors right now. So Google, Microsoft, Amazon, and IBM. Most of them, the first three, they have a lot of data from their B to C businesses. So you know, people talking to Echo, people talking to Google Assistant or Siri. That's where they get enough of their speech. >> John: So data equals power? >> George: Yes. >> By having data, you have the ingredients.
And the more data that you have, the more data that you know about, the more data that has information around it, the more effective it can be to train machine learning algorithms. >> Yes. >> And the benefit comes back to the people who have the data. >> Yes. And so even though your capabilities get narrower, 'cause you could do anything on TensorFlow. >> John: Well, that's why Facebook is getting killed right now, just to kind of change tangents. They have all this data and people are very unhappy, it was just revealed that the Russians were targeting anti-Semitic advertising, they enabled that. So it's hard to be a data platform and still provide user utility. This is what's going on. Whoever has the data has the power. It was a Frankenstein moment for Facebook. So there's that out there for everyone. How do companies do the right thing? >> And there's also the issue of customer intellectual property protection. As consumers, we're like you can take our voice, you can take all our speech to Siri or to Echo or whatever and get better at recognizing speech because we've given up control of that, 'cause we want those services for free. >> Whoever can shift the data value to the users. >> George: To the developers. >> Or to the developers, or communities, better said, will win. >> OK. >> In my opinion, that's my opinion. >> For the most part, Amazon, Microsoft, and Google have similar data assets. For the most part, so far. IBM has something different, which is they work closely with their industry customers and they build progressively. They're working with Mercedes, they're working with BMW. They'll work on the connected car, you know, the autonomous car, and they build out those models slowly. >> So George, this slide is really, really interesting, and I think this should be a roadmap for all customers to look at to try to peg where they are in the machine learning journey. But then the question comes in. They do the blocking and tackling, they have the foundational low level stuff done, they're building the models, they're understanding the mission, they have the right organizational mindset and personnel. Now, they want to orchestrate it and implement it into action. That's the final question. How do you orchestrate the distributed machine learning feedback and the data coherency? How do you get this thing scaling? How do these machines and the training happen so you have the breadth, and then you could bring the machine learning up the curve into the dashboard? >> OK. We've saved the best for last. It's not easy. When I show the chevrons, that's the analytic data pipeline. And imagine in the serve and predict at the very end, let's take an IOT app, a very sophisticated one, which would be an autonomous car. And it doesn't actually have to be an autonomous one, you could just be collecting a lot of information off the car to do a better job insuring it, as the insurance company. But the key then is you're collecting data on a fleet of cars, right? You're collecting data off each one, but you're also then collecting the fleet. And that, in the cloud, is where you keep improving your model of how the car works. You run simulations to figure out not just how to design better ones in the future, but how to tune and optimize the ones that are on the road now. That's number three. And then in four, you push that feedback back out to the cars on the road.
And you have to manage, and this is tricky, you have to make sure that the models that you trained in step three are coherent, or the same, when you take out the fleet data and then you put the model for a particular instance of a car back out on the highway. >> George, this is a great example, and I think this slide really represents the modern analytical operational role in digital business. You can't look further than Tesla, this is essentially Tesla, and now all cars as a great example 'cause it's complex, it's an internet (mumbling) device, it's on the edge of the network, it's mobility, it's using 5G. It encapsulates everything that you are presenting, so I think this example is a great one, of the modern operational analytic applications that support digital business. Thanks for joining this Wikibon conversation. >> Thank you, John. >> George Gilbert, the analyst at Wikibon covering big data and the modern operational analytical system supporting digital business. It's data driven. The people with the data can train the machines that have the power. That's the mandate, that's the action item. I'm John Furrier with George Gilbert. Thanks for watching. (upbeat electronic music)
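A minimal sketch of the coherency check George describes: a fleet model trained in the cloud carries a version derived from its parameters and training snapshot, and a car only accepts a push whose version the registry recognizes. The registry, field names, and checks are all hypothetical.

```python
import hashlib
import json

class ModelRegistry:
    """Toy stand-in for a cloud-side registry of fleet models."""
    def __init__(self):
        self.models = {}  # version -> model parameters

    def publish(self, params, training_snapshot_id):
        # The version is derived from both the parameters and the data
        # snapshot they were trained on, so cloud and edge can agree.
        blob = json.dumps({"params": params, "snapshot": training_snapshot_id},
                          sort_keys=True)
        version = hashlib.sha256(blob.encode()).hexdigest()[:12]
        self.models[version] = params
        return version

def push_to_car(car, registry, version):
    # The car only accepts a model whose version the registry knows;
    # a stale or locally mutated model fails the coherency check.
    if version not in registry.models:
        raise ValueError(f"car {car['id']}: unknown model version {version}")
    car["model_version"] = version
    car["model"] = registry.models[version]

registry = ModelRegistry()
v1 = registry.publish({"threshold": 0.7}, training_snapshot_id="fleet-2017-09")
car = {"id": "car-42"}
push_to_car(car, registry, v1)
print(car["model_version"], car["model"])
```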

Published Date : Sep 23 2017


Breaking Analysis: Databricks faces critical strategic decisions…here’s why


 

>> From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> Spark became a top level Apache project in 2014, and then shortly thereafter, burst onto the big data scene. Spark, along with the cloud, transformed and in many ways, disrupted the big data market. Databricks optimized its tech stack for Spark and took advantage of the cloud to really cleverly deliver a managed service that has become a leading AI and data platform among data scientists and data engineers. However, emerging customer data requirements are shifting into a direction that will cause modern data platform players generally and Databricks, specifically, we think, to make some key directional decisions and perhaps even reinvent themselves. Hello and welcome to this week's Wikibon theCUBE Insights, powered by ETR. In this Breaking Analysis, we're going to do a deep dive into Databricks. We'll explore its current impressive market momentum. We're going to use some ETR survey data to show that, and then we'll lay out how customer data requirements are changing and what the ideal data platform will look like in the midterm future. We'll then evaluate core elements of the Databricks portfolio against that vision, and then we'll close with some strategic decisions that we think the company faces. And to do so, we welcome in our good friend, George Gilbert, former equities analyst, market analyst, and current Principal at TechAlpha Partners. George, good to see you. Thanks for coming on. >> Good to see you, Dave. >> All right, let me set this up. We're going to start by taking a look at where Databricks sits in the market in terms of how customers perceive the company and what its momentum looks like. And this chart that we're showing here is data from ETS, the emerging technology survey of private companies. The N is 1,421. What we did is we cut the data on three sectors, analytics, database-data warehouse, and AI/ML. The vertical axis is a measure of customer sentiment, which evaluates an IT decision maker's awareness of the firm and the likelihood of engaging and/or purchase intent. The horizontal axis shows mindshare in the dataset, and we've highlighted Databricks, which has been a consistent high performer in this survey over the last several quarters. And as we, by the way, just as an aside, as we previously reported, OpenAI, which burst onto the scene this past quarter, leads all names, but Databricks is still prominent. You can see that the ETR shows some open source tools for reference, but as far as firms go, Databricks is very impressively positioned. Now, let's see how they stack up to some mainstream cohorts in the data space, against some bigger companies and sometimes public companies. This chart shows net score on the vertical axis, which is a measure of spending momentum, and pervasiveness in the data set on the horizontal axis. You can see that chart insert in the upper right, that informs how the dots are plotted, and net score against shared N. And that red dotted line at 40% indicates a highly elevated net score, anything above that we think is really, really impressive. And here we're just comparing Databricks with Snowflake, Cloudera, and Oracle. And that squiggly line leading to Databricks shows their path since 2021 by quarter. And you can see it's performing extremely well, maintaining an elevated net score and net range.
Now it's comparable in the vertical axis to Snowflake, and it consistently is moving to the right and gaining share. Now, why did we choose to show Cloudera and Oracle? The reason is that Cloudera got the whole big data era started and was disrupted by Spark, and of course the cloud, Spark and Databricks. And Oracle, in many ways, was the target of early big data players like Cloudera. Take a listen to Cloudera CEO at the time, Mike Olson. This is back in 2010, first year of theCUBE, play the clip. >> Look, back in the day, if you had a data problem, if you needed to run business analytics, you wrote the biggest check you could to Sun Microsystems, and you bought a great big, single box, central server, and any money that was left over, you handed to Oracle for database licenses and you installed that database on that box, and that was where you went for data. That was your temple of information. >> Okay? So Mike Olson implied that monolithic model was too expensive and inflexible, and Cloudera set out to fix that. But the best laid plans, as they say, George, what do you make of the data that we just shared? >> So where Databricks has really come up out of sort of Cloudera's tailpipe was they took big data processing, made it coherent, made it a managed service so it could run in the cloud. So it relieved customers of the operational burden. Where they're really strong and where their traditional meat and potatoes or bread and butter is the predictive and prescriptive analytics, that is, building and training and serving machine learning models. They've tried to move into traditional business intelligence, the more traditional descriptive and diagnostic analytics, but they're less mature there. So what that means is, the reason you see Databricks and Snowflake kind of side by side is there are many, many accounts that have both Snowflake for business intelligence, Databricks for AI machine learning, where Snowflake, I'm sorry, where Databricks also did really well was in core data engineering, refining the data, the old ETL process, which kind of turned into ELT, where you loaded into the analytic repository in raw form and refine it. And so people have really used both, and each is trying to get into the other. >> Yeah, absolutely. We've reported on this quite a bit. Snowflake, kind of moving into the domain of Databricks and vice versa. And the last bit of ETR evidence that we want to share in terms of the company's momentum comes from ETR's Round Tables. They're run by Erik Bradley, and now former Gartner analyst and George, your colleague back at Gartner, Daren Brabham. And what we're going to show here is some direct quotes of IT pros in those Round Tables. There's a data science head and a CIO as well. Just make a few call outs here, we won't spend too much time on it, but starting at the top, like all of us, we can't talk about Databricks without mentioning Snowflake. Those two get us excited. Second comment zeros in on the flexibility and the robustness of Databricks from a data warehouse perspective. And then the last point is, despite competition from cloud players, Databricks has reinvented itself a couple of times over the years. And George, we're going to lay out today a scenario that perhaps calls for Databricks to do that once again. >> Their big opportunity and their big challenge for every tech company, it's managing a technology transition. The transition that we're talking about is something that's been bubbling up, but it's really epochal.
First time in 60 years, we're moving from an application-centric view of the world to a data-centric view, because decisions are becoming more important than automating processes. So let me let you sort of develop. >> Yeah, so let's talk about that here. We're going to put up some bullets on precisely that point and the changing sort of customer environment. So you've got IT stacks shifting, as George just said, from application centric silos to data centric stacks where the priority is shifting from automating processes to automating decisions. You know, you look at RPA and there's still a lot of automation going on, but from the focus of that application centricity and the data locked into those apps, that's changing. Data has historically been on the outskirts in silos, but organizations, you think of Amazon, think Uber, Airbnb, they're putting data at the core, and logic is increasingly being embedded in the data instead of the reverse. In other words, today, the data's locked inside the app, which is why you need to extract that data and stick it into a data warehouse. The point, George, is we're putting forth this new vision for how data is going to be used. And you've used this Uber example to underscore the future state. Please explain? >> Okay, so this is hopefully an example everyone can relate to. The idea is first, you're automating things that are happening in the real world and decisions that make those things happen autonomously without humans in the loop all the time. So to use the Uber example on your phone, you call a car, you call a driver. Automatically, the Uber app then looks at what drivers are in the vicinity, what drivers are free, matches one, calculates an ETA to you, calculates a price, calculates an ETA to your destination, and then directs the driver once they're there. The point of this is that that cannot happen in an application-centric world very easily because all these little apps, the drivers, the riders, the routes, the fares, those call on data locked up in many different apps, but they have to sit on a layer that makes it all coherent. >> But George, so if Uber's doing this, doesn't this tech already exist? Isn't there a tech platform that does this already? >> Yes, and the mission of the entire tech industry is to build services that make it possible to compose and operate similar platforms and tools, but with the skills of mainstream developers in mainstream corporations, not the rocket scientists at Uber and Amazon. >> Okay, so we're talking about horizontally scaling across the industry, and actually giving a lot more organizations access to this technology. So by way of review, let's summarize the trend that's going on today in terms of the modern data stack that is propelling the likes of Databricks and Snowflake, which we just showed you in the ETR data, and is really a tailwind for them. So the trend is toward this common repository for analytic data, that could be multiple virtual data warehouses inside of Snowflake, but you're in that Snowflake environment or Lakehouses from Databricks or multiple data lakes. And we've talked about what JP Morgan Chase is doing with the data mesh and gluing data lakes together, you've got various public clouds playing in this game, and then the data is annotated to have a common meaning. In other words, there's a semantic layer that enables applications to talk to the data elements and know that they have common and coherent meaning.
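George's Uber example is concrete enough to sketch in code. The fragment below is a minimal, hedged illustration, with entirely hypothetical names and numbers rather than anything Uber actually runs, of the shape of that decision loop: riders and drivers are just data entities with shared meaning, and the "app" is logic that matches, prices, and estimates over them.

```python
# Minimal sketch of an Uber-style dispatch decision loop over coherent
# data entities. All names, formulas, and rates here are illustrative
# assumptions, not any vendor's actual API or pricing model.
from dataclasses import dataclass
import math

@dataclass
class Driver:
    id: str
    lat: float
    lon: float
    free: bool

@dataclass
class RideRequest:
    rider_id: str
    lat: float
    lon: float

def distance_km(lat1, lon1, lat2, lon2):
    # Crude equirectangular approximation; fine for a sketch.
    dx = (lon2 - lon1) * 111.32 * math.cos(math.radians((lat1 + lat2) / 2))
    dy = (lat2 - lat1) * 110.57
    return math.hypot(dx, dy)

def dispatch(request, drivers, speed_kmh=30.0, rate_per_km=1.8):
    """Match the nearest free driver, then derive an ETA and a price."""
    free = [d for d in drivers if d.free]
    if not free:
        return None
    best = min(free, key=lambda d: distance_km(d.lat, d.lon, request.lat, request.lon))
    km = distance_km(best.lat, best.lon, request.lat, request.lon)
    eta_min = 60.0 * km / speed_kmh
    return {"driver": best.id, "eta_min": round(eta_min, 1),
            "pickup_price": round(km * rate_per_km, 2)}

drivers = [Driver("d1", 37.44, -122.16, True), Driver("d2", 37.48, -122.20, False)]
print(dispatch(RideRequest("r9", 37.45, -122.17), drivers))
```

The point of the sketch is that none of this logic lives inside any one application; it reads from a layer where drivers, riders, and fares carry a common, coherent meaning.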
>> So George, the good news is this approach is more effective than the legacy monolithic models that Mike Olson was talking about, so what's the problem with this in your view? >> So today's data platforms added immense value 'cause they connected the data that was previously locked up in these monolithic apps or on all these different microservices, and that supported traditional BI and AI/ML use cases. But now if we want to build apps like Uber or Amazon.com, where they've got essentially an autonomously running supply chain and e-commerce app where humans only care for and feed it, but the thing itself is figuring out what to buy, when to buy, where to deploy it, when to ship it. We need a semantic layer on top of the data. So that, as you were saying, the data that's coming from all those different apps is integrated, not just connected, and it means the same. And the issue is whenever you add a new layer to a stack to support new applications, there are implications for the already existing layers, like can they support the new layer and its use cases? So for instance, if you add a semantic layer that embeds app logic with the data rather than vice versa, which we've been talking about and that's been the case for 60 years, then the new data layer faces challenges in that the way you manage that data, the way you analyze that data, is not supported by today's tools. >> Okay, so actually Alex, bring me up that last slide if you would, I mean, you're basically saying at the bottom here, today's repositories don't really do joins at scale. The future is you're talking about hundreds or thousands or millions of data connections, and today's systems, we're talking about, I don't know, 6, 8, 10 joins and that is the fundamental problem. You're saying a new data era is coming and existing systems won't be able to handle it? >> Yeah, one way of thinking about it is that even though we call them relational databases, when we actually want to do lots of joins or when we want to analyze data from lots of different tables, we created a whole new industry for analytic databases where you sort of munge the data together into fewer tables. So you didn't have to do as many joins because the joins are difficult and slow. And when you're going to arbitrarily join thousands, hundreds of thousands or across millions of elements, you need a new type of database. We have them, they're called graph databases, but to query them, you go back to the prerelational era in terms of their usability. >> Okay, so we're going to come back to that and talk about how you get around that problem. But let's first lay out what the ideal data platform of the future we think looks like. And again, we're going to come back to use this Uber example. In this graphic that George put together, awesome. We got three layers. The application layer is where the data products reside. The example here is drivers, rides, maps, routes, ETA, et cetera. The digital version of what we were talking about in the previous slide, people, places and things. The next layer is the data layer, that breaks down the silos and connects the data elements through semantics and everything is coherent. And then at the bottom layer, the legacy operational systems feed that data layer. George, explain what's different here, the graph database element, you talk about the relational query capabilities, and why can't I just throw memory at solving this problem?
>> Some of the graph databases do throw memory at the problem and maybe without naming names, some of them live entirely in memory. And what you're dealing with is a prerelational in-memory database system where you navigate between elements, and the issue with that is we've had SQL for 50 years, so we don't have to navigate, we can say what we want without saying how to get it. That's the core of the problem. >> Okay. So if I may, I just want to drill into this a little bit. So you're talking about the expressiveness of a graph. Alex, if you'd bring that back out, the fourth bullet, expressiveness of a graph database with the relational ease of query. Can you explain what you mean by that? >> Yeah, so graphs are great because you can describe anything with a graph, that's why they're becoming so popular. Expressive means you can represent anything easily. They're conducive to, you might say, in a world where we now want like the metaverse, like with a 3D world, and I don't mean the Facebook metaverse, I mean like the business metaverse when we want to capture data about everything, but we want it in context, we want to build a set of digital twins that represent everything going on in the world. And Uber is a tiny example of that. Uber built a graph to represent all the drivers and riders and maps and routes. But what you need out of a database isn't just a way to store stuff and update stuff. You need to be able to ask questions of it, you need to be able to query it. And if you go back to prerelational days, you had to know how to find your way to the data. It's sort of like when you give directions to someone and they didn't have a GPS system and a mapping system, you had to give them turn by turn directions. Whereas when you have a GPS and a mapping system, which is like the relational thing, you just say where you want to go, and it spits out the turn by turn directions, which let's say, the car might follow or whoever you're directing would follow. But the point is, it's much easier in a relational database to say, "I just want to get these results. You figure out how to get it." Graph databases, they have not taken over the world because in some ways, it's taking a 50 year leap backwards. >> Alright, got it. Okay. Let's take a look at how the current Databricks offerings map to that ideal state that we just laid out. So to do that, we put together this chart that looks at the key elements of the Databricks portfolio, the core capability, the weakness, and the threat that may loom. Start with the Delta Lake, that's the storage layer, which is great for files and tables. It's got true separation of compute and storage, I want you to double click on that George, as independent elements, but it's weaker for the type of low latency ingest that we see coming in the future. And some of the threats highlighted here. AWS could add transactional tables to S3, Iceberg adoption is picking up and could accelerate, that could disrupt Databricks. George, add some color here please? >> Okay, so this is the sort of classic competitive forces analysis where you want to look at, so what are customers demanding? What's competitive pressure? What are substitutes? Even what your suppliers might be pushing. Here, Delta Lake is at its core, a set of transactional tables that sit on an object store. So think of it in a database system, this is the storage engine. So since S3 has been getting stronger for 15 years, you could see a scenario where they add transactional tables.
We have an open source alternative in Iceberg, which Snowflake and others support. But at the same time, Databricks has built an ecosystem out of tools, their own and others, that read and write to Delta tables, that's what makes the Delta Lake an ecosystem. So they have a catalog, the whole machine learning tool chain talks directly to the data here. That was their great advantage because in the past with Snowflake, you had to pull all the data out of the database before the machine learning tools could work with it, that was a major shortcoming. They fixed that. But the point here is that even before we get to the semantic layer, the core foundation is under threat. >> Yep. Got it. Okay. We got a lot of ground to cover. So we're going to take a look at the Spark Execution Engine next. Think of that as the refinery that runs really efficient batch processing. That's kind of what disrupted Hadoop in a large way, but it's not Python friendly and that's an issue because the data science and the data engineering crowd are moving in that direction, and/or they're using DBT. George, we had Tristan Handy on at Supercloud, really interesting discussion that you and I did. Explain why this is an issue for Databricks? >> So once the data lake was in place, what people did was they refined their data in batch, and Spark has always had streaming support and it's gotten better. The underlying storage as we've talked about is an issue. But basically they took raw data, then they refined it into tables that were like customers and products and partners. And then they refined that again into what was like gold artifacts, which might be business intelligence metrics or dashboards, which were collections of metrics. But they were running it on the Spark Execution Engine, which is a Java-based engine or it's running on a Java-based virtual machine, which means all the data scientists and the data engineers who want to work with Python are really working in sort of oil and water. Like if you get an error in Python, you can't tell whether the problem's in Python or in Spark. There's just an impedance mismatch between the two. And then at the same time, the whole world is now gravitating towards DBT because it's a very nice and simple way to compose these data processing pipelines, and people are using either SQL in DBT or Python in DBT, and that kind of is a substitute for doing it all in Spark. So it's under threat even before we get to that semantic layer, it so happens that DBT itself is becoming the authoring environment for the semantic layer with business intelligence metrics. But that's again, this is the second element that's under direct substitution and competitive threat. >> Okay, let's now move down to the third element, which is the Photon. Photon is Databricks' BI Lakehouse, which has integration with the Databricks tooling, which is very rich, it's newer. And it's also not well suited for high concurrency and low latency use cases, which we think are going to increasingly become the norm over time. George, the call out threat here is customers want to connect everything to a semantic layer. Explain your thinking here and why this is a potential threat to Databricks? >> Okay, so two issues here. What you were touching on, which is the high concurrency, low latency, when people are running like thousands of dashboards and data is streaming in, that's a problem because a SQL data warehouse query engine, something like that, matures over five to 10 years.
It's one of these things, the joke that Andy Jassy makes just in general, he's really talking about Azure, but there's no compression algorithm for experience. The Snowflake guy started more than five years earlier, and for a bunch of reasons, that lead is not something that Databricks can shrink. They'll always be behind. So that's why Snowflake has transactional tables now and we can get into that in another show. But the key point is, so near term, it's struggling to keep up with the use cases that are core to business intelligence, which is highly concurrent, lots of users doing interactive query. But then when you get to a semantic layer, that's when you need to be able to query data that might have thousands or tens of thousands or hundreds of thousands of joins. And that's a SQL query engine, traditional SQL query engine is just not built for that. That's the core problem of traditional relational databases. >> Now this is a quick aside. We always talk about Snowflake and Databricks in sort of the same context. We're not necessarily saying that Snowflake is in a position to tackle all these problems. We'll deal with that separately. So we don't mean to imply that, but we're just sort of laying out some of the things that Snowflake or rather Databricks customers we think, need to be thinking about and having conversations with Databricks about and we hope to have them as well. We'll come back to that in terms of sort of strategic options. But finally, when come back to the table, we have Databricks' AI/ML Tool Chain, which has been an awesome capability for the data science crowd. It's comprehensive, it's a one-stop shop solution, but the kicker here is that it's optimized for supervised model building. And the concern is that foundational models like GPT could cannibalize the current Databricks tooling, but George, can't Databricks, like other software companies, integrate foundation model capabilities into its platform? >> Okay, so the sound bite answer to that is sure, IBM 3270 terminals could call out to a graphical user interface when they're running on the XT terminal, but they're not exactly good citizens in that world. The core issue is Databricks has this wonderful end-to-end tool chain for training, deploying, monitoring, running inference on supervised models. But the paradigm there is the customer builds and trains and deploys each model for each feature or application. In a world of foundation models which are pre-trained and unsupervised, the entire tool chain is different. So it's not like Databricks can junk everything they've done and start over with all their engineers. They have to keep maintaining what they've done in the old world, but they have to build something new that's optimized for the new world. It's a classic technology transition and their mentality appears to be, "Oh, we'll support the new stuff from our old stuff." Which is suboptimal, and as we'll talk about, their biggest patron and the company that put them on the map, Microsoft, really stopped working on their old stuff three years ago so that they could build a new tool chain optimized for this new world. >> Yeah, and so let's sort of close with what we think the options are and decisions that Databricks has for its future architecture. They're smart people. I mean we've had Ali Ghodsi on many times, super impressive. I think they've got to be keenly aware of the limitations, what's going on with foundation models. But at any rate, here in this chart, we lay out sort of three scenarios. 
One is re-architect the platform by incrementally adopting new technologies. An example might be to layer a graph query engine on top of its stack. They could license key technologies like a graph database, they could get aggressive on M&A and buy in relational knowledge graphs, semantic technologies, vector database technologies. George, as David Floyer always says, "A lot of ways to skin a cat." We've seen companies like, even think about EMC, maintain its relevance through M&A for many, many years. George, give us your thought on each of these strategic options? >> Okay, I find this question the most challenging 'cause remember, I used to be an equity research analyst. I worked for Frank Quattrone, we were one of the top tech shops in the banking industry, although this is 20 years ago. But the M&A team was the top team in the industry and everyone wanted them on their side. And I remember going to meetings with these CEOs, where Frank and the bankers would say, "You want us for your M&A work because we can do better." And they really could do better. But in software, it's not like with EMC in hardware because with hardware, it's easier to connect different boxes. With software, the whole point of a software company is to integrate and architect the components so they fit together and reinforce each other, and that makes M&A harder. You can do it, but it takes a long time to fit the pieces together. Let me give you examples. If they put a graph query engine, let's say something like TinkerPop, on top of, I don't even know if it's possible, but let's say they put it on top of Delta Lake, then you have this graph query engine talking to their storage layer, Delta Lake. But if you want to do analysis, you got to put the data in Photon, which is not really ideal for highly connected data. If you license a graph database, then most of your data is in the Delta Lake and how do you sync it with the graph database? If you do sync it, you've got data in two places, which kind of defeats the purpose of having a unified repository. I find this semantic layer option in number three actually more promising, because that's something that you can layer on top of the storage layer that you have already. You just have to figure out then how to have your query engines talk to that. What I'm trying to highlight is, it's easy as an analyst to say, "You can buy this company or license that technology." But the really hard work is making it all work together and that is where the challenge is. >> Yeah, and well look, I thank you for laying that out. We've seen it, certainly Microsoft and Oracle. I guess you might argue that well, Microsoft had a monopoly in its desktop software and was able to throw off cash for a decade plus while its stock was going sideways. Oracle had won the database wars and had amazing margins and cash flow to be able to do that. Databricks hasn't even gone public yet, but I want to close with some of the players to watch. Alex, if you'd bring that back up, number four here. AWS, we talked about some of their options with S3 and it's not just AWS, it's blob storage, object storage. Microsoft, as you sort of alluded to, was an early go-to market channel for Databricks. We didn't address that really. So maybe in the closing comments we can. Google obviously, Snowflake of course, we're going to dissect their options in future Breaking Analysis. DBT Labs, where do they fit? Bob Muglia's company, Relational.ai, why are these players to watch, George, in your opinion?
>> So everyone is trying to assemble and integrate the pieces that would make building data applications, data products easy. And the critical part isn't just assembling a bunch of pieces, which is traditionally what AWS did. It's a Unix ethos, which is we give you the tools, you put 'em together, 'cause you then have the maximum choice and maximum power. So what the hyperscalers are doing is they're taking their key value stores, in the case of AWS it's DynamoDB, in the case of Azure it's Cosmos DB, and each are putting a graph query engine on top of those. So they have a unified storage and graph database engine, like all the data would be collected in the key value store. Then you have a graph database, that's how they're going to be presenting a foundation for building these data apps. DBT Labs is putting a semantic layer on top of data lakes and data warehouses and as we'll talk about, I'm sure in the future, that makes it easier to swap out the underlying data platform or swap in new ones for specialized use cases. Snowflake, what they're doing, they're so strong in data management and with their transactional tables, what they're trying to do is take in the operational data that used to be in the province of many state stores like MongoDB and say, "If you manage that data with us, it'll be connected to your analytic data without having to send it through a pipeline." And that's hugely valuable. Relational.ai is the wildcard, 'cause what they're trying to do, it's almost like a holy grail where you're trying to take the expressiveness of connecting all your data in a graph but making it as easy to query as you've always had it in a SQL database or I should say, in a relational database. And if they do that, it's sort of like, it'll be as easy to program these data apps as a spreadsheet was compared to procedural languages, like BASIC or Pascal. That's the implications of Relational.ai. >> Yeah, and again, we talked before, why can't you just throw this all in memory? We're talking in that example of really getting down to differences in how you lay the data out on disk in a really new database architecture, correct? >> Yes. And that's why it's not clear that you could take a data lake or even a Snowflake and why you can't put a relational knowledge graph on those. You could potentially put a graph database, but it'll be compromised because to really do what Relational.ai has done, which is the ease of relational on top of the power of graph, you actually need to change how you're storing your data on disk or even in memory. So you can't, in other words, it's not like, oh we can add graph support to Snowflake, 'cause if you did that, you'd have to change, or in your data lake, you'd have to change how the data is physically laid out. And then that would break all the tools that talk to that currently. >> What in your estimation, is the timeframe where this becomes critical for a Databricks and potentially Snowflake and others? I mentioned earlier midterm, are we talking three to five years here? Are we talking end of decade? What's your radar say? >> I think something surprising is going on that's going to sort of come up the tailpipe and take everyone by storm.
All the hype around business intelligence metrics, which is what we used to put in our dashboards where bookings, billings, revenue, customer, those things, those were the key artifacts that used to live in definitions in your BI tools, and DBT has basically created a standard for defining those so they live in your data pipeline or they're defined in their data pipeline and executed in the data warehouse or data lake in a shared way, so that all tools can use them. This sounds like a digression, it's not. All this stuff about data mesh, data fabric, all that's going on is we need a semantic layer and the business intelligence metrics are defining common semantics for your data. And I think we're going to find by the end of this year, that metrics are how we annotate all our analytic data to start adding common semantics to it. And we're going to find this semantic layer, it's not three to five years off, it's going to be staring us in the face by the end of this year. >> Interesting. And of course SVB today was shut down. We're seeing serious tech headwinds, and oftentimes in these sort of downturns or flat turns, which feels like this could be going on for a while, we emerge with a lot of new players and a lot of new technology. George, we got to leave it there. Thank you to George Gilbert for excellent insights and input for today's episode. I want to thank Alex Myerson who's on production and manages the podcast, of course Ken Schiffman as well. Kristin Martin and Cheryl Knight help get the word out on social media and in our newsletters. And Rob Hof is our EIC over at Siliconangle.com, he does some great editing. Remember all these episodes, they're available as podcasts. Wherever you listen, all you got to do is search Breaking Analysis Podcast, we publish each week on wikibon.com and siliconangle.com, or you can email me at David.Vellante@siliconangle.com, or DM me @DVellante. Comment on our LinkedIn post, and please do check out ETR.ai, great survey data, enterprise tech focus, phenomenal. This is Dave Vellante for theCUBE Insights powered by ETR. Thanks for watching, and we'll see you next time on Breaking Analysis.
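The metrics-layer idea George closes with can be made concrete with a small sketch: a metric is defined once, in one shared place, and every tool evaluates it identically. This is a hedged illustration of that general pattern, not dbt's actual metric spec; all field names and data here are hypothetical.

```python
# Sketch of "metrics as shared semantics": one definition, evaluated
# the same way by any BI tool, notebook, or pipeline. The schema is
# an illustrative assumption, not any vendor's real metric format.
monthly_recurring_revenue = {
    "name": "mrr",
    "description": "Sum of active subscription fees, by month",
    "expression": lambda row: row["fee"] if row["status"] == "active" else 0.0,
    "grain": "month",
}

def evaluate(metric, rows):
    """Compute the metric per grain; every consumer gets the same answer."""
    totals = {}
    for row in rows:
        key = row[metric["grain"]]
        totals[key] = totals.get(key, 0.0) + metric["expression"](row)
    return totals

subscriptions = [
    {"month": "2023-01", "fee": 100.0, "status": "active"},
    {"month": "2023-01", "fee": 50.0,  "status": "churned"},
    {"month": "2023-02", "fee": 100.0, "status": "active"},
]
print(evaluate(monthly_recurring_revenue, subscriptions))
# {'2023-01': 100.0, '2023-02': 100.0}
```

Because the definition lives with the data pipeline rather than in any one dashboard, it acts as exactly the kind of common semantic annotation the discussion describes.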

Published Date : Mar 10 2023

Is Data Mesh the Killer App for Supercloud | Supercloud2


 

(gentle bright music) >> Okay, welcome back to our "Supercloud 2" event live coverage here at stage performance in Palo Alto syndicating around the world. I'm John Furrier with Dave Vellante. We've got exclusive news and a scoop here for SiliconANGLE and theCUBE. Zhamak Dehghani, creator of data mesh, has formed a new company called NextData, NextData.com. She's a cube alumni and contributor to our Supercloud initiative, as well as our coverage and breaking analysis with Dave Vellante on data, the killer app for Supercloud. Zhamak, great to see you. Thank you for coming into the studio and congratulations on your newly formed venture and continued success on the data mesh. >> Thank you so much. It's great to be here. Great to see you in person. >> Dave: Yeah, finally. >> John: Wonderful. Your contributions to the data conversation have been well-documented certainly by us and others in the industry. Data mesh taking the world by storm. Some people are debating it, throwing, you know, cold water on it. Some are, I think, it's the next big thing. Tell us about the data mesh super data apps that are emerging out of cloud. >> I mean, data mesh, as you said, it's, you know, the pain points that it surfaced were universal. Everybody said, "Oh, why didn't I think of that?" You know, it was just an obvious next step and people are approaching it, implementing it. I guess the last few years, I've been involved in many of those implementations, and I guess Supercloud is somewhat a prerequisite for it because data mesh, and building applications using data mesh, is about sharing data responsibly across boundaries. And those boundaries include organizational boundaries, cloud technology boundaries and trust boundaries. >> I want to bring that up because your venture, NextData which is new, just formed. Tell us about that. What wave is that riding? What specifically are you targeting? What's the pain point? >> Zhamak: Absolutely, yes. So NextData is the result of, I suppose, the pains that I suffered from implementing data mesh for many of the organizations. Basically, a lot of organizations that I've worked with, they want decentralized data. So they really embrace this idea of decentralized ownership of the data, but yet they want interconnectivity through standard APIs, yet they want discoverability and governance. So they want to have policies implemented, they want to govern that data, they want to be able to discover that data and yet they want to decentralize it. And we do that with a developer experience that is easy and native to a generalist developer. So we try to find, I guess, the common denominator that solves those problems and enables that developer experience for data sharing. >> John: Since you just announced the news, what's been the reaction? >> Zhamak: I just announced the news right now, so what's the reaction? >> John: But people in the industry that know you, you did a lot of work in the area. What have been some of the feedback on the new venture in terms of the approach, the customers, problem? >> Yeah, so we've been in stealth mode, so we haven't publicly talked about it, but folks that have been close to us in fact have reached out. We already have implementations of our pilot platform with early customers, which is super exciting. And we're going to have multiple of those. Of course, we're a tiny, tiny company. We can have many of those where we are going to have multiple pilots, implementations of our platform in the real world,
with real global large scale organizations that have real world problems. So we're not going to build our platform in a vacuum. And that's what's happening right now. >> Dave: When I think about your role at ThoughtWorks, you had a very wide observation space with a number of clients helping them implement data mesh and other things as well prior to your data mesh initiative. But when I look at data mesh, at least the ones that I've seen, they're very narrow. I think of JPMC, I think of HelloFresh. They're generally obviously not surprising. They don't include the big vision of inclusivity across clouds, across different data stores. But it seems like people are having to go through some gymnastics to get to, you know, the organizational reality of decentralizing data, and at least pushing data ownership to the line of business. How are you approaching or are you approaching, solving that problem? Are you taking a narrow slice? What can you tell us about NextData? >> Zhamak: Sure, yeah, absolutely. Gymnastics, the cute word to describe what the organizations have to go through. And one of those problems is that, you know, the data, as you know, resides on different platforms. It's owned by different people, it's processed by pipelines that, well, who owns them? So there's this very disparate and disconnected set of technologies that were very useful for when we thought about data and processing as a centralized problem. But when you think about data as a decentralized problem, the cost of integration of these technologies in a cohesive developer experience is what's missing. And we want to focus on that cohesive end-to-end developer experience to share data responsibly in these autonomous units, we call them data products, I guess in data mesh, right? That constitutes computation, the policies that govern that data, discoverability. So I guess, I heard this expression in the last talks that you can have your cake and eat it too. So we want people to have their cake, which is, you know, data in different places, decentralization and eat it too, which is interconnected access to it. So we start with standardizing and codifying this idea of a data product container that encapsulates data, computation, APIs to get to it in a technology agnostic way, in an open way. And then, sit on top and use existing tech, you know, Snowflake, Databricks, whatever exists, you know, the millions of dollars of investments that companies have made, sit on top of those but create this cohesive, integrated experience where data product is a first class primitive. And that's really key here: the language and the modeling that we use is really native to data mesh, in that I will make a data product, I'm sharing a data product, and that encapsulates providing metadata about this. I'm providing computation that's constantly changing the data. I'm providing the API for that. So we're trying to kind of codify and create a new developer experience based on that. And developers, both from the provider side and the user side, are connected through peer-to-peer data sharing with the data product as a primitive, first class concept. >> Okay, so the idea would be developers would build applications leveraging those data products which are discoverable and governed. Now, today you see some companies, you know, take a Snowflake for example. >> Zhamak: Yeah. >> Attempting to do that within their own little walled garden. They even, at one point, used the term, "Mesh." I dunno if they pulled back on that.
And then they sort of became aware of some of your work. But a lot of the things that they're doing within their little insulated environment, you know, support that, that, you know, governance, they're building out an ecosystem. What's different in your vision? >> Exactly. So we realize that, you know, and this is a reality, like you go to organizations, they have a Snowflake and half of the organization happily operates on Snowflake. And on the other half, oh, we are on, you know, bare infrastructure on AWS, or we are on Databricks. These are the realities, you know, this Supercloud that's written up here. It's about working across boundaries of technology. So we try to embrace that. And even for our own technology with the way we're building it, we say, "Okay, nobody's going to use NextData's data mesh operating system. People will have different platforms." So you have to build with openness in mind, and in the case of Snowflake, I think, you know, they have I'm sure very happy customers as long as customers can be on Snowflake. But once you cross that boundary of platforms then that becomes a problem. And we try to keep that in mind in our solution. >> So, it's worth reviewing that basically, the concept of data mesh is that, whether you're a data lake or a data warehouse, an S3 bucket, an Oracle database as well, they should be inclusive inside of the data mesh. >> We did a session with AWS on the startup showcase, data as code. And remember, I wrote a blog post in 2007 called, "Data's the new developer kit." Back then, they used to call 'em developer kits, if you remember. And that we said at that time, whoever can code data >> Zhamak: Yes. >> Will have a competitive advantage. >> Aren't there machines going to be doing that? Didn't we just hear that? >> Well we have, and you know, Hey Siri, hey Cube. Find me that best video for data mesh. There it is. I mean, this is the point, like what's happening is that, now, data has to be addressable >> Zhamak: Yes. >> For machines and for coding. >> Zhamak: Yes. >> Because you need to call the data. So the question is, how do you manage the complexity of big things as promiscuous as possible, making it available as well as then governing it because it's a trade off. The more you make open >> Zhamak: Definitely. >> The better the machine learning. >> Zhamak: Yes. >> But yet, the governance issue, so this is the, you need an OS to handle this maybe. >> Yes, well, our mental model for our platform is an OS, an operating system. Operating systems, you know, have shown us how you can kind of abstract what's complex and take care of, you know, a lot of complexities, but yet provide an open and, you know, dynamic enough interface. So we think about it that way. We try to solve the problem of policies living with the data. And enforcement of the policies happens at the most granular level which is, in this concept, the data product. And that would happen whether you read, write, or access a data product. But we can never imagine what all these policies could be. So our thinking is, okay, we should have an open policy framework that can allow organizations to write their own policy drivers, and policy definitions, and encode it and encapsulate it in this data product container. But I'm not going to fool myself to say that, you know, that's going to solve the problem that you just described.
I think we are in this, I don't know, if I look into my crystal ball, what I think might happen is that right now, the primitives that we work with to train machine-learning models are still bits and bytes in data. They're fields, rows, columns, right? And that creates quite a large surface area, an attack area for, you know, for privacy of the data. So perhaps, one of the trends that we might see is this evolution of data APIs to become more and more computationally aware to bring the compute to the data to reduce that surface area so you can really leave the control of the data to the sovereign owners of that data, right? So that data product. So I think the evolution of our data APIs perhaps will become more and more computational. So you describe what you want, and the data owner decides, you know, how to manage the- >> John: That's interesting, Dave, 'cause it's almost like we just talked about ChatGPT in the last segment with you, who's a machine learning person who's really been around the industry. It's almost as if you're starting to see reason come into the data, reasoning. It's like you're starting to see not just metadata, using the data to reason so that you don't have to expose the raw data. It's almost like a, I won't say curation layer, but an intelligence layer. >> Zhamak: Exactly. >> Can you share your vision on that 'cause that seems to be where the dots are connecting. >> Zhamak: Yes, this is perhaps further into the future because just from where we stand, we have to create still that bridge of familiarity between that future and present. So we are still in that bridge-making mode, however, by just the basic notion of saying, "I'm going to put an API in front of my data, and that API today might be as primitive as a level of indirection as in you tell me what you want, tell me who you are, let me go process that, all the policies and lineage, and insert all of this intelligence that needs to happen. And then I will, today, I will still give you a file. But by just defining that API and standardizing it, now we have this amazing extension point that we can say, "Well, the next revision of this API, you don't just tell me who you are, but you actually tell me what intelligence you're after. What's the logic that I need to go and now compute on your API?" And you can kind of evolve that, right? Now you have a point of evolution to this very futuristic, I guess, future where you just describe the question that you're asking from the chat. >> Well, this is the Supercloud, Dave. >> I have a question from a fan, I got to get it in. It's George Gilbert. And so, his question is, you're blowing away the way we synchronize data from operational systems to the data stack to applications. So the concern that he has, and he wants your feedback on this, "Do data product app devs get exposed to more complexity with respect to moving data between data products, or maybe its attributes between data products? How do you respond to that? How do you see, is that a problem or is that something that is overstated, or do you have an answer for that?" >> Zhamak: Absolutely. So I think there's a sweet spot in getting data developers, data product developers closer to the app, but yet not burdening them with the complexity of the application and application logic, and yet reducing their cognitive load by localizing what they need to know about which is that domain where they're operating within. Because what's happening right now?
what's happening right now is that data engineers, a ton of empathy for them for their high threshold of pain that they can, you know, deal with, they have been centralized, they've been put into the data team, and they have been given this unbelievable task of making meaning out of data, putting semantics over it, curating it, cleaning it, and so on. So what we are saying is that get those folks embedded into the domain closer to the application developers, these are still separately moving units. Your app and your data products are independent but yet tightly close to each other, tightly coupled with each other based on the context of the domain, so reduce cognitive load by localizing what they need to know about to the domain, get them closer to the application but yet have them separate from the app because the app provides a very different service. Transactional data for my e-commerce transaction, the data product provides a very different service, longitudinal data for the, you know, variety of this intelligent analysis that I can do on the data. But yet, it's all within the domain of e-commerce or sales or whatnot. >> So a lot of decoupling and coupling create that cohesiveness. >> Zhamak: Absolutely. >> Architecture. So I have to ask you, this is an interesting question 'cause it came up on theCUBE all last year. Back on the old server, data center days and cloud, SRE, Google coined the term, "Site Reliability Engineer" for someone to look over the hundreds of thousands of servers. We asked a question to the data engineering community who have been suffering, by the way, agree. Is there an SRE-like role for data? Because in a way, data engineering, that platform engineer, they are like the SRE for data. In other words, managing the large scale to enable automation and self service. What's your thoughts and reaction to that? >> Zhamak: Yes, exactly. So, maybe we go through that history of how SRE came to be. So we had the first DevOps movement which was, remove the wall between dev and ops and bring them together. So you have one cross-functional unit of the organization that's responsible for, you build it, you run it, right? So then there is no, I'm going to just shoot my application over the wall for somebody else to manage it. So we did that, and then we said, "Okay, as we decentralized and had this many microservices running around, we had to create a layer that abstracted a lot of the complexity around running, monitoring, and observing a lot of them, while giving autonomy to this cross-functional team." And that's where the SRE, a new generation of engineers came to exist. So I think if I just look- >> Hence Borg, hence Kubernetes. >> Hence, hence, exactly. Hence chaos engineering, hence embracing the complexity and messiness, right? And putting engineering discipline to embrace that and yet give a cohesive and high integrity experience of those systems. So I think, if we look at that evolution, perhaps something like that is happening by bringing data and apps closer and making them these domain-oriented data product teams or domain oriented cross-functional teams, full stop, and still have a very advanced maybe at the platform infrastructure level kind of operational team that they're not busy doing two jobs which is taking care of domains and the infrastructure, but they're building infrastructure that is embracing that complexity, interconnectivity of this data process. >> John: So you see similarities.
>> Absolutely, but I feel like we're probably in more of the early days of that movement. >> So it's a data DevOps kind of thing happening, where scale is happening. Good things are happening, yet, eh, a little bit fast and loose with some complexities to clean up. >> Yes, yes. This is a different restructure. As you said we, you know, the job of this industry as a whole, and of architects, is decompose, recompose, decompose, recompose in a new way, and now we're like decomposing the centralized team, recomposing them as domains and- >> John: So is data mesh the killer app for Supercloud? >> You had to do this for me. >> Dave: Sorry, I couldn't- (John and Dave laughing) >> Zhamak: What do you want me to say, Dave? >> John: Yes. >> Zhamak: Yes of course. >> I mean Supercloud, I think it's, really the terminology's Supercloud, Opencloud. But I think, in the spirit of it, this embracing of diversity and giving autonomy for people to make decisions for what's right for them and not yet lock them in. I think just embracing that is baked into how data mesh assumes the world would work. >> John: Well thank you so much for coming on Supercloud 2, really appreciate it. Data has driven this conversation. Your success of data mesh has really opened up the conversation and exposed the slow moving data industry. >> Dave: Been a great catalyst. (John laughs) >> John: That's now going well. We can move faster, so thanks for coming on. >> Thank you for hosting me. It was wonderful. >> Okay, Supercloud 2 live here in Palo Alto. Our stage performance, I'm John Furrier with Dave Vellante. We're back with more after this short break. Stay with us all day for Supercloud 2. (gentle bright music)
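Zhamak's data product container, the data plus the computation, API, and policies that travel with it, can be sketched in a few lines. What follows is a hypothetical illustration of the concept as she describes it in this conversation; the field names, policy hook, and class are illustrative assumptions, not NextData's actual design or API.

```python
# Minimal sketch of a "data product container": data, the computation
# that keeps it fresh, and the policies enforced at its boundary all
# travel together. Every name here is hypothetical, for illustration.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class DataProduct:
    name: str
    owner: str                                    # domain team accountable for it
    transform: Callable[[list], list]             # computation that refreshes the data
    policies: list = field(default_factory=list)  # checks that live with the data
    _rows: list = field(default_factory=list)

    def publish(self, raw_rows):
        self._rows = self.transform(raw_rows)

    def read(self, consumer):
        for policy in self.policies:              # enforcement happens at the most
            policy(consumer)                      # granular level: the product itself
        return list(self._rows)

def no_external_consumers(consumer):
    # One example of a pluggable policy driver an organization might write.
    if not consumer.endswith("@internal"):
        raise PermissionError(f"{consumer} may not read this product")

orders = DataProduct(
    name="orders.daily",
    owner="ecommerce-team",
    transform=lambda rows: [r for r in rows if r.get("status") == "complete"],
    policies=[no_external_consumers],
)
orders.publish([{"id": 1, "status": "complete"}, {"id": 2, "status": "canceled"}])
print(orders.read("analyst@internal"))  # -> [{'id': 1, 'status': 'complete'}]
```

The design point the sketch tries to capture is that governance is not a separate central system: the policy check runs on every read or write of the product, wherever that product happens to be hosted.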

Published Date : Feb 17 2023

Discussion about Walmart's Approach | Supercloud2


 

(upbeat electronic music) >> Okay, welcome back to Supercloud 2, live here in Palo Alto. I'm John Furrier, with Dave Vellante. Again, all day wall-to-wall coverage. We just had a great interview with Walmart, and we've got our next interview coming up; you're going to hear from Bob Muglia and Tristan Handy, two experts, both experienced entrepreneurs and executives in technology. We're here to break down what just happened with Walmart, and what's coming up, with George Gilbert, former colleague, Wikibon analyst, Gartner analyst, and now independent investor and expert. George, great to see you, I know you're following this space. Like, you read about it, remember the first days when Dataverse came out, we were talking about them coming out of Berkeley? >> Dave: Snowflake. >> John: Snowflake. >> Dave: Snowflake, in the early days. >> We, collectively, have been chronicling the data movement since 2010, you were part of our team, now you've got your nose to the grindstone, you're seeing the next wave. What's this all about? Walmart building their own supercloud, we got Bob Muglia talking about how this next wave of apps is coming. What are the super apps? What's the supercloud to you? >> Well, this keys off Dave's really interesting questions to Walmart, which was like, how are they building their supercloud? 'Cause it makes a concrete example. But what was most interesting about his description of the Walmart WCNP, I forgot what it stood for. >> Dave: Walmart Cloud Native Platform. >> Walmart, okay. He was describing where the logic could run in these stateless containers, and maybe eventually serverless functions. But that's just it, and that's the paradigm of microservices, where the logic is in this stateless thing, where you can shoot it, or it fails, and you can spin up another one, and you've lost nothing. >> That was their triplet model. >> Yeah, in fact, and that was what they were trying to move to, where these things move fluidly between data centers. >> But there's a but, right? Which is they're all stateless apps in the cloud. >> George: Yeah. >> And all their stateful apps are on-prem and VMs. >> Or the stateful parts of the apps are in VMs. >> Okay. >> And so if they really want to lift their supercloud layer off of these different providers' infrastructure, they're going to need a much more advanced software platform that manages data. And that goes to the -- >> Muglia and Handy, that you and I did, that's coming up next. So the big takeaway there, George, was, I'll set it up and you can chime in: a new breed of data apps is emerging on this highly decentralized infrastructure. And Tristan Handy of DBT Labs has a sort of a solution to begin the journey today; Muglia is working on something that's way out there. Describe what you learned from it. >> Okay. So to talk about what the new data apps are, and then the platform to run them, I go back to what will probably be seen as one of the first data app examples, which was Uber, where you're describing entities in the real world, riders, drivers, routes, city, like a city plan, these are all defined by data. And the data is described in a structure called a knowledge graph, for lack of a, no one's come up with a better term. But that means the stuff that Jack built, which was all stateless and sits above cloud vendors' infrastructure, needs an entirely different type of software that's much, much harder to build. 
And the way Bob described it is, you're going to need an entirely new data management infrastructure to handle this. But where, you know, we had this really colorful interview where it was like Rock 'Em Sock 'Em, but they weren't really that much in opposition to each other, because Tristan is going to define this layer, starting with like business intelligence metrics, where you're defining things like bookings, billings, and revenue, in business terms, not in SQL terms -- >> Well, business terms, if I can interrupt, he said the one thing we haven't figured out how to API-ify is KPIs that sit inside of a data warehouse, and that's essentially what he's doing. >> George: That's what he's doing, yes. >> Right. And so then you can now expose those APIs, those KPIs, that sit inside of a data warehouse, or a data lake, a data store, whatever, through APIs. >> George: And the difference -- >> So what does that do for you? >> Okay, so all of a sudden, instead of working at technical data terms, where you're dealing with tables and columns and rows, you're dealing instead with business entities, using the Uber example of drivers, riders, routes, you know, ETA, prices. But you can define, DBT will be able to define those progressively in richer terms; today they're just doing things like bookings, billings, and revenue. But Bob's point was, today, the data warehouse that actually runs that stuff, whereas DBT defines it, the data warehouse that runs it, you can't do it with relational technology. >> Dave: Relational technology, caching architecture. >> SQL, you can't -- >> SQL caching architectures in memory, you can't do it, you've got to rethink down to the way the data lake is laid out on the disk or cache. Which, by the way, Thomas Hazel, who's speaking later, he's the chief scientist and founder at Chaos Search, he says, "I've actually done this," basically leave it in an S3 bucket, and I'm going to query it, you know, with no caching. >> All right, so what I hear you saying then, tell me if I got this right, there are some things that are inadequate in today's world, that's not compatible with the Supercloud wave. >> Yeah. >> Specifically how you're using storage, and data, and state. >> Yes. >> And then the software that makes it run, is that what you're saying? >> George: Yeah. >> There's one other thing you mentioned to me, it's like, when you're using a CRM system, a human is inputting data. >> George: Nothing happens till the human does something. >> Right, nothing happens until that data entry occurs. What you're talking about is a world that self-forms, pulling data from the transaction system, or the ERP system, and then builds a plan without human intervention. >> Yeah. Something in the real world happens, where the user says, "I want a ride." And then the software goes out and says, "Okay, we got to match a driver to the rider, we got to calculate how long it takes to get there, how long to deliver 'em." That's not driven by a form, other than the first person hitting a button and saying, "I want a ride." All the other stuff happens autonomously, driven by data and analytics. >> But my question was different, Dave, so I want to get specific, because this is where the startups are going to come in, this is the disruption. Snowflake is a data warehouse that's in the cloud, they call it a data cloud, they refactored it, they did it differently, and the success, we all know what it looks like. 
These areas where it's inadequate for the future are areas that'll probably be either disrupted, or refactored. What is that? >> That's what Muglia's contention is, that DBT can start adding that layer where you define these business entities; they're like mini digital twins, you can define them, but the data warehouse isn't strong enough to actually manage and run them. And Muglia is behind a company that is rethinking the database, really in a fundamental way that hasn't been done in 40 or 50 years. It's the first, in his contention, the first real rethink of database technology in a fundamental way since the rise of the relational database 50 years ago. >> And I think you admit it's a real Hail Mary, I mean it's quite a long shot, right? >> George: Yes. >> Huge potential. >> But they're pretty far along. >> Well, we've been talking on theCUBE for 12 years, and what, 10 years going to AWS Reinvent, Dave, that no one database will rule the world; Amazon kind of showed that. What's different, is it databases are changing, or you can have multiple databases, or? >> It's a good question. And the reason we've had multiple different types of databases, each one specialized for a different type of workload, but actually what Muglia is behind is a new engine that would essentially, you'll never get rid of the data warehouse, or the equivalent engine in, like, a Databricks lakehouse, but it's a new engine that manages the thing that describes all the data and holds it together, and that's the new application platform. >> George, we have one minute left, I want to get a real quick thought. You're an investor, and we know your history, and the folks watching, George's got a deep pedigree in investment data, and we can testify to that. If you're going to invest in a company right now, if you're a customer, I got to make a bet, what does success look like for me, what do I want walking through my door, and what do I want to send out? What companies do I want to look at? What kind of vendor do I want to evaluate? Which ones do I want to send home? >> Well, the first thing a customer really has to do when they're thinking about next-gen applications, all the people have told you guys, "we got to get our data in order," getting that data in order means building an integrated view of all your data landscape, which is data coming out of all your applications. It starts with the data model. So, today, you basically extract data from all your operational systems, put it in this one giant, central place, like a warehouse or lakehouse, but eventually you want this, whether you call it a fabric or a mesh, it's all the data that describes how everything hangs together, as in one big knowledge graph. There's different ways to implement that. And that's the most critical thing, 'cause that describes your Uber landscape, your Uber platform. >> That's going to power the digital transformation, which will power the business transformation, which powers the business model, which allows the builders to build -- >> Yes. >> Coders to code. That's the Supercloud application. >> Yeah. >> George, great stuff. The next interview you're going to see right here is Bob Muglia and Tristan Handy; they're going to unpack this new wave. Great segment, really worth unpacking and reading between the lines with George, and Dave Vellante, and those two great guests. And then we'll come back here to the studio for more of the live coverage of Supercloud 2. Thanks for watching. (upbeat electronic music)
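George's "one big knowledge graph" of business entities — riders, drivers, routes — can be made concrete with a toy model. This is a hedged Python sketch using plain data structures; the entity and relation names are illustrative assumptions, not Uber's or anyone's actual schema.

```python
from dataclasses import dataclass, field
from typing import Dict, Set, Tuple

@dataclass(frozen=True)
class Node:
    kind: str  # e.g. "rider", "driver", "route"
    key: str   # unique id within its kind

@dataclass
class KnowledgeGraph:
    # (source node, relation name) -> set of destination nodes
    edges: Dict[Tuple[Node, str], Set[Node]] = field(default_factory=dict)

    def relate(self, src: Node, relation: str, dst: Node) -> None:
        self.edges.setdefault((src, relation), set()).add(dst)

    def neighbors(self, src: Node, relation: str) -> Set[Node]:
        return self.edges.get((src, relation), set())

# One trip, described as data rather than as a form submission:
rider = Node("rider", "r-42")
driver = Node("driver", "d-7")
route = Node("route", "airport-to-downtown")

g = KnowledgeGraph()
g.relate(rider, "requested", route)
g.relate(driver, "assigned_to", route)

print(g.neighbors(rider, "requested"))  # {Node(kind='route', key='airport-to-downtown')}
```

A production system would sit on a real graph engine, but the shape — typed nodes plus named relationships — is the essence of describing a business as data rather than as tables and columns.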

Published Date : Feb 17 2023


Breaking Analysis: Enterprise Technology Predictions 2023


 

(upbeat music beginning) >> From the Cube Studios in Palo Alto and Boston, bringing you data-driven insights from the Cube and ETR, this is "Breaking Analysis" with Dave Vellante. >> Making predictions about the future of enterprise tech is more challenging if you strive to lay down forecasts that are measurable. In other words, if you make a prediction, you should be able to look back a year later and say, with some degree of certainty, whether the prediction came true or not, with evidence to back that up. Hello and welcome to this week's Wikibon Cube Insights, powered by ETR. In this breaking analysis, we aim to do just that, with predictions about the macro IT spending environment, cost optimization, security, lots to talk about there, generative AI, cloud, and of course supercloud, blockchain adoption, data platforms, including commentary on Databricks, Snowflake, and other key players, automation, events, and we may even have some bonus predictions around quantum computing, and perhaps some other areas. To make all this happen, we welcome back, for the third year in a row, my colleague and friend Eric Bradley from ETR. Eric, thanks for all you do for the community, and thanks for being part of this program again. >> I wouldn't miss it for the world. I always enjoy this one. Dave, good to see you. >> Yeah, so let me bring up this next slide and show you, actually come back to me if you would. I got to show the audience this. These are the inbounds that we got from PR firms starting in October around predictions. They know we do prediction posts. And so they'll send literally thousands and thousands of predictions from hundreds of experts in the industry, technologists, consultants, et cetera. And if you bring up the slide I can show you sort of the pattern that developed here. 40% of these thousands of predictions were from cyber. You had AI and data. If you combine those, it's still not close to cyber. Cost optimization was a big thing. Of course, cloud, some on DevOps, and software. Digital... Digital transformation got, you know, some lip service, and SaaS. And then there was other, it's kind of around 2%. So quite remarkable, when you think about the focus on cyber, Eric. >> Yeah, there's two reasons why I think it makes sense, though. One, the cybersecurity companies have a lot of cash, so therefore the PR firms might be working a little bit harder for them than for some of their other clients. (laughs) And then secondly, as you know, for multiple years now, when we do our macro survey, we ask, "What's your number one spending priority?" And again, it's security. It just isn't going anywhere. It just stays at the top. So I'm actually not that surprised by that little pie chart there, but I was shocked that SaaS was only 5%. You know, going back 10 years ago, that would've been the only thing anyone was talking about. >> Yeah. So true. All right, let's get into it. First prediction, we always start with kind of tech spending. Number one is tech spending increases between 4 and 5%. ETR has currently got it at 4.6% coming into 2023. This has been a consistently downward trend all year. We started, you know, much, much higher, as we've been reporting. Bottom line is the Fed is still in control. They're going to ease up on tightening, is the expectation; they're going to shoot for a soft landing. But you know, my feeling is this slingshot economy is going to continue, and it's going to continue to confound, whether it's supply chains or spending. 
The interesting thing about the ETR data, Eric, and I want you to comment on this, is that the largest companies are the most aggressive to cut. They're laying off; smaller firms are spending faster. They're actually growing at a much larger, faster rate, as are companies in EMEA. And that's a surprise. That's outpacing the US and APAC. Chime in on this, Eric. >> Yeah, I was surprised on all of that. First, on the higher-level spending, we are definitely seeing it coming down, but the interesting thing here is headlines are making it worse. One huge research shop recently said 0% growth. We're coming in at 4.6%. And just so everyone knows, this is not us guessing; we asked 1,525 IT decision-makers what their budget growth will be, and they came in at 4.6%. Now there's a huge disparity, as you mentioned. The Fortune 500, Global 2000, barely at 2% growth, but small, it's at 7%. So we're at a situation right now where the smaller companies are still playing a little bit of catch-up on digital transformation, and they're spending money. The largest companies, that have the most to lose from a recession, are being more trepidatious, obviously. So they're playing a "wait and see." And I hope we don't talk ourselves into a recession. Certainly the headlines and some of the research shops are helping it along. But another interesting comment here is, you know, energy and utilities used to be called a widows-and-orphans stock group, right? They are spending more than anyone, more than financials, insurance, more than retail consumer. So right now it's being driven by mid, small, and energy and utilities. They're all spending like gangbusters, like nothing's happening. And it's the rest of everyone else that's being very cautious. >> Yeah, so very unpredictable right now. All right, let's go to number two. Cost optimization remains a major theme in 2023. We've been reporting on this. We've shown a chart here. What's the primary method that your organization plans to use? You asked this question of those individuals that cited that they were going to reduce their spend and- >> Mhm. >> consolidating redundant vendors, you know, still leads the way; cloud optimization is second, but far behind. But cloud continues to outpace legacy on-prem spending, no doubt. Somebody, the guy's name was Alexander Feiglstorfer from Storyblok, sent in a prediction, said "All in one becomes extinct." Now, generally I would say I disagree with that because, you know, as we know over the years, suites tend to win out over, you know, individual point products. But I think what's going to happen is all-in-one is going to remain the norm for these larger companies that are cutting back. They want to consolidate redundant vendors, and the smaller companies are going to stick with that best of breed and be more aggressive and try to compete more effectively. What's your take on that? >> Yeah, I'm seeing much more consolidation in vendors, but also consolidation in functionality. We're seeing people building out new functionality, whether it's, we're going to talk about this later, so I don't want to steal too much of our thunder right now, but in data and security also we're seeing a functionality creep. So I think there's further consolidation happening here. I think niche solutions are going to be less likely, and platform solutions are going to be more likely, in a spending environment where you want to reduce your vendors. You want to have one bill to pay, not 10. 
Another thing on this slide, real quick if I can before I move on, is we had a bunch of people write in, and some of the answer options that aren't on this graph but did get cited a lot, unfortunately, are the obvious reduction in staff, hiring freezes, and delaying hardware; those were three of the top write-ins. And another one was offshore outsourcing. So in addition to what we're seeing here, there were a lot of write-in options, and I just thought it would be important to state that, but essentially the cost optimization is by far the highest one, and it's growing. So it's actually increased in our citations over the last year. >> And yeah, specifically consolidating redundant vendors. And so I actually thank you for bringing that up, 'cause I had asked you, Eric, is there any evidence that repatriation is going on? We don't see it in the numbers; we don't see it even in the other write-ins. There was, I think, very little or no mention of cloud repatriation, even though it might be happening in a smattering. >> Not a single mention, not one single mention. I went through it for you. Yep. Not one write-in. >> All right, let's move on. Number three, security leads M&A in 2023. Now you might say, "Oh, well that's a layup," but let me set this up, Eric, because I didn't really do a great job with the slide. I hid what you've done, because you basically took, this is from the emerging technology survey, with 1,181 responses from November. And what we did is we took Palo Alto and looked at the overlap in Palo Alto Networks accounts with these vendors that were showing on this chart. And Eric, I'm going to ask you to explain why we put a circle around OneTrust, but let me just set it up, and then have you comment on the slide and give us more detail. We're seeing private company valuations off, you know, 10 to 40%. We saw Snyk do a down round, but pretty good actually, only down 12%. We've seen much steeper down rounds. Palo Alto Networks, we think, is going to get busy. Again, they're an acquisitive company; they've been sort of quiet lately, and we think CrowdStrike, Cisco, Microsoft, Zscaler, we're predicting all of those will make some acquisitions, and we're thinking that the targets are somewhere in this mess of security taxonomy. Other thing we're predicting: AI meets cyber big time in 2023; we're probably going to see some acquisitions of those companies that are leaning into AI. We've seen some of that with Palo Alto. And then, you know, your comment to me, Eric, was "The RSA conference is going to be insane, hopping mad, crazy this April," (Eric laughing) but give us your take on this data, and why the red circle around OneTrust? Take us back to that slide if you would, Alex. >> Sure. There's a few things here. First, let me explain what we're looking at. So because we separate the public companies and the private companies into two separate surveys, this allows us the ability to cross-reference that data. So what we're doing here is, in our public survey, the TSIS, everyone who cited some spending with Palo Alto, meaning they're a Palo Alto customer, we then cross-reference that with the private tech companies. Who else are they spending with? So what you're seeing here is an overlap. These companies that we have circled are doing the best in Palo Alto's accounts. Now, Palo Alto went and bought Twistlock a few years ago, which this data slide predicted, to be quite honest. And so I don't know if they necessarily are going to go after Snyk. 
They already have something in that space. What they do need, however, is more in the authentication space. So I'm looking at OneTrust, with a 45% overlap in their overall net sentiment. That is a company that's already existing in their accounts and could be very synergistic to them. BeyondTrust as well, authentication, identity. This is something that Palo needs to do to move further down that zero trust path. Now why did I pick Palo first? Because usually they're very acquisitive. They've been a little quiet lately. Secondly, if you look at the backdrop in the markets, the IPO freeze isn't going to last forever. Sooner or later, the IPO markets are going to open up, and some of these private companies are going to tap into public equity. In the meantime, however, cash funding on the private side is drying up. If they need another round, they're not going to get it, and they're certainly not going to get it at the valuations they were getting. So we're seeing valuations maybe come down to where they're a touch more attractive, and Palo knows this isn't going to last forever. Cisco knows that; CrowdStrike, Zscaler, all these companies that are trying to make a push to become that vendor that you're consolidating around, they have a chance now, they have a window where they need to go make some acquisitions. And that's why I believe leading up to RSA, we're going to see some movement. I think it's going to be a really exciting time in security right now. >> Awesome. Thank you. Great explanation. All right, let's go on to the next one. Number four is, it relates to security, so let's stay there: zero trust moves from hype to reality in 2023. Now again, you might say, "Oh yeah, that's a layup." A lot of these inbounds that we got are very, you know, kind of self-serving, but we always try to put some meat on the bone. So first thing we do is we pull out some commentary from, Eric, your roundtable, your insights roundtable. And we have a CISO from a global hospitality firm who says, "For me that's the highest priority." He's talking about zero trust because it's the best ROI, it's the most forward-looking, and it enables a lot of the business transformation activities that we want to do. CISOs tell me that they actually can drive forward transformation projects that have zero trust, and because they can accelerate them, because they don't have to go through the hurdle of, you know, making sure that it's secure. Second comment: zero trust closes that last mile where, once you're authenticated, they open up the resource to you in a zero trust way. That's from a CISO and managing director of a cyber risk services enterprise. Your thoughts on this? >> I can be here all day, so I'm going to try to be quick on this one. This is not a fluff piece on this one. There's a couple of other reasons this is happening. One, the board finally gets it. Zero trust at first was just a marketing hype term. Now the board understands it, and that's why CISOs are able to push it through. And what they finally did was redefine what it means. Zero trust simply means moving away from hardware security, moving towards software-defined security, with authentication as its base. The board finally gets that, and now they understand that this is necessary, and it's being moved forward. The other reason it's happening now is hybrid work is here to stay. We weren't really sure at first; large companies were still trying to push people back to the office, and it's going to happen. 
The pendulum will swing back, but hybrid work's not going anywhere. Basically, on our own data, we're seeing that 69% of companies expect remote and hybrid to be permanent, with only 30% permanently in office. Zero trust works for a hybrid environment. So all of that is the reason why this is happening right now. And going back to our previous prediction, this is why we're picking Palo, this is why we're picking Zscaler to make these acquisitions. Palo Alto needs to be better on the authentication side, and so does Zscaler. They're both fantastic on zero trust network access, but they need the authentication, software-defined aspect, and that's why we think this is going to happen. One last thing: in that CISO roundtable, I also had somebody say, "Listen, Zscaler is incredible. They're doing incredibly well pervading the enterprise, but their pricing's getting a little high," and they actually think Palo Alto is well-suited to start taking some of that share, if Palo can make one move. >> Yeah, Palo Alto's consolidation story is very strong. Here's my question and challenge, for you and me. I'm always hardcore about, okay, you've got to have evidence. I want to look back at these things a year from now and say, "Did we get it right? Yes or no?" If we got it wrong, we'll tell you we got it wrong. So how are we going to measure this? I'd say a couple things, and you can chime in. One is just the number of vendors talking about it. But the marketing always leads the reality. So the second part of that is we got to get evidence from the buying community. Can you help us with that? >> (laughs) Luckily, that's what I do. I have a data company that asks thousands of IT decision-makers what they're adopting and what they're increasing spend on, as well as what they're decreasing spend on and what they're replacing. So I have snapshots in time over the last 11 years where I can go ahead and compare and contrast whether this adoption is happening or not. So come back to me in 12 months and I'll let you know. >> Now, you know I will. Okay, let's bring up the next one. Number five, generative AI hits where the Metaverse missed. Of course everybody's talking about ChatGPT; we just wrote last week in a breaking analysis with John Furrier and Sarbjeet Johal our take on that. We think 2023 does mark a pivot point, as natural language processing really infiltrates enterprise tech just as Amazon turned the data center into an API. We think going forward, you're going to be interacting with technology through natural language, through English commands or other, you know, foreign-language commands, and investors are lining up; all the VCs are getting excited about creating something competitive to ChatGPT. According to (indistinct), a hundred million dollars gets you a seat at the table, gets you into the game. (laughing) That's before you have to start doing promotion. But he thinks that's what it takes to actually create a clone or something equivalent. We've seen stuff from, you know, the head of Facebook's AI saying, "Oh, it's really not that sophisticated, ChatGPT, it's kind of like IBM Watson, it's great engineering, but you know, we've got more advanced technology." We know Google's working on some really interesting stuff. But here's the thing. ETR just launched the February survey. It's in the field now. We circled OpenAI in this category. They weren't even in the survey, Eric, last quarter. 
So 52% of the ETR survey respondents indicated a positive sentiment toward OpenAI. I added up all the sort of different bars; we could double-click on that. And then I got this inbound from Scott Stevenson of Deepgram. He said "AI is recession-proof." I don't know if that's the case, but it's a good quote. So bring this back up and take us through this. Explain this chart for us, if you would. >> First of all, I like Scott's quote better than the Facebook one. I think that's some sour grapes. Meta just spent an insane amount of money on the Metaverse, and that's a dud. Microsoft just spent money on OpenAI, and it is hot, undoubtedly hot. We've only been in the field with our current ETS survey for a week. So my caveat is it's preliminary data, but I don't care if it's preliminary data. (laughing) We're getting a sneak peek here at what is the number one net sentiment and mindshare leader in the entire machine-learning AI sector, within a week. It's beating Data- >> 600. 600 in. >> It's beating Databricks. And we all know Databricks is a huge established enterprise company, not only in machine-learning AI, but it's in the top 10 in the entire survey. We have over 400 vendors in this survey. It's number eight overall, already. In a week. This is not hype. This is real. And I could go on about the NLP stuff for a while. Not only are we seeing it here in OpenAI and machine-learning and AI, but we're seeing NLP in security. It's huge in email security. It's completely transforming that area. It's one of the reasons I thought Palo might take Abnormal out. They're doing such a great job with NLP on this email side, and also in the data prep tools. NLP is going to take out data prep tools. If we have time, I'll discuss that later. But yeah, to me this is a no-brainer, and we're already seeing it in the data. >> Yeah, John Furrier called, you know, the ChatGPT introduction, he said it reminded him of the Netscape moment, when we all first saw Netscape Navigator and went, "Wow, it really could be transformative." All right, number six, the cloud expands to supercloud as edge computing accelerates, and CloudFlare is a big winner in 2023. We've reported obviously on cloud, multi-cloud, supercloud and CloudFlare, basically saying what multi-cloud should have been. We pulled this quote from Atif Khan, who is the founder and CTO of Alkira, thanks, one of the inbounds, thank you. "In 2023, highly distributed IT environments will become more the norm as organizations increasingly deploy hybrid cloud, multi-cloud and edge settings..." Eric, from one of your roundtables: "If my sources from edge computing are coming from the cloud, that means I have my workloads running in the cloud. There is no one better than CloudFlare." That's a senior director of IT architecture at a huge financial firm. And then your analysis shows CloudFlare really growing in pervasion, that sort of market presence in the dataset, dramatically, to near 20%, leading. I think you had told me that they're even ahead of Google Cloud in terms of momentum right now. >> That was probably the biggest shock to me in our January 2023 TSIS, which covers the public companies in the cloud computing sector. CloudFlare has now overtaken GCP in overall spending, and I was shocked by that. It's already extremely pervasive in networking, of course, for the edge networking side, and also in security. 
This is the number one leader in SASE, web application firewall, DDoS, bot protection. By your definition of supercloud, which we just did a couple of weeks ago — and I really enjoyed that, by the way, Dave — I think CloudFlare is the one that fits your definition best, because it's bringing all of these aspects together, and most importantly, it's cloud agnostic. It does not need to rely on Azure or AWS to do this. It has its own cloud. So I just think, when we look at your definition of supercloud, CloudFlare is the poster child. >> You know, what's interesting about that too, is a lot of people are poo-pooing CloudFlare, "Ah, it's, you know, really kind of not that sophisticated." "You don't have as many tools," but to your point, you can have those tools in the cloud. CloudFlare's doing serverless on steroids, trying to keep things really simple, doing a phenomenal job at, you know, various locations around the world. And they're definitely one to watch. Somebody put them on my radar (laughing) a while ago and said, "Dave, you got to do a breaking analysis on CloudFlare." And so I want to thank that person. I can't really name them, 'cause they work inside of a giant hyperscaler. But- (Eric laughing) (Dave chuckling) >> Real quickly, if I can, from a competitive perspective too, who else is there? They've already taken share from Akamai, and Fastly is really their only other direct comp, and they're not there. And these guys are in pole position, and they're the only game in town right now. I just, I don't see it slowing down. >> I thought one of your comments from your roundtable I was reading, one of the folks said, you know, CloudFlare, if my workloads are in the cloud, they are, you know, dominant; they said not as strong with on-prem. And so Akamai is doing better there. I'm like, "Okay, where would you want to be?" (laughing) >> Yeah, which one of those two would you rather be? >> Right? Anyway, all right, let's move on. Number seven, blockchain continues to look for a home in the enterprise, but devs will slowly begin to adopt in 2023. You know, blockchains have got a lot of buzz; obviously crypto is, you know, the killer app for blockchain. A senior IT architect in financial services from one of your insight roundtables said, quote, "For enterprises to adopt a new technology, there have to be proven turnkey solutions. My experience in talking with my peers is, blockchain is still an open-source component where you have to build around it." Now I want to thank Ravi Mayuram, who's the CTO of Couchbase, who sent in, you know, one of the predictions. He said, "DevOps will adopt blockchain, specifically Ethereum." And he referenced actually in his email to me Solidity, which is the programming language for Ethereum: "will be in every DevOps pro's playbook, mirroring the boom in machine-learning. Newer programming languages like Solidity will enter the toolkits of devs." His point there, you know, Solidity, for those of you who don't know, you know, Bitcoin is not programmable. Solidity, you know, came out and that was their whole shtick, and they've been improving that, and so forth. But Eric, it's true, it really hasn't found its home despite, you know, the potential for smart contracts. IBM's pushing it, VMware has had announcements, and others; it really hasn't found its way in the enterprise yet. >> Yeah, and I got to be honest, I don't think it's going to, either. 
So when we did our top trends series, this was basically chosen as an anti-prediction, I would guess, that it just continues to not gain hold. And the reason why was that first comment, right? It's very much a niche solution that requires a ton of custom work around it. You can't just plug and play it. And at the end of the day, let's be very real about what this technology is: it's a database ledger, and we already have database ledgers in the enterprise. So why is this a priority to move to a different database ledger? It's going to be very niche cases. I like the CTO comment from Couchbase about it being adopted by DevOps. I agree with that, but it has to be DevOps in a very specific use case, and a very sophisticated use case, in financial services most likely. And that's not across the entire enterprise. So I just think it's still going to struggle to get its foothold for a little bit longer, if ever. >> Great, thanks. Okay, let's move on. Number eight: AWS, Databricks, Google, and Snowflake lead the data charge, with Microsoft keeping it simple. So let's unpack this a little bit. This is the shared accounts peer position; I pulled data platforms in for analytics, machine-learning and AI, and database. So I could grab all these accounts or these vendors and see how they compare in those three sectors: analytics, machine-learning, and database. Snowflake and Databricks, you know, they're on a crash course, as you and I have talked about. They're battling to be the single source of truth in analytics. There's going to be a big focus. They've already started. It's going to be accelerated in 2023 on open formats. Iceberg, Python, you know, they're all the rage. We heard about Iceberg at Snowflake Summit, last summer or last June. Not a lot of people had heard of it, but of course the Databricks crowd, who knows it well. A lot of other open-source tooling. There's a company called DBT Labs, which you're going to talk about in a minute. George Gilbert put them on our radar. We just had Tristan Handy, the CEO of DBT Labs, on at Supercloud last week. They are a new disruptor in data; they're essentially API-ifying, if you will, KPIs inside the data warehouse and dramatically simplifying that whole data pipeline. So really, you know, the ETL guys should be shaking in their boots with them. Coming back to the slide. Google really remains focused on BigQuery adoption. Customers have complained to me that they would like to use Snowflake with Google's AI tools, but they're being forced to go to BigQuery. I got to ask Google about that. AWS continues to stitch together its bespoke data stores; that's gone down that "right tool for the right job" path. David Floyer two years ago said, "AWS absolutely is going to have to solve that problem." We saw them start to do it at Reinvent, bringing together zero-ETL between Aurora and Redshift, and really trying to simplify those worlds. There's going to be more of that. And then Microsoft, they're just making it cheap and easy to use their stuff, you know, despite some of the complaints that we hear in the community, you know, about things like Cosmos, but Eric, your take? >> Yeah, my concern here is that Snowflake and Databricks are fighting each other, and it's allowing AWS and Microsoft to kind of catch up against them, and I don't know if that's the right move for either of those two companies individually; Azure and AWS are building out functionality. Are they as good? No, they're not. 
The other thing to remember too is that AWS and Azure get paid anyway, because both Databricks and Snowflake run on top of 'em. So (laughing) they're basically collecting their toll while these two fight it out with each other, and they build out functionality. I think they need to stop focusing on each other a little bit, and think about the overall strategy. Now for Databricks, we know they came out first as a machine-learning AI tool. They were known better for that spot, and now they're really trying to play catch-up on that data storage and compute spot, and inversely for Snowflake, they were killing it with the compute separation from storage, and now they're trying to get into the ML/AI spot. I actually wouldn't be surprised to see them make some sort of acquisition. Frank Slootman has been a little bit quiet, in my opinion, there. The other thing to mention is your comment about DBT Labs. If we look at our emerging technology survey, the last survey when this came out, DBT Labs was the number one leader in that data integration space. I'm going to just pull it up real quickly. It looks like they had a 33% overall net sentiment to lead data analytics integration. So they are clearly growing; it's the fourth straight survey consecutively that they've grown. The other name we're seeing there a little bit is Cribl, but DBT Labs is by far the number one player in this space. >> All right. Okay, cool. Moving on, let's go to number nine. With automation making a resurgence in 2023, we're showing, again, data. The x-axis is overlap or presence in the dataset, and the vertical axis is shared net score. Net score is a measure of spending momentum. As always, you've seen UiPath and Microsoft Power Automate up and to the right; that red line, that 40% line, is generally considered elevated. UiPath is really separating, creating some distance from Automation Anywhere; you know, in previous quarters they were much closer. Microsoft Power Automate came on the scene in a big way; they loom large with this "good enough" approach. I will say this: somebody sent me the results of a (indistinct) survey, which showed UiPath actually had more mentions than Power Automate, which was surprising, but I think that's not been the case in the ETR dataset. We're definitely seeing a shift from back-office to front-office kind of workloads. Having said that, software testing is emerging as a mainstream use case, we're seeing ML and AI become embedded in end-to-end automations, and low-code is serving the line of business. And so this, we think, is going to increasingly have appeal to organizations in the coming year who want to automate as much as possible. And not necessarily, we've seen a lot of layoffs in tech, and people... You're going to have to fill the gaps with automation. That's a trend that's going to continue. >> Yep, agreed. At first, that comment about Microsoft Power Automate having fewer citations than UiPath, that's shocking to me. I'm looking at my chart right here where Microsoft Power Automate was cited by over 60% of our entire survey takers, and UiPath at around 38%. Now don't get me wrong, 38% pervasion's fantastic, but you know you're not going to beat an entrenched Microsoft. So I don't really know where that comment came from. So UiPath, looking at it alone, is doing incredibly well. It had a huge rebound in its net score this last survey. It had dropped going through the back half of 2022, but we saw a big spike in the last one. So it's got a net score of over 55%. 
A lot of people citing adoption and increasing. So that's really what you want to see for a name like this. The problem is just that Microsoft is doing its playbook. At the end of the day, if I'm going to do a POC, why am I going to pay more for UiPath, or even take on another separate bill, when we know everyone's consolidating vendors, if my license already includes Microsoft Power Automate? It might not be perfect, it might not be as good, but what I'm hearing all the time is it's good enough, and I really don't want another invoice. >> Right. So how does UiPath, you know, and Automation Anywhere, how do they compete with that? Well, the way they compete with it is they got to have a better product. They've got to have a product that's 10 times better. You know, they- >> Right. >> they're not going to compete based on being the lowest cost; Microsoft's got that locked up. Or on being the easiest, you know, Microsoft basically gives it away for free, and that's their playbook. So that's, you know, up to UiPath. UiPath brought on Rob Enslin, I've interviewed him. Very, very capable individual, is now Co-CEO. So he's kind of bringing that adult supervision in, and really tightening up the go-to-market. So, you know, we know this company has been a rocket ship, and so getting some control on that and really getting focused like a laser, you know, could be good things ahead there for that company. Okay. >> One of the problems, if I could real quick, Dave, is what the use cases are. When we first came out with RPA, everyone was super excited, like, "No, UiPath is going to be great for super powerful projects, use cases." That's not what RPA is being used for. As you mentioned, it's being used for mundane tasks, so it's not automating complex things, which I think UiPath was built for. So if you were going to get UiPath, and choose that over Microsoft, it's going to be 'cause you're doing it for a more powerful use case, where it is better. But the problem is that's not where the enterprise is using it. Enterprises are using this for basic rote tasks, and simply, Microsoft Power Automate can do that. >> Yeah, it's interesting. I've had people on theCUBE that are both Microsoft Power Automate customers and UiPath customers, and I've asked them, "Well you know, how do you differentiate between the two?" And they've said to me, "Look, our users and personal productivity users, they like Power Automate, they can use it themselves, and you know, it doesn't take a lot of, you know, support on our end." The flip side is you could do that with UiPath, but like you said, there's more of a focus now on end-to-end enterprise automation and building out those capabilities. So it's increasingly a value play, and that's going to be obviously the challenge going forward. Okay, my last one, and then I think you've got some bonus ones. Number 10, hybrid events are the new category. Look, if I can get a thousand inbounds that are largely self-serving, I can do my own here, 'cause we're in the events business. (Eric chuckling) Here's the prediction though, and this is a trend we're seeing: the number of physical events is going to dramatically increase. That might surprise people, but most of the big giant events are going to get smaller. The exception is AWS with Reinvent, and I think Snowflake's going to continue to grow. 
So there are examples of physical events that are growing, but generally, most of the big ones are getting smaller, and there are going to be many more smaller, intimate regional events and road shows. These micro-events, they're going to be stitched together. Digital is becoming a first-class citizen, so people really got to get their digital acts together, and brands are prioritizing earned media, and they're beginning to build their own news networks, going direct to their customers. And so that's a trend we see, and I, you know, we're right in the middle of it, Eric, so you know we're going to, you mentioned RSA, I think that's perhaps going to be one of those crazy ones that continues to grow. It's shrunk, and then it, you know, 'cause last year- >> Yeah, it did shrink. >> right, it was the last one before the pandemic, and then they sort of made another run at it last year. It was smaller, but it was very vibrant, and I think this year's going to be huge. Mobile World Congress is another one; we're going to be there end of Feb. That's obviously a big, big show, but in general, the brands and the technology vendors, even Oracle, are going to scale down. I don't know about Salesforce. We'll see. You had a couple of bonus predictions. Quantum and maybe some others? Bring us home. >> Yeah, sure. I got a few more. I think we touched upon one, but I definitely think the data prep tools are facing extinction, unfortunately, you know; the Talends, Informatica, are some of those names. The problem there is that the BI tools are kind of including data prep in them already. You know, an example of that is Tableau Prep Builder, and then in addition, advanced NLP is being worked in as well. ThoughtSpot, Intelius, both often say that as their selling point. Tableau has Ask Data, Qlik has Insight Bot, so you don't have to really be intelligent on data prep anymore. A regular business user can just self-query, using either the search bar, or even just speaking what it needs, and these tools are kind of doing the data prep for it. I don't think that's, you know, an out-in-left-field type of prediction, but the time is nigh. The other one I would also state is that I think knowledge graphs are going to break through this year. Neo4j in our survey is growing in pervasion and mindshare. So more and more people are citing it, AWS Neptune's getting its act together, and we're seeing that spending intentions are growing there. TigerGraph is also growing in our survey sample. I just think that the time is now for knowledge graphs to break through, and if I had to do one more, I'd say real-time streaming analytics moves from the very, very rich, big enterprises downstream; more people are actually going to be moving towards real-time streaming, again, because the data prep tools and the data pipelines have gotten easier to use, and I think the ROI on real-time streaming is obviously there. So those are three that didn't make the cut, but I thought deserved an honorable mention. >> Yeah, I'm glad you did. Several weeks ago, we did an analyst prediction roundtable, if you will, a Cube session power panel, with a number of data analysts, and you know, streaming, real-time streaming was top of mind. So glad you brought that up. Eric, as always, thank you very much. I appreciate the time you put in beforehand. I know it's been crazy, because you guys are wrapping up, you know, the last quarter survey in- >> Been a nuts three weeks for us. (laughing) 
I love the fact that you're doing, you know, the ETS survey now, I think it's quarterly now, right? Is that right? >> Yep. >> Yep. So that's phenomenal. >> Four times a year. I'll be happy to jump on with you when we get that done. I know you were really impressed with that last time. >> It's unbelievable. This is so much data at ETR. Okay. Hey, that's a wrap. Thanks again. >> Take care Dave. Good seeing you. >> All right, many thanks to our team here, Alex Myerson as production, he manages the podcast force. Ken Schiffman as well is a critical component of our East Coast studio. Kristen Martin and Cheryl Knight help get the word out on social media and in our newsletters. And Rob Hoof is our editor-in-chief. He's at siliconangle.com. He's just a great editing for us. Thank you all. Remember all these episodes that are available as podcasts, wherever you listen, podcast is doing great. Just search "Breaking analysis podcast." Really appreciate you guys listening. I publish each week on wikibon.com and siliconangle.com, or you can email me directly if you want to get in touch, david.vellante@siliconangle.com. That's how I got all these. I really appreciate it. I went through every single one with a yellow highlighter. It took some time, (laughing) but I appreciate it. You could DM me at dvellante, or comment on our LinkedIn post and please check out etr.ai. Its data is amazing. Best survey data in the enterprise tech business. This is Dave Vellante for theCube Insights, powered by ETR. Thanks for watching, and we'll see you next time on "Breaking Analysis." (upbeat music beginning) (upbeat music ending)

Published Date : Jan 29 2023


Is Data Mesh the Next Killer App for Supercloud?


 

(upbeat music) >> Welcome back to our Supercloud 2 event, live coverage here of the stage performance in Palo Alto, syndicating around the world. I'm John Furrier with Dave Vellante. We've got exclusive news and a scoop here for SiliconANGLE and theCUBE. Zhamak Dehghani, creator of data mesh, has formed a new company called Nextdata, Nextdata.com. She's a CUBE alumni and contributor to our supercloud initiative, as well as our coverage and Breaking Analysis with Dave Vellante on data, the killer app for supercloud. Zhamak, great to see you. Thank you for coming into the studio, and congratulations on your newly formed venture and continued success on the data mesh. >> Thank you so much. It's great to be here. Great to see you in person. >> Dave: Yeah, finally. >> Wonderful. Your contributions to the data conversation have been well documented, certainly by us and others in the industry. Data mesh is taking the world by storm. Some people are debating it, throwing cold water on it. Some are thinking it's the next big thing. Tell us about the data mesh super data apps that are emerging out of cloud. >> I mean, data mesh, as you said, the pain points that it surfaced were universal. Everybody said, "Oh, why didn't I think of that?" It was just an obvious next step, and people are approaching it, implementing it. I guess over the last few years I've been involved in many of those implementations, and I guess supercloud is somewhat a prerequisite for it, because data mesh, and building applications using data mesh, is about sharing data responsibly across boundaries. And those boundaries include organizational boundaries, cloud technology boundaries, and trust boundaries. >> I want to bring that up because your venture, Nextdata, is newly formed. Tell us about that. What wave is it riding? What specifically are you targeting? What's the pain point? >> Absolutely. Yes, so Nextdata is the result of, I suppose, the pains that I suffered implementing data mesh for many of the organizations. Basically, a lot of organizations that I've worked with, they want decentralized data. So they really embrace this idea of decentralized ownership of the data, but yet they want interconnectivity through standard APIs, yet they want discoverability and governance. So they want to have policies implemented, they want to govern that data, they want to be able to discover that data, and yet they want to decentralize it. And we do that with a developer experience that is easy and native to a generalist developer. So we try to find, I guess, the common denominator that solves those problems and enables that developer experience for data sharing. >> Since you just announced the news, what's been the reaction? >> I just announced the news right now, so what's the reaction? >> But people in the industry know you did a lot of work in the area. What has been some of the feedback on the new venture, in terms of the approach, the customers, the problem? >> Yeah, so we've been in stealth mode, so we haven't publicly talked about it, but folks that have been close to us have in fact reached out, and we already have implementations of our pilot platform with early customers, which is super exciting. And we're going to have multiple of those. Of course, we're a tiny, tiny company. We can't have many of those, but we are going to have multiple pilot implementations of our platform in the real world, with real global large-scale organizations that have real-world problems. So we're not going to build our platform in a vacuum. 
And that's what's happening right now. >> Zhamak, when I think about your role at ThoughtWorks, you had a very wide observation space with a number of clients, helping them implement data mesh and other things as well prior to your data mesh initiative. But when I look at data mesh, at least the implementations that I've seen, they're very narrow. I think of JPMC, I think of HelloFresh. They generally, obviously not surprisingly, don't include the big vision of inclusivity across clouds, across different data stores. But it seems like people are having to go through some gymnastics to get to the organizational reality of decentralizing data and at least pushing data ownership to the line of business. How are you approaching, or are you approaching, solving that problem? Are you taking a narrow slice? What can you tell us about Nextdata? >> Yeah, absolutely. Gymnastics, the cute word to describe what the organizations have to go through. And one of those problems is that the data, as you know, resides on different platforms, it's owned by different people, it's processed by pipelines that who knows who owns. So there's this very disparate and disconnected set of technologies that were very useful for when we thought about data and processing as a centralized problem. But when you think about data as a decentralized problem, the cost of integration of these technologies in a cohesive developer experience is what's missing. And we want to focus on that cohesive, end-to-end developer experience to share data responsibly in these autonomous units. We call them data products, I guess, in data mesh. That encapsulates computation, the policies that govern that data, discoverability. So I guess, I heard this expression in the last talks, that you can have your cake and eat it too. So we want people to have their cake, which is data in different places, decentralization, and eat it too, which is interconnected access to it. So we start with standardizing and codifying this idea of a data product container that encapsulates data, computation, and the APIs to get to it, in a technology-agnostic way, in an open way. And then sit on top and use existing tech, Snowflake, Databricks, whatever exists, the millions of dollars of investments that companies have made; sit on top of those, but create this cohesive, integrated experience where the data product is a first-class primitive. And that's really key here. The language and the modeling that we use is really native to data mesh, which is that I'm building a data product, I'm sharing a data product, and that encapsulates: I'm providing metadata about this, I'm providing computation that's constantly changing the data, I'm providing the API for that. So we're trying to kind of codify and create a new developer experience based on that. And the developer, both from the provider side and the user side, is connected to peer-to-peer data sharing with the data product as a first-class primitive concept. >> So the idea would be developers would build applications leveraging those data products, which are discoverable and governed. Now today you see some companies, take Snowflake for example, attempting to do that within their own little walled garden. They even at one point used the term mesh; I don't know if they pulled back on that. And then they became aware of some of your work. But a lot of the things that they're doing within their little insulated environment support that governance; they're building out an ecosystem. What's different in your vision? >> Exactly.
So we realized that, and this is a reality: you go to organizations, they have Snowflake, and half of the organization happily operates on Snowflake. And the other half says, "Oh, we are on bare infrastructure on AWS, or we are on Databricks." This is the reality. This supercloud that's written up here, it's about working across boundaries of technology. So we try to embrace that. And even for our own technology, with the way we're building it, we say, "Okay, nobody's going to use only Nextdata's data mesh operating system. People will have different platforms." So you have to build with openness in mind, and in the case of Snowflake, I think they have very, I'm sure, very happy customers, as long as customers can be on Snowflake. But once you cross that boundary of platforms, then that becomes a problem. And we try to keep that in mind in our solution. >> So it's worth reviewing that basically the concept of data mesh is that whether it's a data lake or a data warehouse, an S3 bucket, or an Oracle database, they should all be inclusive inside of the mesh. >> We did a session with AWS on the startup showcase, data as code. And remember, I wrote a blog post in 2007 called "Data as the New Developer Kit." Back then we used to call them developer kits, if you remember. And we said at that time, whoever can code data will have a competitive advantage. >> Aren't the machines going to be doing that? Didn't we just hear that? >> Well, we have. Hey, Siri. Hey, Cube, find me that best video for data mesh. There it is. But this is the point: what's happening is that now data has to be addressable, for machines and for coding, because you need to call the data. So the question is, how do you manage that complexity, making data as promiscuous as possible, making it available, while also governing it? Because it's a trade-off. The more open you make it, the better the machine learning. But then there's the governance issue. So you need an OS to handle this, maybe. >> Yes. So yes, well, our mental model for our platform is an OS, an operating system. Operating systems have shown us how you can abstract what's complex and take care of a lot of complexities, but yet provide an open and dynamic enough interface. So we think about it that way. We try to solve the problem of policies living with the data; enforcement of the policies happens at the most granular level, which is in this concept of the data product. And that would happen whether you read, write, or access a data product. But we can never imagine what all these policies could be. So our thinking is we should have an open policy framework that allows organizations to write their own policy drivers and policy definitions, and encode them and encapsulate them in this data product container. But I'm not going to fool myself to say that that's going to solve the problem that you just described. I think we are in this, I don't know, if I look into my crystal ball, what I think might happen is that right now the primitives that we work with to train machine learning models are still bits and bytes and data. They're fields, rows, columns, and that creates quite a large surface area, an attack area, for privacy of the data. So perhaps one of the trends that we might see is this evolution of data APIs to become more and more computationally aware, to bring the compute to the data to reduce that surface area. So you can really leave the control of the data to the sovereign owners of that data, that data product.
So I think that evolution of our data APIs will perhaps become more and more computational. So you describe what you want, and the data owner decides how to manage it. >> That's interesting, Dave, 'cause it's almost like, we just talked about ChatGPT in the last segment we had, and machine learning has been around the industry for a while. It's almost as if you're starting to see reasoning come into it; the data reasoning is starting to appear, not just metadata. Using the data to reason so that you don't have to expose the raw data. So almost like a, I won't say curation layer, but an intelligence layer. >> Zhamak: Exactly. >> Can you share your vision on that? 'Cause that seems to be where the dots are connecting. >> Yes, perhaps further into the future, because just from where we stand, we still have to create that bridge of familiarity between that future and the present. So we are still in that bridge-making mode. However, by just the basic notion of saying, "I'm going to put an API in front of my data," and that API today might be as primitive as a level of indirection, as in: you tell me what you want, tell me who you are, let me go process that, all the policies and lineage, and insert all of this intelligence that needs to happen. And then today, I will still give you a file. But by just defining that API and standardizing it, now we have this amazing extension point where we can say, "Well, in the next revision of this API, you not only tell me who you are, but you actually tell me what intelligence you're after, what's the logic that I need to go and compute on your behalf." And you can evolve that. Now you have a point of evolution toward this very futuristic, I guess, future, where you just described the question that you're asking from ChatGPT. >> Well, this is the supercloud. Go ahead, Dave. >> I have a question from a fan, I've got to get it in. It's George Gilbert. And so his question is, you're blowing away the way we synchronize data from operational systems to the data stack to applications. So the concern that he has, and he wants your feedback on this, is that data product app devs get exposed to more complexity with respect to moving data between data products, or maybe it's attributes between data products. How do you respond to that? Is that a problem? Is that something that is overstated, or do you have an answer for that? >> Absolutely. So I think there's a sweet spot in getting data product developers closer to the app, but yet not overburdening them with the complexity of the application and application logic, and yet reducing their cognitive load by localizing what they need to know about, which is that domain where they're operating within. Because what's happening right now? Data engineers, with a ton of empathy for them and the high threshold of pain that they can deal with, have been centralized, put into the data team, and given this unbelievable task of making meaning out of data: put semantics over it, curate it, cleanse it, and so on. So what we are saying is, get those folks embedded into the domain, closer to the application developers. These are still separately moving units; your app and your data products are independent, yet tightly coupled with each other based on the context of the domain.
So reduce cognitive load by localizing what they need to know about to the domain, get them closer to the application, but yet have them separate from the app, because the app provides a very different service: transactional data for my e-commerce transaction. The data product provides a very different service: longitudinal data for the variety of this intelligent analysis that I can do on the data. But yet it's all within the domain of e-commerce or sales or whatnot. >> It's a lot of decoupling and coupling to create that cohesive architecture. So I have to ask you, this is an interesting question, 'cause it came up on theCUBE all last year. Back in the old server data center days and cloud, Google coined the term SRE, site reliability engineer, for someone to look over the hundreds of thousands of servers. We asked the question of the data engineering community, who have been suffering, by the way, I agree: is there an SRE-like role for data? Because in a way, data engineering, that platform engineer, they are like the SRE for data. In other words, managing the large scale to enable automation and self-service. What's your thoughts and reaction to that? >> Yes, exactly. So maybe we go through that history of how SRE came to be. So we had the first DevOps movement, which was: remove the wall between dev and ops and bring them together. So you have one cross-functional unit of the organization that's responsible for, you build it, you run it. So then there is no, I'm going to just shoot my application over the wall for somebody else to manage. So we did that, and then, as we decentralized and had these many microservices running around, we had to create a layer that abstracted a lot of the complexity around monitoring, observing, and running all of that, while giving autonomy to this cross-functional team. And that's where the SRE, a new generation of engineers, came to exist. So I think if I just look at... >> Hence, Kubernetes. >> Hence, hence, exactly. Hence, chaos engineering. Hence, embracing the complexity and messiness, and putting engineering discipline into embracing that, and yet giving a cohesive and high-integrity experience of those systems. So I think if we look at that evolution, perhaps something like that is happening by bringing data and apps closer: making them these domain-oriented data product teams, or domain-oriented cross-functional teams full stop, and still having a very advanced, maybe at the platform level, infrastructure-level operational team. They're not busy doing two jobs, which is taking care of domains and the infrastructure; they're building infrastructure that embraces that complexity and interconnectivity of this data process. >> So you see similarities? >> Absolutely. But I feel like we're probably in the earlier days of that movement. >> So it's a data DevOps kind of thing happening, where scale is happening. Good things are happening, yet it's a little bit fast and loose, with some complexities to clean up. >> Yes. This is a different restructuring. As you said, the job of this industry as a whole, an architect, is to decompose and recompose, decompose and recompose in new ways. And now we're decomposing the centralized team and recomposing them as domains. >> So is data mesh the killer app for supercloud? >> You had to do this to me. >> Sorry, I couldn't resist. >> I know. Of course you want me to say this. >> Yes. >> Yes, of course.
I mean, supercloud, the terminology, supercloud, open cloud, but I think in the spirit of it, this embracing of diversity and giving autonomy for people to make decisions for what's right for them, and not locking them in. I think just embracing that is baked into how data mesh assumes the world works. >> Well, thank you so much for coming on Supercloud 2. We really appreciate it. Data has driven this conversation. The success of data mesh has really opened up the conversation and exposed the slow-moving data industry. >> Dave: Been a great catalyst. >> And that's now going well. We can move faster. So thanks for coming on. >> Thank you for hosting me. It was wonderful. >> Supercloud 2, live here in Palo Alto, our stage performance. I'm John Furrier with Dave Vellante. We'll be back with more after this short break. Stay with us all day for Supercloud 2. (upbeat music)
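To make the data product container Dehghani describes more concrete, here is a minimal sketch in Python. It illustrates the concept only, and is not Nextdata's actual platform or API; every name in it is hypothetical, and a real implementation would add lineage, discovery, and a technology-agnostic serialization format.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# A policy is just a named check evaluated on every read or write.
PolicyFn = Callable[[dict], bool]

@dataclass
class DataProduct:
    """Minimal sketch of a data product container: data, metadata,
    computation, and policies encapsulated as one shareable unit."""
    name: str
    metadata: Dict[str, str]                       # discoverability info
    policies: List[PolicyFn] = field(default_factory=list)
    _rows: List[dict] = field(default_factory=list)

    def write(self, row: dict) -> None:
        # Policies are enforced at the most granular level: the product itself.
        if not all(policy(row) for policy in self.policies):
            raise PermissionError(f"policy violation writing to {self.name}")
        self._rows.append(row)

    def read(self, requester: str) -> List[dict]:
        # A real implementation would check the requester against access
        # policies and record lineage; here we only log the access.
        print(f"{requester} read {len(self._rows)} rows from {self.name}")
        return list(self._rows)

# Usage: a PII policy travels with the product, wherever it is hosted.
no_ssn = lambda row: "ssn" not in row
orders = DataProduct(
    name="ecommerce.orders",
    metadata={"owner": "ecommerce-team", "format": "jsonl"},
    policies=[no_ssn],
)
orders.write({"order_id": 1, "total": 42.0})
print(orders.read(requester="analytics-team"))
```

The design point the sketch tries to capture is the one made in the interview: the policy is evaluated inside the container on every access, rather than by whatever platform happens to host the data.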

Published Date : Jan 25 2023


Closing Remarks | Supercloud2


 

>> Welcome back everyone to the closing remarks here before we kick off our ecosystem portion of the program. We're live in Palo Alto for theCUBE's special presentation of Supercloud 2. It's the second edition; the first one was in August. I'm John Furrier with Dave Vellante. Here to wrap up with our special guest analyst George Gilbert, investor and industry legend, former colleague of ours, analyst at Wikibon. George, great to see you. Dave, you know, wrapping up this day, what a phenomenal program. We had contributions from industry vendors, industry experts, practitioners and customers building and redefining their companies' business models, rolling out technology for Supercloud and multicloud, and ultimately changing how they do data. And data was the theme today. So a very, very great program. Before we jump into our favorite parts, let's give a shout out to the folks who make this possible. Free content is our mission. We'll always stay true to that mission. We want to thank VMware, alkira, ChaosSearch, prosimo for being sponsors of this great program. We will have Supercloud 3 coming up in a month or so, or two months. We'll see. Or sooner, we don't know. But it'll be more about security, but a lot more momentum. Okay, so that's... >> And don't forget too that this program is not going to end now. We've got a whole ecosystem speaks track, so stay tuned for that. >> John: Yeah, we got another 20 interviews. Feels like it. >> Well, you're going to hear from Saks, Veronika Durgin. You're going to hear from Western Union, Harveer Singh. You're going to hear from Ionis Pharmaceuticals, Nick Taylor. Brian Gracely chimes in on Supercloud. So he's the man behind the Cloudcast. >> Yeah, and you know, the practitioners again, pay attention also to the cloud networking interviews. A lot of change going on there that's going to be disruptive and actually change the landscape as well. Again, as Supercloud progresses to be the next big thing, if you're not on this next wave, you'll be driftwood, as Pat Gelsinger says. >> Yep. >> To kick off the closing segments, George, Dave, this is a wave that's been identified. Again, people debate the word Supercloud all you want. It is a gateway to multicloud; eventually it is the standard for new applications, new ways to do data. There's new computer science being generated and customer requirements being addressed. So it's the confluence of, you know, tectonic plates shifting in the industry, new computer science, seeing things like AI and machine learning and data at the center of it, and new infrastructure all kind of coming together. So, to me, that's my takeaway so far. That is the big story, and it's going to change society and ultimately the business models of these companies. >> Well, you think about it, we came out of the financial crisis. We've had 10, 12 years of tech success, despite Covid, right? And just now CIOs are starting to hit the brakes. And so my point is, you've had all this innovation building up for a decade, and you've got this massive ecosystem that is running on the cloud, and the ecosystem is saying, hey, we can have even more value by tapping best of breed across clouds. And you've got customers saying, hey, we need help. We want to do more, and we want to point our business and our intellectual property, our software tooling, at our customers and monetize our data. So you have all these forces coming together, and it's sort of entering a new era.
>> George, I want to go to you for a second, because you are a big contributor to this event. Your interview with Bob Muglia with Dave was, I thought, a watershed moment for me, to hear how data apps and databases are being rethought, because we've been seeing a diversity of databases, with Amazon Web Services, you know, promoting that no one database rules the world. Now it's not a one-database kind of architecture that's pulling these new apps. What's your takeaway from this event? >> So if you keep your eye on this North Star, where instead of building apps that are based on code, you're building apps that are defined by data coming off of things that are linked to the real world, like people, places, things and activities. Then the idea is, and the example we use is, you know, Uber, but it could be, you know, amazon.com is defined by stuff coming off data in the Amazon ecosystem or marketplace. And then the question is, and everyone was talking at different angles on this, which was: where does the data live? How much do you hide from the developer? You know, and when can you offer that? You know, and you started with Walmart, which was describing traditional apps that are just code. And frankly, that's easier to make cross-cloud and, you know, essentially location-independent. As soon as you have data, you need data management technology that a customer does not have the sophistication to build. And then the argument was, so how much can you hide from the developer who's building data apps? Tristan's version was: you take the modern data stack and you start adding these APIs that define business concepts, like bookings, billings and revenue, you know, or in the Uber example, like drivers and riders, you know, and ETAs and prices. But those things still execute on the data warehouse or data lakehouse. Then Bob Muglia was saying you're not really hiding enough from the developer, because you still have to say how to do all that. And his vision is, not only do you hide where the data is, but you hide how to get at all that code by just saying what you want. You define how a car and how a driver and how a rider works, and then those things automatically get figured out under the covers. >> So huge challenges, right? There's governance, there's security; they could be big blockers to, you know, the Supercloud, but the industry's going to be attacking that problem. >> Well, what's your take? What's your favorite segment? Zhamak Dehghani came on, she's starting a new company, exclusive news. That was a big notable moment for theCUBE. She launched her company. She pioneered the data mesh concept. And I think what George is saying, and what data mesh points to, is something that we've been saying for a long time: that data is now going to flip the script on how apps behave. And the Uber example I think is illustrative, 'cause people can relate to Uber. But imagine that for every business, whether it's a manufacturing business or retail or oil and gas or FinTech: they can look at their business like a game, almost gamify it with data, riders, cars, you know, moving data around, the value of data. This is something that Adam Selipsky teased out at AWS, Dave. So what's your takeaway from this Supercloud? Where are we in your mind? >> Well, the big thing is data products and decentralizing your data architecture, but putting data in the hands of domain experts who can actually monetize the data. And I think that's, to me, that's really exciting.
Because look, the financial industry has always been building data products. Mortgage-backed securities are a data product. But why should the financial industry have all the fun? I mean, virtually every organization can tap its ecosystem, build data products, take its internal IP and processes and software and point it to the world, and actually begin to make money out of it. >> Okay, so let's go around the horn. I'll start, I'll give you guys some time to think. Next question: what did you learn today? I learned that I think it's an infrastructure game, and talking to Kit Colbert at VMware, I think it's all about infrastructure refactoring, and I think the data's going to be an ingredient that's going to be operating-system-like. I think you're going to see the infrastructure influencing operations that will enable Superclouds to be real. And developers won't even know what a Supercloud is, because they'll be using it. The operations focus is going to be very critical. Just like the DevOps movement started cloud native, I think you're going to see a data-native movement, and I think infrastructure is critical as people go to the next level. That's my big takeaway today. And I'll say the data conversation is at the center. I think security and data are going to be always-active, horizontally scalable concepts, but every company's going to reset their infrastructure, how it looks, and if it's not set up for data, or the things they need to be agile on, it's going to be a non-starter. So I think that's the cloud NextGen: distributed computing. >> I mean, what came into focus for me was, I think the hyperscalers are going to continue to do their thing, you know, and be very, very successful, and they're each coming at it from different approaches. We talk about this all the time in theCUBE. Amazon, the best infrastructure; you know, Google's got its, you know, data and AI thing, and it's playing catch-up; and Microsoft's got this massive estate. Okay, cool. Check. The next wave of innovation, which is coming from data (I've always said follow the data, that's where the money's going to be), is going to come from other places. People want to be able to, organizations want to be able to share data across clouds, across their organization, outside of their ecosystem, and make money with that data sharing. They don't want to FTP it anymore. I got it. You take it. They want to work with live data in real time, and I think the edge, we didn't talk much about the edge today, is going to take that to even a new level: real-time inferencing at the edge, AI, and being able to do new things with data that we haven't even seen. But playing around with ChatGPT, it's blowing our minds. And I think you're right, it's like when we first saw the browser: holy crap, this is going to change the world. >> Yeah. And ChatGPT, by the way, is going to create a wave of machine learning and data refactoring for sure. But also Howie Liu had an interesting comment. He was asked by a VC how much it would cost to replicate that, and he said it's in the hundreds of millions, not billions. Now if you asked that same question, how much does it cost to replicate AWS? The CapEx alone is unstoppable; they're already done. So, you know, the hyperscalers are going to continue to boom. I think they're going to drive the infrastructure. I think Amazon's going to be really strong at silicon and physics, and squeeze every atom out of every physical thing, until latency is your bottleneck, and the rest is all going to be...
>> Now that blew me away, a hundred million to create kind of an OpenAI, you know, competitor. Look at companies like Lacework. >> John: Some people have that much cash on the balance sheet. >> These are security companies that have raised a billion dollars, right? To compete. You know, so... >> If you're not shifting left, what do you do with data, shift up? >> But, you know. >> What did you learn, George? >> I'm listening to you, and I think you're helping me crystallize something, which is: the software infrastructure to enable the data apps is wide open. The way Zhamak described it is, if you want a data product like a sales and operations plan, that is built on other data products, like a sales plan, which has a forecast in it; it has a production plan, it has a procurement plan, and then the sales and operations plan is actually a composition of all those, and they call each other. Now in her current platform, you need to expose to the developer a certain amount of mechanics on how to move all that data, when to move it, like what happens if something fails. Now Muglia is saying, I can hide that completely. So all you have to say is what you want, and the underlying machinery takes care of everything. The problem is Muglia's stuff is still a few years off. And Tristan is saying, I can give you much of that today, but it's got to run in the data warehouse. So the trade-offs go all different ways. But again, I agree with you that the cloud platform vendors, or the ecosystem participants who can run across cloud platforms and private infrastructure, will be the next platform. And then the cloud platform is sort of where you run the big honking centralized stuff, where someone else manages the operations. >> Sounds like middleware to me, Dave. >> And the key is, I'll just end with this: the key is being able to get to the data, whether it's in a data warehouse or a data lake or an S3 bucket or an object store, an Oracle database, whatever. It's got to be inclusive; that is critical to execute on the vision that you just talked about, 'cause that data's in different systems, and you're not going to put it all into some new system. >> So creating middleware in the cloud, that's what it sounds like to me. >> It's like, you discovered PaaS. >> It's a super PaaS. >> But it's platform services, 'cause PaaS connotes a tightly integrated platform. >> Well, this is the real thing that's going on. We're going to see how this evolves. George, great to have you on. Dave, thanks for the summary. I enjoyed this segment a lot today. This ends our stage performance live here in Palo Alto. As you know, we're a live stage performance and syndicate out virtually. Our afternoon program's going to kick in now; you're going to hear some great interviews. We've got ChaosSearch, defining the network Supercloud from prosimo, the future of the cloud network with alkira. We've got Saks, a retail company here, Veronika Durgin. We've got Dave with Western Union. So a lot of customers, a pharmaceutical company, Warner Brothers Discovery, a media company. And then, you know, what is really needed for Supercloud, good panels. So stay with us for the afternoon program. That's part two of Supercloud 2. This is a wrap-up for our stage live performance. I'm John Furrier with Dave Vellante and George Gilbert here wrapping up. Thanks for watching, and enjoy the program. (bright music)
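George's description of Tristan's approach, business concepts like bookings defined once and exposed as APIs that still execute on the warehouse, can be sketched roughly as follows. This is a hypothetical illustration, not dbt's actual semantic layer API; the names and the SQL compilation are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Metric:
    """A business concept defined once and exposed as an API.
    App developers call it without knowing how it's calculated."""
    name: str
    sql_expression: str   # how the metric is computed
    table: str            # where it executes (warehouse or lakehouse)

    def to_sql(self, group_by: str) -> str:
        # Compile the metric definition into SQL that runs where the data lives.
        return (f"SELECT {group_by}, {self.sql_expression} AS {self.name} "
                f"FROM {self.table} GROUP BY {group_by}")

bookings = Metric(
    name="bookings",
    sql_expression="SUM(amount)",
    table="analytics.orders",
)

# An application developer asks for bookings by month and gets SQL that
# executes on the warehouse; no pipeline knowledge required.
print(bookings.to_sql(group_by="order_month"))
```

The contrast drawn in the conversation is that Muglia's vision would hide even this compilation step: the developer declares the concepts, and the machinery decides how and where to compute them.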

Published Date : Jan 17 2023


Breaking Analysis: Supercloud2 Explores Cloud Practitioner Realities & the Future of Data Apps


 

>> Narrator: From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> Enterprise tech practitioners, like most of us, want to make their lives easier so they can focus on delivering more value to their businesses. And to do so, they want to tap best-of-breed services in the public cloud, but at the same time connect their on-prem intellectual property to emerging applications which drive top-line revenue and bottom-line profits. But creating a consistent experience across clouds and on-prem estates has been an elusive capability for most organizations, forcing trade-offs and injecting friction into the system. The need to create seamless experiences is clear, and the technology industry is starting to respond with platforms, architectures, and visions of what we've called the Supercloud. Hello and welcome to this week's Wikibon Cube Insights, powered by ETR. In this Breaking Analysis we give you a preview of Supercloud 2, the second event of its kind that we've had on the topic. Yes, folks, that's right, Supercloud 2 is here. As of this recording, it's just about four days away: 33 guests, 21 sessions, combining live discussions and fireside chats from theCUBE's Palo Alto studio with prerecorded conversations on the future of cloud and data. You can register for free at supercloud.world. And we are super excited about the Supercloud 2 lineup of guests, whereas Supercloud 22 in August was all about refining the definition of Supercloud, testing its technical feasibility, and understanding various deployment models. Supercloud 2 features practitioners, technologists and analysts discussing what customers need, with real-world examples of Supercloud, and will expose thinking around a new breed of cross-cloud apps, data apps, if you will, that change the way machines and humans interact with each other. Now, the example we'd use: if you think about applications today, say a CRM system, sales reps, what are they doing? They're entering data into opportunities, they're choosing products, they're importing contacts, et cetera. And sure, the machine can then take all that data and spit out a forecast by rep, by region, by product, et cetera. But today's applications are largely about filling in forms and/or codifying processes. In the future, the Supercloud community sees a new breed of applications emerging, where data resides on different clouds, in different data stores, databases, lakehouses, et cetera. And the machine uses AI to inspect the e-commerce system, the inventory data, supply chain information and other systems, and puts together a plan without any human intervention whatsoever. Think about a system that orchestrates people, places and things, like an Uber for business. So at Supercloud 2, you'll hear about this vision along with some of today's challenges facing practitioners. Zhamak Dehghani, the founder of data mesh, is a headliner. Kit Colbert also is headlining. He laid out at the first Supercloud an initial architecture for what that's going to look like. That was last August. And he's going to present his most current thinking on the topic. Veronika Durgin of Saks will be featured and talk about data sharing across clouds and, you know, what she needs in the future. One of the main highlights of Supercloud 2 is a dive into Walmart's Supercloud. Other featured practitioners include Western Union, Ionis Pharmaceuticals, Warner Media.
We've got deep, deep technology dives with folks like Bob Muglia, David Flynn, Tristan Handy of DBT Labs, and Nir Zuk, the founder of Palo Alto Networks, focused on security. Thomas Hazel, who's going to talk about a new type of database for Supercloud. There are several analysts, including Keith Townsend, Maribel Lopez, George Gilbert, Sanjeev Mohan, and so many more guests, we don't have time to list them all. They're all up on supercloud.world with a full agenda, so you can check that out. Now let's take a look at some of the things that we're exploring in more detail, starting with the Walmart Cloud Native Platform, they call it WCNP. We definitely see this as a Supercloud, and we dig into it with Jack Greenfield. He's the head of architecture at Walmart. Here's a quote from Jack: "WCNP is an implementation of Kubernetes for the Walmart ecosystem. We've taken Kubernetes off the shelf as open source." By the way, they do the same thing with OpenStack. "And we have integrated it with a number of foundational services that provide other aspects of our computational environment. Kubernetes off the shelf doesn't do everything." And so what Walmart chose to do, they took a do-it-yourself approach to build a Supercloud, for a variety of reasons that Jack will explain, along with Walmart's so-called triplet architecture connecting on-prem, Azure, and GCP. No surprise, there's no Amazon at Walmart, for obvious reasons. And what they do is create a common experience for devs across clouds. Jack is going to talk about how Walmart is evolving its Supercloud in the future. You don't want to miss that. Now, next, let's take a look at how Veronika Durgin of Saks thinks about data sharing across clouds. Data sharing, we think, is a potential killer use case for Supercloud. In fact, let's hear it in Veronika's own words. Please play the clip. >> How do we talk to each other? And more importantly, how do we data share? You know, I work with data, you know, this is what I do. So if, you know, I want to get data from a company that's using, say, Google, how do we share it in a smooth way, where it doesn't have to be this crazy, I don't know, SFTP file moving? So that's where I think Supercloud comes to me in my mind, is like practical applications. How do we create that mesh, that network, that we can easily share data with each other? >> Now, data mesh is a possible architectural approach that will enable more facile data sharing and the monetization of data products. You'll hear Zhamak Dehghani live in studio talking about what standards are missing to make this vision a reality across the Supercloud. Now, one of the other things that we're really excited about is digging deeper into the right approach for Supercloud adoption. And we're going to share a preview of a debate that's going on right now in the community. Bob Muglia, former CEO of Snowflake and Microsoft exec, was kind enough to spend some time looking at the community's Supercloud definition, and he felt that it needed to be simplified. So in near real time he came up with the following definition that we're showing here. I'll read it: "A Supercloud is a platform that provides programmatically consistent services hosted on heterogeneous cloud providers." So not only did Bob simplify the initial definition, he stressed that the Supercloud is a platform versus an architecture, implying that the platform provider, e.g. Snowflake, VMware, Databricks, Cohesity, et cetera, is responsible for determining the architecture.
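One way to read Muglia's definition is as a single programmatic interface with per-cloud implementations underneath. The toy sketch below illustrates the idea; the class and method names are hypothetical, and the provider calls are stubbed where a real implementation would invoke each cloud's SDK.

```python
from abc import ABC, abstractmethod
from typing import List

class ObjectStore(ABC):
    """One programmatically consistent interface, many clouds underneath."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class S3Store(ObjectStore):
    def put(self, key, data):
        # A real implementation would call the AWS SDK here.
        print(f"S3 put {key}")
    def get(self, key):
        print(f"S3 get {key}")
        return b""

class GCSStore(ObjectStore):
    def put(self, key, data):
        # A real implementation would call the Google Cloud SDK here.
        print(f"GCS put {key}")
    def get(self, key):
        print(f"GCS get {key}")
        return b""

def replicate(stores: List[ObjectStore], key: str, data: bytes) -> None:
    # The platform, not the developer, decides where the bytes land.
    for store in stores:
        store.put(key, data)

replicate([S3Store(), GCSStore()], "reports/q4.parquet", b"...")
```

In the platform camp, the provider owns everything below the `ObjectStore` line; in Mihai's architecture camp, that interface itself would be the standardized thing.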
Now, interestingly, in the shared Google doc that the working group uses to collaborate on the Supercloud definition, Dr. Nelu Mihai, who is actually building a Supercloud, responded as follows to Bob's assertion: "We need to avoid creating many Supercloud platforms with their own architectures. If we do that, then we create other proprietary clouds on top of existing ones. We need to define an architecture of how Supercloud interfaces with all other clouds. What is the information model? What is the execution model, and how will users interact with Supercloud?" What does this seemingly nuanced point tell us, and why does it matter? Well, history suggests that de facto standards will emerge more quickly to resolve real-world practitioner problems and catch on more quickly than consensus-based and standards-based architectures. But in the long run, the latter may serve customers better. So we'll be exploring this topic in more detail at Supercloud 2, and of course we'd love to hear what you think: platform, architecture, both? Now, one of the real technical gurus that we'll have in studio at Supercloud 2 is David Flynn. He's one of the people behind the movement that enabled enterprise flash adoption, that craze. And he did that with Fusion-io, and he is now working on a system to enable read-write data access for any user, in any application, in any data center or on any cloud, anywhere. So think of this company as a Supercloud enabler. Allow me to share an excerpt from a conversation David Floyer and I had with David Flynn last year. He as well gave a lot of thought to the Supercloud definition and was really helpful with an opinionated point of view. He said something to us that we thought relevant: "What is the operating system for a decentralized cloud? The main two functions of an operating system or an operating environment are, one, the process scheduler and, two, the file system. The strongest argument for Supercloud is made when you go down to the platform layer and talk about it as an operating environment on which you can run all forms of applications." So, a couple of implications here that we'll be exploring with David Flynn in studio. First, we're inferring from his comment that he's in the platform camp, where the platform owner is responsible for the architecture, and there are obviously trade-offs there and benefits, but we'll have to clarify that with him. And second, he's basically saying you kill the concept the further you move up the stack. So the further you move up the stack, the weaker the Supercloud argument becomes, because it's just becoming SaaS. Now, this is something we're going to explore to better understand his thinking on, but also whether the existing notion of SaaS is changing, and whether or not a new breed of Supercloud apps will emerge. Which brings us to this really interesting fellow that George Gilbert and I riffed with ahead of Supercloud 2: Tristan Handy. He's the founder and CEO of DBT Labs, and he has a highly opinionated and technical mind. Here's what he said: "One of the things that we still don't know how to API-ify is concepts that live inside of your data warehouse, inside of your data lake. These are core concepts that the business should be able to create applications around very easily. In fact, that's not the case, because it involves a lot of data engineering pipeline and other work to make these available.
So if you really want to make it easy to create these data experiences for users, you need to have an ability to describe these metrics and then to turn them into APIs, to make them accessible to application developers who have literally no idea how they're calculated behind the scenes, and they don't need to." A lot of implications to this statement that we'll explore at Supercloud 2. Zhamak Dehghani's data mesh comes into play here, with her critique of hyper-specialized data pipeline experts with little or no domain knowledge. Also the need for simplified self-service infrastructure, which Kit Colbert is likely going to touch upon. Veronika Durgin of Saks and her ideal state for data sharing, along with Harveer Singh of Western Union. They've got to deal with 200 locations around the world, and data privacy issues, data sovereignty: how do you share data safely? Same with Nick Taylor of Ionis Pharmaceuticals. And not to blow your mind, but Thomas Hazel and Bob Muglia posit that to make data apps a reality across the Supercloud, you have to rethink everything. You can't just let in-memory databases and caching architectures take care of everything in a brute-force manner. Rather, you have to get down to really detailed levels, even things like how data is laid out on disk, i.e., flash, and think about rewriting applications for the Supercloud and the ML/AI era. All of this and more at Supercloud 2, which wouldn't be complete without some data. So we pinged our friends from ETR, Erik Bradley and Daren Brabham, to see if they had any data on Supercloud that we could tap. And so we're going to be analyzing a number of the players as well at Supercloud 2. Now, many of you are familiar with this graphic here; we show some of the players involved in delivering or enabling Supercloud-like capabilities. On the Y axis is spending momentum, and on the horizontal axis is market presence, or pervasiveness in the data. So Net Score versus what they call Overlap, or N, in the data. And the table insert shows how the dots are plotted. Now, not to steal ETR's thunder, but the first point is you really can't have Supercloud without the hyperscale cloud platforms, which is shown on this graphic. But the exciting aspect of Supercloud is the opportunity to build value on top of that hyperscale infrastructure. Snowflake here continues to show strong spending velocity, as do Databricks, Hashi, Rubrik. VMware Tanzu, which we all put under the magnifying glass after the Broadcom announcements, is also showing momentum. Unfortunately, due to a scheduling conflict we weren't able to get Red Hat on the program, but they're clearly a player here. And we've put Cohesity and Veeam on the chart as well, because backup is a likely use case across clouds and on-premises. And now, one other callout that we drill down on at Supercloud 2 is Cloudflare, which actually uses the term supercloud, maybe in a different way. They look at Supercloud really as, you know, serverless on steroids. And so the data brains at ETR will have more to say on this topic at Supercloud 2, along with many others. Okay, so why should you attend Supercloud 2? What's in it for me, kind of thing? So first of all, if you're a practitioner and you want to understand what the possibilities are for doing cross-cloud services, for monetizing data, how your peers are doing data sharing, how some of your peers are actually building out a Supercloud, you're going to get real-world input from practitioners.
If you're a technologist trying to figure out various ways to solve problems around data, data sharing, and cross-cloud service deployment, there's going to be a number of deep technology experts who are going to share how they're doing it. We're also going to drill down with Walmart into a practical example of Supercloud, with some other examples of how practitioners are dealing with cross-cloud complexity. Some of them, by the way, have kind of thrown up their hands and said, hey, we're going mono-cloud. And we'll talk about the potential implications and dangers and risks of doing that, and also some of the benefits. You know, there's a question, right? Is Supercloud the same wine in a new bottle, or is it truly something different that can drive substantive business value? So look, go to supercloud.world. It's January 17th at 9:00 AM Pacific. You can register for free and participate directly in the program. Okay, that's a wrap. I want to give a shout out to the Supercloud supporters. VMware has been a great partner as our anchor sponsor; ChaosSearch, prosimo, and alkira as well. For contributing to the effort I want to thank Alex Myerson, who's on production and manages the podcast. Ken Schiffman is his supporting cast as well. Kristen Martin and Cheryl Knight help get the word out on social media and in our newsletters. And Rob Hof is our editor-in-chief over at SiliconANGLE. Thank you all. Remember, these episodes are all available as podcasts. Wherever you listen, we really appreciate the support that you've given. We just saw some stats from Buzzsprout: we hit the top 25%, and we're almost at 400,000 downloads for last year. So really appreciate your participation. All you've got to do is search "Breaking Analysis podcast" and you'll find those. I publish each week on wikibon.com and siliconangle.com. Or if you want to get ahold of me, you can email me directly at David.Vellante@siliconangle.com, or DM me @DVellante, or comment on our LinkedIn posts. I want you to check out etr.ai. They've got the best survey data in the enterprise tech business. This is Dave Vellante for theCUBE Insights, powered by ETR. Thanks for watching. We'll see you next week at Supercloud 2, or next time on Breaking Analysis. (light music)
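For readers who want to visualize the kind of chart described above, Net Score (spending momentum) on the Y axis against market presence (Overlap, or N) on the X axis, here is a rough sketch using matplotlib. The values are made up for illustration; they are not ETR's survey data.

```python
import matplotlib.pyplot as plt

# Illustrative values only -- not ETR's actual survey results.
vendors = {
    "AWS":        (620, 0.55),
    "Snowflake":  (240, 0.64),
    "Databricks": (150, 0.60),
    "Cohesity":   (60, 0.25),
}

for name, (presence, net_score) in vendors.items():
    plt.scatter(presence, net_score)
    plt.annotate(name, (presence, net_score))

# Breaking Analysis often cites a 40% Net Score as the "highly elevated" line.
plt.axhline(0.40, linestyle="--")
plt.xlabel("Market presence (N in the survey)")
plt.ylabel("Net Score (spending momentum)")
plt.title("Supercloud players: momentum vs. presence (illustrative)")
plt.show()
```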

Published Date : Jan 14 2023


ML & AI Keynote Analysis | AWS re:Invent 2022


 

>> Hey, welcome back everyone. Day three of AWS re:Invent 2022. I'm John Furrier with Dave Vellante, co-hosts of theCUBE. Ten years for us; "the leader in high tech coverage" is our slogan. Now 10 years of re:Invent. We've been to every single one except the original, which we would've come to if Amazon had actually marketed the event, but they didn't. It's more of a customer event. This is day three. The machine learning and AI keynote is on; Swami's up there. A lot of announcements. We're going to break this down. We've got Andy Thurai here, Vice President and Principal Analyst at Constellation Research. Andy, great to see you. You've been on theCUBE before, one of our analysts, bringing the analysis and commentary to the keynote. This is your wheelhouse: AI. What do you think about Swami up there? I mean, he's awesome. We love him. Big fan. >> Oh yeah, we're fans of him too. But he had 13 announcements. >> A lot. A lot. >> A lot. >> So, well, some of them are, first of all, thanks for having me here, and I'm glad to have both of you on the same show attacking me. I'm just kidding. But some of the announcements really are game-changer announcements, and some of them are like, meh, you know, just plugging the holes in what they have, and a lot of golf claps. And you could notice that when he was making the announcements, you know, by the difference in clapping volume, you could tell which is better, right? But some of the announcements are really, really good. You know, particularly the one we talked about: Microsoft took the lead there by, you know, having OpenAI in there doing the large language models, and then they were going after that, you know, having the transformers available to them. And Amazon was a little bit weak in that area; they don't have a large language model. So, you know, they are taking a different route, saying, you know what, I'll help you train the large language model by yourself, customized models. So I can provide the necessary instances, I can provide the instance volume, the memory, the whole thing. So you can train the model by yourself without depending on them, kind of thing. >> So Dave and Andy, I want to get your thoughts, 'cause first of all, we've been following Amazon's deep bench on the infrastructure side. They've been doing a lot of machine learning and AI, a lot of data. It just seems that the sentiment is that there are other competitors doing a good job too. Like Google, Dave. And I've heard folks in the hallway, even here, ex-Amazonians, saying, hey, they train their models on Google, then they bring up SageMaker 'cause it's a better interface. So you've got Google making a play for being that data cloud. Microsoft's obviously putting together a great kind of package to make it turnkey. How do they really stand versus the competition, guys? >> Good question. So they, you know, each have their own uniqueness and their own variation that they take to the field, right? So for example, if you were to look at it, Microsoft is known for industry-oriented things; they have been going after, you know, industry verticals and whatnot. So that's one of the things I looked at here: you know, they had this Omics announcement, particularly aimed at the healthcare genomics space. That's a huge space for HPC-related AI/ML applications.
And they have put a lot of things together in here, in SageMaker and in their models, saying, you know, how do you use this to do things like that? Like, for example, drug discovery, or genomics analysis, or cancer treatment, the whole thing, right? That's huge volumes of data. So they're going into that healthcare area. Google has taken a different route. I mean, they want to make everything simple: all I have to do is call an API, give what I need, and then get it done. But Amazon wants to go at a much deeper level, saying, you know what, I want to provide everything you need; you can customize the whole thing for what you need. >> So to me, the big picture here is, and Swami referenced this, hey, we are a data company. We started, he talked about books and how that informed them as to, you know, what books to place front and center. Here's the big picture, in my view: companies need to put data at the core of their business, and they haven't. They've generally put humans at the core of their business, with data, and now machine learning, at the outside and the periphery. Amazon, Google, Microsoft, Facebook have put data at their core. So the question is, how do incumbent companies, and you mentioned some, Toyota, Capital One, Bristol Myers Squibb, I don't know, are those data companies? You know, we'll see. But the challenge is most companies don't have the resources, as you well know, Andy, to actually implement what Google and Facebook and others have. >> So how are they going to do that? Well, they're going to buy it, right? So are they going to build it with tools, which is kind of like, as you said, the Amazon approach, or are they going to buy it from Microsoft and Google? I pulled some ETR data to say, okay, who are the top companies that are showing up in terms of spending? Who's spending with whom? AWS number one, Microsoft number two, Google number three, Databricks number four, just in terms of, you know, presence. And then it falls down: DataRobot, Anaconda, Dataiku; Oracle popped up actually, 'cause they're embedding a lot of AI into their products; and of course IBM, and then a lot of smaller companies. But do customers generally have the resources to do what it takes to implement AI into applications and into workflows? >> So a couple of things on that. One is, I mean, it's no surprise that the top three are the hyperscalers, because they all want to bring the business to them, to run those specific workloads on their infrastructure. As he was saying in his keynote, there are two things. One is the AI/ML workloads, and the other one is the heavy unstructured workloads that he was talking about. 80%, 90% of the data that's coming off is unstructured. So how do you analyze that? Such as the geospatial data he was talking about, the volumes of data you need to analyze, the deep neural nets you need to use; only a hyperscaler can do it, right? So it's no wonder all of them are on top for the data. One of the things they announced, which not many people paid attention to, was the zero-ETL capability that they talked about. What that does is a little bit of a game-changing moment, in a sense that, for example, if you want to train on the data and the data is distributed everywhere, you have to bring it all together to integrate it; that's a lot of work before you even get to the deep learning.
>> I think you said it: they're basically filling holes, right?
>> Yeah.
>> They created this, you know, suite of tools, let's call it. You might say it's a mess. It's not a mess, because they're really powerful, but they're not well integrated, and now they're starting to stitch the seams, as I say.
>> Well, yeah, it's a great point. And I would double down and say, look, I think that boring is good. You know, we had that phase in the Kubernetes hype cycle where it got boring, and that was kind of like, boring is good. Boring means we're getting better, we're invisible. That's infrastructure. That's in the weeds, that's in-between-the-toes details. It's the stuff that, you know, people, we have to get done. So, you know, you look at their 40 new data sources for Data Wrangler, 50 new AppFlow connectors, Redshift auto-copy. This is boring. Good, important shit, Dave. The governance, you gotta get it, and the governance is gonna be key. So, to me, this may not jump off the page. Adam's keynote also felt a little bit of, we gotta get these gaps done, in a good way. So I think that's a very positive sign.
Now, going back to the bigger picture, I think the real question is, can there be another independent data cloud? And that's, to me, what I tried to get at in my story, and your Breaking Analysis kind of hit a home run on this: there's an interesting opportunity for an independent data cloud, meaning something that isn't AWS, isn't Google, isn't one of the big three, that could sit in between. And so let me give you an example. I had a conversation last night with a bunch of ex-Amazonian engineering teams that left. The conversation was interesting, Dave. They were, like, talking: well, Databricks and Snowflake are basically batch, okay, not transactional. And you look at Aerospike, I can see their booth here; transactional databases are hot right now. Streaming data is different. Confluent is different than Databricks. Is Databricks good at hosting?
>> No, Amazon's better. So you start to see these kinds of questions come up where, you know, Databricks is great, but maybe not good for this, that, and the other thing. So you start to see the formation of swim lanes, or visibility into where people might sit in the ecosystem. But what came out was transactional.
>> Yep.
>> And batch, the relationship there, and streaming, real time, versus, you know, the transactional data. So you're starting to see these new things emerge. Andy, what's your take on this? You're following this closely. This seems to be the alpha-nerd conversation, and it all points to who's gonna have the best data cloud, say data superclouds, I call it. What's your take?
>> Yes, the data cloud is important as well, but also the computation that goes on top of it, right? Because when the data is unstructured data, and it's that much of a huge data set, it's going to be hard to do that with low compute power, you know. But going back to your data point, the training of the AI/ML models requires the batch data, right? That's when you need all the historical data to train your models. And then after that, when you do inference on it, that's where you need the streaming, real-time data that's available to you, so you can make an inference.
One of the things they also announced, which is somewhat interesting, is you saw that they have like 700 different instances geared towards every single workload. And some of them run very specifically on Amazon's new chips, the Inferentia Inf2 and Trainium Trn1 chips, so they basically not only have specific instances but also run them on high-powered chips. And then, if you have the data to support that, both for the training as well as towards the inference, the efficiency, again, those numbers have to be proven. They claim that it could be anywhere between 40 to 60% faster.
>> Well, so a couple things. You're definitely right. I mean, Snowflake started out as a data warehouse that was simpler, and it's not architected, you know, in its first wave, to do real-time inference, which is not, now how could they? The other, second point is Snowflake's two or three years ahead when it comes to governance, data sharing. I mean, Amazon's doing what it always does. It's copying, you know, it's customer-driven, 'cause they probably walk into an account and they say, hey, look what Snowflake's doing for us. This stuff's kicking ass. And they go, oh, that's a good idea, let's do that too. You saw that with separating compute from storage, which is their tiering. You saw it today with extending data sharing, Redshift data sharing. So how do Snowflake and Databricks approach this? They deal with the ecosystem. They bring in ecosystem partners, they bring in open source tooling, and that's how they compete. I think there's unquestionably an opportunity for a data cloud.
>> Yeah, I think the supercloud conversation, and then, you know, Sky Computing with the Berkeley paper, and other folks talking about this kind of pre-multi-cloud era. I mean, that's what I would call us right now. We're kind of in the pre-era of multi-cloud, which, by the way, is not even yet defined. I think people use that term, Dave, to say, you know, some sort of magical thing that's happening. Yeah, people have multiple clouds. They end up there by default, not by design, as Dell likes to say, right? And they gotta deal with it. So it's more that they're inheriting multiple cloud environments. It's not necessarily the situation they want. So to me, that is a big, big issue.
>> Yeah, I mean, again, going back to your Snowflake and Databricks announcements, they're data companies. So that's how they made their mark in the market, saying, you know, I do all those things, therefore I have to have your data, because it's seamless data. And Amazon is catching up with that with a lot of the announcements they made. How much traction it's gonna get remains to be seen.
>> Yeah, I mean, to me there's no doubt about it, Dave. I think what Swami is doing, if Amazon can corner the market on out-of-the-box ML and AI capabilities so that people can make it easier, that's gonna be the telltale sign at the end of the day: can they fill in the gaps? Again, boring is good. The competition, I don't know, I mean, I'm not following the competition. Andy, this is a real question mark for me. I don't know where they stand. Are they more comprehensive? Are they deeper? Do they have deeper services? I mean, obviously the show's got all the different, you know, capabilities. Where does Amazon stand? What's the process?
>> So, particularly when it comes to the models, they're going at it from a different angle: you know, I will help you create the models. We talked about the zero-ETL and the whole data thing. We'll get the data sources in, we'll create the model, we'll move the whole model. We are talking about the MLOps teams here, right? And they have the whole functionality that they've built over the years. So essentially they want to become the platform: when you come in, I'm the only platform you would use, from model training to deployment to inference to model versioning to management, the MLOps, and that's the angle they're trying to take. So it's a one-source platform.
>> What about this idea of technical debt? Adrian Cockcroft was on yesterday. John, I know you talked to him as well. He said, look, Amazon's Legos. You wanna buy a toy for Christmas, you can go out and buy a toy, or do you wanna build one? If you buy a toy, in a couple years it could break, and what are you gonna do? You're gonna throw it out. But if part of your Lego needs to be extended, you extend it. So, you know, George Gilbert was saying, well, there's a lot of technical debt. Adrian was countering that. Does Amazon have technical debt, or is that Lego-blocks analogy the right one?
>> Well, I talked to him about the debt, and one of the things we talked about was, what do you optimize for, EC2 APIs or Kubernetes APIs? It depends on what team you're on. If you're on the runtime team, you're gonna optimize for Kubernetes, but EC2 is the resources you want to use. So I think the idea of 15 years of technical debt, I don't believe that. I think the APIs are still hardened. The issue that he brings up that I think is relevant is, it's an "and" situation, not an "or." You can have the bag of Legos, which is the primitives, and build a durable application platform, monitor it, customize it, work with it, build it. It's harder, but the outcome is durability and sustainability. Building a toy, having a toy with those Legos glued together for you, you can get to play with it, but it'll break over time. Then you gotta replace it. So there's gonna be a toy business, and there's gonna be a Legos business. Make your own.
>> So who are the toys in AI?
>> Well, out of...
>> The box, and who's out of Legos?
>> So you're asking about what toys Amazon is building?
>> Or, yeah. I mean, Amazon clearly is Lego blocks.
>> And people are gonna have out-of-the-box...
>> What about Google? What about Microsoft? Are they basically more building toys, more solutions?
>> So Google is more of, you know, a building-solutions angle, like, you know, I give you an API kind of thing. But if it comes to vertical industry solutions, Microsoft is ahead, right? Because they have had years of industry experience. I mean, there are other, smaller clouds trying to do that too, IBM being an example. But, you know, now they are starting to go after the specific industry use cases. They think that through. For example, you know, the medical one we talked about, right? So they want to build the health lake, the security lake that they're trying to build, which will be HIPAA-compliant, and it'll support all the European regulations, the whole nine yards, and it'll help you, you know, personalize things as you need as well. For example, if you go for a certain treatment, it could analyze you based on your genome profile, saying that the treatment for this particular person has to be individualized this way. But doing that requires enormous power, right?
So if you do applications like that, you could bring in a lot of them, whether healthcare, finance, or what have you, and then make it easy for them to use.
>> What's the biggest mistake customers make when it comes to machine intelligence, AI, machine learning?
>> So many things, right? I could start out with even the model. Basically, when you build a model, you should be able to figure out how long that model is effective, because as good as creating a model and going to the business and doing things the right way is, there are people that leave the model in place much longer than it's needed. It's hurting your business more than it's helping, you know. It could be things like that. Or you are not building it responsibly, or you have bias in your model. There are so many issues. I don't know if I can pinpoint one, but there are many, many issues. Responsible AI, ethical AI.
>> All right, well, we'll leave it there. You're watching theCUBE, the leader in high tech coverage, here at day three of re:Invent. I'm John Furrier with Dave Vellante, with Andy joining us here for the critical analysis and breaking down the commentary. We'll be right back with more coverage after this short break.
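One footnote on Andy's point about models outliving their usefulness: the staleness question can be checked mechanically. Below is a minimal, generic drift check using a two-sample Kolmogorov-Smirnov test from SciPy on synthetic data; it illustrates the idea only and is not any vendor's monitoring product.

```python
# Compare a feature's live distribution against what the model was trained on
# and flag significant divergence as a retraining signal.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)  # training-time sample
live_feature = rng.normal(loc=0.4, scale=1.2, size=5_000)   # what production sees now

stat, p_value = ks_2samp(train_feature, live_feature)
if p_value < 0.01:
    print(f"Distribution shift detected (KS={stat:.3f}); consider retraining.")
else:
    print("No significant drift; the model is likely still effective.")
```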

Published Date : Nov 30 2022


Rob Emsley, Dell Technologies


 

>> Welcome back to A Blueprint for Trusted Infrastructure. We're here with Rob Emsley, who's the director of product marketing for data protection and cybersecurity. Rob, good to see you in a new role.
>> Yeah, good to be back, Dave. Good to see you. Yeah, it's been a while since we chatted last, and, you know, one of the changes in my world is that I've expanded my responsibilities beyond data protection marketing to also focus on cybersecurity marketing, specifically for our infrastructure solutions group. So certainly that's, you know, something that really has driven us to come and have this conversation with you today.
>> So data protection obviously has become an increasingly important component of the cybersecurity space. I don't think necessarily of, you know, traditional backup and recovery as security. To me, it's an adjacency. I know some companies have said, oh yeah, now we're a security company. They're kind of chasing the valuation bubble, for sure. Dell's interesting, because you have, you know, data protection in the form of backup and recovery and data management, but you also have security, you know, direct security capabilities. So you're sort of bringing those two worlds together, and it sounds like your responsibility is to connect those dots. Is that right?
>> Absolutely. Yeah. I mean, I think the reality is that security is a multi-layer discipline. I think the days of thinking that it's one or another technology that you can use, or process that you can use, to make your organization secure are long gone. I mean, certainly, you're actually correct: if you think about the backup and recovery space, people have been doing that for years. You know, certainly backup and recovery, it's all about the recovery. It's all about getting yourself back up and running when bad things happen. And one of the realities, unfortunately, today is that one of the worst things that can happen is cyber attacks. You know, ransomware, malware are all things that are top of mind for all organizations today. And that's why you see a lot of technology and a lot of innovation going into the backup and recovery space, because if you have a copy, a good copy of your data, then that is really the first place you go to recover from a cyber attack.
And that's why it's so important. The reality is that, unfortunately, the cyber criminals keep on getting smarter. I don't know how it happens, but one of the things that is happening is that the days of them just going after your production data are no longer the only challenge that you have. They go after your backup data as well. So over the last half a decade, Dell Technologies, with its backup and recovery portfolio, has introduced the concept of isolated cyber recovery vaults. And that is really the, you know, we've had many conversations about that over the years, and that's really a big tenet of what we do in the data protection portfolio.
>> So this idea of cybersecurity resilience, that definition is evolving. What does it mean to you?
>> Yeah, I think the analyst team over at Gartner, they wrote a very insightful paper called "You Will Be Hacked, Embrace the Breach." And the whole basis of this analysis is that so much money's been spent on prevention, and what's out of balance is the amount of budget that companies have spent on cyber resilience. And cyber resilience is based upon the premise that you will be hacked.
You have to embrace that fact and be ready and prepared to bring yourself back into business. You know, and that's really where cyber resiliency is very, very different than cybersecurity and prevention. And I think that balance of, get your security disciplines well funded, get your defenses as good as you can get them, but make sure that if the inevitable happens and you find yourself compromised, you have a great recovery plan. And certainly a great recovery plan is really the basis of any good, solid data protection, backup and recovery philosophy.
>> So if I had to do a SWOT analysis, we don't have to do the W-O-T, but let's focus on the S. What would you say are Dell's strengths in this, you know, cybersecurity space, as it relates to data protection?
>> One is, we've been doing it a long time. You know, we talk a lot about Dell's data protection being proven and modern. You know, certainly the experience that we've had, over literally three decades, of providing enterprise-scale data protection solutions to our customers has really allowed us to have a lot of insight into what works and what doesn't. As I mentioned to you, one of the unique differentiators of our solution is the cyber recovery vaulting solution that we introduced a little over five years ago, five, six years: PowerProtect Cyber Recovery is something which has become a unique capability for customers to adopt, on top of their investment in Dell Technologies data protection. You know, the unique elements of our solution are really threefold, and we call them the three I's: it's isolation, it's immutability, and it's intelligence. And the isolation part is really so important, because you need to reduce the attack surface of your good, known copies of data.
You know, you need to put them in a location that the bad actors can't get to. And that really is the essence of a cyber recovery vault. Interestingly enough, you're starting to see the market throw out that word, you know, from many other places, but really it comes down to having a real discipline: you don't allow the security of your cyber recovery vault to be compromised, insofar as allowing it to be controlled from outside of the vault, you know, allowing it to be controlled by your backup application. Our cyber recovery vaulting technology is independent of the backup infrastructure. It uses it, but it controls its own security. And that is so, so important. It's like having a vault where the only way to open it is from the inside. You know, and think about that: if you think about vaults in banks, or vaults in your home, normally you have a keypad on the outside. Think of our cyber recovery vault as having its security controlled from inside of the vault.
>> So nobody can get in, nothing can get in, unless it's already in. And if it's already in, then it's trusted.
>> Exactly. Yeah, exactly.
>> Yeah. So isolation's the key. And then you mentioned immutability as the second piece.
So immutability is just the, the, the additional technology that allows the data that's inside of the vault to be unchangeable, you know, but again, that immutability, you know, your mileage varies, you know, when you look across the, the different offers that are out there in the market, especially in the backup industry, you make a very valid point earlier that the backup vendors in the market seem to be security, washing their marketing messages. I mean, everybody is leaning into the ever present danger of cyber security, not a bad thing, but the reality is is that you have to have the technology to back it up, you know, quite literally, >>Yeah, yeah, no pun intended. Right. And then actually pun intended. Now what about the intelligence piece of it? That's that's AI ML, where does that fit >>For sure. So the intelligence piece is delivered by a solution called cyber sense. And cyber sense for us is what really gives you the confidence that what you have in your cyber recovery volt is a good clean copy of data. So it's looking at the backup copies that get driven into the cyber volt, and it's looking for anomalies. So it's not looking for signatures of malware. You know, that's what your antivirus software does. That's what your endpoint protection software does. That's on the prevention side of the equation. But what we're looking for is we're looking to ensure that the data that you need when all hell breaks loose is good, and that when you get a request to restore and recover your business, you go right, let's go and do it. And you don't have any concern that what you have in the vault has been compromised. >>So cyber sense is really a, a unique analytics solution in the market, based upon the fact that it, it, isn't looking at at cursory indicators of, of, of, of, of malware infection or, or, or ransomware introduction it's doing full content analytics, you know, looking at, you know, has the data in any way changed, has it suddenly become encrypted? Has it suddenly become different to how it was in the previous scan? So that anomaly detection is very, very different. It's looking for, you know, like different characteristics that really are an indicator that something is going on. And of course, if it sees it, you immediately get flagged. But the good news is, is that you always have in the vault, the previous copy of good known data, which now becomes your restore point. >>So we're talking to Rob Emsley about how data protection fits into what Dell calls DT, I, Dell trusted infrastructure. And, and I'm, I want to come back Rob to this notion of, and, or cuz I think a lot of people are skeptical. Like how can I have great security and not introduce friction into my organization? Is that an automation play? How, how does Dell tackle that problem? >>I mean, I think a lot of it is across our infrastructure is, is security has to be built in, I mean, intrinsic security within our servers, within our storage devices, within our elements of our backup infrastructure. I mean, security, multifactor authentication, you know, elements that make the overall infrastructure secure. You know, we have capabilities that, you know, allow us to identify whether or not configurations have changed. You know, we'll probably be talking about that a little bit more to you later in the segment, but the, the essence is, is security is not, not a Bolton. It has to be part of the overall infrastructure. 
>> Give us the bottom line on how you see Dell's key differentiators. Dell, of course, always talks about its portfolio, but why should customers, you know, lean in to Dell in this whole cyber resilience space?
>> You know, staying on the data protection space, as I mentioned, the work we've been doing to introduce this cyber resiliency solution for data protection is, in our opinion, as good as it gets. You know, you've spoken to a number of our best customers, whether it be Bob Bender from Founders Federal, or, more recently at Dell Technologies World, you spoke to Tony Bryson from the Town of Gilbert. And these are customers that we've had for many years that have implemented cyber recovery vaults, and at the end of the day, they can now sleep at night. You know, that's really the peace of mind that they have: the insurance that Dell data protection with a cyber recovery vault, a PowerProtect Cyber Recovery solution, gives them really allows them to just have the assurance that they don't have to pay a ransom if they have an insider threat issue. And, you know, all the way down to data deletion, they know that what's in the cyber recovery vault is good and ready for them to recover from.
>> Great. Well, Rob, congratulations on the new scope of responsibility. I like how, you know, your organization is expanding as the threat surface is expanding. As we said, data protection is becoming an adjacency to security, not security in and of itself: a key component of a comprehensive security strategy. Rob Emsley, thank you for coming back on theCUBE. Good to see you again.
>> You too, Dave. Thanks.
>> All right. In a moment, I'll be back to wrap up A Blueprint for Trusted Infrastructure. You're watching theCUBE.

Published Date : Sep 20 2022


Dell: A Blueprint for Trusted Infrastructure


 

>> The cybersecurity landscape has changed dramatically over the past 24 to 36 months. Rapid cloud migration has created a new layer of security defense, sure, but that doesn't mean CSOs can relax. In many respects, it further complicates, or at least changes, the CISO's scope of responsibilities. In particular, the threat surface has expanded, and that creates more seams, and CISOs have to make sure their teams pick up where the hyperscaler clouds leave off. Application developers have become a critical execution point for cyber assurance. "Shift left" is the kind of new buzz phrase for devs, but organizations still have to shield right, meaning the operational teams must continue to partner with SecOps to make sure infrastructure is resilient. So it's no wonder that in ETR's latest survey of nearly 1,500 CIOs and IT buyers, business technology executives cite security as their number one priority, well ahead of other critical technology initiatives, including collaboration software, cloud computing, and analytics, rounding out the top four. But budgets are under pressure, and CSOs have to prioritize. It's not like they have an open checkbook. They have to contend with other key initiatives, like those just mentioned, to secure the funding. And what about zero trust? Can you go out and buy zero trust, or is it a framework, a mindset, a series of best practices applied to create a security consciousness throughout the organization? Can you implement zero trust, in other words, if a machine or human is not explicitly allowed access, then access is denied? Can you implement that policy without constricting organizational agility? The question is, what's the most practical way to apply that premise, and what role does infrastructure play as the enforcer? How does automation play in the equation? The fact is that today's approach to cyber resilience can't be an either-or. It has to be an "and" conversation, meaning you have to ensure data protection while at the same time advancing the mission of the organization with as little friction as possible. And don't even talk to me about the edge. That's really going to keep you up at night.
Hello, and welcome to this special CUBE presentation, A Blueprint for Trusted Infrastructure, made possible by Dell Technologies. In this program we explore the critical role that trusted infrastructure plays in cybersecurity strategies, how organizations should think about the infrastructure side of the cybersecurity equation, and how Dell specifically approaches securing infrastructure for your business. We'll dig into what it means to transform and evolve toward a modern security infrastructure that's both trusted and agile. First up are Pete Gerr and Steve Kenniston. They're both senior cybersecurity consultants at Dell Technologies, and they're going to talk about the company's philosophy and approach to trusted infrastructure. Then we're going to speak to Parasar Kodati, who's a senior consultant for storage at Dell Technologies, to understand where and how storage plays in this trusted infrastructure world. And then, finally, Rob Emsley, who heads product marketing for data protection and cybersecurity; we'll take a deeper dive with Rob into data protection and explain how it has become a critical component of a comprehensive cybersecurity strategy. Okay, let's get started. Pete Gerr, Steve Kenniston, welcome to theCUBE. Thanks for coming into the Marlboro studios today.
>> Great to be here, Dave. Thanks, Dave. Good to see you.
>> Great to see you guys. Pete, start by talking about the security landscape. You heard my little rap up front. What are you seeing?
>> I thought you wrapped it up really well, and you touched on all the key points, right? Technology is ubiquitous today. It's everywhere. It's no longer confined to a monolithic data center. It lives at the edge, it lives in front of us, it lives in our pockets and smartphones. Along with that is data, and as you said, organizations are managing sometimes 10 to 20 times the amount of data that they were just five years ago. And along with that, cybercrime has become a very profitable enterprise. In fact, it's been more than 10 years since the NSA chief actually called cybercrime the biggest transfer of wealth in history. That was 10 years ago, and we've seen nothing but accelerating cybercrime, and real sophistication in how those attacks are perpetrated. And so the new security landscape is really more of an evolution. We're finally seeing security catch up with all of the technology adoption, all the build-out, the work from home and work from anywhere that we've seen over the last couple of years. We're finally seeing organizations, and really it goes beyond the IT directors, it's a board-level discussion today. Security's become a board-level discussion.
>> Yeah, I think that's true as well. It's like, it used to be, security was, okay, the SecOps team, you're responsible for security. Now you've got the developers involved, the business lines involved. It's part of onboarding for most companies. You know, Steve, this concept of zero trust, it was kind of a buzzword before the pandemic, and I feel like, I've often said, it's now become a mandate. But it's still fuzzy to a lot of people. How do you guys think about zero trust? What does it mean to you? How does it fit?
>> Yeah, I thought, again, your opening was fantastic in this whole lead-in to what is zero trust. It had been a buzzword for a long time, and now, ever since the federal government came out with their implementation, or desire to drive zero trust, a lot more people are taking it a lot more seriously, because I don't think they've seen the government do this before. But ultimately, it's just like you said, right? If you don't have trust in those particular devices, applications, or data, you can't get at it. The question is, and you phrased it perfectly, can you implement that, as well as allow the business to be as agile as it needs to be in order to be competitive? Because we're seeing, with your whole notion around DevOps and the ability to kind of build, make, deploy, build, make, deploy, right, they still need that functionality, but it also needs to be trusted. It needs to be secure, and things can't get away from you.
>> Yeah, so it's interesting. We've attended every Re:Inforce since 2019, and the narrative there is, hey, everything in the cloud is great, and this narrative around, oh, security is a big problem, you know, doesn't help the industry. The fact is that the big hyperscalers, they're not strapped for talent, but CSOs are. They don't have the capabilities to really apply all these best practices. They're playing whack-a-mole. So they look to companies like yours to take your R&D and bake it into security products and solutions. So what are the critical aspects of the so-called Dell trusted infrastructure that we should be thinking about?
>> Yeah, well, Dell trusted infrastructure, for us, is a way to describe the work that we do through design, development, and even delivery of our IT systems.
So Dell trusted infrastructure includes our storage, our servers, our networking, our data protection, our hyperconverged, everything that infrastructure always has been. It's just that today, customers consume that infrastructure at the edge, as a service, in a multi-cloud environment. I mean, I view the cloud as really a way for organizations to become more agile and more flexible, and also to control costs. I don't think organizations move to the cloud, or move to a multi-cloud environment, to enhance security. So I don't see cloud computing as a panacea for security. I see it as another attack surface, and another aspect out front that organizations, and security organizations and departments, have to manage. It's part of their infrastructure today, whether it's in their data center, in a cloud, or at the edge.
>> I mean, I think it's a huge point, because a lot of people think, oh, data's in the cloud, I'm good. It's like, Steve, we've talked about, oh, why do I have to back up my data? It's in the cloud. Well, you might have to recover it someday. So I don't know if you have anything to add to that, or any additional thoughts on it.
>> No, I mean, I think, like what Pete was saying, when it comes to all these new vectors for attack surfaces, you know, people did choose the cloud in order to be more agile, more flexible, and all that did was open up to the CSOs what they need to pay attention to now: okay, where can I possibly be attacked? I need to be thinking about, is that secure? And part of that is what Dell now also understands and thinks about as we're building solutions: is it a trusted development lifecycle? So we have our own trusted development lifecycle. How many times in the past did you used to hear about vendors saying, you've got to patch your software because of this? We think about what changes to our software, and what implementations and enhancements we deliver, can actually cause from a security perspective, and make sure we don't give up, or have security become a hole, just in order to implement a feature. We've got to think about those things. Yeah. And as Pete alluded to, our secure supply chain. So, all the way through, knowing that what you're going to get, when you actually receive it, is going to be secure and not tampered with, becomes vitally important. And Pete and I were talking earlier: when you have tens of thousands of devices that need to be delivered, whether it be storage or laptops or PCs or whatever it is, you want to know that those devices can be trusted.
>> Okay, guys, maybe, Pete, you could talk about how Dell thinks about its framework and its philosophy of cybersecurity, and then specifically what Dell's advantages are relative to the competition.
>> Yeah, definitely, Dave, thank you. So we've talked a lot about Dell as a technology provider, but one thing Dell also is, is a partner in this larger ecosystem. We realize that security, whether it's a zero trust paradigm or any other kind of security environment, is an ecosystem, with a lot of different vendors. So we look at three areas. One is protecting data and systems. We know that it starts with and ends with data. That helps organizations combat threats across their entire infrastructure, and what it means is Dell's embedding security features consistently across our portfolios of storage, servers, networking. The second is enhancing cyber resiliency. Over the last decade, a lot of the funding and spending has been in protecting, or trying to prevent, cyber threats, not necessarily in responding to and recovering from threats, right? We call that resiliency.
Organizations need to build resiliency across their organization, so not only can they withstand a threat, but they can respond, recover, and continue with their operations. And the third is overcoming security complexity. Security is hard. It's more difficult because of the things we've talked about: distributed data, distributed technology, and attack surfaces everywhere. And so we're enabling organizations to scale confidently, to continue their business, but know that all the IT decisions that they're making have these intrinsic security features, and are built and delivered in a consistent, secure way.
>> So those are kind of the three pillars. Maybe we could end on what you guys see as the key differentiators that people should know about, that Dell brings to the table. Maybe each of you could take a shot at that.
>> Yeah, I think, first of all, from a holistic portfolio perspective, right, the secure supply chain and the secure development lifecycle permeate through everything Dell does when building things. So we build things with security in mind, all the way, as Pete mentioned, from creation to delivery. We want to make sure you have that secure device or asset. That permeates everything, from servers, networking, storage, data protection, through hyperconverged, through everything. That, to me, is really a key asset, because that means you understand, when you receive something, it's a trusted piece of your infrastructure. I think the other core component to think about, and Pete mentioned it, Dell being a partner for making sure you can deliver these things, is that even though these pillars are our framework of how we want to deliver security, it's also important to understand that we are partners, and that you don't need to rip and replace. But as you start to put in new components, you can be assured that the components that you're replacing, as you're evolving, as you're growing, as you're moving to the cloud, as you're moving to more on-prem-type services or whatever, your environment is secure. I think those are two key things.
>> Got it. Okay, Pete, bring us home.
>> Yeah, I think one of the big advantages of Dell is our scope and our scale, right? We're a large technology vendor that's been around for decades, and we develop and sell almost every piece of technology. We also know that organizations might make different decisions, and so we have a large services organization, with a lot of experienced services people, that can help customers along their security journey, depending on whatever type of infrastructure or solutions they're looking at. The other thing we do is make it very easy to consume our technology, whether that's traditional on-premise, in a multi-cloud environment, or as a service. And so the best-of-breed technology can be consumed in any variety of fashion, and you know that you're getting that consistent, secure infrastructure that Dell provides.
>> Well, and Dell's got probably the top supply chain, not only in the tech business but probably any business, and so you can actually take your own dog food, or drink your own champagne, sorry, and allow other people to, you know, share best practices with your customers. All right, guys, thanks so much for coming on.
>> Thank you.
>> Appreciate it.
>> Okay, keep it right there. After this short break, we'll be back to drill into the storage domain. You're watching A Blueprint for Trusted Infrastructure on theCUBE, the leader in enterprise and emerging tech coverage. Be right back.
Concern over cyber attacks is now the norm for organizations of all sizes. The impact of these attacks can be operationally crippling, expensive, and have long-term ramifications. Organizations have accepted the reality of "not if, but when," from boardrooms to IT departments, and are now moving to increase their cybersecurity preparedness. They know that security transformation is foundational to digital transformation, and while no one can do it alone, Dell Technologies can help you fortify with modern security. Modern security is built on three pillars. Protect your data and systems by modernizing your security approach, with intrinsic features in hardware and processes from a provider with a holistic presence across the entire IT ecosystem. Enhance your cyber resiliency by understanding your current level of resiliency for defending your data, and preparing for business continuity and availability in the face of attacks. Overcome security complexity by simplifying and automating your security operations, to enable scale, insights, and extended resources through service partnerships. With advanced capabilities that intelligently scale, a holistic presence throughout IT, and decades as a leading global technology provider, we'll stop at nothing to help keep you secure.
>> Okay, we're back, digging into trusted infrastructure with Parasar Kodati. He's a senior consultant for product marketing and storage at Dell Technologies. Parasar, welcome to theCUBE. Good to see you.
>> Great to be with you, Dave. Yeah, coming from Hyderabad.
>> Awesome. So I really appreciate you coming on the program. Let's start with talking about your point of view on what cybersecurity resilience means to Dell generally, but storage specifically.
>> Yeah. So for something like storage, you know, we are talking about the data layer, right? And if you look at cybersecurity, it's all about securing your data, applications, and infrastructure. It has been a very mature field at the network and application layers, and there are a lot of great technologies, right from, you know, enabling zero trust, advanced authentication, identity management systems, and so on. And in fact, you know, with the advent of the use of artificial intelligence and machine learning, these detection tools for cybersecurity have really evolved in the network and the application spaces. So for storage, what it means is, how can you bring them to the data layer, right? How can you bring the principles of zero trust to the data layer? How can you leverage artificial intelligence and machine learning to look at, you know, access patterns and make intelligent decisions about maybe an indicator of compromise, and identify them ahead of time, just like, you know, how it's happening in other areas of applications? And when it comes to cyber resilience, it's basically a strategy which assumes that a threat is imminent, and it's a good assumption, given the severity and the frequency of the attacks that are happening. And the question is, how do we fortify the storage infrastructure to withstand those attacks, and have a plan, a response plan, where we can recover the data and make sure the business continuity is not affected? So that's really cybersecurity and cyber resiliency at the storage layer. And of course, there are technologies like, you know, network isolation, immutability, and all these principles need to be applied at the storage level as well.
>> Let me have a follow-up on that, if I may. The intelligence that you talked about, that AI and machine learning, do you build that into the infrastructure, or is that sort of a separate software module that points at various, you know, infrastructure components? How does that work?
>> Both, Dave. Right at the data storage level, we have, with various data characteristics depending on the nature of the data, developed a lot of signals to see what could be a good indicator of compromise. And there are also additional applications, like CloudIQ is the best example, which is like an infrastructure-wide health monitoring system for Dell infrastructure, and now we have elevated that to include cybersecurity as well. So these signals are being gathered at the CloudIQ level, and in other applications as well, so that we can make those decisions about compromise, and we can cascade that intelligence and alert stream upstream for security teams, so that they can take actions in platforms like SIEM systems, XDR systems, and so on. But when it comes to which layer the intelligence is at, it has to be at every layer where it makes sense, where we have the information to make a decision. And being closest to the data, we are basically monitoring the various parameters: data access, who is accessing, are they crossing any geo-fencing, is there any mass deletion that is happening, or a mass encryption that is happening? And we are able to detect those patterns and flag them as indicators of compromise, allowing automated response, manual control, and so on, for IT teams.
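The mass-deletion signal Parasar mentions can be sketched generically, as shown below: compare the delete count in the current interval to a trailing baseline and flag a spike. This is an illustration of the idea only, not CloudIQ's actual logic or thresholds.

```python
# Toy indicator-of-compromise detector for mass deletion.
from collections import deque
from statistics import mean, stdev

class MassDeleteDetector:
    def __init__(self, window: int = 24, z_threshold: float = 4.0):
        self.history = deque(maxlen=window)  # deletes per interval, trailing
        self.z_threshold = z_threshold

    def observe(self, deletes: int) -> bool:
        """Return True if this interval looks like a mass-deletion event."""
        alarm = False
        if len(self.history) >= 8:  # need a baseline before judging
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and (deletes - mu) / sigma > self.z_threshold:
                alarm = True
        self.history.append(deletes)
        return alarm

detector = MassDeleteDetector()
for count in [3, 5, 2, 4, 6, 3, 5, 4, 900]:  # final interval is a suspicious spike
    if detector.observe(count):
        print(f"Indicator of compromise: {count} deletions in one interval")
```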
>> Yeah, thank you for that explanation. So at Dell Technologies World, we were there in May, it was one of the first, you know, live shows that we did in the spring, certainly one of the largest, and I interviewed Shannon Champion, and a huge takeaway from the storage side was the degree to which you guys emphasized security within the operating systems. I mean, really, PowerMax, more than half, I think, of the features were security related, but also the rest of the portfolio. So can you talk about the security aspects of the Dell storage portfolio specifically?
>> Yeah. So when it comes to data security, and broadly data availability, right, in the context of cyber resiliency, these elements have been at the core, a core strength, for the Dell storage portfolio, and a source of differentiation, you know, with almost decades of collective experience of building highly resilient architectures for mission-critical data, something like the PowerMax system, which is the most secure storage platform for high-end enterprises. And now, with the increased focus on cybersecurity, we are extending those core technologies of high availability and adding modern detection systems, modern data isolation techniques, to offer a comprehensive solution to the customer, so that they don't have to piece together multiple things to ensure data security or data resiliency; rather, a well-designed and well-architected solution, secure by design, is delivered to them to ensure cyber protection at the data layer.
>> Got it. You know, we were talking earlier to Steve Kenniston and Pete Gerr about this notion of Dell trusted infrastructure. How does storage fit into that, as a component of that sort of overall theme? And, you know, let me add this, if you could address it, because a lot of people might be skeptical that I can actually have security and at the same time not constrict my organizational agility. That's, you know, not an "or," it's an "and." How do you actually do that? If you could address both of those, that would be great.
>> Definitely. So for Dell trusted infrastructure, cyber resiliency is a key component of that. And just as I mentioned, you know, air-gap isolation, it really started with PowerProtect Cyber Recovery. That was the solution we launched more than three years ago, and that was a first in the industry, which paved the way to data isolation being a core element of data management and data infrastructure. And since then, we have implemented these technologies within different storage platforms as well, so that customers have the flexibility: depending on their data landscape, they can do the right data isolation architecture, either natively from the storage platform, or consolidate things into the backup platform and isolate from there. And the other key thing we focus on in Dell trusted infrastructure is the goal of simplifying security for the customers. So one good example here is, being able to respond to these cyber threats or indicators of compromise is one thing, but an IT security team may not be looking at the dashboard of the storage systems constantly, right? Storage admins may be looking at it. So how can we build this intelligence and provide it to upstream platforms, so that they have a single pane of glass to understand the security landscape across applications, across networks and firewalls, as well as storage infrastructure and compute infrastructure? So that's one of the key ways in which we are helping simplify the ability to detect and respond to these threats in real time for security teams. And you mentioned, you know, zero trust, and how it's a balance of not restricting users or putting a heavy burden on them with multi-factor authentication and so on. This really starts with what we're doing: providing all the tools. When it comes to advanced authentication, supporting external identity management systems, multi-factor authentication, encryption, all these things are intrinsically built into these platforms now. The question is, for the customers, one of the key steps is to identify what are the most critical parts of their business, or what are the applications that the most critical business operations depend on, and similarly identify mission-critical data, as part of your response plan, where it cannot be compromised, where you need to have a way to recover. Once you do this identification, then the level of security can really be determined by the security teams, by the infrastructure teams. And, you know, another piece of intelligence that gives a lot of flexibility, even for developers, is that today we have APIs, so you can not only track these alerts at the data infrastructure level, but you can use our APIs to take concrete actions, like blocking a certain user or increasing the level of authentication, based on the threat level that has been perceived at the application layer or at the network layer. So there is a lot of flexibility that is built into this by design, because depending on the criticality of the data, the criticality of the application, the number of users affected, these decisions have to be made from time to time. And as you mentioned, it's a balance, right? And sometimes, you know, if an organization had a recent attack, the level of awareness is very high against cyber attacks, so for a time these settings may be a bit difficult to deal with, but then it's a decision that has to be made by security teams as well.
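To show the shape of the API-driven response Parasar describes, blocking a user or stepping up authentication when a threat is perceived, here is a hypothetical sketch. The host, endpoints, and payloads are invented placeholders, not a documented Dell API.

```python
# Hypothetical automated response driven by an upstream SIEM/XDR alert.
import requests

STORAGE_API = "https://storage-mgmt.example.local/api/v1"  # placeholder host
HEADERS = {"Authorization": "Bearer <service-account-token>"}

def respond_to_threat(user: str, threat_level: str) -> None:
    if threat_level == "critical":
        # Cut off the suspicious account entirely.
        requests.post(f"{STORAGE_API}/users/{user}/block",
                      headers=HEADERS, timeout=10).raise_for_status()
    elif threat_level == "elevated":
        # Keep the user working, but force multi-factor re-authentication.
        requests.post(f"{STORAGE_API}/users/{user}/require-mfa",
                      headers=HEADERS, timeout=10).raise_for_status()

# A playbook rule in the security platform would call this on a flagged alert.
respond_to_threat("svc-backup-01", "elevated")
```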
>> Got it. So you're surfacing what may be hidden KPIs that are being buried inside, for instance, the storage system, through APIs, upstream into a dashboard, so that somebody could, you know, dig into the storage tunnel, extract that data, and then somehow populate that dashboard. You're automating that workflow. That's a great example, and you may have others, but is that the correct understanding?
>> Absolutely. And it's a two-way integration. Let's say an attack has been detected at a completely different layer, right, in the application layer or at a firewall. We can respond to those as well. So it's a two-way integration: we can cascade things up, as well as respond to threats that have been detected elsewhere, through the API.
>> That's great.
>> The API for PowerScale is the best example for that.
>> Excellent. So thank you, I appreciate that. Give us the last word. Put a bow on this and bring this segment home, please.
>> Absolutely. So the Dell storage portfolio, using advanced data isolation with air gap, having machine learning-based algorithms to detect indicators of compromise, and having rigorous mechanisms with granular snapshots, being able to recover data and restore applications to maintain business continuity, is what we deliver to customers. And these are areas where a lot of innovation is happening, a lot of product focus, as well as, you know, if you look at the professional services, all the way from engineering to professional services, the way we build these systems, the way we configure and architect these systems, cybersecurity and protection is a key focus for all these activities. And dell.com/security is where you can learn a lot about these initiatives.
>> That's great. Thank you. You know, at the recent Re:Inforce event in Boston, we heard a lot from AWS about, you know, detection and response, and DevOps, and machine learning, and some really cool stuff. We heard a little bit about ransomware, but I'm glad you brought up air gaps, because we heard virtually nothing in the keynotes about air gaps. That's an example of where, you know, the CSO has to pick up from where the cloud leaves off, but that was front and center here, so, number one. And number two, we didn't hear a ton about how the cloud is making the life of the CSO simpler, and that's really my takeaway: that is, in part anyway, your job and the job of companies like Dell. So, Parasar, I really appreciate the insights. Thank you for coming on theCUBE.
>> Thank you very much, Dave. It's always great to be in these conversations.
>> All right, keep it right there. We'll be right back with Rob Emsley to talk about data protection strategies and what's in the Dell portfolio. You're watching theCUBE.
Data is the currency of the global economy. It has value to your organization, and to cybercriminals. In the age of ransomware attacks, companies need secure and resilient IT infrastructure to safeguard their data from aggressive cyber attacks. [Music] As part of the Dell Technologies infrastructure portfolio, PowerStore and PowerMax combine storage innovation with advanced security that adheres to stringent government regulations and corporate compliance requirements. Security starts with multi-factor authentication, enabling only authorized admins to access your system using assigned roles. Tamper-proof audit logs track system usage and changes, so IT admins can identify suspicious activity and act.
Data is the currency of the global economy. It has value to your organization — and to cyber criminals. In the age of ransomware attacks, companies need secure and resilient IT infrastructure to safeguard their data from aggressive cyber attacks. [Music] As part of the Dell Technologies infrastructure portfolio, PowerStore and PowerMax combine storage innovation with advanced security that adheres to stringent government regulations and corporate compliance requirements. Security starts with multi-factor authentication, enabling only authorized admins to access your system using assigned roles. Tamper-proof audit logs track system usage and changes, so IT admins can identify suspicious activity and act. With snapshot policies, you can quickly automate the protection and recovery process for your data. PowerMax secure snapshots cannot be deleted by any user prior to the retention time expiration. Dell Technologies also makes sure your data at rest stays safe: with PowerStore and PowerMax, data encryption protects your flash drive media from unauthorized access if it's removed from the data center, while adhering to stringent FIPS 140-2 security requirements. CloudIQ brings together predictive analytics, anomaly detection, and machine learning with proactive, policy-based security assessments, monitoring, and alerting. The result: intelligent insights that help you maintain the security health status of your storage environment. And if a security breach does occur, PowerProtect Cyber Recovery isolates critical data, identifies suspicious activity, and accelerates data recovery. Using the automated data copy feature, unchangeable data is duplicated in a secure digital vault; then an operational air gap isolates the vault from the production and backup environments. [Music] Architected with security in mind, Dell EMC PowerStore and PowerMax provide storage innovation, so your data is always available and always secure, wherever and whenever you need it. [Music]
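As a rough conceptual model of the retention behavior the promo describes — a secure snapshot that no user can delete before its retention time expires — consider the following sketch. It is only an illustration of the guarantee, not how any particular array implements it.

```python
# A toy model of retention-locked snapshots: once created with a
# retention period, deletion is refused for every caller (including
# administrators) until that period has passed. Conceptual sketch only.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass(frozen=True)
class SecureSnapshot:
    name: str
    created: datetime
    retention: timedelta

    @property
    def expires(self) -> datetime:
        return self.created + self.retention

class SnapshotStore:
    def __init__(self) -> None:
        self._snapshots: dict[str, SecureSnapshot] = {}

    def create(self, name: str, retention_days: int) -> SecureSnapshot:
        snap = SecureSnapshot(name, datetime.now(timezone.utc),
                              timedelta(days=retention_days))
        self._snapshots[name] = snap
        return snap

    def delete(self, name: str) -> None:
        snap = self._snapshots[name]
        if datetime.now(timezone.utc) < snap.expires:
            # The immutability guarantee: no caller can remove the
            # copy before the retention time expires.
            raise PermissionError(f"{name} is retention-locked until "
                                  f"{snap.expires.isoformat()}")
        del self._snapshots[name]
```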
Every day, it seems, there's a new headline about the devastating financial impacts, or the trust that's lost, due to ransomware or other sophisticated cyber attacks. But with our help, Dell Technologies customers are taking action by becoming more cyber resilient and deterring attacks, so they can greet students daily with a smile. They're ensuring that a range of essential government services remain available 24/7 to citizens wherever they're needed — from swiftly dispatching public safety personnel, or sending an inspector to sign off on a homeowner's dream, to protecting, restoring, and sustaining our precious natural resources for future generations. With ever-changing cyber attacks targeting organizations in every industry, our cyber resiliency solutions are right on the money, providing the security and controls you need. We help customers protect and isolate critical data from ransomware and other cyber threats, delivering the highest data integrity to keep your doors open, and ensuring that hospitals and healthcare providers have access to the data they need, so patients get life-saving treatment without fail. If a cyber incident does occur, our intelligence, analytics, and responsive team are in a class by themselves, helping you reliably recover your data and applications so you can quickly get your organization back up and running.
With Dell Technologies behind you, you can stay ahead of cybercrime, safeguarding your business and your customers' vital information. Learn more about how Dell Technologies' cyber resiliency solutions can provide true peace of mind for you. >> The adversary is highly capable, motivated, and well equipped, and is not standing still. Your job is to partner with technology vendors and increase the cost to the bad guys of getting to your data, so that their ROI is reduced and they go elsewhere. The growing issues around cyber security will continue to drive forward thinking in cyber resilience. We heard today that it is actually possible to achieve infrastructure security while at the same time minimizing friction, to enable organizations to move quickly in their digital transformations. A zero trust framework must include vendor R&D and innovation that builds security in — designs it into infrastructure products and services from the start, not as a bolt-on but as a fundamental ingredient of the cloud, hybrid cloud, private cloud to edge operational model. The bottom line is: if you can't trust your infrastructure, your security posture is weakened. Remember, this program is available on demand in its entirety at thecube.net, and the individual interviews are also available. And you can go to Dell's security solutions landing page for more information: go to dell.com/security. That's dell.com/security. This is Dave Vellante for theCUBE. Thanks for watching A Blueprint for Trusted Infrastructure, made possible by Dell. We'll see you next time.

Published Date : Sep 20 2022

SUMMARY :

Dave Vellante closes out A Blueprint for Trusted Infrastructure with Dell, covering air-gapped cyber recovery vaults, machine-learning-based detection of indicators of compromise, APIs that surface storage alerts to security teams, and how data protection underpins cyber resilience.


Rob Emsley, Dell Technologies


 

(upbeat music) >> Welcome back to A Blueprint For Trusted Infrastructure. We're here with Rob Emsley, who's the director of product marketing for data protection and cyber security. Rob, good to see you. A new role. >> Yeah. Good to be back, Dave. Good to see you. Yeah, it's been a while since we chatted last and, you know, one of the changes in my world is that I've expanded my responsibilities beyond data protection marketing to also focus on cybersecurity marketing specifically for our infrastructure solutions group. So certainly that's, you know, something that really has driven us, you know, to come and have this conversation with you today. >> So data protection obviously has become an increasingly important component of the cyber security space. I don't think necessarily of, you know, traditional backup and recovery as security; to me, it's an adjacency. I know some companies have said, oh, yeah, now we're a security company. They're kind of chasing the valuation bubble. >> For sure. >> Dell's interesting because you have, you know, data protection in the form of backup and recovery and data management, but you also have security, you know, direct security capabilities. So you're sort of bringing those two worlds together, and it sounds like your responsibility is to connect those dots. Is that right? >> Absolutely. Yeah. I mean, I think that the reality is that security is a multi-layer discipline. I think the days of thinking that it's one or another technology that you can use, or process that you can use, to make your organization secure are long gone. I mean, you're actually correct: if you think about the backup and recovery space, people have been doing that for years. You know, certainly backup and recovery, it's all about the recovery. It's all about getting yourself back up and running when bad things happen. And one of the realities, unfortunately, today is that one of the worst things that can happen is cyber attacks. You know, ransomware, malware are all things that are top of mind for all organizations today. And that's why you see a lot of technology and a lot of innovation going into the backup and recovery space, because if you have a copy, a good copy of your data, then that is really the first place you go to recover from a cyber attack. And that's why it's so important. The reality is that, unfortunately, the cyber criminals keep on getting smarter. I don't know how it happens, but one of the things that is happening is that the days of them just going after your production data are no longer the only challenge that you have; they go after your backup data as well. So over the last half a decade, Dell Technologies, with its backup and recovery portfolio, has introduced the concept of isolated cyber recovery vaults. We've had many conversations about that over the years, and that's really a big tenet of what we do in the data protection portfolio. >> So this idea of cybersecurity resilience — that definition is evolving. What does it mean to you? >> Yeah, I think the analyst team over at Gartner, they wrote a very insightful paper called "You Will Be Hacked, Embrace the Breach." And the whole basis of this analysis is that so much money's been spent on prevention, and what's out of balance is the amount of budget that companies have spent on cyber resilience. And cyber resilience is based upon the premise that you will be hacked. You have to embrace that fact and be ready and prepared to bring yourself back into business.
You know, and that's really where cyber resiliency is very, very different than cyber security and prevention. And I think that balance of: get your security disciplines well funded, get your defenses as good as you can get them, but make sure that if the inevitable happens and you find yourself compromised, you have a great recovery plan. And certainly a great recovery plan is really the basis of any good, solid data protection, backup and recovery philosophy. >> So if I had to do a SWOT analysis — we don't have to do the W-O-T, but let's focus on the S — what would you say are Dell's strengths in this, you know, cyber security space as it relates to data protection? >> One is we've been doing it a long time. You know, we talk a lot about Dell's data protection being proven and modern. You know, certainly the experience that we've had over literally three decades of providing enterprise-scale data protection solutions to our customers has really allowed us to have a lot of insight into what works and what doesn't. As I mentioned to you, one of the unique differentiators of our solution is the cyber recovery vaulting solution that we introduced a little over five years ago — five, six years. PowerProtect Cyber Recovery is something which has become a unique capability for customers to adopt on top of their investment in Dell Technologies data protection. You know, the unique elements of our solution are really threefold, and we call them the three Is: it's isolation, it's immutability, and it's intelligence. And the isolation part is really so important, because you need to reduce the attack surface of your good, known copies of data. You know, you need to put it in a location that the bad actors can't get to. And that really is the essence of a cyber recovery vault. Interestingly enough, you're starting to see the market throw out that word, you know, from many other places, but really it comes down to having a real discipline: that you don't allow the security of your cyber recovery vault to be compromised, insofar as allowing it to be controlled from outside of the vault — you know, allowing it to be controlled by your backup application. Our cyber recovery vaulting technology is independent of the backup infrastructure. It uses it, but it controls its own security. And that is so, so important. It's like having a vault where the only way to open it is from the inside, you know, and think about that. If you think about, you know, vaults in banks or vaults in your home, normally you have a keypad on the outside. Think of our cyber recovery vault as having its security controlled from inside of the vault. >> So nobody can get in, nothing can get in unless it's already in. And if it's already in, then it's trusted. >> Exactly, exactly. >> Yeah. So isolation's the key. And then you mentioned immutability is the second piece. >> Yeah, so immutability is also something which has been around for a long time. People talk about backup immutability or immutable backup copies. So immutability is just the additional technology that allows the data that's inside of the vault to be unchangeable. You know, but again, with that immutability, your mileage varies, you know, when you look across the different offers that are out there in the market, especially in the backup industry. You made a very valid point earlier that the backup vendors in the market seem to be security washing their marketing messages.
I mean, everybody is leaning into the ever-present danger of cybersecurity — not a bad thing, but the reality is that you have to have the technology to back it up, you know, quite literally. >> Yeah, no pun intended. Right. Actually, pun intended. Now what about the intelligence piece of it? That's AI, ML — where does that fit? >> For sure. So the intelligence piece is delivered by a solution called CyberSense. And CyberSense, for us, is what really gives you the confidence that what you have in your cyber recovery vault is a good, clean copy of data. So it's looking at the backup copies that get driven into the cyber vault, and it's looking for anomalies. So it's not looking for signatures of malware — you know, that's what your antivirus software does, that's what your endpoint protection software does; that's on the prevention side of the equation. But what we're looking for is to ensure that the data that you need, when all hell breaks loose, is good, and that when you get a request to restore and recover your business, you go, right, let's go and do it, and you don't have any concern that what you have in the vault has been compromised. So CyberSense is really a unique analytic solution in the market, based upon the fact that it isn't looking at cursory indicators of malware infection or ransomware introduction; it's doing full content analytics. You know, looking at: has the data in any way changed? Has it suddenly become encrypted? Has it suddenly become different to how it was in the previous scan? So that anomaly detection is very, very different. It's looking for, you know, different characteristics that really are an indicator that something is going on. And, of course, if it sees it, you immediately get flagged. But the good news is that you always have in the vault the previous copy of good, known data, which now becomes your restore point.
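CyberSense itself is proprietary, but the anomaly Rob describes — data that "suddenly became encrypted" — can be illustrated with a toy version of content analytics: encrypted bytes look nearly random, so a sharp rise in entropy between scans of the same data is a flag. The thresholds below are illustrative assumptions, not the product's actual logic.

```python
# Toy illustration of entropy-based anomaly detection: flag a file
# whose content suddenly became encryption-like between two scans.
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Entropy in bits per byte: approaches 8.0 for random/encrypted data."""
    if not data:
        return 0.0
    counts = Counter(data)
    total = len(data)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def looks_suspicious(previous_scan: bytes, current_scan: bytes,
                     jump_threshold: float = 2.0,
                     high_water: float = 7.5) -> bool:
    """True if content is both very high entropy and a big jump from last scan."""
    before = shannon_entropy(previous_scan)
    after = shannon_entropy(current_scan)
    return after > high_water and (after - before) > jump_threshold
```

Real full-content analytics looks at many more characteristics than entropy, but the principle is the same: compare each backup against the previous known-good scan, and let any flagged copy fall back to the prior restore point.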
>> So we're talking to Rob Emsley about how data protection fits into what Dell calls DTI, Dell Trusted Infrastructure. And I want to come back, Rob, to this notion of "and," not "or," 'cause I think a lot of people are skeptical. Like, how can I have great security and not introduce friction into my organization? Is that an automation play? How does Dell tackle that problem? >> I mean, I think a lot of it, across our infrastructure, is that security has to be built in — I mean, intrinsic security within our servers, within our storage devices, within the elements of our backup infrastructure. I mean, security, multifactor authentication, you know, elements that make the overall infrastructure secure. You know, we have capabilities that allow us to identify whether or not configurations have changed. You know, we'll probably be talking about that a little bit more later in the segment, but the essence is: security is not a bolt-on. It has to be part of the overall infrastructure. And that's so true, certainly in the data protection space. >> Give us the bottom line on how you see Dell's key differentiators. Maybe you could talk about — Dell, of course, always talks about its portfolio — but why should customers, you know, lean in to Dell in this whole cyber resilience space? >> You know, staying on the data protection space, as I mentioned, the work we've been doing to introduce this cyber resiliency solution for data protection is, in our opinion, as good as it gets. You know, you've spoken to a number of our best customers, whether it be Bob Bender from Founders Federal or, more recently at Dell Technologies World, you spoke to Tony Bryson from the Town of Gilbert. And these are customers that we've had for many years that have implemented cyber recovery vaults. And at the end of the day, they can now sleep at night. You know, that's really the peace of mind that they have: the insurance that a data protection from Dell cyber recovery vault — a PowerProtect Cyber Recovery solution — gives them really allows them to just have the assurance that they don't have to pay a ransom, whether they have an insider threat issue or, you know, all the way down to data deletion, because they know that what's in the cyber recovery vault is good and ready for them to recover from. >> Great. Well, Rob, congratulations on the new scope of responsibility. I like how, you know, your organization is expanding as the threat surface is expanding. As we said, data protection is becoming an adjacency to security — not security in and of itself, but a key component of a comprehensive security strategy. Rob Emsley, thank you for coming back in theCUBE. Good to see you again. >> You too, Dave. Thanks. >> All right, in a moment, I'll be back to wrap up A Blueprint For Trusted Infrastructure. You are watching theCUBE. (upbeat music)

Published Date : Aug 4 2022

SUMMARY :

Rob Emsley of Dell Technologies explains how data protection fits into Dell Trusted Infrastructure: isolated cyber recovery vaults whose security is controlled from inside the vault, immutable retention-locked copies, and CyberSense full-content analytics that verify the vault holds a good, clean restore point.


Glyn Martin, BT Group | DevOps Virtual Forum


 

>> From around the globe, it's theCUBE, with digital coverage of DevOps Virtual Forum, brought to you by Broadcom. >> Welcome to Broadcom's DevOps Virtual Forum. I'm Lisa Martin, and I'm joined by another Martin, very socially distanced from me, all the way coming from Birmingham, England: Glyn Martin, head of QA transformation at BT. Glyn, it's great to have you on the program. >> Thank you, Lisa. I'm looking forward to it. >> As we said before we went live, two Martins for the price of one in one segment, so this is going to be an interesting segment, guys. What we're going to do is, Glyn's going to give us a really kind of deep, inside-out view of DevOps from an evolution perspective. So, Glyn, let's start. Transformation is at the heart of what you do. It's obviously been a very transformative year. How have the events of this year affected the transformation that you are responsible for driving? >> Yeah, thank you, Lisa. I mean, yeah, it has been a difficult year, and although working for BT, which is a global telecommunications company that is relatively resilient, I suppose, as an industry, through COVID, it obviously still has been affected and has got its challenges. And if anything, it's actually caused us to accelerate our transformation journey. You know, we had to do some great things during this time — around, you know, in the UK, our emergency and health workers, giving them unlimited data, and supporting vulnerable people — and that meant that we've had to deliver changes quickly. But what we want to be able to do is deliver those kinds of changes quickly, but sustainably, for everything that we do, not just because there's an emergency. So we were already on that kind of journey, but it's ever more important now that we're able to do that kind of work more quickly. And it has to work, because the implications of it not working could be terrible — in terms of, you know, we've been supporting testing centers, new hospitals to treat COVID patients, so we need to get it right. And therefore the coverage of what we do, the quality of what we do, and how quickly we do it really has taken on a new significance, in what was already a very competitive market within the telco industry in the UK. What I would say is that, you know, we are under pressure to deliver more value, but we also have cost challenges. We have to obviously deal with the fact that COVID-19 has hit most industries' revenues and profits. So we've got this kind of paradox between having less cost, but having to deliver more value, quicker, and to higher quality. So, yeah, certainly the finances are on our minds, and that's why we need flexible models — cost models that allow us to do growth, but where we get that growth by showing that we're delivering value, especially in these times when there are financial challenges on companies. >> So one of the things that I want to ask you about, again looking at DevOps from the inside out, and the evolution that you've seen: you talked about the speed of things really accelerating in these last nine months or so. When we think DevOps, we think speed. But one of the things I'd love to get your perspective on — we've talked about it in a number of the segments that we've done for this event — is cultural change. What are some of the things that you've seen there, as needing to get, as you said, things right, but done so quickly, to support essential businesses, essential workers?
How have you seen that cultural shift? >> Yeah, I think, you know, before, test teams saw themselves as just one part of the software delivery cycle. Actually, now, our customers are expecting quality, and to deliver for our customers what they want, quality has to be ingrained throughout the life cycle. Obviously, there are lots of buzzwords, like shift-left — how do you do shift-left testing? But for me, that's really instilling quality, and giving capabilities — shared capabilities — throughout the life cycle that drive automation, drive improvements. I always say that you're only as good as your lowest common denominator, and one thing that we were finding on our DevOps journey was that we would be trying to do certain things quicker, and we had automated builds, automated tests. But if we were taking weeks to create test scripts, or weeks to manually craft data — and even then, when we had taken so long to do it, the coverage was quite poor — that led to lots of defects later in the life cycle, or even in our production environment. We just couldn't afford to do that. And actually, focusing on continuous testing over the last nine to 12 months has really given us the ability to deliver quickly across the whole life cycle, and therefore to go beyond doing a kind of semi-agile thing, where we did user stories and a few of the agile ceremonies but weren't really deploying any quicker into production, because our stakeholders were scared that we didn't have the same control that we had with more waterfall releases — and, you know, we didn't quite trust ourselves either. So we've done a lot of work on every aspect, especially from a testing point of view — on every activity, rather than just looking at automated tests: whether it's actually creating the tests in the first place, whether it's doing security testing earlier in the life cycle, or performance testing earlier in the life cycle, etcetera. So, yeah, continuous testing has been a real key thing for us in driving DevOps. >> Talk to me a little bit about your team. What are some of the shifts in terms of expectations that you're experiencing, and how does your team interact with the internal folks, from pipeline through life cycle? >> Yeah, we've done a lot of work on this. You know, there's a thing — I think people call it the customer experience gap. It reminds me of a Dilbert cartoon where, you know, we start with the requirements here, and there's almost a Chinese-whispers effect, and what we deliver is completely, completely different. So we — the testing team, or the delivery team — think we've done a great job: this is what it said in the acceptance criteria. But then our customers say, well, actually, that's not working, this isn't working. You know, there's this kind of gap. We had a great launch this year of Agile Requirements Designer, one of the Broadcom tools, and for the first time since I can remember, actually, working within BT, I had customers saying to me, wow, you know, we want more of this — we want more projects to have Agile Requirements Designer on them, because it allowed us to actually work with the business collaboratively.
I mean, we talk about collaboration, but how do you actually do that — have something that both the business and technical people can understand? And we've actually been working with the business, using Agile Requirements Designer, to really look at what the requirements are, tease out requirements they hadn't even thought of, and make sure that we've got high levels of test coverage. And with what we actually deliver at the end of it, not only have we been able to generate tests more quickly, but we've got much higher test coverage, and we can also, more smartly — using the kind of AI within the tool, and with some of the other pipeline tools — actually choose the right tests to run, while still taking a risk-based testing approach. So that's been a great launch this year, but it's just the start of many things that we're doing.
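The risk-based selection Glyn mentions — letting the tooling choose the right tests for a change — can be sketched in a few lines. This is a generic illustration, not BT's or Broadcom's implementation; the scoring weights and data shapes are assumptions.

```python
# A generic sketch of risk-based test selection: weight each test by
# how much changed code it touches and how often it has failed lately.
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    covered_files: set[str]
    recent_failure_rate: float  # 0.0 - 1.0, taken from test history

def select_tests(tests: list[TestCase], changed_files: set[str],
                 budget: int) -> list[TestCase]:
    """Pick the highest-risk tests, up to a run budget (number of tests)."""
    def risk(test: TestCase) -> float:
        overlap = len(test.covered_files & changed_files)
        return overlap * (1.0 + test.recent_failure_rate)

    ranked = sorted(tests, key=risk, reverse=True)
    return [t for t in ranked[:budget] if risk(t) > 0]

# Example (hypothetical helper names): run only the 50 riskiest tests
# for this commit's diff.
# suite = load_test_history(...)
# to_run = select_tests(suite, {"billing/rates.py"}, budget=50)
```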
>> But what I hear in that, Glyn, is a lot of positives that have come out of a very challenging situation. And I like that perspective. This is a very challenging time for everybody in the world, but it sounds like, from a collaboration perspective, you're right — we talk about that a lot as critical with DevOps — those challenges were ones you were able to overcome pretty quickly. What other challenges did you face and figure out quickly enough to be able to pivot so fast? >> I mean, you talked about culture. BT is like most companies: it's very siloed, and we're still trying to work to become closer as a company. So I think there are a lot of challenges around, how do you integrate with other tools, how do you integrate with the various different technologies? In BT we have 58 different IT stacks — that's not systems, that's stacks — and all of those stacks can have hundreds of systems. We're driving at the moment a simplification program, where we're trying to reduce that number to 14 stacks, and even then there will be complexity behind the scenes that will challenge us more and more as we go forward. How do we actually hide that from our users? As an IT organization, how do we make ourselves leaner, so that even though we've still got some of that legacy — and we'll never fully get rid of it; that's the kind of trade-off that we have to make — we can deal with it, hide it from our users, as I say, and drive those programs so we can actually accelerate change and take that kind of waste and legacy cost out of our business? The other thing as well — and I'm sure telecoms is probably no different to insurance or finance — is that when you take the number of products that we have and combine them, the permutations are tens and hundreds of thousands of products. So we as a business are trying to simplify. We're trying to do that in an agile way — trying to do agile in the proper way, you know, really work at pace, really deliver value. So I think what we're looking at more and more at the moment is being more value-focused. Before, we used to deliver changes into production — someone had a great idea, or it was a great idea nine or 12 months ago — but then we'd end up deploying it, and when we look at the usage of that product or application, it's not been used for six months. Because of the last 12 months, we certainly haven't got room for that kind of waste, or for not really understanding the value of the changes that we're doing. So I think that's the most important thing at the moment: really taking that waste out. You know, there's lots of focus on things like flow management — what bits of our process are actually taking too long — and we've started on that journey, but we've got a hell of a long way to go. But that involves looking at every aspect of the software delivery cycle. >> Going from 58 IT stacks down to 14, or whatever it's going to be — simplifying sounds magical to everybody, but it's a big challenge. What are some of the core technology capabilities that you see as really essential for enabling that, with this new way that you're working? >> Yeah, I mean, we've started on a continuous testing journey, and I think that's just the start. As I say, that's looking at every aspect, from a QA point of view — every aspect of what we do. But we're also starting to branch into more like AIOps and, really, the full life cycle. And that's just a stepping stone, because I think autonomics is the way forward, right? All of this kind of stuff that happens — monitoring systems, what's happening in production — how do we feed that back? How do you get to a point where we think about a change, and then suddenly it's in production, safely — or, if it's not going in safely, it's automatically backing out? So it's a very, very long journey. But in a world where the pace is ever increasing, the demands on the team are increasing, and with the pressures on at the moment, where we're being asked to do things more efficiently and as lean as possible, we need to be thinking about every part of the process, and how we put the stepping stones in place to lead us to a more automated future. >> Do you feel that planned outcomes are starting to align with what's delivered, given this massive shift that you're experiencing? >> I think it's starting to. As we look at more of a value-based approach and, as I say, practices like flow management, I think that will become ever more important. So I think it's starting to. People have certainly realized that teams need to work together — that closeness between business and IT — especially as we move to more SaaS-based solutions and low-code solutions; there's not such a gap anymore. Actually, some of our business partners are expected to be much more tech-savvy. So I think this is what we have to appreciate: what is IT's role? How do we provide the capabilities, become more centers of excellence, rather than doing mounds and mounds of work — and for me, from a testing point of view, mounds and mounds of testing? How do we automate that? How do we generate that instead of create it? I think that's the kind of challenge going forward.
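Flow management of the kind Glyn describes starts with a simple measurement: how long work items sit in each stage of the delivery process. A minimal sketch, with illustrative stage names and event shapes:

```python
# Find which parts of the delivery process take too long, given
# timestamped stage-transition events for each work item. The stage
# names and event shape here are illustrative assumptions.
from collections import defaultdict
from statistics import median

def stage_durations(events: list[dict]) -> dict[str, float]:
    """Median hours each work item spends in each stage."""
    # events: {"item": "BT-123", "stage": "code-review",
    #          "entered": datetime, "left": datetime}
    per_stage = defaultdict(list)
    for e in events:
        hours = (e["left"] - e["entered"]).total_seconds() / 3600
        per_stage[e["stage"]].append(hours)
    return {stage: median(times) for stage, times in per_stage.items()}

# The stage with the largest median duration is the first candidate
# for automation or for removing hand-offs.
```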
>> As we look forward, what are some of the things that you would like to see implemented or deployed in the next, say, six to 12 months, as we hopefully round a corner with this pandemic? >> Yeah, I think, certainly for where we are as a company from a QA perspective, there are certain bits that we do well. You know, we've started creating continuous delivery and DevOps pipelines, but there are still manual aspects of that. So, certainly for me, I've challenged my team with saying, how do we do an automated journey? So if I put a requirement in Jira, or wherever it is, I can then click a button and, with either zero-touch or one-touch, put that into production, and have confidence that it has been done safely and that it works — and know what happens if it doesn't work. So that's what our concentration is about over the next few months. But it's also about decision making — how do we actually understand those value judgements? And I think with lots of these things — DevOps, AIOps, all those aspects of business operations — it's about having the information in one place to make those kinds of decisions, and how it all ties together. As I say, even still with DevOps, we've got elements within my company where lots of different organizations are doing similar kinds of things but are still working in silos. So I think AIOps becomes more and more to the fore as we go to the cloud — and we're still very early on in our cloud journey, so we need to make sure the technologies work with cloud as well as with our kind of legacy systems. But it's about bringing that all together and having a fully visible pipeline that everybody can see and make decisions against.
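The zero-touch journey Glyn describes — click a button, deploy safely, and back out automatically if it isn't safe — reduces to a small control loop. In the sketch below, the deploy, rollback, and error-rate functions are hypothetical stand-ins for whatever a real pipeline and monitoring stack actually expose, and the threshold is an assumption.

```python
# Minimal sketch of a zero-touch release gate: promote a build, watch
# a health signal, and roll back automatically on degradation.
import time

ERROR_RATE_LIMIT = 0.02   # assumed threshold: 2% of requests failing
WATCH_SECONDS = 300       # assumed observation window after deploy

def zero_touch_release(version: str, deploy, rollback, error_rate) -> bool:
    """Deploy, observe, and back out automatically — no human in the loop.

    deploy/rollback take a version string; error_rate() returns the
    current production failure fraction. All three are supplied by the
    surrounding pipeline and monitoring stack.
    """
    deploy(version)
    deadline = time.time() + WATCH_SECONDS
    while time.time() < deadline:
        if error_rate() > ERROR_RATE_LIMIT:
            rollback(version)     # the "automatically backing out" step
            return False
        time.sleep(10)
    return True                   # release held steady; considered safe
```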
>> You said the word confidence, which jumped out at me right away, because absolutely, you've got to be able to have confidence in what your team is delivering and how it's impacting the business and those customers. Last question for you: how would you advise your peers in a similar situation to leverage technology and automation — for example, DevOps — to be able to gain the confidence that they're making the right decisions for their business? >> Yeah, I mean, I think the approach that we've taken actually has not started with technology. We've taken human-centered design as a core principle of what we do within the IT part of BT. So by using human-centered design, that means we talk to our customers, we understand their pain points, we map out their current processes, and when we map out those processes, we also understand their aspirations. Where do they want to be in six months? Do they want to be more agile, or is this a part of their business that they want to run better? We then have to look at why it's not running well, and then see what solutions are out there. We've been lucky that, with our partnership with Broadcom, within the PLA, a lot of the tools in the PLA have directly answered some of the business's problems. But I think by having those conversations and actually engaging with the business — especially if the business holds the purse strings, which in some companies, including ours, they do — by understanding their pain points and then saying, this is how we can solve your problem, we've tended to be much more successful than trying to impose something and saying, here's the technology, when they don't quite understand it and don't really see how it resonates with their problems. So I think that's the heart of it: it's really about looking at the data, looking at the processes, looking at where the kind of waste is, and then looking at the right solutions. And as I say, continuous testing is massive for us. We've also got a good relationship with a partner looking at visual AI, and actually there's a common theme through that. I mean, AI is becoming more and more prevalent — and I know, yeah, sometimes people debate the semantics of what AI is, whether it's true AI or not — but certainly AI and machine learning are becoming more and more prevalent in the way that we work, and they're allowing us to be much more effective, quicker in what we do, and more accurate: whether it's finding defects, running the right tests, or being able to anticipate problems before they happen in a production environment. >> Well, Glyn, thank you so much for giving us this sort of inside-out look at DevOps, sharing the successes that you're having, taking those challenges and converting them to opportunities, and for giving folks who might be in your shoes, or maybe slightly behind, advice. I'm sure they appreciate it. We appreciate your time. >> It's been an absolute pleasure, really. Thank you for inviting me. I've thoroughly enjoyed it. So thank you ever so much. >> Excellent. Me too. I've learned a lot. For Glyn Martin, I'm Lisa Martin. You're watching theCUBE.

Published Date : Nov 20 2020

SUMMARY :

Glyn Martin of BT Group describes how COVID accelerated BT's transformation: continuous testing across the life cycle, Agile Requirements Designer for collaborative requirements, simplifying 58 IT stacks toward 14, value-focused flow management, and a push toward zero-touch pipelines, AIOps, and autonomics.


DevOps Virtual Forum 2020 | Broadcom


 

>> From around the globe, it's theCUBE, with digital coverage of DevOps Virtual Forum, brought to you by Broadcom. >> Hi, Lisa Martin here, covering the Broadcom DevOps Virtual Forum. I'm very pleased to be joined today by a CUBE alum, Jeffrey Hammond, vice president and principal analyst serving CIOs at Forrester. Jeffrey, nice to talk with you today. >> Good morning. It's good to be here. >> So, a virtual forum — a great opportunity to engage with our audience. So much has changed in the last year; saying that is an understatement, right — or maybe an overstatement — but it's obvious: so much has changed when we think of DevOps. One of the things that we think of is speed, you know, enabling organizations to better serve customers or adapt to changing markets like the one we're in now. Speaking of the need to adapt, talk to us about what you're seeing with respect to DevOps and agile in the age of COVID. What are things looking like? >> Yeah, I think that for most organizations, we're in a period of adjustment. When we initially started, it was essentially a sprint: you run as hard as you can, as fast as you can, for as long as you can, and you just kind of power through it. And that's actually what the folks at GitHub saw in May, when they ran an analysis of developers' commit times and the level of work that they were committing, and how they were working in the first couple of months of COVID. They found that developers, at least in the Pacific time zone, were actually increasing their work volume — maybe because they didn't have two-hour commutes, or maybe because they were stuck away in their homes, but for whatever reason, they were doing more work. And it's almost like, you know, if you've ever run a marathon: the first mile or two in the marathon, you feel great, and you just want to run, and you want to power through it, and you want to go hard. And if you do that, by the time you get to mile 18 or 19, you're going to be gassed, sucking wind. And that's, I think, where we're starting to hit. So as we start to gear our development shops up for the reality that most of us won't be returning to an office until 2021 at the earliest, and that many organizations will be fundamentally changing their remote workforce policies, we have to make sure that the agile processes that we use, and the DevOps processes and tools that we use to support these teams, are essentially aligned to help developers run that marathon instead of just kind of powering through. So let me give you a couple of specifics. Many organizations have been in an environment where they will tolerate remote work — what I would call remote work around the edges. Developers can be remote, but product managers and, you know, essentially scrum masters, and all the administrators that are running the SCM repositories and the DevOps pipelines, are all in the office. It's essentially centralized work. That's not where we are anymore. We're moving from remote workers at the edge to remote workers at the center of what we do. And so one of the implications of that is that we have to think about all the activities that you need to do from a DevOps perspective or from an agile perspective: they have to work for remote people. One of the things I found with some of the organizations I talked to early on was that there were things administrators had to do that required them to go into the office — to reboot the SCM server, as an example, or to make sure that the final approvals for production were made
One of the things I found with some of the organizations I talked to early on was there were things that administrators had to do that required them to go into the office to reboot the SCM server as an example, or to make sure that the final approvals for production, uh, were made. >>And so the code could be moved into the production environment. And so it actually was a little bit difficult because they had to get specific approval from the HR organizations to actually be allowed to go into the office in some States. And so one of the, the results of that is that while we've traditionally said, you know, tools are important, but they're not as important as culture as structure as organization as process. I think we have to rethink that a little bit because to the extent that tools enable us to be more digitally organized and to hiring, you know, achieve higher levels of digitization in our processes and be able to support the idea of remote workers in the center. They're now on an equal footing with so many of the other levers, uh, that, that, um, uh, that organizations have at their disposal. Um, I'll give you another example for years. >>We've said that the key to success with agile at the team level is cross-functional co located teams that are working together physically co located. It's the easiest way to show agile success. We can't do that anymore. We can't be physically located at least for the foreseeable future. So, you know, how do you take the low hanging fruits of an agile transformation and apply it in, in, in, in the time of COVID? Well, I think what you have to do is that you have to look at what physical co-location has enabled in the past and understand that it's not so much the fact that we're together looking at each other across the table. It's the fact that we're able to get into a shared mindspace, uh, from, um, uh, from a measurement perspective, we can have shared purpose. We can engage in high bandwidth communications. It's the spiritual aspect of that physical co-location that is actually important. So one of the biggest things that organizations need to start to ask themselves is how do we achieve spiritual colocation with our agile teams? Because we don't have the, the ease of physical co-location available to us anymore? >>Well, the spiritual co-location is such an interesting kind of provocative phrase there, but something that probably was a challenge here, we are seven, eight months in for many organizations, as you say, going from, you know, physical workspaces, co-location being able to collaborate face to face to a, a light switch flip overnight. And this undefined period of time where all we were living with with was uncertainty, how does spiritual, what do you, when you talk about spiritual co-location in terms of collaboration and processes and technology help us unpack that, and how are you seeing organizations adopted? >>Yeah, it's, it's, um, it's a great question. And, and I think it goes to the very root of how organizations are trying to transform themselves to be more agile and to embrace dev ops. Um, if you go all the way back to the, to the original, uh, agile manifesto, you know, there were four principles that were espoused individuals and interactions over processes and tools. That's still important. Individuals and interactions are at the core of software development, processes and tools that support those individual and interact. 
and those interactions are more important than ever. Working software over comprehensive documentation: working software is still more important, but when you're onboarding employees who can't come into the office, can't do the two-day training session to understand how things work, and can't just holler over the cube wall to ask a question, you may need to invest a bit more in documentation to help that onboarding process succeed in a remote context. Customer collaboration over contract negotiation:
>>absolutely still important, but employee collaboration is equally important if you want to be spiritually co-located and have a shared purpose. And then responding to change over following a plan: I think one of the things that's happened in a lot of organizations is that we focused so much of our DevOps effort on velocity, on getting faster, running as fast as we can like that sprinter, trying to power through as quickly as possible. As we shift to the marathon way of thinking, velocity is still important, but agility becomes even more important. When you have to create an application in three weeks to do track-and-trace for your employees, agility matters more than flat-out velocity. And so changing some of the ways we think about DevOps practices is important to make sure that agility is there. For one thing, you have to defer decisions as far down the chain, to the team level, as possible.
>>Those teams have to be empowered to make decisions, because you can't have a program-level meeting of six or seven teams in one large hall and say: here's the lay of the land, here's what we're going to do, here are our processes, and here are our guardrails. Those teams have to make decisions much more quickly. Developers are also developing code in smaller chunks of flow; they have to be able to take two hours here or fifty minutes there and do something useful. And so the tools that support us have to become tolerant of the reality of how we're working. If they work in a way that allows the team to take as much autonomy as it can handle, to communicate in a way that delivers shared purpose, and to adapt and master new technologies, then they're in the zone and they'll get spiritually connected. I hope that makes sense.
>>It does. I think we all could use some of that. You talked about this in the beginning, and I've talked to numerous companies during the pandemic on theCUBE about how productivity, or rather the number of hours of work, has gone way up for many roles, at times late at night and on the weekends. It's a cultural mind shift, to your point about DevOps focused on velocity: sprint, sprint, sprint. That cultural shift is not an easy one for developers and IT folks to flip so quickly. What have you seen in terms of the speed at which businesses are able to find more of that balance between velocity, the sprint, and agility?
>>I think at the core this really comes down to management sensitivity. When everybody was in the office, you could kind of see the mental health of development teams by watching how they worked.
You call it management by walking around, right? We can't do that anymore. Managers have to be more aware of what their teams are doing, because they're not going to see the developer doing a check-in at 9:00 PM on a Friday because that's what it took to meet the objectives. They're going to have to find new ways to measure engagement, and also potential burnout. A friend of mine once had a great metric he called the parking lot metric: how full was the parking lot at nine, and how full was it at five?
>>That gave you an indication of how engaged your developers were. What's the digital equivalent of the parking lot metric in the time of COVID? It's commit stats, commit rates, the churn rate in our code. We have this information, even if we may not be collecting it. But then the next question becomes: how do we use it? Do we use it to say this team isn't delivering at the same level of productivity as another team — do we weaponize the data? Or do we use it to identify impediments in the process? Why isn't a team working effectively? Is it because they have higher levels of family obligations and they've got kids at home? Is it because they're working with hardware, and it's not easy to get that hardware into their home office because it's in the lab at the corporate office? Or are they trying to communicate halfway around the world
>>with an office lab that is also shut down, where the bandwidth just doesn't enable high-bandwidth communications? So from a DevOps perspective, managers have to get much more sensitive to the exhaust the DevOps tools are throwing off, and to how they're going to use it in a constructive way to prevent burnout. And if they're not already managing, monitoring, or measuring the level of developer engagement, they really need to start, whether that's surveys around developer satisfaction, more regular social events where developers can get together, have a beer, and talk about what's going on in the project, or monitoring who checks in and who doesn't. They have to work harder at this, I think, than they ever have before.
>>Well, you mentioned burnout, and that's something I think we've all faced in this time, at varying levels. There's a tension in the air regardless of where you are. There's a challenge, as you mentioned, with people having their kids as coworkers and fighting for bandwidth, because everyone is forced into this situation. I'd love to get your perspective on businesses that have done this adaptation well. What can you share in terms of real-world examples that might inspire the audience?
>>Yeah, I'll start with Stack Overflow. They recently published a piece in the journal of the ACM about some of the things they've discovered. First of all, a cultural philosophy: if one person is remote, everybody is remote, and you think that way from the executive level down. Then there are social spaces.
One of the things they talk about doing is leaving a video conference room open at the team level all day long. Team members will go on mute so that nobody else is listening in on them, but if they have a question, they can just pop off mute really quickly and ask it, and anybody who knows the answer can chime in. It's kind of like being in a virtual pod, if you will. Even here at Forrester, one of the things we've done is invest in social ceremonies.
>>We've actually moved the team meetings on my analyst team from once every two weeks to weekly, and we've built in more time for socialization, just so we can see how we're doing. I think Microsoft has also made some good information available on how they've managed things like the onboarding process. Amanda Silver over there mentioned, in a presentation a couple of weeks ago, that Microsoft has onboarded over 150,000 people since the start of COVID. If you don't have good remote onboarding processes, that's going to be a disaster. They're not all developers, but think about everything from how you do the interviewing process, to how people get their badges, to how they get their equipment. Security is another issue they called out. Typically, IT security's responsibility for developers' machines ends at the corporate desktop.
>>But since we're increasingly using our own machines and our own hardware, security organizations have to extend their security policies to cover employee devices, and that's caused them to scramble a little. So the examples are out there. It's not that we have to do everything completely differently, but there are a lot of subtle changes that have to be made. I'll give you another example. One of the things we're seeing is that more and more organizations, to deal with the challenges around agility in delivering software, are embracing low-code tools. In fact, we see about 50% of firms using low-code tools right now, and we predict it's going to be 75% by the end of next year. So figure out how your DevOps processes support an organization that might be using Mendix or OutSystems or the Power Platform to build the front end of an application — like a track-and-trace application — really quickly, and then hook it up to your backend infrastructure. Does that happen completely outside the DevOps investments and the agile processes you're making, or do you adapt your organization? Our teams are hybrid now: teams that have not just professional developers but also business users doing some development with a low-code tool. Those are the kinds of things we have to be willing to entertain in order to shift the focus a little more toward the agility side.
>>A lot of obstacles, but also a lot of opportunities for businesses to really learn, pay attention, pivot, and grow, and hopefully some good opportunities for the developers and the business folks to get better at what they're doing and learn to embrace spiritual co-location. Jeffrey, thank you so much for joining us on the program today. Very insightful conversation.
>>My pleasure.
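Jeffrey's "digital parking lot metric" — commit stats, commit timing, churn — is straightforward to prototype against any git repository. Below is a minimal Python sketch of the idea; the repository path, the 90-day window, and the late-night and weekend cutoffs are illustrative assumptions, not taken from any tool discussed here.

```python
# Minimal sketch of a "digital parking lot metric": mine commit timestamps
# from a local git checkout and summarize when work is actually happening.
import subprocess
from collections import Counter
from datetime import datetime

def commit_times(repo_path: str, since: str = "90 days ago"):
    """Return commit author datetimes (ISO 8601) for a repository."""
    out = subprocess.run(
        ["git", "-C", repo_path, "log", f"--since={since}", "--format=%aI"],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    return [datetime.fromisoformat(ts) for ts in out]

def engagement_summary(repo_path: str) -> dict:
    times = commit_times(repo_path)
    total = len(times) or 1
    late = sum(1 for t in times if t.hour >= 21 or t.hour < 6)   # 9pm-6am (assumed cutoff)
    weekend = sum(1 for t in times if t.weekday() >= 5)          # Saturday/Sunday
    by_hour = Counter(t.hour for t in times)
    return {
        "commits": len(times),
        "late_night_share": round(late / total, 2),
        "weekend_share": round(weekend / total, 2),
        "busiest_hours": by_hour.most_common(3),
    }

if __name__ == "__main__":
    print(engagement_summary("."))  # run from inside any git repository
```

As Jeffrey stresses, a signal like this is for spotting impediments and burnout risk over time, not for weaponizing productivity comparisons between teams.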
It's an important thing. Just remember: if you're going to run that marathon, break it into 26 ten-minute runs, take a walk break in between each, and you'll find that you'll get there.
>>Digestible components — wise advice. Jeffrey Hammond, thank you so much for joining. For Jeffrey, I'm Lisa Martin. You're watching Broadcom's DevOps Virtual Forum.
>>From around the globe, it's theCUBE, with digital coverage of DevOps Virtual Forum, brought to you by Broadcom.
>>Continuing our conversations here at Broadcom's DevOps Virtual Forum, Lisa Martin here. Pleased to welcome back to the program Serge Lucio, the general manager of the Enterprise Software Division at Broadcom. Hey, Serge, welcome.
>>Thank you. Good to be here.
>>So I know you were just participating in the BizOps Manifesto that launched recently. I just had the chance to talk with Jeffrey Hammond, and he unpacked this really interesting concept; I wanted to get your thoughts on spiritual co-location as really a necessity for BizOps to succeed in this unusual time in which we're living. What are your thoughts on spiritual co-location in terms of cultural change versus adoption of technologies?
>>Yeah, it's quite interesting, right? When we think about the major impediments to a DevOps implementation, it's all about culture. For the last 20 years we've been talking about silos, about the need for these teams to align. In many ways, it's not so much about these teams aligning as about being in the same boat, rowing in the same direction. It's really about fusing those teams around a common purpose, a common objective. So to me, this is really about changing the culture so that people start to look at OKRs as the key objectives that drive the entire team. What that means in practice is that we need to change a lot of behaviors. It's not about hierarchy, it's not about roles; it's about who can do what and when, and driving a bias toward action. It also means — especially in these difficult times, when it becomes very hard to drive collaboration between these teams — that there's a significant role tools can play in providing continuous feedback across teams, to enable that spiritual co-location.
>>Well, you talked about culture. We're so used to talking about DevOps with respect to velocity; it's all about speed. But this time everything changed so quickly, and going from physical spaces to everybody being remote really is very different; you can't replicate it digitally. But there are collaboration tools that can be essential in helping that cultural shift, right?
>>Yeah. In 2020 we tend to talk about collaboration in a very mundane way: of course we can use Zoom, we can all get into the same room. But the point, I think, when Jeff says spiritual co-location, is really about whether we all share the same objective. Take, for instance, our pipeline.
When we talk about DevOps, we probably all start by thinking about the continuous delivery pipeline that drives the automation and orchestration across the team. But thinking about a pipeline, at the end of the day, it's all about the mean time to feedback for these teams. If I'm a developer and I commit code, how long does it take for that code to be processed through the pipeline, and how quickly can I get feedback? If I'm a finance person who is funding a product or a project, what is my mean time to feedback?
>>So when we think about the pipeline, what's been really inspiring to me over the last year or so is that there's much more adoption of the DORA metrics, and much more focus on value stream management. And to me, when we talk about collaboration, that's really the balance: how do you provide feedback to the different stakeholders across the life cycle in a very timely manner? That's what we need to get to with this notion of collaboration. It's not so much about people being in the same physical space. It's about: when I check in code, does the system automatically identify what I'm about to break? If I'm about to release some change, how can the system help me reduce my change failure rate, because it's able to predict that an issue was introduced into the application or work product? So I think there's a great role for technology, and for AI and machine learning, to provide that new level of collaboration.
>>So we'll get to AI in a second, but I'm curious: what are some of the metrics you think really matter right now, as organizations are still in some form of transformation to this almost 100% remote workforce?
>>I'll say first that I'm not a big fan of metrics in themselves. The reason is that you can look at a change failure rate, or a lead time, or a cycle time, and those are interesting metrics — the trend on a metric is absolutely critical — but what's more important is getting to the root cause: what is causing that metric to degrade or improve over time? So I'm much more interested — we at Broadcom are much more interested — in understanding the patterns that contribute to those metrics. I'll give you a very mundane example. We know that cycle time is heavily influenced by organizational boundaries. We talk a lot about silos, and we've worked with many of our customers doing value stream mapping; oftentimes what you see is that the boundaries of your organization create a lot of idle time. So to me it's less about the metrics — the DORA metrics are a pretty valid set of metrics — and far more about understanding the anti-patterns: the things we can detect through the data that are actually affecting those metrics. Over the last ten or twenty years we've learned a lot about the anti-patterns within our large enterprise customers, and there are plenty of them.
>>What are some of the things you're seeing now with respect to patterns that have developed over the last seven to eight months?
>>So I think the two areas that are clearly evolving very quickly are these. On the front end of the life cycle, DevOps is more and more embracing value stream management and value stream mapping. What's interesting is that in many ways the product is becoming the new silo. The notion of a product is very difficult by itself to define; people are starting to recognize that a value stream is not its own little island — that in reality, when I define a product, that product oftentimes has dependencies on other products, and in fact you're looking at a network of value streams, if you will. So even there, there's a new set of anti-patterns, where products are defined as a set of OKRs, they have interdependencies, and you end up with a new set of silos. On the other end is the key movement toward SRE, where I think there is a cultural clash. While the DevOps side is very much embracing this notion of OKRs and value stream mapping and value stream management,
>>on the other end you have the IT operations teams, who still think in business services. They think about configuration items, about infrastructure. So it's not uncommon to see teams where the operations side is still thinking about tens or hundreds of thousands of business services. And there's this boundary where, while SRE is being put in place and there's lots of thinking about what kinds of metrics can be defined, I think — going back to culture — there's a lot of cultural evolution still required for the operations teams.
>>And that's a hard thing. Cultural transformation in any industry, pandemic or not, is a challenging thing. You talked about AI and automation a few minutes ago. How do you think those technologies can be leveraged by DevOps leaders to influence their success and their ability to collaborate, maybe see eye to eye with the SREs?
>>Yeah. So even for myself, as the leader of a 1,500-person organization, there are a number of things I don't see on a daily basis. The technologies we have at our disposal today on the AI side are able to mine a lot of data and expose issues that, as leaders, we may not be aware of. Some of these are pretty easy to understand. We all think we're agile, and yet when you start to analyze, for instance, the work in progress during a sprint, the data can show that maybe the teams are over-committed, that there is too much work in progress.
>>You can start to identify interdependencies — from a technology or a people point of view — which were hidden. You can start to understand that maybe the change failure rate is degrading. So I believe there's a fundamental role to be played by the tools: to expose these anti-patterns, to make these things visible to the teams, and even to make it possible to compare teams. One of the things that's amazing is that we now have access to tons of data, not just from a given customer but across a large number of customers.
And so we can start to compare how all of these teams operate: what's working and what's not working.
>>Thoughts on AI and automation as a facilitator of spiritual co-location?
>>Yeah, absolutely. The problem we all face is the unknown: the velocity, volume, and variety of the data. Every day we don't necessarily, completely appreciate what the impact of our actions is. AI can really act as a safety net that enables us to understand the impact of our actions. So the ability to be informed in a timely manner, to interact with people on the basis of data, and to collaborate on the data in a timely manner, is a very powerful enabler in that respect. I've seen, countless times — for instance at the SRE boundary — that being able to surface the quality attributes of an incoming release, exposing that to an operations person or an SRE, and enabling that collaboration dialogue through data, is a very, very powerful tool.
>>Do you have any recommendations for how teams — the SRE folks, the DevOps folks — can use AI and automation in the right ways to be successful, rather than in ways that turn out to be nonproductive?
>>Yeah. So to me, part of the question is that when we talk about data, there are different ways you can use it. You can do a lot of analytics, predictive analytics. There's a tendency to look at a specific KPI — say an availability KPI, or a change failure rate — do a regression analysis, and project what's going to happen in the future. To me, that's a bad approach. The reason I fundamentally think it's a bad approach is because of the systems we're dealing with: the way we develop software is a non-linear kind of system. Software development is not linear in nature. So focusing on projecting metrics forward is probably the worst approach. On the other hand,
>>if you start to actually understand, at a more granular level, which things are contributing to those metrics — if you start to understand, for instance, that whenever you touch a specific part of the application, it translates into production issues — that's different. We actually have a customer who identified that over 50% of their unplanned outages were related to specific components in their architecture; whenever those components were changed, it resulted in unplanned outages. So if you can establish causality — cause and effect between data across the life cycle — I think that's the right way to use AI. For me it's much more of a classification problem: what are the classes of problems that exist and affect things? That, as opposed to predictive analytics, which I don't think is as powerful.
>>So I mentioned at the beginning of our conversation that you just came off the BizOps Manifesto; you're one of the authors of it. I want to get your thoughts on DevOps and BizOps overlapping and complementing each other. From the BizOps perspective, what does it mean for the future of DevOps?
Yeah, so it's interesting, right? If you think about DevOps, there's no founding document. We can refer to The Phoenix Project — there's a set of books that have been written — but in many ways there's no clear definition of what DevOps is. If you go to the DevOps Institute today, you'll see specific trainings, for instance, on value stream management and on SRE. So in many ways the problem we have as an industry is that there are sets of practices — agile, DevOps, SRE, value stream management, ITIL — and we all basically talk about the same things: we all talk about essentially accelerating the mean time to feedback. Yet we don't have a common framework to talk about it. The other key thing is that we had to wait for Gene Kim's latest book to really start to get into the business aspect.
>>It took value stream mapping starting to emerge for us, as an industry, to start thinking about our connection with the business: what's our purpose? Ultimately it's all about driving business outcomes. So to me, BizOps is really about putting a lens on this critical element: that it's not business and IT — we in fact need to fuse business and IT — and that IT needs to transform itself to recognize that it's a value generator, not a cost center. So the relationship, to me, is that BizOps provides an overarching framework that sets the context for why IT exists, and for the core values and principles it needs to embrace to change from a cost center to a value center. And then we need to use this as a way to unify the core practices, whether it's agile, DevOps, value stream mapping, or SRE. So over time, my hope is that we start to harmonize a lot of our practices, language, and cultural elements.
>>Last question, Serge, in the last few seconds we have here, talking about the relationship between BizOps and DevOps: as DevOps evolves, and as you talk to customers, what are some of your insights? What should our audience keep their eyes on in the next six to twelve months?
>>To me, the key challenge for the industry is this: we're seeing a very rapid shift from project to product. What we don't want to do is recreate new, hard silos. That's one of the big changes we need to be really careful about, because ultimately it is about culture. It's not about how we segment the work, and with a true culture we can overcome silos. So back to Jeffrey's concept of spiritual co-location, I think it's really about that too: focusing on the business outcomes, aligning on driving engagement across the teams, but not creating a new set of silos which, instead of being vertical, are horizontal products.
>>Great advice, Serge: looking at culture as a way of addressing and helping to reduce those challenges. We thank you so much for sharing your insights and your time at today's DevOps Virtual Forum.
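Serge refers to the DORA measures — lead time, change failure rate, mean time to restore — throughout this segment. As a concrete reference point, here is a hedged Python sketch of how those numbers fall out of simple deployment records; the Deployment record layout and field names are assumptions made for illustration, not Broadcom's data model.

```python
# Toy DORA-style summary over a list of deployment records (illustrative schema).
from dataclasses import dataclass
from datetime import datetime
from statistics import mean
from typing import List, Optional

@dataclass
class Deployment:
    committed: datetime                  # when the change was committed
    deployed: datetime                   # when it reached production
    failed: bool = False                 # did it cause a production incident?
    restored: Optional[datetime] = None  # when service was restored, if it failed

def dora_summary(deploys: List[Deployment]) -> dict:
    """Summarize lead time, change failure rate, and mean time to restore."""
    lead_hours = [(d.deployed - d.committed).total_seconds() / 3600 for d in deploys]
    failures = [d for d in deploys if d.failed]
    restore_hours = [(d.restored - d.deployed).total_seconds() / 3600
                     for d in failures if d.restored]
    return {
        "deployments": len(deploys),
        "mean_lead_time_h": round(mean(lead_hours), 1),
        "change_failure_rate": round(len(failures) / len(deploys), 2),
        "mean_time_to_restore_h": round(mean(restore_hours), 1) if restore_hours else None,
    }

# Example: two deployments, one of which failed and was restored two hours later
d1 = Deployment(datetime(2020, 11, 2, 9), datetime(2020, 11, 3, 9))
d2 = Deployment(datetime(2020, 11, 4, 9), datetime(2020, 11, 6, 9),
                failed=True, restored=datetime(2020, 11, 6, 11))
print(dora_summary([d1, d2]))
# {'deployments': 2, 'mean_lead_time_h': 36.0, 'change_failure_rate': 0.5, 'mean_time_to_restore_h': 2.0}
```

Per Serge's own caveat, numbers like these are only a starting point: the trend, and the anti-patterns driving it, matter more than any benchmark value.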
>>Thank you. Thanks for your time.
>>We'll be right back.
>>From around the globe, it's theCUBE, with digital coverage of DevOps Virtual Forum, brought to you by Broadcom.
>>Welcome to Broadcom's DevOps Virtual Forum. I'm Lisa Martin, and I'm joined by another Martin, very socially distanced from me, coming all the way from Birmingham, England: Glynn Martin, the head of QA transformation at BT. Glynn, it's great to have you on the program.
>>Thank you, Lisa. I'm looking forward to it.
>>As we said before we went live: two Martins for the price of one in one segment. So this is going to be an interesting segment. What we're going to do is have Glynn give us a really deep, inside-out view of DevOps from an evolution perspective. So Glynn, let's start. Transformation is at the heart of what you do. It's obviously been a very transformative year. How have the events of this year affected the transformation that you are still responsible for driving?
>>Yeah, thank you, Lisa. I mean, yeah, it has been a difficult year.
What are some of the things that you've seen there as, as needing to get, as you said, get things right, but done so quickly to support essential businesses, essential workers. How have you seen that cultural shift? >>Yeah, I think, you know, before test teams for themselves at this part of the software delivery cycle, um, and actually now really our customers are expecting that quality and to deliver for our customers what they want, quality has to be ingrained throughout the life cycle. Obviously, you know, there's lots of buzzwords like shift left. Um, how do we do shift left testing? Um, but for me, that's really instilling quality and given capabilities shared capabilities throughout the life cycle that drive automation, drive improvements. I always say that, you know, you're only as good as your lowest common denominator. And one thing that we were finding on our dev ops journey was that we  would be trying to do certain things quick, we had automated build, automated tests. But if we were taking a weeks to create test scripts, or we were taking weeks to manually craft data, and even then when we had taken so long to do it, that the coverage was quite poor and that led to lots of defects later on in the life cycle, or even in our production environment, we just couldn't afford to do that. >>And actually, focusing on continuous testing over the last nine to 12 months has really given us the ability to deliver quickly across the whole life cycle. And therefore actually go from doing a kind of semi agile kind of thing, where we did the user stories, we did a few of the kind of agile ceremonies, but we weren't really deploying any quicker into production because our stakeholders were scared that we didn't have the same control that we had when we had more waterfall releases. And, you know, when we didn't think of ourselves. So we've done a lot of work on every aspect, um, especially from a testing point of view, every aspect of every activity, rather than just looking at automated tests, you know, whether it is actually creating the test in the first place, whether it's doing security testing earlier in the lot and performance testing in the life cycle, et cetera. So, yeah,  it's been a real key thing that for CT, for us to drive DevOps, >>Talk to me a little bit about your team. What are some of the shifts in terms of expectations that you're experiencing and how your team interacts with the internal folks from pipeline through life cycle? >>Yeah, we've done a lot of work on this. Um, you know, there's a thing that I think people will probably call it a customer experience gap, and it reminds me of a Gilbert cartoon, where we start with the requirements here and you're almost like a Chinese whisper effects and what we deliver is completely different. So we think the testing team or the delivery teams, um, know in our teeth has done a great job. This is what it said in the acceptance criteria, but then our customers are saying, well, actually that's not working this isn't working and there's this kind of gap. Um, we had a great launch this year of agile requirements, it's one of the Broadcom tools. And that was the first time in, ever since I remember actually working within BT, I had customers saying to me, wow, you know, we want more of this. >>We want more projects to have extra requirements design on it because it allowed us to actually work with the business collaboratively. 
I mean, we talk about collaboration, but how do we actually do that, and have something that both the business and the technical people can understand? We've actually been working with the business, using Agile Requirements Designer, to really look at what the requirements are, tease out requirements we hadn't even thought of, and make sure we've got high levels of test coverage. And for what we actually deliver at the end of it, not only have we been able to generate tests more quickly, but we've got much higher test coverage, and we can also — using the AI within the tool and some of the other pipeline tools — choose the right tests and take a risk-based testing approach. So that's been a great launch this year, but it's just the start of many things we're doing.
>>Well, what I hear in that, Glynn, is a lot of positives that have come out of a very challenging situation — and I like that perspective. This is a very challenging time for everybody in the world, but it sounds like, from a collaboration perspective — and we talk about that a lot as critical with DevOps — you were able to overcome those challenges pretty quickly. What other challenges did you face and figure out quickly enough to be able to pivot so fast?
>>You talked about culture. BT is like most companies, so it's very siloed; we're still trying to become closer as a company. So I think there are a lot of challenges around how you integrate with other tools, and with the various different technologies. At BT, we have 58 different IT stacks. That's not systems — that's stacks — and each of those stacks can have hundreds of systems. We've got a drive at the moment, a simplification program, where we're trying to reduce that number to 14 stacks, and even then there'll be complexity behind the scenes that will challenge us more and more as we go forward. How do we hide that from our users? As an IT organization, how do we make ourselves leaner, so that even when we've still got some of that legacy — and we'll never fully get rid of it; that's the trade-off we have to make — we can deal with it, hide it from our users, drive those programs, and, as I say, accelerate change and take that waste and legacy cost out of our business? The other thing as well — and I'm sure telecoms is no different to insurance or finance here — is that when you take the number of products we have and combine them, the permutations run to the tens and hundreds of thousands. So we as a business are trying to simplify, and we're trying to do that in an agile way.
>>Before, we hadn't really tried to do agile in the proper way — to actually work at pace and really deliver value. So what we're looking at more and more at the moment is being value-focused. We used to deliver changes into production because someone had a great idea, or it was a great idea nine or twelve months ago, but then we'd deploy it, look at the usage of that product or application, and find it hadn't been used for six months. Given the cost pressures of the last 12 months,
we certainly haven't got room for that kind of waste, or for not really understanding the value of the changes we're making. So I think that's the most important thing at the moment: really taking that waste out. There's lots of focus on things like flow management — which bits of our process are actually taking too long. We've started on that journey, but we've got a hell of a long way to go, and that involves looking at every aspect of the software delivery cycle.
>>Going from 58 IT stacks down to 14, or whatever it's going to be — simplifying sounds magical to everybody, but it's a big challenge. What are some of the core technology capabilities you see as essential for enabling that, with this new way that you're working?
>>Yeah. I mean, we've started on a continuous testing journey, and I think that's just the start. As I say, from a QA point of view we're looking at every aspect of what we do, and we've also started to branch into things like AIOps and, really, the full life cycle. That's just a stepping stone: I think autonomics is the way forward. All of this stuff that happens — monitoring, watching what's happening in production — how do we feed that back? How do you get to a point where we think about a change, and then suddenly it's in production, safely — or, if it's not going in safely, it's automatically backing out? It's a very, very long journey. But in a world where the pace is ever-increasing, and with the pressures on the team at the moment, where we're being asked to do things more efficiently and as lean as possible, we need to think about every part of the process and how we put the stepping stones in place to lead us to a more automated future.
>>Do you feel that planned outcomes are starting to align with what's delivered, given this massive shift you're experiencing?
>>I think it's starting to. As I say, as we move to more of a value-based approach, and with things like flow management, I think that will become ever more important. People are certainly realizing that teams need to work together — that closeness between business and IT — especially as we go to more SaaS-based and low-code solutions; there's not such a gap anymore, and actually some of our business partners are expected to be much more tech-savvy. So we have to appreciate what IT's role is: how do we provide the capabilities, how do we become more of a center of excellence, rather than doing mounds and mounds of work — and for me, from a testing point of view, mounds and mounds of testing? How do we automate that? How do we actually generate tests instead of hand-crafting them? I think that's the challenge going forward.
>>As we look forward, what are some of the things you would like to see implemented or deployed in the next, say, six to twelve months, as we hopefully round a corner with this pandemic?
Yeah, certainly for where we are as a company from a QA perspective, we want to build on the bits we do well. We've started creating continuous delivery and DevOps pipelines, but there are still manual aspects of that. So I've challenged my team with: how do we make it a fully automated journey? If I put a requirement in Jira or Rally or wherever it is, can I then click a button and, with either zero-touch or one-touch, put that into production — and have confidence that it has been done safely, that it works, and know what happens if it doesn't work? That's what our concentration is about over the next few months. But it's also about decision-making: how do you actually make those value judgments?
Um, is we've, we've tended to be much more successful than trying to impose something and say, well, here's the technology that they don't quite understand. It doesn't really understand how it kind of resonates with their problems. So I think that's the heart of it. It's really about, you know, getting, looking at the data, looking at the processes, looking at where the kind of waste is. >>And then actually then looking at the right solutions. Then, as I say, continuous testing is massive for us. We've also got a good relationship with Apple towards looking at visual AI. And actually there's a common theme through that. And I mean, AI is becoming more and more prevalent. And I know, you know, sometimes what is AI and people have kind of this semantics of, is it true AI or not, but it's certainly, you know, AI machine learning is becoming more and more prevalent in the way that we work. And it's allowing us to be much more effective, be quicker in what we do and be more accurate. And, you know, whether it's finding defects running the right tests or, um, you know, being able to anticipate problems before they're happening in a production environment. >>Well, thank you so much for giving us this sort of insight outlook at dev ops sharing the successes that you're having, taking those challenges, converting them to opportunities and forgiving folks who might be in your shoes, or maybe slightly behind advice enter. They appreciate it. We appreciate your time. >>Well, it's been an absolute pleasure, really. Thank you for inviting me. I have a extremely enjoyed it. So thank you ever so much. >>Excellent. Me too. I've learned a lot for Glenn Martin. I'm Lisa Martin. You're watching the cube >>Driving revenue today means getting better, more valuable software features into the hands of your customers. If you don't do it quickly, your competitors as well, but going faster without quality creates risks that can damage your brand destroy customer loyalty and cost millions to fix dev ops from Broadcom is a complete solution for balancing speed and risk, allowing you to accelerate the flow of value while minimizing the risk and severity of critical issues with Broadcom quality becomes integrated across the entire DevOps pipeline from planning to production, actionable insights, including our unique readiness score, provide a three 60 degree view of software quality giving you visibility into potential issues before they become disasters. Dev ops leaders can manage these risks with tools like Canary deployments tested on a small subset of users, or immediately roll back to limit the impact of defects for subsequent cycles. Dev ops from Broadcom makes innovation improvement easier with integrated planning and continuous testing tools that accelerate the flow of value product requirements are used to automatically generate tests to ensure complete quality coverage and tests are easily updated. >>As requirements change developers can perform unit testing without ever leaving their preferred environment, improving efficiency and productivity for the ultimate in shift left testing the platform also integrates virtual services and test data on demand. Eliminating two common roadblocks to fast and complete continuous testing. When software is ready for the CIC CD pipeline, only DevOps from Broadcom uses AI to prioritize the most critical and relevant tests dramatically improving feedback speed with no decrease in quality. This release is ready to go wherever you are in your DevOps journey. 
Broadcom helps maximize innovation velocity while managing risk. So you can deploy ideas into production faster and release with more confidence from around the globe. It's the queue with digital coverage of dev ops virtual forum brought to you by Broadcom. >>Hi guys. Welcome back. So we have discussed the current state and the near future state of dev ops and how it's going to evolve from three unique perspectives. In this last segment, we're going to open up the floor and see if we can come to a shared understanding of where dev ops needs to go in order to be successful next year. So our guests today are, you've seen them all before Jeffrey Hammond is here. The VP and principal analyst serving CIO is at Forester. We've also Serge Lucio, the GM of Broadcom's enterprise software division and Glenn Martin, the head of QA transformation at BT guys. Welcome back. Great to have you all three together >>To be here. >>All right. So we're very, we're all very socially distanced as we've talked about before. Great to have this conversation. So let's, let's start with one of the topics that we kicked off the forum with Jeff. We're going to start with you spiritual co-location that's a really interesting topic that we've we've uncovered, but how much of the challenge is truly cultural and what can we solve through technology? Jeff, we'll start with you then search then Glen Jeff, take it away. >>Yeah, I think fundamentally you can have all the technology in the world and if you don't make the right investments in the cultural practices in your development organization, you still won't be effective. Um, almost 10 years ago, I wrote a piece, um, where I did a bunch of research around what made high-performance teams, software delivery teams, high performance. And one of the things that came out as part of that was that these teams have a high level of autonomy. And that's one of the things that you see coming out of the agile manifesto. Let's take that to today where developers are on their own in their own offices. If you've got teams where the team itself had a high level of autonomy, um, and they know how to work, they can make decisions. They can move forward. They're not waiting for management to tell them what to do. >>And so what we have seen is that organizations that embraced autonomy, uh, and got their teams in the right place and their teams had the information that they needed to make the right decisions have actually been able to operate pretty well, even as they've been remote. And it's turned out to be things like, well, how do we actually push the software that we've created into production that would become the challenge is not, are we writing the right software? And that's why I think the term spiritual co-location is so important because even though we may be physically distant, we're on the same plane, we're connected from a, from, from a, a shared purpose. Um, you know, surgeon, I worked together a long, long time ago. So it's been what almost 15, 16 years since we were at the same place. And yet I would say there's probably still a certain level of spiritual co-location between us, uh, because of the shared purposes that we've had in the past and what we've seen in the industry. And that's a really powerful tool, uh, to build on. 
So what role do tools play as part of that? To the extent that tools make information available to build shared purpose on, to the extent that they enable communication so we can build that spiritual co-location, and to the extent that they reinforce the culture we want to put in place, they can be incredibly valuable — especially when we don't have the luxury of physical co-location.
>>That makes sense. And I shouldn't have said, in introducing this last segment, that we're all spiritually co-located — Serge, clearly you're still spiritually co-located with Jeff. Talk to me about your thoughts on spiritual co-location, the cultural impact, and how technology can move it forward.
>>Yeah. So I'm going to sound very similar to Jeff in that respect. I think it starts with a shared purpose, and with understanding how individuals and teams contribute to a business outcome. What is our shared goal or shared vision? What is it we're trying to achieve collectively? And keeping everyone aligned to that. So it really starts with that. Now, the big challenge over the last 20 years, especially in large organizations, has been the specialization of roles and functions. We all started to measure what we do on a daily basis using metrics that are oftentimes completely disconnected from the business outcome or purpose. We reverted to: what is my database uptime? What is my cycle time?
>>What we really should be doing as an industry is providing a lens for these different stakeholders to look at what they're doing in the context of those business outcomes. Probably one of my favorite experiences was at a large financial institution, where the two sides — quote-unquote development and operations — were staring at the same data: incoming changes, test execution results, code coverage, vulnerabilities, all of it linked at the release level. When you start to put these things in context and represent them in a way that these different stakeholders can look at through their different lenses, they can start to communicate, understand, and align around that common view or objective.
>>And Glynn, we talked a lot about transformation with you last time. What are your thoughts on spiritual co-location — the cultural part, the technology impact?
You know, we've seen within simplify, which is BTS flagship transformation program, where we're trying to, as it can, it says simplify the number of systems stacks that we have, the number of products that we have actually at the moment, we've got different value streams within that program who have got organizational silos. We were trying to rewrite, rewrite the wheel, um, who are still doing things manually. >>So in order to try and bring that consistency, we need the right tools that actually are at an enterprise grade, which can be flexible to work with in BT, which is such a complex and very dev, uh, different environments, depending on what area of BT you're in, whether it's a consumer, whether it's a mobile area, whether it's large global or government organizations, you know, we found that we need tools that can, um, drive that consistency, but also flex to Greenfield brownfield kind of technologies as well. So it's really important that as I say, for a number of different aspects, that you have the right partner, um, to drive the right culture, I've got the same vision, but also who have the tool sets to help you accelerate. They can't do that on their own, but they can help accelerate what it is you're trying to do in it. >>And a really good example of that is we're trying to shift left, which is probably a, quite a bit of a buzz phrase in their kind of testing world at the moment. But, you know, I could talk about things like continuous delivery direct to when a ball comes tools and it has many different features to it, but very simply on its own, it allows us to give the visibility of what the teams are doing. And once we have that visibility, then we can talk to the teams, um, around, you know, could they be doing better component testing? Could they be using some virtualized services here or there? And that's not even the main purpose of continuous delivery director, but it's just a reason that tools themselves can just give greater visibility of have much more intuitive and insightful conversations with other teams and reduce those organizational silos. >>Thanks, Ben. So we'd kind of sum it up, autonomy collaboration tools that facilitate that. So let's talk now about metrics from your perspectives. What are the metrics that matter? Jeff, >>I'm going to go right back to what Glenn said about data that provides visibility that enables us to, to make decisions, um, with shared purpose. And so business value has to be one of the first things that we look at. Um, how do we assess whether we have built something that is valuable, you know, that could be sales revenue, it could be net promoter score. Uh, if you're not selling what you've built, it could even be what the level of reuse is within your organization or other teams picking up the services, uh, that you've created. Um, one of the things that I've begun to see organizations do is to align value streams with customer journeys and then to align teams with those value streams. So that's one of the ways that you get to a shared purpose, cause we're all trying to deliver around that customer journey, the value with it. >>And we're all measured on that. Um, there are flow metrics which are really important. How long does it take us to get a new feature out from the time that we conceive it to the time that we can run our first experiments with it? There are quality metrics, um, you know, some of the classics or maybe things like defect, density, or meantime to response. 
One of my favorites came from a company called Ultimate Software, where they looked at the ratio of defects found in production to defects found in pre-production, and their developers were in fact measured on that ratio. It told them: guess what, quality is your job too, not just the test department's. The fourth area that I think is really important, in the current situation that we're in, is the level of engagement in your development organization. >>We used to joke that we measured this with the parking lot metric: how full was the parking lot at nine, and how full was it at five o'clock? I can't do that anymore, since we're not physically co-located, but what you can do is look at how folks are delivering. You can look at your metrics in your SCM environment. You can look at the relative rates of churn. You can look at things like: are our developers delivering over longer periods, earlier in the morning and later in the evening? Are they delivering on the weekends as well? Are those signs that we might be heading toward burnout, because folks are still running at sprint levels instead of marathon levels? So all of those in combination, business value, flow, engagement, and quality, I think form the backbone of any sort of metrics program. >>The second thing that I think you need to look at is what we are going to do with the data, and the philosophy behind the data is critical. Unfortunately, I see organizations that weaponize the data, and that's completely the wrong way to look at it. What you need to ask is: how is this data helping us to identify the blockers, the things that aren't allowing us to provide the right context for people to do the right thing? And then what do we do to remove those blockers, to make sure that we're giving these autonomous teams the context they need to do their job in a way that creates the most value for the customers?
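Two of the measures Jeff describes are easy to make concrete. The following is a minimal sketch, not anything the panelists' organizations actually run: it computes the production-to-pre-production defect ratio and an after-hours delivery share from two hypothetical CSV exports. The file names, column names, and the 8am-to-7pm window are all illustrative assumptions.

```python
import csv
from collections import Counter
from datetime import datetime

def defect_ratio(defects_csv="defects.csv"):
    """Ratio of defects found in production to defects found pre-production."""
    counts = Counter()
    with open(defects_csv, newline="") as f:
        for row in csv.DictReader(f):        # expects an 'environment' column,
            counts[row["environment"]] += 1  # e.g. 'production' or 'staging'
    pre_prod = sum(n for env, n in counts.items() if env != "production")
    return counts["production"] / pre_prod if pre_prod else float("inf")

def after_hours_share(commits_csv="commits.csv"):
    """Share of commits landed on weekends or outside 8am-7pm: a possible
    burnout signal to investigate, never a performance score."""
    total = off_hours = 0
    with open(commits_csv, newline="") as f:
        for row in csv.DictReader(f):        # expects an ISO-8601 'timestamp'
            ts = datetime.fromisoformat(row["timestamp"])
            total += 1
            if ts.weekday() >= 5 or not (8 <= ts.hour < 19):
                off_hours += 1
    return off_hours / total if total else 0.0

if __name__ == "__main__":
    print(f"prod/pre-prod defect ratio: {defect_ratio():.2f}")
    print(f"after-hours commit share: {after_hours_share():.1%}")
```

In keeping with Jeff's caution about weaponizing data, numbers like these are conversation starters for removing blockers, not individual scorecards.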
So there's a bit about release confidence, um, and some of the metrics around that and how, how healthy those releases are, and actually saying, you know, we spend a lot of money, um, um, an investment setting up our teams, training our teams, are we actually seeing them deliver more quickly and are we actually seeing them deliver more value quickly? So yeah, those are the two main things for me at the moment, but I think it's also about, you know, generally bringing it all together, the dev ops, you know, we've got the kind of value ops AI ops, how do we actually bring that together to so we can make quick decisions and making sure that we are, um, delivering the biggest bang for our buck, absolutely biggest bang for the buck, surge, your thoughts. >>Yeah. So I think we all agree, right? It starts with business metrics, flow metrics. Um, these are kind of the most important metrics. And ultimately, I mean, one of the things that's very common across a highly functional teams is engagements, right? When, when you see a team that's highly functioning, that's agile, that practices DevOps every day, they are highly engaged. Um, that that's, that's definitely true. Now the, you know, back to, I think, uh, Jeff's point on weaponization of metrics. One of the key challenges we see is that, um, organizations traditionally have been kind of, uh, you know, setting up benchmarks, right? So what is a good cycle time? What is a good lead time? What is a good meantime to repair? The, the problem is that this is very contextual, right? It varies. It's going to vary quite a bit, depending on the nature of application and system. >>And so one of the things that we really need to evolve, um, as an industry is to understand that it's not so much about those flow metrics is about our, these four metrics ultimately contribute to the business metric to the business outcome. So that's one thing. The second aspect, I think that's oftentimes misunderstood is that, you know, when you have a bad cycle time or, or, or what you perceive as being a buy cycle time or better quality, the problem is oftentimes like all, do you go and explore why, right. What is the root cause of this? And I think one of the key challenges is that we tend to focus a lot of time on metrics and not on the eye type patterns, which are pretty common across the industry. Um, you know, if you look at, for instance, things like lead time, for instance, it's very common that, uh, organizational boundaries are going to be a key contributor to badly time. >>And so I think that there is, you know, the only the metrics there is, I think a lot of work that we need to do in terms of classifying, descend type patterns, um, you know, back to you, Jeff, I think you're one of the cool offers of waterscrumfall as a, as, as a key pattern, the industry or anti-spatter. Um, but waterscrumfall right is a key one, right? And you will detect that through kind of a defect arrival rates. That's where that looks like an S-curve. And so I think it's beyond kind of the, the metrics is what do you do with those metrics? >>Right? I'll tell you a search. One of the things that is really interesting to me in that space is I think those of us had been in industry for a long time. We know the anti-patterns cause we've seen them in our career maybe in multiple times. And one of the things that I think you could see tooling do is perhaps provide some notification of anti-patterns based on the telemetry that comes in. 
>>That's right, insight is always helpful. All right, guys, I would like to get your final thoughts: the one thing that you believe our audience really needs to be on the lookout for and to put on their agendas for the next 12 months. Jeff, we'll go back to you. >>I would say look for the opportunities that this disruption presents, and there are a couple that I see. First of all, as we shift to remote-centric working, we're unlocking new pools of talent; it's possible to implement more geographic diversity. So look to that as part of your strategy. Number two, look for new types of tools. We've seen a lot of interest in, and usage of, low-code tools to very quickly develop applications; that's potentially part of a mainstream strategy as we go into 2021. Finally, make sure you embrace the idea that you are supporting creative workers, and that agile and DevOps are the peanut butter and chocolate to support creative workers with algorithmic capabilities. >>Peanut butter and chocolate. Glenn, where do we go from there? What's the one silver bullet that you think folks should be on the lookout for? >>I certainly agree that low code is next year; we'll see much more low code. We'd already started moving towards more of a SaaS-based world, but low code as well. Also, for me, we've still got one foot in the on-prem camp, and we'll be fully exploring what cloud means going into next year and exploiting its capabilities. But the last thing for me is how you really instill quality throughout the life cycle. When I heard the phrase water-scrum-fall, it made me shudder, because I know that's a problem; that's where we're at with some of our things at the moment, and we need to get beyond it. We need to be releasing changes more frequently into production, being a bit more brave, and having the confidence to do more testing in production and go straight to production itself. So expect to see much more of that next year. And I'm afraid I haven't got any food analogies. >>We all need some peanut butter and chocolate. All right, Serge, take us home. What's the nugget you think everyone needs to have on their agendas? >>That's interesting, right? A couple of days ago we got the latest State of DevOps report, and if you read through it, it's all about velocity, it's all about speed. We are still perceiving DevOps as being all about speed. So to me, the key advice is this: in order to create that kind of spiritual co-location, in order to foster engagement, we have to go back to what it is we're trying to do collectively. We have to tie everything back to the business outcome. It's absolutely imperative for organizations to start to map their value streams, to understand how they're delivering value, and to align everything they do, from their metrics to their delivery to their flow, to those outcomes.
Only with that, I think, are we going to be able to really start to align all these roles across organizations and drive not just speed, but business outcomes. >>All about business outcomes. I think the three of you could write a book together, so I'll give you that as food for thought. Thank you all so much for joining me today. I think this was an incredibly valuable, fruitful conversation, and we appreciate all of you taking the time to spiritually co-locate with us. Guys, thank you. >>Thank you, Lisa. >>Thank you. >>For Jeff Hammond, Serge Lucio, and Glenn Martin, I'm Lisa Martin. Thank you for watching the Broadcom DevOps Virtual Forum.

Published Date : Nov 18 2020


Breaking Analysis: Five Questions About Snowflake’s Pending IPO


 

>> From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR, this is "Breaking Analysis" with Dave Vellante. >> In June of this year, Snowflake filed a confidential document suggesting that it would do an IPO. Now, of course, everybody found out about it, and it carried a $20 billion valuation. So many in the community, the investment community and so forth, are excited about this IPO. It could be the hottest one of the year, and we're getting a number of questions from investors and practitioners and the entire Wikibon, ETR and CUBE community. So welcome, everybody. This is Dave Vellante, and this is "CUBE Insights" powered by ETR. In this breaking analysis, we're going to unpack five critical questions around Snowflake's pending IPO. And with me to discuss that is Erik Bradley. He's the Chief Engagement Strategist at ETR, and he's also the Managing Director of VENN. Erik, thanks for coming on, and great to see you as always. >> Great to see you too. Always enjoy being on the show. Thank you. >> Now, for those of you who don't know Erik, VENN is a roundtable that he hosts where he brings in CIOs, IT practitioners, CSOs and data experts, and they have an open and frank conversation. It's private to ETR clients, but they know who the individual is, what their role is, what their title is, et cetera, and it's kind of an ask-me-anything. I participated in one of them this past week. Outstanding. And we're going to share with you some of that. But let's bring up the agenda slide if we can. These are really some of the questions that we're getting from investors and others in the community; there are really five areas that we want to address. The first is: what's happening in this enterprise data warehouse marketplace? The second, kind of a 1A: what about the legacy EDW players like Oracle and Teradata and Netezza? The third question we get a lot is: can Snowflake compete with the big cloud players, Amazon, Google, Microsoft? I mean, they're right there in the thick of things. Then, what about that multi-cloud strategy? Is that viable? How much of a differentiator is it? And finally we get a lot of questions on the TAM, meaning the total available market. How big is that market? Does it justify the valuation for Snowflake? Now, Erik, you've been doing this for a while. You've run a couple of VENNs, you've been following this, and you've done some other work with Eagle Alpha. What's your initial takeaway from all this work that you've been doing? >> Yeah, sure. So my first take on Snowflake was about two and a half years ago. I actually hosted them for one of my VENN interviews, and my initial thought was: impressed. So impressed. They were talking at the time about their ability to make a multi-cloud strategy easy to use. Although I was impressed, I did not expect the growth, the hyper growth, that we have seen since. But looking at the company in its current iteration, I understand where the hype is coming from. I mean, it's a 12 and a half billion dollar private valuation in the last round, and the least confidential IPO (laughs) anyone's ever seen (Dave laughs), with a 15 to $20 billion valuation coming out, which is more than Teradata, Mongo and Cloudera combined. It's a great question. So obviously the success to this point is warranted, but we need to see what they're going to be able to do next.
So I think the agenda you laid out is a great one, and I'm looking forward to getting into some of those details. >> So let's start with what's happening in the marketplace, and let's pull up a slide that I very much love to use: the classic X-Y. On the vertical axis we show net score. And remember, folks, net score is an indicator of spending momentum. Every quarter, like clockwork, ETR does a survey where it asks people, essentially, "Are you spending more or less?" They subtract the "less" from the "more" and come up with a net score. It's more complicated than that, but like NPS, it's a very simple and reliable methodology. That's the vertical axis. The horizontal axis is what's called market share: pervasiveness within the data set, calculated as the number of mentions of the vendor divided by the number of mentions within that sector. And what we're showing here is the EDW sector, with a few companies pulled out that I want to talk about. So the big three, obviously: Microsoft, AWS and Google. You can see Microsoft has a huge presence, far to the right. AWS: very, very strong, a lot of Redshift in there, and they're pretty high on the vertical axis. And then Google: not as much share, but very solid, close to a 60% net score. And above all of them on the vertical axis is Snowflake, with a 77.5% net score. You can see them in the upper right there in the green: one of the highest, Erik, in the entire data set. So let's start with some initial comments on the big guys and Snowflake. Your thoughts? >> Sure. First of all, to comment on the data: what we're showing there is just the data warehousing sector, but Snowflake's net score is that high amongst the entire universe that we follow. Their data strength is unprecedented, and we have forward-looking spending intentions, so this bodes very well for them. Now, what you did say very accurately is that there's a difference between spending intentions and actual net revenue compared to AWS and Microsoft. No one's saying this is an apples-to-apples comparison when it comes to revenue, so we have to be very cognizant of that. There is domination (laughs), quite frankly, from AWS and from Azure. And Snowflake is a necessary component for them, not only to help facilitate multi-cloud, but look at what's happening right now in the US Congress, right? We have these tech leaders being grilled on their dominance, and one of the main concerns is the amount of data that they're collecting. So I think the environment is right to have another player like this. I think Snowflake really has a lot of longevity, and our data is supporting that. The commentary that we hear from our end users, the people that take the survey, is supporting that as well.
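For readers who want the net score arithmetic spelled out, here is a minimal sketch. The response categories and the sample are invented for illustration; ETR's actual survey uses more granular categories and a far larger universe.

```python
def net_score(responses):
    """Percent of respondents spending more, minus percent spending less."""
    more = sum(r == "more" for r in responses)
    less = sum(r == "less" for r in responses)
    return 100.0 * (more - less) / len(responses)

# A hypothetical 100-respondent sample skewed heavily toward 'more'
survey = ["more"] * 80 + ["flat"] * 17 + ["less"] * 3
print(f"net score: {net_score(survey):.1f}")  # net score: 77.0
```

Note how "flat" responses dilute the score without subtracting from it, which is why a very high net score implies both broad spending momentum and very few defectors.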
>> Okay, and let's stay on this X-Y slide for a moment. I want to pull out a couple of other comments here, because one of the questions we're asking is: whither the legacy EDW players? So we've got in here IBM and Oracle, you can see Teradata, and then Hortonworks and MapR. We're going to talk a little bit about Hortonworks, because it's now Cloudera, and a little bit about Hadoop and some of the data lakes. You can see they don't have nearly the net score momentum. Oracle obviously has a huge install base and is investing, quite frankly, in R&D and in Exadata, and it has its own cloud. So it's got a lock on its customers, and if it keeps investing and adding value, it's not going away. IBM with Netezza: there have really been some questions around their commitment to that base, and I know a lot of the folks in the VENNs that we've talked to, Erik, have said, "Well, we're replacing Netezza." Frank Slootman has been very vocal about going after Teradata. And then we're going to talk a little bit about the Hadoop space. But can you summarize your thoughts from your research and the commentary from your community? What's going on with the legacy guys? Are they cooked? Can they hang on? What's your take? >> Sure. We focus on this quite a bit, actually. I'm going to talk about it from the data perspective first, and then we'll go into some of the commentary from the panels; you even joined one yesterday, so you know it was touched upon. First, on the data side, what we're noticing and capturing is a widening bifurcation between the cloud-native and the legacy on-prem players. It is undeniable; there is nothing you can really refute. The data is concrete, and it is getting worse: that gap is getting wider and wider. Now, the one thing I will say is that nobody's going to rip out their legacy applications tomorrow; that takes years. So when you look at Teradata, their market cap is only 2 billion, 2.3 billion. How much revenue growth do they need to stay where they are? Not much, right? No one's expecting them to grow 20%, which is what you're seeing on the left side of that screen. So when you look at legacy versus cloud native, there is a very clear direction to what's happening. The one thing I would note from the data perspective is that if you switched from net score to adoptions, or looked at flat spending, you would suddenly see Oracle and Teradata move over to the left a little bit. Again, what I'm trying to say is: I don't think they're going to catch up, but I also don't think they're going away tomorrow. They have large install bases, they have relationships. Now, to get into what you were saying about each particular one. IBM: they shut down Netezza, and then they brought it back to life. How does that make you feel if you're the head of data architecture, or you're in DevOps and trying to build an application for a large company? I'm not going back to that; there's absolutely no way. Teradata, on the other hand, is known to be incredibly stable. They are known to just not fail. If you need to re-architect or do a migration, they work. Teradata also has a lot of compliance built in, so if you're in financials, if you're in a regulated business or industry, there are still some data sets you're not going to move up to the cloud. Whether it's PII compliance or financial reasons, some of that is still going to live on-prem, so Teradata still has a very good niche. And from what we're hearing in our panels, this is a direct quote, if you don't mind me looking off screen for one second, but it's a great one: "Teradata is the only one from the legacy camp who is putting up a fight and not giving up." Basically, from a CIO perspective, the rest of them aren't an option anymore, but Teradata is still fighting, and that's great to hear. They have their own data-as-a-service offering, and listen, they're a small market cap compared to these other companies we're talking about. But to summarize: the data is very clear.
There is a widening bifurcation between the two camps. I do not think legacy will catch up. I think all net new workloads are moving to data as a service, moving to cloud native, moving to hosted, but there are still going to be existing legacy on-prem applications supported by these older databases, and of those, Oracle and Teradata are still viable options. >> I totally agree with you, and my colleague David Floyd is actually quite high on Teradata Vantage, because he really does believe that a key component of the TAM, which we're going to talk about in a minute, must include the on-premises workloads. And Frank Slootman has been very clear: "We're not doing on-prem; we're not doing this halfway house." So that's an opportunity for companies like Teradata. Certainly Oracle I would put in the camp of putting up a fight. Vertica is another one: they're very small, but they're also battling it out from the old MPP world. But that's great. Let's go into some of the specifics, and bring up some of the specific commentary that we've curated from the roundtables. I'm going to go through these and then ask you to comment. The first one is just that people are obviously very excited about Snowflake. It's easy to use, the whole zero-to-Snowflake-in-90-minutes thing, and Snowflake is synonymous with cloud-native data warehousing. There are no equals. We heard that a lot from your VENN panelists. >> We certainly did. There was even more euphoria around Snowflake than I expected when we started hosting this series of data warehousing panels. And the particular gentleman who said that happens to be the global head of data architecture for a Fortune 100 financials company. You mentioned earlier that we did a report alongside Eagle Alpha, and we noticed that among Fortune 100 companies that are also using the big three public clouds, Snowflake is growing market share faster than anyone else. They are positioned in a way where, even if you're aligned with Azure, even if you're aligned with AWS, if you're a large company, they are gaining share right now. So that particular gentleman's comment was very interesting. He also said, "Snowflake is the one that championed the idea that data warehousing is not dead yet," to use that old Monty Python line: "I'm not dead yet." Back in the day, when Hadoop came along and the data lakes turned into data swamps, everyone said, "We don't need warehousing anymore." Well, that turned out to be a head fake, right? Hadoop was an interesting technology, but it's a complex technology, and it ended up not really working the way people wanted it to. I think Snowflake came in at an opportune time and said, "No, data warehousing isn't dead. We just have to separate the compute from the storage layer, and look at what that can do: it increases flexibility and security, and it gives you the ability to run across multiple clouds." So honestly, the commentary has been nothing but positive. We can get into some of the commentary about people thinking the competition is catching up to what they do, but there is no doubt that right now Snowflake is the name when it comes to data as a service. >> The other thing we heard a lot was that ETL is going to get completely disrupted, sort of embedded ETL. You heard one panelist say, "Well, it's interesting to see that guys like Informatica are talking about how fast they can run inside of Snowflake."
But Snowflake is making that easy. That data prep is sort of part of the package, and so that does not bode well for ETL vendors. >> It does not, right? ETL is a legacy of on-prem databases, and even when Hadoop came along, you still needed that extra layer to work with the data. But this is really, really disrupting them. Now, to Snowflake's credit, they partner well. All the ETL players are partnered with Snowflake and trying to play nice with them, but the writing's on the wall: as more and more of these applications and workloads move to the cloud, you don't need the ETL layer. Obviously that's going to affect Talend and Informatica the most. We had a recent comment, this was a CIO, who basically said, "The most telling thing about the ETL players right now is that every time you speak to them, all they talk about is how they work in a Snowflake architecture." That's the only thing they talk about right now, and he said that's very telling; he basically described being part of Snowflake as their existential identity. If they're not, they don't exist anymore. So it was interesting to have a philosophical comment like that brought up in one of my roundtables. But that's how important playing nice and finding a niche within this new data-as-a-service world is for ETL. To be quite honest, they might be going the same way of, "Okay, let's figure out our niche in the on-prem workloads that are still there." Over time we might see them as an M&A possibility, whether it's Snowflake or one of these new up-and-comers bringing them in and layering in the technology that's useful. But as a large-market-cap, standalone niche, I just don't know how long ETL is for this world. >> Yeah. I mean, you're right that, marketing aside, they're not really putting up a fight. >> No. >> But there really are some challenges there. Now, there were some contrarians in the panel, and they signaled some potential icebergs ahead. I guarantee you're going to see these in Snowflake's red herring when we actually get it; we're going to see all the risks. I'll mention two of the comments, and then we can talk about them. One: "Their engineering advantage will fade over time," essentially the argument that people are going to copycat, and we've seen that. And the other: "Hey, we might see some things similar to what happened to Hadoop," the public cloud players giving away these offerings at zero cost, since essentially the marginal cost of adding another service is near zero. So the cloud players will use their heft to compete. Your thoughts? >> Yeah. First of all, one of the reasons I love doing panels is that we had three gentlemen on this panel who had nothing but wonderful things to say, but you always get one. And this particular person is the CTO of a well-known online public travel agency; we'll put it that way. He said, "I'm going to be the contrarian here. I have seven different technologies from private companies that do the same thing that I'm evaluating." So that's the pressure from behind: the technology is going to catch up. Right now Snowflake has the best engineering, which, interestingly enough, they took a lot of from IBM and Teradata if you go back and look at it, and that was brought up in our panel as well. He said, "However, the engineering will catch up. It always does."
Now, from the other side, they're getting squeezed, because the big cloud players just say, "Hey, we can do this too. I can bundle it with all the other services I'm giving you, and I can squeeze you on price, pretty much give it away at cost." So I do think that's a very valid concern. When you come out with a $20 billion IPO valuation, you need to warrant it. And when you see competitive pressure from both sides, from private emerging technologies and from the more dominant public cloud players, you're going to get squeezed a little bit. And if pricing gets squeezed, it's going to be very, very important for Snowflake to continue to innovate. That comment you brought up about possibly being the next Cloudera was certainly the best sound bite I got, and I'm going to use it as clickbait in future articles, because I think everyone who starts looking at buying Snowflake stock and sees that is going to need to take a look. But I would take it with a grain of salt. I don't think that's happening anytime soon. What that particular CTO was referring to is that if you don't innovate, the technology itself becomes commoditized, and he believes this technology will become commoditized. So Snowflake has to continue to innovate. They have to find other layers to bring in, whether that's through the massive war chest of cash they're about to have and M&A, whether that's buying an analytics company, whether that's buying an ETL layer: finding a way to provide more value as they move forward is going to be very important to justify this valuation. >> And I want to comment on that. The Clouderas, Hortonworks, MapRs, Hadoop, et cetera: there are dramatic differences, obviously. That whole space was so hard, very difficult to stand up. You needed science-project guys in lab coats to do it; it was very services intensive. As well, companies like Cloudera had to fund all these open source projects, and it really squeezed their R&D. I think Snowflake is much more focused, and you mentioned some of the background of their engineers, of course Oracle guys as well. However, you will see Amazon trot out a ton of customers using their RA3 managed storage and their flash, I think it's the DC2 piece. They have a ton of action in the marketplace because it's just so easy. It's interesting: one of the comments, you asked about this yesterday, was with regard to separating compute from storage, which of course Snowflake basically invented; it was one of their claims to fame. The comment was that what AWS has done to separate compute from storage for Redshift is largely a bolt-on, which I thought was an interesting comment. I've had some others too. My friend George Gilbert said, "Hey, despite claims to the contrary, AWS still hasn't separated storage from compute. What they have is really primitive." We've got to dig into that some more, but you're seeing data points that suggest there's copycatting going on. It may not be as functional, but at the same time, Erik, like I was saying, good enough may be good enough in this space. >> Yeah, and especially with the enterprise, right? You see what Microsoft has done. Their technology is not as good as all the niche players', but it's good enough, and I already have a Microsoft license, so (laughs) why am I going to move off of it?
But I want to get back to the comment you mentioned about that particular gentleman who said Redshift's separation is really more of a bolt-on than a true offering. It's interesting, because I know who these people are behind the scenes, and he has a very strong relationship with AWS. So it was interesting to me that in the panel yesterday he said he switched from Redshift to Snowflake because of that and some other functionality issues. So there is no doubt from the end users who are buying this. And he's, again, at a Fortune 100 financials organization, not the same one we mentioned, a different one, but again a Fortune 100, well-known financials organization, and he switched from AWS to Snowflake. So there is no doubt that right now they have the technological lead. When you look at our ETR data platform, we have that adoption reasoning slide that you show, and the number one reason people are adopting Snowflake is their feature set and technological lead. They have that lead now; they have to maintain it. Now, another thing to think about is that when you have large data sets like this, going forward you need machine learning capabilities layered in, right? So they need to make sure they're playing nicely with that. You could go open source with the Apache suite, but Google is doing so well with BigQuery and so well with their machine learning aspects. And although they don't speak enterprise well, they don't sell to the enterprise well, that's changing. I think they're somebody to really keep an eye on, because the machine learning capabilities layered into BigQuery are impressive. Now, of course, Microsoft Azure has Databricks; they're layering that in. But this is an area where I think you're going to see maybe what's next: you have to have machine learning capabilities out of the box if you're going to do data as a service. Right now Snowflake doesn't really have that; some of the others do. One of my guest panelists basically told me that because of that, they ended up going with Google BigQuery, because he was able to run a machine learning algorithm within hours of getting set up. Within hours. And he said that kind of out-of-the-box capability is what people are going to have to use going forward. So that's another thing we should dive into a little bit more.
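To give a sense of why that "within hours" claim is plausible, here is a hedged sketch of BigQuery's in-database ML workflow: training and scoring a model with plain SQL through the Python client, with no separate ML infrastructure to stand up. The project, dataset, table, and column names are hypothetical placeholders, not from the panelist's environment.

```python
from google.cloud import bigquery

client = bigquery.Client()  # assumes credentials are already configured

# Train a logistic regression model in place with BigQuery ML;
# the data never leaves the warehouse.
client.query("""
    CREATE OR REPLACE MODEL `my_project.analytics.churn_model`
    OPTIONS (model_type = 'logistic_reg', input_label_cols = ['churned']) AS
    SELECT plan_tier, monthly_spend, support_tickets, churned
    FROM `my_project.analytics.customers`
""").result()

# Score new rows with the trained model, again as plain SQL.
rows = client.query("""
    SELECT customer_id, predicted_churned
    FROM ML.PREDICT(MODEL `my_project.analytics.churn_model`,
                    TABLE `my_project.analytics.new_customers`)
""").result()

for row in rows:
    print(row.customer_id, row.predicted_churned)
```

That end-to-end loop, from warehouse table to scored predictions without provisioning any ML infrastructure, is the out-of-the-box capability Erik is describing.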
>> Let's get into that right now. Let's bring up the next slide, which shows net score, and remember, this is spending momentum, across the major cloud players plus Snowflake. So you've got Snowflake on the left, then Google, AWS and Microsoft, across three survey timeframes: last October; April 2020, right in the middle of the pandemic; and the most recent survey, just taken this month in July. You can see Snowflake with very, very high scores, actually improving from the last October survey. Google: lower net scores, but still very strong; I want to come back to that. AWS is dipping a little bit. I think what's happening here, and we saw this yesterday with AWS's results, 30% growth, awesome, a slight miss on the revenue side, but massive nonetheless, is that they're exposed to so many industries, and some of those industries have been pretty hard hit. Microsoft: pretty interesting, a little softness there. But one of the things I wanted to pick up on, Erik, when you were talking about Google and BigQuery and its ML out of the box, is what we heard from a lot of the VENN participants. There's no question that Google, technically, I would say is one of Snowflake's biggest competitors, because it's cloud native. Remember >> Yep. >> AWS did a one-time license deal with ParAccel and had to sort of refactor the thing to be cloud native. And of course we know what's happening with Microsoft: they were basically on-prem, then they put stuff in the cloud, all the updates happen in the cloud, and then they push to on-prem. They have what Frank Slootman calls that halfway house. But BigQuery, no question, is technically very, very solid. And again, you see Snowflake right now, anyway, outpacing these guys in terms of momentum.
Interesting to see Google showing momentum on new adoptions, AWS down on new adoptions. And again, exposed to a lot of industries that have been hard hit. And Microsoft actually quite low on new adoption. So this is very impressive for Snowflake. And I want to talk about the multi-cloud strategy now Erik. This came up a lot. The VENN participants who are sort of fans of Snowflake said three things: It was really the flexibility, the security which is really interesting to me. And a lot of that had to do with the flexibility. The ability to easily set up roles and not have to waste a lot of time wrangling. And then the third was multi-cloud. And that was really something that came through heavily in the VENN. Didn't it? >> It really did. And again, I think it just comes down to, I don't think you can ever overstate how afraid these guys are of vendor lock-in. They can't have it. They don't want it. And it's best practice to make sure your sensitive information is being kind of spread out a little bit. We all know that people don't trust Bezos. So if you're in certain industries, you're not going to use AWS at all, right? So yeah, this ability to have your data portability through multi-cloud is the number one reason I think people start looking at Snowflake. And to go to your point about the adoptions, it's very telling and it bodes well for them going forward. Most of the things that we're seeing right now are net new workloads. So let's go again back to the legacy side that we were talking about, the Teradatas, IBMs, Oracles. They still have the monolithic applications and the data that needs to support that, right? Like an old ERP type of thing. But anyone who's now building a new application, bringing something new to market, it's all net new workloads. There is no net new workload that is going to go to SAP or IBM. It's not going to happen. The net new workloads are going to the cloud. And that's why when you switch from net score to adoption, you see Snowflake really stand out because this is about new adoption for net new workloads. And that's really where they're driving everything. So I would just say that as this continues, as data as a service continues, I think Snowflake's only going to gain more and more share for all the reasons you stated. Now get back to your comment about security. I was shocked by that. I really was. I did not expect these guys to say, "Oh, no. Snowflake enterprise security not a concern." So two panels ago, a gentleman from a fortune 100 financials said, "Listen, it's very difficult to get us to sign off on something for security. Snowflake is past it, it is enterprise ready, and we are going full steam ahead." Once they got that go ahead, there was no turning back. We gave it to our DevOps guys, we gave it to everyone and said, "Run with it." So, when a company that's big, I believe their fortune rank is 28. (laughs) So when a company that big says, "Yeah, you've got the green light. That we were okay with the internal compliance aspect, we're okay with the security aspect, this gives us multi-cloud portability, this gives us flexibility, ease of use." Honestly there's a really long runway ahead for Snowflake. >> Yeah, so the big question I have around the multi-cloud piece and I totally and I've been on record saying, "Look, if you're going looking for an agnostic multi-cloud, you're probably not going to go with the cloud vendor." 
(laughs) But I've also said that I think multi-cloud, to date anyway, has largely been a symptom as opposed to a strategy, although that's changing. To your point about lock-in, I think people are maybe looking at doing things across clouds, and I think it certainly expands Snowflake's TAM, which we're going to talk about, because they support multiple clouds and they're going to be the best at that. That's a mandate for them. The question I have is: how much complex joining are you going to be doing across clouds, and is that something that is just going to be too latency intensive? Is that really Snowflake's expertise? You're really trying to build that data layer; you're probably going to use some kind of Postgres database for that. >> Right. >> I don't know. I need to dig into that, but that would be an opportunity from a TAM standpoint. I just don't know how real it is. >> Yeah, I'm going to just be honest with this one: I don't think I have great expertise there, and I wouldn't want to lead anyone in the wrong direction. But from what I've heard from some of my VENN interview subjects, this is happening. So the data portability needs to be agnostic to the cloud. When you ask whether there are going to be really complex workloads and applications, the answer is yes, and I think a lot of that has to do with container architecture as well. If I can just pull data from one spot, spin it up for as long as I need, and then get rid of that container, that ephemeral layer of compute, then it doesn't matter where the cloud lies. It really doesn't. I do think that multi-cloud is the way of the future. I know that container workloads in the enterprise are still very small; I've heard people say, "Yeah, we're kicking the tires; we've got 5%." But that's going to grow, and if Snowflake can make themselves an integral part of it, then yes. I remember the guy who said, "Snowflake has to continue to innovate; they have to find a way to grow this TAM." This is an area where they can do so. I think you're right about that, but as far as my own expertise, I'm going to be honest and say I don't want to answer incorrectly, so you and I will need to dig in a little bit on this one. >> Yeah. As it relates to question four, the viability of Snowflake's multi-cloud strategy: unquestionably, supporting multiple clouds is viable; whether portability across clouds, multi-cloud joins, et cetera, pan out is TBD. So we'll keep digging into that. The last thing I want to focus on is the final question: does Snowflake's TAM justify its $20 billion valuation? Think about the data pipeline. You go from data acquisition to data prep, and that really is where Snowflake shines. Then there's analysis: you've got to bring in BI or AI and ML tools, which is not Snowflake's strength. And then you're preparing that and serving it up to the business, with visualization. So there are potential adjacencies that they could get into, and may or may not decide to. We put together this next chart, which shows the TAM expansion opportunity, and I just want to briefly go through it. We've published this stuff, so you can go and look at all the fine print, but it kind of starts with the data lake disruption. You called it the data swamp before: the Hadoop "no schema" thing, right?
Basically, the ROI of Hadoop became "reduction of investment," as my friend Abhi Mehta would say. So they're disrupting that data lake, which really was a failure. Then they're going after the enterprise data warehouse, which I have here as a $10 billion market; it's actually bigger than that, probably more like $20 billion, and I'll update this slide. And then really what Snowflake is trying to do is be data as a service: a data layer across data stores, across clouds, making it easy to ingest and prepare data and then serve the business with insights. And then ultimately there's this huge TAM around automated decision making, real-time analytics, and automated business processes. That is potentially an enormous market, a couple of hundred billion; just huge. Your thoughts on their TAM? >> I agree, and I'm not worried about their TAM. One of the reasons why, as I mentioned before, is that they are coming out with a whole lot of cash. (laughs) This is going to be a red-hot IPO; they are going to have a lot of money to spend. And look at their management team. Who is leading the way? A very successful, wise, intelligent, acquisitive type of CEO. I think there is going to be M&A activity, and I believe that M&A activity is going to be 100% with the mindset of growing their TAM. The entire world is moving to data as a service, so take that as the backdrop. I'm going to go back to the panel we did yesterday. The first question we asked was about a theory that when the virus pandemic hit, people wouldn't take on any net new architecture: "Okay, I have Teradata, I have IBM; let's just make sure the lights are on and stick with it." Every single person I've asked, and that's now eight different experts, said to us: "Oh, no. The pandemic, the shift to work from home, everything we're seeing right now has only accelerated and advanced our data-as-a-service strategy in the cloud. We are building for scale, adopting cloud for data initiatives." So across the board they have a great backdrop, and that's only going to continue; this is very new, and we're in the early innings. So for their TAM, that's great, because it's the core of what they do. Now, on top of it, you mentioned that right now they don't have great machine learning: that could easily be acquired and built in. Right now they don't have an analytics layer: I, for one, would love to see these guys talk to Alteryx. Alteryx is red hot; we're seeing great data and great feedback on them. If they could do that business intelligence, that analytics layer, on top, the entire suite as a service, I mean, come on. (laughs) Their TAM is expanding, in my opinion. >> Yeah, your point about their leadership is right on. I interviewed Frank Slootman right in the heart of the pandemic >> So impressed. >> and he said, "I'm investing in engineering almost sight unseen. I'm more circumspect around sales." But I will caution people: a lot of people see what Slootman did with ServiceNow. When he came into ServiceNow, I have to tell you, they didn't have their unit economics right, and they didn't have their sales and marketing model down. He cleaned that up, took it from 120 million to 1.2 billion, and really did an amazing job. People are looking for a repeat here, but this is a totally different situation.
ServiceNow drove a truck through BMC's install base with IT help desk and then created this brilliant TAM expansion, land-and-expand model. This is much different here. And Slootman also told me that he's a situational CEO. He doesn't have a playbook. And so that's what is most impressive and interesting about this. He's now up against the biggest competitors in the world: AWS, Google and Microsoft and dozens of other smaller startups that have raised a lot of money. Look at a company like Yellowbrick. They've raised I don't know $180 million. They've got a great team. Google, IBM, et cetera. So it's going to be really, really fun to watch. I'm super excited, Erik, but I'll tell you the data right now suggests they've got a great tailwind and if they can continue to execute, this is going to be really fun to watch. >> Yeah, certainly. I mean, when you come out and you are as impressive as Snowflake is, you get a target on your back. There's no doubt about it, right? So we said that they basically created the data as a service category. That's going to invite competition. There's no doubt about it. And Yellowbrick is one that came up in the panel yesterday, where one of our CIOs was doing a proof of concept with them. We had about seven others mentioned as well that are startups that are in this space. However, none of them despite their great valuation and their great funding are going to have the kind of money and the market lead that Slootman is going to have, which Snowflake has, as this comes out. And with what we're seeing in Congress right now with some antitrust scrutiny around the large data that's being collected by AWS, Azure, Google, I'm not going to bet against this guy either. Right now I think he's got a lot of opportunity, there's a lot of additional layers and because he can basically develop this as a suite of services, I think there's a lot of great opportunity ahead for this company. >> Yeah, and I guarantee that he understands well that customer acquisition cost and the lifetime value of the customer, the retention rates. Those are all things that he and Mike Scarpelli, his CFO, learned at ServiceNow. Not learned, perfected. (Erik laughs) Well Erik, really great conversation, awesome data. It's always a pleasure having you on. Thank you so much, my friend. I really appreciate it. >> I appreciate talking to you too. We'll do it again soon. And stay safe everyone out there. >> All right, and thank you for watching everybody this episode of "CUBE Insights" powered by ETR. This is Dave Vellante, and we'll see you next time. (soft music)
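As a quick aside on the unit economics mentioned above, the standard back-of-the-envelope SaaS math looks roughly like the sketch below. All inputs are purely illustrative, not Snowflake's or ServiceNow's actual figures.

```python
# Back-of-the-envelope SaaS unit economics. Illustrative numbers only,
# not actual Snowflake or ServiceNow figures.

def lifetime_value(arpa: float, gross_margin: float, annual_churn: float) -> float:
    """LTV: margin-adjusted revenue per account over its expected lifetime.
    Expected lifetime (in years) is approximated as 1 / annual churn rate."""
    return arpa * gross_margin / annual_churn

# Hypothetical inputs
arpa = 100_000          # average revenue per account, per year ($)
gross_margin = 0.70     # 70% gross margin
annual_churn = 0.10     # 10% of accounts lost per year -> ~10-year lifetime
cac = 150_000           # fully loaded cost to acquire one account ($)

ltv = lifetime_value(arpa, gross_margin, annual_churn)
print(f"LTV: ${ltv:,.0f}")                 # LTV: $700,000
print(f"LTV/CAC ratio: {ltv / cac:.1f}x")  # 4.7x (>3x is the usual rule of thumb)
```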

Published Date : Jul 31 2020

Power Panel | PegaWorld iNspire


 

>> Narrator: From around the globe, it's theCUBE with digital coverage of PegaWorld iNspire, brought to you by Pegasystems. >> Hi everybody, this is Dave Vellante and welcome to theCUBE's coverage of PegaWorld iNspire 2020. And now that the dust has settled on the event, we wanted to have a little postmortem power panel, and I'm really excited to have three great guests here today. Adrian Swinscoe is a customer service and experience advisor and the best-selling author of a couple of books: "How to Wow" and "Punk CX." Adrian great to see you, thanks for coming on. >> Hey Dave. >> And Shelly Kramer's a principal, analyst, and a founding partner at Futurum Research, CUBE alum. Shelly, good to see you. >> Hi, great to see you too. >> And finally, Don Schuerman who is the CTO of Pegasystems and one of the people that was really highlighting the keynotes. Don, thanks for your time, appreciate you coming on. >> Great to be here. >> Guys, let's start with some of the takeaways from the event, and if you don't mind I'm going to set it up. I had some, I had many many notes. But I'll take a cue from Alan's keynote, where he talked about three things: rethinking customer engagement, that whole experience as a service; I'm going to say that certainly the second part of last decade came to the front and center and we think is going to continue in spades. And then new tech, we heard about that. Don we're going to ask you to chime in on that. Modern software, microservices, we've got machine intelligence now. And then I thought there were some really good customer examples. We heard from Siemens, we heard from the CIO and head of digital at Aflac, the Commonwealth Bank of Australia. So, some really good customer examples. But Shelly, let me start with you. What were your big takeaways of PegaWorld iNspire 2020, the virtual edition? >> You know, what I love is a focus, and we have talked a lot about that here at Futurum Research, but what I love is the thinking that what really is important now is to think about rethinking and kind of tearing things apart. Especially when we're in a time, we're in difficult economic times, and so instead of focusing on rebuilding and relaunching as quickly as possible, I think that now's the time to really focus on reexamining what is it that our customers want? How is it that we can best serve them? And really sort of start from ground zero and examine our thinking. And I think that's really at the heart of digital transformation, and I think that both in this virtual event and in some interviews I was lucky enough to do in advance with some of the Pega senior team, that was really a key focus, is really thinking about how we can re-architect things, how we can do things in ways that are more efficient, that impact people more effectively, that impact the bottom line more effectively. And to me that's really exciting. >> So Adrian, CX is obviously your wheelhouse. A lot of the conversation at PegaWorld iNspire was of course about customer experience, customer service. How do you think the content went? What were some of the highlights for you? And maybe, what would you have liked to hear more of? >> Well, thanks Dave, I actually really enjoyed it. First of all I should say that I've been to a bunch of virtual summits and I thought this was one of the best ones I've done in terms of its pace and its interactivity. I love the fact that Don was bouncing around the screen, kind of showing us around the menu and things.
I thought that was great. But the things that I thought really stood out for me was this idea of the context around accelerating digital transformation. And that's very contextual, it's almost being forced upon us. But then this idea of also the center-out thinking and the Process Fabric. Because it really reminded me, and Don you can maybe correct me if I'm wrong here, of taking a systems-thinking approach to delivering the right outcomes for customers. Because it's always struck me that there's a contradiction at the heart of the rhetoric around customer-centricity where people say they want to do the right things by customers but then they force them down this channel-centric or process-centric way of thinking. And so actually I thought it was really refreshing to hear about this center-out and Process Fabric platform that Pega's building. And I thought it's really exciting because it felt like actually we're going to start to take a more systemic approach to delivering great service and great experience. So I thought that was really great. Those were my big headlines out of the summit. >> So Don, one of the-- >> Adrian I think-- >> Go ahead, please. >> Yeah, I think the whole idea, you know, and Alan referred to center-out as a business architecture, and I think that's really an important concept because this is really about the intersection of that business goal. How do I truly become customer-centric? And then how do I actually make my technology do it? And it's really important for that to work where you put your business logic in the technology. If you continue to do it in the sort of channel-centric way or really data-centric, system-centric way that historically has been the approach, I don't think you can build a sustainable platform for great customer engagement. So I think that idea of a business architecture that you clued in on a little bit is really central to how we've been thinking about this. >> Let's stay on that for a second. But first of all, I just want to mention, you guys did a good job of not just trying to take a physical event and plug it into virtual. So congratulations on that. The virtual clicker toss, and you know, you were having some fun eating your eggs. I mean that was, that's great. And the Dropkick Murphys couldn't be live, but you guys still leveraged that, so well done. One of the better ones that I've seen. But I want to stay on your point there. Alan talked about some of the mistakes that are made, and one of the questions I have for you guys is, what is the state of customer experience today, and why the divergence between great, and good, and pretty crappy? And Alan talked about, well, people try to impose business process top-down, or they try to infuse logic in the database bottom-up. You really got to do that middle-out. So, Don I want to come back to you. Let's explore that a little bit. What do you really mean by middle-out? Where am I putting the actual business logic? >> Yeah, I think this is important, right. And I think that a lot of the time we have experiences as customers. And I had one of these recently with a cable provider, where I spent a bunch of time on their website chatting with a chatbot of some kind, that then flipped me over to a human. When the chatbot flipped me to the human, the human didn't know what I was doing with the chatbot. And that human eventually told me I had to call somebody. So I picked up the phone, I made the phone call.
And that person didn't know what I was doing on chat with the human or with the chatbot. So every time, as the customer, I'm restarting. I'm reexplaining where I am. And that to me is a direct result of that kind of channel-centric thinking, where all of my business logic ends up embedded in, "Well hey, we're going to build a cool chatbot. And now we're going to build a cool chat system. And by the way, we're going to keep our contact centers running." But I'm not thinking holistically about the customer experience. And that's why we think this center-out approach is so important, because I want to go below the channel. And I want to think about that customer journey. What's the outcome I'm trying to get to? In the case of my interaction, I was just trying to increase my bandwidth so that I could do events like this, right? What's that outcome that I'm trying to get to and how do I get the customer to that outcome in a way that's as efficient for the business and as easy for the customer as possible, regardless of what channel they're on? And I think that's a little bit of a new way of thinking. And again, it means thinking not just about the customer goal, but having an opinion, whether you are a business leader or an IT person, about where that logic belongs in your architecture. >> So, Adrian. Don just described the sort of bot and human experience, which mimics a lot of the human experience that we've all touched in the past. So, but the customer journey that Don talked about isn't necessarily one journey. There's multiple journeys. So what's your take on how organizations can do better with that kind of service? >> Well I think you're absolutely right, Dave. I mean, actually during the summer I was listening to Paul Greenberg talk about the future of customer service. And Paul said something that I think was really straightforward but really insightful. He said, "Look, organizations think about customer journeys but customers don't think about journeys in the way that organizations do. They think discontinuously." So it's like, "I'm going to go to channel one, and then channel three, and then channel four, and then channel five, and then back to channel two. And then back to channel five again." And they expect those conversations to be picked up across those different channels. And so I think what we've got to do is, as Don said, build an architecture that works around trying to support the different journeys but allows that flexibility and that adaptability for customers to jump around and to have one of those continuous but disconnected conversations. But it's up to us to try and connect them all, to deliver the service and experience that the customers actually want. >> Now Shelly, a lot of the customer experience actually starts with the employees, and employees don't like when the customer is yelling at them saying, "I just answered all those questions. Why do I have to answer them again?" So you've, at your firm, you guys have written a lot about this, you've thought a lot about it, you have some data I know you shared on theCUBE one time that 80% of employees are disengaged. And so, that affects the customer experience, doesn't it? >> Yeah it does, you know. And I think that when I'm listening to Don's explanation about his cable company, I'm having flashbacks to what feels like hundreds of my own experiences. And you're just thinking, "This does not have to be this complicated!"
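To make the center-out idea Don describes more concrete, here is a minimal sketch of one shared case layer sitting below every channel, so a phone agent can pick up exactly where a chatbot left off. The CaseStore class and the channel steps are hypothetical illustrations of the pattern, not Pega's actual APIs.

```python
# Minimal sketch of "center-out": one shared case brain, many thin channels.
# CaseStore and the channel interactions are hypothetical illustrations,
# not Pega's actual APIs.

class CaseStore:
    """Single source of truth for a customer's journey, below all channels."""
    def __init__(self):
        self._cases = {}  # customer_id -> case state

    def get_case(self, customer_id: str) -> dict:
        return self._cases.setdefault(
            customer_id, {"goal": None, "steps_done": [], "context": {}})

    def record_step(self, customer_id: str, step: str, **context):
        case = self.get_case(customer_id)
        case["steps_done"].append(step)
        case["context"].update(context)

store = CaseStore()

# Chatbot channel: starts the journey.
store.get_case("dave")["goal"] = "increase bandwidth"
store.record_step("dave", "verified account", account="1234")

# Phone channel: picks up the SAME case -- no re-explaining.
case = store.get_case("dave")
print(f"Goal: {case['goal']}, already done: {case['steps_done']}")
# -> Goal: increase bandwidth, already done: ['verified account']
```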
You know, ten years ago that same thing that Don just described happened with phone calls. You know, you called one person and they passed you off to somebody else, and they passed you off to somebody else, and you were equally as frustrated as a customer. Now what's happening a lot of times is that we're plugging technology in, like a chat bot, that's supposed to make things better but we're not developing a system and processes throughout our organization, and also change management, what do I want to say, programs within the organization and so we're kind of forgetting all of those things. So what's happening is that we're still having customers having those same experiences that are a decade old, and technology is part of the mix. And it really shouldn't be that way. And so, one thing that I really enjoyed, speaking about employees, was listening to Rich Gilbert from Aflac. And he was talking about when you're moving from legacy processes to new ones, you have to plan for and invest in change management. And we talk about this all the time here at Futurum, you know technology alone is never the answer. It's technology plus people. And so you have to invest in people, you have to invest in their training in order to be able to support and manage change and to drive change. And I think one really important part of that equation is also listening to your employees and getting their feedback, and making them part of the process. Because when they are truly on your front lines, dealing with customers, many times dealing with stressed, upset, frustrated customers, you know, they have a lot of insights. And sometimes we don't bring them into those conversations, certainly early enough in the process to help, to let them help guide us in terms of the solutions and the processes that we put in place. I think that's really important. >> Yeah, a lot of-- >> Shelly, I think-- >> If I may, a lot of the frustration with some employees sometimes is those processes change, and they're unknown going into it. We saw that with COVID, Don. And so, your thoughts on this? >> Yeah, I mean, I think the environment employees are working in is changing rapidly. We've got a customer, a large telecommunications company in the UK where their customer service requests are now being handled by about 4,000 employees pulled from their marketing department working distributed because that's the world that we're in. And the thing I was going to say in response to Shelly is, Alan mentioned in his keynote this idea of design thinking. And one of the reasons why I think that's so important is that it's actually about giving the people on the front lines a voice. It's a format for engaging the employees who actually know the day-to-day experiences of the customers, the day-to-day experiences of a customer service agent, and pulling them into the solution. How do we develop the systems, how do we rethink our processing, how does that need to plug into the various channels that we have? And that's why a lot of our focus is not just on the customer service technology, but the underlying low code platform that allows us to build those processes and those chunks of the customer journey. We often refer to them as "microjourneys" that lead to a specific outcome. And if you're using a low code based platform, something that allows anybody to come in and define that process, you can actually pull employees from the front lines and put them directly on your project teams. 
And all of a sudden you get better engagement but you also get this incredible insight flowing into what you're doing because you're talking to the people who live this day in and day out. >> Well and when you have-- >> So let's stay on this for a second, if we can. Shelly, go ahead please. >> Sure. When you have a chance to talk with those people, to talk with those front line employees who are having an opportunity to work with low code, no code, they get so excited about it and their jobs are completely, the way they think about their jobs and their contribution to the company, and their contribution to the customer, and the customer experience, is just so wonderful to see. And it's such an easy thing to do, so I think that that's really a critical part of the equation as it relates to success with these programs. >> Yeah, staying close to the customer-- >> Can I jump in? >> Yeah, please Adrian. >> Can I jump in on that a little, a second. I think Shelly, you're absolutely right. I think that it's a really simple thing. You talk about engagement. And one of the key parts of engagement, it seems to me, is that, is giving people a voice and making them feel important and feel heard. And so to go and ask for their opinion and to help them get involved and make a difference to the work that they do, the outcomes that their customers receive, and the overall productivity and efficiency, can only have a positive impact. And it's almost like, it feels self-evident that you'd do that but unfortunately it's not very common. >> Right. It does feel self-evident. But we miss on that front a lot. >> So I want to ask, I'm going to come back to, we talked about people process, we'll come back to that. But I want to talk about the tech. You guys announced, the big announcement was the Pega Process Fabric. You talked about that, Don, as a platform for digital platforms. You've got all these cool microservices and dynamic APIs and being able to compose on the fly, so some pretty cool stuff there. I wonder, with the virtual event, you know, with the physical event you've got the hallway traffic, you talk to people and you get face-to-face reactions. Were you able to get your kind of real-time reactions to the announcement? What was that like? Share with us please. >> Yeah, so, we got well over 1,000 questions in during the event and a lot of them were either about Process Fabric or comments about it. So I think people are definitely excited about this. And when you strip away all of the buzzwords around microservices and cloud, et cetera, I think what we're really getting at here is that work is going to be increasingly more distributed. We are living proof of that right now, the four of us all coming here from different studios. But work is going to be distributed for a bunch of reasons. Because people are more distributed, because organizations increasingly are building customer journeys that aren't just inside their walls, but are connected to the partners and their ecosystem. I'm a bank but I may, as part of my mortgage process, connect somebody up to a home insurer. And all of a sudden the home buying process goes beyond my four walls. And then finally, as you get all of these employees engaged with building their low code apps and being citizen developers, you want to let the 1,000 flowers to bloom but you also need a way to connect that all back together. 
And Process Fabric is about putting the technology in place to allow us to take these distributed bits of work that we need to do and weave them together into experiences that are coherent for a customer and easy for an employee to navigate. Because I think it's going to be really really important that we do that. And even as we take our systems and break them up into microservices, well customers don't interact with microservices. Customers interact with journeys, with experiences, with the processes you lay out, and making sure we can connect that up together into something that feels easy for the customer and the employee, and gets them to that result they want quickly, that's what the vision of Process Fabric is all about. >> You know, it strikes me, I'm checking my notes here. You guys talked about a couple of examples. One was, I think you talked about the car as sort of a mobility experience, maybe, you know, it makes me wonder with all this AI and autonomous vehicle stuff going on, at what point is owning and driving your own vehicle really going to be not the norm anymore? But you talked about this totally transformed, sorry to use that word, but experience around autos. And certainly financial services is maybe a little bit more near-term. But I wonder Shelly, Futurum, you know, you guys look ahead, how far can we actually go with AI in this realm? >> Well, I think we can go pretty far and I think it'll happen pretty fast. And I think that we're seeing that already in terms of what happened when we had the Coronavirus COVID-19, and of course we're still navigating through that, is that all of a sudden things that we talked about doing, or thought about doing, or planned doing, you know later on in this year or 2021, we had to do all of those things immediately. And so again, it is kind of like ripping the Bandaid off. And we're finding that AI plays a tremendously important role in relieving the workload on the frontline workers, and being able to integrate empathy into decision making. And you know, I go back to, I remember when you all first rolled out the empathy part of your platform, Don, and just watching a demo on that of how you can slide this empathy meter to be warmer, and see in true dollars and cents over time the impact of treating your customers with more empathy, what that delivers to a company. And I think that AI that continues to build and learn and again, what we're having right now, is we're having this gigantic volume of needs, of conversation, of all these transactions that need to happen at once, and great volumes make for better outcomes as it relates to artificial intelligence and how learning can happen more quickly over time. So I think that it's, we're definitely going to see more use of AI more rapidly than we might've seen it before, and I don't think that's going to slow down, at all. Certainly, I mean there's no reason for it to slow down. The benefits are tremendous. The benefits are tremendous, and let me step back and say, following a conversation with Rob Walker on responsible AI, that's a whole different ball of wax. And I think that's something that Pega has really embraced and planted a flag in. So I think that we'll see great things ahead with AI, and I think that we'll see the Pega team really leading as it relates to ethical AI. And I think that's tremendously important as well. >> Well that's the other side of the coin, you know. I asked how far can we go and I guess you're alluding to how far should we go. 
But Adrian, we also heard about agility and empathy. I mean, I want an empathic service provider. Are agility and empathy related to customer service, and how so? >> Well, David, I think that's a great question. I think that, you talk about agility and talk about empathy, and I think the thing is, what we probably know from our own experience is that being empathetic is sometimes going to be really hard. And it takes time, and it takes practice to actually get better at it. It's almost like a new habit. Some people are naturally better at it than others. But you know, organizationally, I talk about that we need to almost build, almost like an empathetic musculature at an organizational level if we're going to achieve this. And it can be aided by technology, but we, when we develop new muscles it takes time. And sometimes you go through a bit of pain in doing that. So I think that's where the agility comes in, is that we have to test and learn and try new things, be willing to get things wrong and then correct, and then kind of move on. And then learn from these kind of things. And so I think the agility and empathy, it does go hand in hand and it's something that will drive growth and increasing empathetic interactions as we go forward. But I think it's also, just to build on Shelly's point, I think you're absolutely right that Pega has been leading the way in this sort of dimension, in terms of its T-switch and its empathetic advisor. But now the ethical AI testing or the ethical bias testing adds a dimension to that to make sure it's not just about all horsepower, but being able to make sure that you can steer your car. To use your analogy. >> So AI's coming whether we like it or not. Right, Shelly? Go ahead. >> It is. One real quick real world example here is, you know, okay so we have this time when a lot of consumers are furloughed. Out of work. Stressed about finances. And we have a lot of Pega's customers are in the financial services space. Some of the systems that they've established, they've developed over time, the processes they've developed over time is, "Oh, I'm talking with Shelly Kramer and she has a "blah-blah-blah account here. "And this would be a great time to sell her on "this additional service," or whatever. And when you can, so that was our process yesterday. But when you're working with an empathic mindset and you are also needing to be incredibly agile because of current circumstances and situations, your technology, the platform that you're using, can allow you to go, "Okay I'm dealing "with a really stressed customer. "This is not the best time "to offer any additional services." Instead what we need to ask is this series of questions: "How can we help?" Or, "Here are some options." Or whatever. And I think that it's little tweaks like that that can help you in the customer service realm be more agile, be more empathetic, and really deliver an amazing customer experience as a result. And that's the technology. >> If I could just add to that. Alan mentioned in his keynote a specific example, which is Commonwealth Bank of Australia. And they were able, multiple times this year, once during the Australian wildfires and then again in response to the COVID crisis, to completely shift and turn on a dime how they interacted with their customer, and to move from a prioritization of maybe selling things to a prioritization of responding to a customer need. And maybe offering payment deferrals or assistance to a customer. 
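To make the empathy dial that Shelly and Adrian reference more concrete, here is a toy next-best-action arbitration where raising an empathy weight flips the winning action from a sell offer to a service action. The scoring formula, weights, and action names are all invented for illustration; this is not Pega's actual decisioning logic.

```python
# Toy next-best-action arbitration with an "empathy dial".
# Formula, weights, and actions are invented for illustration;
# this is not Pega's actual decisioning logic.

ACTIONS = [
    # name,                    propensity, business_value, customer_benefit
    ("offer premium card",           0.30,           1.00,             0.10),
    ("offer payment deferral",       0.60,           0.10,             0.90),
]

def best_action(empathy: float) -> str:
    """empathy in [0, 1]: 0 = purely commercial, 1 = purely customer-centric."""
    def score(action):
        _, propensity, value, benefit = action
        # Blend business value and customer benefit by the empathy setting.
        return propensity * ((1 - empathy) * value + empathy * benefit)
    return max(ACTIONS, key=score)[0]

print(best_action(empathy=0.1))  # -> offer premium card
print(best_action(empathy=0.7))  # -> offer payment deferral
```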
But back to what we were talking about earlier, that agility only happened because they didn't have the logic for that embedded in all their channels. They had it centralized. They had it in a common brain that allowed them to make that change in one place and instantly propagate it to all of the 18 different channels in which they touch their customer. And so, being able to have agility and that empathy, to my mind, is explicitly tied to that concept of a center-out business architecture that Alan was talking about. >> Oh, absolutely. >> And, you know, this leads to discussion about automation, and again, how far can we go, how far should we go? Don, you've been interviewed many many times, like any tech executive, about the impact of AI on jobs. And, you know, the typical response of course is, "No, we want augmentation." But the reality is, machines have always replaced humans it's just, now it's the first time in terms of cognitive function. So it's a little different for us this time around. But it's clear, as I said, AI is coming whether we like it or not. Automation is very clearly on the top of people's minds. So how do you guys see the evolution of automation, the injection of automation into applications, the ubiquity of automations coming in this next decade? Shelly, let's start with you. >> You know, I was thinking you were going to ask Don that question so I'm just listening and listening. (laughing) >> Okay, well we can go with Don, that's-- >> No I'm happy to answer it. It's fine, it just wasn't what I expected. You know, we are really immersed in the automation space. So I very much see the concerns that people on the front line have, that automation is going to replace them. And the reality of it is, if a job that someone does can be automated, it will be automated. It makes sense. It makes good business sense to do that. And I think that what we are looking at from a business agility standpoint, from a business resilience standpoint, from a business survival standpoint, is really how can we deliver most effectively to serve the needs of our customers. Period. And how we can do that quickly and efficiently and without frustration and in a way that is cost effective. All of those things play into what makes a successful business today, as well as what keeps employees, I'm sorry, as well as what keeps customers served, loyal, staying around. I think that we live in a time where customer loyalty is fleeting. And so I think that smart businesses have to look at how do we deepen the relationships that we have with customers? How can we use automation to do that? And the thing about it, you know, I'll go back to the example that Don gave about his cable company that all of us have lived through. It's just like, "Oh my gosh. "There's got to be a better way." So compare that to, and I'm sure all of us can think of an experience where you had to deal with a customer service situation in some way or another, and it was the most awesome thing ever. And you walked away from it and you just went, "Oh my gosh. I know I was talking to a bot here or there." Or, "I know I was doing this, but that solved my problem. "I can't believe it was so easy! "I can't believe it was so easy! "I can't wait to buy something from this company again!" You know what I'm saying? And that's really, I think, the role that automation can play. Is that it can really help deepen existing relationships with our customers, and help us serve them better. 
And it can also help our employees do things that are more interesting and that are more relevant to the business. And I think that that's important too. So, yes, jobs will go. Yes, automation will slide into places where we've done things manually and repetitive processes before, but I think that's a good thing. >> So, we've got to end it shortly here but I'll give you guys each a last opportunity to chime in. And Adrian, I want to start with you. I invoked the T-word before, transformation, kind of tongue-in-cheek, joking because I know it's not your favorite word. But it is the industry's favorite word. Thinking ahead for the future, we've talked about AI, we've talked about automation, people, process and tech. What do you see as the future state of customer experience, this mix of human and machine? What do we have to look forward to? >> So I think that, first of all, let me tackle the transformation thing. I mean, I remember talking about this with Duncan Macdonald, who is the CIO over at UPC, which is one of Pega's customers, on my podcast the other week. And he talked about, he's the cosponsor of a three year digital transformation program. But then he appended the description of that by saying it's a transformation program that will never end. That's the thing that I think about, because actually, if you think about what we're talking about here, we're not transforming to anything in particular, you know. It's not like going from here to there. And actually, the thing that I think we need to start thinking about is, rather than transformation we actually need to think about an evolution. And adopting an evolutionary state. And we talked about being responsive. We talked about being adaptable. We talked about being agile. We talk about testing and learning and all these different sort of things, that's evolutionary, right? It's not transformational, it's evolutionary. If you think about Charles Darwin and the theory of the species, that's an evolutionary process. And there's a quote, as you've mentioned I authored this book called "Punk CX," there's a quote that I use in the book which is taken from a Bad Religion song called "No Control" and it goes, "There is no vestige of a beginning, and no prospect of an end." And that quote comes from a 1788 book by James Hutton, which was one of the first treatises on geology, and what he found through all these studies was actually, in the formation of the earth and its continuous formation, there is no vestige of a beginning, no prospect of an end. It's a continuous process. And I think that's what we've got to embrace, is that actually change is constant. And as Alan says, you have to build for change and be ready for change. And have the right sort of culture, the right sort of business architecture, the right sort of technology to enable that. Because the world is getting faster and it is getting more competitive. This is probably not the last crisis that we will face. And so, like in most evolutionary things, it wasn't the fittest and the strongest that survived, it was the ones that were most adaptable that survived. And I think that's the kind of thing I want to land on, is actually that it's the ones that grasp that whole concept that are the ones that are going to succeed out of this. And, what they will do will be... We can't even imagine what they're going to do right now.
And Shelly, it's not only responding to, as Adrian was saying, to crisis, but it's also being in a position to very rapidly take advantage of opportunities and that capability is going to be important. You guys are futurists, it's in the name. Your thoughts? >> Well I think that, you know, Adrian's comments were incredibly salient, as always. And I think that-- >> Thank you. >> The thing that this particular crisis that we are navigating through today has in many ways been bad, but in other ways, I think it's been incredibly good. Because it has forced us, in a way that we really haven't had to deal with before, to act quickly, to think quickly, to rethink and to embrace change. Oh, we've got to work from home! Oh, we've got 20 people that need to work from home, we have 20,000 people that need to work from home. What technology do we need? How do we take care of our customers? All of these things we've had to figure out in overdrive. And humans, generally speaking, aren't great at change. But what we are forced to do as a result of this pandemic is change. And rethink everything. And I think that, you know, the point about transformation not being a beginning and an end, we are never, ever, ever done. It is evolutionary and I think that as we look to the future and to one of your comments, we are going faster with more exciting technology solutions out there, with people who are incredibly smart, and so I think that it's exciting and I think that all we are going to see is more and more and more change, and I think it will be a time of great resilience, and we'll see some businesses survive and thrive, and we'll see other businesses not survive. But that's been our norm as well, so I think it's really, I think we have some things to thank this pandemic for. Which is kind of weird, but I also try to be fairly optimistic. But I do, I think we've learned a lot and I think we've seen some really amazing exciting things from businesses who have done this. >> Well thanks for sharing that silver lining, Shelly. And then, Don, I'm going to ask you to bring us to the finish line. And I'm going to close my final question to you, or pose it. You guys had the wrecking ball, and I've certainly observed, when it comes to things like digital transformations, or whatever you want to call it, that there was real complacency, and you showed that cartoon with the wrecking ball saying, "Ehh not in my life, not on my watch. "We're doing fine." Well, this pandemic has clearly changed people's thinking, automation is really top of mind now at executive. So you guys are in a good spot from that standpoint. But your final thoughts, please? >> Yeah, I mean, I want to concur with what Adrian and Shelly said and if I can drop another rock quote in there. This one is from Bob Dylan. And Dylan famously said, "The times they are a changing." But the quote that I keep on my wall is one that he tossed off during an interview where he said, "I accept chaos. "I'm not sure if it accepts me." But I think digital transformation looks a lot less like that butterfly emerging from a cocoon to go off happy to smell the flowers, and looks much more like accepting that we are in a world of constant and unpredictable change. And I think one of the things that the COVID crisis has done is sort of snapped us awake to that world. I was talking to the CIO of a large media company who is one of our customers, and he brought up the fact, you know, like Croom said, "We're all agile now. 
"I've been talking about five years, "trying to get this company to operate in an agile way, "and all of a sudden we had to do it. "We had no choice, we had to respond, "we had to try new things, we had to fail fast." And my hope is, as we think about what customer engagement and automation and business efficiency looks like in the future, we keep that mindset of trying new things and continuously adapting. Evolving. At the end of the day, our company's brand promise is, "Build for change." And we chose that because we think that that's what organizations, the one thing they can design for. They can design for a future that will continue to change. And if you put the right architecture in place, if you take that center-out mindset, you can support those immediate needs, but set yourself up for a future of continuous change and continuous evolution and adaptation. >> Well guys, I'll quote somebody less famous. Jeff Frick, who said, "The answer to every question "lives somewhere in a CUBE interview." and you guys have given us a lot of answers. I really appreciate your time. I hope that next year at PegaWorld iNspire we can see each other face-to-face and do some live interviews. But really appreciate the insights and all your good work. Thank you. >> Thank you. >> Absolutely. >> And thank you for watching everybody, this is Dave Vellante and our coverage of PegaWorld iNspire 2020. Be right back, right after this short break. (lighthearted music)

Published Date : Jun 9 2020

Wikibon Action Item | The Roadmap to Automation | April 27, 2018


 

>> Hi, I'm Peter Burris and welcome to another Wikibon Action Item. (upbeat digital music) >> Cameraman: Three, two, one. >> Hi. Once again, we're broadcasting from our beautiful Palo Alto studios, theCUBE studios, and this week we've got another great group. David Floyer in the studio with me along with George Gilbert. And on the phone we've got Jim Kobielus and Ralph Finos. Hey, guys. >> Hi there. >> So we're going to talk about something that's going to become a big issue. It's only now starting to emerge. And that is, what will be the roadmap to automation? Automation is going to be absolutely crucial for the success of IT in the future and the success of any digital business. At its core, many people have presumed that automation was about reducing labor. So introducing software and other technologies, we would effectively be able to substitute for administrative, operator, and related labor. And while that is absolutely a feature of what we're talking about, the bigger issue ultimately is that we cannot conceive of more complex workloads that are capable of providing better customer experience, superior operations, all the other things a digital business ultimately wants to achieve, if we don't have a capability for simplifying how those underlying resources get put together, configured, organized, orchestrated, and ultimately sustained. So the other part of automation is to allow for much more work that can be performed on the same resources much faster. It's a basis for how we think about plasticity and the ability to reconfigure resources very quickly. Now, the challenge is this industry, the IT industry, has always used standards as a weapon. We use standards as a basis of creating ecosystems, or scale, or mass, even for something like mainframes, where there weren't hundreds of millions of potential users. But IBM was successful at using that as a basis for driving their costs down and providing a superior product. That's clearly what Microsoft and Intel did many years ago: they achieved that kind of scale through driving more, and more, and more volume of the technology, and they won. But along the way though, each time, each generation has featured a significant amount of competition over how those interfaces came together and how they worked. And this is going to be the mother of all standards-oriented competitions. How does one automation framework and another automation framework fit together? One being able to create value in a way that serves another automation framework, but ultimately, for many companies, a way of creating more scale onto their platform. More volume onto that platform. So this notion of how automation is going to evolve is going to be crucially important. David Floyer, are APIs going to be enough to solve this problem?
So all of the issues of availability, of security, of compliance, all of these difficult issues are a subject to getting this whole environment to be able to work together through a set of APIs, yes, but a lot lot more than that. And in particular, when you think about it, to me, volume of data is critical. Is who has access to that data. >> Peter: Now, why is that? >> Because if you're dealing with AI and you're dealing with any form of automation like this, the more data you have, the better your models are. And if you can increase that amount of data, as Google show every day, you will maintain that handle on all that control over that area. >> So you said something really important, because the implied assumption, and obviously, it's a major feature of what's going on, is that we've been talking about doing more automation for a long time. But what's different this time is the availability of AI and machine learning, for example, >> Right. as a basis for recognizing patterns, taking remedial action or taking predictive action to avoid the need for remedial action. And it's the availability of that data that's going to improve the quality of those models. >> Yes. Now, George, you've done a lot of work around this a whole notion of ML for ITOM. What are the kind of different approaches? If there's two ways that we're looking at it right now, what are the two ways? >> So there are two ends of the extreme. One is I want to see end to end what's going on across my private cloud or clouds. As well as if I have different applications in different public clouds. But that's very difficult. You get end-to-end visibility but you have to relax a lot of assumptions about what's where. >> And that's called the-- >> Breadth first. So the pro is end-to-end visibility. Con is you don't know how all the pieces fit together quite as well, so you get less fidelity in terms of diagnosing root causes. >> So you're trying to optimize at a macro level while recognizing that you can't optimize at a micro level. >> Right. Now the other approach, the other end of the spectrum, is depth first. Where you constrain the set of workloads and services that you're building and that you know about, and how they fit together. And then the models, based on the data you collect there, can become so rich that you have very very high fidelity root cause determination which allows you to do very precise recommendations or even automated remediation. What we haven't figured out hot to do yet is marry the depth first with the breadth first. So that you have multiple focus depth first. That's very tricky. >> Now, if you think about how the industry has evolved, we wrote some stuff about what we call, what I call the iron triangle. Which is basically a very tight relationship between specialists in technology. So the people who were responsible for a particular asset, be it storage, or the system, or the network. The vendors, who provided a lot of the knowledge about how that worked, and therefore made that specialist more or less successful and competent. And then the automation technology that that vendor ultimately provided. Now, that was not automation technology that was associated with AI or anything along those lines. It was kind of out of the box, buy our tool, and this is how you're going to automate various workflows or scripts, or whatever else it might be. And every effort to try to break that has been met with screaming because, well, you're now breaking my automation routines. 
So the depth-first approach, even without ML, has been the way that we've done it historically. But, David, you're talking about something different. It's the availability of the data that starts to change that. >> Yeah. >> So are we going to start seeing new compacts put in place between users and vendors and OEMs and a lot of these other folks? And it sounds like it's going to be about access to the data. >> Absolutely. So you're going to start, let's start at the bottom. You've got people who have a particular component, whatever that component is. It might be storage. It might be networking. Whatever that component is. They have products in that area which will be collecting data. And they will need for their particular area to provide a degree of automation. A degree of capability. And they need to do two things. They need to do that optimization and also provide data to other people. So they have to have an OEM agreement not just for the equipment that they provide, but for the data that they're going to give and the data they're going to get back. The automation of the data, for example, going up and the availability of data to help themselves. >> So contracts effectively mean that you're going to have to negotiate value capture on the data side as well as the revenue side. >> Absolutely. >> The ability to do contracting historically has been around individual products. And so we're pretty good at that. So we can say, you will buy this product. I'm delivering you the value. And then the utility of that product is up to you. When we start going to service contracts, we get a little bit different kind of an arrangement. Now, it's an ongoing continuous delivery. But for the most part, a lot of those service contracts have been predicated on known-in-advance classes of functions, like Salesforce, for example. Or the SaaS business where you're able to write a contract that says over time you will have access to this service. When we start talking about some of this automation though, now we're talking about ongoing, but highly bespoke, and potentially highly divergent over a relatively short period of time, so that you have a hard time writing contracts that will prescribe the range of behaviors and the promise about how those behaviors are actually going to perform. I don't think we're there yet. What do you guys think? >> Well, >> No, no way. I mean, >> Especially when you think about realtime. (laughing) >> Yeah. It has to be realtime to get to the end point of automating the actual reply, the actual action that you take. That's where you have to get to. It won't be sufficient if it isn't realtime. I think it's a very interesting area, this contracts area. If you think about solutions for it, I would be going straight towards blockchain-type architectures and dynamic blockchain contracts that would have to be put in place. >> Peter: But they're not realtime. >> The contracts aren't realtime. The contracts will never be realtime, but the >> Accessed? access to the data and the understanding of what data is required. Those will be realtime. >> Well, we'll see. I mean, Ethereum's what? Every 12 seconds? >> Well. That's >> Everything gets updated? >> To me, that's good enough. >> Okay. >> That's realtime enough. It's not going to solve the problem of somebody >> Peter: It's not going to solve the problem at the edge. >> At the very edge, but it's certainly sufficient to solve the problem of contracts. >> Okay.
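A highly simplified illustration of the dynamic data-sharing contracts David gestures at: each access grant is appended to a hash-chained ledger, so the agreed terms, including derivative use and expiry, are tamper-evident. This is a toy ledger in plain Python, not Ethereum or any real smart-contract platform, and enforcement would still live off-chain.

```python
# Toy hash-chained ledger of data-sharing grants: a tamper-evident record of
# who may use which data, for what derivative purpose, until when.
# Plain-Python illustration -- not Ethereum or any real smart-contract platform.

import hashlib, json

class GrantLedger:
    def __init__(self):
        self.chain = [{"grant": "genesis", "prev": "0" * 64}]

    def _digest(self, block: dict) -> str:
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def add_grant(self, grantor, grantee, dataset, derivative_use, expires):
        self.chain.append({
            "grant": {"grantor": grantor, "grantee": grantee, "dataset": dataset,
                      "derivative_use": derivative_use, "expires": expires},
            "prev": self._digest(self.chain[-1]),   # link to the prior block
        })

    def verify(self) -> bool:
        """Editing any earlier grant breaks every later block's 'prev' hash."""
        return all(self.chain[i]["prev"] == self._digest(self.chain[i - 1])
                   for i in range(1, len(self.chain)))

ledger = GrantLedger()
ledger.add_grant("storage_oem", "platform_vendor", "telemetry/disk-health",
                 derivative_use="train failure-prediction models only",
                 expires="2021-12-31")
ledger.add_grant("platform_vendor", "analytics_partner", "models/failure-prediction",
                 derivative_use="aggregate benchmarking only",
                 expires="2021-06-30")
print(ledger.verify())   # True

# A retroactive edit to the first grant is detectable:
ledger.chain[1]["grant"]["derivative_use"] = "anything"
print(ledger.verify())   # False
```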
But, and I would add to that and say, in addition to having all this data available, let's go back like 10, 20 years and look at Cisco. A lot of their differentiation and what entrenched them was sort of universal familiarity with their admin interfaces, and they might not expose APIs in a way that would make it common across their competitors. But if you had data from them and a constrained number of other providers around which you would build, let's say, these modern big data applications. It's, if you constrain the problem, you can get to the depth first. >> Yeah, but Cisco is a great example of, it's an archetype for what I said earlier, that notion of an iron triangle. You had Cisco admins >> Yeah. that were certified to run Cisco gear and therefore had a strong incentive to ensure that more Cisco gear was purchased utilizing a Cisco command line interface that did incorporate a fair amount of automation for that Cisco gear, and it was almost impossible for a lot of companies to penetrate that tight arrangement between the Cisco admin that was certified, the Cisco gear, and the CLI. >> And the exact same thing happened with Oracle. The Oracle admin skillset was pervasive within large >> Peter: Happened with everybody. >> Yes, absolutely >> But, >> Peter: The only reason it didn't happen in the IBM mainframe, David, was because of a >> It did happen, yeah, >> Well, but it did happen, but governments stepped in and said, this violates antitrust. And IBM was forced by law, by court decree, to open up those interfaces. >> Yes. That's true. >> But are we going to see the same type of thing >> I think it's very interesting to see the shape of this market. When we look a little bit ahead. People like Amazon are going to have IaaS, they're going to be running applications. They are going to go for the depth way of doing things across, or which way around is it? >> Peter: The breadth. They're going to be end to end. >> But they will go depth in individual-- >> Components. For sure, but they will put together their own type of things for their services. >> Right. >> Equally, other players like Dell, for example, have a lot of different products. A lot of different components in a lot of different areas. They have to go piece by piece and put together a consortium of suppliers to them. Storage suppliers, chip suppliers, and put that together outside, and it's going to have to be a different type of solution that they put together. HP will have the same issue there. And also people like CA, for example, where we'll see an opportunity for them to come in again with great products, overseeing the whole of all of this data coming in. >> Peter: Oh, sure. Absolutely. >> So there's a lot of players who could be in this area. Microsoft, I missed out, of course they will have the two ends that they can combine together. >> Well, they may have an advantage that nobody else has-- >> Exactly. Yeah. because they're strong in both places. But I have Jim Kobielus. Let me check, are you there now? Do we got Jim back? >> Can you hear me? >> Peter: I can barely hear you, Jim. Could we bring Jim's volume up a little bit? So, Jim, I asked the question earlier, about we have the tooling for AI. We know how to get data. How to build models and how to apply the models in a broad brush way. And we're certainly starting to see that happen within the IT operations management world.
The ITOM world. But we don't yet know how we're going to write these contracts that are capable of better anticipating, of putting in place a regime that really describes the limits of data sharing, the limits of derivative use, et cetera. I argued, and here in the studio we generally agreed, that we still haven't figured that out, and that this is going to be one of the places where the tension, at least in the B2B world, between data availability and derivative use, and where you capture value and where those profits go, is going to be significant. But I want to get your take. Has the AI community >> Yeah. started figuring out how we're going to contractually handle obligations around data, data use, data sharing, data derivative use? >> The short answer is, no, they have not. The longer answer is, well, can you hear me, first of all? >> Peter: Barely. >> Okay. Should I keep talking? >> Yeah. Go ahead. >> Okay. The short answer is, no, the AI community has not addressed those IP protection issues. But there is a growing push in the AI community to leverage blockchain for such requirements, in terms of blockchains to store smart contracts related to downstream utilization of data and derivative models. But that's extraordinarily early on in its development, in terms of insight in the AI community, and in the blockchain community as well. In fact, one of the posts that I'm working on right now is looking at a company called 8base that's actually using blockchain to store all of those assets, those artifacts for the development lifecycle, along with the smart contracts to drive those downstream uses. So what I'm saying is that lots of smart people like yourselves are thinking about these problems, but there's no consensus, definitely, in the AI community for how to manage all those rights downstream.
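The pattern Jim describes, anchoring an artifact to an append-only record that binds usage terms to it, can be illustrated with a toy sketch. This is emphatically not how 8base or any real blockchain platform works; it only shows the general idea of hashing a dataset or model and checking a proposed downstream use against the terms recorded for that hash. All names and fields are hypothetical.

```python
import hashlib
import json
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class UsageTerms:
    """Downstream rights attached to one artifact (dataset or model)."""
    licensee: str
    permitted_uses: List[str]        # e.g. ["inference"], not ["retraining"]
    derivative_models_allowed: bool

@dataclass
class Ledger:
    """Stand-in for an append-only chain: artifact hash -> recorded terms."""
    entries: Dict[str, UsageTerms] = field(default_factory=dict)

    def register(self, artifact: bytes, terms: UsageTerms) -> str:
        digest = hashlib.sha256(artifact).hexdigest()
        self.entries[digest] = terms  # a real chain would append a signed tx
        return digest

    def use_is_permitted(self, artifact: bytes, use: str) -> bool:
        digest = hashlib.sha256(artifact).hexdigest()
        terms = self.entries.get(digest)
        return terms is not None and use in terms.permitted_uses

# Register a training dataset with inference-only downstream rights.
ledger = Ledger()
dataset = json.dumps({"rows": 10_000}).encode()  # hypothetical artifact
ledger.register(dataset, UsageTerms("ExampleOEM", ["inference"], False))

print(ledger.use_is_permitted(dataset, "inference"))   # True
print(ledger.use_is_permitted(dataset, "retraining"))  # False
```

The open question in the discussion above is exactly what the `UsageTerms` vocabulary should be; as Jim notes, there is no industry consensus on it yet.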
>> All right. So very quickly, Ralph Finos, if you're there. I want to get your perspective >> Yeah. on what this means for markets and market leadership. What do you think? How's this going to impact who the leaders are, who's likely to continue to grow and gain even more strength? What're your thoughts on this? >> Yeah. I think my perspective on this thing in the near term is to focus on simplification, and to focus on depth, because you can get return, you can get payback, for that kind of work, and it simplifies the overall picture, so when you're going broad you've got less of a problem to deal with, to link all these things together. So I'm going to go with the Shaker kind of perspective on the world, which is to make things simple, and to focus there. And I think the complexity of what we're talking about for breadth is too difficult to handle at this point in time. I don't see it happening any time in the near future. >> Although there are some companies, like Splunk, for example, that are doing a decent job of presenting more of a breadth approach, but they're not going deep into the various elements. So, George, really quick. Let's talk to you. >> I beg to disagree on that one. >> Peter: Oh! >> They actually built a platform that was originally breadth first. They built all these, essentially, forwarders, which could understand the formats of the output of all sorts of different devices and services. But then they started building what they called curated experiences, which is the equivalent of what we call depth first. They're doing it for IT service management. They're doing it for what's called user behavior analytics, which is a way of tracking bad actors or bad devices on a network. And they're going to be pumping out more of those. What's not clear yet is how they're going to integrate those, so that IT service management understands security and vice versa.
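George's breadth-then-depth distinction can be sketched in a few lines. This is not Splunk's actual architecture, just a toy illustration: a breadth-first layer of per-format parsers that normalize many sources into one event shape, and a depth-first "curated experience" that goes deep on a single domain. The formats, field names, and the suspicious-login rule are all hypothetical.

```python
from collections import Counter
from typing import Callable, Dict, List

# Breadth first: one lightweight parser per source format, so events from
# many kinds of devices and services land in a single common shape.
Parser = Callable[[str], dict]

parsers: Dict[str, Parser] = {
    # hypothetical formats, standing in for the many sources a forwarder handles
    "syslog": lambda line: {"device": line.split()[0], "event": line.split()[1]},
    "csv":    lambda line: dict(zip(["device", "event"], line.split(","))),
}

def ingest(fmt: str, lines: List[str]) -> List[dict]:
    return [parsers[fmt](line) for line in lines]

# Depth first: a "curated experience" that goes deep on one domain -- here a
# toy user-behavior check flagging devices with repeated failed logins.
def flag_suspicious(events: List[dict], threshold: int = 3) -> List[str]:
    failures = Counter(e["device"] for e in events if e["event"] == "login_failed")
    return [device for device, count in failures.items() if count >= threshold]

events = ingest("syslog", [
    "host-a login_failed", "host-a login_failed", "host-a login_failed",
    "host-b login_ok",
])
print(flag_suspicious(events))  # ['host-a']
```

The integration problem George flags is visible even here: the depth-first module only understands its own domain, so making, say, IT service management aware of these security findings would require a shared layer above both.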
>> And I think that's one of the key things, George: when we think about the roadmap, it's probably that security is going to be early on one of the things that gets addressed here. And again, it's not just security from a perimeter standpoint. Some people are calling it a software-based perimeter. Our perspective is the data's going to go everywhere, and ultimately, how do you sustain a zero-trust world where you know your data is going to be out in the clear? So what are you going to do about it? All right. So look. Let's wrap this one up. Jim Kobielus, let's give you the first Action Item. Jim, Action Item. >> Action Item. Wow. My Action Item on automation is to follow the stack of assets that drive automation and figure out your overall architecture for sharing out these assets. I think the core asset will remain orchestration models. I don't think predictive models in AI are a huge piece of the overall automation pie in terms of the logic. So just focus on building out and protecting and sharing and reusing your orchestration models. Those are critically important, in any domain, end to end or in specific automation domains. >> Peter: David Floyer, Action Item. >> So my Action Item is to acknowledge that the world of building your own automation yourself, around a whole lot of piece parts that you put together, is over. You won't have access to sufficient data. So enterprises must take a broad view of getting data, of getting components that both have data and give them data. Make contracts with people to share data, masked or whatever it is, and become part of a broader scheme that will allow them to meet the automation requirements of the 21st century. >> Ralph Finos, Action Item. >> Yeah. Again, I would reiterate the importance of keeping it simple, taking care of the depth questions and moving forward from there. The complexity is enormous, and-- >> Peter: George Gilbert, Action Item. >> I say, start with what customers always start with with a new technology, which is a constrained environment like a pilot, and there are two areas that are potentially high return. One is big data, where it's been a multi-vendor component mix, and a mess. So you take that, constrain it, and make that a depth-first approach in the cloud, where there is data to manage that. And the second one is security, where we now have more and more trained applications just for that. I say, don't start with a platform. Start with those solutions and then start adding more solutions around that. >> All right. Great. So here's our overall Action Item. The question of automation, or the roadmap to automation, is crucial for multiple reasons, but one of the most important ones is that it's inconceivable to us to envision how a business can institute even more complex applications if we don't have a way of improving the degree of automation on the underlying infrastructure. How this is going to play out, we're not exactly sure, but we do think that there are a few principles that are going to be important, that users have to focus on. Number one is data. Be very clear that there is value in your data, both to you as well as to your suppliers, and as you think about writing contracts, don't write contracts that are focused on a product now. Focus on even that product as a service over time, where you are sharing data back and forth in addition to getting some return out of whatever assets you've put in place. And make sure that the negotiations specifically acknowledge the value of that data to your suppliers as well. Number two, there is certainly going to be a scale question here, a volume question here. And as we think about it, a lot of the new approaches to this notion of automation are going to come out of the cloud vendors. Once again, the cloud vendors are articulating what the overall model is going to look like, what that cloud experience is going to look like. And it's going to be a challenge to other suppliers who are providing an on-premises, true private cloud and edge orientation, where the data must sometimes live, not just because they want it to, but because the data requires it, to be able to reflect that cloud operating model. And expect, ultimately, that your suppliers also are going to have to have very clear contractual relationships with the cloud players and each other for how that data gets shared. Ultimately, however, we think it's crucially important that any CIO recognize that the existing environment that they have right now is not converged. The existing environment today remains siloed among operators, suppliers of technology, and suppliers of automation capabilities, and breaking that up is going to be crucial, not only to achieving automation objectives, but to achieving converged infrastructure, hyperconverged infrastructure, and multi-cloud arrangements, including private cloud, true private cloud, and the cloud itself. And this is going to be a management challenge that goes way beyond just products and technology, to actually incorporating how you think about your shop, how it's organized, how you institutionalize the work that the business requires, and therefore what you identify as the tasks that will be first to be automated. Our expectation: security is going to be early on. Why? Because your CEO and your board of directors are going to demand it. So think about how automation can be improved and enhanced through a security lens, but do so in a way that ensures that over time you can bring new capabilities on, with a depth-first approach at least, to the breadth that you need within your shop and within your business, your digital business, to achieve the success and the results that you want. Okay. Once again, I want to thank David Floyer and George Gilbert here in the studio with us. On the phone, Ralph Finos and Jim Kobielus. Couldn't get Neil Raiden in today, sorry Neil. And I am Peter Burris, and this has been an Action Item. Talk to you again soon. (upbeat digital music)

Published Date : Apr 27 2018
