

Bob Muglia, George Gilbert & Tristan Handy | How Supercloud Will Support a New Class of Data Apps


 

(upbeat music) >> Hello, everybody. This is Dave Vellante. Welcome back to Supercloud2, where we're exploring the intersection of data analytics and the future of cloud. In this segment, we're going to look at how the Supercloud will support a new class of applications, not just workloads that run on multiple clouds, but rather a new breed of apps that can orchestrate things in the real world. Think Uber for many types of businesses. These applications, they're not about codifying forms or business processes. They're about orchestrating people, places, and things in a business ecosystem. And I'm pleased to welcome my colleague and friend, George Gilbert, former Gartner analyst, Wikibon market analyst, former equities analyst, as my co-host. And we're thrilled to have Tristan Handy, who's the founder and CEO of DBT Labs, and Bob Muglia, who's the former President of Microsoft's Enterprise business and former CEO of Snowflake. Welcome all, gentlemen. Thank you for coming on the program. >> Good to be here. >> Thanks for having us. >> Hey, look, I'm going to start actually with the SuperCloud because both Tristan and Bob, you've read the definition. Thank you for doing that. And Bob, you have some really good input, some thoughts on maybe some of the drawbacks and how we can advance this. So what are your thoughts in reading that definition around SuperCloud? >> Well, I thought first of all that you did a very good job of laying out all of the characteristics of it and helping to define it overall. But I do think it can be tightened a bit, and I think it's helpful to do it in as short a way as possible. And so in the last day I've spent a little time thinking about how to take it and write a crisp definition. And here's my go at it. This is one day old, so gimme a break if it's going to change. And of course we have to follow the industry, and whatever the industry decides, but let's give this a try. 
So in the way I think you're defining it, what I would say is a SuperCloud is a platform that provides programmatically consistent services hosted on heterogeneous cloud providers. >> Boom. Nice. Okay, great. I'm going to go back and read the script on that one and tighten that up a bit. Thank you for spending the time thinking about that. Tristan, would you add anything to that or what are your thoughts on the whole SuperCloud concept? >> So as I read through this, I fully realize that we need a word for this thing because I have experienced the inability to talk about it as well. But for many of us who have been living in the Confluent, Snowflake, you know, this world of like new infrastructure, this seems fairly uncontroversial. Like I read through this, and I'm just like, yeah, this is like the world I've been living in for years now. And I noticed that you called out Snowflake for being an example of this, but I think that there are like many folks, myself included, for whom this world like fully exists today. >> Yeah, I think that's a fair, I dunno if it's criticism, but people observe, well, what's the big deal here? It's just kind of what we're living in today. It reminds me of, you know, Tim Berners-Lee saying, well, this is what the internet was supposed to be. It was supposed to be Web 2.0, so maybe this is what multi-cloud was supposed to be. Let's turn our attention to apps. Bob first and then go to Tristan. Bob, what are data apps to you? When people talk about data products, is that what they mean? Are we talking about something more, different? What are data apps to you? >> Well, to understand data apps, it's useful to contrast them to something, and I just use the simple term people apps. I know that's a little bit awkward, but it's clear. And almost everything we work with, almost every application that we're familiar with, be it email or Salesforce or any consumer app, those are applications that are targeted at responding to people. 
You know, in contrast, a data application reacts to changes in data and uses some set of analytic services to autonomously take action. So where applications that we're familiar with respond to people, data apps respond to changes in data. And they both do something, but they do it for different reasons. >> Got it. You know, George, you and I were talking about, you know, it comes back to SuperCloud, broad definition, narrow definition. Tristan, how do you see it? Do you see it the same way? Do you have a different take on data apps? >> Oh, geez. This is like a conversation that I don't know has an end. I write a substack, and there's like this little community of people who all write substacks. We argue with each other about these kinds of things. Like, you know, there are as many different takes on this question as you can find, but the way that I think about it is that data products are atomic units of functionality that are fundamentally data driven in nature. So a data product can be as simple as an interactive dashboard that has like actually had design thinking put into it and serves a particular user group and has like actually gone through kind of a product development life cycle. And then a data app or data application is a kind of cohesive end-to-end experience that often encompasses like many different data products. So from my perspective there, this is very, very related to the way that these things are produced, the kinds of experiences that they provide, that like data innovates every product that we've been building in, you know, software engineering for, you know, as long as there have been computers. >> You know, Zhamak Dehghani oftentimes uses the, you know, she doesn't name Spotify, but I think it's Spotify as that kind of example she uses. But I wonder if we can maybe try to take some examples. 
If you take, like George, if you take a CRM system today, you're inputting leads, you got opportunities, it's driven by humans, they're really inputting the data, and then you got this system that kind of orchestrates the business process, like runs a forecast. But in this data driven future, are we talking about the app itself pulling data in and automatically looking at data from the transaction systems, the call center, the supply chain and then actually building a plan? George, is that how you see it? >> I go back to the example of Uber, which may not be the most sophisticated data app that we build now, but it was like one of the first where you do have users interacting with their devices as riders trying to call a car or driver. But the app then looks at the location of all the drivers in proximity, and it matches a driver to a rider. It calculates an ETA to the rider. It calculates an ETA then to the destination, and it calculates a price. Those are all activities that are done sort of autonomously that don't require a human to type something into a form. The application is using changes in data to calculate an analytic product and then to operationalize that, to assign the driver, to, you know, calculate a price. That's an example of what I would think of as a data app. And my question then I guess for Tristan is if we don't have all the pieces in place for sort of mainstream companies to build those sorts of apps easily yet, like how would we get started? What's the role of a semantic layer in making that easier for mainstream companies to build? And how do we get started, you know, say with metrics? How does that take us down that path? >> So what we've seen in the past, I dunno, decade or so, is that one of the most successful business models in infrastructure is taking hard things and rolling 'em up behind APIs. 
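George's Uber walkthrough above, match the nearest driver, compute ETAs, price the trip, can be sketched as a few functions over location data. Everything below is hypothetical and purely illustrative: the field names, the flat per-kilometer pricing, and the fixed average speed are all assumptions, not how any real dispatch system works.

```python
import math

def haversine_km(a, b):
    # Approximate great-circle distance between two (lat, lon) points in km.
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def dispatch(rider, destination, drivers, speed_kmh=30, base_fare=2.5, per_km=1.2):
    # The "data app" step: no form is filled in. The app reacts to location
    # data -- it matches the nearest driver, then derives both ETAs and a
    # price from distance alone (toy constants, assumed for illustration).
    driver = min(drivers, key=lambda d: haversine_km(d["loc"], rider))
    pickup_km = haversine_km(driver["loc"], rider)
    trip_km = haversine_km(rider, destination)
    return {
        "driver": driver["id"],
        "eta_pickup_min": 60 * pickup_km / speed_kmh,
        "eta_dest_min": 60 * (pickup_km + trip_km) / speed_kmh,
        "price": base_fare + per_km * trip_km,
    }
```

The point of the sketch is the shape of the loop: every output is computed from changes in data, with no human typing anything in.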
You take messaging, you take payments, and you all of a sudden increase the capability of kind of your median application developer. And you say, you know, previously you were spending all your time being focused on how do you accept credit cards, how do you send SMS messages, and now you can focus on your business logic, and just create the thing. Interestingly, one of the things that we still don't know how to API-ify is concepts that live inside of your data warehouse, inside of your data lake. These are core concepts that, you know, you would imagine that the business would be able to create applications around very easily, but in fact that's not the case. It's actually quite challenging, and involves a lot of data engineering and pipeline work to make these available. And so if you really want to make it very easy to create some of these data experiences for users, you need to have an ability to describe these metrics and then to turn them into APIs to make them accessible to application developers who have literally no idea how they're calculated behind the scenes, and they don't need to. >> So how rich can that API layer grow if you start with metric definitions that you've defined? And DBT has, you know, the metric, the dimensions, the time grain, things like that, that's a well scoped sort of API that people can work within. How much can you extend that to, say, non-calculated business rules or governance information like data reliability rules, things like that, or even, you know, features for an AI/ML feature store. In other words, you started pragmatically, but how far can you grow? >> Bob is waiting with bated breath to answer this question. Just really quickly, I think that we as a company and DBT as a product tend to be very pragmatic. We try to release the simplest possible version of a thing, get it out there, and see if people use it. But the concept of a metric is really just a first landing pad. 
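Tristan's point, declare a metric once, then expose it so application developers never see the calculation, can be sketched roughly like this. This is a toy compiler under assumed names, not the actual DBT semantic layer or any real API; the `Metric` fields and the generated SQL shape are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Metric:
    name: str        # what callers ask for
    expression: str  # the aggregation, hidden from callers
    table: str
    time_column: str

def compile_metric(metric: Metric, grain: str = "day", dimensions: tuple = ()) -> str:
    # Translate the declarative definition into SQL. The caller supplies only
    # a metric, a time grain, and optional dimensions -- never the formula.
    cols = [f"date_trunc('{grain}', {metric.time_column}) AS period", *dimensions]
    group_by = ", ".join(str(i + 1) for i in range(len(cols)))
    return (f"SELECT {', '.join(cols)}, {metric.expression} AS {metric.name} "
            f"FROM {metric.table} GROUP BY {group_by}")

# A hypothetical metric definition, the kind of thing an analytics team owns once.
revenue = Metric("revenue", "sum(amount)", "orders", "ordered_at")
```

A request like `compile_metric(revenue, grain="month", dimensions=("region",))` yields one canonical query, which is the whole value: every consumer computes revenue the same way, without knowing how it's calculated behind the scenes.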
Really, there is a physical manifestation of the data and then there's a logical manifestation of the data. And what we're trying to do here is make it very easy to access the logical manifestation of the data, and a metric is one way to look at that. Maybe an entity, a customer, a user is another way to look at that. And I'm sure that there will be more kinds of logical structures as well. >> So, Bob, chime in on this. You know, what are your thoughts on the right architecture behind this, and how do we get there? >> Yeah, well first of all, I think one of the ways we get there is by what companies like DBT Labs and Tristan are doing, which is incrementally building on the modern data stack and extending that to add a semantic layer that describes the data. Now the way I tend to think about this is as a fairly major shift in the way we think about writing applications, which is moving from today's code-first approach to a world that is model driven. And I think that's what the big change will be: where today we think about data, we think about writing code, and we use that to produce APIs, as Tristan said, which encapsulate those things together in some form of services that are useful for organizations. And that idea of encapsulation is never going to go away. That concept of an API is incredibly useful and will exist well into the future. But what I think will happen is that in the next 10 years, we're going to move to a world where organizations are defining models first of their data, but then ultimately of their business process, their entire business process. Now the concept of a model driven world is a very old concept. I mean, I first started thinking about this and playing around with some early model driven tools, probably before Tristan was born, in the early 1980s. And those tools didn't work because the semantics associated with executing the model were too complex to be written in anything other than a procedural language. 
We're now reaching a time where that is changing, and you see it everywhere. You see it first of all in the world of machine learning and machine learning models, which are taking over more and more of what applications are doing. And I think that's an incredibly important step. And learned models are an important part of what people will do. But if you look at the world today, I will claim that we've always been modeling. Modeling has existed in computers since there have been integrated circuits and any form of computers. But what we do is what I would call implicit modeling, which means that the model is written on a whiteboard. It's in a bunch of Slack messages. It's on a set of napkins and in conversations that happen during Zoom calls. That's where the model gets defined today. It's implicit. To the extent there is one in the system, it is hard coded inside application logic that exists across many applications, with humans being the glue that connects those models together. And really there is no central place you can go to understand the full attributes of the business, all of the business rules, all of the business logic, the business data. That's going to change in the next 10 years. And we'll start to have a world where we can define models about what we're doing. Now in the short run, the most important models to build are data models that describe all of the attributes of the data and their relationships. And that's work that DBT Labs is doing. A number of other companies are doing that. We're taking steps along that way with catalogs. People are trying to build more complete ontologies associated with that. The underlying infrastructure is still super, super nascent. But what I think we'll see is this infrastructure that exists today that's building learned models in the form of machine learning programs. 
You know, some of these incredible machine learning programs in foundation models like GPT and DALL-E and all of the things that are happening in these global scale models, but also all of that needs to get applied to the domains that are appropriate for a business. And I think we'll see the infrastructure developing for that, that can take this concept of learned models and put it together with more explicitly defined models. And this is where the concept of knowledge graphs comes in, and then the technology that underlies that to actually implement and execute it, which I believe is relational knowledge graphs. >> Oh, oh wow. There's a lot to unpack there. So let me ask the Columbo question. Tristan, we've been making fun of your youth. We're just jealous. Columbo, I'll explain it offline maybe. >> I watch Columbo. >> Okay. All right, good. So but today if you think about the application stack and the data stack, which is largely an analytics pipeline, they're separate. Do those worlds have to come together in order to achieve Bob's vision? When I talk to practitioners about that, they're like, well, I don't want to complexify the application stack 'cause the data stack today is so, you know, hard to manage. But do those worlds have to come together? And you know, through that model, I guess abstraction or translation, that Bob was just describing, how do you guys think about that? Who wants to take that? >> I think it's inevitable that data and AI are going to become closer together. I think that the infrastructure there has been moving in that direction for a long time, whether you want to use the Lakehouse portmanteau or not. There's also a next generation of data tech that is still in the early stages of being developed. There's a company that I love that is essentially Cross Cloud Lambda, and it's just a wonderful abstraction for computing. 
So I think that, you know, people have been predicting that these worlds are going to come together for a while. a16z wrote a great post on this back in, I think, 2020, predicting this, and I've been predicting this since then. But what's not clear is the timeline, but I think that this is still just as inevitable as it's been. >> Who's that who does Cross Cloud? >> Let me follow up on. >> Who's that, Tristan, that does Cross Cloud Lambda? Can you name names? >> Oh, they're called Modal Labs. >> Modal Labs, yeah, of course. All right, go ahead, George. >> Let me ask about this vision of trying to put the semantics, or the code that represents the business, with the data. It gets us to a world that's sort of more data centric, where data's not locked inside or behind the APIs of different applications so that we don't have silos. But at the same time, Bob, I've heard you talk about building the semantics gradually into a knowledge graph that maybe grows out of a data catalog. And in the vision of getting to that point, essentially the enterprise's metadata, and then the semantics you're going to add onto it, are really stored in something that's separate from the underlying operational and analytic data. So at the same time then, why couldn't we gradually build semantics beyond the metric definitions that DBT has today? In other words, you build more and more of the semantics in some layer that DBT defines and that sits above the data management layer, but any requests for data have to go through the DBT layer. Is that a workable alternative? Or what type of limitations would you face? >> Well, I think the way the world will evolve is to start with the modern data stack, which is, you know, operational applications going through a data pipeline into some form of data lake, data warehouse, the Lakehouse, whatever you want to call it. And then, you know, this wide variety of analytics services that are built together. 
To the point that Tristan made about machine learning and data coming together, you see that in every major data cloud provider. Snowflake certainly now supports Python and Java. Databricks is of course building their data warehouse. Certainly Google, Microsoft and Amazon are doing very, very similar things in terms of building complete solutions that bring together an analytics stack, one that typically supports languages like Python, together with the data stack and the data warehouse. I mean, all of those things are going to evolve, and they're not going to go away, because that infrastructure is relatively new. It's just being deployed by companies, and it solves the problem of working with petabytes of data if you need to work with petabytes of data, and nothing will do that for a long time. What's missing is a layer that understands and can model the semantics of all of this. And if you want to talk about all the semantics of even the data, you need to think about all of the relationships. You need to think about how these things connect together. And unfortunately, there really is no platform today. None of our existing platforms are ultimately sufficient for this. It was interesting, I was just talking to a customer yesterday, you know, a large financial organization that is building out these semantic layers. They're further along than many companies are. And you know, I asked what they're building it on, and, not surprisingly, they're using combinations of some form of text-based search together with a document-oriented database. In this case it was Cosmos. And that really is kind of the state of the art right now. And yet those products were not built for this. They can't really manage the complicated relationships that are required. They can't issue the queries that are required. And so a new generation of database needs to be developed. 
And fortunately, you know, that is happening. The world is developing a new set of relational algorithms that will be able to work with hundreds of different relations. If you look at a SQL database like Snowflake or BigQuery, you know, you get tens of different joins coming together, and that query is going to take a really long time. Well, fortunately, technology is evolving, and it's possible with new join algorithms, worst-case optimal join algorithms, as they're called, where you can join hundreds of different relations together and run semantic queries that you simply couldn't run before. Now that technology is nascent, but it's really important, and I think that will be a requirement to have this semantic layer reach its full potential. In the meantime, Tristan can do a lot of great things by building up on what he's got today and solve some problems that are very real. But in the long run I think we'll see a new set of databases to support these models. >> So Tristan, you got to respond to that, right? So take the example of Snowflake. We know it doesn't deal well with complex joins, but they've got big aspirations. They're building an ecosystem to really solve some of these problems. Tristan, you guys are part of that ecosystem, and others, but please, your thoughts on what Bob just shared. >> Bob, I'm curious, I would have no idea what you were talking about except that you introduced me to somebody who gave me a demo of a thing. Do you not want to go there right now? >> No, I can talk about it. I mean, we can talk about it. Look, the company I've been working with is Relational AI, and they're doing this work to actually, first of all, work across the industry with academics and researchers, you know, across over 20 different research institutions across the world, to develop this new set of algorithms. They're all fully published, just like the underlying algorithms that are used by SQL databases are. 
If you look today, every single SQL database uses a similar set of relational algorithms underneath that. And those algorithms actually go back to System R and what IBM developed in the 1970s. There's an opportunity for us to build something new that allows you, for example, instead of grouping data together in tables, to treat all data as individual relations, you know, a key and a set of values, and then be able to perform purely relational operations on it. If you go back to Codd and what he wrote, he defined two things. He defined a relational calculus and a relational algebra. And essentially SQL is a query language that is translated by the query processor into relational algebra. However, the calculus of SQL is not even close to the full semantics of the relational mathematics. And it's possible to have systems that can do everything and that can store all of the attributes of the data model, or ultimately the business model, in a form that is much more natural to work with. >> So here's like my short answer to this. I think that we're dealing in different time scales. I think that there is actually a tremendous amount of work to do in the semantic layer using the kind of technology that we have on the ground today. And I think that there's, I don't know, let's say five years of like really solid work that there is to do for the entire industry, if not more. But the wonderful thing about DBT is that it's independent of what the compute substrate is beneath it. And so if we develop new platforms, new capabilities to describe semantic models in finer-grained detail, more procedurally, then we're going to support that too. And so I'm excited about all of it. 
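Bob's worst-case optimal join point can be made concrete with the classic triangle query. A pairwise plan joins two relations first and can materialize a huge intermediate result; a worst-case optimal strategy instead binds one variable at a time, intersecting candidate values as it goes. The sketch below is a toy in-memory illustration of that attribute-at-a-time idea, not any particular engine's implementation:

```python
def triangle_join(R, S, T):
    # Triangle query Q(a,b,c) :- R(a,b), S(b,c), T(a,c), evaluated one
    # variable at a time. Each variable is restricted by intersecting the
    # values every relevant relation allows, so no pairwise intermediate
    # result (e.g. R join S on b) is ever materialized.
    out = []
    for a in {x for x, _ in R} & {x for x, _ in T}:        # bind a
        bs = {b for x, b in R if x == a} & {b for b, _ in S}  # bind b given a
        for b in sorted(bs):
            cs = ({c for y, c in S if y == b}                 # bind c given a, b
                  & {c for x, c in T if x == a})
            out.extend((a, b, c) for c in sorted(cs))
    return sorted(out)
```

On skewed inputs this is where the asymptotic win comes from: the intersection at each step caps the work at the size of the final output's worst-case bound, rather than the size of the largest pairwise join.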
>> Yeah, so interpreting that short answer, you're basically saying, 'cause Bob was just kind of pointing to you as incremental, but you're saying, yeah, okay, we're applying it for incremental use cases today, but we can accommodate a much broader set of examples in the future. Is that correct, Tristan? >> I think you're using the word incremental as if it's not good, but I think that incremental is great. We have always been about applying incremental improvement on top of what exists today, but allowing practitioners to like use different workflows to actually make use of that technology. So yeah, we are a very incremental company. We're going to continue being that way. >> Well, I think Bob was using incremental as a pejorative. I mean, but to your point... >> No, I don't think so. I want to stop that. No, I don't think it's pejorative at all. I think incremental is usually the most successful path. >> Yes, of course. >> In my experience. >> We agree on that. >> Having tried many, many moonshot things in my Microsoft days, I can tell you that being incremental is a good thing. And I'm a very big believer that that's the way the world's going to go. I just think that there is a need for us to build something new, and that ultimately that will be the solution. Now you can argue whether it's two years, three years, five years, or 10 years, but I'd be shocked if it didn't happen in 10 years. >> Yeah, so we all agree that incremental is less disruptive. Boom, but Tristan, I think I'm inferring that you believe you have the architecture to accommodate Bob's vision, and Bob, I'm inferring from your comments that maybe you don't think that's the case, but please. >> No, no, no. So Bob, let me put words in your mouth, and you tell me if you disagree: DBT is completely useless in a world where a large scale cloud data warehouse doesn't exist. 
We were not able to bring the power of Python to our users until these platforms started supporting Python. Like DBT is a layer on top of large scale computing platforms. And to the extent that those platforms extend their functionality to bring more capabilities, we will also service those capabilities. >> Let me try and bridge the two. >> Yeah, so Bob, do you concur with what Tristan just said? >> Absolutely, I mean there's nothing to argue with in what Tristan just said. >> I wanted. >> And it's what he's doing. I believe he'll continue to do it, and I think it's a very good thing for the industry. You know, I'm just simply saying that on top of that, I would like to provide Tristan, and all of those who are following similar paths to him, with a new type of database that can actually solve these problems in a much more architected way. And when I talk about something like Mongo or Cosmos together with Elastic, you're using Elastic as the join engine, okay. That's the purpose of it. It becomes a poor man's join engine. And I kind of go, I know there's a better answer than that. I know there is, but that's kind of where the state of the art is right now. >> George, we got to wrap it. So give us the last word here. Go ahead, George. >> Okay, I think there's a way to tie together what Tristan and Bob are both talking about, and I want them to validate it, which is that for five years, or some number of years, we're going to be adding more and more semantics to the operational and analytic data that we have, starting with metric definitions. My question is for Bob: as DBT accumulates more and more of those semantics for different enterprises, can that layer not run on top of a relational knowledge graph? And what would we lose by having the knowledge graph store sort of the joins, all the complex relationships among the data, but having the semantics in the DBT layer? 
>> Okay, I think first of all that DBT will be an environment where many of these semantics are defined. The question we're asking is how are they stored and how are they processed? And what I predict will happen is that over time, as companies like DBT begin to build more and more richness into their semantic layer, they will begin to experience challenges where customers want to run queries, they want to ask questions, they want to use this for things where the underlying infrastructure becomes an obstacle. I mean, this has always happened in history, right? I mean, you see major advances in computer science when the data model changes. And I think we're on the verge of a very significant change in the way data is stored and structured, or at least metadata is stored and structured. Again, I'm not saying that anytime in the next 10 years SQL is going to go away. In fact, more SQL will be written in the future than has been written in the past. And those platforms will mature to become the engines, the slicer-dicers of data. I mean, that's what they are today. They're incredibly powerful at working with large amounts of data, and that infrastructure is maturing very rapidly. What is not maturing is the infrastructure to handle all of the metadata and the semantics that that requires. And that's where I say knowledge graphs are what I believe will be the solution to that. >> But Tristan, bring us home here. It sounds like, let me posit this: whatever happens in the future, we're going to leverage the vast system that the cloud has become, what we're talking about as a supercloud, sort of where data lives irrespective of physical location. We're going to have to tap that data. It's not necessarily going to be in one place, but give us your final thoughts, please. >> 100% agree. I think that the data is going to live everywhere. 
It is the responsibility of both the metadata systems and the data processing engines themselves to make sure that we can join data across cloud providers, that we can join data across different physical regions, and that we as practitioners are going to kind of start forgetting about details like that. And we're going to start thinking more about how we want to arrange our teams, and how the tooling that we use supports our team structures. And that's when data mesh, I think, really starts to get very, very critical as a concept. >> Guys, great conversation. It was really awesome to have you. I can't thank you enough for spending time with us. Really appreciate it. >> Thanks a lot. >> All right. This is Dave Vellante for George Gilbert, John Furrier, and the entire Cube community. Keep it right there for more content. You're watching SuperCloud2. (upbeat music)

Published Date: Jan 4, 2023



PJ Kirner, Illumio | AWS re:Inforce 2022


 

(upbeat music) >> Hi, everybody. We're wrapping up day two of AWS re:Inforce 2022. This is theCUBE, my name is Dave Vellante. And one of the folks that we featured, one of the companies that we featured in the AWS startup showcase season two, episode four, was Illumio. And of course they're here at this security-themed event. PJ Kirner is CTO and Co-Founder of Illumio. Great to see you, welcome back to theCUBE. >> Thanks for having me. >> I always like to ask co-founders, people with co-founder in their titles, to go back to why they started the company. Let's go back to 2013. Why'd you start the company? >> Absolutely. Because back in 2013, one of the things that we saw as technology trends, and it was mostly AWS, was that there were really three things. One was dynamic workloads. People were putting workloads into production faster and faster. You talk about auto scale groups, and now you talk about containers. Things were getting faster and faster in terms of compute. The second thing was applications were getting more connected, right? The Netflix architecture is one that defined that kind of extreme example of hyperconnectivity, but applications generally, we'd call it the API economy or whatever, were getting more connected. And the third problem back in 2013 was the problem of lateral movement. At that point it was more around nation-state actors and APTs that were in those environments for a lot of those customers. So those three trends were kind of, what do we need to do in security differently? And that's how Illumio started. >> So, okay, you say nation state. That's obviously changed, and the ROI for hackers has become pretty good. And I guess your job is to reduce that ROI. But so what's the relationship, PJ, between the API economy you talked about and that lateral movement? Do they kind of go hand in hand? >> They do.
I think one thing that we have as a mission, and I think it's really important to understand, is to prevent breaches from becoming cyber disasters, right? And I use this metaphor of kind of the submarine. If you think about how submarines are built, they're built with watertight compartments inside. So when there is a physical breach, right, what happens? You get a torpedo or whatever, and it comes through the hull, and you close off that compartment. There are redundant systems in place, but you close off that compartment; that one small thing you've lost, but the whole ship hasn't gone down and you've survived. That's physical resiliency, and those same kinds of techniques, segmentation and compartmentalization inside your environments, are what make good cyber resiliency. So prevent it from becoming a disaster. >> So you bring that micro-segmentation analogy, the submarine analogy, to logical security, correct? >> Absolutely, yes. >> So that was your idea in 2013. Now we fast forward to 2022. It's no longer just nation states; things like ransomware are top of mind. I mean, everybody's worried about what happened with SolarWinds and Log4j and on and on. So what's the mindset of the CISO today? >> I think you said it right. So ransomware, because if you think about the CIA triad, confidentiality, integrity, availability, what does ransomware really do? It attacks the availability problem, right? If you lock up all your laptops and can't actually do business anymore, you have an availability problem. They might not have stolen your data, but they locked it up and you can't do business; maybe you restore from backups. So that availability problem has made it more visible to CEOs and board-level people. And so they've been talking about ransomware as a problem.
And so that has given the CISO more dollars and more authority to sort of attack that problem. And lateral movement is the primary way that ransomware gets around and becomes a disaster, as opposed to just locking up one machine. When you lock up your entire environment, and that's where some of the fear around Colonial Pipeline came in, that's when the disaster comes into play, and you want to be avoiding that. >> Describe in more detail what you mean by lateral movement. I think it's implied, but you enter at one point, and then instead of going directly for the asset you're after, you're traversing the network, you're traversing other assets. Maybe you could describe that. >> Yeah, I mean, often what happens is there's an initial point of breach. Someone has a password, or somebody clicked on a phishing link or something, and you have a compromise into that environment, right? And you might be compromised into a low-level place that doesn't have a lot of data or is not worthwhile. Then you have to get from that place to data that is actually valuable, and that's where lateral movement comes into play. But also, you bring up a good point about lateral movement prevention tools. One way we've framed some research around segmentation is: imagine putting up a maze inside your data center or cloud, right? So how the attacker gets from that initial breach to the crown jewels takes a lot longer when you have a segmented environment, as opposed to a very flat network, where you just go from there to find that asset. >> Hence, you increase the denominator in the ROI equation, and that lowers the value for the hacker. They go elsewhere. >> It is economic, you're right, it's all about economics. Time to target is what some of our research calls it. So if you're a quick time to target, it's much easier for the hacker to get that value.
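The maze idea above can be made concrete with a toy reachability model. This is an editorial sketch, not Illumio's actual engine: the hosts and allowed flows are hypothetical, and segmentation is modeled simply as removing traversable edges from the network graph.

```python
from collections import deque

def hops_to_target(edges, start, target):
    """Breadth-first search over an undirected reachability graph.
    Returns the attacker's shortest hop count, or None if contained."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == target:
            return dist
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None  # no path: lateral movement is contained

hosts = ["web", "dev-test", "billing", "crown-jewels"]

# Flat network: every workload can reach every other workload.
flat = [(a, b) for i, a in enumerate(hosts) for b in hosts[i + 1:]]

# Segmented: only the explicitly allowed application flows remain.
segmented = [("web", "billing"), ("billing", "crown-jewels")]

print(hops_to_target(flat, "dev-test", "crown-jewels"))       # 1
print(hops_to_target(segmented, "dev-test", "crown-jewels"))  # None
print(hops_to_target(segmented, "web", "crown-jewels"))       # 2
```

Segmentation either removes the attacker's path entirely or lengthens it, which is exactly the time-to-target economics being described.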
If it's a long time to target, they're going to get frustrated, they're going to stop, and it might not be economically viable. It's like the, you only have to run faster than the-- >> The two people with the bear chasing you, right. (laughs) Let's talk about zero trust. It's a topic that prior to the pandemic a lot of people thought was a buzzword. I have said it's actually become a mandate. Having said that, others, AWS in particular, kind of rolled their eyes and said, ah, we've always been zero trust. They were sort of forced into the discussion. What's your point of view on zero trust? Is it a buzzword? Does it have meaning? What is that meaning to Illumio? >> Well, for me there are actually two really important concepts. Zero trust is a security philosophy, and one part is the idea of least privilege. That's not a new idea. So when AWS says they've done it, they have embraced least privilege; a lot of good systems that have been built from scratch do. But not everybody has least-privilege controls everywhere. Secondly, least privilege is not a one-time thing. It is about continuous monitoring. People leave the company, applications get shut down; you need to shut down that access to continuously maintain that least-privilege stance. The other part that I think is really important, and has come more recently, is the assume-breach mentality. Assume breach is where you assume the attacker is already in, they've already clicked. Well, I mean, you still should probably prevent people from clicking on the bad links, but from a security practitioner's point of view, assume it has already happened, right? They're already inside. And then what do you have to do? Back to what I was saying about setting up that maze ahead of time, right.
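The continuous-monitoring point above, that access must be shut down when people leave or applications are decommissioned, can be sketched as a periodic access review. The principals, resources, and idle threshold below are hypothetical, purely to illustrate the loop:

```python
from dataclasses import dataclass

@dataclass
class Grant:
    principal: str      # user or service identity holding the access
    resource: str       # what it can touch
    idle_days: int      # days since the grant was last exercised

def grants_to_revoke(grants, active_principals, max_idle_days=90):
    """Flag grants that violate least privilege over time: the principal
    is gone, or the access has sat unused past the idle threshold."""
    return [g for g in grants
            if g.principal not in active_principals or g.idle_days > max_idle_days]

grants = [
    Grant("alice", "payments-db", idle_days=3),
    Grant("bob", "payments-db", idle_days=12),     # bob left the company
    Grant("report-svc", "hr-db", idle_days=200),   # app shut down months ago
]
flagged = grants_to_revoke(grants, active_principals={"alice", "report-svc"})
print([g.principal for g in flagged])  # ['bob', 'report-svc']
```

Run continuously rather than once, this is the difference between declaring least privilege and actually maintaining it.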
To increase that time to target, that's something you have to do if you assume breach, rather than thinking a harder shell on my submarine is going to be the way I survive, right? So that mentality, I will say, is a new and really important part of a zero trust philosophy. >> Yeah, so this is interesting, because in the old days, I don't know, a decade-plus ago, failure meant you got fired, breach meant you got fired, so nobody wanted to talk about it. And then of course that mentality had to change, 'cause everybody's getting breached, and along came this idea of least privilege. So in other words, if someone, or a machine, is not explicitly authorized to access an asset, they are not allowed; it's denied. As Frank Slootman would say, if there's doubt, there's no doubt. Is that right? >> It is. I mean, think about it back to the disaster versus the breach. Imagine they did get into an application. LAMP stacks will have vulnerabilities from now to the end of time, and people will get in. But what if you got in through a low-value asset, 'cause these are some of the stories, you got in through a low-value asset and you were contained, and you had access to that low-value data. Let's say you even locked it up, or you stole it all. It's not that important to the customer. That's different than when you pivot from that low-value asset into high-value assets, where it becomes much more catastrophic for those customers. So that kind of prevention is important. >> What do you make of this... couple things. We've heard a lot about encrypt everything. In the old days, you'd love to encrypt everything, but there was always a performance hit; these days we're hearing encrypt everything. John Furrier asked me the other day, okay, we're hearing about encrypting data at rest. What about data in motion?
Now you hear about confidential computing and Nitro, and they're actually encrypting data in the flow. What do you make of that whole confidential computing effort down at the semiconductor level, where they're actually doing things like enclaves and the Arm architecture? How much of the problem does that address? How much does it still leave open? >> That's a hard question to answer-- >> But you're a CTO. So that's why I can ask you these questions. >> But I think it's the age-old adage of defense in depth. I do think it's equivalent to what we're doing from the networking point of view with network segmentation. This is another layer of that compartmentalization, and it'll provide similar containment of breach. And that's really what we're looking for now: rather than prevention of the breach, and rather than just detection of the breach, containment of that breach. >> Well, so it's actually a similar philosophy brought to the wider network. >> Absolutely. And it needs to be brought at all levels. No one level is going to solve the problem; it's across all those levels that you have to work. >> What are the organizational implications? It feels like the cloud is now becoming... I don't want to say the first layer of defense, because it is if you're all in the cloud, but not if you're hybrid. But still, it's becoming an increasingly important layer of defense. And then I feel like the CISO and the development team are the next layer, and maybe audit is the third layer of defense. How are you seeing organizations respond to that, the organizational roles changing, the CISO role changing? >> Well, there are two good questions in there. So one is, there's one interesting thing that we are seeing. A lot of our customers are hybrid in their environment. They have a cloud, they have an on-prem environment, and these two things need to work together.
And in that case, I mean, the massive compute that you can be doing in AWS actually increases the attack surface on that hybrid environment. So there are some challenges there, and yes, you're absolutely right, the cloud brings some new tools to play to decrease that. But it's an interesting place we see, where there's an attack surface that occurs between different infrastructure types, between the AWS and on-prem parts of the environment. Now, the second part of your question was really around how the developers play into this. And I'm a big proponent of, I mean, security is kind of a team sport. And one of the things that we've done in some of our products is help people... So we all know the developers know they're part of the security story, right? But they're not security professionals. They don't have all of the tools, all of the experience, and all of the red-teaming time to know where some of their mistakes might be made. So I am optimistic. They do their best, right? But what the security team needs is a way to not just tell them, a slap on the knuckles, like, developer, you're doing the wrong thing, but really a way to say, okay, yes, you could do better, and here are some concrete ways that you can do better. So a lot of our systems look at data, understand the data, analyze the data, and provide concrete recommendations. And there's a virtuous cycle there, as long as you play the team sport, right? It's not an us versus them; it's, how can we both win there? >> So this is a really interesting conversation, because the developer all of a sudden is increasingly responsible for security. They're using containers, so now they've got to worry about container security. They've got to worry about the runtime. They've got to worry about the platform. And to your point, it's like, okay, this burden is now on them.
Not only do they have to be productive and produce awesome code, they've got to make sure it's secure. So that role is changing. So are they up for the task? I mean, I've got to believe that a lot of developers are like, oh, something else I have to worry about. So how are your customers resolving that? >> So I think they're up for the task. I think what is needed, though, is a CISO and a security team who, again, know it's a team sport. Some technologies are adopted from the top down, where the CIO can say, here's what we're doing, and then everybody has to do it. Some technologies are adopted from the bottom up, where this individual team says, oh, we're using this thing and these tools. Oh yeah, we're using containers, and we're using this flavor of containers, and this other group uses Lambda services, and so on. And the security team has to react, because they can't mandate. They have to work with those teams. So the best groups are where you have security teams who know they have to enable the developers, and developers who actually want to work with the security team. So it's the right kind of person, the right kind of CISO, the right kind of security team. They don't treat it as adversarial, and it works when they both work together. And your question of how ingrained that is in the industry, that I can't say, but I know that it does work, and I know that's the direction people are going. >> And I understand it's a spectrum, but I hear what you're saying. That is the best practice, the right organizational model. I guess it's cultural. I mean, it's not like there's some magic tool to make it all work, the security team and dev team collaboration tool, maybe there is, I don't know, but I think the mindset and the culture have to really be the starting point. >> Well, there is. I just talk about this idea.
So however you feel about DevOps and DevSecOps and so on, one core principle I see is empathy between the developers and the operations folks, and the developers and the security team. And we act like this at Illumio: one thing we do is, you have to truly have empathy, and you kind of have to do somebody else's job, right? Not just think about it or talk about it; actually do it. So there are places where the security team gets embedded deep in the organization, where some of the developers get embedded in the operations work, and that empathy... When they go back to what they were doing, what they learned about how the other side has to work, some of the challenges they see, is really valuable in building that collaboration. >> So it's not job swapping, but it's embedding; that's maybe how they gain that empathy. >> Exactly. And they're not experts in all those things, but have them take on some of those responsibilities, be accountable for some of those things. Not just doing it on the side, looking over somebody's shoulder, but actually being accountable for something. >> That's interesting, not just observational, but actually saying, okay, this is on you for some period of time. >> That is where you actually feel the pain of the other person, which is what is valuable. And that's how you can build one of those cultures. I mean, you do need support all the way from the top, right, to be able to do that. >> For sure. And of course there are lightweight versions of that, maybe if you don't have the stomach for... Lena Smart was on this morning, CISO of Mongo.
And she was saying she pairs the security pros that can walk on water with the regular employees, and they get to ask all these Columbo questions of the experts, and the experts get to hear it and say, oh, I now have to explain this like I'm explaining it to a 10-year-old, or maybe not a 10-year-old but a teenager, actually teenagers are probably well ahead of us, but you know what I'm saying. And so that kind of cross-correlation, and then essentially the folks that aren't security experts absorb enough and can pass it on throughout the organization. And that's how she was saying she emphasizes culture building. >> And I will say, Steve Smith, the CISO of AWS, I've heard him talk a number of times, and they do that here. They have some of that spirit and they've built it in, all the way from the top, right. And that's the thing: if you have security off in a little silo to the side, you're never going to do that. When the CEO supports the security professionals as a part of the business, that's when you can do the right thing. >> So you remember, around the time that you guys started Illumio, the conversation was, security must be a board-level topic. Yes, it should be, but was it really? It was becoming that way; it wasn't there yet. It clearly is now, there's no question about it. >> No. Ransomware. >> Right, of course. >> Let's thank ransomware. >> Right. Thank you. Maybe that's a silver lining. Now the conversation is around, is it an organization-wide issue? And it needs to be, but it really isn't fully. I mean, how many organizations actually do that type of training? Certainly large organizations do; it's part of the onboarding process. But even small companies are starting to do that now, saying, okay, as part of the onboarding process, you've got to watch this training video and make sure that you've done it. And maybe that's not enough, but it's a start.
Well, and I do think that's where, if we get back to zero trust, zero trust being a philosophy that you can adopt... We apply that least-privilege model to everything. And when people know that this is something we do, right, that with least privilege you get access to exactly the things you need to do your job, but nothing more, and that applies to everybody in the organization. When people know this is the culture and they work by it, zero trust being that philosophy helps infuse it into the organization. >> I agree with that, but I think the hard part of implementing it for organizations is, companies like AWS have the tools, the people, the practitioners that can bring that to bear; many organizations don't. So it becomes an important prioritization exercise. They have to say, okay, where do we want to apply that least privilege and apply that technology? 'Cause we don't have the resources to do it across the entire portfolio. >> And I'll give you a simple example of where it'll fail. Let's say, oh, we're least privilege, right? And so you ask for something to do your job, and it takes four weeks for you to get that access. Guess what? Zero trust is out the door at that organization. If you don't have, again, the tools, right, to be able to walk that walk... So it is something where you can't just say it, right? You do have to do it. >> So I feel like it's a pyramid. It's got to start, I think, top down. Maybe not, I mean certainly bottom up from the developer mindset, no question about that. But in terms of where you start: whether it's financial data or other confidential data, great, we're going to apply that here, and we're not going to necessarily... it's a balance. Where's the risk? Go hard on those places where there's the biggest risk.
Maybe don't create organizational friction where there's less risk, and then over time bring that in. >> And I'll say one of the failure modes that we've seen around zero trust is going too big too early, right? You actually have to find small wins in your organization, and you pointed out some good ones. So focus: if you know where critical assets are, that's a good place to start. Build it into business as usual. For example, one thing we recommend is that people start developing their zero trust segmentation policy during the development, or at least the test, phase of rolling out a new application, as you work your way into production, as opposed to having to retro-segment everything. So get it into the culture, either around high-value assets or work like that, or just pick something small. We've actually seen customers use our software to lock down RDP. Back to ransomware: it loves RDP for lateral movement. So why can you go from everywhere to everywhere with RDP? Well, you need it to solve some problems, but just focus on that one little slice of your environment, one application, and lock that down. That's a way to get started, and it attacks the ransomware problem. So there are lots of ways, but you've got to make some demonstrable first steps and build that momentum over time to get to that ultimate end goal. >> PJ, Illumio has always been a thought leader in security generally and in this topic specifically. So thanks for coming back on theCUBE. It's always great to have you guys. >> All right, thanks. Been great. >> All right. And thank you for watching. Keep it right there. This is Dave Vellante for theCUBE's coverage of AWS re:Inforce 2022 from Boston. We'll be right back. (upbeat music)

Published Date : Jul 27 2022



Andy Smith, Laminar | AWS re:Inforce 2022


 

>> Welcome back to Boston, everybody. You're watching theCUBE's coverage of AWS re:Inforce 22 from Boston. Andy Smith is here, the CMO of Laminar. Andy, good to see you. >> Good to see you. Great to be here. >> So Laminar came outta stealth last year, 2021, sort of as we were exiting the isolation economy. Why was Laminar started? >> It's really about two mega trends in the industry that created a problem that wasn't being addressed, right? The two mega trends were cloud transformation, obviously that's been going on for a while, but what most people don't realize is it really accelerated with COVID, right? With everybody having to be remote, et cetera; various stats I've read say it increased five times, right? So cloud transformation is a now problem, right? That's going on. And then the next big mega trend is data democratization. So there's more data in the cloud than ever before, and this is just going and going and going. And the result of those two things, more data in the cloud, is: how am I securing that data? You know, the breach culture we're in, like every day a new data breach coming up, just one at Twitter yesterday, et cetera. Those two things have caused a gap for data security teams, and that's the gap we heard about. >> Yeah. So, you know, to your point, we track this stuff pretty carefully quarterly, and you saw a really interesting trend. You actually saw AWS's growth rate accelerate during the pandemic. >> Absolutely. >> So you're talking about, you know, a couple of hundred billion dollars for the big four clouds if you include Alibaba, and it's still growing at 35, 40% a year, which is astounding. So, okay: more cloud, more data. Explain why that's a problem for practitioners. >> Yeah, exactly. The reality is, in security, what are we doing?
What is all this security for? It's about protecting your data in the end, right? Like, we're here at re:Inforce, all these security vendors here, and really it's about protecting your data, your sensitive data. But what had been happening is all the focus was on the infrastructure, the network, et cetera, and not as much focus on the data itself. And the move to the cloud gave the developers and the data scientists way more power. They no longer have to ask for permission. And so they can just do what they want, and it's actually wonderful for the business. The business is moving faster, you spin up applications sooner, you get new insights. So all those things are really great. But because the developer has so much power, they can just copy data over here, make a backup over there, et cetera, and security has no idea about all these copies of the data that are out there. And they're typically not as well protected as that main production source. And that's the gap that exists. >> Okay. So there was this shift from hardening the perimeter, hardening the infrastructure, and now your premise is, it's moving to the data. During the pandemic we saw a definite shift to endpoint security, a shift to cloud security, rethinking the network, but it was still a lot of, you know, chasing the whack-a-mole. And people have talked about this being a data problem for years, but it's taken a while for companies, for the technology industry, to come at it. You guys are one of the first, if not the first. Why do you think it took so long? Is it 'cause it's really hard? >> Yeah, I mean, it's hard, and you need to focus on it. Traditional security has been around the network and the box, right, and those are still necessary.
It's important to, you know, use identity, to cover the edge, to make sure people can't get into the box, but you also have to cover the data. So what happens is, there are really good solutions for enterprise data security, looking at database technology, et cetera. There are good solutions for cloud infrastructure security, so the CSPMs of the world, and the CWPPs protecting containers, protecting the infrastructure. But there really wasn't much for everything you build and run in the cloud, basically your custom applications in the IaaS and PaaS environments. There really wasn't anything solving that, and that's really where Laminar is focused. >> Okay. So you guys use this term shadow data. We talk about shadow IT; what's shadow data? >> Yeah. So what we're finding, in a hundred percent of our customer environments and our POVs, and talking to CISOs out there, is that they have these shadow data assets and shadow data elements that they had no clue existed. Here's the example. Everybody knows the main RDS database that's in production; this is where our data is taken from. But what people don't realize is there's a copy of that in a dev environment. Somebody went to run a test, it was supposed to be there for two weeks, but then that developer forgot it, left it there, left the company, and now it's been there for two years. Or there was an original SQL database left over from a lift-and-shift project; it got moved to RDS, but nobody deleted the original. Or, you know, there's a database connected to an application, the application went away, but that abandoned database is still sitting there. These are all real-life customer examples of shadow data that we run into. And the problem is, that main production data store is secured pretty well; it's following all your policies, et cetera.
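Conceptually, the discovery being described is a diff between what enumeration of the cloud account actually turns up and what the security team's sanctioned inventory says should exist. A toy sketch of that idea (the store names are hypothetical; real discovery would walk the cloud provider's APIs rather than use hard-coded sets):

```python
def find_shadow_stores(discovered, sanctioned):
    """Shadow data stores: present in the cloud account but absent from
    the sanctioned inventory the security team knows about."""
    return sorted(set(discovered) - set(sanctioned))

# What the security team believes exists (documented production stores).
sanctioned = {"prod-orders-rds"}

# What enumeration of the account actually turned up.
discovered = {
    "prod-orders-rds",
    "prod-orders-rds-dev-copy",   # two-week test copy, two years old
    "legacy-sql-liftshift",       # pre-migration original nobody deleted
    "orders-backup-snapshot",     # backup of an app that no longer exists
}

print(find_shadow_stores(discovered, sanctioned))
# ['legacy-sql-liftshift', 'orders-backup-snapshot', 'prod-orders-rds-dev-copy']
```

Everything in the diff, the forgotten dev copy, the lift-and-shift leftover, the orphaned backup, is exactly the shadow data the interview describes.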
But all these shadow data resources are typically less well protected and unmonitored, and that is what the attackers are after.
>> So, the old Watergate line was "follow the money"; you're following the data.
>> Following the data.
>> How do you follow that data if there's so much of it, and it's sometimes not well understood where it is? How do you know where it is?
>> That's the beauty of partnering with somebody like AWS. With each of the cloud providers, we take a role in your cloud account and use the cloud provider's APIs to see all the changes going on across all the instances. The problem is way more complicated in the cloud: AWS has over 200 services and dozens of ways to store data, which is wonderful for the developer but very hard for the security practitioner. Because we have that visibility through the cloud provider's APIs, we can see all those changes as they happen. We can say, ah, that's a data store; let me take a snapshot and analyze that data right inside our customer's account, without pulling the data out. We have complete visibility into everything, and then we can give that data catalog over to the customer.
>> All right, I've got to ask you a couple of Columbo questions. We talk about encryption; everything's encrypted. If the data is encrypted, why would I need Laminar?
>> First, we'll make sure the data actually is encrypted; often it's supposed to be and it's not. Two, we'll tell you what type of data is inside. Is it health information? Personally identifiable information? Credit cards? We'll classify the data for you. We'll also cover things like retention periods.
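As a rough sketch of the kind of encryption check being described: in a real deployment the inventory would come from the cloud provider's APIs (for instance boto3's RDS `describe_db_instances`, called under an assumed read-only role), but the simplified records and field names below are hand-written stand-ins.

```python
# Hypothetical sketch: flag unencrypted data stores in a discovered inventory.
# In practice the inventory would be built from cloud-provider API calls made
# under a read-only role; these records are simplified stand-ins.

def find_unencrypted(datastores):
    """Return the names of data stores whose storage is not encrypted."""
    return [ds["name"] for ds in datastores if not ds.get("encrypted", False)]

inventory = [
    {"name": "prod-orders-rds", "engine": "postgres", "encrypted": True},
    {"name": "dev-orders-copy", "engine": "postgres", "encrypted": False},
    {"name": "legacy-liftshift-db", "engine": "mysql"},  # flag missing entirely
]

print(find_unencrypted(inventory))  # ['dev-orders-copy', 'legacy-liftshift-db']
```

The point is the shape of the check, not the API plumbing: once discovery hands you an inventory, enforcing an "everything encrypted" policy is a simple filter.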
How long should you hold onto that data? Who has access, and what's the exposure level for that data? When you think about data security posture, you're looking at the posture of that data against those data policies. The policies are usually very well defined and written down, but in the past there was just no way to verify that a policy was actually being followed. We're doing that verification automatically.
>> So without the context, you can't answer those other questions. You make sure it's encrypted, or at least notify me that it's not. You don't do the encryption yourselves, or do you?
>> We don't do it ourselves, but we can give you the command in Amazon to go encrypt it.
>> Right, then I can automate that. And the classification is key, because now you're telling me the context. So I can say, okay, apply this policy to that data, retain it for this long, get rid of it after X number of years, or if it's work product, get rid of it now. And then, who should have access to that data? So you can at least inform how to enforce those policies.
>> Exactly. We call it guided remediation, because when you talk to CISOs, they need 400 more alerts like a hole in the head. If you can't tell them how to resolve the security gap you found, it doesn't do any good. And it starts with who to talk to: they have hundreds, if not thousands, of developers. Great, you found this issue, but I don't know who owns it, and I can't just delete it myself; I need to ask somebody whether this should really be deleted, whether we really need to hold onto it. So we help identify the data owner. We give you who to talk to, and we give you all the context.
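The automated policy verification just described can be sketched in a few lines: compare each asset's age against a written retention policy and emit a guided-remediation finding that names the owner to contact. The policy table and record fields here are invented for illustration.

```python
from datetime import date

# Hypothetical sketch of automated retention-policy verification: compare
# each asset's age against the policy and emit a "guided remediation"
# finding naming the owner to talk to.

RETENTION_DAYS = {"pii": 365, "logs": 90}  # invented policy table

def retention_findings(assets, today):
    findings = []
    for a in assets:
        limit = RETENTION_DAYS.get(a["classification"])
        age = (today - a["created"]).days
        if limit is not None and age > limit:
            findings.append({
                "asset": a["name"],
                "talk_to": a["owner"],
                "suggestion": f"data is {age} days old, policy allows {limit}",
            })
    return findings

assets = [
    {"name": "dev-copy", "classification": "pii",
     "created": date(2020, 7, 1), "owner": "alice"},
    {"name": "audit-log", "classification": "logs",
     "created": date(2022, 6, 1), "owner": "bob"},
]

for f in retention_findings(assets, date(2022, 7, 1)):
    print(f["asset"], "->", f["talk_to"])  # dev-copy -> alice
```

Only the stale PII copy is flagged; the 30-day-old log store is within policy.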
Here's the data, here's the data asset it lives in, here's the problem, and here's our suggested solution.
>> And you started the company on AWS?
>> Started on AWS, absolutely.
>> Of course, it's the best cloud, so why not start there? What's the relationship like? How did you get started? You said, hey, we've got an idea for a company, we're going to build it on AWS, we're going to become a customer...
>> So Insight Partners is our main investor, and they were very helpful in giving us access to literally hundreds of CISOs who we had conversations with before we actually launched the company. We did some shifting to figure out our exact use case, but by the time we came to market and GA'd the product in February this year, product-market fit was nailed, because we'd had so many conversations that we knew the problem in the market we needed to solve, and where we needed to solve it first. And the relationship with AWS is great: we just got on the Marketplace and became a partner, so it's a really good start.
>> So I've got to ask, and I always ask this question: how do you actually know when you have product-market fit?
>> It's about those conversations. I've been at lots of startups, and sometimes in each conversation one prospect kind of wants this and another kind of likes that. The more conversations you have, the more you know you're solving a real problem, and you react to what that prospect, or advisor, or whoever you're talking to, is telling you. And every single one of the CISO conversations we had came back to: I don't have a good inventory of my data in the cloud.
>> The reason I ask is that I always ask startups: when do you scale?
Because I think startups sometimes scale too fast. They'll hire 50 salespeople and then they've got 50% churn; they're trying to optimize their go-to-market while half their customers are going to leave. So it has to be sequential. You've got product-market fit, so are you in the scaling phase now?
>> We are, yeah. Now it's about how quickly we can deliver. We're ramping the customer base significantly, we've got a whole go-to-market team across sales and marketing in the US, and we're off to the races.
>> And do you run only on AWS, or on other clouds too?
>> It's multi-cloud: AWS, Azure, GCP, et cetera.
>> Okay, so my next question: you can do this within each of the individual clouds today. Do you see a day, and maybe it's here today, when you can create a single experience across those clouds?
>> Today it's a single experience across clouds. Our SaaS portion runs in AWS, but the actual data analysis runs in each cloud provider: AWS, Azure, GCP, and Snowflake too, actually.
>> Ah, okay. So I come through your portal, if I can use that term, which runs on AWS: your SaaS, as you say. And then you go out to these other environments, GCP, Azure, AWS itself, and Snowflake, and I see Laminar there. Is that right?
>> There's a piece running inside our customer's environment. We get a read-only role inside their cloud account and spin up serverless functions in that account; that's where all the analysis happens. That's why we don't take any data out of the environment: it all stays there, and therefore we never see the data outside the environment. Only the metadata comes out.
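To make the "only metadata comes out" idea concrete, here is a toy classifier: it scans sampled values for credit-card-like numbers (a digit-length check plus the standard Luhn checksum) and reports only counts, never the values themselves. This is an illustration of the pattern, not Laminar's actual logic.

```python
# Toy classifier illustrating metadata-only reporting: raw values stay
# "inside the account"; only a summary (counts by data type) is returned.

def luhn_ok(digits):
    """Standard Luhn checksum used to validate card-like numbers."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:       # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def classify_metadata(samples):
    cards = sum(
        1 for s in samples
        if s.isdigit() and 13 <= len(s) <= 19 and luhn_ok(s)
    )
    return {"rows_sampled": len(samples), "credit_card_like": cards}

rows = ["4111111111111111", "hello", "1234567890123456", "4012888888881881"]
print(classify_metadata(rows))  # counts only, no raw values
```

Two of the four samples pass the Luhn check, so the summary reports a count of credit-card-like values without exposing any of them.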
I can tell you there are credit cards inside that data store, but I can't tell you exactly which credit card, because I don't know. All the important actions happen in the customer's account, and just the metadata comes out, so we can give you a cross-cloud dashboard of all your sensitive data.
>> And take the example of Snowflake: they're going across clouds, building what we call a supercloud, a layer that floats on top. You're just going wherever that data goes.
>> Yeah, exactly. There's a component that lives in the customer's environment in those multi-cloud environments, and then a single-view-of-the-world dashboard, which is our SaaS component running in AWS.
>> Am I correct that you're Series A funded?
>> Series A funded, exactly.
>> And already scaling go-to-market, which is early to scale. You've got startup experience; how does it compare?
>> What was amazing here was access. Through the relationship with Insight, I had access to CISOs that I'd never had at any of the other startups I was with. Normally you're trying to get meetings, and you're meeting with a lot of practitioners. Getting all those conversations with buyers was super valuable: it told us we were solving a real problem that has value they will pay for. That was probably a year and a half of work, and we waited to GA until we understood the market better.
>> Insight is amazing. Talking about scaling: over the last 10 years that PE firm has gone wild in terms of their philosophy, their approach, their cadence, their consistency, and now of course their portfolio.
>> Yeah, and they've started investing at earlier and earlier stages.
I always think of them as PE too, but they did our seed round, they did our A round, and they're doing earlier stages. And what they saw in Laminar was exactly what we started this conversation with: they saw cloud transformation speeding up and data democratization happening, and they said, we need to invest in this now, because this is a now problem to solve.
>> It's interesting, because if you go back even pre-2010, Insight would wait; they would only invest in companies that were on the way to five-plus-million-dollar ARR, and they weren't doing seed deals. Then they saw these can actually be pretty lucrative, and we can play, and we have a point of view. So, cool. Congratulations. I'll give you the final word: what should we be watching for from Laminar, the milestones you want to hit and the indicators of success?
>> Now it's all about growth: partnerships and integrations with the other players out here. Scaling our AWS partnership is one of the key aspects for us. So look for the name out there, you'll start to see it a lot more, and if you have the need, come look us up at laminarsecurity.com.
>> Awesome. Well, thanks very much for coming on theCUBE. Good luck.
>> Wonderful, thanks.
>> You're welcome. All right, keep it right there, everybody. This is Dave Vellante. We'll be back right after this short break from AWS re:Inforce 2022 in Boston. You're watching theCUBE.
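As a footnote to the conversation: the cross-cloud dashboard idea reduces to rolling per-cloud, metadata-only findings up into a single summary. A toy aggregation, with invented provider names and finding shapes:

```python
from collections import Counter

# Toy roll-up of per-cloud findings into one dashboard summary, mirroring
# the "metadata out, single view across clouds" idea. Shapes are invented.

findings = [
    {"cloud": "aws",   "asset": "dev-orders-copy",  "type": "pii"},
    {"cloud": "aws",   "asset": "old-liftshift-db", "type": "credit_card"},
    {"cloud": "azure", "asset": "blob-backup",      "type": "pii"},
    {"cloud": "gcp",   "asset": "bq-export",        "type": "pii"},
]

def dashboard(findings):
    return {
        "total": len(findings),
        "by_cloud": dict(Counter(f["cloud"] for f in findings)),
        "by_type": dict(Counter(f["type"] for f in findings)),
    }

print(dashboard(findings))
```

Each per-cloud component only needs to emit records like these; the SaaS side just counts and groups them.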

Published Date : Jul 27 2022


Sanjeev Mohan, SanjMo | MongoDB World 2022


 

>> Hello, everybody. Welcome to theCUBE's coverage of MongoDB World 2022, the first live MongoDB World since 2019. theCUBE has covered a number of Mongo shows, actually going back to when the company was called 10gen. Mongo has since done an IPO, in 2017, and it's been a rocket-ship company: it will probably do $1.2 billion in revenue this year, it's got a billion dollars in cash on the balance sheet, and despite the tech crash it still has a $19 or $20 billion valuation, growing above 50% a year. The company just had a really strong quarter and seems to be hitting on all cylinders. My name is Dave Vellante, and here to kick it off with me is Sanjeev Mohan, the principal at SanjMo. Great to see you. You've become a wonderful CUBE contributor; you're a former Gartner analyst, really sharp, and you know the database space and the data space generally really well. So thanks for coming back on.
>> You know, it's just amazing how exciting the entire data space is. They used to say all companies are software companies; now all companies are data companies. Data has become the foundation.
>> They say software is eating the world, and data is eating software, to trade a few quips. But this is a good-sized show, four or five thousand people; I don't know the exact numbers, but it's exciting, and of course a lot of financial services firms are here at the Javits Center. Let's lay down the basics for people. MongoDB is a document database, but they've been advancing beyond that. Explain the document database as an alternative to the RDBMS, but explain also how Mongo has broadened its capabilities to serve a lot more use cases.
>> Databases are my forte.
But before I even talk about that, I have to say I am blown away by this MongoDB World, because MongoDB has really come of age through the pandemic; it's a billion-dollar company now. We're in this brand-new Javits Center that was built out during the pandemic, and now the company is holding this event here. And why has it grown? Because its offerings have grown to serve more developers than just a document database. Document databases revolutionized the whole DBMS space; NoSQL came up because, for a change, you don't need a structured schema. You can start bringing data in with this document model and a varying schema. But since then they've added things like search. They added geospatial, they added time series last year, and this year they keep adding more: for example, they're going to add some column-store indexes. So from being purely transactional, they're now starting to address analytical workloads, and they're addressing more use cases. What was announced this morning in the keynote was faceted search. So they keep going deeper and deeper into these other data structures.
>> They've made Lucene-based search a first-class citizen. But I want to ask you some basic questions about document databases. There's no fixed schema; you can put anything in there, so it's more data-friendly, and they're trying to simplify the use of data. Okay, that's pretty clear. What are the trade-offs of a document database?
>> It's not as if one technology has solved every problem; every technology comes with its own trade-offs. In a document database you basically get rid of joining tables with primary and foreign keys, because you can have a flexible schema within a single document. So it's very easy to write and search.
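As a toy illustration of that flexibility, here are two "documents" of different shapes living side by side, with plain Python dicts standing in for a collection (with PyMongo the equivalents would be `insert_many` and `find`); no table schema or migration is needed to add a field:

```python
# Toy stand-in for a document collection: plain dicts with varying shape.
# With MongoDB/PyMongo the equivalents would be collection.insert_many(docs)
# and collection.find({"city": "NYC"}); no joins or fixed schema required.

docs = [
    {"_id": 1, "name": "Ada", "city": "NYC",
     "orders": [{"sku": "a1", "qty": 2}]},          # embedded sub-documents
    {"_id": 2, "name": "Sam", "city": "NYC",
     "loyalty_tier": "gold"},                        # new field, no migration
]

def find(collection, query):
    """Return documents whose fields match every key/value in the query."""
    return [d for d in collection if all(d.get(k) == v for k, v in query.items())]

print([d["name"] for d in find(docs, {"city": "NYC"})])  # ['Ada', 'Sam']
```

Both documents match the same query even though their shapes differ, which is exactly the write-and-search ease being described.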
But when you have a lot of repeated elements and things get more and more complex, your document size can start expanding quite a bit, because you're trying to club everything into a single space. That is where the complexity goes up.
>> So what does that mean for a practitioner? It means they have to think about how they're ultimately going to structure the data and how they're going to query it, so they can get the best performance. They put some time in up front in order to make it pay back at the tail end; clearly it's working. But is that the correct way of thinking about it?
>> A hundred percent. In the SQL world, you didn't care about the analytical queries up front; you just cared about how your data model was structured, and then SQL would satisfy basically any query. But in the NoSQL world, you have to know your access patterns before you invest in the database. So it's changed the equation: you come in knowing what you're signing up for.
>> A couple of Columbo questions, then. Mongo talks about supporting mission-critical applications, and at the same time my understanding is that in the architecture of Mongo specifically, or a document database in general, you've got a primary database that is sort of the master, if you will, and then you can create secondaries. So help me square the circle between mission-critical and maybe more of a focus on, say, consistency versus availability. Do customers have to think about and design in that availability? How do they do that? How are Mongo customers handling that?
>> I have to say, my experience of MongoDB was that the whole company, the whole ethos, was developer-friendly. To be honest, I don't think MongoDB was as focused on high availability, disaster recovery, or even security to some extent; they were more focused on developer productivity.
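The document-size trade-off mentioned above is easy to see with a quick measurement: embedding a repeated element (here, a customer record duplicated into every order) makes the single document grow with each repetition, while a referenced model keeps documents small at the cost of an application-side join. The data and sizes are illustrative only.

```python
import json

# Embedded model: every order carries a full copy of the customer record.
customer = {"id": 7, "name": "Ada Lovelace", "address": "12 Example St, London"}
orders = [{"sku": f"sku-{i}", "qty": 1} for i in range(50)]

embedded = dict(customer, orders=[dict(o, customer=customer) for o in orders])

# Referenced model: orders hold only a customer_id, joined in the application.
referenced = {"customer": customer,
              "orders": [dict(o, customer_id=customer["id"]) for o in orders]}

size_embedded = len(json.dumps(embedded))
size_referenced = len(json.dumps(referenced))
print(size_embedded, size_referenced)  # embedded is much larger
```

The duplication cost grows linearly with the number of repeated elements, which is why access patterns need to be known up front.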
>> And you've experienced...
>> Simplicity. Make it simple, make the developers productive as fast as you can. What was really an inflection point for MongoDB was the launch of Atlas, because with Atlas they were able to introduce all of these management features and abstract them away from the end users. Atlas came out in 2016 in four regions; today they're in over 100 regions, and they keep expanding across every hyperscale cloud provider, having abstracted away that whole management layer.
>> So Atlas, of course, is the managed database-as-a-service in the cloud, and it's that cloud infrastructure and cloud tooling that has allowed them to go after those highly available applications. My other question: when you talk about adding search, geospatial, and time series, there are a lot of specialized databases; you have time-series specialists that go deep into time series. Can a company like Mongo, with an all-in-one strategy, get close to that functionality? Is it the classic Microsoft play of maybe not perfect, but good enough? Can they compete with those specialists? And what happens to the specialists if the answer is yes? What's your take on that, if that question
But more and more, we're starting to see that organisations are looking to simplify their environment by going in for maybe a unified database that has multiple data structures. Yeah, well, >>it's certainly it's interesting when you hear Mongo speak. They don't They don't call out Oracle specifically, but when they talk about legacy r d m r d B m s that don't scale and are complex and are expensive, they're talking about Oracle first. And of course, there are others. Um, And then when they talk about, uh, bespoke databases the horses for courses, databases that they show a picture of that that's like the poster child for Amazon. Of course, they don't call out Amazon. They're a great partner of Amazon's. But those are really the sort of two areas that mangoes going after, Um, now Oracle. Of course, we'll talk about their converged strategy, and they're taking a similar approach. But so help us understand the difference. There is just because they're sort of or close traditional r d B M s, and they have all the drawbacks associated with that. But by the way, there are some benefits as well. So how do you see that all playing >>out? So you know it. Really, uh, it's coming down to the the origins of these databases. Uh, I think they're converging to a point where they are offering similar services. And if you look at some of the benchmark numbers or you talk to users, I from a business point of view, I I don't think there's too much of a difference. Uh, technology writes. The difference is that Mongo DB started in the document space. They were more interested in availability rather than consistency. Oracle started in the relation database with focus on financial services, so asset compliance is what they're based on. And since then they've been adding other pieces, so so they differ from where they started. Oracle has been in the industry for some since 19 seventies, so they have that maturity. But then they have that legacy, >>you know, I love. 
Recently, Oracle announced its MongoDB API, basically saying: why leave Oracle when you can just stay? To me, that's a sign that MongoDB is doing well, because Oracle calls you out; whether you're Workday or Snowflake or Mongo, whoever, that's a sign you've got momentum and you're stealing share in the marketplace. And clearly Mongo is; they're growing at 50-plus percent per year. Thinking about the early days, I mentioned 10gen: I remember one of the first Mongo conferences I went to was all developers. There are a lot of developers here as well, but since 2014 they've really expanded the capabilities. You talk about Atlas, and all these other types of databases they've added. It seems like Mongo is becoming a platform company. What are your thoughts on them up-leveling the message now that they're a billion-dollar-plus company? What's the next wave for Mongo?
>> So Oracle announced a MongoDB API, AWS has DocumentDB, and Azure has Cosmos DB; they all have compatible APIs, but not the source code, because MongoDB has its own SSPL license, so they've each written their own layer on top. But at the end of the day, those companies have to keep innovating to catch up with MongoDB: Mongo announces a brand-new capability, and all the other players have to catch up. The other cloud providers have 80% or so of the capabilities, but they'll never have 100% of what MongoDB has. So people who are diehard MongoDB fans prefer to stay on MongoDB, and they're now able to write more applications. For example, MongoDB bought Realm, which is their front end: if you're building a mobile, social-media kind of application, you can build it and sync it with Atlas.
So MongoDB is now at a point where they're adding capabilities that appeal to more developers. 5G is coming, autonomous cars are coming, so now they can address IoT kinds of use cases. That's why it's becoming such a juggernaut: it's a platform now, rather than a single document database.
>> So Atlas is the near-to-midterm future; today it's about 60% of revenue, and they still have the traditional self-managed, on-premises business, and they're connecting those worlds. You bring up the point that they go across clouds, and they've also got edge plays; we're going to talk to Verizon later today, and they've got edge activity going on with developers. I call it supercloud, this layer that floats above. Now, a lot of the supercloud concept says we're going to hide the underlying complexity, but developers might want to tap those primitives, so presumably Mongo will let them do that. That hybrid, what we call supercloud, is a new wave of innovation, is it not? Do you agree, and do you see it as a real opportunity for Mongo in terms of penetrating a new TAM?
>> Yes, I see this as a new opportunity. In fact, one of the reasons MongoDB has grown so quickly is that they're addressing more markets than they did pre-pandemic. Also, there are all gradations of users. Some users want full control; they want IaaS or a kind of PaaS. And some businesses say, we don't care, we don't want to deal with the database at all. So today we heard MongoDB's serverless offering went GA, so now they have serverless capability. And if you're more into Kubernetes, they have a Kubernetes Operator. So they're addressing the full stack: different types of developers, different workloads, different geographic regions.
So that's why the market has expanded.
>> We're seeing abstraction layers throughout the stack: physical, virtual, containers, serverless, and eventually superclouds. Sanjeev, great analysis; thanks so much for taking the time to come on theCUBE. All right, keep it right there; we'll be right back after this short break. This is Dave Vellante from the Javits Center at MongoDB World 2022. Thank you.
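For reference, the faceted search discussed in the conversation is exposed through Atlas Search's `$searchMeta` stage with a `facet` collector. The sketch below only builds the pipeline document (the index and field names are hypothetical); with PyMongo you would pass it to something like `db.movies.aggregate(pipeline)`.

```python
# Sketch of an Atlas Search faceted query as an aggregation pipeline.
# Index and field names are hypothetical; this builds the pipeline
# document only and does not run a query.

pipeline = [
    {
        "$searchMeta": {
            "index": "default",
            "facet": {
                "operator": {"text": {"query": "adventure", "path": "title"}},
                "facets": {
                    "genres": {"type": "string", "path": "genre"},
                    "years": {
                        "type": "number",
                        "path": "year",
                        "boundaries": [1990, 2000, 2010, 2020],
                    },
                },
            },
        }
    }
]

print(list(pipeline[0]))  # ['$searchMeta']
```

The `operator` is the search itself, while each entry under `facets` asks the engine to return bucketed counts alongside the result metadata.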

Published Date : Jun 7 2022


Jim Cushman, CPO, Collibra


 

>> From around the globe, it's theCUBE, covering Data Citizens '21. Brought to you by Collibra. >> We're back talking all things data at Data Citizens '21. My name is Dave Vellante and you're watching theCUBE's continuous coverage, virtual coverage, #DataCitizens21. I'm here with Jim Cushman, who is Collibra's Chief Product Officer and who shared the company's product vision at the event. Jim, welcome, good to see you. >> Thanks Dave, glad to be here. >> Now one of the themes of your session was all around self-service and access to data. This is a big, big point of discussion amongst organizations that we talk to. I wonder if you could speak a little more toward what that means for Collibra and your customers and maybe some of the challenges of getting there. >> So Dave, our ultimate goal at Collibra has always been to enable self-service access for all customers. Now, one of the challenges is that these knowledge workers are limited in how they can access information. So our goal is to totally liberate them. And so, why is this important? Well, in and of itself, self-service liberates tens of millions of data-literate knowledge workers. This will drive more rapid, insightful decision-making, and it'll drive productivity and competitiveness. And to make this level of adoption possible, the user experience has to be as intuitive as, say, retail shopping, like I mentioned in my previous bit, like you're buying shoes online. But this is a little bit of foreshadowing, and there's an even more profound future than just enabling self-service: we believe that a new class of shopper is coming online, and she may not be as data-literate as our knowledge worker of today. Think of her as an algorithm developer; she builds machine learning or AI. The engagement model for this user will be to kind of build automation, personalized experiences for people to engage with data. But in order to build that automation, she too needs data.
Because she's not data-literate, she needs the equivalent of a personal shopper. Someone that can guide her through the experience without actually having her know all the answers to the questions that would be asked. So this level of self-service goes one step further and becomes an automated service. One to really help find the best unbiased and labeled training data to help train an algorithm in the future. >> That's, okay please continue. >> No please, and so all of this self-service and automated service needs to be complemented with kind of a peace of mind that you're letting the right people gain access to it. So when you automate it, it's like, well, geez, are the right people getting access to this? So it has to be governed and secured. This can't become like the Wild Wild West, or what we call a data flea market, you know, data's everywhere. So, you know, history quickly forgets the companies that do not adjust to remain relevant. And I think we're in the midst of an exponential differentiation, and Collibra Data Intelligence Cloud is really established to be the key catalyst for companies that will be on the winning side. >> Well, that's big, because I mean, I'm a big believer in putting data in the hands of those folks in the line of business. And of course the big question that always comes up is, well, what about governance? What about security? So to the extent that you can federate that, that's huge. Because data is distributed by its very nature, and it's going to stay that way. It's complex. You have to make the technology work in that complex environment, which brings me to this idea of low code or no code. It's gaining a lot of momentum in the industry. Everybody's talking about it, but there are a lot of questions, you know: what can you actually expect from no code and low code? Who are the right, you know, potential users of that? Is there a difference between low and no?
And so from your standpoint, why is this getting so much attention and why now, Jim? >> You don't want me to go back even 25 years ago, when we were talking about fourth- and fifth-generation languages that people were building. And it really didn't reach the total value that folks were looking for, because it always fell short. And you'd say, listen, if you didn't do all the work it took to get to a certain point, how are you possibly going to finish it? And that's where the 4GLs and 5GLs fell short as a capability. With our stuff, if you really want great self-service, how are you going to be self-service if it still requires somebody to write code? Well, I guess you could do it if the only self-service people are people who write code, but that's a pretty small fraction. So if you truly want the ability to have something show up at your front door without you having to call somebody or make any effort to get it, then it needs to generate itself. The beauty of doing the catalog and governance is understanding all the data that is available for choice, giving someone a selection that uses objective criteria, like this has the best quality for what you want, or it's labeled, or it's unbiased, and it has that level of deterministic value to it, versus guessing, or subjectivity, or what my neighbor used, or what I used on my last job. Now that we've given people the power with confidence to say, this is the one that I want, the next step is, okay, can you deliver it to them without them having to write any code? So imagine being able to generate those instructions from everything that we have in our metadata repository, to say this is exactly the data I need you to go get, and perform what we call a distributed query against those data sets and bring it back to them. No code written.
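That metadata-driven, distributed-query generation might look like this in miniature. The catalog layout and names below are invented for illustration, not Collibra's actual repository schema; the point is that the query text comes entirely from metadata the catalog already holds, so the user writes no code:

```python
# Minimal sketch: generate a federated SQL query purely from catalog
# metadata. The catalog structure here is a hypothetical stand-in for
# a metadata repository, not any vendor's real schema.
def build_query(catalog, dataset, columns):
    """Emit a SELECT statement for the requested dataset and columns,
    keeping only columns the catalog actually knows about."""
    meta = catalog[dataset]
    cols = ", ".join(c for c in columns if c in meta["columns"])
    return f'SELECT {cols} FROM {meta["source"]}.{meta["table"]}'

catalog = {
    "customer_churn": {
        "source": "warehouse_eu",
        "table": "crm.customers",
        "columns": ["customer_id", "tenure", "churned"],
    },
}

sql = build_query(catalog, "customer_churn", ["customer_id", "churned"])
print(sql)  # SELECT customer_id, churned FROM warehouse_eu.crm.customers
```

Because the pipeline is derived from metadata on demand, it can be regenerated every time it is needed rather than maintained, which is the "zero-cost pipeline" argument in the conversation.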
And here's the real beauty, Dave: data pipeline development is a relatively expensive thing today, and that's why people spend a lot of money maintaining these pipelines. But imagine if there was zero cost to building your pipeline; would you spend any money to maintain it? Probably not. So if we can build it for no cost, then why maintain it? Just build it every time you need it. And again, it's done on a self-service basis. >> I really like the way you're thinking about this, 'cause you're right. A lot of times when you hear self-service, it's about making the hardcore developers, you know, able to do self-service. But the reality is, and you talk about that data pipeline, it's complex; a business person sitting there waiting for data, or wanting to put in new data, finds that the smallest unit is actually that entire team. And so you sit back and wait. And so to the extent that you can actually enable self-serve for the business by simplification, that's been the holy grail for a while, isn't it? >> I agree. >> Let's dig a little bit into where you're placing your bets. I mean, you're head of products; you've got to make bets, you know, certainly many, many months if not years in advance. What are your big focus areas of investment right now? >> Yeah, certainly. So one of the things we've done very successfully since our origin over a decade ago was building business user-friendly software, and it was predominantly in kind of a plumbing or infrastructure area. So, business users love working with our software. They can find what they're looking for, and they don't need to have some cryptic key of how to work with it. They can think about things in their terms and use our business glossary, and they can navigate through what we call our data intelligence graph and find just what they're looking for. And we don't require a business to change everything just to make it happen.
We give them kind of a universal translator to talk to the data. But with all that wonderful usability, the common compromise you make is that it's only good up to a certain amount of information, kind of like Excel. You know, you can do almost anything with Excel, right? But when you get into large volumes, it becomes problematic, and now you need to, you know, go with a hardcore database and an application on top. So what the industry is pulling us towards is far greater amounts of data, not just millions or even tens of millions, but into the hundreds of millions and billions of things that we need to manage. So we have a huge focus on scale and performance on a global basis, and that's a mouthful, right? Not only are you dealing with large amounts at performance, but you have to do it in a global fashion and make it possible for somebody who might be operating in Southeast Asia to have the same experience with the environment as they would in Los Angeles. And the data needs to therefore go to the user, as opposed to having the user come to the data, as much as possible. So it really does put a lot of emphasis on some of what you call the non-functional requirements, also known as the -ilities, and so our ability to bring the data and handle those large enterprise-grade capabilities at scale and performance globally is what's really driving a good number of our investments today. >> I want to talk about data quality. This is a hard topic, but it's one that's so important. And I think it's been really challenging and somewhat misunderstood. When you think about the chief data officer role itself, it kind of emerged from these highly regulated industries. And it came out of data quality, kind of a back-office role that's gone front and center and is now, you know, pretty strategic. Having said that, you know, the prevailing philosophy is, okay, we've got to have this centralized data quality approach, and it's going to be imposed throughout.
And it really is a hard problem, and I think about, you know, these hyper-specialized roles, like, you know, the quality engineer and so forth. And again, the prevailing wisdom is, if I could centralize that, it can be lower cost and I can service these lines of business, when in reality, the real value is, you know, speed. And so how are you thinking about data quality? You hear so much about it. Why is it such a big deal, and why is it so hard to prioritize in the marketplace? Your thoughts. >> Thanks for that. So we of course acquired a data quality company earlier this year, OwlDQ, and the big question is, okay, so why, why them, and why now, not before? Well, at least a decade ago you started hearing people talk about big data; it was probably around 2009 that it was becoming the big talk. And what we don't really talk about when we talk about this ever-expanding data is the byproduct: the velocity of data is increasing dramatically. So the speed at which new data is being presented, the way in which data is changing, is dramatic. And why is that important to data quality? 'Cause data quality historically, for the last 30 years or so, has been a rules-based business, where you analyze the data at a certain point in time and you write a rule for it. Now, there's already room for error there, 'cause humans are involved in writing those rules, but now with the increased velocity, the likelihood that a rule is going to atrophy and become no longer valid or useful to you increases exponentially. So we were looking for a technology that was doing it in a new way, similar to the way that we do auto-classification when we're cataloging attributes: how do we look at millions of pieces of information around metadata and decide what it is, to put it into context? The ability to automatically generate these rules, and then continuously adapt these rules as data changes, is really a game changer for the industry itself. So we chose OwlDQ for that very reason.
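The profile-derived, continuously re-fitted rules being described can be sketched roughly like this. The thresholds and rule shapes are illustrative only, not OwlDQ's actual implementation: rules are derived from the data's own statistics and simply re-derived as the data drifts, instead of being hand-written once and left to atrophy:

```python
import statistics

# Sketch of profile-driven rule generation: derive a null-rate bound
# and a 3-sigma value range from the data itself, then re-derive the
# rules as new data arrives. All thresholds are illustrative.
def generate_rules(values):
    present = [v for v in values if v is not None]
    mean = statistics.mean(present)
    sd = statistics.pstdev(present) or 1.0   # guard against zero spread
    return {
        "max_null_fraction": 1 - len(present) / len(values),
        "low": mean - 3 * sd,
        "high": mean + 3 * sd,
    }

def violations(values, rules):
    present = [v for v in values if v is not None]
    out = []
    null_frac = 1 - len(present) / len(values)
    if null_frac > rules["max_null_fraction"] + 0.05:  # small tolerance
        out.append("null rate drifted")
    out += [f"out of range: {v}" for v in present
            if not rules["low"] <= v <= rules["high"]]
    return out

rules = generate_rules([10, 11, 9, 10, 12, None])
print(violations([10, 11, 250], rules))        # the 250 is flagged
rules = generate_rules([10, 11, 250, 240, 260])  # re-derived as data changes
```

A static handwritten rule would have kept flagging the new regime forever; re-deriving the profile is what "continuously adapt as data changes" means here.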
It's not only that they had this really kind of modern architecture to automatically generate rules, but then to continuously monitor the data and adjust those rules, cutting out the huge amounts of cost of maintaining rules that aren't helping you. And frankly, you know how this works: no one really complains about it until there's the squeaky wheel, you know, you get a fine or an exposure, and that's what causes a lot of issues with data quality. And then why now? Well, I think, and this is my speculation, there's so much movement of data to the cloud right now. And so anyone who's made big investments in data quality historically for their on-premise data warehouses, Netezzas, Teradatas, Oracles, et cetera, or even their data lakes, is now moving to the cloud. And they're saying, hmm, what investments are we going to carry forward that we had on premise? And which ones are we going to start anew? And data quality seems to be ripe for something new, and so these new investments in data in the cloud are now looking at a new, next-generation method of doing data quality. And that's where we're really fitting in nicely. And of course, finally, you can't really do data governance and cataloging without data quality, and data quality without data governance and cataloging is kind of a hollow long-term story. So the three working together is a very powerful story. >> I've got to ask you some Columbo questions about this, 'cause you know, you're right. It's rules-based, and so my, you know, immediate thought is, okay, what are the rules around COVID or hybrid work, right? If the rules are static, there's so much unknown, and so what you're saying is you've got a dynamic process to do that. And one of my gripes about the whole big data thing, and you know, you referenced that 2009, 2010 era, I loved it, because there were a lot of profound things about Hadoop and a lot of failings.
And one of the challenges is really that there's no context in the big data system. You know, the folks in the data pipeline, they don't have the business context. So my question is, and it sounds like you've got this awesome magic to automate, who adjudicates the dynamic rules? Do humans play a role? What role do they play there? >> Absolutely. There's the notion of sampling. So you can only trust a machine to a certain point before you want to have some type of steward, or assisted or supervised learning, that goes on. So, you know, maybe one out of 10, one out of 20 rules that are generated, you might want to have somebody look at. There are ways to do the equivalent of supervised learning without actually paying the cost of the supervisor. Let's suppose that you've written a thousand rules for your system that are five years old. And we come in with our ability, and we analyze the same data, and we generate rules ourselves. We compare the two, and there's absolutely going to be some exact matching, some overlap that validates one another. And that gives you confidence that the machine learning did exactly what you did; the likelihood that you guessed wrong and the machine learning guessed wrong in exactly the same way seems a pretty, pretty small concern. So now you're really saying, well, why are they different? And now you start to study the samples. And what we learned is that we were able to generate between 60 and 70% of these rules, and anytime we were different, we were right. Almost every single time; like, only one out of a hundred times was it proven that the handwritten rule was the more profound outcome. And of course, it's machine learning. So it learned, and it caught up the next time.
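That compare-and-sample workflow, where machine-generated rules are checked against the handwritten set and only the differences go to a steward, reduces to plain set operations. The rule strings below are simplified, hypothetical stand-ins for real data quality rules:

```python
# Sketch: exact matches between handwritten and auto-generated rules
# confirm each other; only the symmetric difference needs a human
# steward's review. Rule syntax here is illustrative only.
handwritten = {"not_null(customer_id)", "range(age, 0, 120)",
               "matches(zip, '[0-9]{5}')"}
generated = {"not_null(customer_id)", "range(age, 0, 120)",
             "range(balance, -1e6, 1e6)"}

agreed = handwritten & generated       # mutual confirmation, no review
for_steward = handwritten ^ generated  # differences sampled by a human

print(sorted(agreed))
print(sorted(for_steward))
```

The overlap is the cheap supervised signal: it validates the generator without paying a supervisor for every rule, and only the (usually small) disagreement set costs human time.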
So that's the true power of this innovation: it learns from the data as well as the stewards, and it gives you confidence that you're not missing things, and you start to trust it, but you should never completely walk away. You should constantly do your periodic sampling. >> And the secret sauce is math. I mean, I remember back in the mid-2000s, it was like the 2006 timeframe. You mentioned, you know, auto-classification. That was a big problem with the federal rules of civil procedure, trying to figure out, okay, you know, you had humans classifying, and humans don't scale, until you had, you know, all kinds of support vector machines and probabilistic latent semantic indexing, but you didn't have the compute power or the data corpus to really do it well. So it sounds like a combination of, you know, cheaper compute, a lot more data, and machine intelligence have really changed the game there. Is that a fair assumption? >> That's absolutely fair. I think the other aspect to keep in mind is that it's an innovative technology that actually brings all that compute as close to the data as possible. One of the greatest expenses of doing data quality was of course the profiling concept, bringing up the statistics of what the data represents. And in most traditional senses, that data is completely pulled out of the database itself into a separate area, and now you start talking about terabytes or petabytes of data; it takes a long time to extract that much information from a database and then to process through it all. Imagine bringing that profiling closer into the database, happening in the same space as the data; that cuts out like 90% of the unnecessary processing. It also gives you the ability to do it incrementally. So you're not doing a full analysis each time; you have kind of an expensive pass when you're first looking at a full database, and then maybe over the course of a day, an hour, 15 minutes, you've only seen a small segment of change.
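That incremental profiling idea, one expensive initial pass and then cheap folds of each changed segment, can be sketched with running aggregates (this is an illustration of the general pattern, not any vendor's profiler):

```python
# Sketch of incremental profiling: instead of rescanning the full
# table, keep running aggregates and fold each new batch in. Mean,
# min, and max are shown; the same pattern extends to variance and
# null counts.
class RunningProfile:
    def __init__(self):
        self.count = 0
        self.total = 0.0
        self.min = float("inf")
        self.max = float("-inf")

    def update(self, batch):
        """Fold a new batch of values into the running statistics."""
        self.count += len(batch)
        self.total += sum(batch)
        self.min = min(self.min, min(batch))
        self.max = max(self.max, max(batch))

    @property
    def mean(self):
        return self.total / self.count

p = RunningProfile()
p.update([3, 5, 7])          # the initial, expensive full scan
p.update([9])                # later: only the day's changed segment
print(p.mean, p.min, p.max)  # 6.0 3 9
```

Each update touches only the new rows, which is what makes profiling feel "transactional" rather than like a nightly full-table job.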
So now it feels more like a transactional analysis process. >> Yeah, and that's, you know, again, we talked about the old days of big data, you know, the Hadoop days, and what was profound was that it was all about bringing five megabytes of code to a petabyte of data, but that didn't happen. We shoved it all into a central data lake. I'm really excited for Collibra. It sounds like you guys are really on the cutting edge and doing some really interesting things. I'll give you the last word, Jim, please bring us home. >> Yeah, thanks Dave. So one of the really exciting things about our solution is that it's a combination of best-of-breed capabilities, but also integrated. So to actually create the full and complete story that customers are looking for, you don't want to have them worry about a complex integration, and trying to manage multiple vendors and the timing of their releases, et cetera. If you can have one vendor where you don't have to say, well, that's good enough, because every single component is in fact the best of breed that you can find, and it's integrated, and they'll manage it as a service, you truly unlock the power of the data-literate individuals in your organization. And again, that goes back to our overall goal. How do we empower the hundreds of millions of people around the world who are just looking to make an insightful decision? They feel completely locked out; it's as if they're looking for information before the internet, and they're kind of limited to whatever their local library has. And if we can truly become somewhat like the internet of data, we make it possible for anyone to access it, but we still govern it and secure it for privacy laws. I think we do have a chance to change the world for the better. >> Great. Thank you so much, Jim. Great conversation; really appreciate your time and your insights. >> Yeah, thank you, Dave. Appreciate it. >> All right, and thank you for watching theCUBE's continuous coverage of Data Citizens '21.
My name is Dave Vellante. Keep it right there for more great content. (upbeat music)

Published Date : Jun 17 2021


Wim Coekaerts, Oracle | CUBEconversations


 

(bright upbeat music) >> Hello everyone, and welcome to this exclusive Cube Conversation. We have the pleasure today to welcome Wim Coekaerts, senior vice president of software development at Oracle. Wim, it's good to see you. How you been, sir? >> Good, it's been a while since we last talked, but I'm excited to be here, as always. >> It was during COVID, though, and so I hope to see you face to face soon. But so Wim, since the Barron's article declared Oracle a cloud giant, we've really been sort of paying attention and amping up our coverage of Oracle, and asking a lot of questions like, is Oracle really a cloud giant? And I'll say this: we've always stressed that Oracle invests in R&D, and of course there's a lot of D in that equation. And over the past year, we've seen, of course, the autonomous database ramping up, especially notably on Exadata Cloud@Customer; we've covered that extensively. We covered the autonomous data warehouse announcement; the blockchain piece, which of course got me excited 'cause I get to talk about crypto with Juan; Roving Edge, which, for everybody who might not be familiar with that, is an edge cloud service; and dedicated regions, which you guys announced, which is a managed cloud region. And so it's clear you guys are serious about cloud. These are all cloud-first services using second-gen OCI. So, Oracle's making some moves, but the question is, what are customers doing? Are they buying this stuff? Are they leaning into these new deployment models for the databases? What can you tell us? >> You know, definitely. And I think, you know, the reason that we have so many different services is that not every customer is the same, right? One of the things that people don't necessarily realize, I guess, is in the early days of cloud, lots of startups went there because they had no local infrastructure. It was easy for them to get started in something completely new.
Our customers are mostly enterprise customers that have huge data centers in many cases; they have lots of local real estate. And when they think about cloud, they're wondering, how can we create an environment that doesn't cause us to have two ops teams and two ways of managing things? And so, they're trying to figure out exactly what it means to take their real estate and either move it wholesale to the cloud over a period of years, or they say, "Hey, some of these things need to be local, maybe even for regulatory purposes." Or just because they want to keep some data locally within their own data centers, but then they have to move other things remotely. And so, there's many different ways of solving the problem. And you can't just say, "Here's one cloud, this is where you go and that's it." So, we basically say, if you're on prem, we provide you with cloud services on-premises, like dedicated regions or Oracle Exadata Cloud@Customer and so forth, so that you get the benefits of what we built for cloud and spend a lot of time on, but you can run them in your own data center. Or people say, "No, no, no. I want to get rid of my data centers, I do it remotely." Okay, then you do it in Oracle cloud directly. Or you have a hybrid model where you say, "Some stays local, some is remote." The nice thing is you get the exact same API, the exact same way of managing things, no matter how you deploy it. And that's a big differentiator. >> So, is it fair to say that you guys have, I think of it as a purpose-built cloud, 'cause I talk to a lot of customers. I mean, take an insurance app like Claims, and customers tell me, "I'm not putting that into the public cloud." But you're making a case that it actually might make sense in your cloud, because you can support those mission-critical applications with the exact same experience, same API, same...
I can get, you know, take RAC for instance: I can't get, you know, Real Application Clusters in an Amazon cloud, but presumably I can get them in your cloud. So, is it fair to say you have a purpose-built cloud specifically for the most demanding applications? Is that the right way to look at it, or not necessarily? >> Well, it's interesting. I think the thing to be careful of is, I guess, "purpose-built cloud" might for some people mean, "Oh, you can only do things if it's Oracle-centric." Right, and so I think that, fundamentally, Oracle cloud provides a generic cloud. You can run anything you want, any application, any deployment model that you have. Whether you're an Oracle customer or not, we provide you with a full cloud service, right? However, given that we know and have known, obviously for a long time, how our products run best, when we designed OCI Gen 2, when we designed the networking stack, the storage layer, and all that stuff, we made sure that it would be capable of running our more complex environments, because our advantage is, Oracle customers have a place where they can run Oracle the best. Right, and so obviously the context of purpose-built fits that model, where yes, we've made some design choices that allow us to run RAC inside OCI and allow us to deploy Exadatas inside OCI, which you cannot do in other clouds. So yes, it's purpose-built in that sense, but I would caution on the side that it sometimes might imply that it's unique to Oracle products, and I guess one way to look at it is, if you can run Oracle, you can run everything else, right? Because it's such a complex suite of products that if you can run that, then it'll support any other (mumbling). >> Right. Right, it's like New York City. If you can make it there, you can make it anywhere. If I can run the most demanding mission-critical applications, well, then I can run a web app, for instance, okay.
I've got a question on tooling, 'cause there's a lot of tooling; sometimes it makes my eyes bleed when I look at all this stuff. Square the circle for me: doesn't autonomous, an autonomous database or Autonomous Linux, for instance, eliminate the need for all these management tools? >> You know, it does. It eliminates the need for management at the lower level, right. So, with Autonomous Linux, what we offer and what we do is, we automatically patch the operating system for you and make sure it's secure from a security patching point of view. We eliminate the downtime, so when we do it, you don't have to restart applications. However, we don't necessarily know what app is installed on top of it. You know, people can deploy their own applications, they can run third-party applications, they can use it for development environments, and so forth. So, there's sort of the core operating system layer, and on the database side, you know, we take care of database patching and upgrades and storage management and all that stuff. So the same thing: if you run your own application inside the database, we can manage the database portion, but we don't manage the application portion, just like on the operating system. And so, there's still a management level that's required, no matter what, a level above that. And the other thing, and I think this is what a lot of the stuff we're doing is based on, is that you still have tons of stuff on-premises that needs full management. You have applications that you migrate that are not running Autonomous Linux; it could be a Windows application that's running, or it could be something on a different Linux distribution, or you could still have some databases installed that you manage yourself, where you don't want to use the autonomous offering, or you're on a third party. And so we want to make sure that we can address all of them with a single set of tools, right.
>> Okay, so I wonder, can you give us just an overview, just briefly, of the products that comprise the cloud services, your management solution, what's in that portfolio? How should we think about it? >> Yeah, so it basically starts with Enterprise Manager on-premises, right? Which has been the tool that our Oracle database customers in particular have been using for many years and is widely used by our customer base. And so you have those customers, most of their real estate is on-premises, and they can use Enterprise Manager locally. They have it running and they don't want to change. They can keep doing that, and we keep enhancing it, as you know, with newer versions of Enterprise Manager getting better. So, then there's the transition to cloud, and what we've been doing over the last several years is basically, well, one aspect is looking at the things people like in Enterprise Manager and making sure that we provide similar functionality in Oracle cloud. So, we have Performance Hub for looking at how the database performance is working. We have APM for Application Performance Monitoring, we have Logging Analytics that looks at all the different log files and helps make sense of them for you. We have Database Management. So, a lot of the functionality that people like in Enterprise Manager around the database we've built into Oracle cloud, and, you know, a number of other things are coming, like Operations Insights, to look at how databases are performing and how we can potentially do consolidation and stuff. So we've basically looked at what people have been using on-premises, how we can replicate that in Oracle cloud, and then also, when you're in a cloud, how you can make use of all the base services that a cloud vendor provides, telemetry, logging, and so forth.
And so, it's a broad portfolio, and what it allows us to do with our customers is say, "Look, if you're predominantly on-prem and you want to stay there, keep using Enterprise Manager. If you're starting to move to Oracle cloud, you can first use EM, look at what's happening in the cloud, and then switch over, start using all the management products we have in the cloud and let go of the Enterprise Manager instance on-premises." So you can gradually shift, you can start using more and more. Maybe you start with analytics first, and then you start with insights, and then you switch to database management. So there's a whole suite of possibilities. >> (indistinct) you mentioned APM. I've been watching that space, it's really evolved. I mean, you saw, you know, years ago, Splunk came out with sort of log analytics, maybe simplified that a little bit; now you're seeing some open source stuff come out. You're seeing a lot of startups come out, you saw Cisco made an acquisition with AppD, and that whole space is transforming. It seems that the future is all about that end-to-end visibility, simplifying the ability to remediate problems. And I'm thinking, okay, you just mentioned you guys have a lot of these capabilities, you've got Autonomous, is that sort of where you're headed with your capabilities? >> It definitely is, and in fact, one of the... So, you know, APM allows you to say, "Hey, here's my web browser and it's making a connection to the database, to a middle tier," and it's hard for operations people in companies when the end user calls and says, "You know, my order entry system is slow." Is it the browser? Is it the middle tier that they connect to? Is it the database that's overloaded in the backend? And so, APM helps you with tracing, you know, what happens from where to where, where the delays are. Now, once you know where the delay is, you need to drill down on it. And then you need to go look at log files. And that's where the logging piece comes in.
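That trace-then-logs drill-down can be sketched like this. The span and log record shapes below are invented for illustration, not OCI APM's actual API: the trace identifies the slowest tier for a request, and the shared trace ID pulls up only that tier's relevant log lines:

```python
# Sketch: use the APM-style trace to find the slow tier, then jump
# straight to the log lines that share its trace ID and tier, rather
# than reading every log file by hand. Data shapes are hypothetical.
spans = [
    {"trace": "t1", "tier": "browser",  "ms": 40},
    {"trace": "t1", "tier": "mid_tier", "ms": 55},
    {"trace": "t1", "tier": "database", "ms": 900},
]
logs = [
    {"trace": "t1", "tier": "database", "msg": "lock wait on ORDERS"},
    {"trace": "t1", "tier": "mid_tier", "msg": "request forwarded"},
    {"trace": "t2", "tier": "database", "msg": "unrelated request"},
]

def diagnose(trace_id, spans, logs):
    """Return the slowest tier for a trace and its related log lines."""
    slow = max((s for s in spans if s["trace"] == trace_id),
               key=lambda s: s["ms"])
    related = [entry["msg"] for entry in logs
               if entry["trace"] == trace_id and entry["tier"] == slow["tier"]]
    return slow["tier"], related

print(diagnose("t1", spans, logs))  # ('database', ['lock wait on ORDERS'])
```

Tying the two data sets together by a common identifier is the whole point of having APM and Logging Analytics in one place rather than in two separate vendor tools.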
And what happens very often is that these log files are very difficult to read. You have networking log files and you have database log files and you have reslog files, and you almost have to be an expert in all of these things. And so, with Logging Analytics, we basically provide sort of an expert dashboard system on top of that, that allows us to say, "Hey, when you look at logging for the network stack, here are the most important errors that we could find." So you don't have to go and learn all the details of these things. And so, the real advantage is saying, "Hey, we have APM, we have Logging Analytics, we can tie the two together." Right, and so we can provide a solution that actually helps solve the problem, rather than you needing to use APM from one vendor and Logging Analytics from another vendor, and, you know, that doesn't necessarily work very well. >> Yeah, and that's why you're seeing with, like, the ELK Stack, it's cool, you're an open source guy, it's cool as open source, but it's complicated to set up, and all that that brings. So, that's kind of a cool approach that you guys are taking. You mentioned Enterprise Manager, you just made a recent announcement, a new release. What's new in that new release? >> So Enterprise Manager 13.5 just got released. And so EM keeps improving, right? We've made a lot of changes over the years, and one of the things we've done in recent years is do more frequent updates, sort of the cloud model, frequent updates that are not just bug fixes but also introduce new functionality, so people get more stuff more frequently rather than, you know, once a year. And that's certainly been very attractive because it shows that it's a lively, evolving product. And one of the main focus areas of course is cloud. 
And so a lot of the work that happens in Enterprise Manager is hybrid cloud, which basically means I run Enterprise Manager and I have some stuff in Oracle Cloud, I might have some other stuff in another cloud vendor's environment, and so we can actually see which databases are where and provide you with one consolidated view in one tool, right? And of course it supports Autonomous Database and Exadata cloud servers and so forth. So you can, from EM, see both your databases on-premises and also how it's doing in Oracle Cloud as you potentially migrate things over. So that's one aspect. And then the other one is in terms of operations and automation. One of the things that we started doing again with Enterprise Manager in the last few years is making sure that everything has a REST API. So we try to make the experience with Enterprise Manager be very similar to how people work with a cloud service. Most folks now writing automation tools are used to calling REST APIs. EM in the early days didn't have REST APIs, now we're making sure everything works that way. And one of the advantages is that we can do extensibility without having to rewrite the product, we just add the API calls in the agent, and it makes it a lot easier to become part of a modern system. Another thing that we introduced last year, but that we're evolving with more dashboards and so forth, is the Grafana plugin. So even though Enterprise Manager provides lots of cool tools, a lot of cloud operations folks use a tool called Grafana. And so we provide a plugin that allows customers to have Grafana dashboards, but the data actually comes out of Enterprise Manager. So that allows us to integrate EM into a more cloudy world, in a cloud environment. I think the other important part is making sure that, again, Enterprise Manager has sort of a cloud feel to it. 
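The point about everything having a REST API is that fleet automation becomes plain HTTP calls instead of tool-specific scripting. As a minimal sketch only: the endpoint path, resource name, and token below are illustrative assumptions, not the documented Enterprise Manager API.

```python
# Hypothetical sketch of REST-driven automation against a management server.
# The "/em/api/..." path and bearer-token scheme are invented for illustration.
import urllib.request


def build_em_request(base_url, resource, token):
    """Build an authenticated GET request for a management REST resource."""
    return urllib.request.Request(
        url=f"{base_url}/em/api/{resource}",       # illustrative path, not the real EM API
        headers={
            "Authorization": f"Bearer {token}",    # token auth, as cloud services typically do
            "Accept": "application/json",
        },
    )


req = build_em_request("https://em.example.com", "targets", "abc123")
print(req.full_url)  # https://em.example.com/em/api/targets
```

The appeal, as described in the interview, is that the same request-building pattern works for any automation tool that already speaks REST, without EM-specific client libraries.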
So when you do patching and upgrades, it's near zero downtime, which basically means that we do all the upgrades for you without having to bring EM down. Because even though it's a management tool, it's used for operations. So if there were downtime for patching Enterprise Manager for an hour, then for that hour, it's a blackout window for all the monitoring we do. And so we want to avoid that from happening, so now EM is upgrading even though all the events are still happening and being processed, and then we do a very short switch. So that helps our operations people to be more available. >> Yes. I mean, I've been talking about Automated Operations since, you know, lights-out data centers back in the eighties (laughs). I remember (indistinct) data center one time, lights out, there were StorageTek libraries in there and so... But there were a lot of unintended consequences around, you know, automated ops, and so people were sort of scared to go there, at least lean in too much, but now with all this machine intelligence... So you're talking about ops automation, you mentioned the REST APIs, the Grafana plugins, the cloud feel, is that what you're bringing to the table that's unique, is that unique to Oracle? >> Well, the integration with Oracle in that sense is unique. So one example is you mentioned the word migration, right? And so database migration tends to be something, you know, customers obviously take very seriously. You go from one place, you have to move all your data to another place that runs in a slightly different environment. And so how do you know whether that migration is going to work? And you can't migrate a thousand databases manually, right? So automation, again, it's not just... Automation is not just to say, "Hey, I can do an upgrade of a system or I can make sure that nothing is done by hand when you patch something." It's more about having a huge fleet of servers and a huge fleet of databases. 
How can you move something from one place to another and automate that? And so with EM, you know, we start with sort of the prerequisite phase. So we're looking at the existing environment, how much memory does it need? How much storage does it use? Which version of the database does it have? How much data is there to move? Then on the target side, we see whether the target can actually run in that environment. Then we go and look at, you know, how do you want to migrate? Do you want to migrate everything from sort of a physical model, or do you want to migrate it from a logical model? Do you want to do it while your environment is still running, so that you start backing up the data to the target database while your existing production system is still running? Then we do a short switch afterwards, or you say, "No, I want to bring my database down. I want to do the migrate and then bring it back up." So there's different deployment models that we can let our customers pick. And then when the migration is done, we have a ton of health checks that can validate whether the target database will run basically the exact same way. And then you can say, "I want to migrate 10 databases or 50 databases" and it'll work. It's all automated out of the box. >> So you're saying, I mean, you've looked at the prevailing way you've done migrations, historically you'd have to freeze the code and then migrate, and it would take forever, it was a function of the number of lines of code you had. And then a lot of times, you know, people would say, "We're not going to freeze the code," and then they would almost go out of business trying to merge the two. You're saying in 2021, you can give customers the choice, you can migrate, you could change the, you know, refuel the plane while you're in midair? Is that essentially what you're saying? >> That's a good way of describing it, yeah. So your existing database is running and we can do a logical backup and restore. 
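The prerequisite phase described here (compare the source database's footprint and version against the candidate target, and only proceed if everything checks out) can be sketched as a toy function. The field names and the example numbers are invented for illustration; the real checks are, of course, far more extensive.

```python
# Toy sketch of a migration prerequisite check: compare a source database's
# requirements against a candidate target and report blocking issues.
# Field names and thresholds are invented, not Enterprise Manager's actual checks.

def check_migration_prereqs(source, target):
    """Return a list of blocking issues; an empty list means the target passes."""
    issues = []
    if source["memory_gb"] > target["memory_gb"]:
        issues.append("target has less memory than the source needs")
    if source["storage_gb"] > target["storage_gb"]:
        issues.append("target storage is too small for the data to move")
    if source["db_version"] > target["db_version"]:
        issues.append("target runs an older database version than the source")
    return issues


source = {"memory_gb": 64, "storage_gb": 500, "db_version": 19}
target = {"memory_gb": 128, "storage_gb": 400, "db_version": 19}
print(check_migration_prereqs(source, target))
# ['target storage is too small for the data to move']
```

Running the same check against every database in a fleet is what turns "migrate 10 or 50 databases" from a manual project into a batch job.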
So while transactions are happening we're still migrating it over, and then you can do a cutover. It makes the transition a lot easier. But the other thing is that in the past, migrations would typically be two things. One is one database version to the next, more upgrades than migration. Then the second one is old hardware or a different CPU architecture moving to newer hardware and a new CPU architecture. Those were sort of the typical migrations that you had prior to cloud. And from a sysadmin point of view, or a DBA's, it was all something you could touch, you could physically touch the boxes. When you move to cloud, it's this nebulous thing somewhere in a data center that you have no access to. And that by itself creates a barrier for a lot of admins and DBAs to saying, "Oh, it'll be okay." There's a lot of concern. And so by baking in all these tests and the prerequisites and all the dashboards to say, you know, "This is what you use. These are the features you use. We know that they're available on the other side, so you can do the migration." It helps solve some of these problems and remove the barriers. >> Well, that was kind of the same vision when you guys came up with it. I don't know, quite a while ago now. And it took a while to get there with, you know, you had gen one and then gen two, but that is, I think, unique to Oracle. I know maybe some others are trying to do that as well, but you were really the first to do that and so... I want to switch topics to talk about security. It's a hot topic. You guys, you know, like many companies are really focused on security. Does Enterprise Manager bring any of that over? I mean, the prevailing way to do security oftentimes is to do scripts and write, you know, custom security policies; scripts are fragile, they break, what can you tell us about security? >> Yeah. So there's really two things, you know. One is, we obviously have our own best security practices. 
How we run a database inside Oracle for our own world, we've learned about that over the years. And so we sort of baked that knowledge into Enterprise Manager. So we can say, "Hey, if you install this way, we do the install and the configuration based on our best practice." That's one thing. The other one is there's STIG, there's PCI, and there's HIPAA, those are the main ones. And so customers can do it their own way. They can download the documentation and do it manually. But what we've done, and we've done this for a long time, is basically bake those policies into Enterprise Manager. So you can say, "Here's my database, this needs to be PCI compliant or it needs to be HIPAA compliant," and you push a button, and then we validate the policies in those documents, or in those prescribed files, and we make sure that the database is compliant with that. And so we take that manual work and all that stuff basically out of the picture, we say, "Push this button and we'll take care of it." >> Now, Wim, just a quick sidebar here, last time we talked, it was under a year ago. It was definitely during COVID and it's still during COVID. We talked about the state of the penguin. So I'm wondering, you know, what's the latest update for Linux, any Linux developments that we should be aware of? >> Linux, we're still working very hard on Autonomous Linux, and that's something where we can really differentiate and solve a problem. Of course, one of the things to mention is that Enterprise Manager can do HIPAA compliance on Oracle Linux as well. So the security practices are not just for the database, it can also go down to the operating system. Anyway, so on the Autonomous Linux side, you know, management in Oracle Cloud's OS management is evolving. 
We're spending a lot of time on integrating log capturing, and if something were to go wrong, we can analyze a log file on the fly and send you a notification saying, "Hey, you know, there was this bug, here's the cause, and here's potentially a fix for it." That's Autonomous Linux, and we're putting a lot of effort into that. And then also sort of IT operations management, where we can look at the different applications that are running. So you're running a web server on a Linux environment or you're running some Java processes, we can see what's running. We can say, "Hey, here's the CPU utilization over the past week or the past year." And then how is this evolving? Say, if something suddenly spikes, we can say, "Well, that's normal, because every Monday morning at 10 o'clock there's a spike," or, "This is abnormal." And then you can start drilling this down. And this comes back to, over time, integration with whether it's APM or Logging Analytics, we can connect the dots, right? We can say, "Push this thing, then click on that link." We give you the information. So it's that integration with the entire cloud platform that's really happening now. >> Integration, there's that theme again. I want to come back to migration, and I think you did a good job of explaining how you sort of make that non-disruptive, and you know, your customers, I think, you know, generally you're pushing, you know, that experience which makes people more comfortable. But my question is, why do people want to migrate if it works and it's on-prem, are they doing it just because they want to get out of the data center business? Or is it a better experience in the cloud? What can you tell us there? >> You know, it's a little bit of everything. You know, one is, of course, the idea that data center maintenance costs are very high. 
The other one is that when you run your own data center, you know, you have these problems; we obviously have them too as a cloud vendor, but we're in this business. But if you buy a server, then in three years that server basically is depreciated, surpassed by newer versions, and you have to do migration stuff. And so one of the advantages with cloud is you push a button, you have a new version of the hardware, basically, right? So the refreshes happen on a regular basis. You don't have to go and recycle that yourself. Then the other part is the subscription model. It's a lot easier to pay for what you use, rather than having a data center where, whether it's used or not, you pay for it. So there's the cost advantages and predictability; what you need, you pay for, and you can say, "Oh, next year we need to get X more VMs." And it's easier to scale that, right? We take care of dealing with capacity planning. You don't have to deal with capacity planning of hardware, we do that as the cloud vendor. So there's all these practical advantages you get from doing it remotely, and that's really what the appeal is. >> Right. So, as it relates to Enterprise Manager, did you guys have to, like, tear down the code and rebuild it? Was it an entire redo? How did you achieve that? >> No, no, no. So, Enterprise Manager keeps evolving, and you know, we change the underlying technologies here and there, piecemeal, not sort of a wholesale replacement. And so in 13.5, there's a lot of new stuff, but it's built on the existing EM core. And so we're just, you know, improving certain areas. One of the things is, stability is important for our customers, obviously. And so by picking things piecemeal, we replace one engine rather than the whole thing. It allows us to introduce change more slowly, right. And then it's well-tested as a unit, and then we go on to the next thing. 
And then the other one is, I mentioned earlier, a lot of the automation and extensibility comes from REST APIs. And so instead of basically rewriting everything, we just provide a REST endpoint, and we make all the new features that we build automatically be REST enabled. So that makes it a lot easier for us to introduce new stuff. >> Got it. So if I want to poke around with this new version of Enterprise Manager, can I do that? Is there a place I can go, do I have to call a rep? How does that work? >> Yeah, so for information you can just go to oracle.com/enterprise manager. That's the website that has all the data. The other thing is, if you're already playing with Oracle Cloud or you use Oracle Cloud, we have Enterprise Manager images in the marketplace. So if you have never used EM, you can go to Oracle Cloud, push a button in the marketplace, and you get a full Enterprise Manager installation in a matter of minutes. And then you can just start using that as well. >> Awesome. Hey, I wanted to ask you about, you know, people forget that you guys are the stewards of MySQL, and we've been looking at MySQL Database Cloud Service with HeatWave. Did you name that? And so I wonder if you could talk about what you're doing with regard to managing HeatWave environments? >> So, HeatWave is the MySQL option that helps with analytics, right? And it really accelerates MySQL usage by 100x, and in some cases more, and it's transparent to the customer. So as a MySQL user, you connect with standard MySQL applications and APIs and SQL and everything. And the HeatWave part is all done within the MySQL server. The engine itself says, "Oh, this SQL query we can offload to the backend HeatWave cluster," which then does in-memory operations and blazingly fast returns it to you. And so the nice thing is that it turns every single MySQL database into also a data warehouse, without any change whatsoever in your application. So it's been widely popular and it's quite exciting. 
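The "transparent to the customer" point is the key one: the application submits ordinary SQL and the server decides what to offload. The sketch below only illustrates that the client side stays unchanged; the schema is invented, and sqlite3 stands in for a MySQL connection purely so the snippet runs anywhere.

```python
# Sketch: an analytic query a HeatWave-style engine could accelerate, issued
# through an ordinary client call. The table and data are made up, and sqlite3
# is used here only as a stand-in engine; against MySQL with HeatWave, the
# same application code would run unchanged while the server offloads the work.
import sqlite3

ANALYTIC_QUERY = """
    SELECT region, SUM(amount) AS revenue
    FROM orders
    GROUP BY region
    ORDER BY revenue DESC
"""

def run_report(conn):
    """Identical call whether or not an accelerator is enabled server-side."""
    cur = conn.execute(ANALYTIC_QUERY)
    return cur.fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("emea", 10.0), ("amer", 25.0), ("emea", 5.0)])
print(run_report(conn))  # [('amer', 25.0), ('emea', 15.0)]
```

That no-application-change property is what the interview means by every MySQL database "also" becoming a data warehouse.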
I didn't personally name it HeatWave, that was not my decision, but it sounds very cool. >> That's very cool. >> Yeah, it's a very cool name. >> We love MySQL, we started our company on the LAMP stack, so like many... >> Oh? >> Yeah, yeah. >> Yeah, yeah. That's great. So, yeah. And so with HeatWave, or MySQL in general, we're basically doing the same thing as we have done for the Oracle Database. So we're going to add more functionality in our database management tools to also look at HeatWave. So whether it's doing things like Performance Hub or generic database management and monitoring tools, we'll expand that, you know, in the near future. >> That's great. Well, Wim, it's always a pleasure. Thank you so much for coming back in "The Cube" and letting me ask all my Colombo questions. It was really a pleasure having you. (mumbling) >> It's good to be here. Thank you so much. >> You're welcome. And thank you for watching, everybody, this is Dave Vellante. We'll see you next time. (bright music)

Published Date : Apr 27 2021


Sagar Kadakia | CUBE Conversation, December 2020


 

>> From The Cube Studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a Cube Conversation. >> Hello, everyone, and welcome to this Cube Conversation, I'm Dave Vellante. Now, you know I love data, and today we're going to introduce you to a new data and analytics platform, and we're going to take it to the world of cloud databases and data warehouses. And with me is Sagar Kadakia, who's the head of Enterprise IT (indistinct) 7Park Data. Sagar, welcome back to the Cube. Good to see you. >> Thank you so much, David. I appreciate you having me back on. >> Hey, so new gig for you, how's it going? Tell us about 7Park Data. >> Yeah. Look, things are going well. I started about two months ago, so it's just been, you know, busy. I've had a chance over the last, you know, few months to kind of really dig into the dataset. We have a tremendous amount of research coming out in Q4 and Q1 around kind of the public cloud database market and public cloud analytics market. So, you know, really looking forward to that. >> Okay, good. Well, let's bring up the first slide. Let's talk about where this data comes from. Tell us a little bit more about the platform. Where's the insight? >> Yeah, absolutely. So I'll talk a little about 7Park and then we'll kind of jump into the data a little bit. So 7Park was founded in 2012. In terms of differentiators, you know, versus other alternative data firms, you know, we use NLP, machine learning, you know, AI, to really kind of, you know, structure noisy and unstructured data sets and really kind of generate insight from that. And so, because of a lot of that know-how, we ended up being acquired by Vista back in 2018. And really, like, for us, you know, the mandate there is to really, you know, look across all their different portfolio companies and try to generate insight from all the data assets, you know, that these portfolio companies have. 
So, you know, today we're going to be talking about, you know, one of the data sets from those companies, it's the cloud infrastructure data set. We get it from one of the portfolio companies that, you know, helps organizations kind of manage and optimize their cloud spend. It's real time data. We essentially get this aggregated daily. So this is certainly different than, you know, your traditional providers maybe giving you quarterly or kind of biannual data. This is incredibly granular, real time, all the way down to the invoice level. So within this cloud infrastructure dataset we're tracking several billion dollars worth of spend across AWS, Azure and GCP. Something like 350 services across like 20-plus markets. So, you know, security, machine learning, analytics, database, which we're going to talk about today. And again, like, the granularity of the KPIs I think is really what kind of, you know, differentiates this dataset. You know, just within database itself, you know, we're tracking over 20 services. So, you know, lots to look forward to into Q4 and Q1. >> So, okay. So the mainspring of your data is, if I'm a customer, there's a service out there, there are many services like this, that can help me optimize my spend, and the way they do that is I basically connect their APIs. So they have visibility on the transactions that I'm making, my usage statistics, et cetera. And then you take that and then extrapolate that and report on that. Is that right? >> Exactly. Yeah. We're seeing just on this one data set that we're going to talk about today, it's something like 600 to 700 million rows worth of data. 
And so kind of what we do is, you know, we kind of have the insight layer on top of that, or the analytics layer on top of all that unstructured data, so that we can get a feel for, you know, a whole host of different kind of KPIs: spend, adoption rates, market share, you know, product size, retention rates, you know, net price, all that type of stuff. So, yeah, that's exactly what we're doing. >> Love it, the more transparency the better. Okay. So, so right, because this whole world of market sizing has been very opaque, you know, over the years, and it's like, you know, backroom conversations, whether it's IDC, Gartner, who's got what, you know, and the estimations, and it's very, you know, it's not very transparent, so I'm excited to see what you guys have. Okay. So, so you have some data on the public cloud and specifically the database market that you want to share with our audience. Let's bring up the next graphic here. What are we looking at here, Sagar? What are these blue lines and red lines, what's this all about? >> Yeah. So look, we can kind of start at the 10,000-foot-view kind of level here. And so what we're looking at here is our estimates for the entire kind of cloud database market, including data warehousing. If you look all the way over to the right, I'll kind of explain some of these bars in a minute, but just high level, you know, we're forecasting for this year $11.8 billion. Now something to kind of remember about that is that's just AWS, Azure and GCP, right? So that's not the entire cloud database market. It's just specific to those three providers. What you're looking at here, the breakout in blue and purple, is SQL databases and then NoSQL databases. And so, you know, to no one's surprise here, and you can see, you know, SQL database is obviously much larger from a revenue standpoint. And so you can see, just from this time last year, you know, the database market has grown 40% among these three cloud providers. 
And, you know, though we're not showing it here, you know, from like a KPI perspective, you know, database is playing a larger and larger role for all three of these providers. And so obviously this is a really hot market, which is why, you know, we're kind of discussing a lot of the dynamics going into Q4 and Q1. >> So, okay. Let's get into some of the specific firm-level data. You have numbers that you want to share on Amazon Redshift and Google BigQuery, and some comments on Snowflake, let's bring up the next graphic. So tell us, it says public cloud data warehousing growth tempered by Snowflake, what's the data showing? And let's talk about some of the implications there. >> Yeah, no problem. So yeah, this is kind of one of the markets, you know, that we kind of did a deep dive on, and we'll kind of get to this in a few minutes, tomorrow we're doing a big CIO panel kind of covering data warehousing, RDBMS, document store, key-value, graph, all these different database markets, but I thought it'd be great, you know, just because of, obviously, what's occurring here with Snowflake, to kind of talk about, you know, the data warehousing market. You know, look, if you look here, these are some of the KPIs that we have, you know, and I'll kind of start from the left. Here are some of the orange bars, the darker orange bars. Those are our estimates for AWS Redshift. And so you can see here, you know, we're projecting about 667 million in revenue for Redshift. But if you look at the lighter orange bars, you can see that the service went from representing about 2% of, you know, AWS revenue to about 1.5%. And we think some of that is because of Snowflake. And if we kind of take a look at some of these KPIs, you know, below those bar charts here, you know, one of the things that we've been looking at is, you know, how longer-term customers are spending and how, let's just say, newer customers are spending, so to speak. 
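That longer-term-versus-new-customer comparison can be sketched as a toy cohort calculation: year-over-year growth in average spend per customer, for customers present in both periods versus all current customers. The spend figures below are invented for illustration; the underlying 7Park data is invoice-level.

```python
# Toy sketch of a cohort ("organic" vs. total) spend-growth comparison.
# Customer IDs and dollar amounts are made up.

def avg_spend_growth(prior, current, retained_only=False):
    """YoY growth in average spend per customer; optionally only customers
    present in both periods."""
    if retained_only:
        current = {c: v for c, v in current.items() if c in prior}
    prior_avg = sum(prior.values()) / len(prior)
    current_avg = sum(current.values()) / len(current)
    return (current_avg - prior_avg) / prior_avg


prior   = {"a": 100, "b": 80}            # spend in the year-ago quarter
current = {"a": 130, "b": 92, "c": 20}   # current quarter; "c" onboarded later

print(round(avg_spend_growth(prior, current, retained_only=True), 2))  # 0.23
print(round(avg_spend_growth(prior, current), 2))                      # -0.1
```

The invented numbers mirror the shape of the finding being described: the retained cohort grows spend healthily, while folding in lighter-spending new customers pulls the blended figure down.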
So kind of just like organic growth or kind of net expansion analysis. And if you look at the bottom there, you'll see, you know, customers in our dataset that we looked at, you know, that were there in 3Q20 as well as 3Q19, their spend on AWS Redshift is up 23%, right? And then look at the bifurcation, right? When we include essentially all the new customers that onboarded right after 3Q19, look at how much they're bringing down the spend increase. And it's because, you know, a lot of spend that was perhaps meant for Redshift is now going to Snowflake. And look, you would expect longer-term customers to spend more than newer customers. But really what we're doing here is really highlighting the stark contrast, because you have kind of back-to-back KPIs here, you know, between organic spend versus total spend, and obviously the deceleration in market share kind of coming down. So, you know, something that's interesting here, and we'll kind of continue tracking that. >> Okay. So let me maybe come back with my Colombo questions here. So let's start with the orange side. So we're talking about Redshift being 667 million. These are your estimates, extrapolated based on what we talked about earlier, 1.5% of the AWS portfolio. Of course you see things like, they continue to grow. Amazon made a bunch of storage announcements last week, at the first week of re:Invent, (indistinct) I mean, just name all kinds of databases. And so it's competing with a lot of other services in the portfolio, and then, but it's interesting to see Google BigQuery a much larger percentage of the portfolio, which again, to me, makes sense, people like BigQuery. They like the data science components that are built in, the machine learning components that are built in. But then if you look at Snowflake's last quarter, just on a run rate basis, it's over $600 million now, if you just multiply their last quarter by four from a revenue standpoint. 
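The run-rate arithmetic just mentioned is simply the latest quarter annualized. A quick sanity check, using an illustrative round number rather than a reported figure:

```python
# Back-of-the-envelope annualized run rate from one quarter of revenue.
# The quarterly figure is an illustrative round number, not a reported result.

def annual_run_rate(quarterly_revenue_musd):
    """Annualized run rate, in $M, from a single quarter's revenue."""
    return quarterly_revenue_musd * 4


print(annual_run_rate(160))  # 640 -> "over $600 million" on a run-rate basis
```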
So they've got Redshift in their sights, you know, to the extent this is the correct number, and I know it's an estimate, but I haven't seen any better numbers out there. Interesting, Sagar, I mean, Snowflake surpassed the value of ServiceNow last Friday, and it's probably, just in trading today, you know, on Monday, maybe Snowflake is about a billion dollars less in value than IBM. So you're saying Snowflake gained a lot of attention, and post-IPO the thing has even exploded more. I mean, it's crazy. And I presume that's rippled into the customer interest areas. Now the ironic thing here, of course, is that Snowflake, most of its revenue comes from AWS, running on AWS, and at the same time, AWS, with Redshift, and Snowflake compete. So you have this interesting dynamic going on. >> Yeah. You know, we've spoken to so many CIOs about kind of the dynamics here with Redshift and BigQuery and Snowflake. You know, as it kind of pertains to, you know, Redshift and Snowflake, I think, you know, what I've heard the most is, look, if you're using Redshift, you're going to keep using it. But if you're new to data warehousing, kind of, so to speak, you're going to move to Snowflake, or you're going to start with Snowflake. You know, I think, you know, when it comes to data warehousing, you're seeing a lot of decisions kind of coming from, you know, bottom up now. So a lot of developers, and so obviously their preference is going to be Snowflake. And then when you kind of look at BigQuery here over to the right, again, like, look, you're seeing revenue growth, but again, as a percentage of total, you know, GCP revenue, you're seeing it come down. And look, we don't show it here, but another dynamic that we're seeing amongst BigQuery is that we are seeing adoption rates fall versus this time last year. So we think, again, that could be because of Snowflake. 
Now, one thing to kind of highlight here with BigQuery, look, it's kind of the low-cost alternative, you know, so to speak. You know, once Redshift gets too expensive, so to speak, you know, you kind of move over to BigQuery, and we kind of put some price KPIs down here, all the way at the bottom of the chart, you know, for both of them. You know, when you kind of think about the net price per TB scanned, you know, with Redshift it's five bucks for, you know, whatever you scan in, whereas, you know, with GCP you get the first terabyte for free, and then everything is prorated after that. And so you can see the net price, right? So that's the price that people actually pay. You can see it's significantly lower than Redshift. And again, you know, it's a lower-cost alternative. And so when you think about, you know, organizations or CIOs that want to save some money, certainly BigQuery, you know, is an option. But certainly, I think, just overall, you know, Snowflake is certainly having, you know, an impact here, and you can see it from, you know, the percentage of total revenue for both of these coming down. You know, if we look at other AWS database services, or you mentioned a few other services, you know, we're not seeing that trend, we're seeing, you know, percentage of total revenue hang in or accelerate. And so that's kind of why we wanted to point this out, as this is something unique, you know, for AWS and GCP, where even though you're seeing growth, it's decelerating. And then of course you can kind of see the percentage of revenue it represents coming down. >> I think it's interesting to look at these two companies and then of course Snowflake. So if you think about Snowflake and BigQuery, both of those started in the cloud, they were true born-in-the-cloud databases. Whereas Redshift was a deal that Amazon did, you know, with ParAccel back in the day, a one-time license fee, and then they re-engineered it to be kind of cloud based. 
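The net-price contrast described a moment ago, a flat per-terabyte scan charge versus the same rate with the first terabyte free, reduces to simple arithmetic. The $5/TB rate comes from the discussion; the monthly scan volume below is made up.

```python
# Back-of-the-envelope scan-cost comparison. The $5/TB rate is from the
# discussion above; the monthly volume is an invented example.

def scan_cost(tb_scanned, rate_per_tb=5.0, free_tb=0.0):
    """Monthly cost for scanned data, with an optional free allowance."""
    billable = max(tb_scanned - free_tb, 0.0)
    return billable * rate_per_tb


monthly_tb = 10
print(scan_cost(monthly_tb))             # 50.0 (flat $5/TB, Redshift-style)
print(scan_cost(monthly_tb, free_tb=1))  # 45.0 (first TB free, BigQuery-style)
```

The gap is modest at high volumes but, as the free allowance implies, the effective net price falls fastest for small scanners, which fits the "low-cost alternative" framing.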
And so there is some of that historical on-prem baggage in there. I know that AWS did a tremendous job in rearchitecting that, but nonetheless. So I'll give you a couple of examples. If you go back to last year's re:Invent 2019, of course Snowflake was really the first to popularize this idea of separating compute from storage, and even compute from compute, which is kind of a nuance, so I won't go into that. But the idea being you can dial up or dial down compute as you need it; you can even turn off compute in the world of Snowflake, and then, you know, you're just paying S3 storage charges. What Amazon did last re:Invent was they announced the separation of compute and storage, but the way they did it was with a tiering architecture. So you can't ever actually fully turn off the compute, but it's great. I mean, customers I've talked to say, yes, I'm saving a lot of money, you know, with this approach. But again, there's these little nuances. So what Snowflake announced this year was their data cloud, and what the data cloud is is a whole new architecture. It's based on this global mesh. It lives across AWS and Azure and GCP. And what Snowflake has done is they've abstracted the complexity of the clouds, so you don't even necessarily have to know what you're running on; you don't have to worry about it. Any Snowflake user inside of that data cloud, if given access, can share data with any other user. So it's a very powerful concept that they're doing. AWS at re:Invent this year announced something called AWS Glue Elastic Views, which basically allows you to take data across their entire database portfolio, and I'm going to put 'share' in quotes. And I put it in quotes because it's essentially copying from a source, pushing to a target AWS database, and then doing change data capture and pushing that over time. So it feels like kind of an attempt to do their own data cloud.
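The cost implication of the two architectures described above, compute you can fully suspend versus compute that is always at least partially on, can be sketched in a few lines. All the rates and hours here are made-up illustrative numbers, not actual AWS or Snowflake prices.

```python
# Hedged sketch: monthly cost under two architectures for an analytics
# workload that only runs queries part of the time.

HOURS_IN_MONTH = 730  # approximate

def always_on_monthly_cost(compute_per_hour: float, storage_tb: float,
                           storage_per_tb_month: float) -> float:
    """Tiered architecture: compute can never be fully turned off,
    so compute is billed around the clock."""
    return compute_per_hour * HOURS_IN_MONTH + storage_tb * storage_per_tb_month

def suspendable_monthly_cost(compute_per_hour: float, active_hours: float,
                             storage_tb: float,
                             storage_per_tb_month: float) -> float:
    """Separated architecture: compute is billed only while running;
    the rest of the time you pay object-storage rates alone."""
    return compute_per_hour * active_hours + storage_tb * storage_per_tb_month

# With assumed rates of $2/hour compute, $23/TB-month storage, 10 TB,
# and only 40 hours of actual query activity a month, the always-on
# model costs $1690 while the suspendable model costs $310.
```

The storage term is identical in both; the entire difference comes from idle compute hours, which is why a workload that is bursty rather than constant benefits most from the ability to turn compute all the way off.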
The advantage AWS has is that they've got way more data stores than just Snowflake, because Snowflake is one data store. So with AWS it's Aurora, DynamoDB, Redshift, on and on and on, streaming databases, et cetera, where Snowflake is just Snowflake. And so it's going to be interesting to see, you know, these two juxtaposing philosophies, but I wanted to sort of lay that out because it's setting up as a really interesting dynamic. Then you can bring in Azure as well, with Microsoft and what they're doing. And I think this is going to be really fascinating to see how this plays out over the next decade. >> Yeah. I think some of the points you brought up maybe a little bit earlier were just around, like, the functional limits of a Redshift, right. And I think that's where, you know, Snowflake obviously does very, very well. You know, if you kind of think about, like, the market drivers, right, like, let's think about even the prior slide that we showed, where we saw overall, you know, database growth. Like, what's driving all of that, what's driving Redshift, right? Obviously proximity, application interdependencies, right, costs. You get all the credits, or people are already working with the big three providers. And so there are so many reasons to continue spending with them; obviously, you know, COVID-19, right, obviously all these apps being developed right in the cloud versus data centers and things of that nature. So you have all of these market drivers, you know, for the cloud database services, for Redshift. And so from that perspective, you know, you kind of think, well, why are people even going to go to a third-party vendor? And I think, you know, at that point it has to be the functional superiority.
And so again, like, a lot of times it depends on, you know, where decisions are coming from, you know, top down or bottom up. Obviously at the engineering, at the developer level, they're going to want better functionality. Maybe, you know, top-down sometimes, you know, it's like, look, we have a lot of credits, you know, we're trying to save money; you know, from a security perspective it could just be easier to spin something up, you know, in AWS, so to speak. So yeah, I think these are all the dynamics that, you know, organizations have to figure out every day. But at least within the data warehousing space, you are seeing spend go towards Snowflake, and it's going away from these vendors to an extent, as we kind of see, you know, growth decelerate for both of them, right. It's not that revenue's not growing; there is growth; it's just not the same as it used to be, you know, so to speak. So yeah, this is an interesting area to kind of watch, and I think across all the other markets as well. You know, when you think about document stores, right, you have AWS DocumentDB, right. What are the impacts there with Mongo and some of these other kind of third-party database vendors, right, having to compete with all the, you know, all the different services offered by AWS and Azure, like Cosmos DB and all that stuff. So yeah, it's definitely kind of turning into a battle royale, you know, as we kind of head into 2021. And so I think having all these KPIs is really helping us kind of break down and figure out, you know, which areas, like data warehousing, are slowing down, but then what other areas in database are seeing a tremendous amount of acceleration. Like, as we said, database revenue is growing; it's becoming a bigger part of their overall revenue. And so they are doing well. It's just, you know, there's obviously Snowflake they have to compete with here.
>> Well, and I think, maybe to your point, I infer from your point it's not necessarily a zero-sum game. And as I was discussing before, I think Snowflake's really trying to create a new market. It's not just trying to steal share from the Teradatas and the Redshifts and the GCPs of the world, the BigQuerys and Azure SQL Server and Oracle and so forth. They're trying to create a whole new concept called the data cloud, which to me is really important, because my prediction is about what Snowflake is doing. And they don't even really talk a ton about this, but they sort of do, if you squint through the lines. I think what they're doing is, first of all, simplicity is what they're doing. And then they're putting data in the hands of business people, business-line people who have domain context. That's a whole new way of thinking about a data architecture, versus the prevalent way to do a data pipeline, where you've got data engineers and data scientists, and you ingest data; it goes to the beginning of the pipeline, and that's kind of a traditional way to do it, and kind of how I think most of the AWS customers do it. I think over time, because of the simplicity of Snowflake, you're going to see people begin to look at new ways to architect data. Anyway, we're almost out of time here, but I want to bring up the next slide, which is a graphic, which talks about a database discussion that you guys are having on 12/8 at 2:00 PM Eastern time with Bain and Verizon. What's this all about? >> Yeah. So, you know, one of the things we wanted to do is kind of kick off a lot of the, you know, Q4, Q1 research we're putting out on the database space. It's just like kind of what we did, you know, today, which obviously, you know, we're really going to expand on tomorrow at 2:00 PM: discuss all the different KPIs. You know, we track something like 20-plus database services. So we're going to be going through a lot more than just kind of Redshift and BigQuery.
Look at all the dynamics there; look at, you know, how they fare against some of the third-party vendors, like a Snowflake, like a MongoDB, as an example. We've got some really great, you know, thought leaders, you know, Michael Delzer and Praveen from Verizon; they're going to opine on all the dynamics that we're seeing. And so it's going to be, kind of, you know, structure-wise, very quantitative, but then you're going to have this beautiful qualitative discussion to kind of help support a lot of the data points that we're capturing. And so, yeah, we're really excited about the panel. You know, from a why-you-should-join standpoint, look, it's just great competitive intel. If you're a third-party, you know, database or data warehousing vendor, this is the type of information that you're going to want to know: you know, adoption rates, market sizing, retention rates, you know, net prices, reserved and on-demand dynamics. You know, we're going through a lot tomorrow. So I'm really excited about that, and just in general really excited about a lot of the research that we're putting out. >> That's interesting. I mean, we were talking earlier about AWS Glue Elastic Views. I'd love to see your view of all the database services from Amazon, because that's what it's really designed to do: leverage data across those services. And you know, you listen to Andy Jassy talk; they've got a completely different philosophy than, say, Oracle, which says, hey, we've got one database to do all things, whereas Amazon's saying we need that fine granularity. So it's going to be interesting again. And to the extent that you're providing market context, we're very excited to see that data, Sagar, and see how that evolves over time. Really appreciate you coming back on theCUBE, and look forward to working with you. >> Appreciate it, Dave. Thank you so much. >> All right. Thank you, everybody, for watching.
This is Dave Vellante for the cube. We'll see you next time. (upbeat music)

Published Date : Dec 21 2020



Sandra Hamilton, Commvault | Commvault GO 2019


 

>> Live from Denver, Colorado, it's theCUBE, covering Commvault GO 2019. Brought to you by Commvault. >> Welcome back to theCUBE's day-two coverage of Commvault GO '19. Lisa Martin with Stu Miniman. We are in Colorado. Please welcome to theCUBE Sandy Hamilton, the VP of Customer Success, who has been at Commvault four and a half months. So welcome, Sandy. >> Thank you very much for having me. I really appreciate the opportunity to sit here with you this morning and share a little bit about what's going on at Commvault, and it's been great that you guys are here. >> It's been fantastic. We had a great day yesterday. We got to speak with Sanjay, with Rob, Don Foster, Mercer, a whole bunch of your customers. >> Well, exactly: the vibe, the positivity, from the channel to the customers to the core. Even the OG Commvault guys that I worked with a couple of, 10 years ago, that are still here, it does really feel like a new Commvault, and you're part of that. >> Sanjay probably brought you in in the spring of 2019, and we've seen a lot of progress and a lot of momentum from Commvault in terms of leadership changes, sales structure, new programs for the channel. Exciting stuff. You kicked off this morning's keynote, and you had the opportunity to introduce Jimmy Chin, who, if you haven't seen Free Solo, I haven't seen it, I'm watching it as soon as I get home. Amazing. But what a great way to introduce failure and why it's important to be prepared, because it is going to happen. I just thought that was a great tone, especially talking with you, who leads customer success. >> Absolutely. Thank you, Lisa, very much, and good morning, Stu. Appreciate it. You know, it's interesting, because when I think about customer success here at Commvault, there are so many different facets to it. It really is all about engaging with our customers across everything that they do, and we want to make sure our customers are prepared for something that will likely happen to them someday.
>> Right. We have one of our customers talking about a cyber attack on their environment and how we were actually able to help them recover. So it's also that preparedness that Jimmy talked about, right? And making sure that you are training as much as you can, being prepared for what may come, and knowing how to recover from that, as he talked about. I also think one of the things that we do really well is we listen to our customers when they give us feedback. So it's about how did those customers use what we did differently, or how did they try it and it wasn't exactly what they thought? And so how do we continue to innovate with the feedback from our customers? >> Sandy, one of the things we're hearing loud and clear from your customers is they're not alone. They're ready. I love it; we have Matthew coming on a little bit later talking about, he's like, I'm here, and my other person that does disaster recovery, he's here too. So, you know, I'm doing my own free solo. We've been talking about how, in tech, it's the technology and the people working together. You talked a little bit in your keynote about automated workflows, machine learning; talk about some of those pieces, as to how the innovation that Commvault's bringing out is going to enable and simplify the lives of customers. >> Yeah, I mean, I think it does come down to how are we really taking care of the back end, if you will, from a technology perspective, and what can we make more automated, you know, more secure. You know, you think about things like, I was even talking about new automated workflows around scheduling, even your backup windows, right? And if you think about, you know, the complexity that goes into scheduling all of that across all of your environments, we have the ability to let you just set what your windows should be, and we'll manage all the complexities in the background, which allows customers to go do things like this.
>> So, Sandy, I tell you, for some of us there's that little bit of nervousness around automation, and even customers talking about, oh, well, I can just do it over text. And I'm just thinking back to how many times have I responded to the wrong text thread, and oh my gosh, what if that was my, you know, data that I did the wrong thing with? >> Yeah. I mean, you know, one of the things that I love about this company, and again, I've been here for a short period of time, but our worldwide customer support organization is just, you know, one of the hallmarks, I think, of this company, right? And how we're actually there for those customers at any point in time, whenever they need any type of, you know, help and support. And it isn't just, you know, when something goes wrong; it's also proactive. We have professional services people, you know, we have all kinds of folks in between. Our partners play a huge role in making sure that our customers are successful with what they have going on. >> Let's dig into and dissect the customer life cycle. Help us understand what that's like for an existing Commvault customer, because we talked to a couple yesterday who've been Commvault customers for, you know, a decade. So walk us through a customer life cycle for an incumbent customer, as well as a new customer, who, like Sanjay said yesterday, one of the things that surprised him is that a lot of customers don't know Commvault. So what's the life cycle like for the existing customers and those new ones? >> Yeah. So, you know, with our fantastic install base of customers that we have today, one of the things that we are striving to continue to do is to make sure we're engaged with them from the beginning to the end. And the end isn't when they end; it's when, you know, we're then fully deployed, helping them do what they need to in their environment.
I think one of the great things about where we are with Commvault right now is we actually have new products, new technologies, right, that you guys have been exposed to. How are we making sure that the customers that we've had for a while are truly understanding what those new capabilities are? >> So if you think about it, for us it's how are we helping them to actually do more with their existing Commvault investment and potentially leverage us in other ways across their environment. So we have, you know, our team of great, you know, sales reps, as well as our fantastic, you know, sales engineers, all the way through, again, you know, PS and support. Those people are always in contact with our customers, helping them to understand what we can really do across that life cycle, and if they need to make changes along the way, we're here to help them, you know, do that as well. For a newer customer, one of the things that we're really focused on right now is that initial sort of onboarding for them, and what's that experience like for those customers. So having more of a programmatic touch with those customers to make sure that we're more consistent in what we're doing, so they are actually receiving a lot of the same information at the same time, and we're able to actually help them, frankly, in a more accelerated fashion, which is, I think, really important for them to get up and running as well. >> And when we talked about Metallic yesterday with Rob and some other folks, and I think a gentleman from Sirius, one of your launch partners. >> Yes, Michael Gump. >> And, you know, the fact that that technology has the ability for partners to evaluate exactly what is going on with their customers, so that they can potentially even be predictive with customers in terms of whether they're backing up endpoints or O365, I thought that was a really interesting capability that Commvault now has.
It's giving those insights and that intelligence even to the partners, to be able to help those customers make better decisions before they even know they need to make them. >> Exactly. Our partners are such a key part here of everything that we're really trying to do, and especially with Metallic; it's all through partners, right? And so we're really trying to drive that behavior, and that means we really have to ensure that we are bringing all of those partners into the same fold. They should have the same, you know, capabilities that we do. One of the things that I'm trying to work on right now is how are we making sure our partners are better enabled around the capabilities that we have. So we're working on, as part of those partner programs that you mentioned, do they have the right tools, if you will, and knowledge to go do what they need to go do to help our customers as well, because it really is a partnership. >> Yeah. So, Sandy, we've been looking at various different aspects of the change required to deliver Metallic, which is now a SaaS offering, from a services and from a support standpoint. I think of a different experience for SaaS as opposed to enterprise software. So bring us your perspective. >> Yeah. This
But when you also then think about that type of a model, you start to think about consumption matters, right? And how much they're using and are they using everything that they purchased. >>And so we actually have a small team of customer success managers right now in the organization that are working with all of the new customers that we have in the SAS world to say, how are you doing? How's that going? You know, how's your touch? Is there anything that's presenting a challenge for you? Making sure they really do fully understand the capabilities end to end of that technology so that we can really get them onboarded super quick. As you probably know from talking to those guys, we're not having any services really around metallic cause it's not designed to need those services, which is huge. You know, I think in not only the SAS space but for Convolt as well. I think it's a new era and it also provides, frankly an opportunity for our partners to continue to engage with those customers going forward as well. >>One of the first things that I reacted to when I saw metallic, a Combalt venture was venture. I wanted to understand that. And so as we were talking yesterday with some of the gentlemen I mentioned, it's a startup within Combalt. Yeah. So coming from puppet but shoot dead in which Sonjay Mirchandani ran very successfully. Got puppet global. Your take on going from a startup like puppet to an incumbent like convo and now having this venture within it. Yeah. You know, I think it's one of the brilliant things that Sanjay and the team did very early on to recognize what Rob Calu, Ian and the rest of the folks were doing around this idea of what is now metallic. And they had been noodling it and Sanjay's like, that's got a really good opportunity. However we got to go capitalize on that now and bring that to market for our customers now. 
>>And if we had continued on in the way that we were, which is where it was night jobs and we didn't necessarily have all the dedicated people to go do it, you know, we may not have metallic right now. And so it was, it was really a great thing within the company to really go pull those resources out of what they were doing and say, you guys are a little startup, you know, here you go do it. And we actually had a little celebratory toast the other night with that team because of what, just a fantastic job that they've done. And one of the common threads in something everybody said was the collaboration that it really brought, not only within that team but across Combalt because there's a singular goal in bringing this to market for our customers. So it's been a great experience. I think we're going to leverage it and do more. So Sandy, >>before we let you go, need to talk a little bit about the. >>Fabulous. If I had one here I would, but I don't. So, um, a couple of months ago at VMworld, I don't know if you guys were there, you guys were probably there. Um, we actually started this thing called the D data therapy dog park. And there we had a number of puppies and they were outside. Folks came by, you know, visited. They stopped, they distressed, they got to pet a puppy. I mean, the social media was just out of this world, right? And we had San Francisco policemen there. It was, it was, it was great. Even competitors, I will say even competitors were there. It was, it was pretty funny. But, um, by the end of it, over 50% of the dogs that were there actually got adopted out, um, you know, into homes where they otherwise wouldn't have. Um, since then there've been a couple of people that have actually copied this little idea and you know, P places are springing up. >>So we have a, what we call it, data therapy dog park here where you can go in and get your puppy fix, you know, sit with the dogs and relax for a bit. 
But you know, we're super excited about it as well because, you know, it's sort of a fun play on what we do, but, but it's also, I think, you know, a great thing for the community and something that is near and dear to my heart. I have four dogs. Um, and so I'm not planning on taking another one home, but I'm doing my best to get some of these adopted. So if anybody out there is interested, just let me know. >>Oh, that was adoptable. All of them cheese. I'm picking up a new puppy and about eight days. So other ones of friends. I've got to have dogs enough for you. Do you need a third? We'll have a friend that has two puppies at the same time and said it's not that much more. I have had one before. You're good to go. We can, we can hook you up. Oh no. But one of the great things is it also, first of all, imitation is the highest form of flattery or for other competitors that are doing something similar, but you also just speak to the fact that we're all people, right? We are. We're traveling, especially for people that go to a lot of conferences and it's just one of those nice human elements that similar with the stories that customers share about, Hey, this is a failure that we had and this is how it helped us to recover from that. It's the same thing with, you can't be in a bad mood with, I think puppies, cupcakes and balloons. So if there were, I know that I could finish a show today >>that's like I took one of the little puppies when I was rehearsing yesterday on main stage. I took one of them with me out there and I was just holding it the whole time, you know? It was really, >>this was great. I'm afraid to venture back into the data therapy document. You're proud taking another one home OU was. Andy. It's been a pleasure to have very much. I appreciate it. Appreciate the time. Thank you and hope you have a great rest of the event. If you need anything, let us know. 
>> I'm sure we will, and I can't wait to talk to you next year, when you've been at Commvault for a whole, like, 16 months, and hear some great stories. >> We do as well. All right. Take care. >> For Stu Miniman, Sandy Hamilton, and the puppies, I'm Lisa Martin. You're watching theCUBE from Commvault GO '19. Thanks for watching.

Published Date : Oct 16 2019



Mark Penny, University of Leicester | Commvault GO 2019


 

>> Live from Denver, Colorado, it's theCUBE, covering Commvault GO 2019. Brought to you by Commvault. >> Hey, welcome to theCUBE. Lisa Martin in Colorado for Commvault GO '19. Stu Miniman is with me this week, and we are pleased to welcome one of Commvault's longtime customers, from the University of Leicester. We have Mark Penny, the systems specialist in infrastructure. Mark, welcome to theCUBE. >> Hi. It's good to be here. >> So you have been a Commvault customer at the uni for nearly 10 years now. Just giving folks an idea: the uni has got 51 different academic departments and about five research institutes, with cool research going on, by the way, and, between staff and students, about 20,000 folks, I'm sure all bringing multiple devices onto the campus. So talk to us about, you came on board in 2010, it's hard to believe that was almost 10 years ago, and said, all right, guys, we really have got to get a strategy around backup. Talk to us about way back then: what were you guys doing, what did you see as an opportunity, and what are you doing with Commvault today?
Coupled with this, there had been a lack of investment in the data centers themselves, so the network didn't really have a lot of throughput. This meant we were using dedicated private backup networks to keep backup data off the production networks, because there were real challenges over bandwidth contention, backups overrunning into the working day and affecting students, and so on. So we started with a blank sheet of paper in many respects and went out to see what was available. There were the usual ones: NetBackup, obviously Commvault again, ARCserve and so on. But what was really interesting was that deduplication was starting to come in, and at the time Commvault 9 had just been released, and it had an absolutely killer feature for us, which was client-side deduplication. This meant that we could now get rid of most of this private backup network that was creating a lot of complexity. It also did backup to disk and backup to tape. So at that point we went in with six media agents. We had a few hundred terabytes of disk storage. The strategy was to keep 28 days on disk, with long-term retention on tape in a tape library. We kept that up until about 2013, then took the decision: disk was working, so let's just go disk-only and save a whole load of effort, because even with a tape library you've got to refresh the tapes and so on. So we went all-disk with deduplication, and we're basically getting a 1-to-1 ratio. If I take my current figures, we have about 1.5 petabytes of front-side protected data and about 1.5 petabytes in the backup system, which, because of all the synthetic fulls and everything, gives us both 12 months retention and 28 days retention. It works really, really well, and that almost 1-to-1 relationship between what's in the backup, with all the retention, and the front-side protected data has been fairly consistent since we went all-disk.
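The client-side deduplication Mark credits with retiring the private backup network works by hashing data blocks on the client and sending only blocks the backup target has not already seen, so repeat backups move very little data over the wire. A toy sketch of the idea in Python (not Commvault's actual implementation; fixed-size chunking and the in-memory store are illustrative simplifications):

```python
import hashlib

def chunk(data: bytes, size: int = 4):
    """Split data into fixed-size chunks (real products use variable-size chunking)."""
    return [data[i:i + size] for i in range(0, len(data), size)]

class DedupStore:
    """Stands in for the media agent's deduplication database."""
    def __init__(self):
        self.blocks = {}  # digest -> chunk bytes

    def have(self, digest: str) -> bool:
        return digest in self.blocks

    def put(self, digest: str, block: bytes):
        self.blocks[digest] = block

def client_side_backup(data: bytes, store: DedupStore) -> int:
    """Hash each chunk on the client; transmit only unseen chunks.
    Returns the number of bytes that had to cross the network."""
    sent = 0
    for block in chunk(data):
        digest = hashlib.sha256(block).hexdigest()
        if not store.have(digest):  # only new data is sent
            store.put(digest, block)
            sent += len(block)
    return sent

store = DedupStore()
first = client_side_backup(b"AAAABBBBCCCC", store)   # first backup: all chunks are new
second = client_side_backup(b"AAAABBBBDDDD", store)  # next backup: only one chunk is new
```

Because the hashing happens before transmission, a mostly-unchanged data set produces a small transfer, which is why the dedicated backup network became unnecessary.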
>> Mark, I wonder if you'd actually step back a second and talk about the role and importance of data in your organization, because we went through a lot of the bits and bytes there, but as a research organization, I expect that data is quite a strategic asset. >> Data forms your intellectual property. It's the core of your research, the output of your investigations. We do Earth observation science, so we get data from satellites, and that comes down in raw form as lots of little files. They then get assembled into a data set, which will consist of multiple packages of these files, maybe even measurements from different satellites, which are then combined and can be used to model scenarios: climate change, temperature, pollution, all these types of things. It's how you then take that raw data and work with it. In our case, we use a lot of HPC, high-performance computing, to manipulate that data, and a lot of it comes down to how smart researchers are with their code in getting the maximum out of that data. The output of that becomes a paper or a project and a finalized set of data, the results, which all go with the paper. We've also done a lot of genetics work, because DNA fingerprinting was developed at Leicester by Alec Jeffreys, and what was very interesting is that it was those techniques which then identified the bones that were dug up under the car park in Leicester as Richard III. >> Right, the documentary. >> Yeah, that really was quite exciting. It's quite fitting, really, that techniques the university discovered were then instrumental in identifying him. >> One of the interesting things I've found in this part of the market is that we used to talk about just protecting my data; a lot of times now it's about how do I leverage my data even more, how do I share my data?
How do I extract more value out of the data? In the 10 years you've been working with Commvault, are you seeing organizations go down that journey? >> There are actually two conflicting things here, because researchers love to share their data, but some of the data sets are so big that that can be quite challenging. With some data sets we take other people's data and bring it in, combine it with our own to do our own modeling, and the output of that then goes on to provide source material for somebody else. There are also issues about where data can exist: there are very strict controls about NHS data, health data, so NHS England data can't then go out to Scotland, and vice versa. Sometimes the regulatory compliance almost gets sidelined by the excitement about the research, so we have quite a dichotomy: making sure that we know about the data, that the appropriate controls are there and we understand them, and that, hopefully, people don't just go and put it somewhere they shouldn't. Some of the data sets for medical research are given to us with personally identifiable information in them, which then has to be stripped out so that you've got an anonymized data set the researchers can work with. It's assuring that the right data is used and the right information is removed, so that you don't inadvertently expose anything. So it's not just pure research, with this data in this silo and that data in that silo; it's ensuring that you've got the right bits in the right place and that it's all being handled correctly. >> Talk to us about, as you pointed out, this massive growth in data volumes, from a university perspective, a health data perspective, a research perspective: the files are getting bigger and bigger. In the time since you started this foundation with Commvault nine or ten years ago, there have been tremendous changes, not just in the data but in compliance; you've now got GDPR to deal with.
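The anonymization step Mark describes, stripping personally identifiable information out of health data before researchers touch it, can be sketched as a simple allow-list filter. A real pipeline is far more involved (re-identification risk, governance, auditing), and the field names here are hypothetical, but the shape of the operation is this:

```python
# Fields a (hypothetical) research agreement permits; everything else is dropped.
ALLOWED_FIELDS = {"age_band", "diagnosis_code", "region"}

def anonymise(record: dict) -> dict:
    """Keep only approved fields, so identifiers (name, NHS number, ...)
    never reach the research data set."""
    return {key: value for key, value in record.items() if key in ALLOWED_FIELDS}

raw = {
    "name": "J. Smith",            # PII: must not leave the source system
    "nhs_number": "943 476 5919",  # PII
    "age_band": "40-49",
    "diagnosis_code": "E11",
    "region": "East Midlands",
}
clean = anonymise(raw)  # only the three approved fields survive
```

Allow-listing (keep only known-safe fields) fails safe compared with deny-listing: if a new identifying field appears upstream, it is dropped by default rather than leaked.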
Give us a perspective and snapshot of your Commvault implementation and how you've evolved it as the data changes, compliance changes, and Commvault's technology has evolved. >> If you take where we started off, we had a few hundred terabytes of disk. Just before we migrated to on-premise S3 cloud libraries, I think we'd got to 2.1 petabytes of backup storage, and the volume of data is growing exponentially as the resolution of the instruments increases, so you can easily see a fourfold growth in data. Some of those projects are quite interesting. When I first joined, there was great excitement over a project which had just been announced, BepiColombo, which is the European Space Agency mission to Mercury, and they wanted 50 terabytes. At the time, that was actually quite a big number, and we were thinking, well, we need to be careful: is that 50 terabytes over the life of the project, or just to get us going? Not much actually happened with it, the storage system changed, and they still had their 50 terabytes with almost nothing in it. We then understood that once the spacecraft had been launched, it was going to take a couple of years before the first data came back, because it has to go to Venus. It has to go around Venus in the wrong direction, against gravity, to slow it down; then it goes to Mercury, and the real bulk data starts coming back. You'd have thought going to Mercury was dead easy, you just go boom, straight in, but actually, if you did that, because of the gravity of the sun it would just keep going; you'd never stop, you'd go straight into the sun and lose your spacecraft. >> Nobody wants that. >> Another one that's really interesting: have you heard of the Gaia satellite? >> Yes. >> This is the one which is mapping a billion stars in the Milky Way. It's now gone past its primary mission and has got most of that data, huge data sets, and that data is already being worked on, but the university has the task of packaging and cleansing it. We're going to get a set of that data to host. We're currently hosting a national HPC facility for space research, and that's being replaced with an even bigger, more powerful one that will probably fill one of our data centers completely, about 40 racks' worth, and that's just to process that data, because there's so much information that's come from it. It's the resolution, it's the speed with which it can be computed, and it's holding so much in memory. Across our current HPC systems we've got 100 terabytes of memory over two systems, and those numbers were just unthinkable even 10 years ago: a terabyte of memory. >> So, Mark, Lisa and I would love to keep you here all day to talk about space, one of our favorite topics, but before we get towards the end: a lot of changes at Commvault. There's a whole new executive team, they bought Hedvig, they've launched Metallic.io, they've got new things. As a longtime customer, what's your viewpoint on Commvault today and what you've been seeing? >> It's been quite interesting to see how Commvault has evolved, and the change that happened between versions 10 and 11, when they took the decision on the next-generation platform. There's quite an aggressive pace of service packs, which come out on a schedule, and to be fair, that schedule is being stuck to, so we can plan ahead. We know what's happening, and it's interesting that there are both patches and new features in them; it's really great to have that to work with. The platform now natively supports so much stuff, and this was actually one of the decisions which took us toward using our own on-premise S3 as a cloud library. We were using Azure to put a tier of data off-site, and that was all working great, so: can we do S3 on-prem? It's supported by Commvault as just another cloud library. When we first started, that didn't exist; we took the decision, ran a proof of concept and so on, and it all worked, and we then got HyperScale as well. It's interesting to see how Commvault has gone down the appliance route with 11, too, because people want to just have a box, unpack it, and implement it. If you haven't got a technical team or strong skills in those areas, why worry about putting your own system together? HyperScale gives you backup in a box, and then there are the partnerships. We're an HPE customer, so we're using Apollo servers for storage, and the Apollo is actually the platform: if we had bought HyperScale, it would have gone on an HPE Apollo as well, because of the agreements we've got. Actually, it's quite interesting how they've gone from software to hardware coming in, and it's evolving into this platform with Hedvig. I mean, there was a Commvault object store buried in the product, but it was very discreet; no one really knew about it. You'd occasionally see a term for it appear, but it wasn't something they publicized. But with the increasing data volumes, object store is the only way to store these volumes of data in a resilient and durable way, so Hedvig, buying that and integrating it, provides a really interesting way forward. From my perspective, I'm using S3, so if we had gone down the Hedvig route, what I would like to see is: I have a storage policy, I click to point it at S3, and it goes out, provisions the bucket, and does the whole lot in a couple of clicks, and that's it, job done. I don't need to go out, create the user, create the bucket, and then key in every little piece of information. It's that tight integration where I see benefits coming in: it gives value to the platform and gives the customer the assurance that it's configured correctly, because the process is automated and Commvault has ensured that every step of the way the right decisions are being made. And with Metallic, everything is about tried and tested products with a very, very smart workflow process put around them to ensure that, with the decisions you make, you don't need to be a Commvault expert to get the outcome and get the backups. >> Excellent. Well, Mark, thank you for joining Stu and me on theCUBE, talking about the evolution the University of Leicester has gone through and your thoughts on Commvault's evolution in parallel. We appreciate your time. For Stu Miniman, I'm Lisa Martin. You're watching theCUBE from Commvault GO 2019.

Published Date : Oct 15 2019


Mike Scarpelli | ServiceNow Knowledge14


 

>> theCUBE at ServiceNow Knowledge 14 is sponsored by ServiceNow. Here are your hosts, Dave Vellante and Jeff Frick. >> Okay, we're back. This is Dave Vellante with Jeff Frick. We're here live at Moscone South, and this is the Knowledge 14 conference: 6,600 people here and growing; it was about 4,000 last year. This conference is growing at about the same pace as ServiceNow's top line. They're growing at sixty percent plus, on pace to do over 600 million in revenue this year, on pace to be a billion-dollar company, and we have the CFO here, Mike Scarpelli, CUBE alum. Mike, great to see you again. >> Thank you. >> So this is amazing. I mean, Moscone is a great venue; the Aria last year was kind of intimate, and now you're really blowing it out. I would expect next year you're going to be in the big time of conferences. >> Well, I've got to budget for that. I know it's going to cost more; just like the attendance is going up fifty, sixty percent, the costs are going up as well. But our partners are really important, and our partners offset a lot of those costs. We'll get over eight million in sponsorship revenue to offset that, so next year we expect we'll see a corresponding increase in the sponsorship revenue as well. >> Well, it's impressive. You have a lot of strong partners, particularly the system integrator and consultancy types. We saw, and I hope I don't miss somebody, Accenture there last night, we saw Ernst & Young giving a presentation, KPMG,
Deloitte, and Cloud Sherpas, whom we had on earlier. You have a lot of these facilitators, which is a great sign for you, and they're realizing, okay, there's money to be made around the ServiceNow ecosystem helping customers implement. So that's got to make you really happy. >> You know, one of the things that's really important for us with the system integrators is that today they haven't really brought us any deals, but they've been very influential in accelerating deals, and we think that theme is going to continue. Based upon what they're seeing they're able to do in the ServiceNow ecosystem in terms of professional-services consulting engagements, we think that's going to start to motivate them to bring us into deals that we were never in before. What they have been able to do as well, besides just accelerate, is have the deals grow beyond IT, and we've seen that in numerous Global 2000 accounts. >> And you're not trying to land-grab the professional services business, that's clear. In fact, when I talked to some of your customers last year, one customer was complaining that your price was real high on the services side, which probably makes you happy, because it leaves more room for your partners, and it's really not a long-term piece of your revenue. I think you've said publicly you want it to be less than fifteen percent of your business, right? >> Yes. We have a little bit of an ongoing debate internally. My preference is not to see the professional services organization grow in terms of headcount with pure implementation people; the area I would like to see it grow is more on the training side. Unfortunately, some of our customers insist that we are part of the professional services engagement, so those are the ones we're going to be involved in, and if a customer is looking for a lower-cost alternative, we want to make it fair for our partners so that we're not competing with them,
so that they can come up with a lower price. Offering a good-quality service is important, though; it's not about going for the lowest price. Our partners need to make investments so that they can deliver quality implementations. There are a number of early implementations that were done by some of our smaller partners where they really didn't meet the expectations of those customers, and we've had to go in and fix some of those engagements. So the number one goal for our professional services is to ensure we have happy customers, because happy customers renew and buy more, which are two of the key drivers for our growth. >> So you keep growing like crazy. You blew it out last quarter: 181 million in billings, revenues up 60-plus percent, you're throwing off cash, hitting all your metrics. Of course, the stock went down. There you go: not much more you could do. But you've got to be really pleased with the consistent performance and, really, the predictability of the company. >> Yeah. I've been the CFO of the company coming on three years in the summer, and the one thing I will say about this business model is that it's extremely predictable in terms of forecasting. What helps with that is the fact that we have such high renewal rates. Since I've been here, we've never lost any major accounts; I think our renewal rate has been averaging north of ninety-five percent. And our upsells have been very consistent: on average they run about a third of our business every quarter. Frank has made the comment before, too, that even if we didn't sign on another customer, we could still grow twenty-five percent per year just based upon the upsell opportunity within our existing installed base of customers. >> That's penetrating accounts deeper: more seats, more licenses, more processes and applications. >> Yeah. The main contributor to our upsells within our
customers really has been additional seat licenses, because many of our customers still haven't fully penetrated IT, and as we roll out more applications or make our applications more feature-rich... as Frank talked about a little in his keynote today, IT costing: we've always had that as an application, but it's coming out as a much more feature-rich application that's going to be a lot more usable for some of our customers. When that goes live, it's going to drive more licenses, because many times it's different people within IT that are the process users behind that. And then it's going outside of IT as well, with the adoption of the enterprise service management concept that Frank's been talking about; that will drive incremental users, too. We do have some additional products, such as orchestration and discovery, but the vast majority of our growth in customers is additional licensing. >> So, very consistent performance. Like I say, the stock pulled back a little bit, which is interesting: you guys, Workday, Splunk, Tableau, smoking-hot stocks, all pulled back. It's almost like you trade as a group, even though you're completely different companies with completely different business models; you don't compete really at all. So you've kind of got to be flattered to be in that group. >> Yeah, obviously. But I look at it as: this is good in a way, this is a healthy pullback. It's maybe a buying opportunity for people that wanted to get in, and there are a lot of folks, I'm sure, that are looking at that. >> How much attention do you even pay to it? I know most CFOs would say, look, we can't control it; all we can control is what we can control, and that's what we focus on. But do you even look at things like that? What are your thoughts? >> Unfortunately, there is a little bit of a psychology going on here with some of our employees, and they're always asking, and my comment to them is: the only price that matters is the day you sell. And
this pullback that we've seen recently, this is not uncommon. Was I expecting it to happen right now? If I could predict those things, I'd be in a different line of business. But what I will say is that history is the best indicator of the future, and take even a company like salesforce.com: one of our large investors sent me an email last week and said, you do realize that in the first five years of Salesforce being public, it had, I forget if it was four or five, fifty-percent pullbacks in its stock price. So this happens, it will happen, I guarantee it will happen again sometime in the future, and not just with us, with all the other companies too. I'd be more concerned if we were the only company that traded down and everyone else stayed up, but we all traded down, and we all came back today. >> It's interesting: you kind of burned the shorts last year, and they've made some money now, but, as Peter Lynch says, don't ever short great companies. It's very hard to short great companies; your timing has to be perfect. And your core business, like, for instance, Workday's, is fundamentally very profitable, or it should be, right? Because you're spending like crazy on sales and marketing, you're expanding into Asia-Pacific, you're expanding your total available market, and you're still throwing off cash. I wonder if you can talk about that a little bit. You had said off camera your goal is really to throw off little cash, basically be cash-flow breakeven. >> Yes. You can only grow at a certain pace. Last quarter we added 150 new people to our sales and marketing organization; that was the largest number we've ever added. We actually added 273 net new employees in Q1; that was the most we've ever added in a quarter. And even with all those adds, we still had very good positive cash flow. It's pretty hard to add at any faster pace than we're doing right now, so I just don't see us being
cash-flow negative anytime in the future, unless something happened, and it would have to be a pretty major catastrophe, and it wouldn't be specific to ServiceNow; it would be across the board, with all CIOs stopping spending. >> The other thing I learned here, and I thought maybe I just wasn't paying attention on earlier conference calls, is the Asia-Pacific focus: a large percentage of the Global 2000 is in Asia-Pacific, so you're out nation-building, right? I wonder if you could talk about that. >> Sure. From March 31st, 2013 to March 31st, 2014, we opened up in 10 new countries, most of them in Asia-Pacific, and there are still more countries there that we're going to be going into. And why are we going into these countries? Because that's where the Global 2000 accounts are. That is our strategy, because we focus on quality of customers, not quantity of customers. What do I mean by a quality customer? One that can grow over time to be a very large customer. Even in 2013 we went into Italy, and people said at the time, well, why are you going into Italy? We went into Italy because they have thirty-something Global 2000 accounts, and even though the Italian economy wasn't doing well, Global 2000 customers still spend; it's not specific to that country, they're global. We signed two Global 2000 accounts in Italy last quarter, so we have a history of showing that if we go into those countries, we will be successful in winning those Global 2000 accounts, and we'll continue. There are some Global 2000 accounts in geographies where it's going to take some time before we have a physical presence, such as mainland China; we do not have any salespeople in mainland China today. Russia: we do not have any people in Russia today. >> How about Ukraine? >> We have no one in Ukraine today. >> I wanted to talk about the TAM. Last year I kind of watched it, but
I was asking Columbo questions about the TAM, because it was very interesting: I saw a lot of potential and wanted to try to understand how big it could be. You and I talked about it, and you had said it's north of eight billion; of course, the stock took off, I think it was probably at 10 billion from a valuation standpoint. Mid-year I did my own TAM in a blog post and had it up to 30 billion; I started to understand it, although it was top-down, not bottom-up. But you guys are starting to communicate the TAM a little bit differently. You've got the help desk, and then beyond that IT service management, then you've essentially got IT operations management, and even now sort of enterprise and business management. I wonder if you could talk about how you look at the TAM and any attempts you've made to quantify it. >> Sure. There are really four markets we play in, and they intersect with one another. The core of our market is IT service management; that's our beachhead and how we go into accounts. Historically, when we went public, the Gartner Groups of the world looked at it as a help-desk replacement market, and they were saying it's a 1.4 to 1.6 billion-dollar market. What they were missing is that there are many other things in that IT service management space, such as PPM, our CMDB, asset management; a lot of these things aren't in your traditional help desk. We think, based upon the rate at which we've been extracting from the market, that it's somewhere around a six-billion-dollar market opportunity, just IT service management. And IT service management is a subset of the overall enterprise service management market that Frank has been talking about; we said at our analyst day that we think that is potentially as high as 10x the size of our IT service management market, so that can get you up to, say, 40 billion dollars plus. And then you as well have the IT operations management space. In IT
service management you just have the legacy vendors down there, nothing innovative happening. Service relationship management: a lot of white space, a lot of stuff that's being done in email, Lotus Notes, Microsoft Access, SharePoint. Those are the markets we're going after; there really are no true systems in that space, it's those one-off custom apps. IT operations management: there is a lot of innovation happening in that space, and it is very crowded, with some new vendors as well as the legacy vendors. As for the area we'll play in, IDC talks about the whole 18-billion-dollar market; it's still early innings, but it's at least two billion of that market we'll be going after. And then Frank brought up this concept of business analytics as well. We did our acquisition of Mirror42 in 2013, and business analytics sits at the top of enterprise service relationship management. The market we could go after there is a whole market unto itself, at least as big as enterprise service management, but we're not going after that whole market, just business analytics to the extent it relates to enterprise service management, so that's at least a couple billion more. Unfortunately, this is just what we believe; there are no published reports out there, and time is going to tell. It's similar to when Salesforce went public: no one believed the opportunity in front of it, and now look how big that company is, a 30-billion-dollar-plus company. >> Valuations depend on what time of year it is and what the markets are doing, but over the long term you can sort of do valuation analysis. In the CFO world, is there some kind of thought in terms of the ratio between an organization's TAM and its valuation? Other things matter obviously, the leadership and so on, but for the top companies is there a relationship? >> I personally don't get wrapped up in valuation. I can't control
that; I can't control public-company multiples. The only thing we have control over is running our own business, and we're going to stay very focused on running our business and let others take care of the valuation. >> Good business. You picked a good one. >> Yes, I'm very pleased with this one. >> Excellent. All right, Mike, listen, thanks very much for coming on theCUBE. We're up against the clock, and I always appreciate your time. >> Thank you, Dave. >> All right, we'll be right back with our next guest. We're live from Moscone South. This is Dave Vellante with Jeff Frick. We'll be right back.

Published Date : Apr 30 2014
