

Tech Titans and the Confluence of the Data Cloud


 

>>With me are three amazing guest panelists. One of the things that we can do today with data that we, say, weren't able to do maybe five years ago? >>Yes, certainly. Um, I think there are lots of specific things we could enumerate. But if you were to zoom out and look at the big picture, our ability to reason through data, to inform our choices and actions with data, is bigger than ever before. There are still many companies that have to decide to sample data or to throw away older data, or they don't have the right data from external companies to put their decisions and actions in context. Now we have the technology and the platforms to bring all that data together, tear down silos, and look 360 at a customer or an entire action. So I think it's reasoning through data that has increased the capability of organizations dramatically in the last few years. >>So, Mai-Lan, when I was a young pup at IDC, I started the storage program there many, many moons ago, and so I always pay attention to what's going on in storage in the back of my mind. And S3, people forget sometimes, was actually the very first cloud product announced by AWS, which really ushered in the cloud era. And that was 2006, and it fundamentally changed the way we think about storing data. I wonder if you could explain how S3 specifically, and object storage generally, you know, with GET and PUT, really transformed storage from a blocker to an enabler of some of these new workloads that we're seeing. >>Absolutely. I think it has been transformational for many companies in every industry. And the reason for that is because in S3 you can consolidate all the different data sets that today are scattered around so many companies' different data centers. And so if you think about it, S3 gives you the ability to put in unstructured data, which is your video recordings and images. It takes semi-structured data, which is your CSV files, which every company has lots of. And it also has support for structured data types like Parquet files, which drive a lot of the business decisions that every company has to make today. And so if you think about S3, which launched on Pi Day in March of 2006, S3 started off as an object store, but it has evolved into so much more than that, where companies all over the world, in every industry, are taking those different data sets, they're putting them in S3, they're growing their data, and then they're growing the value that they capture on top of that data. And that is the separation we see that Snowflake talks about, and many of the pioneers across different industries talk about, which is a separation of the growth of storage and the growth of your compute applications. And what's happening is that when you have a place to put your data like S3, which is secure by default and has the availability and the durability and the operational profile you know and can trust, then the innovation of the application developers really takes over. And you know, one example of that is where we have a customer in the financial sector, and they started to use S3 to put their customer care recordings, and they were just using it for storage, because that data set obviously grows very quickly, and then somebody in their fraud department got the idea of doing machine learning on top of those customer care recordings. And when they did that, they found really interesting data that they could then feed into their fraud detection models.
And so you get this kind of alchemy of innovation that happens when you take the data sets of today and yesterday and tomorrow, you put them all in one place, which is S3, and the innovation of your application developers just takes over and builds not just what you need today, but what you need in the future as well. >>Thank you for that. Mark, I want to bring you into this panel. It's great to have you here, so thank you. I mean, Tableau has been a game changer for organizations. I remember my first Tableau conference: passionate customers, and really bringing cloud-like agility and simplicity to visualization. It totally changed the way people thought about data, marrying massive data volumes with simplified access. And now we're seeing new workloads that are developing on top of data and Snowflake data in the cloud. Can you talk about how your customers are really telling stories, and bringing to life those stories with data, on top of things like S3, which Mai-Lan was just talking about? >>Yeah, for sure. Building on what Christian and Mai-Lan have already said, our mission at Tableau has always been to help people see and understand data. And you look at the amazing advances that are happening in storage and data processing, and now the data that you can see and play with is so amazing, right? Like at this point in time, yeah, it's really nothing short of a new microscope or a new telescope that really lets you understand patterns that were always there in the world, but you literally couldn't see them because of the limitations of the amount of data that you could bring into the picture, because of the amount of processing power and the amount of sharing of data that you could bring into the picture. And now, like you said, these three things are coming together: this amazing ability to see and tell stories with your data, combined with the fact that you've got so much more data at your fingertips, the fact that you can now process that data, look at that data, share that data in ways that were never possible. Again, I'll go back to that analogy: it feels like the invention of a new microscope, a new telescope, a new way to look at the world and tell stories and get to insights that just were never possible before. >>So thank you for that. And Christian, I want to come back to this notion of the data cloud. You know, it's a very powerful concept, and of course it's good marketing. But I wonder if you could add some additional color for the audience. I mean, what more can you tell us about the data cloud, how you're seeing it evolving, and maybe building on some of the things that Mark was just talking about, just in terms of bringing this vision into reality? >>Certainly. Yeah, the data cloud, for sure, is bigger and more concrete than just the marketing value of it. The big insight behind our vision for the data cloud is that just a technology capability, just a cloud data platform, is not what gets organizations to be able to be data driven, to be able to make great use of data, or be, um, highly capable in terms of data ability. The other element beyond technology is the access and availability of data, to put their own data in context, or enrich it based on data from other third parties. So the data cloud, the way to think about it, is a combination of both technology, which for Snowflake is our cloud data platform, and all
the workloads: the ability to do data warehousing, and queries, and speeds and feeds fit in there, and data engineering, etcetera. But it's also: how do we make it easier for our customers to have access to the data they need, or could benefit from, to improve the decisions for their own organizations? Think of the analogy of a set-top box. I can give you a great technical set-top box, but if there's no content on the other side, it makes it difficult for you to get value out of it. That's how we should all be thinking about the data cloud. It's technology, but it's also seamless access to data. >>Mai-Lan, can you give us a sense of the scope, and what kind of scale are you seeing with Snowflake on AWS? >>Well, Snowflake has always driven, as Christian knows, a very high transaction rate to S3. And in fact, when Christian and I were talking just yesterday, we were talking about some of the things that have really been remarkable about the long partnership that we've had over the years. And so I'll give you an example of how that evolution has really worked. So, as you know, S3 was, you know, the first AWS service launched, and we have customers who have petabytes, hundreds of petabytes, and exabytes of storage in S3. And so, from the ground up, S3 has been built for scale. And so when we have customers like Snowflake that have very high transaction rates for requests for S3 storage, we put our customer hat on and we ask customers like Snowflake: how do you think about performance? Not just what performance do you need, but how do you think about performance? And you know, when Christian's team walked us through the demands of making requests to their S3 data, they were talking about some pretty high spikes over time and just a lot of volume. And so when we built improvements into our performance over time, we put that customer hat on. You know, Snowflake was telling us what they needed, and then we built our performance model not around a bucket or an account; we built it around a request rate per prefix, because that's what Snowflake and other customers told us they needed. And so when you think about how we scale our performance, we scale it based on a prefix and not a bucket or an account, which other cloud providers do. We do it in this unique way because 90% of our customer roadmap across AWS comes from customer requests. And that's what Snowflake and other customers were saying: hey, I think about my performance based on a prefix of an object, and not some, you know, arbitrary semantic of how I happened to organize my buckets. I think the other thing I would also throw out there for scale is, as you might imagine, S3 is a very large distributed system. And again, if I go back to how we architected for our performance improvements, we architected in such a way that a customer like Snowflake could come in and take advantage of horizontal scaling. They can do parallel data retrievals, puts and gets, for their data. And when they do that, they can get tens of thousands of requests per second, because they're taking advantage of the scale of S3. And so you know, when we think about scale, it's not just scale in the sense of the growth of your storage, which every customer needs. IDC says that digital data is growing at 40% year over year, and so every customer needs a place to put all of those data sets that are growing.
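To make that concrete, here is a minimal sketch of the access pattern Mai-Lan describes: keys spread across several prefixes and fetched in parallel, so the aggregate request rate scales with the number of prefixes rather than with the bucket as a whole. The bucket and key names are hypothetical placeholders, and the sketch assumes the standard boto3 SDK.

```python
# Minimal sketch: parallel S3 GETs spread across key prefixes.
# S3 scales request rates per prefix, so partitioning the keyspace
# lets a client like Snowflake fan out retrievals horizontally.
from concurrent.futures import ThreadPoolExecutor

import boto3

s3 = boto3.client("s3")
BUCKET = "example-analytics-bucket"  # hypothetical bucket name

# Keys partitioned under distinct prefixes (prefix-0/ ... prefix-7/).
keys = [f"prefix-{p}/part-{i:05d}.parquet" for p in range(8) for i in range(4)]

def fetch(key):
    # Each GET is independent, so it can be parallelized and retried freely.
    obj = s3.get_object(Bucket=BUCKET, Key=key)
    return key, len(obj["Body"].read())

with ThreadPoolExecutor(max_workers=16) as pool:
    for key, size in pool.map(fetch, keys):
        print(f"{key}: {size} bytes")
```

The design point is that the parallelism is already there at the storage layer; the client just has to lay out its keyspace so the load spreads across prefixes.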
But the way we have also worked together for many years is this: how can we think about how Snowflake and other customers are driving these patterns of access on top of the data, not just the elasticity of the storage, but the access? And then how can we architect, often very uniquely, as I talked about with our request rate, in such a way that they can achieve what they need to do, not just today but in the future? >>You know, you three companies here don't often take your customer hats off. Mark, I wonder if I could come to you. You know, during the Data Cloud Summit, we've been exploring this notion that innovation in technology has really evolved from point products, you know, the next generation of server or software tool, to platforms that made infrastructure simpler, and now it's evolving into leveraging ecosystems, you know, the power of many versus the resources of one. So my question is, you know, how are you all collaborating and creating innovations that your customers can leverage? >>Yeah, for sure. So certainly, you know, Tableau and Snowflake, you know, kind of were natural partners from the beginning, right? Like putting that visualization engine on top of Snowflake to, you know, combine that processing power on data and the ability to visualize it was obvious. As you talk about the larger ecosystem now, of course, Tableau is part of Salesforce, and so there's a much more interesting story now to be told across the three companies, as we talk about Tableau and Salesforce combined together, of really having this full circle of Salesforce, you know, with this amazing set of business apps that drive so much value for customers, and getting the data that comes out of their Salesforce applications, putting it into Snowflake so that you can combine it, share it, process it, combine it with data not just from across Salesforce but from your other apps in the way that you want, and then put Tableau on top of it. Now you're talking about this amazing platform ecosystem of data, you know, coming from your most valuable business applications in the world, with, you know, sales opportunity objects, marketing, service, all of that information flowing into this flexible data platform, and then this amazing visualization platform on top of it. And there's really no end to the things that our customers can do with that combination. >>Christian, we're out of time, but I wonder if you could bring us home, and I want to end with, you know, let's say some people here, maybe they're still struggling with the cumbersome nature of, let's say, their on-prem data warehouses. You know, they can't just unplug them, because they rely on them for certain things, like reporting. But let's say they want to raise the bar on their data and analytics. What would you advise as the next step for them? >>I think the first step to take is to embrace the cloud and the promise and the capabilities of cloud technology. There are many studies showing that, relative to peers, companies that embrace data are coming out ahead and outperforming their peers. And with traditional technology, on-prem technology, you ended up with a proliferation of silos and copies of data, and a lot of energy went into managing those on-prem systems and making copies and data governance and security. And cloud technology
and the type of platform that Snowflake has brought to market enable organizations to focus on the data, the data model, data insights, and not necessarily on managing the infrastructure. So I think that's the first recommendation from our end: embrace cloud, get onto a modern cloud data platform, make sure you're spending your time on data, not managing infrastructure, and see what the infrastructure lets you do. >>Okay, this is Dave Vellante for the Cube. Thank you for watching. Keep it right there, with more great content coming your way.

Published Date : Nov 20 2020



Bob Muglia, George Gilbert & Tristan Handy | How Supercloud will Support a new Class of Data Apps


 

(upbeat music) >> Hello, everybody. This is Dave Vellante. Welcome back to Supercloud2, where we're exploring the intersection of data analytics and the future of cloud. In this segment, we're going to look at how the Supercloud will support a new class of applications, not just work that runs on multiple clouds, but rather a new breed of apps that can orchestrate things in the real world. Think Uber for many types of businesses. These applications, they're not about codifying forms or business processes. They're about orchestrating people, places, and things in a business ecosystem. And I'm pleased to welcome my colleague and friend, George Gilbert, former Gartner analyst, Wikibon market analyst, former equities analyst as my co-host. And we're thrilled to have Tristan Handy, who's the founder and CEO of DBT Labs, and Bob Muglia, who's the former President of Microsoft's Enterprise business and former CEO of Snowflake. Welcome all, gentlemen. Thank you for coming on the program. >> Good to be here. >> Thanks for having us. >> Hey, look, I'm going to start actually with the SuperCloud because both Tristan and Bob, you've read the definition. Thank you for doing that. And Bob, you have some really good input, some thoughts on maybe some of the drawbacks and how we can advance this. So what are your thoughts in reading that definition around SuperCloud? >> Well, I thought first of all that you did a very good job of laying out all of the characteristics of it and helping to define it overall. But I do think it can be tightened a bit, and I think it's helpful to do it in as short a way as possible. And so in the last day I've spent a little time thinking about how to take it and write a crisp definition. And here's my go at it. This is one day old, so gimme a break if it's going to change. And of course we have to follow the industry, and so that, and whatever the industry decides, but let's give this a try. So in the way I think you're defining it, what I would say is a SuperCloud is a platform that provides programmatically consistent services hosted on heterogeneous cloud providers. >> Boom. Nice. Okay, great. I'm going to go back and read the script on that one and tighten that up a bit. Thank you for spending the time thinking about that. Tristan, would you add anything to that or what are your thoughts on the whole SuperCloud concept? >> So as I read through this, I fully realize that we need a word for this thing because I have experienced the inability to talk about it as well. But for many of us who have been living in the Confluent, Snowflake, you know, this world of like new infrastructure, this seems fairly uncontroversial. Like I read through this, and I'm just like, yeah, this is like the world I've been living in for years now. And I noticed that you called out Snowflake for being an example of this, but I think that there are like many folks, myself included, for whom this world like fully exists today. >> Yeah, I think that's a fair, I dunno if it's criticism, but people observe, well, what's the big deal here? It's just kind of what we're living in today. It reminds me of, you know, Tim Berners-Lee saying, well, this is what the internet was supposed to be. It was supposed to be Web 2.0, so maybe this is what multi-cloud was supposed to be. Let's turn our attention to apps. Bob first and then go to Tristan. Bob, what are data apps to you? When people talk about data products, is that what they mean? Are we talking about something more, different? What are data apps to you?
Well, to understand data apps, it's useful to contrast them to something, and I just use the simple term people apps. I know that's a little bit awkward, but it's clear. And almost everything we work with, almost every application that we're familiar with, be it email or Salesforce or any consumer app, those are applications that are targeted at responding to people. You know, in contrast, a data application reacts to changes in data and uses some set of analytic services to autonomously take action. So where applications that we're familiar with respond to people, data apps respond to changes in data. And they both do something, but they do it for different reasons. >> Got it. You know, George, you and I were talking about, you know, it comes back to SuperCloud, broad definition, narrow definition. Tristan, how do you see it? Do you see it the same way? Do you have a different take on data apps? >> Oh, geez. This is like a conversation that I don't know has an end. It's like been, I write a substack, and there's like this little community of people who all write substack. We argue with each other about these kinds of things. Like, you know, as many different takes on this question as you can find, but the way that I think about it is that data products are atomic units of functionality that are fundamentally data driven in nature. So a data product can be as simple as an interactive dashboard that is like actually had design thinking put into it and serves a particular user group and has like actually gone through kind of a product development life cycle. And then a data app or data application is a kind of cohesive end-to-end experience that often encompasses like many different data products. So from my perspective there, this is very, very related to the way that these things are produced, the kinds of experiences that they're provided, that like data innovates every product that we've been building in, you know, software engineering for, you know, as long as there have been computers. >> You know, Zhamak Dehghani oftentimes uses the, you know, she doesn't name Spotify, but I think it's Spotify as that kind of example she uses. But I wonder if we can maybe try to take some examples. If you take, like George, if you take a CRM system today, you're inputting leads, you got opportunities, it's driven by humans, they're really inputting the data, and then you got this system that kind of orchestrates the business process, like runs a forecast. But in this data driven future, are we talking about the app itself pulling data in and automatically looking at data from the transaction systems, the call center, the supply chain and then actually building a plan? George, is that how you see it? >> I go back to the example of Uber, may not be the most sophisticated data app that we build now, but it was like one of the first where you do have users interacting with their devices as riders trying to call a car or driver. But the app then looks at the location of all the drivers in proximity, and it matches a driver to a rider. It calculates an ETA to the rider. It calculates an ETA then to the destination, and it calculates a price. Those are all activities that are done sort of autonomously that don't require a human to type something into a form. The application is using changes in data to calculate an analytic product and then to operationalize that, to assign the driver to, you know, calculate a price. Those are, that's an example of what I would think of as a data app.
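George's Uber example can be sketched in a few lines: a handler that fires on a change in data (a new ride request) and autonomously matches a driver, estimates an ETA, and prices the trip. All of the coordinates, speeds, and fares below are invented for illustration; a real system would read driver locations from a streaming store rather than a Python list.

```python
# Toy "data app": reacts to a data change (a ride request) and
# autonomously matches, estimates, and prices. All values are invented.
import math

drivers = [
    {"id": "d1", "lat": 40.741, "lon": -73.989},
    {"id": "d2", "lat": 40.758, "lon": -73.985},
]

def km(a_lat, a_lon, b_lat, b_lon):
    # Rough equirectangular distance; good enough for a toy example.
    x = math.radians(b_lon - a_lon) * math.cos(math.radians((a_lat + b_lat) / 2))
    y = math.radians(b_lat - a_lat)
    return 6371 * math.hypot(x, y)

def on_ride_requested(rider_lat, rider_lon, dest_lat, dest_lon):
    # React to the data change: nearest driver, pickup ETA, trip price.
    driver = min(drivers, key=lambda d: km(d["lat"], d["lon"], rider_lat, rider_lon))
    pickup_km = km(driver["lat"], driver["lon"], rider_lat, rider_lon)
    trip_km = km(rider_lat, rider_lon, dest_lat, dest_lon)
    eta_min = pickup_km / 0.4            # assumes ~24 km/h city driving speed
    price = 2.50 + 1.75 * trip_km        # hypothetical base fare plus per-km rate
    return {"driver": driver["id"], "eta_min": round(eta_min, 1), "price": round(price, 2)}

print(on_ride_requested(40.748, -73.986, 40.730, -73.991))
```

No human fills in a form anywhere in that loop; the arrival of new data is what drives the action.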
And my question then I guess for Tristan is if we don't have all the pieces in place for sort of mainstream companies to build those sorts of apps easily yet, like how would we get started? What's the role of a semantic layer in making that easier for mainstream companies to build? And how do we get started, you know, say with metrics? How does that, how does that take us down that path? >> So what we've seen in the past, I dunno, decade or so, is that one of the most successful business models in infrastructure is taking hard things and rolling 'em up behind APIs. You take messaging, you take payments, and you all of a sudden increase the capability of kind of your median application developer. And you say, you know, previously you were spending all your time being focused on how do you accept credit cards, how do you send SMS payments, and now you can focus on your business logic, and just create the thing. One of, interestingly, one of the things that we still don't know how to API-ify is concepts that live inside of your data warehouse, inside of your data lake. These are core concepts that, you know, you would imagine that the business would be able to create applications around very easily, but in fact that's not the case. It's actually quite challenging to, and involves a lot of data engineering pipeline and all this work to make these available. And so if you really want to make it very easy to create some of these data experiences for users, you need to have an ability to describe these metrics and then to turn them into APIs to make them accessible to application developers who have literally no idea how they're calculated behind the scenes, and they don't need to. >> So how rich can that API layer grow if you start with metric definitions that you've defined? And DBT has, you know, the metric, the dimensions, the time grain, things like that, that's a well scoped sort of API that people can work within. How much can you extend that to say non-calculated business rules or governance information like data reliability rules, things like that, or even, you know, features for an AIML feature store. In other words, it starts, you started pragmatically, but how far can you grow? >> Bob is waiting with bated breath to answer this question. I'm, just really quickly, I think that we as a company and DBT as a product tend to be very pragmatic. We try to release the simplest possible version of a thing, get it out there, and see if people use it. But the idea that, the concept of a metric is really just a first landing pad. The really, there is a physical manifestation of the data and then there's a logical manifestation of the data. And what we're trying to do here is make it very easy to access the logical manifestation of the data, and metric is a way to look at that. Maybe an entity, a customer, a user is another way to look at that. And I'm sure that there will be more kind of logical structures as well. >> So, Bob, chime in on this. You know, what's your thoughts on the right architecture behind this, and how do we get there? >> Yeah, well first of all, I think one of the ways we get there is by what companies like DBT Labs and Tristan is doing, which is incrementally taking and building on the modern data stack and extending that to add a semantic layer that describes the data. Now the way I tend to think about this is a fairly major shift in the way we think about writing applications, which is today a code first approach to moving to a world that is model driven. 
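Tristan's point about rolling metrics up behind APIs might look something like this from the application developer's side. The endpoint, payload shape, and metric name here are all hypothetical, not dbt's actual Semantic Layer API; the sketch only shows a developer asking for a governed metric by name, with no knowledge of how it is computed behind the scenes.

```python
# Hypothetical semantic-layer query: request a metric by name.
# Endpoint, payload, and metric name are invented for illustration.
import json
import urllib.request

query = {
    "metric": "monthly_recurring_revenue",   # defined once in the semantic layer
    "dimensions": ["customer_segment"],
    "grain": "month",
    "start": "2022-01-01",
    "end": "2022-12-31",
}

req = urllib.request.Request(
    "https://semantic-layer.example.com/v1/metrics/query",  # hypothetical endpoint
    data=json.dumps(query).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    rows = json.load(resp)  # e.g. [{"month": "2022-01", "customer_segment": "smb", "value": 120000.0}, ...]

for row in rows:
    print(row)
```

The SQL, joins, and time-grain logic all live behind the API, which is exactly the messaging-and-payments pattern Tristan describes applied to warehouse concepts.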
And I think that's what the big change will be is that where today we think about data, we think about writing code, and we use that to produce APIs as Tristan said, which encapsulates those things together in some form of services that are useful for organizations. And that idea of that encapsulation is never going to go away. It's very, that concept of an API is incredibly useful and will exist well into the future. But what I think will happen is that in the next 10 years, we're going to move to a world where organizations are defining models first of their data, but then ultimately of their business process, their entire business process. Now the concept of a model driven world is a very old concept. I mean, I first started thinking about this and playing around with some early model driven tools, probably before Tristan was born in the early 1980s. And those tools didn't work because the semantics associated with executing the model were too complex to be written in anything other than a procedural language. We're now reaching a time where that is changing, and you see it everywhere. You see it first of all in the world of machine learning and machine learning models, which are taking over more and more of what applications are doing. And I think that's an incredibly important step. And learned models are an important part of what people will do. But if you look at the world today, I will claim that we've always been modeling. Modeling has existed in computers since there have been integrated circuits and any form of computers. But what we do is what I would call implicit modeling, which means that it's the model is written on a whiteboard. It's in a bunch of Slack messages. It's on a set of napkins in conversations that happen and during Zoom. That's where the model gets defined today. It's implicit. There is one in the system. It is hard coded inside application logic that exists across many applications with humans being the glue that connects those models together. And really there is no central place you can go to understand the full attributes of the business, all of the business rules, all of the business logic, the business data. That's going to change in the next 10 years. And we'll start to have a world where we can define models about what we're doing. Now in the short run, the most important models to build are data models and to describe all of the attributes of the data and their relationships. And that's work that DBT Labs is doing. A number of other companies are doing that. We're taking steps along that way with catalogs. People are trying to build more complete ontologies associated with that. The underlying infrastructure is still super, super nascent. But what I think we'll see is this infrastructure that exists today that's building learned models in the form of machine learning programs. You know, some of these incredible machine learning programs in foundation models like GPT and DALL-E and all of the things that are happening in these global scale models, but also all of that needs to get applied to the domains that are appropriate for a business. And I think we'll see the infrastructure developing for that, that can take this concept of learned models and put it together with more explicitly defined models. And this is where the concept of knowledge graphs come in and then the technology that underlies that to actually implement and execute that, which I believe are relational knowledge graphs. >> Oh, oh wow. There's a lot to unpack there. 
So let me ask the Columbo question, Tristan, we've been making fun of your youth. We're just, we're just jealous. Columbo, I'll explain it offline maybe. >> I watch Columbo. >> Okay. All right, good. So but today if you think about the application stack and the data stack, which is largely an analytics pipeline. They're separate. Do they, those worlds, do they have to come together in order to achieve Bob's vision? When I talk to practitioners about that, they're like, well, I don't want to complexify the application stack 'cause the data stack today is so, you know, hard to manage. But do those worlds have to come together? And you know, through that model, I guess abstraction or translation that Bob was just describing, how do you guys think about that? Who wants to take that? >> I think it's inevitable that data and AI are going to become closer together. I think that the infrastructure there has been moving in that direction for a long time. Whether you want to use the Lakehouse portmanteau or not. There's also, there's a next generation of data tech that is still in the like early stage of being developed. There's a company that I love that is essentially Cross Cloud Lambda, and it's just a wonderful abstraction for computing. So I think that, you know, people have been predicting that these worlds are going to come together for a while. A16Z wrote a great post on this back in I think 2020, predicting this, and I've been predicting this since 2020. But what's not clear is the timeline, but I think that this is still just as inevitable as it's been.
Certainly Google, Microsoft and Amazon are doing very, very similar things in terms of building complete solutions that bring together an analytics stack that typically supports languages like Python together with the data stack and the data warehouse. I mean, all of those things are going to evolve, and they're not going to go away because that infrastructure is relatively new. It's just being deployed by companies, and it solves the problem of working with petabytes of data if you need to work with petabytes of data, and nothing will do that for a long time. What's missing is a layer that understands and can model the semantics of all of this. And if you need to, if you want to model all, if you want to talk about all the semantics of even data, you need to think about all of the relationships. You need to think about how these things connect together. And unfortunately, there really is no platform today. None of our existing platforms are ultimately sufficient for this. It was interesting, I was just talking to a customer yesterday, you know, a large financial organization that is building out these semantic layers. They're further along than many companies are. And you know, I asked what they're building it on, and you know, it's not surprising they're using a, they're using combinations of some form of search together with, you know, textual based search together with a document oriented database. In this case it was Cosmos. And that really is kind of the state of the art right now. And yet those products were not built for this. They don't really, they can't manage the complicated relationships that are required. They can't issue the queries that are required. And so a new generation of database needs to be developed. And fortunately, you know, that is happening. The world is developing a new set of relational algorithms that will be able to work with hundreds of different relations. If you look at a SQL database like Snowflake or BigQuery, you know, you get tens of different joins coming together, and that query is going to take a really long time. Well, fortunately, technology is evolving, and it's possible with new join algorithms, worst-case optimal join algorithms, they're called, where you can join hundreds of different relations together and run semantic queries that you simply couldn't run. Now that technology is nascent, but it's really important, and I think that will be a requirement to have this semantic layer reach its full potential. In the meantime, Tristan can do a lot of great things by building up on what he's got today and solve some problems that are very real. But in the long run I think we'll see a new set of databases to support these models. >> So Tristan, you got to respond to that, right? You got to, so take the example of Snowflake. We know it doesn't deal well with complex joins, but they're, they've got big aspirations. They're building an ecosystem to really solve some of these problems. Tristan, you guys are part of that ecosystem, and others, but please, your thoughts on what Bob just shared. >> Bob, I'm curious if, I would have no idea what you were talking about except that you introduced me to somebody who gave me a demo of a thing and do you not want to go there right now? >> No, I can talk about it. I mean, we can talk about it.
Look, the company I've been working with is Relational AI, and they're doing this work to actually first of all work across the industry with academics and research, you know, across many, many different, over 20 different research institutions across the world to develop this new set of algorithms. They're all fully published, just like SQL, the underlying algorithms that are used by SQL databases are. If you look today, every single SQL database uses a similar set of relational algorithms underneath that. And those algorithms actually go back to System R and what IBM developed in the 1970s. We're just, there's an opportunity for us to build something new that allows you to take, for example, instead of taking data and grouping it together in tables, treat all data as individual relations, you know, a key and a set of values and then be able to perform purely relational operations on it. If you go back to what, to Codd, and what he wrote, he defined two things. He defined a relational calculus and relational algebra. And essentially SQL is a query language that is translated by the query processor into relational algebra. But however, the calculus of SQL is not even close to the full semantics of the relational mathematics. And it's possible to have systems that can do everything and that can store all of the attributes of the data model or ultimately the business model in a form that is much more natural to work with. >> So here's like my short answer to this. I think that we're dealing in different time scales. I think that there is actually a tremendous amount of work to do in the semantic layer using the kind of technology that we have on the ground today. And I think that there's, I don't know, let's say five years of like really solid work that there is to do for the entire industry, if not more. But the wonderful thing about DBT is that it's independent of what the compute substrate is beneath it. And so if we develop new platforms, new capabilities to describe semantic models in more fine grain detail, more procedural, then we're going to support that too. And so I'm excited about all of it.
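The worst-case optimal join idea Bob refers to can be illustrated with a toy generic join on the classic triangle query Q(a, b, c) = R(a, b), S(b, c), T(a, c): rather than joining relations pairwise, bind one variable at a time and intersect the candidates from every relation that mentions it. This is a teaching sketch in plain Python, not Relational AI's actual engine or data structures.

```python
# Generic (worst-case optimal style) join on the triangle query
# Q(a, b, c) = R(a, b), S(b, c), T(a, c), binding one variable at a time.
R = {(1, 2), (1, 3), (2, 3)}
S = {(2, 3), (3, 1), (3, 4)}
T = {(1, 3), (2, 1), (1, 4)}

def triangles():
    # a must appear as the first attribute of both R and T.
    a_candidates = {a for a, _ in R} & {a for a, _ in T}
    for a in sorted(a_candidates):
        # b must extend (a, b) in R and start some (b, c) in S.
        b_candidates = {b for x, b in R if x == a} & {b for b, _ in S}
        for b in sorted(b_candidates):
            # c must close the triangle in both S and T.
            c_candidates = {c for x, c in S if x == b} & {c for x, c in T if x == a}
            for c in sorted(c_candidates):
                yield (a, b, c)

print(list(triangles()))  # -> [(1, 2, 3), (1, 3, 4), (2, 3, 1)]
```

A pairwise plan might first materialize a large R-join-S intermediate that T then throws away; the variable-at-a-time order never builds candidates that some relation has already ruled out, which is the intuition behind the worst-case optimality guarantee.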
Now you can argue whether it's two years, three years, five years, or 10 years, but I'd be shocked if it didn't happen in 10 years. >> Yeah, so we all agree that incremental is less disruptive. Boom, but Tristan, you're, I think I'm inferring that you believe you have the architecture to accommodate Bob's vision, and then Bob, and I'm inferring from Bob's comments that maybe you don't think that's the case, but please. >> No, no, no. I think that, so Bob, let me put words into your mouth and you tell me if you disagree, DBT is completely useless in a world where a large scale cloud data warehouse doesn't exist. We were not able to bring the power of Python to our users until these platforms started supporting Python. Like DBT is a layer on top of large scale computing platforms. And to the extent that those platforms extend their functionality to bring more capabilities, we will also service those capabilities. >> Let me try and bridge the two. >> Yeah, yeah, so Bob, Bob, Bob, do you concur with what Tristan just said? >> Absolutely, I mean there's nothing to argue with in what Tristan just said. >> I wanted. >> And it's what he's doing. It'll continue to, I believe he'll continue to do it, and I think it's a very good thing for the industry. You know, I'm just simply saying that on top of that, I would like to provide Tristan and all of those who are following similar paths to him with a new type of database that can actually solve these problems in a much more architected way. And when I talk about Cosmos with something like Mongo or Cosmos together with Elastic, you're using Elastic as the join engine, okay. That's the purpose of it. It becomes a poor man's join engine. And I kind of go, I know there's a better answer than that. I know there is, but that's kind of where we are state of the art right now. >> George, we got to wrap it. So give us the last word here. Go ahead, George. >> Okay, I just, I think there's a way to tie together what Tristan and Bob are both talking about, and I want them to validate it, which is for five years we're going to be adding or some number of years more and more semantics to the operational and analytic data that we have, starting with metric definitions. My question is for Bob, as DBT accumulates more and more of those semantics for different enterprises, can that layer not run on top of a relational knowledge graph? And what would we lose by not having, by having the knowledge graph store sort of the joins, all the complex relationships among the data, but having the semantics in the DBT layer? >> Well, I think this, okay, I think first of all that DBT will be an environment where many of these semantics are defined. The question we're asking is how are they stored and how are they processed? And what I predict will happen is that over time, as companies like DBT begin to build more and more richness into their semantic layer, they will begin to experience challenges that customers want to run queries, they want to ask questions, they want to use this for things where the underlying infrastructure becomes an obstacle. I mean, this has happened in always in the history, right? I mean, you see major advances in computer science when the data model changes. And I think we're on the verge of a very significant change in the way data is stored and structured, or at least metadata is stored and structured. Again, I'm not saying that anytime in the next 10 years, SQL is going to go away. 
In fact, more SQL will be written in the future than has been written in the past. And those platforms will mature to become the engines, the slicer dicers of data. I mean that's what they are today. They're incredibly powerful at working with large amounts of data, and that infrastructure is maturing very rapidly. What is not maturing is the infrastructure to handle all of the metadata and the semantics that that requires. And that's where I say knowledge graphs are what I believe will be the solution to that. >> But Tristan, bring us home here. It sounds like, let me put pause at this, is that whatever happens in the future, we're going to leverage the vast system that has become cloud that we're talking about a supercloud, sort of where data lives irrespective of physical location. We're going to have to tap that data. It's not necessarily going to be in one place, but give us your final thoughts, please. >> 100% agree. I think that the data is going to live everywhere. It is the responsibility for both the metadata systems and the data processing engines themselves to make sure that we can join data across cloud providers, that we can join data across different physical regions and that we as practitioners are going to kind of start forgetting about details like that. And we're going to start thinking more about how we want to arrange our teams, how does the tooling that we use support our team structures? And that's when data mesh I think really starts to get very, very critical as a concept. >> Guys, great conversation. It was really awesome to have you. I can't thank you enough for spending time with us. Really appreciate it. >> Thanks a lot. >> All right. This is Dave Vellante for George Gilbert, John Furrier, and the entire Cube community. Keep it right there for more content. You're watching SuperCloud2. (upbeat music)

Published Date : Jan 4 2023



Patrick Coughlin, Splunk | AWS re:Invent 2022


 

>>Hello and welcome back to the Cube's coverage of AWS re:Invent 2022. I'm John Furrier, host of the Cube. We got a great conversation with Patrick Coughlin, vice president of Go-to-Market Strategy and Specialization at Splunk. We're talking about the Open Cybersecurity Schema Framework, also known as the OCSF, a joint strategic collaboration between Splunk and AWS. It's got a lot of traction and momentum. Patrick, thanks for coming on the Cube for re:Invent coverage. >>John, great to be here. I'm excited for this. >>You know, I love this open source movement, and open source continues to add value, almost sets the standards. You know, we were talking at the CNCF Linux Foundation this past fall about how standards are coming out of open source. Not so much the classic standards groups, but you start to see the developers voting with their code, groups deciding what to adopt, de facto standards. And security is a real key part of that, where data becomes key for resilience. And this has been the top conversation at re:Invent and all around the industry: how to make data a key part of building in cyber resilience. So I wanna get your thoughts about the problem that you see that's emerging, that you guys are solving with this group kind of collaboration around the OCSF. >>Yeah, well look, John, I think you've already hit the high notes there. Data is proliferating across the enterprise. The attack surface area is rapidly expanding. The threat landscape is ever changing. You know, we just had a lot of scares around OpenSSL; before that we had vulnerabilities in Confluence and Atlassian, and you go back to Log4j and SolarWinds before that, and challenges with the supply chain. In this year in particular, we've had a huge acceleration in concerns and threat vectors around operational technology. In our customer base alone, we saw a huge uptick, you know, in double-digit percentages of customers that were concerned about the traditional vectors like ransomware, like business email compromise, phishing, but also insider threat and others. So you've got this highly complex environment where data continues to proliferate and flow through new applications, new infrastructure, new services, driving different types of outcomes in the digitally transformed enterprise of today. >>And what happens there is our customers, particularly in security, are left with having to stitch all of this together. And they're trying to get visibility across multiple different services, infrastructure, applications, across a number of different point solutions that they've bought to help them protect, defend, detect, and respond better. And it's a massive challenge. And you know, when our customers come to us, they are often looking for ways to drive more consolidation across a variety of different solutions. They're looking to drive better outcomes in terms of speed to detection. How do I detect faster? How do I find the thing that went bang in the night faster? How do I then fix it quickly? And then how do I layer in some automation so hopefully I don't have to do it again? Now, the challenge there that OCSF really helps to solve is to do that effectively, to detect and to respond at the speed at which attackers are demanding. Today we have to have normalization of data across this entire landscape of tools, infrastructure, services.
We have to have integration to have visibility, and these tools have to work together. But the biggest barrier to that is that data is often stored in different structures and in different formats across different solution providers, across different tools that our customers are using. And that lack of data normalization chokes the integration problem. And so, you know, several years ago, a number of very smart people, and this was an initiative started by Splunk and AWS, came together and said, look, we as an industry have to solve this for our customers. We have to start to shoulder this burden for our customers. We can't make our customers have to be systems integrators. That's not their job. Our job is to help make this easier for them. And so OCSF was born, and over the last couple of years we've built out this collaboration to not just be AWS and Splunk, but over 50 different organizations, cloud service providers, solution providers in the cybersecurity space, have come together and said, let's decide on a single unified schema for how we're gonna represent event data in this industry. And I'm very proud to be here today to say that we've launched it, and I can't wait to see where we go next. >>Yeah, I mean, this is really compelling. I mean, there's so much packed into that statement: data normalization, which you mentioned chokes the solution and integration, as you call it. But really also, it's like data's not just stored in silos, it may not even be available, right? So if you don't have availability of data, that's an important point. Number two, you mentioned supply chain. There's the physical supply chain that's coming up big time at re:Invent this time, as well as in open source, the software supply chain. So you now have the perimeter's been dead for multiple years, we've been talking about that for years, everybody knows that. But now, combined with the supply chain problem, both physical and software, there's so much more to go on. And so, you know, the leaders in the industry, they're not sitting on their hands. They know this, but they're just overloaded. So how do leaders deal with this right now, before we get into the OCSF? I wanna just get your thoughts on the psychology of the business leader who's facing this landscape. >>Yeah, well, I mean, unfortunately too many leaders feel like they have to face these trade-offs between, you know, how and where they are really focusing cyber resilience investments in the business. And often there is a siloed approach across security, IT, developer operations, or engineering, rather than the ability to kind of drive visibility, integration, and connection of outcomes across those different functions. I mean, the truth is the telemetry that you get from an application for application performance monitoring or infrastructure monitoring is often incredibly valuable when there's a security incident, and vice versa. Some of the security data that you may see in a security operations center can be incredibly valuable in trying to investigate a performance degradation in an application and understanding where that may come from. And so what we're seeing is this data layer is collapsing faster than the org charts are, or the budget line items are, in the enterprise. And so at Splunk here, you know, we believe security resilience is fundamentally a data problem.
And one of the things that we do often is actually help connect the dots for our customers and bring our customers together across the silos they may have internally, so that they can start to see a holistic picture of what resilience means for their enterprise and how they can drive faster detection outcomes and more automation coverage. >>You know, we recently had an event called Supercloud, where we're going into the next gen kind of cloud, how data and security are all kind of part of this next-gen application. It's not just us. And we had a panel that was titled The Innovator's Dilemma, to kind of talk about some of the challenges. And one of the panelists said, it's not the innovator's dilemma, it's the integrator's dilemma. And you mentioned that earlier, and I think this is a key point right now: integration is so critical, not having the data and putting pieces together. Now open source is becoming a composability market, and I think having things snap together and work well, it's a platform system conversation, not a tool conversation. So I really wanna get into where the OCSF kind of intersects with this area people are working on. It's not just solution architects or cloud native SREs, especially where DevSecOps is. So this intersection is critical. How does OCSF integrate into that, making the data available to make machine learning and automation smarter and more relevant? >>Right, right. Well look, I mean, I think that's a fantastic question, because, you know, we use buzzwords like machine learning and AI all the time. And you know, I know they're all over the place here at re:Invent, and there's so much promise and hope out there around these technologies and these innovations. However, machine learning and AI is only as effective as the data is clean and normalized. And we will not realize the promise of these technologies for outcomes in resilience unless we have better ways to normalize data upstream and better ways to integrate that data to the downstream tools where detection and response is happening. And so OCSF was really about the industry coming together and saying, this is no longer the job of our customers. We are going to create a unified schema that represents an event that we will all bite down on. Even some of us are competitors, you know, but that no longer matters, because the point is, how do we take this burden off of our customers, and how do we make the industry safer together? And so 15 initial members came together, along with AWS and Splunk, to start to create that initial schema and standardize it. And if you've ever worked with a bunch of technical, grumpy security people, it's kind of hard to drive consensus around just about anything. But I'm really happy to see how quickly this organization has come together, has open sourced the schema, and, just as you said, I think this unlocks the potential for real innovation that's gonna be required to keep up with the bad guys, but right now is getting stymied and held back by the lack of normalization and the lack of integration. >>I've always said Splunk eats data for breakfast, lunch, and dinner and turns it into insights. And I think you bring up the silo thing. What's interesting is the cross-company sharing; I think this hits the point on the head, so I see this as a valuable opportunity for the industry.
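To make the normalization Patrick describes concrete, here is a sketch of mapping a vendor-specific log record into a single shared event shape. The field names and numeric IDs approximate the published OCSF Authentication class (see schema.ocsf.io for the authoritative schema); the raw record is invented, and this is illustrative rather than a complete, validated mapping.

```python
# Sketch: normalize a hypothetical vendor log into an OCSF-style event,
# so downstream detection tools don't each need a custom parser.
import json

raw = {  # invented vendor-specific record
    "evt": "login_failed",
    "user": "jdoe",
    "src": "198.51.100.7",
    "ts": 1669900000,
}

ocsf_event = {
    "class_uid": 3002,            # Authentication class (per the published schema)
    "category_uid": 3,            # Identity & Access Management
    "activity_id": 1,             # Logon
    "status_id": 2,               # Failure
    "severity_id": 1,             # Informational
    "time": raw["ts"] * 1000,     # OCSF timestamps are epoch milliseconds
    "user": {"name": raw["user"]},
    "src_endpoint": {"ip": raw["src"]},
    "metadata": {"version": "1.0.0", "product": {"vendor_name": "ExampleVendor"}},
}

print(json.dumps(ocsf_event, indent=2))
```

Once every producer emits that shape, a detection rule or ML feature pipeline can be written once against the schema instead of once per vendor.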
What's the traction on that? Because, you know, to succeed it does take a village, it takes a community of security practitioners and architects and developers to coalesce around this de facto movement. Has the uptake been good? How's traction? Can you share your thoughts on how this is translating across companies? >>Yeah, absolutely. I mean, look, cybersecurity has a long track record of standards development. There have been some fantastic standards recently, things like STIX and TAXII for threat intelligence. There's been the MITRE ATT&CK framework coming out of MITRE, and the adoption, the traction that we've seen with ATT&CK in particular has been amazing to watch, how it has roared onto the scene in the last couple of years and become table stakes for how you do security operations and incident response. And, you know, I think with OCSF we're going to see something similar here, but we are in literally the first innings of this. So right now we're architecting it into every part of our back-end systems here at Splunk, and I know our collaborators at AWS and elsewhere are doing it too. >>And so I think it starts with taking this standard, now that it exists in schema format and there are, you know, Confluence pages and Jira tickets around it, and building it into the code of the collaborators that have been leading the way on this. And it's not going to happen overnight, but I think in the coming quarters you'll start to see this schema be the standard across the leaders in this space, companies like Splunk and AWS and others who are leading the way. And often that's what helps drive adoption of a standard: if you can get the big dogs, so to speak, to embrace it. And there's no bigger one than AWS, and I think there's no more important one than Splunk in the cybersecurity space. And so as we adopt this, we hope others will follow. And like I said, we've got over 50 organizations contributing to it today, so I think we're off to a running start. >>You know, it's interesting, choking innovation, or having things get slowed down, has really been a problem. We've seen successes recently over the past few years. Kubernetes has really unlocked and accelerated the cloud native world of runtimes with containers, by getting the consensus of the community to say, hey, if we just do this, it gets better. I think this is really compelling with OCSF, because if people can come together around this and get unified, along with all the other official standards, things can go highly accelerated. So I think it looks really good, it's a great initiative, and I really appreciate your insight on that. Now, on your relationship with Amazon: it's not just a partnership, it's a strategic collaboration. Could you share that relationship dynamic? How did it start, how's it going, what's strategic about it? Share with the audience the relationship between Splunk and AWS on this important OCSF initiative. >>Look, I mean, I think this year marks the 10-year anniversary of Splunk and AWS collaborating in a variety of different ways.
I think our companies have a fantastic and long-standing relationship, and we've partnered on a number of really important projects together that bring value, obviously, to our individual companies, but also to our shared customers. When I think about some of the most important customers at Splunk that I spend a significant amount of time with, I know how many of those are AWS customers as well, and I know how important AWS is to them. So I think it's a collaboration that is rooted in a respect for each other's technologies and innovation, but also in a recognition that our shared customers want to see us work better together over time. And it's not two companies that decided in a back room that they should work together; it's actually our customers that are pushing us. And I think we're both very customer-centric organizations, and that has helped us be better collaborators and better partners, because we're working backwards from our customers. >>As security becomes a physical and software approach, we've seen the trend where even Steven Schmidt at Amazon Web Services is now the CSO, not the CISO anymore. I asked him why, and he says, well, security's also physical stuff too. So the whole lens has expanded. You mentioned supply chain, physical and digital; this is an important inflection point. Can you summarize in your mind why the Open Cybersecurity Schema Framework is important? I know the unification, but beyond that, why is this so important? Why should people pay attention to this? >>You know, if you'll let me be just a little abstract and meta for a second: I think what's really meaningful at the highest level about the OCSF initiative, and this goes beyond the tactical value it will provide to organizations and to customers in terms of making them safer over the coming years and decades, is that it's really one of the first times that you've seen the industry come together and say, we've got a problem we need to solve that doesn't really have anything to do with our own economics. Our customers are hurting. And yeah, some of us may be competitors; we've got different cloud service providers participating in this along with AWS, and different cybersecurity solution providers participating in this along with Splunk. >>But folks have come together and said, we can actually solve this problem if we're able to put aside our competitive differences in the markets and approach it from the perspective of what's best for information security as a whole. And I think that's what I'm most proud of, and what I hope we can do more of in other places in this industry, because I think that kind of collaboration from real market leaders can actually change markets. It can change the trend lines in terms of how we are keeping up with the bad guys, and I'd like to see a lot more of that. >>And we're seeing a lot more new kinds of things emerging in the cloud, this next-generation architecture, and outcomes are happening. I think it's interesting, you know, we always talk about sustainability, supply chain sustainability, about making the earth a better place. But you're hitting on this meta point about businesses being under threat of going under.
I mean, we want businesses to be sustainable too, not just, you know, the environment. If a business goes out of business, the threats here can be catastrophic for companies. There is a community responsibility to protect businesses so they can sustain and stay producing. This is a real key point. >>Yeah, yeah. I mean, look, I think one of the things that we complain a lot about in cybersecurity is the lack of talent, the talent shortage in cybersecurity. And every year we whack ourselves over the head about how hard it is to bring people into this industry. And it's true. But one of the things that I think we forget, John, is how important mission is to so many people, in what they do for a living and how they work. And I think one of the things that cybersecurity, and information security in general, has been strongest in for decades is this sense of mission. People work in this industry not because it's always the most lucrative, but because it really drives a sense of safety and security in the enterprises, and in the fabric of the economy that we use every day to go through our lives. And when I think about the Splunk customers and AWS customers, I think about the different products and tools that power my life, and we need to secure them. And sometimes that means coming to work every day at that company and doing your job, and sometimes that means working with others better, faster, and stronger to help drive the level of maturity and security that this industry needs. >>It's a human opportunity, a human problem and challenge. That's a whole other segment: the role of talent, and humans and machines, at scale. Patrick, thanks so much for sharing the information and the insight on the Open Cybersecurity Schema Framework, what it means and why it's important. Thanks for sharing on theCube, really appreciate it. >>Thanks for having me, John. >>Okay, this is AWS re:Invent 2022 coverage here on theCube. I'm John Furrier, you're the host. Thanks for watching.

Published Date : Nov 30 2022

Autonomous Log Monitoring


 

>> Sue: Hi everybody, thank you for joining us today for the virtual Vertica BDC 2020. Today's breakout session is entitled "Autonomous Monitoring Using Machine Learning". My name is Sue LeClaire, director of marketing at Vertica, and I'll be your host for this session. Joining me is Larry Lancaster, founder and CTO at Zebrium. Before we begin, I encourage you to submit questions or comments during the virtual session. You don't have to wait, just type your question or comment in the question box below the slide and click submit. There will be a Q&A session at the end of the presentation, and we'll answer as many questions as we're able to during that time. Any questions that we don't address, we'll do our best to answer offline. Alternatively, you can also visit the Vertica forums to post your questions after the session. Our engineering team is planning to join the forums to keep the conversation going. Also, just a reminder that you can maximize your screen by clicking the double arrow button in the lower right corner of the slides. And yes, this virtual session is being recorded and will be available for you to view on demand later this week. We'll send you a notification as soon as it's ready. So, let's get started. Larry, over to you. >> Larry: Hey, thanks so much. So hi, my name's Larry Lancaster, and I'm here to talk to you today about something whose time I think has come, and that's autonomous monitoring. So, with that, let's get into it. So, machine data is my life. I know that's a sad life, but it's true. I've spent most of my career taking telemetry data from products, either products in the field, as we used to say, or, in today's terms, products that have been deployed, bringing that data back, like log files and stats, and then building stuff on top of it: tools to run the business, or services to sell back to users and customers. And after doing that a few times, it got to the point where I was really sick of building the same kind of thing from scratch every time, so I figured, why not go start a company and do it so that we don't have to do it manually ever again. So, it's interesting to note, I've put a little sentence here saying "companies where I got to use Vertica". I've actually been working with Vertica for a long time now, pretty much since they came out of alpha, and I've really been enjoying their technology ever since. So, our vision is basically that I want a system that will characterize incidents before I notice. An incident is what we used to call a support case or a ticket in IT, or a support case in support. Nowadays, you may have a DevOps team, or a set of SREs who are monitoring a production deployment, and they'll call it an incident. So I'm looking for something that will notice and characterize an incident before I notice and have to go digging into log files and stats to figure out what happened. And that's a pretty heady goal. So I'm going to talk a little bit today about how we do that. So, let's look at logs in particular. Monitoring is the umbrella term that we use to talk about how we monitor systems in the field that we've shipped, or how we monitor production deployments in a more modern stack. And there are log monitoring tools, but they have a number of drawbacks.
For one thing, they're kind of slow, in the sense that if something breaks, chances are really good that you're going to end up in a log file; if it's a new issue, an unknown unknown problem, you're almost certainly going to end up in a log file. So the problem becomes that you're searching around looking for the root cause of the incident, and that's time-consuming. They're also fragile, and this is largely because log data is completely unstructured: there's no formal grammar for a log file. So you have this situation where, if I write a parser today, that parser is going to execute some automation; it's going to open or update a ticket, maybe restart a service, or whatever it is that I want to happen. Then, later, someone upstream who's writing the code that produces that log message might do something really useful for users, like fix a spelling mistake in that log message. And the next thing you know, all the automation breaks. So it's a very fragile source for automation. And finally, because of that, people will set alerts on, "Oh, well tell me how many thousands of errors are happening every hour," or some horrible metric like that, and that becomes the only visibility you have into the data. So because of all this, it's a very human-driven, slow, fragile process. So basically, we've set out to up-level that a bit. I touched on this already: the truth is, if you do have an incident, you're going to end up in log files to do root cause. It's almost always the case. And so you have to wonder, if that's the case, why do most people use metrics only for monitoring? The reason is related to the problems I just described: metrics are already structured. For logs, you've got this mess of stuff, so you only want to dig in there when you absolutely have to. But ironically, it's where a lot of the information that you need actually is. So we have a model today, and this model used to work pretty well. That model is called "index and search", and it basically means you treat log files like they're text documents: you index them, and when there's some issue you have to drill into, you go searching. So let's look at that model. Twenty years ago, we had a shrink-wrap software delivery model. You had an incident. With that incident, maybe you had one customer, a monolithic application, and a handful of log files. So it was perfectly natural; in fact, usually you could just open the log file in vi and search that way, or if there were a lot of them, you could index them and search them that way. And that all worked very well, because the developer or the support engineer had to be an expert in those few things, in those few log files, and understand what they meant. But today, everything has changed completely. We live in a software-as-a-service world. What that means is, for a given incident, first of all you're going to be affecting thousands of users. You're going to have, potentially, 100 services deployed in your environment. You're going to have 1,000 log streams to sift through. And yet, you're still stuck in the situation where, to find out what's the matter, you have to search through the log files. So this is the kind of unacceptable position we're in today. For us, the future will not be index and search, and that's simply because it cannot scale.
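The fragility point is worth pinning down. Here is a minimal sketch of the failure mode described above; the message text and pattern are invented for illustration, but the mechanics are exactly what happens with any automation keyed to exact log wording:

```python
# A sketch of the fragility being described: automation keyed to the
# exact wording of a log message breaks as soon as that wording changes.
import re

# The parser was written against the message as it appeared, typo and all.
ALERT_PATTERN = re.compile(r"ERROR: databse connection lost")

def should_alert(line: str) -> bool:
    return bool(ALERT_PATTERN.search(line))

print(should_alert("2020-03-30 12:00:01 ERROR: databse connection lost"))
# -> True: the automation fires as intended.

# A developer helpfully fixes the spelling upstream, and the alert goes silent:
print(should_alert("2020-03-30 12:00:01 ERROR: database connection lost"))
# -> False: nothing fires, and nobody is told the parser is now dead.
```

The failure is silent, which is what makes regex-driven log automation so brittle in practice.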
And the reason I say that it can't scale is that it's all bottlenecked by a person and their eyeball. You continue to drive up the amount of data that has to be sifted through and the complexity of the stack that has to be understood, and you still, at the end of the day, for MTTR purposes, have the same bottleneck, which is the eyeball. So this model, I believe, is fundamentally broken. And that's why I believe that in five years, most monitoring of unknown unknown problems is going to be done autonomously, and those issues will be characterized autonomously, because there's no other way it can happen. So now I'm going to talk a little bit about autonomous monitoring itself. Autonomous monitoring basically means this: imagine a monitoring platform that you watch; maybe you watch the alerts coming from it, or, more importantly, you watch the dashboards and try to see if something looks weird. Autonomous monitoring is the notion that the platform should do the watching for you, only let you know when something is going wrong, and give you a window into what happened. So if you look at this example I have on screen, just to take it really slow and absorb the concept: here in this example, we've stopped the database, and as a result, down below, you can see there was a bunch of fallout. This is an Atlassian stack, so you can imagine you've got a Postgres database, and then you've got Bitbucket, and Confluence, and Jira, and these various other components that need the database operating in order to function. So what this is doing is calling out, "Hey, the root cause is the database stopped, and here are the symptoms." Now, you might be wondering, so what? I could go write a script to do this sort of thing. Here's what's interesting about this particular example, and I'll show a couple more examples that are a little more involved. In the software that came up with this incident, opened it, and put this root cause and these symptoms in there, there's no code that knows anything about timestamp formats, severities, Atlassian, Postgres, databases, Bitbucket, or Confluence; there are no regexes that look for "starting", "stopped", "RDBMS", "swallowed exception", and so on and so forth. So you might wonder how it's possible, then, that something which is completely ignorant of the stack could come up with this description, which is exactly what a human would have had to produce to figure out what happened. And I'm going to get into how we do that, but that's what autonomous monitoring is about: getting into a set of telemetry from a stack with no prior information, and understanding when something breaks. And I can give you the punchline right now: there are fundamental ways that software behaves when it's breaking. By looking at hundreds of data sets containing incidents that people have generously allowed us to use, we've been able to characterize that behavior and generalize it so it applies to any new data set and stack. So here's an interesting one right here. There's a fellow, David Gill; he's just a genius in the monitoring space, and he's been working with us for the last couple of months. So he said, "You know what I'm going to do? I'm going to run some chaos experiments." For those of you who don't know what chaos engineering is, here's the idea.
Basically, let's say I'm running a Kubernetes cluster. What I'll do is use a chaos injection tool, something like Litmus, and it will inject issues, breaking things in my application randomly to see if my monitoring picks it up. That's what chaos engineering is built around: generating lots of random problems and seeing how the stack responds. So in this particular case, one of the tests David ran through Litmus did a pod delete. That's going to take out some containers that are part of the service layer, and then you'll see all kinds of things break. And what you're seeing here is interesting; this is why I like to use this example, because it's actually kind of eye-opening. The chaos tool itself generates logs. And of course, through Kubernetes, all the log file locations on the host, and the container logs, are known, and those are all pulled back to us automatically. So one of the log files we have is actually from the chaos tool that's doing the breaking, right? And what the tool said here, when it went to determine the root cause, was that it noticed there was this process emitting these messages: initializing deletion list, selecting a pod to kill, blah blah blah. It's saying that the root cause is the chaos test. And it's absolutely right; that is the root cause. But usually chaos tests don't get picked up themselves; you're supposed to just pick up the symptoms. This is what happens when you're able to tease out root cause from symptoms autonomously: you end up getting a much more meaningful answer. So here's another example. Essentially, we collect the log files, but we also have a Prometheus scraper. If you export Prometheus metrics, we'll scrape and collect those as well, and we'll use them for our autonomous monitoring too. What you're seeing here is an issue where, I believe, we ran something out of disk space. So it opened an incident, but what's also interesting is that it pulled in that metric, to say that the spike in the metric was a symptom of running out of space. So again, there's nothing that knows anything about file system usage, memory, or CPU, none of that; there's no hard-coded logic anywhere to explain any of this. The concept of autonomous monitoring is looking at a stack the way a human being would. If you can imagine how you would walk in and monitor something, how you would think about it: you'd go looking around for rare things, things that are not normal, and you'd look for indicators of breakage, and you'd ask, do those seem to be correlated in some dimension? That is how the system works. So, as I mentioned a moment ago, metrics really do complete the picture for us. We end up with a one-stop shop for incident root cause. How does that work? Well, we ingest and we structure the log files. If we're getting the logs, we'll ingest them and structure them, and I'm going to show in a moment what that structure looks like and how it goes into the database. And then of course we ingest and structure the Prometheus metrics. But here, "structure" really should have an asterisk next to it, because metrics are mostly structured already. They have names.
If you have your own scraper, as opposed to going into the Prometheus time series database and pulling metrics from there, you can keep a lot more metadata about those metrics from the exporter's perspective, so we keep all of that too. Then we do our anomaly detection on both of those sets of data, we cross-correlate the metric and log anomalies, and then we create incidents. So that's, at a high level, what's happening, without any sort of stack-specific logic built in. So we had some exciting recent validation. MayaData's a pretty big player in the Kubernetes space. Essentially, they do Kubernetes as a managed service; they have tens of thousands of customers whose Kubernetes clusters they manage, and they're also involved both in the OpenEBS project and in the Litmus project I mentioned a moment ago, their tool for chaos engineering. So they're a pretty big player in the Kubernetes space. Essentially, they said, "Okay, let's see if this is real." What they did was set up our collectors, which took three minutes in Kubernetes, and then, using Litmus, they reproduced eight incidents that their actual, real-world customers had hit, trying to pick the ones that had been the hardest to root-cause at the time. And we put up a correct root cause indicator on 100% of these incidents, with no training, configuration, or metadata required. So this is what autonomous monitoring is all about. So now I'm going to talk a little bit about how it works. Like I said, there's no information included or required about the stack. So imagine a log file, for example. Commonly, over on the left-hand side of every line, there will be some sort of prefix. What I mean by that is you'll see a timestamp, or a severity, and maybe there's a PID, and maybe there's a function name, and maybe there's some other stuff there. Basically, it's the common set of data elements for a large portion of the lines in a given log file, while, of course, the contents change. Now, if you look at a typical log manager today, they'll talk about connectors. What a connector means is that a given application generates a certain prefix format in its log: what's the format of the timestamp, and what else is in the prefix. And this lets the tool pick it up. And so if you have an app that doesn't have a connector, you're out of luck. Well, what we do is learn those prefixes dynamically with machine learning. You do not have to have a connector. And what that means is that if you come in with your own application, the system will just work for it from day one. You don't have to have connectors, you don't have to describe the prefix format. That's so yesterday, right? So really what we want to be doing is up-leveling what the system does to the point where it works like a human would. You look at a log line, you know what's a timestamp. You know what's a PID. You know what's a function name. You know where the prefix ends and where the variable parts begin. You know what's a parameter over there in the variable parts. And sometimes you may need to see a couple of examples to know what was a variable, but you'll figure it out as quickly as possible, and that's exactly how the system goes about it. As a result, we embrace free-text logs, right?
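Zebrium's actual prefix learner is ML-based and is not described in detail here, but a toy heuristic conveys the idea: recognize common field shapes token by token, and the prefix ends at the first token that doesn't look like one. The field shapes and the sample line below are invented for illustration:

```python
# Toy sketch (not Zebrium's algorithm): guess where the fixed prefix of
# a log line ends by recognizing common field shapes token by token.
import re

FIELD_SHAPES = [
    re.compile(r"^\d{4}-\d{2}-\d{2}$"),                    # date
    re.compile(r"^\d{2}:\d{2}:\d{2}(\.\d+)?$"),            # time
    re.compile(r"^(TRACE|DEBUG|INFO|WARN|ERROR|FATAL)$"),  # severity
    re.compile(r"^\[\d+\]$"),                              # PID in brackets
    re.compile(r"^[\w.]+\(\)$"),                           # function name
]

def split_prefix(line: str):
    tokens = line.split()
    prefix = []
    for i, tok in enumerate(tokens):
        if any(shape.match(tok) for shape in FIELD_SHAPES):
            prefix.append(tok)
        else:
            # Everything after the last recognized field is the
            # free-text, variable part of the message.
            return prefix, " ".join(tokens[i:])
    return prefix, ""

print(split_prefix("2020-03-30 18:02:11.441 ERROR [2158] scrubbers.c() "
                   "checkpoint for memory scrubbers took 431 ms"))
```

A real learner would generalize across many lines of the same stream rather than hard-coding shapes, which is what removes the need for per-application connectors.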
If you look at a typical stack, most of the logs generated are free-text. Even structured logging typically has a message attribute, and inside of it is the free-text message. For us, that's not a bad thing; that's okay. The purpose of a log is to inform people, so there's no need to rewrite the whole logging stack just because you want a machine to handle it. The machine should figure it out for itself. So, you give us the logs and we'll figure out the grammar, not only for the prefix but also for the variable message part. I already went into this, but there's more that's usually required to configure a log manager with alerts. You have to give it keywords. You have to give it application behaviors. You have to feed it prior knowledge. And of course the problem with all of that is that the most important events you'll ever see in a log file are the rarest; those are the ones that are one in a billion. So you may not know in advance what the right keyword is to pick up the next breakage. So we don't want that information from you; we'll figure it out for ourselves. As the data comes in, essentially we parse it and we categorize it, as I've mentioned. And when I say categorize, what I mean is this: if you look at a given log file, you'll notice that some of the lines are basically the same thing. This one will say "X happened five times", and maybe a few lines below it'll say "X happened six times", but that's the same event type. It's just a different instance of the event type, with a different value for one of the parameters. So when I say categorization, I mean figuring out those unique types, and I'll show an example of that next. Anomaly detection, we do on top of that. Anomaly detection on metrics, in a time-series-by-time-series manner with lots of tunables, is a well-understood problem. But we also do it on the event type occurrences. You can think of each event type occurring in time as a point process, and then you can develop statistics and distributions on that, and do anomaly detection on those. Once we have all of that, we have extracted features, essentially, from metrics and from logs. We do pattern recognition on the correlations across different channels of information, so different event types, different log types, different hosts, different containers, and then of course across to the metrics. Based on all of this cross-correlation, we end up with a root cause identification. So that's, at a high level, how it works. What's interesting, from the perspective of this talk particularly, is that incident detection needs relationally structured data. It really does. You need all the instances of a certain event type that you've ever seen to be easily accessible. You need the values of a given parameter to be quickly available, so you can figure out its distribution over time and how often the event type happens. You can run analytical queries against that information so that you can quickly, in real time, do anomaly detection against new data. So here's an example of what this looks like, and this is part of the work that we've done. At the top you see some examples of log lines: a snippet, three lines out of a log file, and you see the one in the middle there that's highlighted with colors, right?
I mean, it's a little messy, but it's not atypical of the log files you'll see pretty much anywhere. So there, you've got a timestamp, and a severity, and a function name. Then you've got some other information, and finally, you have the variable part. That's going to say something like "checkpoint for memory scrubbers", probably written in English just so that the person reading the log file can understand it, and then there are some parameters plugged in. Now, if you look at how we structure that: there are three tables that correspond to the three event types we see above, and we're going to look at the one that corresponds to the line in the middle. In that table, you'll see columns: one for severity, one for function name, for time zone, and so on, plus date and PID. And then, over to the right, in the colored columns, you see the parameters that were pulled out of the variable part of that message. They're put in, they're typed, and they land in integer columns. This is the way structuring needs to work with logs to enable efficient and effective anomaly detection, and as far as I know, we're the first people to do it inline. All right, so let's talk now about Vertica, and why we take those tables and put them in Vertica. Vertica really is an MPP column store, but it's more than that, because nowadays when you say "column store", people think, for example, of Cassandra as a column store, but Cassandra's not a column store in the sense that Vertica is. Vertica was built from the ground up to be the original column store. Back in the C-Store project that Stonebraker was involved in, he said, let's explore what kind of efficiencies we can get out of a real columnar database. And what he found, he and the grad students who went on to start Vertica, was that they could build a database that gives orders of magnitude better query performance for the kinds of analytics I'm talking about here today, with orders of magnitude less data storage underneath. So building on top of machine data, as I mentioned, is hard, because it doesn't have any defined schemas. But once we've structured the data, we can use an RDBMS like Vertica to do the analytics we need to do. I talked a little bit about this already, but if you think about machine data in general, it's perfectly suited to a columnar store. Imagine laying out all the attributes of an event type: there may be, say, three or four function names that occur across all the instances of a given event type. If you were to sort all of those event instances by function name, you would find long, million-row runs of the same function name over and over. So what you have, in general, in machine data, is lots and lots of slowly varying attributes, lots of low-cardinality data that gets almost completely compressed out when you use a real column store. You end up with a massive footprint reduction on disk. And that efficiency propagates through the analytical pipeline, because Vertica does late materialization, which means it tries to carry that data through memory with that same efficiency, right?
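To ground the categorization and per-event-type tables described above, here is a toy sketch, not Zebrium's production code, of masking the variable parameters out of each message so that instances collapse into event types, with the typed parameters kept as row values. The log messages, the integer-only masking, and the z-score rarity check at the end are all simplifications chosen for illustration:

```python
# Toy sketch of event-type categorization: mask the variable parameters
# out of each message, so "X happened 5 times" and "X happened 6 times"
# collapse into one event type with a typed parameter column.
import re
from collections import defaultdict
from statistics import mean, stdev

NUMBER = re.compile(r"\b\d+\b")

def categorize(message: str):
    params = [int(m) for m in NUMBER.findall(message)]
    template = NUMBER.sub("<int>", message)  # the event type's "shape"
    return template, params

tables = defaultdict(list)  # event type -> rows of typed parameters
for msg in ["checkpoint for memory scrubbers took 431 ms",
            "checkpoint for memory scrubbers took 9072 ms",
            "accepted connection from port 40222"]:
    template, params = categorize(msg)
    tables[template].append(params)

for template, rows in tables.items():
    # Each rows list is what would land in a per-event-type (columnar)
    # table like the ones described above, parameters as typed columns.
    print(template, rows)

# Crude stand-in for point-process anomaly detection: flag a time window
# whose occurrence count for an event type sits far from its history.
def window_anomaly(counts_per_window, new_count, z_threshold=3.0):
    mu, sigma = mean(counts_per_window), stdev(counts_per_window)
    return sigma > 0 and abs(new_count - mu) / sigma > z_threshold

history = [3, 4, 5, 4, 3, 5, 4]
print(window_anomaly(history, 40))  # True: a burst of this event type
```

The real system learns richer templates (not just integers) and uses proper statistics on the event-type point processes, but the shape of the data, one relational table per event type with typed parameter columns, is exactly what makes those queries fast in a column store.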
So the scale-out architecture, of course, is really suitable for petascale workloads. Also, I should point out, I was going to mention it in another slide or two, but we use the Vertica Eon architecture, and we have had no problems scaling that in the cloud. It's a beautiful rewrite of the entire data layer of Vertica. The performance and flexibility of Eon is just unbelievable, and so I've really been enjoying using it. I was skeptical that you could get a real column store to run in the cloud effectively, but I was completely wrong. So finally, I should mention that if you look at column stores, to me, Vertica is the one that has the full SQL support, the ODBC drivers, the ACID compliance, which means I don't need to worry about these things as an application developer. So I'm laying out the reasons that I like to use Vertica. I touched on this already, but essentially what's amazing is that Vertica Eon is basically using S3 as an object store. And of course, there are other offerings, like the one that Vertica does with Pure Storage that doesn't use S3. But what I find amazing is how well the system performs using S3 as an object store, and how they manage to keep an actually consistent database. And they do. We've had issues where we've gone and shut down hosts, or hosts have been shut down on us, and we have to restart the database, and we don't have any consistency issues. It's unbelievable, the work that they've done. Another thing that's great about the way it works is you can use S3 as a shared object store. You can have query nodes querying from that set of files largely independently of the nodes that are writing to them. So you avoid this sort of bottleneck issue where you've got contention over who's writing what, and who's reading what, and so on. I've found the performance using separate subclusters for our UI and for the ingest has been amazing. A couple of other things they have: a lot of in-database machine learning libraries, and there's actually some cool stuff on their GitHub that we've used. One thing that we make a lot of use of is the sequence and time series analytics. For example, in our product, even though we do all of this stuff autonomously, you can also go create alerts for yourself. And one of the kinds of alerts you can do, you can say, "Okay, if this kind of event happens within so much time, and then this kind of event happens, but not this one," then you can be alerted. So you can have these kinds of sequences that you define of events that would indicate a problem, and we use their sequence analytics for that. It gives you really good performance on some of these queries where you're wanting to pull out sequences of events from a fact table. And time series analytics is really useful if you want to do analytics on the metrics and you want to do gap-filling interpolation on that. It's actually really fast, and it's easy to use through SQL. So those are a couple of Vertica extensions that we use. So finally, I would like to encourage everybody, hey, come try us out. You should be up and running in a few minutes if you're using Kubernetes. If not, it's however long it takes you to run an installer. So you can just come to our website, pick it up, and try out autonomous monitoring. And I want to thank everybody for your time. And we can open it up for Q and A.
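The user-defined sequence alert described above ("this event, then that event within so much time, but not this other one") is expressed in Vertica through its sequence analytics SQL, but the underlying logic can be sketched in a few lines of plain Python. This is an illustration of the pattern only, not the product's or Vertica's implementation; the event names and the sixty-second window are invented.

def sequence_alerts(events, a, b, c, window):
    """Yield (a_ts, b_ts) whenever event `a` is followed by event `b`
    within `window` seconds, with no intervening event `c`.
    `events` is an iterable of (timestamp, name) sorted by timestamp."""
    pending = []  # timestamps of a's still waiting for a matching b
    for ts, name in events:
        if name == a:
            pending.append(ts)
        elif name == c:
            pending.clear()  # c cancels any open sequence
        elif name == b:
            pending = [t for t in pending if ts - t <= window]
            if pending:
                yield pending[0], ts
                pending = []

# Hypothetical event stream: the second disk_warning is cancelled by
# auto_recovered before the write_error arrives, so only one alert fires.
events = [(0, "disk_warning"), (30, "write_error"),
          (200, "disk_warning"), (210, "auto_recovered"), (220, "write_error")]
for a_ts, b_ts in sequence_alerts(events, "disk_warning", "write_error",
                                  "auto_recovered", window=60):
    print(f"alert: disk_warning at t={a_ts}s followed by write_error at t={b_ts}s")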

Published Date : Mar 30 2020


Jace Moreno, Microsoft | Enterprise Connect 2019


 

>> Live from Orlando, Florida, it's theCUBE, covering Enterprise Connect 2019. Brought to you by Five9. >> Hi, welcome back to theCUBE's coverage of Enterprise Connect 2019. I'm Lisa Martin with my co-host for the week Stu Miniman, we are in Five9's booth here at this event, excited to welcome to theCUBE for the first time Jace Moreno, Microsoft Teams Developer Platform Lead from Microsoft, Jace, welcome to theCUBE. >> Thank you for having me, it's a pleasure. >> So we're excited that you're here because you are on the main stage tomorrow morning with Lori Wright. But talk to us about Microsoft Teams. You've been with Microsoft for a while now, about 10 months with Teams. Talk to us about this tool for collaboration that companies can use from 10 people in a meeting to 10,000? >> Yeah, you'll hear us tomorrow. The phrase we're coining is an intelligent workplace for everyone, right? And I think for a long time, we've been perceived as an organization that builds tools, a lot of times for the Enterprise Knowledge Worker, and the whole goal is to dispel that. There are multiple people out there, millions of people who are frontline workers, whatever you want to call 'em, the folks that are interfacing with your actual customers. And so we need to make sure that we are developing tools that are for them. But overall as I look at the product and what we've delivered, it's about bringing you one single place to go to for collaboration, right? So that is bringing together your tools, whether or not Microsoft built them, into one experience, and then processes and workflows around them. >> So do you find that in terms of traction that the, like the enterprises and maybe the more senior generations that have been working with Microsoft tools for a long time get it or I mean, 'cause I can imagine there's kind of a cultural gap there with, whether it's a large enterprise like a Microsoft or maybe a smaller organization, there are people in this modern workforce that have very different perspectives, different cultures. How can Teams help to maybe break down some of those barriers and really be a platform for innovation? >> That's a great question. I think we've been battling that cultural, digital clash for a long time to be fair. I think it really comes out with Teams, though. Because it is an entirely different way of working. It's not just chat anymore, right? It's collaboration. It's bringing together all of these experiences and so I think there's a maturity curve for some of our average users to be fair. We're already seeing that curve take off as we speak. But what I often give advice to customers and to partners, I call 'em superpowers but you got to find that one reason that really gets people over the line because we get asked all the time, "Hey, everybody loves it "but we want to get 'em to use this as the one tool, "the one place that I go so I know that everything "I send in our organization goes to that single place. "How do I deliver that?" And I go, "Just give 'em a reason." That's what it comes down to honestly and I genuinely see that with organizations. We're seeing incredible examples of organizations leveraging partner integrations where it's bringing out their culture rather than them trying to evolve it, if that makes sense. >> So Jace, I'm glad you brought up the partners there and when I hear developer platform, all right, bring us inside a little bit. Everything API compatible, when people think about developers, there have been developers in the Microsoft space. 
.NET's got its great ecosystem there but what is it like to be in the Microsoft ecosystem here in 2019? >> It's a fun place to be. I will say, I've even stopped using the term developer when I say platform though to be fair because, and the reason I bring this up, what we've actually built allows a lot of IT professionals to build as well on Teams. PowerShell scripts, as an example, are a huge opportunity for customers. Frankly, I've never written a line of code in my life and I built a bot for Teams. So it's pretty amazing what we're enabling but when we look at a lot of what partners are building, it's where are they seeing opportunities in the marketplace? So Five9 as an example with customer care, great opportunity there where we can extend the capabilities that a contact center as an example might need inside of Teams if they want to explore that. >> I love, I actually got to interview Jeffrey Snover at Microsoft Ignite last year, who of course created PowerShell, and he was more excited now than he was when it was created quite a long time ago. So when I look around this platform, tell us some of the partners that you're working with. I saw some of the early notes that things like Zoom, and gosh you know, talk about some of the partners you're working with. >> So one thing I'll touch on too that I don't know if I fully answered in your last question is what I'm hearing from our partners who have built on Teams, and I'll touch on which ones in a second. We call it the extensibility of our platform but quite literally what it means is we are allowing partners' solutions to render in different ways inside of Teams, and what we're hearing from partners, I had a conversation with Disco the other day as an example, so they built a, I'm not doing them a service by explaining it like this but it's a kudos bot essentially that they've delivered and it's actually bringing out that culture. But they told us the beauty of the Teams platform is that they don't only show up as a bot to the end users; we've offered them other ways to interact with the end user, so whatever's more comfortable for me inside of Teams, and my interaction with that solution, it's easy for them to have that correspondence. But in terms of top partnerships that we're looking at, we've had some incredible integrations built recently. ADP just launched theirs pretty recently to check payroll and build sort of a time off process flow if you will, with the bot. Polly's been a great one from day one. We have integrations with partners like Atlassian for a DevOps tool, so Jira and Confluence Cloud, Trello for project management, I could go on forever but we have over 250 in the store right now and that is growing very rapidly. This is what we spend most of our time on. So the initial focus was what are the tools out there that most people need to get their job done every day? That's where we started, and now we're really evolving that and we're seeing some incredible things being built as we speak. >> So Jace, being at Enterprise Connect, this is an event that's been around for a long time and has evolved quite considerably as Enterprise Communication and Collaboration has, but one of the things I was reading when doing research to prep for the show is that the customer experience is table stakes. It's make or break. 
But one of the recommendations is that when a company is buying software and services, whether it's within a business unit or at the corporate level, the customer has to have a seat at the table when the decision is being made. Are we implementing tools and technologies and services that are actually going to delight our customers, not just retain them but drive customer lifetime value? In your role, where are some of Microsoft's customers in terms of helping to drive the evolution of the platform? >> That's a great question, I'm really glad you asked it. It's been fun in my role because what we're seeing is a lot of customers who have taken the platform and built integrations to their tools. So think outside of productivity for a second, think IT support, think employee resources, they're building those integrations and they're leveraging those as a way to drive that organic broad adoption inside of their companies. Because they don't want IT to force it anymore, they want people to love it like you said and naturally take to it, and so I keep coming back to that, I call it superpowers, again it might be a ridiculous term but it's those superpowers you deliver to your people that allow them to get their work done better, get them to love that product and to your point, not want to ever leave it 'cause you can get a majority of your work done every day in that place. So we've seen some really cool ones. A couple examples that we just shared recently, Dentsu's a great one, so they have a three person Change Management Team for a 50,000 person global organization, okay? Three people, got to scale that right? Can't do one-on-one training, and so they initially took Teams and integrated it into their current website, intranet, internal portals to essentially create a chatbot that helped people learn how to use the technology they delivered. Now they've taken that one step further because they saw such great success and they're going to different centers of excellence inside the organization saying, "Hey, do you want to get on board? Because we'd like to make this the bot that you interact with as an employee of Dentsu." So it's just incredible but it's driving again that adoption they're seeing, leveraging some of the simple stuff that we have on the platform. Does that answer your question? >> Yes very well, thank you. >> So when I look at some of the macro trends about communication, where I've heard some great success stories is internally, just being able to collaborate with some of my internal people, Teams has done really well. Collaborating between various organizations still seems to have more challenges. Can you just bring us a little bit of insight as to why I hear great success stories there and not negatives on Teams, but it's still challenging if I have multiple organizations? We all understand even just doing a conference call or heck, a video call between lots of different companies still in 2019's a challenge. >> Yeah look, I mean I'll give you a couple answers here. We are young, I mean it's two years old as a product. So the momentum's been incredible but I'm not going to sit here and tell you we don't have things to work on, we absolutely do. What I will say though, take Enterprise Connect for example, we actually have a Teams team for Enterprise Connect. 
There's, I actually checked this morning, there's 181 people in that team and a majority of them are guests, so external users: vendors that we work with to help us plan this conference and bring it all together, and a lot of that has been seamless. Yes, there are little things here or there that we're working on but in that respect it's been pretty incredible. I constantly am using it with external parties and I find though, I don't necessarily know if the challenge is in the interface itself, I think it ends up becoming this opportunity to really educate people on this new way of working. And so going back to our partners again, we're sitting here with Five9, but that becomes critical. How do we work better with these organizations who we have mutual customers with to create that experience together, right? And bring again, superpowers to the users. >> What about security as a superpower? Where is that in these conversations? >> I mean everything we build has a layer of security. I actually just got out of a meeting, you'll see, we've got an announcement around this tomorrow. So I can't blow it, unfortunately, but the foundation and core of everything that we do will be security focused, absolutely. >> All right, so I went to the Microsoft show last year, AI is also one of those things besides security. AI's infused everywhere, so where does AI fit into the whole Teams story? >> The way we see it, I look at this in a couple angles. So most people get onto Teams and it's kind of chat and collab at first, right? Not always the case but a lot of organizations do that. Then it goes to meetings, and then, I think, and you'll see a lot of this cool stuff tomorrow, we're doing it on AI, it's how do you then proactively start delivering better experiences to your end users? So one of the things that we're looking at right now is taking data and sending it, as an example, to your IT admins to give them insight into how users are leveraging Teams. How do you improve that experience for them? So again, you drive that natural broad adoption but kind of assist them a little bit along the way. So tons of great examples around the board. I'm not sure if that fully answers your question but just the sky's the limit. I think of some other things we're looking at though, you'll see a lot coming in the form of transcription, translation, those services that really create inclusiveness which is a big focus for us. Again back to that point earlier, it's the intelligent workplace for everyone. We want to be able to provide services with our partnerships that can really reach anybody in the business world, right? And even in the consumer world in some sense. >> Well Jace, thanks so much for joining Stu and me on the program this afternoon. We're looking forward to hearing your keynote in the morning and sharing with us some of the excitement and things that are happening and announcements we're going to hear from Microsoft Teams tomorrow. >> My pleasure. Thank you so much for having me, appreciate it. >> Our pleasure. For Stu Miniman, I'm Lisa Martin. You're watching theCUBE's coverage of day one, Enterprise Connect 2019 from Orlando. Stick around, Stu and I will be right back with our next guest. (upbeat electronic jingle)

Published Date : Mar 19 2019


Michael Lauricella, Atlassian & Brooke Gravitt, Forty8Fifty | Splunk .conf2017


 

>> Announcer: Live, from Washington DC, it's the CUBE. Covering .conf2017. Brought to you by Splunk. >> And welcome back here on theCUBE. John Walls and Dave Vellante, we're in Washington DC for .conf2017, Splunk's annual get together, coming up to the nation's capital for the first time. This is the eighth year for the show, with 7,000 plus attendees, 65 countries, quite a wide menu of activities going on here. We'll get into that a little bit later on. We're joined now by a couple of gentlemen, Michael Arahuleta who is the Vice President of Engineering at Atlassian, Michael, thank you for being with us. >> Thank you, actually it's Director of Business Development. >> John: Oh, Director of Business Development, my apologies. >> He's doin' a great job >> My apologies. >> I don't need that. >> Oh very good. And Brooke Gravitt, who I believe is the VP of Engineering, >> There ya go. >> And the Chief Software Architect at Forty8Fifty. >> Yep, how ya doin'? >> No promotions or job assignments, I've gotcha on the right path there? >> Yeah, yeah. >> Good deal, alright. Thank you for joining us, both of you. First off, let's just set the stage a little bit for the folks watching at home, tell us a little bit about your company, descriptions, core competencies, and your responsibilities, and then we'll get into the intersection, of why the two of you are here. So Michael, why don't you lead off. >> So Atlassian, we, in our simplest form, right, we make team collaboration software. So our goal as a company is to really help make the tools that companies use to collaborate and communicate internally. Our primary focus, and kind of our bread and butter, has always been making the tools that software companies use to turn around and make their software. Which is a great position to be in, and increasingly we're seeing ourselves expand into providing team collaboration software products like Jira, Confluence, Bitbucket, and now, the new introduction of a product called Stride, which is a real time team collaboration product, not just for technical teams, but we're really seeing a great opportunity to empower all teams 'cause every team in every organization needs a better way to communicate and get things done. That's really what Atlassian's core focus is all about. >> John: Gotcha. Brooke, if you would. >> Yeah, so Forty8Fifty Labs, we're the software development and DevOps focused subsidiary of Veristor Systems based out of Atlanta. We focus primarily on four key partners, which would be Atlassian, Splunk, QA Symphony, and Red Hat, and primarily, we do integrations and extensibility around products that these guys provide as well as hosting, training, and consulting on DevOps and Atlassian products. >> So the ideal state in your worlds is you've got true DevOps, Agile, infrastructure as code, I'll throw all the buzzwords out at ya, but essentially you're not tossing code from the development team into the operations team who then hacks the code, messes it up, points fingers, all that stuff is in part anyway what you're about eliminating, >> Right. >> And getting to value sooner. Okay, so that's the sort of end state Nirvana. Many companies struggle with that obviously. You've got, what, Gartner has this term, bimodal IT, which everybody, you know, everybody criticizes but it's sort of true. You've got hybrid clouds, you've got, you know, different skillsets, what is the state of Agile development, DevOps, where are we in terms of organizational maturity? Wonder if you guys could comment. 
>> I'll start with that right, I think, even though we've been talking about DevOps for a while, and companies like Atlassian and Splunk live and breathe it, I still think when you look at the vast majority of enterprises, we're still at the early stages of effectively implementing this. I think we're still really bringing the right definition to what DevOps is; we kind of go through those cycles where a buzzword gets hot, everybody gloms onto it, but no one really knows what it means. I think we're really getting into truly understanding what DevOps means. I know we've been working hard at Atlassian to really define that strong ecosystem of partners. We really see ourselves as kind of in the middle of that DevOps lifecycle, and we integrate with so many great solutions around monitoring and logging, testing, other operational software, and things of that nature to really complete that DevOps lifecycle. I think we're really just now finally seeing it come together and finally starting to see even larger organizations, very large Fortune 100 companies, talk about how they know they've got to get away from Waterfall, they've got to embrace Agile, and they've got to get to a true DevOps culture, and I think that's where Atlassian is very strong, devs have loved us for a long time. Operations teams are really learning to embrace Atlassian as well. I think we're in a really great position to be at that mesh of what truly is DevOps as it really emerges in the next couple years. >> Brooke, people come to Forty8Fifty, and they say, alright, teach me how to fish in the DevOps world, is that right? >> Yeah, absolutely. I mean, one of the challenges that you have in large enterprises is bringing these two groups of people together, and one of the easy ways is to go out and buy a tool. I think the harder and more difficult challenge that they face is the culture change that's required to really have a successful DevOps transformation. So we do a little bit of consulting in that area with workshops with folks like Gene Kim, Gary Gruver, Jez Humble that we bring in, who are sort of industry icons for that sort of DevOps transformation. To assist, based on our experiences ourselves in previous companies or engagements with customers where we've been successful. >> So the cloud native guys, people who are doing predominantly cloud, or smaller companies, tech companies presumably, have glommed onto this, what about the sort of the Fortune 1000, the Global 2000, what are we seeing in terms of their adoption, I mean, you mentioned Waterfall before, you talk to some application development heads who will say, well listen, we got to protect some of our Waterfall, because it's appropriate. What are you seeing in the sort of traditional enterprise? >> We see the traditional enterprise really embracing Agile in a very aggressive way. Obviously they wouldn't be working with Atlassian if they weren't, so our view is probably a little bit tilted. Companies that engage with us are the ones more open to that. 
But far and away, what we're definitely seeing in the reports that we get from our partners like Forty8Fifty Labs is that increasingly larger and larger companies are really aggressively looking to embrace Agile and bring these methodologies in. And the other simple truth is, with the way Atlassian sells, the way we sell our products online, we have always sort of grown bottoms-up inside a lot of these large organizations. So where officially IT may still be doing something else, there are always countless smaller teams within the organization that have embraced Atlassian and are using Atlassian products, and then, a year down the road, or two years down the road, we tend to emerge as the de facto solution for the organization after we kind of spread through all these different groups within the company. It's a great growth strategy, a lot are trying to replicate it. >> Okay, what's the Splunk angle? What do you guys do with Splunk, and how does it affect your business? >> Mike: Do you want to start? >> Sure, so, we're both a partner of Splunk and a customer of Splunk, and we use it in our own products in terms of the hosting and support methodologies that we leverage at Forty8Fifty. We use the product day in and day out, and so with Atlassian, we have pulled together a connector: one half of it is a Splunk app, available on Splunkbase, and the other part is in the Atlassian marketplace, which allows us to send events from Jira Service Desk, ticketing events, over to Splunk to be indexed. You have a data model that ties in and allows you to get some metrics out of those events, and then the return trip is that, based on real-time searches, alerts, or reports that you're interested in, you can trigger issues to be created inside of Jira. >> I think the only thing to add to that, so definitely, that's been a great relationship and partnership, and we're seeing an increasing number of our partners also become partners with Splunk and vice versa, which is great. The other strong side to this as well is our own internal use of Splunk. So, we as a company, we always like to empower our different teams to pick whatever solution they want to use, and embrace that, and really give that authority to the individual teams. However, with logging, we were having a huge problem where all of our different teams were using a whole variety of different logging solutions, and frankly, not to go into all the details, it was a mess. Our security team decided to embrace Splunk and start using it, and really got a lot of value out of the solution and fell in love with it. Which says a lot, because our security team doesn't normally like much of anything, especially if it's not homegrown. That was a huge statement there, and then quickly Splunk has spread to our cloud team, which is growing rapidly as our cloud scales dramatically. Our developers are using it for troubleshooting, our SREs and our support team for incident management, and it's even spread to our marketplace, which is one of the larger marketplaces out there today for third party apps. Then the new product, Stride, for team collaboration is going to be very dependent on Splunk for logging as well. It's become that uniform fabric. I even heard a dev use a term I'd never heard from a dev before, talking about logs: "log love." That's no PR, that is a direct statement from a developer, which I thought was amazing to hear. 
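As a rough sketch of what such a two-way connector does under the hood: one direction posts Jira Service Desk events to Splunk's HTTP Event Collector, the other creates a Jira issue through the standard Jira REST API when a Splunk search or alert fires. The hostnames, tokens, project key, and field values below are placeholders, and this illustrates the pattern only, not the actual Forty8Fifty app.

import requests

SPLUNK_HEC = "https://splunk.example.com:8088/services/collector/event"  # placeholder host
SPLUNK_TOKEN = "00000000-0000-0000-0000-000000000000"                     # placeholder token
JIRA_BASE = "https://jira.example.com"                                    # placeholder host

def send_ticket_event_to_splunk(ticket: dict):
    """Direction 1: index a Jira Service Desk ticket event in Splunk via HEC."""
    resp = requests.post(
        SPLUNK_HEC,
        headers={"Authorization": f"Splunk {SPLUNK_TOKEN}"},
        json={"event": ticket, "sourcetype": "jira:servicedesk", "source": "jira"},
    )
    resp.raise_for_status()

def create_jira_issue(summary: str, description: str) -> str:
    """Direction 2: when a Splunk alert fires, open a Jira issue via REST."""
    resp = requests.post(
        f"{JIRA_BASE}/rest/api/2/issue",
        auth=("automation-user", "api-token"),  # placeholder credentials
        json={"fields": {
            "project": {"key": "OPS"},          # placeholder project key
            "summary": summary,
            "description": description,
            "issuetype": {"name": "Task"},
        }},
    )
    resp.raise_for_status()
    return resp.json()["key"]

# Example invocation (requires reachable Splunk and Jira endpoints):
send_ticket_event_to_splunk({"key": "SD-123", "status": "Open", "priority": "High"})
print(create_jira_issue("Splunk alert: error spike", "Real-time search crossed threshold."))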
'cause you know, they just want to code and make stuff, they don't want to deal with it when it actually breaks and have to fix it. But with Splunk they've actually -- They're telling me they actually enjoy that. So that's a great -- >> So it's more than "the answer is in the logs," it's "there's value in our logs," right? >> Yeah, a ton of value, right? Because at the end of the day, these alerts are coming in and then we use tools like the Forty8Fifty Labs tool to get those tickets into Jira. Those logs and things are coming in, and that means there's an issue, there's something to be resolved, and there's customer pain. So the quicker we can resolve that, the better; that log is the first indicator of what's going on in the cloud and in our platforms to help us figure out how do we keep that customer happy? This isn't just work, just a task, this is about delivering customer value, and that log can be that first indicator. The sooner you can get something resolved, the sooner the customer's back to getting stuff done, and that's really our focus as a company, right? How do we enable people to get things done? >> Excuse me, when you are talking about your customers, what are their pain points? Today? I mean, big data's getting bigger and more capabilities, you've got all kinds of transport problems and storage problems, and security problems, so what are the pain points for the people who are just trying to get up to speed, trying to get into the game, and what are the kinds of services you're trying to bring to them to open their eyes? >> I think if you look at the value stream mapping and time to market for most businesses, where Splunk and Atlassian play is in getting that fast feedback. The closer in to the development side, the left-hand side of the value stream, you can pull in key metrics and get an understanding of where issues are, the better; it's much less expensive to fix problems in development than when they're in production, obviously. Rolling in things like Splunk, which can be used as a SIEM to do some security analysis early, whether it be on product code or business process, rather than ending up with a data breach or finding something after it's already in production. That kind of stuff, those are the challenges that a lot of companies are facing, especially if you look at all the things that are going on in the news from a security perspective: taking these two products and being able to detect things that are going on, trends, any sort of unusual activity, and immediately having that come back for somebody in a service desk to work on, either as a security incident or as a developer finding a bug early in the lifecycle, and augmenting your infrastructure as code, the build-out of the infrastructure itself. Being able to log all that data, and look at the metrics around that, helps you build more robust enterprise class platforms for your teams. >> We've been sort of joking earlier about how nobody really talks about big data anymore; interestingly, Splunk, who used to never talk about big data, is now talking about big data, 'cause they're kind of living it. It's almost like the same wine in a new bottle: machine learning and AI and deep learning are all kind of the new big data buzzwords. But my question is, as practitioners, you were describing a situation where you can sort of identify a problem, maybe get an alert, and then manually, I guess, remediate that problem. How far away are we from the machines automating that remediation? Thoughts on that? 
>> Am I first up? >> You guys kind of -- >> We've done a lot of automated remediation. Closed-loop remediation is what you call it. The big challenge is, it's a multi-disciplinary effort, so you might have folks that need to have expertise across network and systems and the application stack, maybe load balancing. There's a lot of different pieces there, so step one is you've got to have folks that have the capacity to actually create the automation for their domain of expertise, and then you need to have sort of that cross platform DevOps mindset of being able to pull that together, and the coordinator role of let's orchestrate all of the automations, and then hopefully out of that, combined with machine learning, some of the stuff that you can do in AWS, or with what IBM's got out, you can take some of that analysis and be a little bit smarter about running the automation. In terms of whether that's scaling things up, or when -- For example, if you're in the financial industry and you've got a webpage that people are doing bill pay for, if you have a single website down, a web server down, out of a farm of 1000, in a traditional NOC, that would be kind of red on a dashboard. It's low priority, but it's high visibility and it's just noise, and so leveraging machine learning, people do that in Splunk to really refine what actually shows up in the NOC, that's something I think is compelling to customers. >> How are devs dealing with complexity, obviously, collaboration tools help, but I mean, the level of complexity today, versus when you think back to client server, is orders of magnitude greater for admins and developers, now you've got to throw in containers and microservices, and the amount of data, is the industry keeping pace with the pace of escalation of complexity, and if so, how? 
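The single-web-server-out-of-a-thousand example reduces, in its simplest form, to alerting on the proportion of a fleet that is unhealthy rather than on individual hosts. A real deployment would learn baselines with machine learning rather than hard-code them; the sketch below just fixes thresholds to show the idea, and all numbers and labels are illustrative.

def noc_severity(down: int, fleet_size: int) -> str:
    """Decide whether a fleet-health event deserves NOC attention: one web
    server down out of a thousand is noise; a meaningful fraction down, or
    any loss in a tiny pool, should page someone. Thresholds are made up."""
    if down == 0:
        return "ok"
    fraction = down / fleet_size
    if fleet_size <= 3 or fraction >= 0.10:
        return "page"        # real capacity risk
    if fraction >= 0.01:
        return "ticket"      # worth a look during business hours
    return "log-only"        # e.g. 1 of 1000 bill-pay web servers

for down, size in [(1, 1000), (15, 1000), (150, 1000), (1, 2)]:
    print(f"{down}/{size} down -> {noc_severity(down, size)}")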
>> As ecosystem partners, how do you view the evolution of Splunk, is it becoming a application platform for you? Are you concerned about swim lanes? I wonder if you could talk about that? >> I personally, I don't see any real concerns of overlap between Splunk and Atlassian. In our view at Atlassian is, we tend to work very closely with people kind of fit into that frenemy category, and they're definitely a partner that we overlap with I think in very very few ways. If and when we ever do, I mean in a way, that's kind of something we always embrace as a company. I mean one thing we'll say a lot is overlap is better than a gap. Because if there's a gap between us and a partner, then that's going to result in customer pain. That means there's nothing that's filling that void. I'd rather have some overlap, and then give the customer the power to choose how do they want to do it. I mean, Splunk says you can probably do it this way, Atlassian says you could do it this way, as long as they can get stuff done, and that's always -- it's not a cliche from us, I mean that's a core message from Atlassian, then we're happy. Regardless if they completely embrace it our way, a little bit, a little deviation, that's not what really matters. >> Too much better than too little. >> Exactly. >> Is what it comes down to. Gentlemen, thanks for being with us. >> Thank you. >> We appreciate the time today and look forward to seeing you down the road and looking as your relationship continues. Not only between the two companies, but with Splunk as well. Thanks for being here. >> Mike: Thank you guys. >> We continue theCUBE does, live from Washington DC here at .conf2017, back with more in just a bit.

Published Date : Sep 26 2017

Gaby Koren, Panaya - #SAPPHIRENOW - #theCUBE


 

>> Voiceover: Live, from Orlando, Florida, it's The Cube, covering Sapphire Now, headline sponsored by SAP HANA Cloud, the leader in platform as a service, with support from Consolink, the cloud internet company. Now, here are your hosts, John Furrier and Peter Burris. >> Welcome back everyone, we are here live in Orlando, Florida for Sapphire Now, SiliconANGLE Media's exclusive coverage of Sapphire. I'm John Furrier with Peter Burris. This is our flagship program, we go out to the events and extract the signal from the noise, you're watching The Cube. I want to do a shout-out to our sponsors. Without their help, we would not be here. SAP HANA Cloud Platform, Consolink at CONSOL Cloud, hot start up in Silicon Valley, and also we have Capgemini, we have EMC. Thanks so much for your support. Our next guest is Gaby Koren, who's the EVP of the Americas for Panaya, acquired about a year ago by Infosys, now a part of Infosys. Welcome to the Cube. >> Thank you so much. >> Congratulations on the acquisition over a year ago, but you guys are now a part of the big machinery of Infosys, which is a tier one systems integrator, part of SAP's global channel, as they call it, but essentially, you're out serving customers all over the world. >> Gaby: That is correct, yes. >> At Infosys, what's your role in the Infosys organization, and what does your company do? >> Okay, so, I'll start with the company. Panaya was founded ten years ago. Our quest is to help customers perform all their changes in their ERP environment. We basically analyze the environment, create that mapping, that baseline that helps them understand exactly what they're dealing with, then we support them in scoping out the changes, and then we work with them throughout the journey of executing on all the testing cycles associated with all the changes. We serve about two thousand customers, and we are a hundred percent cloud-based solution. My role as EVP for the Americas is to support all customers in the region, and we're working very closely with Infosys on bringing Panaya as part of their offering to accelerate the processes, to bring innovation, and to bring much more efficiency to all the SAP projects and activities that they perform with our customers. >> We had the global partner person on earlier, and that was the big point, innovation's now at the center, not just delivery, which Infosys has been great at, but also other things, innovation, time is very important. >> Exactly. >> Your solution speeds things up, so share with us what it is, is it SaaS-based? Is it code analyzers? Is it for QA? Is it for testing? What specifically do you guys solve? What problem do you solve? >> Great question. First of all, we are a SaaS-based solution, so we do everything in the cloud. This helps, as you said, perform all the tasks faster and more efficiently. The pain that we're coming to address is the fact that change is constant in the ERP. The ERP is never an island, never an isolated solution. It's always changing, and it's at the core of a lot of the businesses that we meet here, so change is their reality, they need to change all the time. They are highly customized, so every change that comes from the vendor or from the business requires a lot of preparation and very fast execution, and this is where Panaya plays. 
We simulate the change virtually in the cloud, and we tell customers in advance what is going to happen to their environment, all the way to the code line level: what exactly is going to break, how to fix it, what to test. And we support them, again, throughout all the testing cycles, from the unit test or the technical test all the way to user-acceptance tests, UATs, which are a big pain to organizations because of the collaboration involved. >> It's faster is the point. So, you guys speed up the process. >> Absolutely, we speed up the process, we reduce costs, we bring customers to market faster by about fifty percent, and we allow them to do their projects at the budget that they establish, or lower. >> Give me an example of someone who has the problem, and what their environment looks like. Because everyone's trying to get to the cloud, and your solution is tailor-made perfectly for the cloud because it's very DevOps-like. It makes things go faster, it's part of that whole agile iteration speed game, which we love, but the people trying to get there that are figuring it out, what's their environment, the people who have the problem? What's their environment look like? Paint the picture. >> Virtually any SAP customer needs Panaya. >> John: That's a good plug. It's complicated. >> Yes. Their environment can have one instance, or multiple instances, of SAP ECC. They all have the need for testing because they perform testing all along the way. They are trying to bring some of the applications to the cloud, but not necessarily all. Most of our customers are still heavily on-premise based, so what we do is all the analysis in the cloud, and this is how we help them do things much faster. >> So I've got to ask you the Infosys question, because I'm a big fan of Vishal Sikka. For many years, I've watched his work at SAP, certainly. He was very, very early on and very right on a lot of technical decisions around how things played out. I watched him during the SOA days, going back to the web services days, which is the late 90's, early 2000s; he had the right call and vision on web services, and then service-oriented architectures. >> Yes. >> He brought a lot of great mojo to SAP and has always been very open-source driven. >> Right. >> John: And he's just a cool guy, so what's it like working there? I mean, is he always on top of the employees? Do you talk to him? What's it like inside the company at Infosys, and specifically Vishal, what's he up to? >> First of all, he's such a visionary. You listen to him and his vision. His vision is people and software. And he wants to make a difference when it comes to supporting customers, being an SI, being at a company that creates and makes a difference. He's also very personal, so he's very approachable. He loves ideas and innovation, and he believes that innovation comes from within, so he's a huge supporter of Panaya and of bringing Panaya to every single Infosys customer and opportunity. But he has that vision that you don't just rip and replace things: you take something and you build on it, you learn to collaborate, and you understand that environments need to be flexible, and the only way to bring that flexibility is to take the existing environment and continue to bring innovation. Even if it's in small steps, you bring that innovation to the table. And this is what makes it so unique to work for a guy like him. >> The traditional systems integrator relationship, there's always been tension, a lot of tension between customers and systems integrators. 
>> Gaby: Yes. >> Customers say they want something. Systems integrators have the expertise to do it. Customers want it fast; systems integrators sometimes use their experience to inflate billings, but the customer increasingly is in charge in almost all global markets. The question is, are you helping your customers stay more in control of Infosys engagements? And if the answer is yes, how does that improve the value proposition of Infosys? >> Okay, that's a great question. One of the reasons that Panaya remains an independent and contained organization within Infosys is, besides the commitment to support that, we sell direct a lot to our customers, and we remain objective, whoever the customer chooses to work with, whether it's to do it in house or to use system integrators. And we have more and more projects where there are three, four, or five system integrators involved, and each one does a piece of the solution, and Panaya gives that control because of the analysis, because of the support in the planning stage. We paint the right picture of where you are today, where you want to go, and the journey of getting there. This is one of Panaya's claims of victory: we bring that control back into the hands of the customers, exactly as they want, because they want to understand what they are dealing with and what the pricing is. And SIs, on the other hand, also understand that prices cannot continue to be cut forever and ever. But if you don't bring that innovation, that people plus software, it will be impossible to continue to compete in this market. >> They get more net contract value on the sales as they deliver value. >> Gaby: Exactly, to the customers. >> So if they're helping their customers drive more cash and revenue-- >> Well, I would presume that it actually starts with the contracting process, which for a lot of these efforts is itself very, very expensive and often leads to not a lot of value, and so I presume that in response to what you just mentioned, John, you're generating artifacts to make it easy for the customer, the SAP customer, to envision where they need to go, and those artifacts then help the SAP customer manage the integrator and the company doing it, which then dramatically shortens the contracting process. >> Gaby: Exactly. >> Because it's a lot clearer, which means I can focus more on the management of the partner-- >> You release resources, correct. >> As a set of capabilities, because it always changes along the way. >> That is correct, we create these assets that can be reused time and again, and then we free up resources so they can focus on innovation and additional activities. That is exactly our value proposition, you got it absolutely right. >> So, are you a consultant management system in the SAP world? >> We don't claim to be, no, we bring solutions. We're not in the consulting business at all. >> Peter: No, managing the consulting business. >> Oh, absolutely, we help to manage that process. >> Helping the customer manage those consultants. >> That is correct, that is correct. Yes, you're absolutely right. >> My final question for you, thanks for coming on The Cube, by the way, I know it's short notice. >> Thank you, thank you for having me. >> Great to have the insight. What's the biggest change in the ecosystem that you're seeing today? 
Because you're close to the code, so you're close to all the action at Panaya, and certainly Infosys is massive and global. What is the biggest change that's happening in the ecosystem, with SIs and generally across the board? >> That's a great question. One thing that we're seeing is much more competition. The customer is much more educated, exactly as you, Peter, said. The customers are much more educated, they know what they want, and they're coming in with much more control and knowledge, so we're seeing this. Customers are looking for much more long-term activities. This is why HANA is becoming so strong; we're seeing it here at this show, too, with everybody talking HANA, because it's not something that you do just for the next year. It's something that is going to be with these customers for the long term. They are looking for long-term types of engagements. >> They don't have to buy a lot of HANA. They can actually put their toe in the water, if you will. In the old days, you bought SAP and you hired the SIs for project management and delivery over a long period of time. They don't have to do that today. They can still have a long view with HANA, right? I mean, are you seeing that, too? >> Yes, and in this regard we're seeing a move from best of suite to best of breed. Customers want the best solution possible in each area. >> Without ballooning integration and training costs. >> Correct, correct, and we fit perfectly into that story. >> Well, thanks so much. Real quick question for you. You guys have a big end-user event like Sapphire. >> Gaby: Yes. >> Didn't you just have one in San Francisco recently? Or do you have one coming up? What's going on with the events for Infosys? >> We participated in Confluence, which is a very large event of Infosys, just a couple of weeks ago. Very, very well-attended, and we-- >> John: Is that a global conference in San Francisco or is it in other areas? >> It's a global event in which the largest, the biggest customers of Infosys attend; once a year, they get together. It's all about thought leadership and sharing ideas, design thinking, which Vishal is leading very strongly. That was the main theme of the event, so we had the chance to meet a lot of our customers and prospects. Now, of course, Sapphire. >> Thank you so much for coming on, Gaby. Great to have you on The Cube, and welcome to the Cube alumni now that you've been on The Cube. We are live here in Orlando for SAP Sapphire Now. I'm John Furrier with Peter Burris with the Cube. You're watching SiliconANGLE's The Cube. (futuristic music)

Published Date : May 20 2016
