
Evolving InfluxDB into the Smart Data Platform


 

>>This past May, theCUBE, in collaboration with InfluxData, shared with you the latest innovations in time series databases. We talked at length about why a purpose-built time series database was, for many use cases, a superior alternative to general-purpose databases trying to do the same thing. Now, you may remember that time series data is any data that's stamped in time, and if it's stamped, it can be analyzed historically. And when we introduced the concept to the community, we talked about how, in theory, those time slices could be taken every hour, every minute, every second, down to the millisecond, and how the world was moving toward real-time or near real-time data analysis to support physical infrastructure like sensors, other devices, and IoT equipment. And time series databases have had to evolve to efficiently support real-time data in emerging IoT and other use cases. >>And to do that, new architectural innovations have to be brought to bear. As is often the case, open source software is the linchpin to those innovations. Hello and welcome to Evolving InfluxDB into the Smart Data Platform, made possible by InfluxData and produced by theCUBE. My name is Dave Vellante and I'll be your host today. Now, in this program we're going to dig pretty deep into what's happening with time series data generally, and specifically how InfluxDB is evolving to support new workloads and demands on data, specifically around real-time data analytics use cases. First we're gonna hear from Brian Gilmore, who is the director of IoT and emerging technologies at InfluxData. We're gonna talk about the continued evolution of InfluxDB and the new capabilities enabled by open source generally and specific tools, and in this program you're gonna hear a lot about things like Rust, the implementation of Apache Arrow, the use of Parquet, and tooling such as DataFusion, which is powering a new engine for InfluxDB. >>Now, these innovations evolve the idea of time series analysis by dramatically increasing the granularity of time series data, compressing the historical time slices, if you will, from, for example, minutes down to milliseconds, and at the same time enabling real-time analytics with an architecture that can process data much faster and much more efficiently. After Brian, we're gonna hear from Anais Dos Georgio, who is a developer advocate at InfluxData. We're gonna get into the why of these open source capabilities and how they contribute to the evolution of the InfluxDB platform. And then we're gonna close the program with Tim Yoakum. He's the director of engineering at InfluxData, and he's gonna explain how the InfluxDB community actually evolved the data engine in mid-flight and which decisions went into the innovations that are coming to the market. Thank you for being here. We hope you enjoy the program. Let's get started. Okay, we're kicking things off with Brian Gilmore. He's the director of IoT and emerging technology at InfluxData. Brian, welcome to the program. Thanks for coming on. >>Thanks Dave. Great to be here. I appreciate the time. >>Hey, explain why InfluxDB needs a new engine. Was there something wrong with the current engine? What's going on there? >>No, no, not at all. I mean, for us it's been about staying ahead of the market.
I think, you know, if we think about what our customers are coming to us with now, related to requests like SQL query support, things like that, we have to figure out a way to execute those for them in a way that will scale long term. And then we also wanna make sure we're innovating, staying ahead of the market, and anticipating those future needs. So this is really a transparent change for our customers. I think we'll be adding new capabilities over time that leverage this new engine, but initially the customers who are using us are gonna see just great improvements in performance, especially those that are working at the top end of the workload scale, you know, the massive data volumes and things like that. >>Yeah, and we're gonna get into that today, and the architecture and the like, but what was the catalyst for the enhancements? I mean, when and how did this all come about? >>Well, like three years ago we were primarily on premises, right? We had our open source, we had an enterprise product, and sort of shifting that technology, especially the open source code base, to a service basis where we were hosting it through multiple cloud providers, that was a long journey. Phase one was, you know, we wanted to host enterprise for our customers, so we created a service where we just managed and ran our enterprise product for them. Phase two of this cloud effort was to optimize for multi-tenant, multi-cloud, to be able to host it in a truly SaaS manner where we could use some type of customer activity or consumption as the pricing vector. And that was sort of the birth of the real first InfluxDB Cloud, which has been really successful. >>We've seen, I think, like 60,000 people sign up, and we've got tons and tons of both enterprises as well as new companies, developers, and of course a lot of home hobbyists and enthusiasts who are using it on a daily basis. And having that big pool of very diverse customers to chat with as they're using the product, as they're giving us feedback, et cetera, has pointed us in a really good direction in terms of making sure we're continuously improving, and then also making these big leaps as we're doing with this new engine. >>Right. So you've called it a transparent change for customers, so I'm presuming it's non-disruptive, but I really wanna understand how much of a pivot this is. What does it take to make that shift from, you know, time series specialist to real-time analytics and being able to support both? >>Yeah, I mean, it's much more of an evolution, I think, than a shift or a pivot. Time series data is always gonna be fundamental and sort of the basis of the solutions that we offer our customers, and also the ones that they're building on the raw APIs of our platform themselves. The time series market is one that we've worked diligently to lead.
I mean, I think when it comes to metrics, especially sensor data and app and infrastructure metrics, if we're being honest, our user base is well aware that the way we were architected was much more toward those sort of backwards-looking, historical-type analytics, which are key for troubleshooting and making sure you don't run into the same problem twice. But we had to ask ourselves, what can we do to better handle those queries from a performance and a time-to-response perspective, and can we get that to the point where the result sets are coming back so quickly from the time of query that we can limit that window down to minutes, and then seconds? >>And now with this new engine, we're really starting to talk about a query window that could be returning results in milliseconds of time since the data hit the ingest queue. And that's really getting to the point where as your data is available, you can use it, you can query it, you can visualize it, and you can do all those sort of magical things with it. And I think getting all of that to a place where we're saying yes to the customer on all of the real-time queries, the multiple-language query support, you know, it was hard, but we're now at a spot where we can start introducing that to a limited number of customers, strategic customers and strategic availability zones to start, but everybody over time. >>So you're basically going from what happened to, you can still do that obviously, but to what's happening now, in the moment? >>Yeah, yeah. I mean, if you think about time, it's always sort of past, right? In the moment right now, whether you're talking about a millisecond ago or a minute ago, that's pretty much right now for most people, especially in these use cases where you have other components of latency induced by the underlying data collection, the architecture, the infrastructure, the devices, and the sort of highly distributed nature of all of this. So yeah, getting a customer or a user to be able to use the data as soon as it is available is what we're after here. >>I always thought of real time as before you lose the customer, but now, in this context, maybe it's before the machine blows up. >>Yeah, I mean, operational real time is different, you know, and that's one of the things that really triggered us to know that we were heading in the right direction, is just how many sort of operational customers we have. Everything from aerospace and defense, we've got companies monitoring satellites, we've got tons of industrial users, users using us for process storage on the plant floor. And if we can satisfy their demands for a real-time historical perspective, that's awesome. I think what we're gonna do here is start to edge into the real time that they're used to in terms of the millisecond response times that they expect of their control systems, certainly not their historians and databases. >>Is this available, these innovations, to InfluxDB Cloud customers only? Who can access this capability? >>Yeah. I mean commercially and today, yes.
You know, I think we want to emphasize that's for now. Our goal is to get our latest and greatest and our best to everybody over time, of course. One of the things we had to do here was double down on our commitment to open source and availability. So anybody today can take a look at the libraries on our GitHub, can inspect them, and can even try to implement or execute some of it themselves in their own infrastructure. We're committed to bringing our latest and greatest to our cloud customers first for a couple of reasons. Number one, there are big workloads and they have high expectations of us. Number two, it also gives us the opportunity to monitor a little more closely how it's working, how they're using it, how the system itself is performing. >>And so just being careful, maybe a little cautious, in terms of how big we go with this right away both limits the risk of any issues that can come with new software rollouts, and we haven't seen anything so far, and also gives us the opportunity to have meaningful conversations with a small group of users who are using the product. But once we get through that and they give us two thumbs up on it, it'll be like, open the gates and let everybody in. It's gonna be an exciting time for the whole ecosystem. >>Yeah, that makes a lot of sense. And you can do some experimentation, you know, using the cloud resources. Let's dig into some of the architectural and technical innovations that are gonna help deliver on this vision. What should we know there? >>Well, I mean, I think foundationally we built the new core on Rust. This is a newer, very popular systems language, extremely efficient, but also built for speed and memory safety, which goes back to us being able to deliver it in a way that is something we can inspect very closely, but then also rely on the fact that it's going to behave well and tell us if it does find error conditions. I mean, we've loved working with Go, and a lot of our libraries will continue to be implemented in Go, but when it came to this particular new engine, for that power, performance, and stability, Rust was critical. On top of that, we've also integrated Apache Arrow and Apache Parquet for persistence. I think for anybody who's really familiar with the nuts and bolts of our backend and our TSI and our time-structured merge trees, this is a big break from that: Arrow on the in-memory side and then Parquet on the on-disk side. >>It allows us to present a unified set of APIs for those really fast real-time queries that we talked about, as well as for very large historical sort of bulk data archives in that Parquet format, which is also cool because there's an entire ecosystem popping up around Parquet in terms of the machine learning community. And getting that all to work, we had to glue it together with Arrow Flight. That's what we're using as our RPC component. It handles the orchestration and the transportation of the columnar data. Now we're moving to a true columnar database model for this version of the engine, and it removes a lot of overhead for us in terms of having to manage all that serialization and deserialization and, again, blurring that line between real-time and historical data. It's highly optimized for both streaming, micro-batch, and then batches, but true streaming as well.
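To make the Arrow-plus-Parquet pairing a bit more concrete, here is a minimal Rust sketch. It is not InfluxDB's own code: it assumes the open source arrow and parquet crates and an invented three-column schema, and it simply builds a columnar record batch in memory and persists it to a Parquet file on disk.

```rust
// Minimal sketch, not InfluxDB internals: build an Arrow record batch in memory,
// then persist it as a Parquet file on disk. Assumes the `arrow` and `parquet`
// crates; the schema and file name are invented for illustration.
use std::fs::File;
use std::sync::Arc;

use arrow::array::{ArrayRef, Float64Array, StringArray, TimestampNanosecondArray};
use arrow::datatypes::{DataType, Field, Schema, TimeUnit};
use arrow::record_batch::RecordBatch;
use parquet::arrow::ArrowWriter;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Columnar layout: one Arrow array per column.
    let schema = Arc::new(Schema::new(vec![
        Field::new("time", DataType::Timestamp(TimeUnit::Nanosecond, None), false),
        Field::new("sensor", DataType::Utf8, false),
        Field::new("temperature", DataType::Float64, false),
    ]));

    let batch = RecordBatch::try_new(
        schema.clone(),
        vec![
            Arc::new(TimestampNanosecondArray::from(vec![1_000_i64, 2_000, 3_000])) as ArrayRef,
            Arc::new(StringArray::from(vec!["room", "room", "stove"])) as ArrayRef,
            Arc::new(Float64Array::from(vec![21.0, 21.0, 180.5])) as ArrayRef,
        ],
    )?;

    // Parquet is the durable, column-oriented on-disk format.
    let file = File::create("temps.parquet")?;
    let mut writer = ArrowWriter::try_new(file, schema, None)?;
    writer.write(&batch)?;
    writer.close()?;
    Ok(())
}
```

Record batches like this are also the unit that Arrow Flight moves between processes, which is the RPC role Brian describes above.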
>>Yeah. Again, it's funny you mentioned Rust. It's been around for a long time, but its popularity is really starting to hit that steep part of the S-curve, and we're gonna dig into more of that. But is there anything else that we should know about, Brian? Give us the last word. >>Well, I think first I'd like everybody watching to take a look at what we're offering in terms of early access and beta programs. If you wanna participate, or if you wanna work in terms of early access with the new engine, please reach out to the team. I'm sure there's a lot of communications going out, and it'll be highly featured on our website, but reach out to the team. Believe it or not, we have a lot more going on than just the new engine, and so there are also other programs, things we're offering to customers in terms of the user interface, data collection, and things like that. And if you're a customer of ours and you have a sales team, a commercial team that you work with, you can reach out to them and see what you can get access to, because we can flip a lot of stuff on, especially in cloud, through feature flags. >>But if there's something new that you wanna try out, we'd just love to hear from you. And then our goal would be that as we give you access to all of these new cool features, you would give us continuous feedback on these products and services, not only what you need today, but what you'll need tomorrow to build the next versions of your business. Because the whole database, the ecosystem, as it expands out into this vertically oriented stack of cloud services and enterprise databases and edge databases, it's gonna be what we all make it together, not just those of us who are employed by InfluxDB. And then finally I would just say, please watch Anais' and Tim's sessions. These are two of our best and brightest. They're totally brilliant, completely pragmatic, and most of all customer obsessed, which is amazing. And honestly there are no better takes on the sort of technical details of this, especially when it comes to the value that these investments will bring to our customers and our communities. So I encourage you to pay more attention to them than you did to me, for sure. >>Brian Gilmore, great stuff. Really appreciate your time. Thank you. >>Yeah, thanks Dave. It was awesome. Look forward to it. >>Yeah, me too. Looking forward to seeing how the community actually applies these new innovations and goes beyond just the historical into the real-time, really hot area. As Brian said, in a moment I'll be right back with Anais Dos Georgio to dig into the critical aspects of key open source components of the InfluxDB engine, including Rust, Arrow, Parquet, and DataFusion. Keep it right there. You don't wanna miss this. >>Time series data is everywhere.
The number of sensors, systems, and applications generating time series data increases every day. All these data sources producing so much data can cause analysis paralysis. InfluxDB is an entire platform designed with everything you need to quickly build applications that generate value from time series data. InfluxDB Cloud is a serverless solution, which means you don't need to buy or manage your own servers. There's no need to worry about provisioning because you only pay for what you use. InfluxDB Cloud is fully managed, so you get the newest features and enhancements as they're added to the platform's code base. It also means you can spend time building solutions and delivering value to your users instead of wasting time and effort managing something else. InfluxDB Cloud offers a range of security features to protect your data: multiple layers of redundancy ensure you don't lose any data, access controls ensure that only the people who should see your data can see it, >>and encryption protects your data at rest and in transit between any of our regions or cloud providers. InfluxDB uses a single API across the entire platform suite, so you can build on open source, deploy to the cloud, and then easily query data in the cloud, at the edge, or on prem using the same scripts. And InfluxDB is schemaless, automatically adjusting to changes in the shape of your data without requiring changes in your application logic. InfluxDB Cloud is production ready from day one. All it needs is your data and your imagination. Get started today at influxdata.com/cloud. >>Okay, we're back. I'm Dave Vellante with theCUBE and you're watching Evolving InfluxDB into the Smart Data Platform, made possible by InfluxData. Anais Dos Georgio is here. She's a developer advocate for InfluxData, and we're gonna dig into the rationale and value contribution behind several open source technologies that InfluxDB is leveraging to increase the granularity of time series analysis and bring the world of data into real-time analytics. Anais, welcome to the program. Thanks for coming on. >>Hi, thank you so much. It's a pleasure to be here. >>Oh, you're very welcome. Okay, so IOx is being touted as this next-gen open source core for InfluxDB. And my understanding is that it leverages in-memory, of course, for speed. It's a columnar store, so it gives you compression efficiency, it's gonna give you faster query speeds, and you store files in object storage, so you've got a very cost-effective approach. Are these the salient points on the platform? I know there are probably dozens of other features, but what are the high-level value points that people should understand? >>Sure, that's a great question. So some of the main requirements that IOx is trying to achieve, and some of the most impressive ones to me: the first one is that it aims to have no limits on cardinality and also allow you to write any kind of event data that you want, whether that's a tag or a field. It also wants to deliver best-in-class performance on analytics queries, in addition to our already well-served metrics queries. We also wanna have operator control over memory usage, so you should be able to define how much memory is used for buffering, caching, and query processing. Some other really important parts are the ability to have bulk data export and import, super useful, and also broader ecosystem compatibility: where possible, we aim to use and embrace emerging standards in the data analytics ecosystem and have compatibility with things like SQL, Python, and maybe even Pandas in the future. >>Okay, so a lot there. Now, we talked to Brian about how you're using Rust, which is not a new programming language, and of course we had some drama around Rust during the pandemic with the Mozilla layoffs, but the formation of the Rust Foundation really addressed any of those concerns. You've got big guns like Amazon and Google and Microsoft throwing their collective weight behind it. The adoption is really starting to get steep on the S-curve. So lots of platforms, lots of adoption with Rust, but why Rust as an alternative to, say, C++, for example? >>Sure, that's a great question. So Rust was chosen because of its exceptional performance and reliability. While Rust is syntactically similar to C++ and has similar performance, it also compiles to native code like C++. But unlike C++, it has much better memory safety. Memory safety is protection against bugs or security vulnerabilities that lead to excessive memory usage or memory leaks, and Rust achieves this memory safety due to its innovative type system. Additionally, it doesn't allow for dangling pointers, and dangling pointers are the main class of errors that lead to exploitable security vulnerabilities in languages like C++. So Rust helps meet that requirement of having no limits on cardinality, for example, because we're also using the Rust implementation of Apache Arrow and this control over memory. And Rust's packaging system, called crates.io, offers everything that you need out of the box to have features like async and await to fix race conditions, protection against buffer overflows, and thread-safe async caching structures as well. So essentially it has all the fine-grained control you need to take advantage of memory and all your resources as well as possible, so that you can handle those really, really high-cardinality use cases.
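As a quick, concrete illustration of that memory-safety point, here is a tiny, self-contained Rust sketch. It is not InfluxDB code and the names are invented; it just shows the compiler turning a use-after-move into a compile error and checking that borrows never outlive the data they point to, which is the class of dangling-pointer bug described above.

```rust
// A tiny, self-contained illustration (not InfluxDB code; names are invented)
// of the memory-safety guarantees described above. Rust's ownership and borrow
// rules turn use-after-move and dangling-reference bugs into compile errors,
// rather than the runtime crashes or exploitable vulnerabilities seen in C++.

fn ingest(value: String) {
    println!("ingested: {value}");
}

// The slice is only borrowed; the returned f64 is a copy, so nothing can dangle.
fn max_temp(values: &[f64]) -> f64 {
    values.iter().cloned().fold(f64::MIN, f64::max)
}

fn main() {
    let reading = String::from("21.5");

    // Ownership of `reading` moves into `ingest`, so it is no longer usable here.
    ingest(reading);

    // Uncommenting the next line is a compile-time error ("borrow of moved
    // value"), not a silent memory bug:
    // println!("{}", reading);

    // Borrows are checked the same way: the compiler proves this reference
    // never outlives the vector it points into.
    let temps = vec![21.0_f64, 21.0, 180.5];
    println!("hottest reading: {}", max_temp(&temps));
}
```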
>>Yeah, and the more I learn about the new engine and the platform, IOx, et cetera, you see things like, you know, in the old days and even today you do a lot of garbage collection in these systems, and there's an inverse impact relative to performance. So it looks like the community is really modernizing the platform. But I wanna talk about Apache Arrow for a moment. It's designed to address the constraints that are associated with analyzing large data sets. We know that, but please explain why. What is Arrow, and what does it bring to InfluxDB? >>Sure, yeah. So Arrow is a framework for defining in-memory columnar data, and so much of the efficiency and performance of IOx comes from taking advantage of columnar data structures. And I will, if you don't mind, take a moment to illustrate why columnar data structures are so valuable. Let's pretend that we are gathering field data about the temperature in our room and also maybe the temperature of our stove. And in our table we have those two temperature values, as well as maybe a measurement value, a timestamp value, maybe some other tag values that describe what room and what house, et cetera, we're getting this data from. And so you can picture this table where we have two rows with the two temperature values for both our room and the stove. Well, usually our room temperature is regulated, so those values don't change very often. >>So when you have column-oriented storage, essentially you take each column and group its values together. And if that's the case, and you're just taking temperature values from the room, and a lot of those temperature values are the same, then you might be able to imagine how equal values, when they neighbor each other in the storage format, provide a really perfect opportunity for cheap compression. And then this cheap compression enables high-cardinality use cases. It also enables faster scan rates. So if you wanna find, say, the min and max value of the temperature in the room across a thousand different points, you only have to read those thousand points in that one column in order to answer that question, and you have those immediately available to you. But let's contrast this with a row-oriented storage solution instead, so that we can understand better the benefits of column-oriented storage. >>If you had row-oriented storage, you'd first have to look at every field, like the temperature in the room and the temperature of the stove. You'd have to go across every tag value that maybe describes where the room is located or what model the stove is, and every timestamp. You'd then have to pluck out that one temperature value that you want at that one timestamp, and do that for every single row. So you're scanning across a ton more data, and that's why row-oriented doesn't provide the same efficiency as columnar. And Apache Arrow is an in-memory columnar data framework. So that's where a lot of the advantages come from. >>Okay. So you basically described like a traditional database, a row approach, but I've seen a lot of traditional databases say, okay, now we can handle columnar format, versus what you're talking about, which is really kind of native. Is the format not as effective because it's largely a bolt-on? Can you elucidate on that front? >>Yeah, it's not as effective because you have more expensive compression and because you can't scan across the values as quickly. And those are pretty much the main reasons why row-oriented storage isn't as efficient as column-oriented storage.
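To make the row-versus-column trade-off easier to picture, here is a small, self-contained Rust sketch using plain structs and vectors, with invented field names. It illustrates the storage idea only, not how Arrow or IOx actually lay out memory.

```rust
// A small, self-contained sketch (invented names, plain Rust collections) of the
// row-versus-column trade-off described above. It is an illustration of the
// storage idea only, not how Arrow or IOx actually lay out memory.

// Row-oriented: every record carries every field, so answering a question about
// one column still walks whole records.
struct RowPoint {
    timestamp: i64,
    room: String,
    temperature: f64,
}

// Column-oriented: each column is its own contiguous vector. A scan over
// `temperatures` touches only temperatures, and long runs of repeated values
// (a regulated room temperature) sit next to each other, which is what makes
// cheap compression possible.
struct ColumnTable {
    timestamps: Vec<i64>,
    rooms: Vec<String>,
    temperatures: Vec<f64>,
}

fn main() {
    let rows = vec![
        RowPoint { timestamp: 1, room: "kitchen".into(), temperature: 21.0 },
        RowPoint { timestamp: 2, room: "kitchen".into(), temperature: 21.0 },
        RowPoint { timestamp: 3, room: "kitchen".into(), temperature: 21.5 },
    ];

    // Pivot the rows into columns once, up front.
    let table = ColumnTable {
        timestamps: rows.iter().map(|r| r.timestamp).collect(),
        rooms: rows.iter().map(|r| r.room.clone()).collect(),
        temperatures: rows.iter().map(|r| r.temperature).collect(),
    };

    // Row-oriented scan: plucks the temperature out of every full record.
    let max_by_rows = rows.iter().map(|r| r.temperature).fold(f64::MIN, f64::max);

    // Column-oriented scan: reads one dense vector and nothing else.
    let max_by_column = table.temperatures.iter().cloned().fold(f64::MIN, f64::max);

    assert_eq!(max_by_rows, max_by_column);
    println!("max temperature: {max_by_column}");
}
```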
>>Got it. So let's talk about Arrow DataFusion. What is DataFusion? I know it's written in Rust, but what does it bring to the table here? >>Sure. So it's an extensible query execution framework, and it uses Arrow as its in-memory format. The way that it helps InfluxDB IOx is that, okay, it's great if you can write an unlimited amount of cardinality into InfluxDB IOx, but if you don't have a query engine that can successfully query that data, then I don't know how much value it is for you. So DataFusion helps enable the query, processing, and transformation of that data. It also has a Pandas API, so that you can take advantage of Pandas data frames as well, and all of the machine learning tools associated with Pandas. >>Okay. You're also leveraging Parquet in the platform, because we heard a lot about Parquet in the middle of the last decade as a storage format to improve on Hadoop column stores. What are you doing with Parquet, and why is it important? >>Sure. So Parquet is the column-oriented durable file format. It's important because it enables bulk import and bulk export, and it has compatibility with Python and Pandas, so it supports a broader ecosystem. Parquet files also take very little disk space, and they're faster to scan because, again, they're column-oriented. In particular, I think Parquet files are something like 16 times cheaper than CSV files, just as a point of reference. And so that's essentially a lot of the benefits of Parquet.
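Tying those last two answers together, here is a minimal sketch of what SQL over Parquet looks like with DataFusion in general. It assumes the datafusion and tokio crates and reuses the invented temps.parquet file from the earlier sketch; it is not InfluxDB's actual query path, just the upstream building block being described.

```rust
// A minimal sketch of SQL over Parquet with DataFusion in general, not
// InfluxDB's actual query path. Assumes the `datafusion` and `tokio` crates and
// reuses the invented temps.parquet file from the earlier sketch.
use datafusion::prelude::*;

#[tokio::main]
async fn main() -> datafusion::error::Result<()> {
    let ctx = SessionContext::new();

    // Register a Parquet file as a table the SQL engine can see.
    ctx.register_parquet("temps", "temps.parquet", ParquetReadOptions::default())
        .await?;

    // DataFusion plans and executes the query over Arrow record batches.
    let df = ctx
        .sql("SELECT sensor, MIN(temperature) AS low, MAX(temperature) AS high \
              FROM temps GROUP BY sensor")
        .await?;

    df.show().await?;
    Ok(())
}
```

Because the engine only needs the sensor and temperature columns to answer this query, a column-oriented file like Parquet lets it skip everything else, which is the scan-rate advantage described above.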
>>Got it. Very popular. So, Anais, what exactly is InfluxData focusing on as a committer to these projects? What is your focus? What's the value that you're bringing to the community? >>Sure. So InfluxData has contributed a lot of different things to the Apache ecosystem. For example, they contributed an implementation of Apache Arrow in Go, and that will support querying with Flux. Also, there have been quite a few contributions to DataFusion for things like memory optimization and support for additional SQL features, like timestamp arithmetic, support for EXISTS clauses, and support for memory control. So yeah, Influx has contributed a lot to the Apache ecosystem and continues to do so. And I think kind of the idea here is that if you can improve these upstream projects, then the long-term strategy is that the more you contribute and build those up, the more you perpetuate that cycle of improvement and the more we invest in our own project as well. So it's that kind of symbiotic relationship and appreciation of the open source community. >>Yeah. Got it. You got that virtuous cycle going, what people call the flywheel. Give us your last thoughts and kind of summarize what the big takeaways are from your perspective. >>So I think the big takeaway is that InfluxData is doing a lot of really exciting things with InfluxDB IOx. And if you are interested in learning more about the technologies that Influx is leveraging to produce IOx, the challenges associated with it, and all of the hard questions, and you just wanna learn more, then I would encourage you to go to the monthly tech talks and community office hours; they are every second Wednesday of the month at 8:30 AM Pacific time. There are also community forums and a community Slack channel; look for the influxdb_iox channel specifically to learn more about how to join those office hours and those monthly tech talks, as well as to ask any questions you have about IOx, what to expect, and what you'd like to learn more about. As a developer advocate, I wanna answer your questions. So if there's a particular technology or stack that you wanna dive deeper into and want more explanation about how InfluxDB leverages it to build IOx, I will be really excited to produce content on that topic for you. >>Yeah, that's awesome. You guys have a really rich community, collaborate with your peers, solve problems, and you guys are super responsive, so I really appreciate that. All right, thank you so much, Anais, for explaining all this open source stuff to the audience and why it's important to the future of data. >>Thank you. I really appreciate it. >>All right, you're very welcome. Okay, stay right there, and in a moment I'll be back with Tim Yoakum. He's the director of engineering for InfluxData, and we're gonna talk about how you update a SaaS engine while the plane is flying at 30,000 feet. You don't wanna miss this.
>>I'm really glad that we went with InfluxDB Cloud for our hosting, because it has saved us a ton of time. It's helped us move faster, it's saved us money, and InfluxDB also has good support. My name's Alex Nauda. I am CTO at Nobl9. Nobl9 is a platform to measure and manage service level objectives, which is a great way of measuring the reliability of your systems. You can essentially think of an SLO, the product we're providing to our customers, as a bunch of time series. So we need a way to store that data and the corresponding time series that are related to those. The main reason that we settled on InfluxDB as we were shopping around is that InfluxDB has a very flexible query language, and as a general-purpose time series database, it basically had the set of features we were looking for. >>As our platform has grown, we've found InfluxDB Cloud to be a really scalable solution. We can quickly iterate on new features and functionality because InfluxDB Cloud is entirely managed; it probably saved us at least a full additional person on our team. We also have the option of running InfluxDB Enterprise, which gives us the ability to even host off the cloud or in a private cloud if that's preferred by a customer. InfluxData has been really flexible in adapting to the hosting requirements that we have. They listened to the challenges we were facing and they helped us solve them. As we've continued to grow, I'm really happy we have InfluxData by our side. >>Okay, we're back with Tim Yoakum, who is the director of engineering at InfluxData. Tim, welcome. Good to see you. >>Good to see you. Thanks for having me. >>You're really welcome. Listen, we've been covering open source software on theCUBE for more than a decade, and we've kind of watched the innovation from the big data ecosystem. The cloud has been built out on open source, mobile, social platforms, key databases, and of course InfluxDB, and InfluxData has been a big consumer of and contributor to open source software. So my question to you is, where have you seen the biggest bang for the buck from open source software? >>So yeah, you know, Influx really thrives at the intersection of commercial services and open source software. OSS keeps us on the cutting edge. We benefit from OSS in delivering our own service, from our core storage engine technologies to web services to templating engines. Our team stays lean and focused because we build on proven tools. We really build on the shoulders of giants, and like you mentioned, even better, we contribute a lot back to the projects that we use, as well as to our own product, InfluxDB. >>You know, but I gotta ask you, Tim, because one of the challenges that we've seen in particular, you saw this in the heyday of Hadoop: the innovations come so fast and furious, and as a software company you gotta place bets, you gotta commit people, and sometimes those bets can be risky and not pay off. How have you managed this challenge? >>Oh, it moves fast. Yeah, but that's a benefit, because the community moves so quickly that today's hot technology can be tomorrow's dinosaur. And what we tend to do is fail fast and fail often. We try a lot of things. You know, you look at Kubernetes, for example: that ecosystem is driven by thousands of intelligent developers, engineers, builders, and they're adding value every day. So we have to really keep up with that.
And as the stack changes, we try different technologies, we try different methods, and at the end of the day we come up with a better platform as a result of just the constant change in the environment. It is a challenge for us, but it's something that we just do every day. >>So we have a survey partner down in New York City called Enterprise Technology Research (ETR), and they do these quarterly surveys of about 1,500 CIOs and IT practitioners, and they really have a good pulse on what's happening with spending. And the data shows that containers generally, but specifically Kubernetes, is one of the areas that has been off the charts and seen the most significant adoption and velocity, particularly along with cloud. Really, Kubernetes is just still up and to the right consistently, even with the macro headwinds and all of the stuff that we're sick of talking about. So what are you doing with Kubernetes in the platform? >>Yeah, it's really central to our ability to run the product. When we first started out, we were just on AWS, and the way we were running was a little bit like containers junior. Now we're running Kubernetes everywhere: at AWS, Azure, Google Cloud. It allows us to have a consistent experience across three different cloud providers, and we can manage that in code, so our developers can focus on delivering services, not trying to learn the intricacies of Amazon, Azure, and Google and figure out how to deliver services on those three clouds with all of their differences. >>Just to follow up on that, it sounds like there's a PaaS layer there to allow you guys to have a consistent experience across clouds and out to the edge, wherever. Is that correct? >>Yeah, so we've basically built more or less platform engineering, this is the new hot phrase. Kubernetes has made a lot of things easy for us because we've built a platform that our developers can lean on, and they only have to learn one way of deploying and managing their application. And that just gets all of the underlying infrastructure out of the way and lets them focus on delivering Influx Cloud. >>Yeah, and I know I'm taking a little bit of a tangent, but that, I'll call it a PaaS layer if I can use that term. Are there specific attributes to InfluxDB, or is it kind of just generally off-the-shelf PaaS? Is there any purpose-built capability there that is value add, or is it pretty much generic? >>So we really look at things through a build-versus-buy lens. Some things we want to leverage cloud provider services for, for instance Postgres databases for metadata; perhaps we'll get that off of our plate and let someone else run that. We're going to deploy a platform that our engineers can deliver on, that has consistency, that is all generated from code, that we, as an SRE group, as an ops team, can manage with very few people really, and we can stamp out clusters across multiple regions in no time. >>So sometimes you build, sometimes you buy. How do you make those decisions, and what does that mean for the platform and for customers? >>Yeah, so what we're doing is what everybody else will do: we're looking for trade-offs that make sense. You know, we really want to protect our customers' data.
So we look for services that support our own software with the most uptime, reliability, and durability we can get. Some things are just going to be easier to have a cloud provider take care of on our behalf. We make that transparent for our own team, and of course as customers you don't even see that. But we don't want to try to reinvent the wheel; like I had mentioned with SQL data stores for metadata, perhaps let's build on top of what these three large cloud providers have already perfected. We can then focus on our platform engineering, and we can have our developers focus on the InfluxData software, the Influx Cloud software. >>So take it to the customer level. What does it mean for them? What's the value that they're gonna get out of all these innovations that we've been talking about today, and what can they expect in the future? >>So first of all, people who use the OSS product are really gonna be at home on our cloud platform. You can run it on your desktop machine, on a single server, what have you, but then you want to scale up. We have some 270 terabytes of data across over 4 billion series keys that people have stored, so there's a proven ability to scale. Now, in terms of the open source software and how we've developed the platform, you're getting a highly available, high-cardinality time series platform. We manage it, and really, as I mentioned earlier, we can keep up with the state of the art. We keep reinventing, we keep deploying things in real time. We deploy to our platform every day, repeatedly, all the time, and it's that continuous deployment that allows us to continue testing things in flight, rolling out changes, new features, better ways of doing deployments, safer ways of doing deployments. >>All of that happens behind the scenes. And like we had mentioned earlier, Kubernetes allows us to get that done. We couldn't do it without having that platform as a base layer for us to then put our software on. So we iterate quickly. When you're on the Influx Cloud platform, you really are able to take advantage of new features immediately. We roll things out every day, and as those things go into production, you have the ability to use them. And so in the end, we want you to focus on getting actual insights from your data instead of running infrastructure. Let us do that for you. >>And that makes sense. But are the innovations that we're talking about in the evolution of InfluxDB a sort of natural evolution for existing customers? I'm sure the answer is both, but is it opening up new territory for customers? Can you add some color to that? >>Yeah, it really is a little bit of both. Any engineer will say, well, it depends. So cloud native technologies are really the hot thing. IoT, industrial IoT especially: people want to just shove tons of data out there and be able to do queries immediately, and they don't wanna manage infrastructure. What we've started to see are people that use the cloud service as their data store backbone, and then they use edge computing with our OSS product to ingest data from, say, multiple production lines and downsample that data, and send the rest of that data off to Influx Cloud where the heavy processing takes place. So really, us being in all the different clouds and iterating on that, and being in all sorts of different regions, allows people to really get out of the business of trying to manage that big data, and have us take care of that. And of course, as we change the platform, end users benefit from that immediately.
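For a sense of what that edge pattern means, here is a conceptual sketch in plain Rust, with invented types and numbers. In a real deployment the downsampling would typically be handled by a task in the OSS product rather than hand-rolled code; the sketch only shows the shape of the idea, collapsing high-frequency readings into windowed means at the edge and shipping the much smaller series upstream.

```rust
// A conceptual sketch in plain Rust (invented types and numbers, no InfluxDB
// APIs) of the edge pattern described above: collapse high-frequency readings
// into one mean per window before shipping the much smaller series upstream.

struct Point {
    timestamp_ms: i64,
    value: f64,
}

// Assumes `points` is sorted by timestamp, as sensor streams usually are.
fn downsample(points: &[Point], window_ms: i64) -> Vec<Point> {
    let mut out = Vec::new();
    let mut i = 0;
    while i < points.len() {
        let window_start = points[i].timestamp_ms - (points[i].timestamp_ms % window_ms);
        let mut sum = 0.0;
        let mut count = 0u32;
        while i < points.len() && points[i].timestamp_ms < window_start + window_ms {
            sum += points[i].value;
            count += 1;
            i += 1;
        }
        out.push(Point { timestamp_ms: window_start, value: sum / count as f64 });
    }
    out
}

fn main() {
    // Pretend a production line emits a reading every 10 ms.
    let raw: Vec<Point> = (0..5_000_i64)
        .map(|n| Point { timestamp_ms: n * 10, value: 20.0 + (n % 7) as f64 })
        .collect();

    // Keep one averaged point per second at the edge; send only these upstream.
    let summarized = downsample(&raw, 1_000);
    println!("{} raw points -> {} points sent to the cloud", raw.len(), summarized.len());
}
```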
>>And so obviously you're taking away a lot of the heavy lifting for the infrastructure. Would you say the same thing about security, especially as you go out to IoT and the edge? How should we be thinking about the value that you bring from a security perspective? >>Yeah, we take security super seriously. It's built into our DNA. We do a lot of work to ensure that our platform is secure and that the data we store is kept private. It's of course always a concern; you see in the news all the time companies being compromised. That's something that you can have an entire team working on, which we do, to make sure that the data that you have, whether it's in transit or at rest, is always kept secure and is only viewable by you. You look at things like software bills of materials: if you're running this yourself, you have to go vet all sorts of different pieces of software, and we do that as we use new tools. That's just part of our job, to make sure that the platform we're running has fully vetted software, and with open source especially, that's a lot of work. And so it's definitely new territory. Supply chain attacks are definitely happening at a higher clip than they used to, but that is really just part of a day in the life for folks like us that are building platforms. >>Yeah, and that's key. I mean, especially when you start getting into, you know, we talk about IoT and the operations technologies, the engineers running that infrastructure. Historically, as you know, Tim, they would air-gap everything. That's how they kept it safe. But that's not feasible anymore. Everything's connected now, right? And so you've gotta have a partner that, again, takes away that heavy lifting and the R&D so you can focus on some of the other activities. Right? Give us the last word and the key takeaways from your perspective. >>Well, you know, from my perspective I see it as a two-lane approach with Influx, with any time series data. You've got a lot of stuff that you're gonna run on-prem, what you had mentioned, air-gapping; sure, there's plenty of need for that. But at the end of the day, for people that don't want to run big data centers, people that want to entrust their data to a company that's got a full platform set up for them that they can build on, and send that data over to the cloud, the cloud is not going away. I think a more hybrid approach is where the future lives, and that's what we're prepared for. >>Tim, really appreciate you coming to the program. Great stuff. Good to see you. >>Thanks very much. Appreciate it. >>Okay, in a moment I'll be back to wrap up today's session. You're watching The Cube. >>Are you looking for some help getting started with InfluxDB, Telegraf, or Flux? >>Check out InfluxDB University, >>where you can find our entire catalog of free training that will help you make the most of your time series data. >>Get started for free at influxdbu.com. >>We'll see you in class.
>>Okay, so we heard today from three experts on time series and data how the InfluxDB platform is evolving to support new ways of analyzing large data sets very efficiently and effectively in real time. And we learned that key open source components like Apache Arrow, the Rust programming language, DataFusion, and Parquet are being leveraged to support real-time data analytics at scale. We also learned about the contributions and importance of open source software, and how the InfluxDB community is evolving the platform with minimal disruption to support new workloads, new use cases, and the future of real-time data analytics. Now, remember, these sessions are all available on demand. You can go to thecube.net to find those. Don't forget to check out siliconangle.com for all the news related to things enterprise and emerging tech, and you should also check out influxdata.com. There you can learn about the company's products, you'll find developer resources like free courses, and you can join the developer community and work with your peers to learn and solve problems. And there are plenty of other resources around use cases and customer stories on the website. This is Dave Vellante. Thank you for watching Evolving InfluxDB into the Smart Data Platform, made possible by InfluxData and brought to you by theCUBE, your leader in enterprise and emerging tech coverage.

Published Date: Nov 2, 2022

SUMMARY :

we talked about how in theory, those time slices could be taken, you know, As is often the case, open source software is the linchpin to those innovations. We hope you enjoy the program. I appreciate the time. Hey, explain why Influx db, you know, needs a new engine. now, you know, related to requests like sql, you know, query support, things like that, of the real first influx DB cloud, you know, which has been really successful. as they're giving us feedback, et cetera, has has, you know, pointed us in a really good direction shift from, you know, time series, you know, specialist to real time analytics better handle those queries from a performance and a, and a, you know, a time to response on the queries, you know, all of the, the real time queries, the, the multiple language query support, the, the devices and you know, the sort of highly distributed nature of all of this. I always thought, you know, real, I always thought of real time as before you lose the customer, you know, and that's one of the things that really triggered us to know that we were, we were heading in the right direction, a look at the, the libraries in on our GitHub and, you know, can ex inspect it and even can try And so just, you know, being careful, maybe a little cautious in terms And you can do some experimentation and, you know, using the cloud resources. You know, this is a new very sort of popular systems language, you know, really fast real time inquiries that we talked about, as well as for very large, you know, but it's popularity is, is you know, really starting to hit that steep part of the S-curve. going out and you know, it'll be highly featured on our, our website, you know, the whole database, the ecosystem as it expands out into to, you know, this vertically oriented Really appreciate your time. Look forward to it. goes, goes beyond just the historical into the real time really hot area. There's no need to worry about provisioning because you only pay for what you use. InfluxDB uses a single API across the entire platform suite so you can build on Influx DB is leveraging to increase the granularity of time series analysis analysis and bring the Hi, thank you so much. it's gonna give you faster query speeds, you store files and object storage, it aims to have no limits on cardinality and also allow you to write any kind of event data that It's really, the adoption is really starting to get steep on all the control, all the fine grain control, you need to take you know, the community is modernizing the platform, but I wanna talk about Apache And so you can answer that question and you have those immediately available to you. out that one temperature value that you want at that one time stamp and do that for every talking about is really, you know, kind of native i, is it not as effective? Yeah, it's, it's not as effective because you have more expensive compression and So let's talk about Arrow Data Fusion. It also has a PANDAS API so that you could take advantage of PANDAS What are you doing with and Pandas, so it supports a broader ecosystem. What's the value that you're bringing to the community? And I think kind of the idea here is that if you can improve kind of summarize, you know, where what, what the big takeaways are from your perspective. the hard work questions and you All right, thank you so much Anise for explaining I really appreciate it. 
Data and we're gonna talk about how you update a SAS engine while I'm really glad that we went with InfluxDB Cloud for our hosting They listened to the challenges we were facing and they helped Good to see you. Good to see you. So my question to you is, So yeah, you know, influx really, we thrive at the intersection of commercial services and open, You know, you look at Kubernetes for example, But, but really Kubernetes is just, you know, Azure, and Google and figure out how to deliver services on those three clouds with all of their differences. to the edge, you know, wherever is that, is that correct? This is the new hot phrase, you know, it, it's, Kubernetes has made a lot of things easy for us Is that, are there specific attributes to Influx db as an SRE group, as an ops team, that we can manage with very few people So how, so sometimes you build, sometimes you buy it. And of course for customers you don't even see that, but we don't want to try to reinvent the wheel, and really as, as I mentioned earlier, we can keep up with the state of the art. the end we want you to focus on getting actual insights from your data instead of running infrastructure, So cloud native technologies are, are really the hot thing. You see in the news all the time, companies being compromised, you know, technologies, the engineers running the, that infrastructure, you know, historically, as you know, take away that heavy lifting to r and d so you can focus on some of the other activities. with influx, with Anytime series data, you know, you've got a lot of stuff that you're gonna run on-prem, Tim, really appreciate you coming to the program. Thanks very much. Okay, in a moment I'll be back to wrap up. brought to you by the Cube, your leader in enterprise and emerging tech coverage.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Brian GilmorePERSON

0.99+

David BrownPERSON

0.99+

Tim YoakumPERSON

0.99+

Lisa MartinPERSON

0.99+

Dave VolantePERSON

0.99+

Dave VellantePERSON

0.99+

BrianPERSON

0.99+

DavePERSON

0.99+

Tim YokumPERSON

0.99+

StuPERSON

0.99+

Herain OberoiPERSON

0.99+

JohnPERSON

0.99+

Dave ValantePERSON

0.99+

Kamile TaoukPERSON

0.99+

John FourierPERSON

0.99+

Rinesh PatelPERSON

0.99+

Dave VellantePERSON

0.99+

Santana DasguptaPERSON

0.99+

EuropeLOCATION

0.99+

CanadaLOCATION

0.99+

BMWORGANIZATION

0.99+

CiscoORGANIZATION

0.99+

MicrosoftORGANIZATION

0.99+

ICEORGANIZATION

0.99+

AmazonORGANIZATION

0.99+

Jack BerkowitzPERSON

0.99+

AustraliaLOCATION

0.99+

NVIDIAORGANIZATION

0.99+

TelcoORGANIZATION

0.99+

VenkatPERSON

0.99+

MichaelPERSON

0.99+

CamillePERSON

0.99+

Andy JassyPERSON

0.99+

IBMORGANIZATION

0.99+

Venkat KrishnamachariPERSON

0.99+

DellORGANIZATION

0.99+

Don TapscottPERSON

0.99+

thousandsQUANTITY

0.99+

Palo AltoLOCATION

0.99+

Intercontinental ExchangeORGANIZATION

0.99+

Children's Cancer InstituteORGANIZATION

0.99+

Red HatORGANIZATION

0.99+

telcoORGANIZATION

0.99+

Sabrina YanPERSON

0.99+

TimPERSON

0.99+

SabrinaPERSON

0.99+

John FurrierPERSON

0.99+

GoogleORGANIZATION

0.99+

MontyCloudORGANIZATION

0.99+

AWSORGANIZATION

0.99+

LeoPERSON

0.99+

COVID-19OTHER

0.99+

Santa AnaLOCATION

0.99+

UKLOCATION

0.99+

TusharPERSON

0.99+

Las VegasLOCATION

0.99+

ValentePERSON

0.99+

JL ValentePERSON

0.99+

1,000QUANTITY

0.99+

Evolving InfluxDB into the Smart Data Platform Full Episode


 

>>This past May, The Cube in collaboration with Influx data shared with you the latest innovations in Time series databases. We talked at length about why a purpose built time series database for many use cases, was a superior alternative to general purpose databases trying to do the same thing. Now, you may, you may remember the time series data is any data that's stamped in time, and if it's stamped, it can be analyzed historically. And when we introduced the concept to the community, we talked about how in theory, those time slices could be taken, you know, every hour, every minute, every second, you know, down to the millisecond and how the world was moving toward realtime or near realtime data analysis to support physical infrastructure like sensors and other devices and IOT equipment. A time series databases have had to evolve to efficiently support realtime data in emerging use cases in iot T and other use cases. >>And to do that, new architectural innovations have to be brought to bear. As is often the case, open source software is the linchpin to those innovations. Hello and welcome to Evolving Influx DB into the smart Data platform, made possible by influx data and produced by the Cube. My name is Dave Valante and I'll be your host today. Now in this program we're going to dig pretty deep into what's happening with Time series data generally, and specifically how Influx DB is evolving to support new workloads and demands and data, and specifically around data analytics use cases in real time. Now, first we're gonna hear from Brian Gilmore, who is the director of IOT and emerging technologies at Influx Data. And we're gonna talk about the continued evolution of Influx DB and the new capabilities enabled by open source generally and specific tools. And in this program you're gonna hear a lot about things like Rust, implementation of Apache Arrow, the use of par k and tooling such as data fusion, which powering a new engine for Influx db. >>Now, these innovations, they evolve the idea of time series analysis by dramatically increasing the granularity of time series data by compressing the historical time slices, if you will, from, for example, minutes down to milliseconds. And at the same time, enabling real time analytics with an architecture that can process data much faster and much more efficiently. Now, after Brian, we're gonna hear from Anna East Dos Georgio, who is a developer advocate at In Flux Data. And we're gonna get into the why of these open source capabilities and how they contribute to the evolution of the Influx DB platform. And then we're gonna close the program with Tim Yokum, he's the director of engineering at Influx Data, and he's gonna explain how the Influx DB community actually evolved the data engine in mid-flight and which decisions went into the innovations that are coming to the market. Thank you for being here. We hope you enjoy the program. Let's get started. Okay, we're kicking things off with Brian Gilmore. He's the director of i t and emerging Technology at Influx State of Bryan. Welcome to the program. Thanks for coming on. >>Thanks Dave. Great to be here. I appreciate the time. >>Hey, explain why Influx db, you know, needs a new engine. Was there something wrong with the current engine? What's going on there? >>No, no, not at all. I mean, I think it's, for us, it's been about staying ahead of the market. 
I think, you know, if we think about what our customers are coming to us sort of with now, you know, related to requests like sql, you know, query support, things like that, we have to figure out a way to, to execute those for them in a way that will scale long term. And then we also, we wanna make sure we're innovating, we're sort of staying ahead of the market as well and sort of anticipating those future needs. So, you know, this is really a, a transparent change for our customers. I mean, I think we'll be adding new capabilities over time that sort of leverage this new engine, but you know, initially the customers who are using us are gonna see just great improvements in performance, you know, especially those that are working at the top end of the, of the workload scale, you know, the massive data volumes and things like that. >>Yeah, and we're gonna get into that today and the architecture and the like, but what was the catalyst for the enhancements? I mean, when and how did this all come about? >>Well, I mean, like three years ago we were primarily on premises, right? I mean, I think we had our open source, we had an enterprise product, you know, and, and sort of shifting that technology, especially the open source code base to a service basis where we were hosting it through, you know, multiple cloud providers. That was, that was, that was a long journey I guess, you know, phase one was, you know, we wanted to host enterprise for our customers, so we sort of created a service that we just managed and ran our enterprise product for them. You know, phase two of this cloud effort was to, to optimize for like multi-tenant, multi-cloud, be able to, to host it in a truly like sass manner where we could use, you know, some type of customer activity or consumption as the, the pricing vector, you know, And, and that was sort of the birth of the, of the real first influx DB cloud, you know, which has been really successful. >>We've seen, I think like 60,000 people sign up and we've got tons and tons of, of both enterprises as well as like new companies, developers, and of course a lot of home hobbyists and enthusiasts who are using out on a, on a daily basis, you know, and having that sort of big pool of, of very diverse and very customers to chat with as they're using the product, as they're giving us feedback, et cetera, has has, you know, pointed us in a really good direction in terms of making sure we're continuously improving that and then also making these big leaps as we're doing with this, with this new engine. >>Right. So you've called it a transparent change for customers, so I'm presuming it's non-disruptive, but I really wanna understand how much of a pivot this is and what, what does it take to make that shift from, you know, time series, you know, specialist to real time analytics and being able to support both? >>Yeah, I mean, it's much more of an evolution, I think, than like a shift or a pivot. You know, time series data is always gonna be fundamental and sort of the basis of the solutions that we offer our customers, and then also the ones that they're building on the sort of raw APIs of our platform themselves. You know, the time series market is one that we've worked diligently to lead. 
I mean, when it comes to metrics, especially sensor data and app and infrastructure metrics, if we're being honest, our user base is well aware that the way we were architected was much more toward those backwards-looking, historical types of analytics, which are key for troubleshooting and for making sure you don't run into the same problem twice. But we had to ask ourselves: what can we do to better handle those queries from a performance and time-to-response standpoint, and can we get to the point where the result sets come back so quickly from the time of query that we can limit that window down to minutes, and then seconds? >>And now with this new engine, we're really starting to talk about a query window that could be returning results in milliseconds from the time the data hit the ingest queue. That's getting to the point where, as soon as your data is available, you can use it, query it, visualize it, and do all those magical things with it. Getting to a place where we're saying yes to the customer on all of the real-time queries and the multiple language query support was hard, but we're now at a spot where we can start introducing that to a limited number of customers, strategic customers and strategic availability zones to start, but everybody over time. >>So you're basically going from what happened (you can still do that, obviously) to what's happening now, in the moment? >>Yeah. If you think about time, it's always sort of the past, right? In the moment right now, whether you're talking about a millisecond ago or a minute ago, that's pretty much "right now" for most people, especially in these use cases where you have other components of latency induced by the underlying data collection, the architecture, the infrastructure, the devices, and the highly distributed nature of all of this. So getting a customer or a user able to use the data as soon as it is available is what we're after here. >>I always thought of real time as before you lose the customer, but in this context, maybe it's before the machine blows up. >>Yeah, operational real time is different, and that's one of the things that really told us we were heading in the right direction: just how many operational customers we have. Everything from aerospace and defense, with companies monitoring satellites, to tons of industrial users using us as a process historian on the plant floor. If we can satisfy their demands for a real-time historical perspective, that's awesome. What we're going to do here is start to edge into the real time they're used to, the millisecond response times they expect of their control systems, certainly not of their historians and databases. >>Is this available, these innovations, to InfluxDB Cloud customers only? Who can access this capability? >>Yeah, commercially and today, yes.
We want to emphasize that's for now; our goal is to get our latest and greatest to everybody over time, of course. One of the things we had to do here was double down on our commitment to open source and availability, so anybody today can take a look at the libraries on our GitHub, inspect them, and even try to implement or execute some of it themselves in their own infrastructure. We're committed to bringing our latest and greatest to our cloud customers first for a couple of reasons. Number one, there are big workloads there and those customers have high expectations of us. Number two, it also gives us the opportunity to monitor a little more closely how it's working, how they're using it, and how the system itself is performing. >>Being careful, maybe a little cautious, in terms of how big we go with this right away both limits the risk of any issues that can come with new software rollouts (we haven't seen anything so far) and gives us the opportunity to have meaningful conversations with a small group of users who are using the product. Once we get through that and they give us two thumbs up, it'll be open the gates and let everybody in. It's going to be an exciting time for the whole ecosystem. >>Yeah, that makes a lot of sense, and you can do some experimentation using the cloud resources. Let's dig into some of the architectural and technical innovations that are going to help deliver on this vision. What should we know there? >>Well, foundationally, we built the new core on Rust. This is a newer, very popular systems language: it's extremely efficient, but it's also built for speed and memory safety, which goes back to us being able to deliver it in a way we can inspect very closely, while also relying on the fact that it's going to behave well when it does find error conditions. We've loved working with Go, and a lot of our libraries will continue to be implemented in Go, but when it came to this particular new engine, for that power, performance and stability, Rust was critical. On top of that, we've also integrated Apache Arrow and Apache Parquet for persistence. For anybody who's really familiar with the nuts and bolts of our backend, our TSI index and our TSM (time-structured merge tree) storage, this is a big break from that: Arrow on the in-memory side and Parquet on the on-disk side. >>It allows us to present a unified set of APIs both for those really fast real-time queries we talked about and for very large historical bulk data archives in that Parquet format, which is also cool because there's an entire ecosystem popping up around Parquet in the machine learning community. And to get that all to work, we glued it together with Arrow Flight; that's what we're using as our RPC component. It handles the orchestration and the transportation of the columnar data.
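As an illustrative aside for readers who want to see the Arrow-in-memory, Parquet-on-disk pattern Brian describes, here is a minimal sketch using the open source pyarrow library. It is not InfluxDB IOx code, and the column names and file path are made-up placeholders; it simply shows the same idea of holding columnar data in memory with Arrow and persisting it durably as Parquet.

```python
# Minimal sketch of the Arrow (in-memory) plus Parquet (on-disk) pattern
# described above, using the open source pyarrow library. This is not
# InfluxDB IOx code; names and paths are placeholders for illustration.
import pyarrow as pa
import pyarrow.parquet as pq

# An in-memory, column-oriented Arrow table of time series points.
table = pa.table({
    "time": pa.array([1_000, 2_000, 3_000], type=pa.timestamp("ms")),
    "location": ["room", "room", "stove"],   # tag-like column
    "temperature": [21.5, 21.5, 180.0],      # field-like column
})

# Persist the same columnar data to Parquet for durable, compressed storage.
pq.write_table(table, "measurements.parquet")

# Read it back later for historical or bulk analysis.
historical = pq.read_table("measurements.parquet")
print(historical.schema)
print(historical.to_pydict())
```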
Now we're moving to a true columnar database model for this version of the engine, and it removes a lot of overhead for us in terms of having to manage all of that serialization and deserialization. To that point again, it blurs the line between real-time and historical data: it's highly optimized for streaming micro-batches and for batches, but for true streaming as well. >>Yeah. It's funny you mention Rust. It's been around for a while, but its popularity is really starting to hit the steep part of the S-curve, and we're going to dig into more of that. Is there anything else we should know about, Brian? Give us the last word. >>Well, first I'd like everybody watching to take a look at what we're offering in terms of early access and beta programs. If you want to participate or work with the new engine in early access, please reach out to the team. There's a lot of communication going out, and it'll be highly featured on our website, but reach out to the team; believe it or not, we have a lot more going on than just the new engine. There are also other programs and things we're offering to customers around the user interface, data collection and so on. And if you're a customer of ours and you have a sales or commercial team that you work with, reach out to them and see what you can get access to, because we can flip a lot of stuff on, especially in cloud, through feature flags. >>If there's something new that you want to try out, we'd just love to hear from you. Our goal is that as we give you access to all of these cool new features, you give us continuous feedback on these products and services: not only what you need today, but what you'll need tomorrow to build the next versions of your business. Because the whole database, and the ecosystem as it expands out into this vertically oriented stack of cloud services, enterprise databases and edge databases, is going to be what we all make it together, not just those of us who are employed by InfluxData. And finally, please watch Anais' and Tim's sessions. These are two of our best and brightest; they're totally brilliant, completely pragmatic, and most of all customer obsessed, which is amazing. There's honestly no better take on the technical details of this, especially when it comes to the value these investments will bring to our customers and our communities. So I encourage you to pay more attention to them than you did to me, for sure. >>Brian Gilmore, great stuff. Really appreciate your time. Thank you. >>Yeah, thanks Dave. It was awesome. Looking forward to it. >>Yeah, me too. Looking forward to seeing how the community actually applies these new innovations and goes beyond just the historical into the real-time, really hot area. As Brian said, in a moment I'll be right back with Anais Dotis-Georgiou to dig into the critical aspects of key open source components of the InfluxDB engine, including Rust, Arrow, Parquet and DataFusion. Keep it right there. You don't want to miss this. >>Time series data is everywhere.
The number of sensors, systems and applications generating time series data increases every day. All of these data sources producing so much data can cause analysis paralysis. InfluxDB is an entire platform designed with everything you need to quickly build applications that generate value from time series data. InfluxDB Cloud is a serverless solution, which means you don't need to buy or manage your own servers, and there's no need to worry about provisioning because you only pay for what you use. InfluxDB Cloud is fully managed, so you get the newest features and enhancements as they're added to the platform's code base. It also means you can spend your time building solutions and delivering value to your users instead of wasting time and effort managing something else. InfluxDB Cloud offers a range of security features to protect your data: multiple layers of redundancy ensure you don't lose any data, and access controls ensure that only the people who should see your data can see it. >>And encryption protects your data at rest and in transit between any of our regions or cloud providers. InfluxDB uses a single API across the entire platform suite, so you can build on open source, deploy to the cloud, and then easily query data in the cloud, at the edge or on-prem using the same scripts. And InfluxDB is schemaless, automatically adjusting to changes in the shape of your data without requiring changes in your application logic. InfluxDB Cloud is production ready from day one. All it needs is your data and your imagination. Get started today at influxdata.com/cloud. >>Okay, we're back. I'm Dave Vellante with theCUBE, and you're watching Evolving InfluxDB into the Smart Data Platform, made possible by InfluxData. Anais Dotis-Georgiou is here; she's a developer advocate for InfluxData, and we're going to dig into the rationale and value contribution behind several open source technologies that InfluxDB is leveraging to increase the granularity of time series analysis and bring the world of data into real-time analytics. Anais, welcome to the program. Thanks for coming on. >>Hi, thank you so much. It's a pleasure to be here. >>You're very welcome. Okay, so IOx is being touted as this next-gen open source core for InfluxDB. My understanding is that it leverages in-memory processing, of course, for speed; it's a columnar store, so it gives you compression efficiency and faster query speeds; and you store files in object storage, so you get a very cost-effective approach. Are these the salient points on the platform? I know there are probably dozens of other features, but what are the high-level value points that people should understand? >>Sure, that's a great question. Some of the main requirements that IOx is trying to achieve, and some of the most impressive ones to me: the first is that it aims to have no limits on cardinality, and to allow you to write any kind of event data you want, whether that's a tag or a field. It also aims to deliver best-in-class performance on analytics queries, in addition to our already well-served metrics queries. We also want operator control over memory usage, so you should be able to define how much memory is used for buffering, caching and query processing. Another really important part is the ability to do bulk data export and import, which is super useful.
Also, broader ecosystem compatibility: where possible, we aim to use and embrace emerging standards in the data analytics ecosystem and to be compatible with things like SQL, Python, and maybe even pandas in the future. >>Okay, so a lot there. Now, we talked to Brian about how you're using Rust, which is not a new programming language, and of course we had some drama around Rust during the pandemic with the Mozilla layoffs, but the formation of the Rust Foundation really addressed any of those concerns; you've got big guns like Amazon, Google and Microsoft throwing their collective weight behind it, and adoption is really starting to get steep on the S-curve. So lots of platforms, lots of adoption with Rust, but why Rust as an alternative to, say, C++? >>Sure, that's a great question. Rust was chosen because of its exceptional performance and reliability. While Rust is syntactically similar to C++ and has similar performance, and it also compiles to native code like C++, unlike C++ it has much better memory safety. Memory safety is protection against bugs or security vulnerabilities that lead to excessive memory usage or memory leaks, and Rust achieves this memory safety through its innovative type system. Additionally, it doesn't allow dangling pointers, and dangling pointers are the main class of errors that lead to exploitable security vulnerabilities in languages like C++. So Rust helps meet that requirement of having no limits on cardinality, for example, because we're also using the Rust implementation of Apache Arrow and that control over memory. And Rust's packaging system, crates.io, offers everything you need out of the box to have features like async and await to fix race conditions, protection against buffer overflows, and thread-safe async caching structures as well. Essentially, it gives you all the fine-grained control you need to use memory and all of your resources as well as possible, so that you can handle those really, really high cardinality use cases. >>Yeah, and the more I learn about the new engine and the platform, IOx and so on, you see things like, in the old days, and not even the old days, even today, you do a lot of garbage collection in these systems, and there's an inverse impact relative to performance. So it looks like the community is really modernizing the platform. But I want to talk about Apache Arrow for a moment. It's designed to address the constraints that are associated with analyzing large data sets. We know that, but please explain what Arrow is and what it brings to InfluxDB. >>Sure, yeah. Arrow is a framework for defining in-memory columnar data, and so much of the efficiency and performance of IOx comes from taking advantage of columnar data structures. If you don't mind, I'll take a moment to illustrate why columnar data structures are so valuable. Let's pretend we are gathering field data about the temperature in our room and also maybe the temperature of our stove, and in our table we have those two temperature values as well as maybe a measurement value, a timestamp value, and maybe some other tag values that describe what room and what house, et cetera, we're getting this data from.
And so you can picture this table where we have two rows with the two temperature values, for our room and for the stove. Well, usually our room temperature is regulated, so those values don't change very often. >>When you have column-oriented storage, essentially you take each column and group it together. If that's the case, and you're just taking temperature values from the room, and a lot of those temperature values are the same, then you can imagine how equal values end up neighboring each other in the storage format, and this provides a really perfect opportunity for cheap compression. That cheap compression then enables high cardinality use cases. It also enables faster scan rates: if you want to find the min and max value of the temperature in the room across a thousand different points, you only have to read those thousand values in that one column in order to answer the question, and you have them immediately available to you. But let's contrast this with a row-oriented storage solution instead, so that we can better understand the benefits of column-oriented storage. >>If you had row-oriented storage, you'd first have to look at every field, like the temperature in the room and the temperature of the stove; you'd have to go across every tag value that maybe describes where the room is located or what model the stove is, and every timestamp. You'd then have to pluck out the one temperature value you want at that one timestamp, and do that for every single row. So you're scanning across a ton more data, and that's why row-oriented storage doesn't provide the same efficiency as columnar. And Apache Arrow is an in-memory columnar data framework, so that's where a lot of the advantages come from. >>Okay. So you've basically described a traditional database, a row approach, but I've seen a lot of traditional databases say, okay, now we can handle columnar format too, versus what you're talking about, which is really kind of native. Is the bolt-on format not as effective? Can you elucidate on that front? >>Yeah, it's not as effective, because you have more expensive compression and because you can't scan across the values as quickly. Those are pretty much the main reasons why row-oriented storage isn't as efficient as column-oriented storage. >>Got it. So let's talk about Arrow DataFusion. What is DataFusion? I know it's written in Rust, but what does it bring to the table here? >>Sure. It's an extensible query execution framework, and it uses Arrow as its in-memory format. The way it helps InfluxDB IOx is that it's great if you can write an unlimited amount of cardinality into InfluxDB, but if you don't have a query engine that can successfully query that data, then I don't know how much value it is for you. So DataFusion helps enable the query processing and transformation of that data. It also has a pandas API, so you can take advantage of pandas DataFrames as well, and all of the machine learning tools associated with pandas. >>Okay. You're also leveraging Parquet in the platform, because we heard a lot about Parquet in the middle of the last decade as a storage format to improve on Hadoop column stores. What are you doing with Parquet, and why is it important? >>Sure. So Parquet is the column-oriented, durable file format.
It's important because it enables bulk import and export, and it has compatibility with Python and pandas, so it supports a broader ecosystem. Parquet files also take up very little disk space and are faster to scan because, again, they're column-oriented. In particular, I think Parquet files are something like 16 times cheaper than CSV files, just as a point of reference. So that's essentially a lot of the benefit of Parquet. >>Got it. Very popular. So, Anais, what exactly is InfluxData focusing on as a committer to these projects? What is your focus? What's the value that you're bringing to the community? >>Sure. InfluxData has contributed a lot of different things to the Apache ecosystem. For example, they contributed an implementation of Apache Arrow in Go, and that will support querying with Flux. There have also been quite a few contributions to DataFusion for things like memory optimization and support for additional SQL features, like timestamp arithmetic, EXISTS clauses and memory control. So yeah, Influx has contributed a lot to the Apache ecosystem and continues to do so. And the idea here is that if you can improve these upstream projects, the long-term strategy is that the more you contribute and build those up, the more you perpetuate that cycle of improvement, and the more you invest in your own project as well. It's that kind of symbiotic relationship and appreciation of the open source community. >>Yeah. Got it. You've got that virtuous cycle going, what people call the flywheel. Give us your last thoughts and kind of summarize what the big takeaways are from your perspective. >>I think the big takeaway is that InfluxData is doing a lot of really exciting things with InfluxDB IOx, and if you are interested in learning more about the technologies that Influx is leveraging to produce IOx, the challenges associated with it and all of the hard work and questions, then I encourage you to go to the monthly tech talks and community office hours; they happen every second Wednesday of the month at 8:30 AM Pacific time. There's also a community forum and a community Slack channel; look for the influxdb_iox channel specifically to learn more about how to join those office hours and monthly tech talks, and to ask any questions you have about IOx, what to expect, and what you'd like to learn more about. As a developer advocate, I want to answer your questions, so if there's a particular technology or stack you want to dive deeper into, and you want more explanation about how InfluxDB leverages it to build IOx, I will be really excited to produce content on that topic for you. >>Yeah, that's awesome. You guys have a really rich community, collaborate with your peers, solve problems, and you guys are super responsive, so we really appreciate that. All right, thank you so much, Anais, for explaining all of this open source stuff to the audience and why it's important to the future of data. >>Thank you. I really appreciate it. >>All right, you're very welcome. Okay, stay right there, and in a moment I'll be back with Tim Yoakum. He's the director of engineering for InfluxData, and we're going to talk about how you update a SaaS engine while the plane is flying at 30,000 feet. You don't want to miss this.
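As another aside, here is a rough Python sketch, using pandas and pyarrow, of the two claims discussed above: repetitive time series columns compress well in a column-oriented Parquet file, and a min/max scan only has to touch the one column it needs. The exact CSV-versus-Parquet size ratio depends entirely on the data, so the 16x figure quoted above is the speaker's reference point rather than something this toy example is guaranteed to reproduce; the resulting Parquet file could then be handed to a query engine such as DataFusion.

```python
# Rough sketch comparing a CSV file with a column-oriented Parquet file for
# the kind of repetitive time series data described above. File names are
# placeholders, and the size ratio you see will depend on your own data.
import os
import pandas as pd
import pyarrow as pa
import pyarrow.compute as pc
import pyarrow.parquet as pq

# A regulated room temperature barely changes, so the column is highly repetitive.
df = pd.DataFrame({
    "time": pd.date_range("2022-10-01", periods=100_000, freq="s"),
    "location": ["room"] * 100_000,
    "temperature": [21.5] * 100_000,
})

df.to_csv("temps.csv", index=False)
table = pa.Table.from_pandas(df)
pq.write_table(table, "temps.parquet")  # columnar and compressed by default

print("csv bytes:    ", os.path.getsize("temps.csv"))
print("parquet bytes:", os.path.getsize("temps.parquet"))

# A min/max scan only reads the temperature column, not every row's tags
# and timestamps (the faster-scan property described in the conversation).
temps = table.column("temperature")
print("min:", pc.min(temps).as_py(), "max:", pc.max(temps).as_py())
```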
>>I'm really glad that we went with InfluxDB Cloud for our hosting, because it has saved us a ton of time. It's helped us move faster, it's saved us money, and InfluxDB has good support. My name's Alex Nauda. I am CTO at Nobl9. Nobl9 is a platform to measure and manage service level objectives, which are a great way of measuring the reliability of your systems. You can essentially think of an SLO, the product we're providing to our customers, as a bunch of time series, so we need a way to store that data and the corresponding time series related to it. The main reason we settled on InfluxDB as we were shopping around is that InfluxDB has a very flexible query language, and as a general-purpose time series database it basically had the set of features we were looking for. >>As our platform has grown, we've found InfluxDB Cloud to be a really scalable solution. We can quickly iterate on new features and functionality because InfluxDB Cloud is entirely managed; it has probably saved us at least a full additional person on our team. We also have the option of running InfluxDB Enterprise, which gives us the ability to host off the cloud or in a private cloud if that's preferred by a customer. InfluxData has been really flexible in adapting to the hosting requirements that we have; they listened to the challenges we were facing and helped us solve them. As we've continued to grow, I'm really happy we have InfluxData by our side. >>Okay, we're back with Tim Yoakum, who is the director of engineering at InfluxData. Tim, welcome. Good to see you. >>Good to see you. Thanks for having me. >>You're really welcome. Listen, we've been covering open source software in theCUBE for more than a decade, and we've watched the innovation from the big data ecosystem. The cloud has been built out on open source, mobile, social platforms, key databases, and of course InfluxDB, and InfluxData has been a big consumer of and contributor to open source software. So my question to you is, where have you seen the biggest bang for the buck from open source software? >>Yeah, InfluxData really thrives at the intersection of commercial services and open source software. OSS keeps us on the cutting edge. We benefit from OSS in delivering our own service, from our core storage engine technologies to web services and templating engines. Our team stays lean and focused because we build on proven tools; we really build on the shoulders of giants. And like you mentioned, even better, we contribute a lot back to the projects that we use, as well as to our own product, InfluxDB. >>But I've got to ask you, Tim, because one of the challenges that we've seen in particular, and you saw this in the heyday of Hadoop, is that the innovations come so fast and furious, and as a software company you've got to place bets, you've got to commit people, and sometimes those bets can be risky and not pay off. How have you managed this challenge? >>Oh, it moves fast, yeah. That's a benefit, though, because the community moves so quickly that today's hot technology can be tomorrow's dinosaur. What we tend to do is fail fast and fail often; we try a lot of things. You look at Kubernetes, for example: that ecosystem is driven by thousands of intelligent developers, engineers and builders, and they're adding value every day. So we have to really keep up with that.
And as the stack changes, we try different technologies, we try different methods, and at the end of the day we come up with a better platform as a result of the constant change in the environment. It is a challenge for us, but it's something we just do every day. >>So we have a survey partner down in New York City called Enterprise Technology Research (ETR), and they do quarterly surveys of about 1,500 CIOs and IT practitioners, so they have a good pulse on what's happening with spending. And the data shows that containers generally, but specifically Kubernetes, is one of the areas that has been off the charts and seen the most significant adoption and velocity, particularly along with cloud. Kubernetes is still up and to the right consistently, even with the macro headwinds and all of the stuff we're sick of talking about. So what are you doing with Kubernetes in the platform? >>Yeah, it's really central to our ability to run the product. When we first started out, we were just on AWS, and the way we were running was a little bit like containers junior. Now we're running Kubernetes everywhere: at AWS, Azure and Google Cloud. It allows us to have a consistent experience across three different cloud providers, and we can manage that in code, so our developers can focus on delivering services rather than trying to learn the intricacies of Amazon, Azure and Google and figure out how to deliver services on those three clouds with all of their differences. >>Just to follow up on that, it sounds like there's a PaaS layer there to allow you to have a consistent experience across clouds and out to the edge, wherever that may be. Is that correct? >>Yeah, we've basically built, more or less, platform engineering; that's the new hot phrase. Kubernetes has made a lot of things easy for us because we've built a platform that our developers can lean on, and they only have to learn one way of deploying and managing their application. That gets all of the underlying infrastructure out of the way and lets them focus on delivering Influx Cloud. >>Yeah, and I know I'm taking a bit of a tangent, but for that layer, I'll call it a PaaS layer if I can use that term, are there specific attributes tied to InfluxDB, or is it generally off-the-shelf PaaS? Is there any purpose-built capability there that is value add, or is it pretty much generic? >>We really look at things through a build-versus-buy lens. Some things we want to leverage cloud provider services for, for instance Postgres databases for metadata; perhaps we'll get that off of our plate and let someone else run it. We're going to deploy a platform that our engineers can deliver on, that has consistency, that is all generated from code, and that we, as an SRE group and an ops team, can manage with very few people, really, and we can stamp out clusters across multiple regions in no time.
So we look for services that support our own software with the most uptime, reliability and durability we can get. Some things are just going to be easier to have a cloud provider take care of on our behalf. We make that transparent for our own team, and of course customers don't even see it. But we don't want to try to reinvent the wheel; like I mentioned with SQL data stores for metadata, let's build on top of what these three large cloud providers have already perfected. We can then focus on our platform engineering and have our developers focus on the InfluxData software, the Influx Cloud software. >>So take it to the customer level. What does it mean for them? What's the value they're going to get out of all of these innovations we've been talking about today, and what can they expect in the future? >>First of all, people who use the OSS product are really going to be at home on our cloud platform. You can run it on your desktop machine, on a single server, what have you, but then you want to scale up. We have some 270 terabytes of data across over 4 billion series keys that people have stored, so there's a proven ability to scale. In terms of the open source software and how we've developed the platform, you're getting a highly available, high cardinality time series platform. We manage it, and, as I mentioned earlier, we can keep up with the state of the art. We keep reinventing, we keep deploying things in real time. We deploy to our platform every day, repeatedly, all the time, and it's that continuous deployment that allows us to keep testing things in flight and rolling out changes: new features, better ways of doing deployments, safer ways of doing deployments. >>All of that happens behind the scenes, and, like we mentioned earlier, Kubernetes allows us to get that done. We couldn't do it without that platform as a base layer for us to put our software on. So we iterate quickly. When you're on the Influx Cloud platform, you really are able to take advantage of new features immediately; we roll things out every day, and as those things go into production, you have the ability to use them. In the end, we want you to focus on getting actual insights from your data instead of running infrastructure. Let us do that for you. >>And that makes sense. But are the innovations we're talking about in the evolution of InfluxDB a natural evolution for existing customers? I'm sure the answer is both, but is it also opening up new territory for customers? Can you add some color to that? >>Yeah, it really is a little bit of both; any engineer will say, well, it depends. Cloud native technologies are really the hot thing, and IoT, industrial IoT especially: people want to just shove tons of data out there and be able to do queries immediately, and they don't want to manage infrastructure. What we've started to see are people that use the cloud service as their data store backbone and then use edge computing with our OSS product to ingest data from, say, multiple production lines, downsample it, and send the rest of that data off to Influx Cloud, where the heavy processing takes place.
So really, us being in all the different clouds and iterating on that, and being in all sorts of different regions, allows people to get out of the business of trying to manage that big data and have us take care of it. And of course, as we change the platform, end users benefit from that immediately. >>So, obviously you're taking away a lot of the heavy lifting for the infrastructure. Would you say the same thing about security, especially as you go out to IoT and the edge? How should we be thinking about the value that you bring from a security perspective? >>Yeah, we take security super seriously; it's built into our DNA. We do a lot of work to ensure that our platform is secure and that the data we store is kept private. It's of course always a concern; you see in the news all the time companies being compromised. That's something you can have an entire team working on, which we do, to make sure that the data you have, whether it's in transit or at rest, is always kept secure and is only viewable by you. You look at things like software bills of materials: if you're running this yourself, you have to go vet all sorts of different pieces of software, and we do that as we adopt new tools. That's just part of our job, making sure that the platform we're running has fully vetted software, and with open source especially, that's a lot of work. It's definitely new territory; supply chain attacks are happening at a higher clip than they used to, but that is really just part of a day in the life for folks like us who are building platforms. >>Yeah, and that's key. Especially when you get into IoT and the operations technologies, the engineers running that infrastructure, historically, as you know, Tim, would air-gap everything; that's how they kept it safe. But that's not feasible anymore. Everything's connected now, right? So you've got to have a partner that, again, takes away that heavy lifting and R&D so you can focus on some of the other activities. Give us the last word and the key takeaways from your perspective. >>Well, from my perspective I see it as a two-lane approach with Influx, with any time series data. You've got a lot of stuff you're going to run on-prem; as you mentioned, air gapping, sure, there's plenty of need for that. But at the end of the day, people that don't want to run big data centers, people that want to entrust their data to a company that has a full platform set up for them that they can build on, will send that data over to the cloud, and the cloud is not going away. I think a more hybrid approach is where the future lives, and that's what we're prepared for. >>Tim, really appreciate you coming on the program. Great stuff. Good to see you. >>Thanks very much. Appreciate it. >>Okay, in a moment I'll be back to wrap up today's session. You're watching theCUBE. >>Are you looking for some help getting started with InfluxDB, Telegraf or Flux? Check out InfluxDB University, where you can find our entire catalog of free training that will help you make the most of your time series data. Get started for free at influxdbu.com. We'll see you in class.
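For anyone taking up that invitation, here is a small, hedged sketch of a first write-and-query round trip using the open source influxdb-client Python library for InfluxDB 2.x. The URL, token, org and bucket values are placeholders you would replace with your own, and this is simply one way to get started rather than an official quick-start.

```python
# Small getting-started sketch with the open source influxdb-client library
# for InfluxDB 2.x. The url, token, org and bucket below are placeholders.
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

client = InfluxDBClient(url="http://localhost:8086", token="my-token", org="my-org")

# Write one point of sensor data.
write_api = client.write_api(write_options=SYNCHRONOUS)
point = Point("temperature").tag("location", "room").field("value", 21.5)
write_api.write(bucket="my-bucket", record=point)

# Query the last hour back with Flux.
flux = '''
from(bucket: "my-bucket")
  |> range(start: -1h)
  |> filter(fn: (r) => r._measurement == "temperature")
'''
for table in client.query_api().query(flux):
    for record in table.records:
        print(record.get_time(), record.get_field(), record.get_value())

client.close()
```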
>>Okay, so we heard today from three experts on time series and data about how the InfluxDB platform is evolving to support new ways of analyzing large data sets very efficiently and effectively in real time. We learned that key open source components like Apache Arrow, the Rust programming language, DataFusion and Parquet are being leveraged to support real-time data analytics at scale. We also learned about the contributions to, and importance of, open source software, and how the InfluxDB community is evolving the platform with minimal disruption to support new workloads, new use cases and the future of real-time data analytics. Now remember, these sessions are all available on demand; you can go to thecube.net to find them. Don't forget to check out siliconangle.com for all the news related to things enterprise and emerging tech, and you should also check out influxdata.com. There you can learn about the company's products, find developer resources like free courses, join the developer community to work with your peers and solve problems, and find plenty of other resources around use cases and customer stories. This is Dave Vellante. Thank you for watching Evolving InfluxDB into the Smart Data Platform, made possible by InfluxData and brought to you by theCUBE, your leader in enterprise and emerging tech coverage.

Published Date : Oct 28 2022


Ann Potten & Cole Humphreys, HPE | CUBE Conversation


 

>>Hi, everyone. Welcome to this program. Sponsored by HPE. I'm your host, Lisa Martin. We're here talking about being confident and trusting your server security with HPE. I have two guests here with me to talk about this important topic. Cole Humphreys joins us global server security product manager at HPE and Anne Potton trusted supply chain program lead at HPE guys. It's great to have you on the program. Welcome. >>Hi, thanks. Thank you. It's nice to be here, Anne. >>Let's talk about really what's going on there. Some of the trends, some of the threats there's so much change going on. What is HPE seeing? >>Yes. Good question. Thank you. Yeah. You know, cyber security threats are increasing everywhere and it's causing disruption to businesses and governments alike worldwide. You know, the global pandemic has caused limited employee availability. Originally this has led to material shortages and these things opens the door perhaps even wider for more counterfeit parts and products to enter the market. And these are challenges for consumers everywhere. In addition to this, we're seeing the geopolitical environment has changed. We're seeing, you know, rogue nation states using cybersecurity warfare tactics to immobilize an entity's ability to operate and perhaps even use their tactics for revenue generation, the Russian invasion of Ukraine as one example, but businesses are also under attack. You know, for example, we saw solar winds, software supply chain was attacked two years ago, which unfortunately went a notice for several months and then this was followed by the colonial pipeline attack and numerous others. >>You know, it just seems like it's almost a daily occurrence that we hear of a cyber attack on the evening news. And in fact, it's estimated that the cyber crime cost will reach over 10 and a half trillion dollars by 2025 and will be even more profitable than the global transfer of all major illegal drugs combined. This is crazy, you know, the macro environment in which companies operate in has changed over the years. And you know, all of these things together and coming from multiple directions presents a cybersecurity challenge for an organization and in particular it's supply chain. And this is why HPE is taking proactive steps to mitigate supply chain risk so that we can provide our customers with the most secure products and services. >>So Cole, let's bring you into the conversation and did a great job of summarizing the major threats that are going on the tumultuous landscape. Talk to us Cole about the security gap. What is it? What is HPE seeing and why are organizations in this situation? >>Hi, thanks Lisa. You know, what we're seeing is as this threat landscape increases to, you know, disrupt or attempt to disrupt our customers and our partners and ourselves, I, it's a kind of a double edge if you will, because you're seeing the increase in attacks, but what you're not seeing is that equal to growth of the skills and the experiences required to address the scale. So it really puts the pressure on companies because you have a skill gap, a talent gap, if you will. There's, you know, for example, there are projected to be three and a half million cyber roles open in the next few years, right? So all this scale is growing and people are just trying to keep up, but the gap is growing just literally the people to stop the bad actors from attacking the data and, and to complicate matters. 
You're also seeing a dynamic change of the who and the, how the attacks are happening, right? >>The classic attacks that you've seen, you know, and the SDK and all the, you know, the history books, those are not the standard plays anymore. You'll have, you know, nation states going after commercial entities and, you know, criminal syndicates and alluded to that. There's more money in it than the international drug trade. So you can imagine the amount of criminal interest in getting this money. So you put all that together. And the increasing of attacks, it just is really pressing down is, is literally, I mean, the reports we're reading over half of everyone, obviously the most critical infrastructure cares, but even just mainstream computing requirements need to have their data protected, help me protect my workloads and they don't have the people in house, right? So that's where partnership is needed, right? And that's where we believe, you know, our approach with our partner ecosystem is it's not HPE delivering everything ourself, but all of us in this together is really what we believe. The only way we're gonna be able to get this done. >>So collets double click on that HPE and its partner ecosystem can provide expertise that companies and every industry are lacking. You're delivering HPE as a 360 degree approach to security. Talk about what that 360 degree approach encompasses. >>Thank you. It is, it is an approach, right? Because I feel that security is a, it is a, it is a thread that will go through the entire construct of a technical solution, right there. Isn't a, oh, if you just buy this one server with this one feature, you don't have to worry about anything else. It's really it's everywhere. And at least the way we believe it, it's everywhere. And it in a 360 degree approach, the way we like to frame it is it's, it's this beginning with our supply chain, right? We take a lot of pride in the designs, you know, the really smart engineering teams, the design, our technology, our awesome world class global operations team, working in concert to deliver some of these technologies into the market. That is a huge, you know, great capability, but also a huge risk to customers, cuz that is the most vulnerable place that if you inject some sort of malware or, or tampering at that point, you know, the rest of the story really becomes mute because you've already defeated, right? >>And then you move in to you physically deployed that through our global operations. Now you're in an operating environment. That's where automation becomes key, right? We have software innovations in, you know, our ILO product of management inside those single servers. And we have really cool new grain lake for compute operations management services out there that give customers more control back and more information to deal with this scaling problem. And then lastly, as you begin to wrap up, you know, the natural life cycle and you need to move to new platforms and new technologies, right? We think about the exit of that life cycle and how do we make sure we dispose of the data and, and move those products into a secondary life cycle so that we can move back into this kind of circular 360 degree approach. We don't wanna leave our customers hanging anywhere in this entire journey. >>That 360 degree approach is so critical, especially given as we've talked about already in this segment, the changes, the dynamics in the environment. 
And as Cole said, this is this 360 degree approach that HPE is delivering is beginning in the manufacturing supply chain seems like the first line of defense against cyber attackers talked to us about why that's important. And where did the impetus come from? Was that COVID was that customer demand? >>Yep. Yep. Yeah. The supply chain is critical. Thank you. So in 2018, we, we could see all of these cybersecurity issues starting to emerge and predicted that this would be a significant challenge for our industry. So we formed a strategic initiative called the trusted supply chain program designed to mitigate cybersecurity risk in the supply chain and really starting at the product with the product life cycle, starting at the product design phase and moving through sourcing and manufacturing, how we deliver products to our customers and ultimately a product's end of life that Cole mentioned. So in doing this, we're able to provide our customers with the most secure products and services, whether they're buying their servers from, for their data center or using our own GreenLake services. So just to give you some examples, something that is foundational to our trusted supply chain program, we've built a very robust cybersecurity supply chain risk management program that includes assessing our risk at our all factories and our suppliers. >>Okay. We're also looking at strengthening our software supply chain by developing mechanisms to identify software vulnerabilities and hardening our own software build environments to protect against counterfeit parts that I mentioned in the beginning from entering our supply chain, we've recently started a blockchain program so that we can identify component provenance and trace part parts back to their original manufacturers. So our security efforts, you know, continue even after product manufacturing, we offer three different levels of secure delivery services for our customers, including, you know, a dedicated truck and driver or perhaps even an exclusive use vehicle. We can tailor our delivery services to whatever the customer needs. And then when a product is at its end of life, products are either recycled or disposed using our approved vendors. So our servers are also equipped with the one button secure erase that erases every bite of data, including firmware data and talking about products, we've taken additional steps to provide additional security features for our products. >>Number one, we can provide platform certificates that allow the user to cryptographically verify that their server hasn't been tampered with from the time it left the manufacturing facility to the time that it arrives at the customer's factory facility. In addition to that, we've launched a dedicated line of trusted supply chain servers with additional security features, including secure configuration lock chassis intrusion detection. And these are assembled at our us factory by us vetted employees. So lots of exciting things happening within the supply chain, not just to shore up our own supply chain risk, but also to provide our customer the most. So that announcement. >>All right, thank you. You know, they've got great setup though, because I think you gotta really appreciate the whole effort that we're putting into, you know, bringing these online. 
But one of the just transparently the gaps we had as we proved this out was as you heard, this initial proof was delivered with assembly in the us factory employees, you know, fantastic program really successful in all our target industries and, and even expanding to places we didn't really expect it to, but it's kind of going to the point of security. Isn't just for one industry or one set of customers, right? We're seeing it in our partners. We're seeing it in different industries than we have in the past. And, but the challenge was we couldn't get this global right out the gate, right? This has been a really heavy transparently, a us federal activated focus, right? >>If, if you've been tracked in what's going on since may of last year, there's been a call to action to improve a nation cybersecurity. So we've been all in on that and we have an opinion and we're working hard on that, but we're a global company, right? How can we get this out to the rest of the world? Well guess what, this month we figured it out and well, let's take a lot more than those month. We did a lot of work that we figured it out and we have launched a comparable service globally called server security optimization service, right? HPE server security optimization service for proli. I like to call it, you know, S S O S sauce, right? Do you wanna be clever HPE sauce that we can now deploy globally? We get that product hardened in the supply chain, right? Because if you take the best of your supply chain and you take your technical innovations, that you've innovated into the server, you can deliver a better experience for your customers, right? >>So the supply chain equals server technology and our awesome, you know, services teams deliver supply chain security at that last mile. And we can deliver it in the European markets. And now in the Asia Pacific markets right now, we could always just, we could ship it from the us to other markets. So we could always fulfill this promise, but I think it's just having that local access into your partner ecosystem and stuff just makes more sense, but it is big deal for us because now we have activated a meaningful supply chain security benefit for our entire global network of partners and customers, and we're excited about it. And we hope our customers are too. >>That's huge Cole. And, and in terms of this significance of the impact that HPE is delivering through its partner ecosystem globally as the supply chain continues to be one of the terms on everyone's lips here, I'm curious Cole, we just couple months ago, we're at discover. Can you talk about what HPE is doing here from a, a security perspective, this global approach that it's taking as it relates to what HPE was talking about at discover, in terms of we wanna secure the enterprise to deliver these experiences from edge to cloud. >>You know, I feel like for, for me, and, and I think you look at the shared responsibility models and you know, other frameworks out there, the way we're the way I believe it to be is this is it's, it's a solution, right? There's not one thing, you know, if you use HPE supply chain, the end, or if you buy an HPE pro line the end, right. It is an integrated connectedness with our, as a service platform, our service and support commitments, you know, our extensive partner ecosystem, our alliances, all of that comes together to ultimately offer that assurance to a customer. And I think these are specific, meaningful proof points in that chain of custody, right? 
That chain of trust, if you will, because as the world becomes more zero trust, we are gonna have to prove ourselves more, right? And these are those kinds of technical credentials and identities and, you know, capabilities that a modern approach to security needs. >>Excellent, great work there. And let's go ahead and take us home. Take the audience through what you think, ultimately, HPE is doing, really infusing security at that 360 degree approach level that we talked about. What are some of the key takeaways that you want the audience that's watching here today to walk away with? >>Right, right. Thank you. Yeah, you know, with the increase in cybersecurity threats everywhere affecting all businesses globally, it's gonna require everyone in our industry to continue to evolve in our supply chain security and our product security in order to protect our customers and our business continuity. Protecting our supply chain is something that HPE is very committed to and takes very seriously. So, you know, I think regardless of whether our customers are looking for an on-prem solution or a GreenLake service, you know, HPE is proactively looking for and mitigating any security risk in the supply chain so that we can provide our customers with the most secure products and services. >>Awesome. Ann and Cole, thank you so much for joining me today, talking about what HPE is doing here and why it's important, as our program is called, to be confident and trust your server security with HPE, and how HPE is doing that. Appreciate your insights and your time. >>Thank you so much for having us. >>Thank you, Lisa. >>For Cole Humphreys and Ann Potten, I'm Lisa Martin. We wanna thank you for watching this segment in our series, Be Confident and Trust Your Server Security with HPE. We'll see you soon.

Published Date : Aug 30 2022



Ann Potten & Cole Humphreys | CUBE Conversation, August 2022


 

(upbeat music) >> Hi, everyone, welcome to this program sponsored by HPE. I'm your host, Lisa Martin. We're here talking about being confident and trusting your server security with HPE. I have two guests here with me to talk about this important topic. Cole Humphreys joins us, global server security product manager at HPE, and Ann Potten, trusted supply chain program lead at HPE. Guys, it's great to have you on the program, welcome. >> Hi, thanks. >> Thank you. It's nice to be here. >> Ann let's talk about really what's going on there. Some of the trends, some of the threats, there's so much change going on. What is HPE seeing? >> Yes, good question, thank you. Yeah, you know, cybersecurity threats are increasing everywhere and it's causing disruption to businesses and governments alike worldwide. You know, the global pandemic has caused limited employee availability originally, this has led to material shortages, and these things opens the door perhaps even wider for more counterfeit parts and products to enter the market, and these are challenges for consumers everywhere. In addition to this, we're seeing the geopolitical environment has changed. We're seeing rogue nation states using cybersecurity warfare tactics to immobilize an entity's ability to operate, and perhaps even use their tactics for revenue generation. The Russian invasion of Ukraine is one example. But businesses are also under attack, you know, for example, we saw SolarWinds' software supply chain was attacked two years ago, which unfortunately went unnoticed for several months. And then, this was followed by the Colonial Pipeline attack and numerous others. You know, it just seems like it's almost a daily occurrence that we hear of a cyberattack on the evening news. And, in fact, it's estimated that the cyber crime cost will reach over $10.5 trillion by 2025, and will be even more profitable than the global transfer of all major illegal drugs combined. This is crazy. You know, the macro environment in which companies operate in has changed over the years. And, you know, all of these things together and coming from multiple directions presents a cybersecurity challenge for an organization and, in particular, its supply chain. And this is why HPE is taking proactive steps to mitigate supply chain risk, so that we can provide our customers with the most secure products and services. >> So, Cole, let's bring you into the conversation. Ann did a great job of summarizing the major threats that are going on, the tumultuous landscape. Talk to us, Cole, about the security gap. What is it, what is HPE seeing, and why are organizations in this situation? >> Hi, thanks, Lisa. You know, what we're seeing is as this threat landscape increases to, you know, disrupt or attempt to disrupt our customers, and our partners, and ourselves, it's a kind of a double edge, if you will, because you're seeing the increase in attacks, but what you're not seeing is an equal to growth of the skills and the experiences required to address the scale. So it really puts the pressure on companies, because you have a skill gap, a talent gap, if you will, you know, for example, there are projected to be 3 1/2 million cyber roles open in the next few years, right? So all this scale is growing, and people are just trying to keep up, but the gap is growing, just literally the people to stop the bad actors from attacking the data. And to complicate matters, you're also seeing a dynamic change of the who and the how the attacks are happening, right? 
The classic attacks that you've seen, you know, in the espionage in all the, you know, the history books, those are not the standard plays anymore. You'll have, you know, nation states going after commercial entities and, you know, criminal syndicates, as Ann alluded to, that there's more money in it than the international drug trade, so you can imagine the amount of criminal interest in getting this money. So you put all that together and the increasing of attacks it just is really pressing down as literally, I mean, the reports we're reading over half of everyone. Obviously, the most critical infrastructure cares, but even just mainstream computing requirements need to have their data protected, "Help me protect my workloads," and they don't have the people in-house, right? So that's where partnership is needed, right? And that's where we believe, you know, our approach with our partner ecosystem this is not HPE delivering everything ourself, but all of us in this together is really what we believe the only way we're going to be able to get this done. >> So, Cole, let's double-click on that, HPE and its partner ecosystem can provide expertise that companies in every industry are lacking. You're delivering HPE as a 360-degree approach to security. Talk about what that 360-degree approach encompasses. >> Thank you, it is an approach, right? Because I feel that security it is a thread that will go through the entire construct of a technical solution, right? There isn't a, "Oh, if you just buy this one server with this one feature, you don't have to worry about anything else." It's really it's everywhere, at least the way we believe it, it's everywhere. And in a 360-degree approach, the way we like to frame it, is it's this beginning with our supply chain, right? We take a lot of pride in the designs, you know, the really smart engineering teams, the designer, technology, our awesome, world-class global operations team working in concert to deliver some of these technologies into the market, that is, you know, a great capability, but also a huge risk to customers. 'Cause that is the most vulnerable place that if you inject some sort of malware or tampering at that point, you know, the rest of the story really becomes mute, because you've already defeated, right? And then, you move in to you physically deployed that through our global operations, now you're in an operating environment. That's where automation becomes key, right? We have software innovations in, you know, our iLO product of management inside those single servers, and we have really cool new GreenLake for compute operations management services out there that give customers more control back and more information to deal with this scaling problem. And then, lastly, as you begin to wrap up, you know, the natural life cycle, and you need to move to new platforms and new technologies, we think about the exit of that life cycle, and how do we make sure we dispose of the data and move those products into a secondary life cycle, so that we can move back into this kind of circular 360-degree approach. We don't want to leave our customers hanging anywhere in this entire journey. >> That 360-degree approach is so critical, especially given, as we've talked about already in this segment, the changes, the dynamics in the environment. Ann, as Cole said, this 360-degree approach that HPE is delivering is beginning in the manufacturing supply chain, seems like the first line of defense against cyberattackers. 
Talk to us about why that's important and where did the impetus come from? Was that COVID, was that customer demand? >> Yep, yep. Yeah, the supply chain is critical, thank you. So in 2018, we could see all of these cybersecurity issues starting to emerge and predicted that this would be a significant challenge for our industry. So we formed a strategic initiative called the Trusted Supply Chain Program designed to mitigate cybersecurity risk in the supply chain, and really starting with the product life cycle, starting at the product design phase and moving through sourcing and manufacturing, how we deliver products to our customers and, ultimately, a product's end of life that Cole mentioned. So in doing this, we're able to provide our customers with the most secure products and services, whether they're buying their servers for their data center or using our own GreenLake services. So just to give you some examples, something that is foundational to our Trusted Supply Chain Program we've built a very robust cybersecurity supply chain risk management program that includes assessing our risk at all factories and our suppliers, okay? We're also looking at strengthening our software supply chain by developing mechanisms to identify software vulnerabilities and hardening our own software build environments. To protect against counterfeit parts, that I mentioned in the beginning, from entering our supply chain, we've recently started a blockchain program so that we can identify component provenance and trace parts back to their original manufacturers. So our security efforts, you know, continue even after product manufacturing. We offer three different levels of secured delivery services for our customers, including, you know, a dedicated truck and driver, or perhaps even an exclusive use vehicle. We can tailor our delivery services to whatever the customer needs. And then, when a product is at its end of life, products are either recycled or disposed using our approved vendors. So our servers are also equipped with the One-Button Secure Erase that erases every byte of data, including firmware data. And talking about products, we've taken additional steps to provide additional security features for our products. Number one, we can provide platform certificates that allow the user to cryptographically verify that their server hasn't been tampered with from the time it left the manufacturing facility to the time that it arrives at the customer's facility. In addition to that, we've launched a dedicated line of trusted supply chain servers with additional security features, including Secure Configuration Lock, Chassis Intrusion Detection, and these are assembled at our U.S. factory by U.S. vetted employees. So lots of exciting things happening within the supply chain not just to shore up our own supply chain risk, but also to provide our customers with the most secure product. And so with that, Cole, do you want to make our big announcement? >> All right, thank you. You know, what a great setup though, because I think you got to really appreciate the whole effort that we're putting into, you know, bringing these online. But one of the, just transparently, the gaps we had as we proved this out was, as you heard, this initial proof was delivered with assembly in the U.S. factory employees. You know, fantastic program, really successful in all our target industries and even expanding to places we didn't really expect it to. 
But it's kind of going to the point of security isn't just for one industry or one set of customers, right? We're seeing it in our partners, we're seeing it in different industries than we have in the past. But the challenge was we couldn't get this global right out the gate, right? This has been a really heavy, transparently, a U.S. federal activated focus, right? If you've been tracking what's going on since May of last year, there's been a call to action to improve the nation's cybersecurity. So we've been all in on that, and we have an opinion and we're working hard on that, but we're a global company, right? How can we get this out to the rest of the world? Well, guess what? This month we figured it out and, well, it's take a lot more than this month, we did a lot of work, but we figured it out. And we have launched a comparable service globally called Server Security Optimization Service, right? HPE Server Security Optimization Service for ProLiant. I like to call it, you know, SSOS Sauce, right? Do you want to be clever? HPE Sauce that we can now deploy globally. We get that product hardened in the supply chain, right? Because if you take the best of your supply chain and you take your technical innovations that you've innovated into the server, you can deliver a better experience for your customers, right? So the supply chain equals server technology and our awesome, you know, services teams deliver supply chain security at that last mile, and we can deliver it in the European markets and now in the Asia Pacific markets, right? We could ship it from the U.S. to other markets, so we could always fulfill this promise, but I think it's just having that local access into your partner ecosystem and stuff just makes more sense. But it is a big deal for us because now we have activated a meaningful supply chain security benefit for our entire global network of partners and customers and we're excited about it, and we hope our customers are too. >> That's huge, Cole and Ann, in terms of the significance of the impact that HPE is delivering through its partner ecosystem globally as the supply chain continues to be one of the terms on everyone's lips here. I'm curious, Cole, we just couple months ago, we're at Discover, can you talk about what HPE is doing here from a security perspective, this global approach that it's taking as it relates to what HPE was talking about at Discover in terms of we want to secure the enterprise to deliver these experiences from edge to cloud. >> You know, I feel like for me, and I think you look at the shared-responsibility models and, you know, other frameworks out there, the way I believe it to be is it's a solution, right? There's not one thing, you know, if you use HPE supply chain, the end, or if you buy an HPE ProLiant, the end, right? It is an integrated connectedness with our as-a-service platform, our service and support commitments, you know, our extensive partner ecosystem, our alliances, all of that comes together to ultimately offer that assurance to a customer, and I think these are specific meaningful proof points in that chain of custody, right? That chain of trust, if you will. Because as the world becomes more zero trust, we are going to have to prove ourselves more, right? And these are those kind of technical credentials, and identities and, you know, capabilities that a modern approach to security need. >> Excellent, great work there. Ann, let's go ahead and take us home. 
Take the audience through what you think, ultimately, what HPE is doing really infusing security at that 360-degree approach level that we talked about. What are some of the key takeaways that you want the audience that's watching here today to walk away with? >> Right, right, thank you. Yeah, you know, with the increase in cybersecurity threats everywhere affecting all businesses globally, it's going to require everyone in our industry to continue to evolve in our supply chain security and our product security in order to protect our customers and our business continuity. Protecting our supply chain is something that HPE is very committed to and takes very seriously. So, you know, I think regardless of whether our customers are looking for an on-prem solution or a GreenLake service, you know, HPE is proactively looking for and mitigating any security risk in the supply chain so that we can provide our customers with the most secure products and services. >> Awesome, Anne and Cole, thank you so much for joining me today talking about what HPE is doing here and why it's important, as our program is called, to be confident and trust your server security with HPE, and how HPE is doing that. Appreciate your insights and your time. >> Thank you so much for having us. >> Thank you, Lisa. >> For Cole Humphreys and Anne Potten, I'm Lisa Martin, we want to thank you for watching this segment in our series, Be Confident and Trust Your Server Security with HPE. We'll see you soon. (gentle upbeat music)
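
Ann's description of platform certificates and component provenance, cryptographically verifying that a server's as-built inventory has not changed between the factory and the customer's dock, can be made concrete with a small sketch. This is an illustration only, not HPE's implementation: real platform certificates are manufacturer-signed X.509 attribute certificates anchored in hardware roots of trust, while this toy substitutes an HMAC over a JSON component manifest, and every name and value in it is hypothetical.

```python
# Toy tamper-evidence check in the spirit of platform certificates and
# component provenance. Real systems use manufacturer-signed X.509 attribute
# certificates and hardware roots of trust; this sketch substitutes an HMAC
# over a JSON manifest. All names and values are hypothetical.
import hashlib
import hmac
import json

FACTORY_KEY = b"hypothetical-factory-signing-key"  # stand-in for the manufacturer's key

def sign_manifest(components: dict) -> dict:
    """Record the as-built component inventory and 'sign' it at the factory."""
    payload = json.dumps(components, sort_keys=True).encode()
    return {
        "components": components,
        "digest": hashlib.sha256(payload).hexdigest(),
        "signature": hmac.new(FACTORY_KEY, payload, hashlib.sha256).hexdigest(),
    }

def verify_manifest(manifest: dict, observed_components: dict) -> bool:
    """On delivery, re-inventory the server and confirm nothing changed in transit."""
    payload = json.dumps(manifest["components"], sort_keys=True).encode()
    expected = hmac.new(FACTORY_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return False  # the manifest itself was altered
    return observed_components == manifest["components"]

# Hypothetical example: the manifest travels with the server; the customer
# re-reads the inventory on arrival and verifies it.
factory_manifest = sign_manifest({"bmc_fw": "2.78", "nic": "vendor-x-25g", "dimm_count": 16})
print(verify_manifest(factory_manifest, {"bmc_fw": "2.78", "nic": "vendor-x-25g", "dimm_count": 16}))
```

The design point is the same as in the real service: the reference inventory is fixed and signed before shipment, so any substitution in transit shows up as a verification failure at the customer's site.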

Published Date : Aug 23 2022



Sandeep Singh, HPE


 

(smooth music) >> Hi, everybody. This is Dave Vellante, and with me is Sandeep Singh. He's the vice president of storage marketing at Hewlett Packard Enterprise, and we're going to riff on some of the trends in the industry, what we're seeing, and we got a little treat for you, Sandeep. Great to see you, man. >> Dave, it's a pleasure to be here. >> You and I have known each other for a long time. We've had some great discussions, some debates, (chuckles) some intriguing mind benders. What are you seeing out there in storage? So much has changed. What are the key trends you're seeing? And let's get into it. >> Yeah. Across the board, as you said, so much has changed. When you reflect back at the underlying transformation that's taking place with data, cloud, and AI across the board, first of all, for our customers, they're seeing this massive data explosion that literally now spans edge to core to cloud. They're also seeing a diversity of the application workloads across the board. The emphasis that it's placing is on the complexity that underlies overall infrastructure and data management. Across the board, we're hearing a lot from customers about just the underlying infrastructure and complexity, and the infrastructure sprawl. And then the second element of that is really extending into the complexity of data management. >> So it's interesting to talk about data management. You remember you and I were in... Well, you were in Andover. I don't know. It was probably like five years ago. And all we were talking about was media, flash this and flash that, and at the time that was kind of the hot storage topic. Well, flash came in, addressed some of the clicks that we historically talked about. Now the problem statement is really kind of, quote unquote, metaphorically moving up the stack, if you will. You mentioned management. But let's dig into that a little bit. I mean, what is management? I mean, a lot of people... That means different things to different people. You talk to a database person or a backup person. How do you look at management? What does that mean to you? >> Yeah, Dave. You mentioned that flash came in, and it actually accelerated the overall speed and latency that storage was delivering to the application workloads. But fundamentally, when you look back at storage over a couple of decades, the underlying way of how you're managing storage hasn't fundamentally changed. There's still an incredible amount of complexity for ITs. It's still a manual admin-driven experience for customers. And what that's translating to is, more often than not, IT is in the world of firefighting, and it leaves them unable to help with the more strategic projects to innovate for the business. And basically IT has that pressure point of moving beyond that, and helping bring greater levels of agility that line of business owners are asking for, and to be able to deliver on more of the strategic projects. So that's one element of it. The second element that we're hearing from customers about is as more and more data just continues to explode from edge to core to cloud, and as basically the infrastructure has grown from just being on-prem, to being at the edge, to being in the cloud, now that complexity is expanding from just being on-prem to across multiple different clouds. So when you look across the data life cycle, how do you store it? How do you secure it? How do you basically protect it, and archive it, and analyze that data? 
That end to end life cycle management of data, today resides on just a fragmented set of overall infrastructure, and tools, and processes, and administrative boundaries. That's creating a massive challenge for customers. And the impact of that, ultimately, is essentially comes at a cost to agility, to innovation, and ultimately business risk. >> Yeah, so we've seen obviously the cloud has addressed a lot of these problems, but the problem is the cloud is in the cloud. And much of my stuff, most of my stuff, isn't in the cloud. (chuckles) So I have all these other workloads that are either on-prem, and now you've got this emerging edge. And so I wonder if we could just talk a little vision here for a minute. I mean, what I've been envisioning is this abstraction layer that cuts across all, whether... It doesn't really matter where it is. If it's on-prem, if it's across cloud, if it's in the cloud, on the edge. We could talk about what that all means. But if customers that I talk to, they're sort of done with the complexity of that underlying infrastructure. They want technology to take care of that. They want automation. They want AI brought into that equation. And it seems like we're on the cusp of the decade where that might happen. What's your take? >> Well, yeah. Certainly, I mentioned that data cloud and AI are really the disruptive forces that are propelling the digital transformation for customers. Cloud has set the standard for agility, and AI-driven insights and intelligence are really helping to make the underlying infrastructure invisible. And yet a lot of their application workloads and data is on-prem and is increasingly growing at the edge. So they want that same experience to be able to truly bring that agility to wherever their data and apps load. And that's one of the things that we're continuing to hear from customers. >> And this problem's just going to get worse. I mean, we... For decades we marched to the cadence of Moore's law, and everybody's kind of forgets about Moore's law. And they'll say, "Ah, it's dying," or whatever. But actually, when you look at the processing power that's coming out now, it's not... It's more than doubling every two years, quadrupling every two years. So now you've got this capability in your hands, and application designers, storage companies, networking companies, they're going to have all this power to now bring in AI and do things that we've never even imagined before. So it's not about the box, and the speeds and feeds of the box. It's really more about this abstraction layer that I was talking about, the management, if you will, that you were discussing, and what we can do in terms of being able to power new workloads, machine intelligence. It's this kind of ubiquitous... Call it the cloud, but it's expanding pretty much everywhere in every part of our lives, (chuckles) even to the edge. You think about autonomous vehicles, you think about factories. It's actually quite mind boggling where we're headed. >> It is, and you touched upon AI, and certainly when you look at infrastructure, for example, there's been a ton of complexity in infrastructure management. One of the studies that was done, actually by IDC, indicated that over 90% of the challenges that arise, for example, ultimately down at the storage infrastructure layer that's powering the apps, ultimately, arises from way above the stack all the way from the server layer on down, or even the virtual machine layer. 
And there, for example, AI ops for infrastructure has become a game changer for customers to be able to bring the power of AI, and machine learning, and multi-variate analysis to be able to predict and prevent issues. Dave, you also touched upon edge, and across the board, what we're seeing is the enterprise edge is becoming that frontier for customer experiences, and the opportunity to reimagine customer experiences, as well as just the frontier for commerce that's happening when you look at retail, and manufacturing, and/or financial services. So across the board, with the data growth that's happening, and this edge becoming the strategic frontier for delivering the customer experiences, how you power your application workloads there, how you deliver that data, and protect that data, and be able to seamlessly manage that overall infrastructure, as you mentioned, abstracted away at a higher level, becomes incredibly important for our customers. >> It's so interesting to hear how the conversation's changing, I'd like to say. I go back to whatever it was, five years ago, we're talking about flash, storage class memory, and NVMe, and those things are still there, but your emphasis now, you're talking about machine learning, AI, math around deep learning. It's really software is really what you're focusing on these days. >> Very much so. Certainly, this notion of software and services that are delivering and unlocking a whole new experience for customers, that's really the game changer going forward for customers, and that's what we're focused on. >> Well, I said we had a little surprise for you. So you guys are having an event on May 4th. It's called Unleash the Power of Data. What's that event all about, Sandeep? >> Yeah. We are very much excited about our May 4th event. As you mentioned, it's called Unleash the Power of Data. And as most organizations today are data driven, and data is at the heart of what they're doing, we're excited to invite everyone to join this event. And through this event, we're unveiling a new vision for data that accelerates the data-driven transformation from edge to cloud. This event promises to be a pivotal event, and one that IT admins, cloud architects, virtual machine admins, vice-presidents, directors of IT, and CIOs really won't want to miss. Across the board, this event is just bringing a new way of articulating the overall problem statement, and a market-in focused the articulation of the trends that we were just discussing. It's an event that's going to be hosted by business and technology journalist, Shibani Joshi. It will feature a market-in panel with a focus on the crucial role that data is playing in customers' digital transformation. It will also include and feature Antonio Neri, CEO of HPE, and Tom Black, senior vice president and general manager of HPE storage business, and industry experts, including Julia Palmer, research vice president at Gartner. We will unveil game-changing HPE innovations that will make it possible for organizations across edge to cloud to unleash the power of data. >> Sounds like a great event. I presume I can go to hpe.com. And what? Get information. Is it a registered event? How does that all work? >> Yeah, we invite everyone to visit hpe.com, and by visiting there, you can click and save the date of May 4th at 8:00 AM Pacific. We invite everyone to join us. We couldn't be more excited to get to this event, and be able to share the vision and game-changing HPE innovations. >> Awesome. So it's... 
So I don't have to register, right? I don't have to give up my three children's name, and my social security number to attend your event, is that right? (chuckles) >> No registration required. Come by, click on hpe.com. Save the date on your calendar. And we very much look forward to having everyone join us for this event. >> I love it. It's pure content event. I'm not going to get a phone call afterwards saying, "Hey, buy some stuff from me." That could come other channels, so that's good. (chuckles) Thank you for that. Thanks for providing that service to the industry. I'm excited to see what you guys are going to be announcing that day. And look, Sandeep, I mean, like I said, we've known each other a while. We've seen a lot of trends, but the next 10 years, it ain't going to look like the last 10, is it? >> It's going to be very different, and we couldn't be more excited. >> Well, Sandeep, thanks so much for coming to theCUBE, and riffing with me on the industry, and giving us a preview for your event. Good luck with that, and always great to see you. >> Thanks a lot, Dave. Always great to see you as well. >> All right, and thank you, everybody. This is Dave Vellante for theCUBE, and we'll see you next time. (smooth music)
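
Sandeep's point about AIOps for infrastructure, using machine learning and multivariate analysis of telemetry to predict and prevent issues, can be illustrated with a deliberately tiny sketch. It is not how any particular HPE product works; it only flags metric samples that drift far from a rolling baseline, which is about the most basic building block of that kind of analysis, and the latency figures in it are made up.

```python
# Minimal illustration of telemetry anomaly detection: flag samples that sit
# far from the rolling mean of recent history. Real AIOps platforms correlate
# many signals across the stack; this sketch watches a single metric.
from collections import deque
from statistics import mean, stdev

def detect_anomalies(samples, window=20, threshold=3.0):
    """Yield (index, value) for samples more than `threshold` std devs from the rolling mean."""
    history = deque(maxlen=window)
    for i, value in enumerate(samples):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                yield i, value
        history.append(value)

# Hypothetical per-minute storage latency readings in milliseconds.
latency_ms = [1.2, 1.3, 1.1, 1.2, 1.4] * 5 + [9.8, 1.2, 1.3]
print(list(detect_anomalies(latency_ms)))  # the 9.8 ms spike is flagged
```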

Published Date : Apr 22 2021



Jeffrey Hammond, Forrester | DevOps Virtual Forum 2020


 

>> Narrator: From around the globe, it's theCUBE! With digital coverage of DevOps Virtual Forum, brought to you by Broadcom. >> Hi, Lisa Martin here covering the Broadcom DevOps Virtual Forum. I'm very pleased to be joined today by a CUBE alumni, Jeffrey Hammond, the Vice President and Principal Analyst serving CIOs at Forrester. Jeffrey, nice to talk with you today. >> Good morning, it's good to be here. >> So, a virtual forum, a great opportunity to engage with our audiences. So much has changed in the last, it's an understatement, right? Or it's an overstated thing, but it's obvious. So much has changed. When we think of DevOps, one of the things that we think of is speed, enabling organizations to be able to better serve customers or adapt to changing markets like we're in now. Speaking of the need to adapt, talk to us about what you're seeing with respect to DevOps and Agile in the age of COVID. What are things looking like? >> Yeah, I think that for most organizations, we're in a period of adjustment. When we initially started, it was essentially a sprint. You run as hard as you can for as fast as you can for as long as you can and you just kind of power through it. And that's actually what the folks at GitHub saw in May, when they run an analysis of how developers commit times and level of work that they were committing and how they were working. In the first couple months of COVID, was progressing, they found that developers, at least in the Pacific Time Zone, were actually increasing their work volume, maybe 'cause they didn't have two hour commutes, or maybe because they work stuck away in their homes, but for whatever reason, they were doing more work. And it's almost like, if you've ever run a marathon, the first mile or two in the marathon, you feel great, you just want to run and you want to power through it, you want to go hard. And if you do that, by the time you get to mile 18 or 19, you're going to be gassed, sucking for wind. And that's I think where we're starting to hit. So as we start to gear our development shops up for the reality that most of us won't be returning into an office until 2021 at the earliest. And many organizations will be fundamentally changing their remote work policies, we have to make sure that the agile processes that we use, and the DevOps processes and tools that we use to support these teams are essentially aligned to help developers run that marathon, instead of just kind of power through. So, let me give you a couple specifics. For many organizations, they have been in an environment where they will tolerate remote work and what I would call remote work around the edges, like developers can be remote, but product managers and essentially scrum masters and all the administrators that are running the SCM repositories and the DevOps pipelines are all in the office. And it's essentially centralized work. That's not where we are anymore. We're moving from remote workers at the edge to remote workers at the center of what we do. And so, one of the implications of that is that we have to think about all the activities that you need to do from a DevOps perspective, or from an agile perspective. They have to be remotable. One of the things I found with some of the organizations I talked to early on was, there were things that administrators had to do that required them to go into the office, to reboot the SCM server as an example, or to make sure that the final approvals for production were made. 
And so, the code could be moved into the production environment. And so, it actually was a little bit difficult because they had to get specific approval from the HR organizations to actually be allowed to go into the office in some states. And so, one of the the results of that is that, while we've traditionally said tools are important, but they're not as important as culture, as structure, as organization, as process, I think we have to rethink that a little bit. Because to the extent that tools enable us to be more digitally organized and to achieve higher levels of digitization in our processes, and be able to support the idea of remote workers in the center. They're now on an equal footing with so many of the other levers that organizations have at their disposal. I'll give you another example. For years, we've said that the key to success with Agile at the team level is cross functional, co-located teams that are working together. Physically co-located. It's the easiest way to show agile success. We can't do that anymore. We can't be physically located at least for the foreseeable future. So, how do you take the low hanging fruits of an agile transformation and apply it in the time of COVID? Well, I think what you have to do is you have to look at what physical co-location has enabled in the past and understand that it's not so much the fact that we're together looking at each other across the table, it's the fact that we're able to get into a shared mind space. From a measurement perspective, we can have shared purpose, we can engage in high bandwidth communications. It's the spiritual aspect of that physical co-location that is actually important. So, one of the biggest things that organizations need to start to ask themselves is, how do we achieve spiritual co-location with our Agile teams, because we don't have the ease of physical co-location available to us anymore. >> Well, spiritual co-location is such an interesting kind of provocative phrase there, but something that probably was a challenge. Here we are seven, eight months in, for many organizations as you say, going from physical workspaces, co-location, being able to collaborate face to face to a light switch flip overnight, and this undefined indeterminate period of time where all we were living with was uncertainty. How does spiritual... When you talk about spiritual co-location in terms of collaboration and processes and technology. Help us unpack that and how are you seeing organizations adopt it? >> Yeah, it's a great question. And I think it goes to the very root of how organizations are trying to transform themselves to be more agile and to embrace DevOps. If you go all the way back to the original Agile Manifesto. There were four principles that were espoused. Individuals and interactions over processes and tools. That's still important, individuals and interactions are at the core of software development. Processes and tools that support those individuals in those interactions are more important than ever. Working software over comprehensive documentation. Working software is still more important. But when you are trying to onboard employees, and they can't come into the office, and they can't do the two day training session, and kind of understand how things work, and they can't just holler over theCUBE, to ask a question, you may need to invest a little bit more in documentation to help that onboarding process be successful in a remote context. Customer collaboration over contract negotiation. 
Absolutely still important. But employee collaboration is equally as important if you want to be spiritually co-located and if you want to have a shared purpose. And then, responding to change over following a plan. I think one of the things that's happened in a lot of organizations is we have focused so much of our DevOps effort around velocity. Getting faster, we need to run as fast as we can. Like that sprinter, okay? Trying to just power through it as quickly as possible. But as we shift to the marathon way of thinking, velocity is still important but agility becomes even more important. So when you have to create an application in three weeks to do track and trace for your employees, agility is more important than just flat out velocity. And so, changing some of the ways that we think about DevOps practices is important to make sure that that agility is there. For one thing, you have to defer decisions as far down the chain to the team level as possible. So those teams have to be empowered to make decisions. Because you can't have a program level meeting of six or seven teams in one large hall and say, here's the lay of the land, here's what we're going to do, here are our processes, and here are our guardrails. Those teams have to make decisions much more quickly. The developers are actually developing code in smaller chunks of flow. They have to be able to take two hours here, or 50 minutes there and do something useful. And so, the tools that support us have to become tolerant of the reality of how we're working. So, if they work in a way that it allows the team together to take as much autonomy as they can handle, to allow them to communicate in a way that delivers shared purpose, and allows them to adapt and master new technologies, then they're in the zone, they'll get spiritually connected. I hope that makes sense (chuckles). >> It does, I think we all could use some of that. But you talked about in the beginning, and I've talked to numerous companies during the pandemic on theCUBE about the productivity, or rather the number of hours worked, has gone way up for many roles, and at times they normally wouldn't work, late at night and on the weekends. So, but it's a cultural, it's a mind shift. To your point about DevOps focused on velocity, sprint, sprint, sprint, and now we have to. So that cultural shift is not an easy one for developers and even the biz folks to flip so quickly. What have you seen in terms of the velocity at which businesses are able to get more of that balance between the velocity, the sprint and the agility? >> I think at the core, this really comes down to management sensitivity. When everybody was in the office, you could kind of see the mental health of development teams by watching how they work, you can call it management by walking around, right? We can't do that, managers have to be more aware of what their teams are doing, because they're not going to see that developer doing a check in at 9:00 p.m. on a Friday, because that's what they had to do to meet the objectives. And they're going to have to find new ways to measure engagement and also potential burnout. A friend of mine once had a great metric that he called the Parking Lot Metric. It was how full was the parking lot at nine, and how full was it at five. And that gives you an indication of how engaged your developers are. What's the digital equivalent of the Parking Lot Metric in the time of COVID? It's commit stats, it's commit rates, it's the churn rate that we have in our code.
So we have this information, we may not be collecting it, but then the next question becomes how do we use that information? Do we use that information to say, well, this team isn't delivering at the same level of productivity as another team? Do we weaponize that data? Or do we use that data to identify impedances in the process? Why isn't a team working effectively? Is it because they have higher levels of family obligations, and they've got kids that are at home? Is it because they're working with hardware technology, and guess what, it's not easy to get the hardware technology into their home office, because it's in the lab, at the corporate office. Or they're trying to communicate halfway around the world. And they're communicating with an office lab that is also shut down. And the bandwidth just doesn't enable the level of high bandwidth communications. So, from a DevOps perspective, managers have to get much more sensitive to the exhaust that the DevOps tools are throwing off, but also how they're going to use that in a constructive way to prevent burnout. And then they also need to, if they're not already managing, or monitoring or measuring the level of developer engagement they have, they really need to start. Whether that's surveys around developer satisfaction, whether it's more regular social events where developers can kind of just get together and drink a beer and talk about what's going on in the project and monitoring who checks in and who doesn't. They have to work harder, I think than they ever have before. >> Well, and you mentioned burnout. And that's something that I think we've all faced in this time at varying levels, and it changes and it's a real, there's a tension in the air regardless of where you are. There's a challenge, as you mentioned, people having their kids as co-workers and fighting for bandwidth, because everyone is forced in this situation. I'd love to get your perspective on some businesses that have done this, well, this adaptation. What can you share in terms of some real world examples that might inspire the audience? >> Yeah, I'll start with Stack Overflow. They recently published a piece in the Journal of the ACM around some of the things that they had discovered. First of all, just a cultural philosophy. If one person is remote, everybody is remote. And you just think that way from the executive level. Social spaces, one of the things that they talk about doing is leaving the video conference room open at the team level all day long. And the team members will go on mute, so that they don't have to, that they don't necessarily have to be there with somebody else listening to them. But if they have a question, they can just pop off mute really quickly and ask the question and if anybody else knows the answer, it's kind of like being in that virtual pod, if you will. Even here at Forrester, one of the things that we've done is we've invested in social ceremonies. We've actually moved our team meetings on my analyst team from once every two weeks to weekly. And we have built more time in for socialization, just so we can see how we're doing. I think Microsoft has really made some good information available in how they've managed things like the onboarding process. I think Amanda Silver over there mentioned that a couple of weeks ago, a presentation they did that Microsoft's onboarded over 150,000 people since the start of COVID. If you don't have good remote onboarding processes, that's going to be a disaster. 
Now, they're not all developers, but if you think about it, everything from how you do the interviewing process, to how you get people their badges, to how they get their equipment. Security is another issue that they called out. Typically, IT security, security of developers machines, ends at the corporate desktop. But now since we're increasingly using our own machines, our own hardware, security organization's going to have to extend their security policies to cover employee devices. And that's caused them to scramble a little bit. So, the examples are out there. It's not a lot of like, we have to do everything completely differently. But it's a lot of subtle changes that have to be made. I'll give you another example. One of the things that we are seeing is that more and more organizations to deal with the challenges around agility with respect to delivering software and embracing low code tools. In fact, we see about 50% of firms are using low code tools right now, we predict it's going to be 75% by the end of next year. So, figuring out how your DevOps processes support an organization that might be using Mendix or OutSystems, or the Power Platform, building the front end of an application, like a track and trace application really, really quickly. But then hooking it up to your back end infrastructure. Does that happen completely outside the DevOps investments that you're making? And the agile processes that you're making? Or do you adapt your organization. Are hybrid teams now, teams that not just have professional developers, but also have business users that are doing some development with a low code tool. Those are the kinds of things that we have to be willing to entertain in order to shift the focus a little bit more toward the agility side, I think. >> A lot of obstacles but also a lot of opportunities for businesses to really learn, pay attention here, pivot and grow and hopefully some good opportunities for the developers and the business folks to just get better at what they're doing and learning to embrace spiritual co-location. Jeffrey, thank you so much for joining us on the program today, very insightful conversation. >> It's my pleasure, it's an important thing. Just remember, if you're going to run that marathon, break it into 26, 10 minute runs, take a walk break in between each, and you'll find that you'll get there. >> Digestible components, wise advice. Jeffrey Hammond, thank you so much for joining. For Jeffrey, I'm Lisa Martin. You're watching Broadcom's DevOps Virtual Forum. (bright upbeat music)
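
Jeffrey's "digital equivalent of the Parking Lot Metric", reading engagement and potential burnout signals out of commit stats rather than weaponizing them, is straightforward to prototype. The sketch below is one possible approach, assuming it runs inside a git working copy; the 30-day window and the 10pm-to-6am band are arbitrary choices, and the output is meant to prompt a conversation with the team, not to serve as a scorecard.

```python
# Rough "digital parking-lot metric": what share of each author's recent commits
# land late at night. A signal to investigate workload and burnout, not a
# productivity score. Assumes the script runs inside a git working copy.
import subprocess
from collections import Counter
from datetime import datetime

def late_night_share(since="30 days ago", start_hour=22, end_hour=6):
    log = subprocess.run(
        ["git", "log", f"--since={since}", "--pretty=format:%ae|%aI"],
        capture_output=True, text=True, check=True,
    ).stdout
    total, late = Counter(), Counter()
    for line in log.splitlines():
        if not line:
            continue
        author, iso_ts = line.split("|", 1)
        hour = datetime.fromisoformat(iso_ts).hour
        total[author] += 1
        if hour >= start_hour or hour < end_hour:
            late[author] += 1
    return {author: late[author] / total[author] for author in total}

if __name__ == "__main__":
    for author, share in sorted(late_night_share().items(), key=lambda kv: -kv[1]):
        print(f"{author}: {share:.0%} of commits between 10pm and 6am")
```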

Published Date : Nov 20 2020



You know, you call it management by walking around, right? We can't do that. Managers have to be more aware of what their teams are doing, because they're not going to see that developer doing a check-in at 9:00 PM on a Friday because that's what they had to do to meet the objectives. And they're going to have to find new ways to measure engagement and also potential burnout. A friend of mine once had a great metric that he called the parking lot metric: how full was the parking lot at nine, and how full was it at five?

And that gives you an indication of how engaged your developers are. What's the digital equivalent to the parking lot metric in the time of COVID? It's commit stats, it's commit rates, it's, you know, the churn rate that we have in our code. So we have this information, we may just not be collecting it, but then the next question becomes: how do we use that information? Do we use that information to say, well, this team isn't delivering at the same level of productivity as another team, do we weaponize that data? Or do we use that data to identify impediments in the process? Why isn't a team working effectively? Is it because they have higher levels of family obligations and they've got kids that are at home? Is it because they're working with, you know, hardware technology, and guess what, it's not easy to get the hardware technology into their home office because it's in the lab at the corporate office? Or they're trying to communicate, you know, halfway around the world,

and they're communicating with an office lab that is also shut down, and the bandwidth just doesn't enable the level of high-bandwidth communications. So from a DevOps perspective, managers have to get much more sensitive to the exhaust that the DevOps tools are throwing off, but also to how they're going to use that in a constructive way to prevent burnout. And then they also need to, if they're not already managing or monitoring or measuring the level of developer engagement they have, they really need to start, whether that's surveys around developer satisfaction, whether it's, you know, more regular social events where developers can just get together and drink a beer and talk about what's going on in the project, and monitoring who checks in and who doesn't. They have to work harder, I think, than they ever have before.

>>Well, and you mentioned burnout, and that's something that I think we've all faced in this time at varying levels, and it changes. There's a tension in the air regardless of where you are. There's a challenge, as you mentioned, people having their kids as coworkers and fighting for bandwidth, because everyone is forced into this situation. I'd love to get your perspective on some businesses that have done this adaptation well. What can you share in terms of some real-world examples that might inspire the audience?

>>Yeah, I'll start with Stack Overflow. They recently published a piece in the journal of the ACM around some of the things that they had discovered. First of all, just a cultural philosophy: if one person is remote, everybody is remote, and you just think that way from an executive level. Then, social spaces.
One of the things that they talk about doing is leaving a video conference room open at a team level all day long, and the team members, you know, we'll go on mute, you know, so that they don't have to, that they don't necessarily have to be there with somebody else listening to them. But if they have a question, they can just pop off mute really quickly and ask the question. And if anybody else knows the answer, it's kind of like being in that virtual pod. Uh, if you, uh, if you will, um, even here at Forrester, one of the things that we've done is we've invested in social ceremonies. >>We've actually moved our to our team meetings on, on my analyst team from, from once every two weeks to weekly. And we have built more time in for social Ajay socialization, just so we can see, uh, how, how, how we're doing. Um, I think Microsoft has really made some good, uh, information available in how they've managed things like the onboarding process. I think I'm Amanda silver over there mentioned that a couple of weeks ago when, uh, uh, a presentation they did that, uh, uh, Microsoft onboarded over 150,000 people since the start of COVID, if you don't have good remote onboarding processes, that's going to be a disaster. Now they're not all developers, but if you think about it, um, everything from how you do the interviewing process, uh, to how you get people, their badges, to how they get their equipment. Um, security is a, is another issue that they called out typically, uh, it security, um, the security of, of developers machines ends at, at, at the corporate desktop. >>But, you know, since we're increasingly using our own machines, our own hardware, um, security organizations kind of have to extend their security policies to cover, uh, employee devices, and that's caused them to scramble a little bit. Uh, so, so the examples are out there. It's not a lot of, like, we have to do everything completely differently, but it's a lot of subtle changes that, that have to be made. Um, I'll give you another example. Um, one of the things that, that we are seeing is that, um, more and more organizations to deal with the challenges around agility, with respect to delivering software, embracing low-code tools. In fact, uh, we see about 50% of firms are using low-code tools right now. We predict it's going to be 75% by the end of next year. So figuring out how your dev ops processes support an organization that might be using Mendix or OutSystems, or, you know, the power platform building the front end of an application, like a track and trace application really, really quickly, but then hooking it up to your backend infrastructure. Does that happen completely outside the dev ops investments that you're making and the agile processes that you're making, or do you adapt your organization? Um, our hybrid teams now teams that not just have professional developers, but also have business users that are doing some development with a low-code tool. Those are the kinds of things that we have to be, um, willing to, um, to entertain in order to shift the focus a little bit more toward the agility side, I think >>Lot of obstacles, but also a lot of opportunities for businesses to really learn, pay attention here, pivot and grow, and hopefully some good opportunities for the developers and the business folks to just get better at what they're doing and learning to embrace spiritual co-location Jeffrey, thank you so much for joining us on the program today. Very insightful conversation. >>My pleasure. 
It's an important thing. Just remember, if you're going to run that marathon, break it into 26 ten-minute runs, take a walk break in between each, and you'll find that you'll get there.

>>Digestible components, wise advice. Jeffrey Hammond, thank you so much for joining. For Jeffrey, I'm Lisa Martin. You're watching Broadcom's DevOps Virtual Forum.

>>From around the globe, it's theCUBE, with digital coverage of DevOps Virtual Forum, brought to you by Broadcom.

>>Continuing our conversations here at Broadcom's DevOps Virtual Forum, Lisa Martin here, pleased to welcome back to the program Serge Lucio, the general manager of the enterprise software division at Broadcom. Hey, Serge, welcome.

>>Thank you. Good to be here.

>>So I know you were just participating in the BizOps Manifesto that happened recently. I just had the chance to talk with Jeffrey Hammond, and he unlocked this really interesting concept, but I wanted to get your thoughts on spiritual co-location as really a necessity for BizOps to succeed in this unusual time in which we're living. What are your thoughts on spiritual co-location in terms of cultural change versus adoption of technologies?

>>Yeah, it's quite interesting, right? When we think about the major impediments to DevOps implementation, it's all about culture, right? And so over the last 20 years we've been talking about silos, we've been talking about the need for these teams to align. In many ways it's not so much about these teams aligning, but about being in the same car, in the same boat, right? It's really about fusing those teams around kind of a common purpose, a common objective. So to me, this is really about changing this culture where people start to look at kind of OKRs as the key objective that drives the entire team. Now, what it means in practice is that we need to change a lot of behaviors, right? It's not about hierarchy, it's not about roles. It's about, you know, who can do what and when, and driving a bias towards action. It also means, especially in these difficult times when it becomes very hard, right, to drive collaboration between these teams, that I think there's a significant role that tools especially can play in terms of providing that context and feedback to teams, to reinforce that spiritual co-location.

>>Well, and talking about culture: we're so used to talking about DevOps with respect to velocity, all about speed here. But of course this time everything changed so quickly, and going from physical spaces to everybody being remote really does take a shift. It's very different, and you can't replicate it digitally, but there are collaboration tools that can really be essential to help that cultural shift, right?

>>Yeah. So in 2020 we tend to talk about collaboration in a very mundane way. Like, of course we can use Zoom, we can all get into the same room. But the point, I think, when Jeff says spiritual co-location, it's really about: do we all share the same objective? Do we have a shared view of, for instance, our pipeline, right?
When you talk about DevOps, we probably all start thinking about this continuous delivery pipeline that basically drives the automation, the orchestration across the team. But beyond just thinking about a pipeline, at the end of the day it's all about what is the mean time to feedback to these teams. If I'm a developer and I commit code, how long does it take, you know, for that code to be processed through the pipeline, and when can I get feedback? If I'm a finance person who is funding a product or a project, what is my mean time to feedback?

And so when we think about the pipeline, I think what's been really inspiring to me in the last year or so is that there is much more adoption of the DORA metrics. There is way more of a focus around value stream management. And to me, when we talk about collaboration, it's really a balance: how do you provide the feedback to the different stakeholders across the life cycle in a very timely manner? And that's what we need to get to in terms of this notion of collaboration. It's not so much about people being in the same physical space. It's about, you know, when I check in code, does the system automatically identify what I'm going to break? If I'm about to release an application, how can the system help me reduce my change failure rate, because it's able to predict that some issue was introduced in the application or work product? So I think there's a great role for technology, and AI can be leveraged to actually provide that new level of collaboration.

>>So we'll get to AI in a second, but I'm curious, what are some of the metrics you think really matter right now, as organizations are still in some form of transformation to this new, almost 100% remote workforce?

>>So I'll just say first, I'm not a big fan of metrics. And the reason is that, you know, you can look at a change failure rate, right, or a lead time or a cycle time, and those are interesting metrics, right? The trend on a metric is absolutely critical, but what's more important is to get to the root cause: what is causing that metric to degrade or improve over time. And so I'm much more interested, and we at Broadcom are much more interested, in understanding what are the patterns that contribute to this. So I'll give you a very mundane example. You know, we know that cycle time is heavily influenced by organizational boundaries. So we talk a lot about silos, but we've worked with many of our customers doing value stream mapping, and oftentimes what you see is that the boundaries of your organization create a lot of idle time, right? So to me, it's less about the metrics. I think the DORA metrics are a pretty valid set of metrics, but what's way more important is to understand the anti-patterns, the things that we can detect through the data that are actually affecting those metrics. And over the last 10, 20 years we've learned a lot about what the anti-patterns are within our large enterprise customers. And there are plenty of them.

>>What are some of the things that you're seeing now with respect to patterns that have developed over the last seven to eight months?

>>So I think the two areas which clearly are evolving very quickly are, first, on the front end of the life cycle, where DevOps is more and more embracing value stream management and value stream mapping. And I think what's interesting is that in many ways the product is becoming the new silo. The notion of a product is very difficult by itself to actually define; people are starting to recognize that a value stream is not its own little kind of island, that in reality, when I define a product, this product oftentimes has dependencies on other products, and that in fact you're looking at kind of a network of value streams, if you will. So even on that end there is clearly a new set, if you will, of anti-patterns, where products are being defined as a set of OKRs, they have interdependencies, and you have a new set of silos. On the ops end, there's the big movement to SRE, the SRE space, where I think there is a cultural clash: while the DevOps side is very much embracing this notion of OKRs and value stream mapping and value stream management,

on the other end you have the IT operations teams, who still think business services, right? For them, they think about configuration items, they think about infrastructure. And so, you know, it's not uncommon to see teams where the operations team is still thinking about hundreds of thousands, tens of thousands of business services. And so there's this boundary where, while SRE is being put in place and there's lots of thinking about what kind of metrics can be defined, I think, you know, going back to culture, there's a lot of cultural evolution that's still required for the operations teams.

>>And that's a hard thing. Cultural transformation in any industry, pandemic or not, is a challenging thing. You talked about AI and automation a few minutes ago. How do you think those technologies can be leveraged by DevOps leaders to influence their successes and their ability to collaborate, maybe see eye to eye with the SREs?

>>Yeah, so there are kind of two sides to it. Even for myself, as a leader of a, you know, 1,500-person organization, there are a number of things I don't see, right, on a daily basis. And I think the technologies that we have at our disposal today around AI are able to mine a lot of data and expose a lot of issues that, as leaders, we may not be aware of. And some of these are pretty easy to understand, right? We all think we're agile, and yet when you start to understand, for instance, what is the work in progress during the sprint, when you start to analyze the data, you can detect, for instance, that maybe the teams are overcommitted, that there is too much work in progress.

You can start to identify interdependencies, either from a technology or from a people point of view, which were hidden; you can start to understand that maybe the change failure rate is degrading. So I believe there's a fundamental role to be played by the tools to expose, again, these anti-patterns, to make these things visible to the teams, and to be able to even compare teams, right? One of the things that's amazing is that now we have access to tons of data, not just from a given customer, but across a large number of customers,

and so we can start to compare how all of these teams operate, and what's working and what's not working.

>>Thoughts on AI and automation as a facilitator of spiritual co-location?

>>Yeah, absolutely. You know, the problem we all face is the unknown, right? The velocity, volume, and variety of the data. Every day we don't necessarily completely appreciate what the impact of our actions is, right? And so AI can really act as a safety net that enables us to understand the impact of our actions. And so, yeah, in many ways the ability to be informed in a timely manner, to be able to interact with people on the basis of data and collaborate on the data and act on it, I think is a very powerful enabler in that respect. I mean, I've seen countless times, for instance at the SRE boundary, the ability to basically show what the quality attributes of an incoming release are, right, and exposing that to an operations person and an SRE person; enabling that collaboration dialogue through data is a very, very powerful tool.

>>Do you have any recommendations for how teams, you know, the SRE folks, the DevOps folks, can use AI and automation in the right ways to be successful, rather than in ways that aren't going to be productive?

>>Yeah. So to me, part of the question is that when we talk about data, there are different ways you can use data, right? You can do a lot of analytics, predictive analytics. I think there's a tendency to look at, let's say, a specific KPI, like an availability KPI or change failure rate, and to basically do a regression analysis, projecting how these things are going to evolve in the future. To me, that's a bad approach. The reason why I fundamentally think it's a bad approach is because we are dealing with systems: the way we develop software is a nonlinear kind of system, right? Software development is nonlinear in nature. And so I think that's probably the worst approach, to focus on the metrics alone. On the other hand,

if you start to actually understand, at a more granular level, which are the things that are contributing to this, so if you start to understand, for instance, that whenever you affect a specific part of the application, that translates into production issues. We actually have a customer who identified that over 50% of their unplanned outages were related to specific components in their architecture, and whenever these components were changed, it resulted in those unplanned outages. So if you start to be able to establish causality, right, cause and effect between data across the life cycle, I think this is the right way to use AI. And so for me, it's way more of a classification problem: what are the classes of problems that exist and affect things, as opposed to predictive analytics, which I don't think is as powerful.

>>So I mentioned in the beginning of our conversation that you just came off the BizOps Manifesto; you're one of the authors of that. I want to get your thoughts on DevOps and BizOps overlapping, complementing each other. From the BizOps perspective, what does it mean to the future of DevOps?

>>Yeah, so it's interesting, right? If you think about DevOps, there's no founding document, right? We can refer to the Phoenix Project, and there are a set of documents which have been written, but in many ways there's no clear definition of what DevOps is. If you go to the DevOps Institute today, you'll see that they have specific trainings, for instance, on value stream management, on SRE. And so in many ways the problem we have as an industry is that there are sets of practices: agile, DevOps, SRE, value stream management, ITIL, right? And we all basically talk about the same things. We all talk about essentially accelerating, and about the mean time to feedback, but yet we don't have a common framework to talk about that. The other key thing is that we had to wait for Gene Kim's last book to really start to get into the business aspect, right,

and for value stream mapping to start to emerge, for us to start as an industry, as IT, to think about what is our connection with the business, what's our purpose, right? And ultimately it's all about driving these business outcomes. And so to me, BizOps is really about putting a lens on this critical element: that it's not business and IT, that we in fact need to fuse business and IT, that IT needs to transform itself to recognize that it's a value generator, right, not a cost center. And so the relationship, to me, is that BizOps provides kind of this overarching framework, if you will, that sets the context for the reason for IT to exist and the core values and principles that it needs to embrace to, again, change from a cost center to a value center. And then we need to start to use this as a way to unify some of the core practices, whether it's agile, DevOps, value stream mapping, SRE. So I think over time my hope is that we start to optimize a lot of our practices, language, and cultural elements.

>>Last question, Serge, in the last few seconds we have here, talking about the relation between BizOps and DevOps: what do you think as DevOps evolves, and as you consider your insights, what should our audience keep their eyes on in the next six to 12 months?

>>So to me, the key challenge for the industry is really this: we are seeing a very rapid shift towards project to product, right? What we don't want to do is recreate these new silos, these hard silos. So that's one of the big changes that I think we need to be really careful about, because ultimately it is about culture. It's not about how we segment the work, right; it's through culture that we can overcome silos. So back to, I guess, Jeffrey's concept of spiritual co-location, I think it's really about that too. It's really about focusing on the business outcomes, on aligning, on driving engagement across the teams, but not creating a new set of silos which, instead of being vertical, are going to be these horizontal products.

>>Great advice, Serge, looking at culture as kind of a way of really addressing and helping to reduce those challenges. We thank you so much for sharing your insights and your time at today's DevOps Virtual Forum.
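The classification idea Serge describes above, establishing cause and effect between the components a change touches and later unplanned outages instead of projecting a KPI forward, can be made concrete with a small sketch. This is a hypothetical illustration in Python, not Broadcom's implementation: the change history, the component names, and the simple frequency-based scoring are all assumptions for the example.

```python
from collections import defaultdict

# Hypothetical change history: which components each change touched and
# whether the change was later tied to an unplanned outage.
history = [
    {"components": {"billing-api", "auth"}, "caused_outage": True},
    {"components": {"billing-api"},         "caused_outage": True},
    {"components": {"web-ui"},              "caused_outage": False},
    {"components": {"auth"},                "caused_outage": False},
    {"components": {"web-ui", "search"},    "caused_outage": False},
]

def outage_rate_by_component(changes):
    """Fraction of past changes touching each component that led to an outage."""
    touched = defaultdict(int)
    failed = defaultdict(int)
    for change in changes:
        for comp in change["components"]:
            touched[comp] += 1
            if change["caused_outage"]:
                failed[comp] += 1
    return {comp: failed[comp] / touched[comp] for comp in touched}

def classify_change(components, rates, threshold=0.5):
    """Label an incoming change by the worst track record of the components it touches."""
    risk = max((rates.get(c, 0.0) for c in components), default=0.0)
    return ("high-risk" if risk >= threshold else "normal"), risk

rates = outage_rate_by_component(history)
label, score = classify_change({"billing-api", "web-ui"}, rates)
print(label, round(score, 2))  # billing-api's history flags this change as high-risk
```

The same shape of data could feed a real classifier; even a per-component outage rate like this surfaces the pattern Serge mentions, where a small set of components accounts for most unplanned outages.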
>>Thank you. Thanks for your time. >>I'll be right back >>From around the globe it's the cube with digital coverage of devops virtual forum brought to you by Broadcom. >>Welcome to Broadcom's DevOps virtual forum, I'm Lisa Martin, and I'm joined by another Martin, very socially distanced from me all the way coming from Birmingham, England is Glynn Martin, the head of QA transformation at BT. Glynn, it's great to have you on the program. Thank you, Lisa. I'm looking forward to it. As we said before, we went live to Martins for the person one in one segment. So this is going to be an interesting segment guys, what we're going to do is Glynn's going to give us a really kind of deep inside out view of devops from an evolution perspective. So Glynn, let's start. Transformation is at the heart of what you do. It's obviously been a very transformative year. How have the events of this year affected the >> transformation that you are still responsible for driving? Yeah. Thank you, Lisa. I mean, yeah, it has been a difficult year. >>Um, and although working for BT, which is a global telecommunications company, um, I'm relatively resilient, I suppose, as a, an industry, um, through COVID obviously still has been affected and has got its challenges. And if anything, it's actually caused us to accelerate our transformation journey. Um, you know, we had to do some great things during this time around, um, you know, in the UK for our emergency and, um, health workers give them unlimited data and for vulnerable people to support them. And that's spent that we've had to deliver changes quickly. Um, but what we want to be able to do is deliver those kinds of changes quickly, but sustainably for everything that we do, not just because there's an emergency. Um, so we were already on the kind of journey to agile, but ever more important now that we are, we are able to do those, that kind of work, do it more quickly. >>Um, and that it works because the, the implications of it not working is, can be terrible in terms of you know, we've been supporting testing centers,  new hospitals to treat COVID patients. So we need to get it right. And then therefore the coverage of what we do, the quality of what we do and how quickly we do it really has taken on a new scale and what was already a very competitive market within the telco industry within the UK. Um, you know, what I would say is that, you know, we are under pressure to deliver more value, but we have small cost challenges. We have to obviously, um, deal with the fact that, you know, COVID 19 has hit most industries kind of revenues and profits. So we've got this kind of paradox between having less costs, but having to deliver more value quicker and  to higher quality. So yeah, certainly the finances is, um, on our minds and that's why we need flexible models, cost models that allow us to kind of do growth, but we get that growth by showing that we're delivering value. Um, especially in these times when there are financial challenges on companies. So one of the things that I want to ask you about, I'm again, looking at DevOps from the inside >>Out and the evolution that you've seen, you talked about the speed of things really accelerating in this last nine months or so. When we think dev ops, we think speed. But one of the things I'd love to get your perspective on is we've talked about in a number of the segments that we've done for this event is cultural change. 
What are some of the things that you've seen there as, as needing to get, as you said, get things right, but done so quickly to support essential businesses, essential workers. How have you seen that cultural shift? >>Yeah, I think, you know, before test teams for themselves at this part of the software delivery cycle, um, and actually now really our customers are expecting that quality and to deliver for our customers what they want, quality has to be ingrained throughout the life cycle. Obviously, you know, there's lots of buzzwords like shift left. Um, how do we do shift left testing? Um, but for me, that's really instilling quality and given capabilities shared capabilities throughout the life cycle that drive automation, drive improvements. I always say that, you know, you're only as good as your lowest common denominator. And one thing that we were finding on our dev ops journey was that we  would be trying to do certain things quick, we had automated build, automated tests. But if we were taking a weeks to create test scripts, or we were taking weeks to manually craft data, and even then when we had taken so long to do it, that the coverage was quite poor and that led to lots of defects later on in the life cycle, or even in our production environment, we just couldn't afford to do that. >>And actually, focusing on continuous testing over the last nine to 12 months has really given us the ability to deliver quickly across the whole life cycle. And therefore actually go from doing a kind of semi agile kind of thing, where we did the user stories, we did a few of the kind of agile ceremonies, but we weren't really deploying any quicker into production because our stakeholders were scared that we didn't have the same control that we had when we had more waterfall releases. And, you know, when we didn't think of ourselves. So we've done a lot of work on every aspect, um, especially from a testing point of view, every aspect of every activity, rather than just looking at automated tests, you know, whether it is actually creating the test in the first place, whether it's doing security testing earlier in the lot and performance testing in the life cycle, et cetera. So, yeah,  it's been a real key thing that for CT, for us to drive DevOps, >>Talk to me a little bit about your team. What are some of the shifts in terms of expectations that you're experiencing and how your team interacts with the internal folks from pipeline through life cycle? >>Yeah, we've done a lot of work on this. Um, you know, there's a thing that I think people will probably call it a customer experience gap, and it reminds me of a Gilbert cartoon, where we start with the requirements here and you're almost like a Chinese whisper effects and what we deliver is completely different. So we think the testing team or the delivery teams, um, know in our teeth has done a great job. This is what it said in the acceptance criteria, but then our customers are saying, well, actually that's not working this isn't working and there's this kind of gap. Um, we had a great launch this year of agile requirements, it's one of the Broadcom tools. And that was the first time in, ever since I remember actually working within BT, I had customers saying to me, wow, you know, we want more of this. >>We want more projects to have extra requirements design on it because it allowed us to actually work with the business collaboratively. 
I mean, we talk about collaboration, but how do we actually, you know, do that and have something that both the business and technical people can understand. And we've actually been working with the business , using agile requirements designer to really look at what the requirements are, tease out requirements we hadn't even thought of and making sure that we've got high levels of test coverage. And what we actually deliver at the end of it, not only have we been able to generate tests more quickly, but we've got much higher test coverage and also can more smartly, using the kind of AI within the tool and then some of the other kinds of pipeline tools, actually deliver to choose the right tasks, and actually doing a risk based testing approach. So that's been a great launch this year, but just the start of many kinds of things that we're doing >>Well, what I hear in that, Glynn is a lot of positives that have come out of a very challenging situation. Talk to me about it. And I liked that perspective. This is a very challenging time for everybody in the world, but it sounds like from a collaboration perspective you're right, we talk about that a lot critical with devops. But those challenges there, you guys were able to overcome those pretty quickly. What other challenges did you face and figure out quickly enough to be able to pivot so fast? >>I mean, you talked about culture. You know, BT is like most companies  So it's very siloed. You know we're still trying to work to become closer as a company. So I think there's a lot of challenges around how would you integrate with other tools? How would you integrate with the various different technologies. And BT, we have 58 different IT stacks. That's not systems, that's stacks, all of those stacks can have hundreds of systems. And we're trying to, we've got a drive at the moment, a simplified program where we're trying to you know, reduce that number to 14 stacks. And even then there'll be complexity behind the scenes that we will be challenged more and more as we go forward. How do we actually highlight that to our users? And as an it organization, how do we make ourselves leaner, so that even when we've still got some of that legacy, and we'll never fully get rid of it and that's the kind of trade off that we have to make, how do we actually deal with that and hide that from our users and drive those programs, so we can, as I say, accelerate change,  reduce that kind of waste and that kind of legacy costs out of our business. You know, the other thing as well, I'm sure telecoms is probably no different to insurance or finance. When you take the number of products that we do, and then you combine them, the permutations are tens and hundreds of thousands of products. So we, as a business are trying to simplify, we are trying to do that in an agile way. >>And haven't tried to do agile in the proper way and really actually work at pace, really deliver value. So I think what we're looking more and more at the moment is actually  more value focused. Before we used to deliver changes sometimes into production. Someone had a great idea, or it was a great idea nine months ago or 12 months ago, but actually then we ended up deploying it and then we'd look at the users, the usage of that product or that application or whatever it is, and it's not being used for six months. So we haven't got, you know, the cost of the last 12 months. 
We certainly haven't gotten room for that kind of waste and, you know, for not really understanding the value of changes that we are doing. So I think that's the most important thing of the moment, it's really taking that waste out. You know, there's lots of focus on things like flow management, what bits of our process are actually taking too long. And we've started on that journey, but we've got a hell of a long way to go. But that involves looking at every aspect of the software delivery cycle. >> Going from, what 58 IT stacks down to 14 or whatever it's going to be, simplifying sounds magical to everybody. It's a big challenge. What are some of the core technology capabilities that you see really as kind of essential for enabling that with this new way that you're working? >>Yeah. I mean, I think we were started on a continuous testing journey, and I think that's just the start. I mean as I say, looking at every aspect of, you know, from a QA point of view is every aspect of what we do. And it's also looking at, you know, we've started to branch into more like AI, uh, AI ops and, you know, really the full life cycle. Um, and you know, that's just a stepping stone to, you know, I think autonomics is the way forward, right. You know, all of this kind of stuff that happens, um, you know, monitoring, uh, you know, watching the systems what's happening in production, how do we feed that back? How'd you get to a point where actually we think about change and then suddenly it's in production safely, or if it's not going to safety, it's automatically backing out. So, you know, it's a very, very long journey, but if we want to, you know, in a world where the pace is in ever-increasing and the demands for the team, and, you know, with the pressures on, at the moment where we're being asked to do things, uh, you know, more efficiently and as lean as possible, we need to be thinking about every part of the process and how we put the kind of stepping stones in place to lead us to a more automated kind of, um, you know, um, the future. >>Do you feel that that planned outcomes are starting to align with what's delivered, given this massive shift that you're experiencing? >>I think it's starting to, and I think, you know, as I say, as we look at more of a value based approach, um, and, um, you know, as I say, print, this was a kind of flow management. I think that that will become ever, uh, ever more important. So, um, I think it starting to people certainly realize that, you know, teams need to work together, you know, the kind of the cousin between business and it, especially as we go to more kind of SAS based solutions, low code solutions, you know, there's not such a gap anymore, actually, some of our business partners that expense to be much more tech savvy. Um, so I think, you know, this is what we have to kind of appreciate what is its role, how do we give the capabilities, um, become more of a centers of excellence rather than actually doing mounds amounts of work. And for me, and from a testing point of view, you know, mounds and mounds of testing, actually, how do we automate that? How do we actually generate that instead of, um, create it? I think that's the kind of challenge going forward. >>What are some, as we look forward, what are some of the things that you would like to see implemented or deployed in the next, say six to 12 months as we hopefully round a corner with this pandemic? 
>>Yeah, I think, um, you know, certainly for, for where we are as a company from a QA perspective, we are, um, you let's start in bits that we do well, you know, we've started creating, um, continuous delivery and DevOps pipelines. Um, there's still manual aspects of that. So, you know, certainly for me, I I've challenged my team with saying how do we do an automated journey? So if I put a requirement in JIRA or rally or wherever it is and why then click a button and, you know, with either zero touch for one such, then put that into production and have confidence that, that has been done safely and that it works and what happens if it doesn't work. So, you know, that's, that's the next, um, the next few months, that's what our concentration, um, is, is about. But it's also about decision-making, you know, how do you actually understand those value judgments? >>And I think there's lots of the things dev ops, AI ops, kind of that always ask aspects of business operations. I think it's about having the information in one place to make those kinds of decisions. How does it all try and tie it together? As I say, even still with kind of dev ops, we've still got elements within my company where we've got lots of different organizations doing some, doing similar kinds of things, but they're all kind of working in silos. So I think having AI ops as it comes more and more to the fore as we go to cloud, and that's what we need to, you know, we're still very early on in our cloud journey, you know, so we need to make sure the technologies work with cloud as well as you can have, um, legacy systems, but it's about bringing that all together and having a full, visible pipeline, um, that everybody can see and make decisions. >>You said the word confidence, which jumped out at me right away, because absolutely you've got to have be able to have confidence in what your team is delivering and how it's impacting the business and those customers. Last question then for you is how would you advise your peers in a similar situation to leverage technology automation, for example, dev ops, to be able to gain the confidence that they're making the right decisions for their business? >>I think the, the, the, the, the approach that we've taken actually is not started with technology. Um, we've actually taken a human centered design, uh, as a core principle of what we do, um, within the it part of BT. So by using human centered design, that means we talk to our customers, we understand their pain points, we map out their current processes. Um, and then when we mapped out what this process does, it also understand their aspirations as well, you know? Um, and where do they want to be in six months? You know, do they want it to be, um, more agile and, you know, or do they want to, you know, is, is this a part of their business that they want to do one better? We actually then looked at why that's not running well, and then see what, what solutions are out there. >>We've been lucky that, you know, with our partnership, with Broadcom within the payer line, lots of the tools and the PLA have directly answered some of the business's problems. But I think by having those conversations and actually engaging with the business, um, you know, especially if the business hold the purse strings, which in, in, uh, you know, in some companies include not as they do there is that kind of, you know, almost by understanding their, their pain points and then starting, this is how we can solve your problem. 
Um, is we've, we've tended to be much more successful than trying to impose something and say, well, here's the technology that they don't quite understand. It doesn't really understand how it kind of resonates with their problems. So I think that's the heart of it. It's really about, you know, getting, looking at the data, looking at the processes, looking at where the kind of waste is. >>And then actually then looking at the right solutions. Then, as I say, continuous testing is massive for us. We've also got a good relationship with Apple towards looking at visual AI. And actually there's a common theme through that. And I mean, AI is becoming more and more prevalent. And I know, you know, sometimes what is AI and people have kind of this semantics of, is it true AI or not, but it's certainly, you know, AI machine learning is becoming more and more prevalent in the way that we work. And it's allowing us to be much more effective, be quicker in what we do and be more accurate. And, you know, whether it's finding defects running the right tests or, um, you know, being able to anticipate problems before they're happening in a production environment. >>Well, thank you so much for giving us this sort of insight outlook at dev ops sharing the successes that you're having, taking those challenges, converting them to opportunities and forgiving folks who might be in your shoes, or maybe slightly behind advice enter. They appreciate it. We appreciate your time. >>Well, it's been an absolute pleasure, really. Thank you for inviting me. I have a extremely enjoyed it. So thank you ever so much. >>Excellent. Me too. I've learned a lot for Glenn Martin. I'm Lisa Martin. You're watching the cube >>Driving revenue today means getting better, more valuable software features into the hands of your customers. If you don't do it quickly, your competitors as well, but going faster without quality creates risks that can damage your brand destroy customer loyalty and cost millions to fix dev ops from Broadcom is a complete solution for balancing speed and risk, allowing you to accelerate the flow of value while minimizing the risk and severity of critical issues with Broadcom quality becomes integrated across the entire DevOps pipeline from planning to production, actionable insights, including our unique readiness score, provide a three 60 degree view of software quality giving you visibility into potential issues before they become disasters. Dev ops leaders can manage these risks with tools like Canary deployments tested on a small subset of users, or immediately roll back to limit the impact of defects for subsequent cycles. Dev ops from Broadcom makes innovation improvement easier with integrated planning and continuous testing tools that accelerate the flow of value product requirements are used to automatically generate tests to ensure complete quality coverage and tests are easily updated. >>As requirements change developers can perform unit testing without ever leaving their preferred environment, improving efficiency and productivity for the ultimate in shift left testing the platform also integrates virtual services and test data on demand. Eliminating two common roadblocks to fast and complete continuous testing. When software is ready for the CIC CD pipeline, only DevOps from Broadcom uses AI to prioritize the most critical and relevant tests dramatically improving feedback speed with no decrease in quality. This release is ready to go wherever you are in your DevOps journey. 
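The risk-based testing and AI-driven test prioritization mentioned in this segment can be illustrated with a deliberately simplified heuristic: rank tests by how often they have failed recently and by how much their coverage overlaps the files changed in the current commit. This is a hypothetical Python sketch, not the actual Broadcom or BT tooling; the test names, file paths, and weights are invented for the example.

```python
# Hypothetical test metadata: recent failure rate and the source files each test exercises.
tests = {
    "test_checkout_flow":  {"recent_failure_rate": 0.30, "covers": {"cart.py", "payment.py"}},
    "test_login":          {"recent_failure_rate": 0.05, "covers": {"auth.py"}},
    "test_search_ranking": {"recent_failure_rate": 0.10, "covers": {"search.py", "ranking.py"}},
    "test_profile_page":   {"recent_failure_rate": 0.02, "covers": {"profile.py"}},
}

def prioritize(tests, changed_files, weight_history=0.4, weight_overlap=0.6):
    """Order tests so the ones most relevant to the current change run first."""
    ranked = []
    for name, meta in tests.items():
        overlap = len(meta["covers"] & changed_files) / len(meta["covers"])
        score = weight_history * meta["recent_failure_rate"] + weight_overlap * overlap
        ranked.append((score, name))
    return [name for _, name in sorted(ranked, reverse=True)]

changed = {"payment.py", "ranking.py"}
print(prioritize(tests, changed))
# Tests touching the changed files, and historically shaky ones, come out on top.
```

In practice the inputs would come from test-result history and coverage data; the point is only that "run the most relevant tests first" can start as a transparent scoring rule before any heavier machine learning is applied.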
Broadcom helps maximize innovation velocity while managing risk. So you can deploy ideas into production faster and release with more confidence from around the globe. It's the queue with digital coverage of dev ops virtual forum brought to you by Broadcom. >>Hi guys. Welcome back. So we have discussed the current state and the near future state of dev ops and how it's going to evolve from three unique perspectives. In this last segment, we're going to open up the floor and see if we can come to a shared understanding of where dev ops needs to go in order to be successful next year. So our guests today are, you've seen them all before Jeffrey Hammond is here. The VP and principal analyst serving CIO is at Forester. We've also Serge Lucio, the GM of Broadcom's enterprise software division and Glenn Martin, the head of QA transformation at BT guys. Welcome back. Great to have you all three together >>To be here. >>All right. So we're very, we're all very socially distanced as we've talked about before. Great to have this conversation. So let's, let's start with one of the topics that we kicked off the forum with Jeff. We're going to start with you spiritual co-location that's a really interesting topic that we've we've uncovered, but how much of the challenge is truly cultural and what can we solve through technology? Jeff, we'll start with you then search then Glen Jeff, take it away. >>Yeah, I think fundamentally you can have all the technology in the world and if you don't make the right investments in the cultural practices in your development organization, you still won't be effective. Um, almost 10 years ago, I wrote a piece, um, where I did a bunch of research around what made high-performance teams, software delivery teams, high performance. And one of the things that came out as part of that was that these teams have a high level of autonomy. And that's one of the things that you see coming out of the agile manifesto. Let's take that to today where developers are on their own in their own offices. If you've got teams where the team itself had a high level of autonomy, um, and they know how to work, they can make decisions. They can move forward. They're not waiting for management to tell them what to do. >>And so what we have seen is that organizations that embraced autonomy, uh, and got their teams in the right place and their teams had the information that they needed to make the right decisions have actually been able to operate pretty well, even as they've been remote. And it's turned out to be things like, well, how do we actually push the software that we've created into production that would become the challenge is not, are we writing the right software? And that's why I think the term spiritual co-location is so important because even though we may be physically distant, we're on the same plane, we're connected from a, from, from a, a shared purpose. Um, you know, surgeon, I worked together a long, long time ago. So it's been what almost 15, 16 years since we were at the same place. And yet I would say there's probably still a certain level of spiritual co-location between us, uh, because of the shared purposes that we've had in the past and what we've seen in the industry. And that's a really powerful tool, uh, to build on. 
So what do tools play as part of that, to the extent that tools make information available, to build shared purpose on to the extent that they enable communication so that we can build that spiritual co-location to the extent that they reinforce the culture that we want to put in place, they can be incredibly valuable, especially when, when we don't have the luxury of physical locate physical co-location. Okay. That makes sense. >>It does. I shouldn't have introduced us. This last segment is we're all spiritually co-located or it's a surge, clearly you're still spiritually co located with jump. Talk to me about what your thoughts are about spiritual of co-location the cultural impact and how technology can move it forward. >>Yeah. So I think, well, I'm going to sound very similar to Jeff in that respect. I think, you know, it starts with kind of a shared purpose and the other understanding, Oh, individuals teams, uh, contributed to kind of a business outcome, what is our shared goal or shared vision? What's what is it we're trying to achieve collectively and keeping it kind of aligned to that? Um, and so, so it's really starts with that now, now the big challenge, always these over the last 20 years, especially in large organization, there's been specialization of roles and functions. And so we, we all that started to basically measure which we do, uh, on a daily basis using metrics, which oftentimes are completely disconnected from kind of a business outcome or purpose. We, we kind of reverted back to, okay, what is my database all the time? What is my cycle time? >>Right. And, and I think, you know, which we can do or where we really should be focused as an industry is to start to basically provide a lens or these different stakeholders to look at what they're doing in the context of kind of these business outcomes. So, um, you know, probably one of my, um, favorites experience was to actually weakness at one of a large financial institution. Um, you know, Tuesday Golder's unquote development and operations staring at the same data, right. Which was related to, you know, in calming changes, um, test execution results, you know, Coverity coverage, um, official liabilities and all the all ran. It could have a direction level links. And that's when you start to put these things in context and represent that to you in a way that these different stakeholders can, can look at from their different lens. And, uh, and it can start to basically communicate and, and understand have they joined our company to, uh, to, to that kind of common view or objective. >>And Glen, we talked a lot about transformation with you last time. What are your thoughts on spiritual colocation and the cultural part, the technology impact? >>Yeah, I mean, I agree with Jeffrey that, you know, um, the people and culture, the most important thing, actually, that's why it's really important when you're transforming to have partners who have the same vision as you, um, who, who you can work with, have the same end goal in mind. And w I've certainly found that with our, um, you know, continuing relationship with Broadcom, what it also does though, is although, you know, tools can accelerate what you're doing and can join consistency. 
we've seen within Simplify, which is BT's flagship transformation program where we're trying, as the name says, to simplify the number of systems, stacks and products we have, that at the moment we've got different value streams within that program that still have organizational silos, that are reinventing the wheel, that are still doing things manually.

>>So in order to try and bring that consistency, we need the right tools: tools that are enterprise grade but can flex to work within BT, which is such a complex and varied environment depending on what area of BT you're in, whether it's consumer, whether it's mobile, whether it's large global or government organizations. We've found that we need tools that can drive that consistency but also flex to greenfield and brownfield technologies. So it's really important, as I say, for a number of different reasons, that you have the right partner to drive the right culture, who has the same vision, but also who has the tool sets to help you accelerate. They can't do it on their own, but they can help accelerate what it is you're trying to do.

>>A really good example of that is that we're trying to shift left, which is probably a bit of a buzz phrase in the testing world at the moment. I could talk about things like Continuous Delivery Director, one of Broadcom's tools, which has many different features, but very simply, on its own it gives us visibility of what the teams are doing. Once we have that visibility, we can talk to the teams about whether they could be doing better component testing, or whether they could be using some virtualized services here or there. That's not even the main purpose of Continuous Delivery Director, but it shows how tools can give greater visibility, so we can have much more intuitive and insightful conversations with other teams and reduce those organizational silos.

>>Thanks, Glenn. So to sum it up: autonomy, collaboration, and tools that facilitate both. Let's talk now about metrics from your perspectives. What are the metrics that matter? Jeff?

>>I'm going to go right back to what Glenn said about data that provides visibility and enables us to make decisions with shared purpose. Business value has to be one of the first things we look at: how do we assess whether we have built something that is valuable? That could be sales revenue, it could be Net Promoter Score; if you're not selling what you've built, it could even be the level of reuse within your organization, with other teams picking up the services you've created. One of the things I've begun to see organizations do is align value streams with customer journeys, and then align teams with those value streams. That's one of the ways you get to a shared purpose, because we're all trying to deliver the value around that customer journey.

>>And we're all measured on that. Then there are flow metrics, which are really important: how long does it take us to get a new feature out, from the time we conceive it to the time we can run our first experiments with it? And there are quality metrics, some of the classics, things like defect density or mean time to respond.
One of my favorites came from a company called Ultimate Software, where they looked at the ratio of defects found in production to defects found in pre-production, and their developers were in fact measured on that ratio. It told them that quality is your job too, not just the test department's. The fourth level that I think is really important, in the current situation we're in, is the level of engagement in your development organization.

>>We used to joke that we measured this with the parking lot metric: how full was the parking lot at nine, and how full was it at five o'clock? I can't do that anymore since we're not physically co-located, but what you can do is look at how folks are delivering. You can look at the metrics in your SCM environment. You can look at the relative rates of churn. You can look at whether developers are delivering during longer periods, earlier in the morning, later in the evening, or on the weekends as well. Are those signs that we might be heading toward burnout, because folks are still running at sprint levels instead of marathon levels? So all of those in combination, business value, flow, engagement and quality, form the backbone of any sort of metrics program.

>>The second thing I think you need to look at is what we are going to do with the data, and the philosophy behind the data is critical. Unfortunately, I see organizations that weaponize the data, and that's completely the wrong way to look at it. What you need to ask is: how is this data helping us to identify the blockers, the things that aren't allowing us to provide the right context for people to do the right thing? And then what do we do to remove those blockers, to make sure we're giving these autonomous teams the context they need to do their job in a way that creates the most value for the customers?
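The metrics Jeff walks through above, flow time from idea to first experiment, the ratio of production to pre-production defects, and engagement signals pulled from the SCM, can be grounded with a few simple calculations once the underlying events carry timestamps. The sketch below is a minimal illustration under stated assumptions: the record layouts and field names are hypothetical rather than any particular tool's schema, and the off-hours share is a warning signal to discuss with a team, not a target to manage people against.

```python
# Minimal sketch of the metrics discussed above. Field names and the
# in-memory records are illustrative assumptions, not any vendor's schema.
from datetime import datetime, timedelta

def flow_time_days(conceived_at, first_experiment_at):
    """Flow metric: elapsed days from idea to first experiment in production."""
    return (first_experiment_at - conceived_at) / timedelta(days=1)

def defect_escape_ratio(defects):
    """Quality metric: defects found in production vs. pre-production."""
    prod = sum(1 for d in defects if d["found_in"] == "production")
    pre = sum(1 for d in defects if d["found_in"] == "pre-production")
    return prod / pre if pre else float("inf")

def off_hours_share(commits, start_hour=9, end_hour=18):
    """Engagement signal: share of commits landing outside normal hours
    or on weekends -- a possible early warning of burnout, not a target."""
    def off_hours(ts):
        return ts.weekday() >= 5 or not (start_hour <= ts.hour < end_hour)
    flagged = sum(1 for c in commits if off_hours(c["committed_at"]))
    return flagged / len(commits) if commits else 0.0

# Example usage with made-up data
defects = [{"found_in": "pre-production"}] * 8 + [{"found_in": "production"}] * 2
commits = [{"committed_at": datetime(2020, 11, 14, 22, 30)},   # Saturday night
           {"committed_at": datetime(2020, 11, 16, 10, 15)}]   # Monday morning
print(flow_time_days(datetime(2020, 10, 1), datetime(2020, 10, 19)))  # 18.0
print(defect_escape_ratio(defects))                                    # 0.25
print(off_hours_share(commits))                                        # 0.5
```

In practice these values would be pulled from the issue tracker and SCM APIs and trended over time, rather than read as point-in-time numbers.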
>>Great advice. Glenn, over to you: what are the metrics that matter to you, that really make a big impact? And also, how do you measure quality, following on from the advice that Jeff provided?

>>That's some great advice, actually. He talks about value, he talks about flow, and both of those things are very much on my mind at the moment. I listened to a speaker called Mik Kersten a couple of months ago who talked about how important flow management is, and about using it to remove waste, to understand, in terms of making software changes, what is causing us to take longer than we need to. Where are the areas where it takes too long? I think that's very important for us. It's even more basic than that at the moment: we're on a journey from waterfall to agile, and the problem with moving from waterfall to agile is that with waterfall the business had a kind of comfort that everything was tested together and therefore it was safer.

With agile, the question is how we make sure that, if we're doing things quickly and getting stuff out the door, we give that confidence that it's ready to go, or, if there's a risk, that we're able to truly articulate what that risk is. So there's a piece about release confidence, some of the metrics around that, and how healthy those releases are. And there's asking: we spend a lot of money and investment setting up our teams and training our teams, so are we actually seeing them deliver more quickly, and are we actually seeing them deliver more value more quickly? Those are the two main things for me at the moment. But it's also about generally bringing it all together, the DevOps, the ValueOps, the AIOps: how do we bring that together so we can make quick decisions and make sure we're delivering the biggest bang for our buck?

>>Absolutely, biggest bang for the buck. Serge, your thoughts?

>>I think we all agree, right? It starts with business metrics and flow metrics; these are the most important metrics. And ultimately, one of the things that's very common across highly functional teams is engagement. When you see a team that's highly functioning, that's agile, that practices DevOps every day, they are highly engaged; that's definitely true. Now, back to Jeff's point on the weaponization of metrics, one of the key challenges we see is that organizations have traditionally set up benchmarks: what is a good cycle time, what is a good lead time, what is a good mean time to repair? The problem is that this is very contextual. It's going to vary quite a bit depending on the nature of the application and the system.

So one of the things we really need to evolve as an industry is the understanding that it's not so much about those flow metrics in isolation; it's about how these four metrics ultimately contribute to the business metrics, to the business outcome. That's one thing. The second aspect that's often misunderstood is that when you have what you perceive as a bad cycle time or bad quality, the question is whether you actually go and explore why. What is the root cause? One of the key challenges is that we tend to spend a lot of time on the metrics and not on the anti-patterns, which are pretty common across the industry. If you look at something like lead time, for instance, it's very common that organizational boundaries are a key contributor to bad lead time.

So beyond the metrics, I think there's a lot of work we need to do in terms of classifying these anti-patterns. And back to you, Jeff: I think you're one of the co-authors of "water-scrum-fall" as a key pattern, or anti-pattern, in the industry. Water-scrum-fall is a key one, and you will detect it through defect arrival rates that look like an S-curve. So it's beyond the metrics; it's what you do with those metrics.

>>Right. I'll tell you, Serge, one of the things that's really interesting to me in that space is that those of us who have been in the industry for a long time know the anti-patterns, because we've seen them in our careers, maybe multiple times. And one of the things I think you could see tooling do is provide some notification of anti-patterns based on the telemetry that comes in.
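As a hedged illustration of what such a notification might look like in its simplest form, the sketch below flags the defect-arrival signature Serge mentions: counts that stay low through the iterations and then surge just before the release. The weekly counts, the window and the threshold are illustrative assumptions, and a real implementation would weigh many more signals than this single heuristic.

```python
# Toy anti-pattern notification: flag a late surge in defect arrivals
# (a possible "water-scrum-fall" signature where testing bunches at the end).
def flag_late_defect_surge(weekly_defects, late_window=3, surge_factor=2.0):
    """Return True when the average defect arrival rate over the last
    `late_window` weeks exceeds `surge_factor` times the earlier average."""
    if len(weekly_defects) <= late_window:
        return False
    early = weekly_defects[:-late_window]
    late = weekly_defects[-late_window:]
    early_avg = sum(early) / len(early)
    late_avg = sum(late) / len(late)
    if early_avg == 0:
        return late_avg > 0
    return late_avg / early_avg > surge_factor

# Example: quiet iterations, then a pile-up in the hardening phase.
print(flag_late_defect_surge([2, 3, 2, 4, 3, 9, 14, 17]))  # True
```

A rule this crude is only a conversation starter; the machine learning Jeff goes on to describe would be about learning such signatures from telemetry rather than hard-coding them.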
I think it would be a really interesting place to apply machine learning and reinforcement learning techniques, so hopefully that's something we'll see in the future with DevOps tools. Because as a manager who may only be a 10-year or a 15-year veteran, you may be seeing these anti-patterns for the first time, and it would sure be nice to know what to do when they start to pop up.

>>That's right, insight is always helpful. All right, guys, I'd like to get your final thoughts on this: the one thing you believe our audience really needs to be on the lookout for and to put on our agendas for the next 12 months. Jeff, we'll go back to you.

>>I would say look for the opportunities that this disruption presents, and there are a couple that I see. First of all, as we shift to remote-centric working, we're unlocking new pools of talent and it's possible to implement more geographic diversity, so look to that as part of your strategy. Number two, look for new types of tools; we've seen a lot of interest in and usage of low-code tools to very quickly develop applications, and that's potentially part of a mainstream strategy as we go into 2021. Finally, make sure you embrace the idea that you are supporting creative workers, and that agile and DevOps are the peanut butter and chocolate to support creative workers with algorithmic capabilities.

>>Peanut butter and chocolate. Glenn, where do we go from there? What's the one silver bullet you think folks should be on the lookout for now?

>>I certainly agree that low code is one for next year; we'll see much more low code. We'd already started moving towards more of a SaaS-based world, but low code also. As well, for me, we've still got one foot in the old world, and we'll be fully exploring what cloud means for us going into next year and exploiting its capabilities. But the last thing for me is how you really instill quality throughout the life cycle. When I heard the phrase water-scrum-fall it made me shudder, because I know that's a problem; that's where we're at with some of our things at the moment, and we need to get beyond that. We need to be releasing changes more frequently into production, being a bit more brave, and having the confidence to do more testing in production and go straight to production itself. So expect to see much more of that next year. Thank you. I haven't got any food analogies, unfortunately.

>>We all need some peanut butter and chocolate. All right, Serge, take us home. What's the nugget you think everyone needs to have on their agenda?

>>That's interesting. A couple of days ago we got the latest State of DevOps report, and if you read through it, it's all about velocity; we still perceive DevOps as being all about speed. So to me, the key advice is that in order to create that spiritual co-location, in order to foster engagement, we have to go back to what it is we're trying to do collectively. We have to tie everything back to the business outcome. For me, it's absolutely imperative for organizations to start to map their value streams, to understand how they're delivering value, and to align everything they do, from metrics to delivery to flow, to those outcomes.
And only with that, I think, are we going to be able to really start to align all of these roles across the organization and drive not just speed, but business outcomes.

>>All about business outcomes. I think the three of you could write a book together, so I'll give you that as food for thought. Thank you all so much for joining me today. This was an incredibly valuable, fruitful conversation, and we appreciate all of you taking the time to spiritually co-locate with us today. Thank you.

>>Thank you, Lisa.

>>For Jeffrey Hammond, Serge Lucio and Glenn Martin, I'm Lisa Martin. Thank you for watching the Broadcom DevOps Virtual Forum.

Published Date : Nov 18 2020



Kiran Narsu, Alation & William Murphy, BigID | CUBE Conversation, May 2020


 

>>From theCUBE studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE Conversation.

>>Welcome to theCUBE studio. I'm John Furrier, here in Palo Alto with our remote coverage of the tech industry. We're with our quarantine crew, getting all the stories in the technology industry from the thought leaders and the newsmakers. We've got a great story here about data, data compliance, and really about the platforms around how enterprises are using data. I've got two great guests and some news to announce: Kiran Narsu, vice president of business development at Alation, and William Murphy, vice president of technology alliances at BigID. There's some interesting news, an integration partnership between the two companies, which is really compelling, especially now as people have to look at cloud scale, what's happening in our world, certainly the new realities of COVID-19, and, going forward, the role of data, new kinds of applications, and the speed and agility that are going to require more and more automation and more rigor around making sure things are in place. So guys, thanks for coming on, I appreciate it. Kiran, William, thanks for joining me.

>>Thank you.

>>Thank you.

>>So let's take a step back. Alation, you guys have been on theCUBE many times and we've been following you; you've been a leader in the enterprise catalog, a new approach, a real new technology approach, methodology, and team approach to building out data catalogs. So talk about the alliance here. What's the news? Why are you two creating this integration partnership?

>>Let me start, and thank you for having us today. As you know, Alation launched the data catalog category seven years ago, and even today we're acknowledged as the leader in that space. We really began with the core belief that ultimately data management will be driven more and more by business demand and less by information suppliers. Another way to think about that is that how people behave with data will drive how companies manage data. So our philosophy, put very simply, is to start with people and not with data, and our customers really seem to agree with this approach: we've got close to 200 brands using our tool every single day to drive vibrant data communities and foster a real data culture in their environments. One of the things that was really exciting to us is the interest in data privacy from large corporate customers trying to get their arms around this, and we strive to improve our ability to be used inside these enterprises across more use cases. So the partnership we're announcing with BigID today: BigID is the leading modern data intelligence platform for privacy, and what we're trying to do is bring a level of integration between our two technologies so that enterprises can better manage and scale their data privacy compliance capability.

>>William, talk about BigID. What are you guys doing? You also have a data intelligence platform. We've been covering GDPR for a very long time; I once called it something I won't repeat because it wasn't very complimentary, but the reality has set in, and users now understand more than ever that privacy is super important and companies have to deal with it. You guys have a solution; take a minute to explain BigID and what you're doing.

>>Absolutely. Our founders, Dimitri Sirota and Nimrod Vax, founded BigID in 2016, the same year
GDPR was authored, and the big reason is that data had changed: how companies and enterprises handled data was changing pretty much forever, and that profound change meant the status quo could no longer exist, so privacy was going to have to become a day-to-day reality for these enterprises. What BigID realized is that to do anything with privacy, you actually have to understand where your data is, what it is, and whose it is. That's really the genesis of what Dimitri and Nimrod created, which is a privacy-centric data discovery and intelligence platform that allows our enterprise customers, and we have over 70 customers in the enterprise space, many within the Fortune 100, to find, classify, and correlate sensitive data, as they define it, across data sources, whether on-prem or in the cloud. This gives our users an unprecedented ability to look into their data and get better visibility, which both allows for collaboration and allows for real-time decision-making in a big way, with better accuracy and confidence that regulations are not being broken and that customers' data is being treated appropriately.

>>I'm just reading here from the release, and I want to get your thoughts and unpack some of the concepts. The headline is "Alation strengthens privacy capabilities with BigID partnership, empowering organizations to mitigate risk, delivering privacy-aware data use and improved adherence to data privacy regulations." It's a mouthful, but the bottom line is that there's a lot of complexity around these rules and these platforms. What's interesting is that you mentioned discovery; enterprise discovery has always been a complex nightmare. What's interesting about this partnership, from my standpoint, is that you're bringing an interface to a complex platform and creating an easy abstraction to make it usable. At the end of the day, we're seeing the trends with Amazon, which has Kendra, announced and shipping soon; fast speed to insight has to be there. So unifying data interfaces with the back end really seems to be the pattern. Is that the magic going on here? Can you explain what's going on with this integration and what the outcome is going to be for customers?

>>I'll kick off, and Will, please chime in. There are really three overarching challenges that enterprises are facing as they grapple with these regulations, as Will talked about. Number one, it's really hard to both identify and classify private data; it's not as easy as it might sound, and we can talk more about that. Second, it's very difficult to flag, at the point of analysis, when somebody wants to find information, the relevant policies that might apply to the data they're looking to run an analysis on. And lastly, enterprises are constantly in motion: as they change, buy new businesses, enter new markets, and launch new products, these policies have to keep up with that change. These are real challenges to address, and with BigID and Alation we're trying to accelerate that compliance, reduce the cost and complexity of compliance with the combination of our tools, and fundamentally keep up, through a single interface, so that users know what to do with data at the point of consumption. I think that's the way to
think about it. Will, I don't know if you want to add something to that?

>>Absolutely. Kiran and I have been working on this for many months at this point. Most companies don't have a business plan of storing as much data as possible without getting anything out of it, but in order to get something out of it, the ability to find that data rapidly and then analyze it, so that decision makers can make up-to-date decisions, is pretty vital. A lot of these things, when they have to be done manually, take a long time, and there are huge business issues there. So the ability to automate data discovery and then cataloging, across Alation and BigID, gives those decision makers, whether the data steward, the data analyst, or the chief data officer, the ability to dive deeper than they could previously, with better speed.

>>One of the things we've been talking about for a long time with big data is these data lakes, and they're fairly easy to pull together: you can put a bunch of data into a corpus and act on it. But as you start to work across silos, there's a need to get a process down around managing not only the data wrangling but the policies behind it, and the platforms are becoming more complex. Can you talk about the product-market fit here? There's SaaS involved, and there's also customer activity. What's the product-market fit you see with this integration, and what are some of the things you envision emerging out of this value proposition?

>>I can start. You're exactly right: enterprises have made huge investments, historically, in data warehouses, data marts, data lakes, and all kinds of other technology infrastructure aimed at making the data easier to get to, but they've effectively just layered onto the problem. Alation's catalog has made it much more effective for organizations to find, understand, trust, and reuse that data, so that stewards and people who know about the data can inform users who need to run a particular report or conduct a specific analysis, accelerating that process and compressing the time to insight far more than is possible otherwise. And if you overlay the data privacy challenge onto that, it's compounded. Will, it would be great for you to comment on what the data discovery capabilities of BigID do to improve that even further.

>>Absolutely. As two companies we're trying to bridge the gap between data governance and privacy, and John, as you mentioned, there's been a proliferation of tools, whether data lakes, data analysis tools, and so on. What BigID is able to do is look across over 70 different types of data platforms, whether legacy systems like SharePoint and SQL, whether on-prem or in the cloud, whether it's data at rest or in motion, and we're able to auto-populate our metadata findings into Alation's data catalog. The main purpose is that those data stewards have access to the most authentic, real-time data possible.
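To make the shape of that integration concrete, here is an illustrative sketch of the pattern William describes: scan a source, classify what is found, and push the classifications into the catalog as policy-bearing metadata. The pattern matchers, function names, and the catalog client are hypothetical assumptions for illustration; they are not the BigID or Alation APIs.

```python
# Illustrative only: a toy discovery-to-catalog flow with made-up interfaces.
import re

PATTERNS = {
    "email": re.compile(r"[^@\s]+@[^@\s]+\.[^@\s]+"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def classify_column(sample_values):
    """Return the set of sensitivity labels matched in a column sample."""
    labels = set()
    for value in sample_values:
        for label, pattern in PATTERNS.items():
            if pattern.search(str(value)):
                labels.add(label)
    return labels

def publish_to_catalog(catalog_client, dataset, column, labels):
    """Attach discovered labels to the catalog entry (hypothetical client)."""
    for label in labels:
        catalog_client.add_tag(dataset=dataset, column=column,
                               tag=f"sensitive:{label}", source="discovery-scan")

# Example with made-up sample data and a stubbed catalog client.
class StubCatalog:
    def add_tag(self, **kwargs):
        print("tagging", kwargs)

labels = classify_column(["jane@example.com", "acct-1042"])
publish_to_catalog(StubCatalog(), "crm.customers", "contact", labels)
```

In a real deployment the classification step is where the machine-learning-based discovery does the heavy lifting; the push into the catalog is what puts those findings in front of the steward and the analyst.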
>>So in terms of the customer value they're going to see, with more built-in privacy-aware features, is it speed? The problem is compounded with the data: getting that catalog built and getting insights out of it. For this partnership, is it speed to outcome? What's the outcome you're envisioning for the customer?

>>I think it's a combination. Speed, as you said: an analyst who needs to make a decision about a specific data set, whether they can use it or not, can know at the point of analysis if the data is governed by policies, informed by BigID, so the Alation catalog user can make a much more rapid decision about how to use it. The second piece is the complexity and cost of compliance, which they can really reduce; they can start to winnow down their technology footprint, because with the combination of the ongoing discovery that BigID provides and the enterprise data catalog provided by Alation, we give them the framework to keep up with changes in policies as rules and companies change, so they don't have to keep reinventing the wheel every time. So we think there's a significant speed-to-market advantage as well as an ability to really consolidate the technology footprint.

>>I'll add to that. When Alation helped create this marketplace seven years ago, one of the goals, and I think one BigID is assisting with as well, was the trust and confidence that the users of this software, the data stewards and the analysts, have in the data they're using, and in turn the trust and confidence they build with their end consumers, which is much better knowing that this is both bi-directional and ongoing, continuously.

>>I've always been impressed with Alation's vision, its big vision around the role of the human and data, and I think the world is spinning in that direction; you're starting to see that now. William, I want to get your thoughts on BigID, because one of the challenges out there, from what we're hearing, is that people want to protect sensitive data, obviously, with the hacks and personal information, and there's all kinds of regulation; believe me, state by state, nation by nation, it's crazy complex. At the same time they've got to ensure compliance, and there are tripwires everywhere, so you have this nested, complex web of requirements and some real security concerns. At the same time, you want to make data available for machine learning and things like that. That's how the problem is twisted around: if I'm an enterprise, I'm thinking this is a pain in the butt. So how are you seeing this evolve? What are some of the pain points, and can you share any insight into how people are overcoming them? Because they want to get the data out there and create applications that are modern, robust, and augmented, whether with some form of AI, while protecting the information and staying compliant. It's a huge challenge. Your thoughts?

>>Absolutely. To your point, regulations and compliance measures, both state by state and internationally, are growing. We saw GDPR four years ago, and the proliferation of other regulations, whether in Latin America, in Asia Pacific, or across the United States, potentially even at the federal level in the future, is not making it easier. To add complexity to that, every industry, and many companies individually, have their own policies in the way they describe data: what's sensitive to them, whether it's patent numbers, loyalty card numbers, or any number of different things where
that enterprise says this type of data is particularly sensitive. The way we're trying to address this is by being a force multiplier for the individuals within an organization who are in charge of the stewardship of its data, whether on the privacy side, the security side, or the data and analytics side, and automation is a huge piece of that. BigID has a number of patents in the machine learning area around data discovery and classification, cluster analysis, and finding duplicate data, and when we put that in conjunction with what Alation is doing, it gives the users of the data an unprecedented ability to curate, deduplicate, and secure sensitive data, all through a policy-driven, automated platform. That's really where the magic is: we want to make sure that when humans get involved, their involvement is minimal and it's for remediation; they're the second step, not the first step.

>>Kiran, I'll get your thoughts. I always riff on the idea of DevOps; it's a cloud term, and when you apply it to data you talk about programmability, scale, and automation, but the humans are making the calls. Whether you're a programmer in the DevOps world or a data customer of the catalog in Alation, I'm making decisions for my business, I'm a human, I'm taking action at the point of design or wherever. This is where I think the magic can happen. Your thoughts on how this evolves for that use case? Because what you're doing is augmenting the value for the user by taking advantage of these things. Is that right, or am I in the right area?

>>I think so. One way to think about Alation in that analogy is that we target the consumers of data, the people who need to make decisions every single day on the right set of data, not the information suppliers, and we're here to empower them to do that with data they know has been given the thumbs-up by people who know about it: connecting stewards who know the subject matter at hand with the data the analyst wants to use, at the time of consumption. That powerful connection has been so effective for our customers, enabling them to do analytical work they just couldn't dream of before. The key piece here is that with the combination with BigID, we can now layer in a privacy-aware consumption angle, which means that if you have a question about running some customer propensity model and you don't know whether you can use this data or that data, the BigID data discovery platform informs the Alation catalog of the usage capabilities of that given data set at the moment the analyst wants to conduct his or her analysis, with the appropriate data set as identified and endorsed by the stewards. That point in time is really critical, because that's where we can fundamentally shrink the decision cycle.
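As a toy illustration of the privacy-aware consumption check Kiran describes, the sketch below looks up the policy tags attached to a data set at the moment of analysis and returns a decision for the stated purpose. The tag names, the policy table, and the purposes are illustrative assumptions, not either product's actual policy model.

```python
# Illustrative point-of-consumption policy check with made-up tags and rules.
POLICY_RULES = {
    "pii:email": {"marketing_model": "needs_consent", "fraud_review": "allowed"},
    "pii:ssn":   {"marketing_model": "blocked",       "fraud_review": "allowed"},
    "public":    {"marketing_model": "allowed",       "fraud_review": "allowed"},
}

def usage_decision(dataset_tags, purpose):
    """Return the most restrictive decision across the data set's tags."""
    order = {"allowed": 0, "needs_consent": 1, "blocked": 2}
    decision = "allowed"
    for tag in dataset_tags:
        verdict = POLICY_RULES.get(tag, {}).get(purpose, "blocked")
        if order[verdict] > order[decision]:
            decision = verdict
    return decision

# An analyst asks whether a customer table can feed a propensity model.
print(usage_decision({"pii:email", "public"}, "marketing_model"))  # needs_consent
print(usage_decision({"pii:ssn"}, "fraud_review"))                 # allowed
```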
>>It's interesting: the point of attack is on the user, in this case the person in the business who's doing real work; that's where the action is. It's a whole other meaning of actionable data. So this seems to be where the value is. It's agility, really; that's kind of what we're talking about here, isn't it?

>>It is very agile. And on the differentiation between Alation and BigID in what we're bringing to the market now, we're also bringing flexibility. You mentioned agility: it's because we allow our customers to say what their policies are and what their sensitive data is, to define that themselves within our platforms, and then to go out and find that data, classify it, catalog it, and so on. That's the extra flexibility enterprises today need so they can make business decisions faster and actually operationalize data.

>>Guys, great job, good news. I think this is an interesting canary in the coal mine for the trends going on around how data is evolving. What's next? How are you going to go to market? The partnership obviously makes a lot of sense: technical integration, business model integration, a good fit. What's next for you guys?

>>I think the great thing is that, from the CEOs down, our organizations are very much aligned in terms of how we want to integrate our two solutions and how we want to go to market. So Will and I have been really focused on making sure that the various constituents within both of our companies have the level of education and knowledge to bring these results to bear, coupled with the integration of our two technologies. Will, your thoughts?

>>Absolutely. Between our CEOs, who have a good cadence, and Kiran and myself, who probably spend too much time on the phone at this point (we might have to get him a guest bedroom or something), alignment is a huge key here: ensuring that we've enabled our field teams to evangelize this out to the marketplace, and then, whether it's this conversation or our webinars or however we're getting the news out, making sure the market knows these capabilities are out there. Because the biggest obstacle to adoption, honestly, isn't other solutions or build-it-yourself; it's lack of knowledge that it could be easier, that it could be done better, that you could know your data better and catalog it better.

>>Great. Final question to end the segment: a message to the potential customer out there. What about their environment might make them a great prospect for this solution? Is it a known problem or a blind spot? When would someone know to call you up and leverage this partnership? Is it too much data, or too many applications across geographies? I'm just trying to understand, for the folks watching, when it's an opportunity to call you.

>>From an Alation perspective, there can never be too much data. A signal that may indicate an interest or a potential fit for us would be the need to be compliant with one or more data privacy regulations, and as Will said, these are coming up left and right: individual states, in addition to countries, are rolling out data privacy regulations that require a whole set of capabilities to be in place and a very rigorous framework of compliance. Those requirements, and the need to make decisions every single day, all day long, about what data to use, when, and under what conditions, are a perfect set of conditions for the use of a data catalog like Alation coupled with a data discovery and data privacy solution like BigID.

>>Absolutely. If you're an organization out there and you have a lot of customers, a lot of employees, and a lot of different data sources in disparate locations, whether on-prem or in the cloud, these are solid indications that you
should look at purchasing best-of-breed solutions like Alation and BigID, as opposed to trying to build something internally.

>>Guys, congratulations: Alation strengthening its privacy capabilities with the BigID partnership. Congratulations on the news, and we'll be tracking it. Thanks for coming on, I appreciate it.

>>Thank you.

>>Okay, CUBE coverage here in Palo Alto, with remote interviews as we get through this COVID crisis. We have our quarantine crew here in Palo Alto. I'm John Furrier, thanks for watching.

Published Date : May 13 2020



Daniel Sultana & Cameron Edwards, TechnologyOne | PagerDuty Summit 2019


 

>>From San Francisco, it's theCUBE, covering PagerDuty Summit 2019. Brought to you by PagerDuty.

>>Hey, welcome back everybody, Jeff here with theCUBE. We're at PagerDuty Summit; it's the fourth year of the summit, and the third year theCUBE has been here. We're at the Westin St. Francis in downtown San Francisco, and PagerDuty Summit has taken over the place. We're excited to be joined by our next two guests, who have come all the way across the Pacific Ocean. Immediately to my left is Daniel Sultana, group director for SaaS at TechnologyOne. Daniel, great to see you.

>>Thank you.

>>And on his left, Cameron Edwards, production engineer, also with TechnologyOne. Welcome. So, first question: first time in the States?

>>Not the first time. I've been to the States many times, so it's great to come back, and to California in particular.

>>It is the first time for me, but it's been absolutely great. I got the whole weekend to explore San Francisco.

>>Good, it's a great place to look around. But let's talk about PagerDuty. First time at a PagerDuty Summit? It's a company of about 1,000 people now, they IPO'd this year, and there's a lot of buzz around here.

>>Really exciting, and great for PagerDuty. It's a very similar company to TechnologyOne in terms of genetic heritage, so there's a lot of affiliation between our two companies.

>>All right, let's jump in. What is TechnologyOne?

>>TechnologyOne is Australia's largest enterprise software company. We produce software in a few vertical markets, focusing on higher education, local and federal government, asset-intensive industries, and health.

>>All right, so you guys are presenting later today on a really interesting topic that was referenced in the keynote. Your session is about increasing customer experience without burning out your people; I think the official report was "Unplanned Work: The Human Impact of an Always-On World." This is a real issue for the people carrying the pager, because the pager is going to ring at some point. Do you see a big impact in terms of the pressure on the teams to deliver, with this kind of consumerization of IT expectations?

>>Exactly. If you look at the enterprise world, end users are expecting a consumer-grade response. If your Netflix goes down at home tonight, you want it fixed immediately. It's the same pressure that we're now seeing transfer to the enterprise, and it's complicated.

>>For me, being on call myself, implementing these kinds of systems really helps us understand and reduce the amount of time we're spending on those incidents after hours.

>>Right. We talk a lot about unplanned downtime and maintenance here, on machines, and it's hugely impactful, and there are a lot of conversations about prescriptive maintenance and getting ahead of it. We don't hear that conversation so much about people, about the humans involved, so it's a really interesting take. And as we go forward, the complexity of the systems, with the APIs and everything connected, is astronomically higher than it was.

>>It definitely is. We used to have very simple, traditional services, but now it's hundreds of different services and applications that all talk together. Managing that is a very different game than it used to be.

>>Right. So how does PagerDuty help you? How did you start to build in AI and machine learning to be able to triage and, more importantly, assign the right tasks to the right people?

>>I think it first started with us having many discrete systems and bringing them together.
It's like having many different nations around the world trying to talk without a common interface; bringing them together was a first step for us.

>>What's next, now that they're pulled together?

>>Still pulling it together, and now actually understanding what we have, turning that into processes that are more efficient, and using the technology to route the various alerts and information, to triage ahead of time before problems actually happen.

>>I think the other thing is that we're moving towards using the data a lot more, to make more valuable, data-driven decisions as opposed to the intuition-based decisions we used to make.

>>Right. Did it replace something you already had, or is it a supplement?

>>Not a replacement. If I go back to TechnologyOne's history, we're 30 years old; we started off before the Internet. As we made the transition from on-premises to a SaaS-based world, we needed tools to help us in this multiple, always-on world.

>>So what are the characteristics of the biggest problems that come up? Is it application interfaces, or the way all of these things tie together? What seems to be the weakest link?

>>I don't think there's any one specific thing. We talk about causes an awful lot, and it's very rarely ever one simple thing that causes a problem; it's normally multiple factors that come into play. Some of that can be human: has the engineer been called three times in the night and come to work with two hours' sleep?

>>Right. And you said you carry a pager; hopefully you don't have it on right now.

>>It is on; it's just switched to silent.

>>Have you seen a reduction in the pressure of the calls, in the quality of what gets through triage and actually makes it through to an engineer?

>>We fix some stuff from bed now. You still wake up,

>>But you're not getting up.

>>We use the PagerDuty mobile app, and we have some automation that we've built into that as well, so we can fix things from bed.

>>To give you an example, we have some issues that used to take us many minutes to resolve, and we've managed to bring that down to three.

>>Is that because of better tasking of the people, or better identification of the problems? What are some of the things that drive that?

>>It's bringing the multiple inputs into a central place, where they're interpreted and then shifted off to the right resources to be fixed, or some automated task is kicked off. That condenses the whole end-to-end process dramatically, so our customers are seeing a much greater mean time between failures, because we can get onto things a lot faster.
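The pattern Daniel describes, many discrete systems feeding one central place, duplicates collapsed, and each incident either routed to the right on-call engineer or handed to an automated task, can be sketched in a few lines. Everything below is illustrative: the service names, symptoms, runbook mapping, and routing rules are assumptions, and this is not the PagerDuty API.

```python
# Toy alert consolidation and triage: group, then automate or page.
from collections import defaultdict

AUTO_REMEDIATION = {"disk_full": "run cleanup job", "stuck_queue": "restart worker"}
ON_CALL = {"payments": "payments-oncall", "portal": "web-oncall"}

def consolidate(raw_alerts):
    """Group raw alerts from different tools into one incident per
    (service, symptom) pair so one fault does not page five people."""
    incidents = defaultdict(list)
    for alert in raw_alerts:
        incidents[(alert["service"], alert["symptom"])].append(alert)
    return incidents

def triage(incidents):
    for (service, symptom), alerts in incidents.items():
        if symptom in AUTO_REMEDIATION:
            action = f"automation: {AUTO_REMEDIATION[symptom]}"
        else:
            action = f"page {ON_CALL.get(service, 'default-oncall')}"
        print(f"{service}/{symptom}: {len(alerts)} alert(s) -> {action}")

raw = [
    {"service": "payments", "symptom": "disk_full", "source": "infra-monitor"},
    {"service": "payments", "symptom": "disk_full", "source": "app-logs"},
    {"service": "portal",   "symptom": "latency",   "source": "synthetics"},
]
triage(consolidate(raw))
```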
Something fundamental was around with number out of Dallas. That was That was really died. Other measure its foreign off. I wonder what a >>charity secrets. But when things were not good, orders of magnitude of work was done. Kind of unscheduled, which is causing this angst. How's that? Kind of? Just >>wear multiple hours every night. I'll be, quite frankly, people was on way. Knew that's how far. >>Right? Right, Right. >>Good. Well, thank you. Thank you for sharing the story. And good luck. Hopefully nobody else resigns and keep a couple a bunch of happy, happy clients opened out and deliver the great customer experience. Absolutely. Alright, >>stand the camera. Jeff, You're watching the cube? Were some it downtown

Published Date : Sep 24 2019


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Andrea Lee | PERSON | 0.99+
Daniel Sultaana | PERSON | 0.99+
Daniel | PERSON | 0.99+
Jeff | PERSON | 0.99+
San Francisco | LOCATION | 0.99+
California | LOCATION | 0.99+
PayPal | ORGANIZATION | 0.99+
two companies | QUANTITY | 0.99+
Dallas | LOCATION | 0.99+
first question | QUANTITY | 0.99+
Pacific Ocean | LOCATION | 0.99+
first time | QUANTITY | 0.99+
First time | QUANTITY | 0.99+
May | DATE | 0.99+
hundreds | QUANTITY | 0.99+
80 eyes | QUANTITY | 0.99+
Jeffrey | PERSON | 0.99+
Tim | PERSON | 0.99+
Simon | PERSON | 0.99+
Pedro | PERSON | 0.99+
Five | QUANTITY | 0.99+
Lee | PERSON | 0.99+
Peter | PERSON | 0.99+
Netflix | ORGANIZATION | 0.99+
1000 people | QUANTITY | 0.98+
This year | DATE | 0.98+
tonight | DATE | 0.98+
three times | QUANTITY | 0.98+
Third year | QUANTITY | 0.98+
Fritz | PERSON | 0.98+
Australia | LOCATION | 0.97+
Alice | PERSON | 0.97+
Vanda | PERSON | 0.97+
today | DATE | 0.97+
one | QUANTITY | 0.96+
first | QUANTITY | 0.96+
three | QUANTITY | 0.96+
fourth year page | QUANTITY | 0.95+
two guests | QUANTITY | 0.95+
one simple thing | QUANTITY | 0.93+
TechnologyOne | ORGANIZATION | 0.93+
30 years old | QUANTITY | 0.9+
Ben | PERSON | 0.9+
Cole | PERSON | 0.85+
Cameron Edwards | PERSON | 0.84+
two hours sleep | QUANTITY | 0.83+
PagerDuty Summit 2019 | EVENT | 0.79+
December 1st time | DATE | 0.76+
Summit 2019 | EVENT | 0.72+
Sultana | ORGANIZATION | 0.71+
services | QUANTITY | 0.7+
Sass | ORGANIZATION | 0.68+
covering | EVENT | 0.66+
pager | EVENT | 0.66+
Western Frances | LOCATION | 0.54+
couple | QUANTITY | 0.48+
Cube | ORGANIZATION | 0.35+

Lance Shaw, Commvault | Commvault GO 2018


 

>> Narrator: Live from Nashville, Tennessee. It's theCUBE covering Commvault GO 2018. Brought to you by Commvault. >> Welcome back to Nashville, Tennessee. You're watching theCUBE at Commvault GO. I'm Stu Miniman with my co-host Keith Townsend. Happy to welcome to the program Lance Shaw, who's the director of Solutions Marketing at Commvault. Thanks so much for joining us. >> Thank you so much, glad to be here. >> All right, so we've been having a great day here. We're talking to some of your partners. Talking to some of your customers. Solutions Marketing, of course, everything's a solution these days. That's what they're looking for. Tell us a little bit about your background and what you do at Commvault. >> Lance: Absolutely, right. So, I came from a product management and product marketing background and one of the things we're really focused on here at this show, of course, is all about customers and what their stories are and frankly, how we can improve our products and our solutions to better meet the needs of the customers, right. That's what ultimately what it all comes down to. And so, that's why we're here, the whole reason for the show. I think what's been interesting so far at this show has been the focus on, not only just cloud utilization, but the fact that customers are having to deal with multiple clouds and the fact that why they have to do that. There's a variety of reasons that drive people to say well you know maybe five years ago, you would have said, "Are you using a cloud?" Yeah, I've got one cloud provider, but now I've got lots. >> Stu: Yeah, and Lance I'd love to hear what you're hearing from customers 'cause one of the things you talk to customers and oh, they have a multi-cloud strategy and when you dig in, first of all, every customer has a totally different environment, >> Lance: Right. >> Stu: and it reminds me, I spent the last two decades trying to help customers get out of their silos, and in some ways I'm a little worried that we've just created a whole bunch of new silos, that just don't happen to live in my data center, and we called it multi-cloud >> Lance: Right. >> Stu: because the strategy is oh, well I did this application for here and then oh, there's this service over here that I needed and then I sissified a bunch of stuff. So, tell me we've got it all figured out. Customers, they have a good strategy, they're really sharp as to where they're going, and the future is bright. >> Lance: Absolutely. Now the reality of that is, (laughs) that in fact, you're absolutely right. Unwittingly or unknowingly we've gotten to a path of history repeating itself where I'm creating new silos of information and data. So, you're absolutely right. Organizations start out with a point solution for a particular application or a particular data set or acquired a company and so brought in this new thing. And pretty soon, I have no idea what I've got in the Singapore office versus the London office versus New York, right, so. And how do I reconcile that and bring it back together? So I've got that same old problem that, if you've been around in the industry for a few years, we saw 10 years ago, 15 years ago, I've got to bring my silos of information together. And so, yeah you're right. It's suddenly a new, same old challenge all over again. Alright, so and that's why it's become a focus area because I suddenly have fragmented, disconnected application and data silos. 
So that's really where Commvault, turns out, can really help because sometimes it's a matter of consolidation. You know what, I need to get down from three locations to two, or four to one, or whatever the case may be, some sort of consolidation. And usually there's some cost savings involved there. And or, it's I got these multiple solutions that are out there and I've got no control and I have no visibility, I know I'm exposed so, I've got a risk factor now that I didn't have before. So when you start to blend all of those together, you're absolutely right, it's the same old story again, right. >> Keith: Industry versus vertical versus use case, you've given us a couple of different ones. Use cases, reducing costs, consolidation, even multi-cloud in itself is a use case. But, if you're an enterprise software company, if you're an enterprise IT company, you're challenged as you talk to different industries about specific solutions. You got to tailor solutions to industries. Talk about some of the industries that Commvault has come to solve specific problems for. >> Lance: Right, well I think there's a lot, to be honest, right, because every company faces those set of challenges. I think where it gets really interesting is in highly regulated industries, right. So, you think about biopharmaceuticals, you think about financial services, or certainly in the government space, in the federal space. And they have a whole set of unique challenges there because you're dealing with top secret clouds and you're dealing with, you know, some special concerns there. I think where it gets of particular interest is when I've got all those fragmented or disconnected silos, is that I need to address my compliant's concerns. I need to understand the data for more than just is it protected and could I recover it in a specific amount of time? I actually need to be able to show that I have it and prove that what I've got and be able to address specific industry regulations that are unique to my particular industry. So, that's where we start to see very specific use cases that kind of get down from the generic or the general, down to the very specific how do I manage this data and how do I understand what I have? And then of course you get into, you know, can you prove what you've got? Can you go out and retrieve it? And there's all sort of, you know, regulations along that that I've got to adhere to. But that can be addressed once I have that full index, an understanding of what my environment's like. Now, I can go out and locate that information, I can retrieve it when I need to, and actually open it up from a persona based access perspective, let specific people in an organization have access just to the limited data sets that they need, alright. So that comes into play a lot, especially, for example, every organization, right, you've got database admins, you've got critical tier zero applications that you need need to manage. It's your CRM system, it's your supply chain management system. If it goes down, you know, people freak out, alright. So, and I want to be able to provide, you know, self-service access to information for those people. So I've got a well-managed understanding of my environment, but then I'm able to dole out access to the individuals that need it when they need it and they don't have to come ask us or ask IT or ask anybody else, you know, for that information. 
>> Stu: Yeah, Lance as we watch the cust-to-cust companies really understand that data is very valuable, we have a transition that's going on. Traditional customer for Commvault, you're talking about things like RPO and RTO and the like. And, you know, you've got the admins of the world trying to figure out how they do their jobs and things like, okay, backup Windows of the past versus recovery and all those moving pieces. As opposed to today, you talk about the value of data, these are board-level discussions. >> Lance: Right. >> Stu: You've got the C-Suite that you're working with. We talked to a few of your teams about, well, you've got the top down and the bottom up. How are you helping them and what conversations do you have with them? >> Lance: They are entirely different conversations, right. IT is serving the business, as we all know, right. You know, maybe a bit cliché, sorry. >> Stu: Hopefully, if they're doing their job right, they're responding to and actually doing what they need to. >> Lance: Why am I here? Oh, that's right! To serve the business. Yeah, let's try that. So, anyways, there's that delivery of data, but you're absolutely right. The utilization of data and how it's consumed and the understanding that I can get from it, that is an entirely different conversation and, you're right, it is. It's a business unit discussion, you know, it's a line of business discussion at the very least, and it's probably a senior executive discussion because with that additional visibility, I'm then able to make much better, at least theoretically, better business decisions and because I've got more information to draw from. So, you're right, in terms of the conversation, we're not talking about strictly data protection. It's like, yes, when your data is understood, here's what else you can do with it. And then you got to tailor that to the specific industry, specific vertical, and a little more specific to that particular conversation. >> Keith: So Lance, give us a feel for that conversation that's happening here at Commvault GO, 2,000 people, over 150 sessions, education focused event, and there's different personas. I'll let the focus on that executive persona a little bit. I got you in front of the SVP of some group, the CDO. What's the Commvault story? Why Commvault over any other data protection company? >> Lance: I like to think of it as the proverbial, killing two birds with one stone, right. So, is my data growing? Oh, yeah, right. You're never going to hear someone say, you know data is shrinking, I have less to worry about. I mean, I've been in the industry a couple years now, give or take, and it's just never going to happen, right. So, you don't have to worry about that. With that in mind, the need to be able to have the visibility is continuing to increase. So, you see the rise of a chief data officer and what are they concerned about? They're concerned about utilizing data in ways that they were previously never able to do. And so, when we have those conversations, it's one of if I'm going to kill two birds with one stone, I'm going to be able to not only protect my data, but I'm going to give you additional visibility that you didn't have before because I'm providing you visibility into all of the secondary data and the application protection and I'm allowing you to be, ultimately, more flexible because now you're able to actually move data where you need it and expand your data center in ways you previously could not. So, I want to move from one cloud to the other. 
No problem, I can do that. I want to finally move, finally get off of tape and consolidate my environments and move either to an on premises environment or to a cloud. Not a problem. I can come back, we see customers that are coming back to on premises from cloud in some cases just for particular use cases. So the conversations that we have with a CEO, will just stick with a CEO as an example, are around better utilization of the data and better risk mitigation around that data, alright. So I've had a number of conversations related to that where we were concerned about not, you know, everybody talks about ransomware, but in general, attacks on the business and it's not if it's when, so how do I make sure that I can keep my business up and running? And so, it's that broader perspective that you have around how I manage data and how I deliver it to the business. That's what they care about, alright. That's crazy you're protected by the way, that's sort of important too. But what I can do with it and how I deliver it to my lines of business, that's where the interest starts to lie in a CEO level conversation. >> Yeah, Lance. One of the things everybody loves coming to a show like this, you get some of those great user stories. This morning, we had the State of Colorado on talking about how they're recovering from ransomware. >> Right, right, right. >> We had American Pacific Mortgage on talking about just the scale. You talked about the growing data and how, you know, using Commvault they're able to manage that much better. Any other specific examples of kind of interesting use cases or good customer stories you might have? >> Yeah, we recently had a very large customer that was looking to consolidate their environment. It was a classic case of I got offices spread around the world and they had a number of different point solutions, right. So, without naming names, I've got different protection solutions for different areas. I've got different administrators. I've got different policies. And, you know, they hit a scenario where they were exposed from a risk perspective that that particular set of data was not covered as they thought it was because they didn't have standardization of policies, standardized policies I should say, around how they manage, access, and the retention of that data. And so that, sometimes there's that forcing event that says we have a problem here, we need to do something about this. Alright so, in their case, they we able to consolidate from multiple solutions down to Commvault where they could have predefined set of policies in place around the data and not only for what they were gathering in. So as they ingested it or moved data under Commvault's management, they were able to automatically assign policies to that, but then in their case, they were also acquiring other companies. So, they were acquiring a rather large European entity, and when they were bringing that organization in, they wanted to make sure that they did so in a way that didn't expose the risk again in the future because if we're going to grow as a business with an acquisition strategy, we've got to be able to make sure that what comes into the organization is consistent. >> So, being partner presence here, Commvault has been pretty direct and forward talking about how you're shifting from a direct sales model and having gone through partners to help provide the solutions to these challenges. 
Talk through, how do you enable partners, or how do you encourage partners, this is a crowded market, there's a lot of investment in the area of data protection, how do you rise to the top of the partner list and for partners putting your solutions in front of their customers? >> Lance: Right, there's two ways we do that, right. So, the first, because you're absolutely right. You know, partners are key to our growth and we can be key to their growth and success. No doubt about it. So, the first thing is give them something that's going to really make them successful. So, instead, if I'm a partner, I want the flexibility to be able to address a wider variety of demands. I want to be able to go in to a potential prospect and say yeah, I can address this, but also I have the software behind the scenes, Commvault, to be able to attack multiple other scenarios for you. Oh and by the way, it's all in one and you've got one solution to be able to address all that. So, one of the key ways that we differentiate, and you're right, in a very crowded market, alright, that says we should really have Commvault in the back of your mind, at the top of your list. If you're going in and seeing scenarios where point solutions simply doesn't do it or paints you into a corner where you're not going to be able to help them grow down the future. The other thing partners obviously want, as every business wants, is repeat business. I want to be able to go back in and expand, I want to build my footprint out, and if I can go in with a partner that enables me to do that, then I've got long term opportunity versus just going in like, hey, I made a quick sale and I'm out and good luck to you, right. >> Stu: Lance, last thing I wanted to ask you. Last year, GDPR was the talk of every single show like this. >> Lance: Yeah, I've seem to have heard about that, yeah. >> Stu: We got a good education. My boss actually read through the entire specs. I read the Cliffnotes version >> Lance: Okay, yeah, me too. >> Stu: and then talked to a lot of smart people about it. California is looking at some new legislation, but what's the latest on that? It seems like, you know, I know some of the lawsuits already happening at some of the biggest companies in Europe, you know, from a technology standpoint, but what are you hearing and how has Commvault helped customers understand kind of today and future legislation? >> Lance: Yeah, I think, you know what's interesting? When we looked at, you know, everybody was kind of marching up to the GDPR date as if it was Y2K all over again. >> Stu: Right. >> Lance: Not that I remember that of course. I'm too young for that. (Keith laughs) You know, it was like May 25th, May 25th, the sky's going to fall, and we all knew that, hey listen, that day is going to come and go and somebody's going to be made an example at some point, right. And sure enough, that's starting to happen. And you know, it's a good thing. It's building the awareness that we tried to educate people, tried to get the word out, you know, it happens longer. Why wait past May 25th? It's still going on, right. So, for a lot of customers that we're talking to, they're looking to, they've had a plan in place and they're moving there gradually, it wasn't right away, but I think sometimes when you see those things in the press about there's actually being a finesse, it's actually real and it brings it to life like, uh we should really do something here, right. So, I think, honestly, that's a process that's going to continue for years. 
You know, I've heard everything from we'll just pay the fine, which is a risky strategy both probably on a personal level as well as professional. (Keith laughs) You wouldn't want to bet your career on that strategy. With the advent of, we also always knew that hey, GDPR is one of these set of regulations. There will be others, there are others. And you have to be able to adhere to those no matter where you live on the Earth. So, you know, long story short, I think it's a continuing evolution. We help customers understand their data. So, you know, through our Commvault activate product, we can do it. Even if you're not using Commvault for backup and recovery, you're actually able to go out and scan your environment and get a better understanding of what personal information you've got under lock and key, what you've got in your environment, and be able to ascertain well okay, where's my risk, where am I exposed? And then I can start to put a plan in place to mitigate that. So, I think it'll be going on for quite some time in terms of especially as new laws like the California law. I always forget the letters and numbers associated with it, but it's same idea around personal privacy. And I think, you know, we've had the Patriot Act for a long time, right, where foreign governments are concerned about data sovereignty and where data lives and that's going to continue to increase, you know, for a variety of reasons. So organizations have to really know where their data is and what's encapsulated within that data and that's where the Commvault data platform, the index, actually shines to uncover that information. >> Stu: Well, Lance Shaw, I really appreciate you sharing with us where your customers are in a lot of these really important issues. For Keith Townsend, I'm Stu Miniman. We'll be back with more coverage here from Commvault Go in Nashville, Tennessee. Thanks for watching theCUBE. (upbeat music)

Published Date : Oct 10 2018

SUMMARY :

Brought to you by Commvault. Welcome back to Nashville, Tennessee. and what you do at Commvault. but the fact that customers are having to deal they're really sharp as to where they're going, I've got to bring my silos of information together. You got to tailor solutions to industries. So, and I want to be able to provide, you know, As opposed to today, you talk about the value of data, Stu: You've got the C-Suite that you're working with. IT is serving the business, as we all know, right. they're responding to and actually doing what they need to. And then you got to tailor that to the specific industry, I got you in front of the SVP of some group, the CDO. With that in mind, the need to be able to have to a show like this, you get some of You talked about the growing data and how, you know, that didn't expose the risk again in the future to help provide the solutions to these challenges. So, one of the key ways that we differentiate, Stu: Lance, last thing I wanted to ask you. I read the Cliffnotes version Stu: and then talked to a lot of smart people about it. When we looked at, you know, everybody was and that's going to continue to increase, Stu: Well, Lance Shaw, I really appreciate you sharing

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Lance | PERSON | 0.99+
Keith | PERSON | 0.99+
Keith Townsend | PERSON | 0.99+
Lance Shaw | PERSON | 0.99+
Europe | LOCATION | 0.99+
Patriot Act | TITLE | 0.99+
May 25th | DATE | 0.99+
Stu Miniman | PERSON | 0.99+
Last year | DATE | 0.99+
Earth | LOCATION | 0.99+
two | QUANTITY | 0.99+
first | QUANTITY | 0.99+
Singapore | LOCATION | 0.99+
one stone | QUANTITY | 0.99+
Stu | PERSON | 0.99+
two birds | QUANTITY | 0.99+
Nashville, Tennessee | LOCATION | 0.99+
Commvault | ORGANIZATION | 0.99+
New York | LOCATION | 0.99+
two ways | QUANTITY | 0.99+
one | QUANTITY | 0.99+
2,000 people | QUANTITY | 0.99+
both | QUANTITY | 0.98+
London | LOCATION | 0.98+
over 150 sessions | QUANTITY | 0.98+
10 years ago | DATE | 0.98+
15 years ago | DATE | 0.98+
GDPR | TITLE | 0.98+
five years ago | DATE | 0.98+
four | QUANTITY | 0.97+
one solution | QUANTITY | 0.97+
today | DATE | 0.97+
California | ORGANIZATION | 0.95+
This morning | DATE | 0.95+
One | QUANTITY | 0.94+
Commvault Go | ORGANIZATION | 0.94+
Colorado | LOCATION | 0.93+
California | LOCATION | 0.91+
one cloud | QUANTITY | 0.91+
Windows | TITLE | 0.86+
European | LOCATION | 0.83+
every single show | QUANTITY | 0.8+

Ken Xie, Fortinet | Fortinet Accelerate 2018


 

>>Live from Las Vegas. It's theCUBE. Covering Fortinet Accelerate 18. Brought to you by Fortinet. >> Welcome to Fortinet Accelerate 2018. I'm Lisa Martin with theCUBE and we're excited to be here doing our second year of coverage of this longstanding event. My cohost for the day is Peter Burris; excited to be co-hosting with Peter again, and we're very excited to be joined by the CEO, Founder, and Chief Chairman of Fortinet, Ken Xie, Ken welcome back to theCUBE. >> Thank you, Lisa, thank you, Peter. Happy to be here. >> It's great to be here for us as well, and the title of your Keynote was Leading the Change in Security Transformation, but something as a marketer I geeked out on before that, was the tagline of the event, Strength in Numbers. You shared some fantastic numbers that I'm sure you're quite proud of. In 2017, $1.8 in billing, huge growth in customer acquisitions 17.8 thousand new customers acquired in 2017 alone, and you also shared that Forinet protects around 90% of the Global S&P 100. Great brands and logos you shared Apple, Coca Cola, Oracle. Tell us a little bit more and kind of as an extension of your Keynote, this strength in numbers that you must be very proud of. >> Yeah, I'm an engineer background, always liked the number, and not only we become much bigger company, we actually has 25 to 30% global employment in a network security space. That give a huge customer base and last year sales grow 19% and we keeping leading the space with a new product we just announced today. The FortiGate 6000 and also the FortiOS 6.0. So all this changing the landscape and like I said last year we believe the space is in a transition now, they've got a new generation infrastructure security, so we want to lead again. We started the company 18 years ago to get into we called a UTM network firewall space. We feel infrastructure security is very important now. And that we want to lead in the transition and lead in the change. >> So growth was a big theme or is a big theme. Some of the things that we're also interesting is another theme of really this evolution, this landscape I think you and Peter will probably get into more the technology, but give our viewers a little bit of an extension of what you shared in your keynote about the evolution. These three generations of internet and network security. >> Yeah, when I first start my network security career the first company I was study at Stanford University, I was in the 20s. It was very exciting is that a space keeping changing and grow very fast, that makes me keeping have to learning everyday and that I like. And then we start a company call Net Screen when it was early 30s, that's my second company. We call the first generation network security which secured a connection into the trust company environment and the Net Screens a leader, later being sold for $4 billion. Then starting in 2000, we see the space changing. Basically you only secure the connection, no longer enough. Just like a today you only validate yourself go to travel with a ticket no longer enough, they need to see what you carry, what's the what's the luggage has, right. So that's where we call them in application and content security they call the UTM firewall, that's how Fortinet started. That's the second generation starting replacing the first generation. 
But compared to 18 years ago, since change it again and nowadays the data no longer stay inside company, they go to the mobile device, they go to the cloud, they call auditive application go to the IoT is everywhere. So that's where the security also need to be changed and follow the important data secure the whole infrastructure. That's why keeping talking from last year this year is really the infrastructure security that secure fabric the starting get very important and we want to lead in this space again like we did 18 years ago starting Fortinet. >> Ken, I'd like to tie that, what you just talked about, back to this notion of strength in numbers. Clearly the bad guys that would do a company harm are many and varied and sometimes they actually work together. There's danger in numbers Fortinet is trying to pull together utilizing advanced technologies, new ways of using data and AI and pattern recognition and a lot of other things to counter effect that. What does that say about the nature of the relationships that Fortinet is going to have to have with its customers going forward? How is that evolving, the idea of a deeper sharing? What do you think? >> Actually, the good guy also started working together now. We formed the they call it the Cyber Threat Alliance, the CTA, and Fortinet is one of the founding company with the five other company including Palo Alto Network, Check Point and McAfee and also feel a Cisco, there's a few other company all working together now. We also have, we call, the Fabric-Ready Program which has 42 big partners including like IBM, Microsoft, Amazon, Google, all this bigger company because to defend the latest newest Fabric threat you have to be working together and that also protect the whole infrastructure. You also need a few company working together and it's a because on average every big enterprise they deploy 20 to 30 different products from different company. Management cost is number one, the highest cost in the big enterprise security space because you have to learn so many different products from so many different vendor, most of them competitor and now even working together, now communicate together. So that's where we want to change the landscape. We want to provide how infrastructure security can work better and not only partner together but also share the data, share the information, share the intelligence. >> So fundamentally there is the relationship is changing very dramatically as a way of countering the bad actors by having the good actors work more closely together and that drives a degree of collaboration coordination and a new sense of trust. But you also mentioned that the average enterprise is 20 to 30 fraud based security products. Every time you introduce a new product, you introduce some benefits you introduce some costs, potentially some new threat surfaces. How should enterprises think about what is too many, what is not enough when they start thinking about the partnerships that needed put together to sustain that secure profile? >> In order to have the best protection today you need to secure the whole infrastructure, the whole cyberspace. Network security still the biggest and also grow very fast and then there's the endpoint and there's a like a cloud security, there's a whole different application, email, web and all the other cloud all the other IoT. 
You really need to make sure all these different piece working together, communicate together and the best way is really, they have to have a single panel of our management service. They can look at them, they can make it integrate together they can automate together, because today's attack can happen within seconds when they get in the company network. It's very difficult for human to react on that. That's where how to integrate, how to automate, this different piece, that is so important. That's where the Fabric approach, the infrastructure approach get very important. Otherwise, you cannot react quick enough, in fact, to defend yourself in a current environment. On the other side for your question, how many vendor do you have, I feel the less the better. At least they have to work together. If they're not working together, will make it even more difficult to defend because each part they not communicate and not react and not automate will make the job very, very difficult and that's where all this working together and the less vendor they can all responsible for all your security it's better. So that's where we see some consolidation in the space. They do still have a lot of new company come up, like you mentioned, there's close to 2,000 separate security company. A lot of them try to address the point solution. I mentioned there's a four different level engineer after engineer work there because I see 90% company they do the detection. There's a certain application you can detect the intrusion and then the next level is where they after you attack what are going to do about it. Is it really the prevention setting kick in automatic pull out the bad actor. After that, then you need to go to the integration because there's so many different products, so many different piece you need to working together, that's the integration. Eventually the performance and cost. Because security on average still cost 100 times more expensive under same traffic and also much slower compared to the routing switch in networking device. That's what the performance cost. Also starting in the highest level, that's also very difficult to handle. >> So, we're just enough to start with the idea of data integration, secure data integration amongst the security platform, so enough to do as little as possible, as few as possible to do that, but enough to cover all the infrastructure. >> Yes, because the data is all a whole different structure. You no longer does have to trust environment. Because even inside the company, there's so many different way you can access to the outside, whether it by your mobile device so there's a multiple way you can connect on the internet and today in the enterprise 90% connection goes to Wi-Fi now it's not goes to a wired network, that's also difficult to manage. So that's where we will hide it together and make it all working together it's very important. >> So, in the spirit of collaboration, collaborating with vendors. When you're talking with enterprises that have this myriad security solutions in place now, how are they helping to guide and really impact Fortinet's technologies to help them succeed. What's that kind of customer collaboration like, I know you meet with a lot of customers, how are they helping to influence the leading security technologies you deliver? >> We always want to listen the customer. They have the highest priority, they gave us the best feedback. 
Like the presentation they talked about there's a case from Olerica which is where they have a lot of branch office and they want to use in the latest technology and networking technology, SD-WAN. Are working together with security, that's ready the new trend and how to make sure they have all the availability, they have the flexibility software-defined networking there and also make sure to security also there to handle the customer data, that's all very important so that's what we work very closely with customer to response what they need. That's where I'm still very proud to be no longer kind of engineer anymore but will still try to build in an engineer technology company. Listen to the customer react quick because to handle security space, cyber security, internet security, you have to work to quickly react for the change, on internet, on application. So that's where follow the customer and give them the quick best solution it's very very important. >> On the customer side in Anaemia we talked about that was talked a little bit about this morning with GDPR are is around the corner, May 2018. Do you see your work coordinates work with customers in Anaemia as potentially being, kind of, leading-edge to help customers in the Americas and Asia-Pacific be more prepared for different types of compliance regulations? >> We see the GDPR as an additional opportunity, as a additional complement solution compared to all the new product technology would come up. They definitely gave us an additional business rate, additional opportunity, to really help customer protect the data, make the data stay in their own environment and the same time, internet is a very global thing, and how to make sure different country, different region, working together is also very important. I think it's a GDPR is a great opportunity to keeping expanding a security space and make it safer for the consumer for the end-user. >> So Ken as CEO Fortinet or a CEO was tough act, but as CEO you have to be worried about the security of your business and as a security company you're as much attacked, if not more attacked than a lot of other people because getting to your stuff would allow folks to get to a lot of other stuff. How do you regard the Fortinet capabilities inside Fortinet capability as providing you a source of differentiation in the technology industry? >> Yeah we keep security in mind as the highest priority within a company. That's where we develop a lot of product, we also internally use tests first. You can see from endpoint, the network side, the email, to the web, to the Wi-Fi access, to the cloud, to the IoT, it's all developing internally, it tests internally so the infrastructure security actually give you multiple layer protection. No longer just have one single firewall, you pass the fire were all open up. It's really multiple layer, like a rather the ransomware or something they had to pass multiple layer protection in order to really reach the data there. So that's where we see the infrastructure security with all different products and developed together, engineer working together is very important. And we also have were strong engineer and also we call the IT security team lead by Phil Cauld, I think you are being interview him later and he has a great team and a great experience in NSA for about 30 years, secure country. And that's where we leverage the best people, the best technology to provide the best security. Not only the portal side, also our own the internal security in this space. 
>> So, in the last minute or so that we have here, one of the things that Patrice Perce your global sales leader said during his keynote this morning was that security transformation, this is the year for it. So, in a minute or so, kind of what are some of the things besides fueling security transformation for your customers do you see as priorities and an exciting futures this year for Fortinet, including you talked about IoT, that's a $9 billion opportunity. You mentioned the securing the connected car to a very cool car in there, what are some of the things that are exciting to you as the leader of this company in 2018? >> We host some basic technology, not another company has. Like a built in security for a single chip. I also mentioned like some other bigger company, like a Google started building a TPU for the cloud computing and Nvidia the GPU. So we actually saw this vision 18 years ago when we start a company and the combine the best hardware and best technology with solve for all this service together. So, long term you will see the huge benefit and that's also like translate into today you can see all these technology enable us to really provide a better service to the customer to the partner and we all starting benefit for all this investment right now. >> Well Ken, thank you so much for joining us back on theCUBE. It's our pleasure to be here at the 16th year of the event, our second time here. Thanks for sharing your insight and we're looking forward to a great show. >> Thank you, great questions, it's the best platform to really promoting the technology, promoting the infrastructure security, thank you very much. >> Likewise, we like to hear that. For my co-host Peter Burris, I'm Lisa Martin, we are coming to you from Fortinet Accelerate 2018. Thanks for watching, stick around we have great content coming up.

Published Date : Mar 1 2018

SUMMARY :

Brought to you by Fortinet. My cohost for the day is Peter Burris; excited to be co-hosting with Peter again, and we're Happy to be here. It's great to be here for us as well, and the title of your Keynote was Leading the Yeah, I'm an engineer background, always liked the number, and not only we become much give our viewers a little bit of an extension of what you shared in your keynote about the they need to see what you carry, what's the what's the luggage has, right. What does that say about the nature of the relationships that Fortinet is going to have We formed the they call it the Cyber Threat Alliance, the CTA, and Fortinet is one of countering the bad actors by having the good actors work more closely together and that In order to have the best protection today you need to secure the whole infrastructure, amongst the security platform, so enough to do as little as possible, as few as possible Because even inside the company, there's so many different way you can access to the outside, how are they helping to influence the leading security technologies you deliver? They have the highest priority, they gave us the best feedback. On the customer side in Anaemia we talked about that was talked a little bit about this customer protect the data, make the data stay in their own environment and the same time, So Ken as CEO Fortinet or a CEO was tough act, but as CEO you have to be worried about You can see from endpoint, the network side, the email, to the web, to the Wi-Fi access, of the things that are exciting to you as the leader of this company in 2018? customer to the partner and we all starting benefit for all this investment right now. It's our pleasure to be here at the 16th year of the event, our second time here. promoting the infrastructure security, thank you very much. For my co-host Peter Burris, I'm Lisa Martin, we are coming to you from Fortinet Accelerate

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Peter Burris | PERSON | 0.99+
IBM | ORGANIZATION | 0.99+
Amazon | ORGANIZATION | 0.99+
Microsoft | ORGANIZATION | 0.99+
Peter | PERSON | 0.99+
Lisa Martin | PERSON | 0.99+
Cisco | ORGANIZATION | 0.99+
Google | ORGANIZATION | 0.99+
Fortinet | ORGANIZATION | 0.99+
2018 | DATE | 0.99+
Apple | ORGANIZATION | 0.99+
Ken Xie | PERSON | 0.99+
$1.8 | QUANTITY | 0.99+
McAfee | ORGANIZATION | 0.99+
Ken | PERSON | 0.99+
Oracle | ORGANIZATION | 0.99+
20 | QUANTITY | 0.99+
2017 | DATE | 0.99+
Patrice Perce | PERSON | 0.99+
25 | QUANTITY | 0.99+
Net Screen | ORGANIZATION | 0.99+
Phil Cauld | PERSON | 0.99+
May 2018 | DATE | 0.99+
Coca Cola | ORGANIZATION | 0.99+
90% | QUANTITY | 0.99+
$9 billion | QUANTITY | 0.99+
last year | DATE | 0.99+
Americas | LOCATION | 0.99+
Palo Alto Network | ORGANIZATION | 0.99+
100 times | QUANTITY | 0.99+
Lisa | PERSON | 0.99+
Net Screens | ORGANIZATION | 0.99+
$4 billion | QUANTITY | 0.99+
19% | QUANTITY | 0.99+
CTA | ORGANIZATION | 0.99+
2000 | DATE | 0.99+
Nvidia | ORGANIZATION | 0.99+
Check Point | ORGANIZATION | 0.99+
second time | QUANTITY | 0.99+
Las Vegas | LOCATION | 0.99+
GDPR | TITLE | 0.99+
second company | QUANTITY | 0.99+
Forinet | ORGANIZATION | 0.99+
Anaemia | ORGANIZATION | 0.99+
about 30 years | QUANTITY | 0.99+
second year | QUANTITY | 0.99+
18 years ago | DATE | 0.99+
first generation | QUANTITY | 0.99+
second generation | QUANTITY | 0.99+
today | DATE | 0.98+
one | QUANTITY | 0.98+
16th year | QUANTITY | 0.98+
42 big partners | QUANTITY | 0.98+
Stanford University | ORGANIZATION | 0.98+
30% | QUANTITY | 0.98+
each part | QUANTITY | 0.98+
early 30s | DATE | 0.98+
Olerica | ORGANIZATION | 0.98+
this year | DATE | 0.97+
30 different products | QUANTITY | 0.97+
FortiOS 6.0 | COMMERCIAL_ITEM | 0.96+
around 90% | QUANTITY | 0.96+
Cyber Threat Alliance | ORGANIZATION | 0.96+
first | QUANTITY | 0.95+
five other company | QUANTITY | 0.95+
