Manoj Sharma, Google Cloud | VMware Explore 2022


 

>>Welcome back everyone to theCUBE's live coverage here in San Francisco of VMware Explore 2022. I'm John Furrier with Dave Vellante, co-hosts of theCUBE. We're two sets, three days of wall-to-wall coverage, our 12th year covering VMware's annual conference, formerly VMworld, now VMware Explore. We're kicking off day two with Manoj Sharma, director of product management at Google Cloud, GCP. Manoj, thanks for coming on theCUBE. Good to see you. >>Yeah, very nice to see you as well. >>It's been a while. Google Cloud Next is your event. We haven't been there 'cause of the pandemic. Now you got an event coming up in October, you wanna give that plug out there: on October 11th, it's gonna be kind of a hybrid show. You guys with GCP doing great, coming up in the rear with third place: Amazon, Azure, GCP. You guys have really nailed the developer and the AI and the data piece in the cloud. And now with VMware, with multi-cloud, you guys are in the mix in the universal program that they've got here. It's been a partnership. Talk about the Google-VMware relationship real quick. >>Yeah, I wanna first address, you know, us being in third place. I think when customers think about cloud transformation, for them it's all about how you can extract value from the data, you know, how you can transform your business with AI. And as far as that's concerned, we are in first place. Now, coming to the VMware partnership, what we observed was, you know, first of all, there's a lot of data gravity built over the past 20 years in IT, and, you know, VMware has really standardized IT platforms.
And when it comes to the data gravity, what we found was that customers want to extract the value that lives in that data, as I was just talking about, but they find it hard to change architectures and bring those architectures into, you know, the cloud native world, with microservices and so forth. >>Especially when, you know, these applications have been built over the last 20 years with commercial off-the-shelf, you know, systems. You don't even know who wrote the code. You don't know what the IP address configuration is. And if you change anything, it can break your production. But at the same time, they want to take advantage of what the cloud has to offer: you know, the self-service, the elasticity, the economies of scale, the efficiencies of operation. So we wanted to bring the cloud to where the customer is with this service. And, like I said, you know, VMware was the de facto IT platform. So it was a no-brainer for us to say, you know what, we'll give you VMware in a native manner for our customers and bring all the benefits of the cloud into it to help them transform and take advantage of the cloud. >>It's interesting, and you called out the advantages of Google Cloud. One of the things that we've observed is, you know, VMware trying to be much more cloud native in their messaging and their positioning. They're trying to connect into that developer world for cloud native. I mean, Google, you guys have been cloud native literally from day one, just as a company. Yeah. Infrastructure-wise, I mean, DevOps and infrastructure as code were Google's DNA. You had Borg, which became Kubernetes. Everyone kind of knows that history if you're inside the ropes. Yeah.
So as you guys have that core competency of essentially infrastructure as code, which is basically cloud, how are you guys bringing that into the enterprise with VMware? Because that's where the puck is going, right? That's where the use cases are. Okay, you got data, clearly an advantage there. Developers, you guys do really well with developers, we see that at KubeCon and CNCF. Where's the use cases? As the enterprises start to really figure out that this is now happening with hybrid, and they gotta be more cloud native, are they ramping up certain use cases? Can you share and connect the dots between what you guys had as your core competency and where the enterprise use cases are? >>Yeah, yeah. You know, I think transformation means a lot of things. Especially when you get into the cloud, you want to be not only efficient, but you also wanna make sure you're secure, right? And that you can manage and maintain your infrastructure in a way that you can reason about it when, you know, when things go wrong. We took a unique approach with Google Cloud VMware Engine. When we brought it to Google Cloud, what we did was we took a cloud native approach. You know, it would seem odd to say that VMware is cloud native, but in fact that's what we've done with this service from the ground up. One of the things we wanted to do was make sure we meet all the enterprise needs. Availability: we are the only service that gives four nines of SLA in a single site. We are the only service that has fully redundant networking, so that, you know, some of the pets that you run on the VMware platform, with your operational databases and the keys to the kingdom, can be run in an efficient manner, in a stable manner, and, you know, in a highly available fashion. But we also paid attention to performance. One of our customers, Mitel, runs a unified communications service.
And what they found was, you know, the high-performance, low-latency infrastructure actually helps them deliver a highly reliable, you know, communication experience to their customers, right. And so, you know, we developed the service from the ground up, making sure we meet the needs of these enterprise applications, but also wanted to make sure it's positioned for the future. Well integrated into Google Cloud: VPC networking, billing, identities, access control, you know, all of that supported with a one-stop shop, right? And so this completely changes the game for enterprises on the outset. But what's more, we also have built-in integration to cloud operations, you know, a single pane of glass for managing all your cloud infrastructure. You have the ability to easily ELT into BigQuery and, you know, get a data transformation going that way from your operational databases. So I think we took a very, like, clean-room, ground-up approach to make sure we get the best of both worlds to our customers. >>So you essentially made the VMware stack a first-class citizen, connecting to all the Google tools. Did you build a bare metal instance to be able to support that? >>We actually have a very customized infrastructure to make sure that, you know, the experience that customers are looking for in the VMware context is what we can deliver to them. And, like I said, you know, being able to manage the pets in addition to the cattle that we are getting with the modern containerized workloads. >>And it's not likely you did that as a one-off. I would presume that other partners can potentially take advantage of that approach as well. Is that true? >>Absolutely. So one of our other examples is SAP. You know, our SAP infrastructure runs on a very similar kind of, you know, highly redundant infrastructure, some parts of it.
And then, you know, we also have, in the same context, partners such as NetApp. So there's two parts to it, right? One is to meet customers where they already are, but also take them to the future. And partner NetApp has delivered a cloud service that is well integrated into the platform: serves use cases like VDI, serves use cases for, you know, tier-two data protection scenarios, DR, and also the high-performance contexts that customers are looking for. >>Explain to people, because I think a lot of times people say, oh, NetApp, but doesn't Google have storage? Yeah. So explain that relationship and why that is complementary, and not just some kind of divergence from your strategy. >>Yeah, yeah. No, so I think the idea here is the NetApp platform, living on-prem, you know, for so many years, has built a lot of capabilities that customers take advantage of, right. So for example, it has the SnapMirror capabilities that enable, you know, instant DR between locations. And customers, when they think of the cloud, they are also thinking of heterogeneous contexts, where some of the infrastructure still needs to live on-prem. So, you know, they have the DR going on from the on-prem side, using SnapMirror, into Google Cloud. And so, you know, it enables that entry point into the cloud. And so we believe, you know, partnering with NetApp enables this high performance, you know, high reliability, and also enables the customers to meet regulatory needs for, you know, the DR and data protection that they're looking for. >>And NetApp, obviously a big VMware partner as well. So I can take that partnership with VMware and NetApp into the Google cloud. >>Correct. Yeah, it's all about leverage.
Like I said, you know, meeting customers where they already are and ensuring that we smoothen their journey into the future, rather than making it like a single-step, you know, quantum leap between two worlds. I like to say, for the longest time the cloud was being presented as a false choice between, you know, the infrastructure of the past and the infrastructure of the future, like the red pill and the blue pill, right. And, you know, I like to say we've brought into this context the purple pill, right? Which gives you really the best of both worlds. >>Yeah. And this is a tailwind for you guys now, and I wanna get your thoughts on this and your differentiation around multi-cloud, that's around the corner. Yeah. I mean, everyone now recognizes at least multi-cloud's a reality. People have workloads on AWS, Azure and GCP. That is technically multi-cloud. Yeah. Now the notion of spanning applications across clouds is coming. Certainly hybrid cloud is a steady state, which is essentially DevOps on-prem or edge, in the cloud. So you have now the recognition that's here, and you guys are positioned well for this. How is that evolving, and how are you positioning yourselves and differentiating, as clients start thinking, hey, you know what, I can start running things on AWS and GCP, yeah, and on-prem, in a really kind of distributed way, with abstractions and these things that people are talking about: supercloud, what we call it. And this is really the conversation. Okay, what does that next, around-the-corner architecture look like, and how do you guys fit in? Because this is an opportunity for you guys. It's almost like Wayne Gretzky: the puck is coming to you. Yeah, it seems that way to me. How do you respond to that? >>Yeah, no, I think, you know, as Raghu said yesterday, right?
It's all about being cloud smart in this new heterogeneous world. I think Google Cloud has always been the most open and the most customer-oriented cloud. And the reason I say that is because, you know, looking at our Kubernetes platform, right: what we've enabled with Kubernetes and Anthos is the ability for a customer to run containerized infrastructure in the same consistent manner, no matter what the platform. So while, you know, Kubernetes runs on GKE, you can run using Anthos on the VMware platform, and you can run using Anthos on any other cloud on the planet, including AWS and Azure. And so, you know, we've taken an open approach with Kubernetes to begin with, but, you know, the fact is that, with Anthos and this multi-cloud management experience that we can provide customers, we are letting customers get the full freedom and advantage of what multi-cloud has to offer. And I like to say, you know, VMware is the Kubernetes of IaaSes, right? 'Cause if you think about it, it's the only hypervisor that you can run in the same consistent manner, take the same image and run it on any of the providers, right. And you can, you know, link it with the L2 extensions and create a fabric that spans the world and multiple-- >>Products, with almost every company using VMware. >>That's pretty much right. The VMware network of infrastructure is the largest network on the planet, right. And so it's truly about enabling customer choice. We believe that every cloud, you know, brings its advantages, and, you know, at the end of the day, the technology capabilities of the provider, the differentiation of the provider, need to stand on their merit. And so, you know, we truly embrace this notion of multi-cloud. >>Those ops guys have opportunities to connect to you guys in the cloud. >>Yeah.
Absolutely. >>I'd like to ask you a question, sort of about database philosophy, and maybe futures a little bit. There seem to be two camps. I mean, you've got multiple databases, you've got Spanner for, you know, kind of a globally distributed database, you've got BigQuery for analytics. There seems to be a trend in the industry for some providers to say, okay, let's converge the transactions and analytics and kind of maybe eliminate the need to do a lot of ELT-ing, and others are saying, no, no, we want to be, you know, really precise and distinct with our capabilities and have a bespoke set of capabilities: right tool for the right job, let's call it. What's Google's philosophy in that regard, and how do you think about database in the future? >>So I think, you know, when it comes to something as general and as complex as data, right, data lives in all shapes and forms, it moves at various velocities, it moves at various scales. And so, you know, we truly believe that customers should have the flexibility and freedom to put things together using, you know, these various contexts and build the right set of outcomes for themselves. So, you know, we provide Cloud SQL, right, where customers can run their own, you know, dedicated infrastructure, fully managed and operated by Google, at a higher level of SLA compared to any other way of doing it. We have a data warehouse born in the cloud, BigQuery, which enables zero-ops, you know, zero-touch, instant, you know, high-performance analytics at scale. Spanner gives customers high levels of reliability and redundancy in a worldwide context, with extreme levels of innovation coming from, you know, the NTP synchronization that happens across different instances, right?
So, you know, we do think that data moves at different scales and different velocities, and, you know, customers have a complex set of needs. And so our portfolio of database services, put together, can truly address all ends of the spectrum. >>Yeah. And we've certainly been following you guys at CNCF and the work that Google Cloud's doing: extremely strong technical people, yeah, really open source focused, great products, technology. You guys do a great job. And I would imagine, and it's clear, that VMware is an opportunity for you guys, given the DNA of their customer base. The installed base is huge. You guys have that nice potential connection where these customers are kind of going where the puck is going. You guys are there. Now, for the next couple minutes, give a plug for Google Cloud to the VMware customer base out there. Why Google Cloud, why now, what's in it for them? What's the value prop? Give the plug for Google Cloud to the VMware community. >>Absolutely. So I think, you know, especially with VMware Engine, what we've built, you know, is truly a cloud native, next-generation enterprise platform, right. And it does three specific things. It gives you a cloud-optimized experience, right: the idea being, you know, self-service, efficiencies, economies, you know, operational benefits, you get that from the platform. And a customer like Mitel was able to take advantage of that, being able to use the same platform that they were running in their co-located context and migrate more than a thousand VMs in less than 90 days, something that they weren't able to do for over two years. The second aspect of, you know, the transformation journey that we enable with this service is cloud integration. What that means is the same VPC experience that you get in the global networking that Google Cloud has to offer.
The VMware platform is fully integrated into that. And so the benefits of, you know, having a subnet that can live anywhere in the world, having multi-VPC, but more importantly, the benefits of having these Google Cloud services, like BigQuery and Spanner and cloud operations management, at your fingertips in the same layer-3 domain: you know, just make an IP call and your data is transformed into BigQuery from your operational databases. And Carrefour, the retailer in Europe, actually was able to do that with our service. And not only, you know, do the operational transform into BigQuery from the data gravity living on VMware Engine, but they were able to do it in, you know, a cost-effective manner. They saved, you know, over 40% compared to their current context, and also increased the agility of operations at the same time, right. And so for them, this was extremely transformative. And lastly, we believe in the context of being open: we are also a very partner-friendly cloud. And so, you know, customers bring the VMware platform because of all the IT ecosystem that comes along with it, right? You've got your Veeam or your Zerto, or your Rubrik or your Cohesity, for data protection and backup, you've got security from Fortinet, you know, and, like we'd already talked about, NetApp storage. So we, you know, we are open in that technology context, ISVs, you know, fully supported. >>Integration's key. >>Yeah, exactly. And, you know, that's how you build a platform, right? Yeah. And so we enable that, but, you know, we also enable customers going into the future, through the AI capabilities and services that are, once again, available at their fingertips. >>Manoj, thanks for coming on, really appreciate it.
And, you know, as supercloud, as we call it, or multi-cloud, comes around the corner, you've got the edge exploding, and you guys do a great job in networking and security, which is well known. What's your view of this supercloud, multi-cloud world? What's different about it? Why isn't it just SaaS on cloud? What's this next-gen cloud really about? If you had to kind of explain that to business folks and technical folks out there, is it something unique? Do you see a refactoring? Is it something that does something different? Yeah. What doesn't make it just SaaS? >>Yeah, no, I think that, you know, there's different use cases that customers have in mind when they think about multi-cloud. I think the first thing is they don't want to have, you know, all eggs in a single basket, right. And so, you know, it helps diversify their risk. I mean, and it's a real problem. Like, you see outages in, you know, in availability zones that take out entire businesses. So customers do wanna make sure that they're able to increase their availability, increase their resiliency, through the use of multiple providers. But I think, so that's like getting the same thing in different contexts. But at the same time, the context is shifting, right? There are some data sources that originate, you know, elsewhere, and the scale and the velocity of those sources is so vast. You know, you might be producing video from retail stores, and, you know, you wanna make sure there's security and there's, you know, information awareness built about those sources. And so you want to process that data at the source and take instant decisions with that proximity.
And that's why we believe, with GDC, you know, with both the edge versions and the hosted versions (GDC stands for Google Distributed Cloud), we bring the benefit and value of Google Cloud to different locations, on the edge as well as on-prem. And so I think, you know, those kinds of contexts become important. And so I think, you know, not only do we need to be open and pervasive, you know, but we also need to be compatible, and also have the proximity to where information lives and value lives. >>Manoj, thanks for coming on theCUBE here at VMware Explore, formerly VMworld. Thanks for your time. >>Thank you so much. Okay. >>This is theCUBE. I'm John Furrier with Dave Vellante, live day two coverage here in the Moscone West lobby for VMware Explore. We'll be right back with more after the short break.

Published Date: Aug 31, 2022



Matt Provo & Patrick Bergstrom, StormForge | KubeCon + CloudNativeCon Europe 2022


 

>> Announcer: "theCUBE" presents KubeCon and CloudNativeCon Europe 2022, brought to you by Red Hat, the Cloud Native Computing Foundation and its ecosystem partners. >> Welcome to Valencia, Spain, and we're at KubeCon, CloudNativeCon Europe 2022. I'm Keith Townsend, and my co-host, Enrico Signoretti. Enrico's really proud of me. I've called him Enrico instead of Enrique every session. >> Every day. >> Senior IT analyst at GigaOm. We're talking to fantastic builders at KubeCon, CloudNativeCon Europe 2022 about the projects and their efforts. Enrico, up to this point, it's been all about provisioning and security. What conversation have we been missing? >> Well, I mean, I think that we passed the point of having the conversation of deployment, of provisioning. Everybody's very skilled; actually, everything is done at day two. They are discovering that, well, there is a security problem, there is an observability problem, and in fact, we are meeting with a lot of people, and there are a lot of conversations with people really needing to understand what is happening, I mean, in their clusters, why it is happening, and all the questions that come with it. And the more I talk with people on the show floor here, or even in the various sessions, it's about: we are growing, so our clusters are becoming bigger and bigger, and applications are becoming bigger as well. So we need to now understand better what is happening. And it's not only about cost, it's about everything at the end. >> So I think that's a great setup for our guests, Matt Provo, founder and CEO of StormForge, and Patrick Brixton? >> Bergstrom. >> Bergstrom. >> Yeah. >> I spelled it right, I didn't say it right. Bergstrom, CTO. We're at KubeCon, CloudNativeCon, where projects are discussed and built, and StormForge, I've heard the pitch before, so forgive me. And I'm kind of torn. I have service mesh. What do I need more? Like, what problem is StormForge solving? >> You want to take it? >> Sure, absolutely.
So it's interesting, because my background is in the enterprise, right? I was an executive at UnitedHealth Group, and before that I worked at Best Buy, and one of the issues that we always had was, especially as you migrate to the cloud, it seems like the CPU dial or the memory dial is your reliability dial. So it's like, oh, I just turn that all the way to the right and everything's hunky-dory, right? But then we run into the issue, like you and I were just talking about, where it gets very, very expensive very quickly. And so in my first conversations with Matt and the StormForge group, they were telling me about the product and what we're dealing with, and I said, that is the problem statement that I have always struggled with, and I wish this existed 10 years ago when I was dealing with EC2 costs, right? And now with Kubernetes, it's the same thing. It's so easy to provision. So realistically, what it is, is we take your raw telemetry data and we essentially monitor the performance of your application, and then we can tell you, using our machine learning algorithms, the exact configuration that you should be using for your application to achieve the results that you're looking for, without over-provisioning. So we reduce your consumption of CPU and of memory in production, which ultimately, nine times out of 10, actually I would say 10 out of 10, reduces your cost significantly without sacrificing reliability. >> So can your solution also help to optimize the application in the long run? Because, yes, of course-- >> Yep. >> The low-hanging fruit is, you know, optimizing the deployment. >> Yeah. >> But actually, the long term is optimizing the application. >> Yes. >> Which is the real problem. >> Yep. >> So, we're fine with the former of what you just said, but we exist to do the latter. And so we're squarely and completely focused at the application layer. As long as you can track or understand the metrics you care about for your application, we can optimize against it.
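A minimal sketch of the right-sizing idea described here: derive a resource request from observed usage percentiles plus headroom, instead of turning the CPU or memory dial all the way to the right. The 95th-percentile choice and the 15% headroom are illustrative assumptions for this sketch, not StormForge's actual algorithm, which is machine-learning-driven rather than a fixed rule:

```python
# Illustrative only: derive a resource request from observed usage instead of
# over-provisioning. The percentile and headroom values are assumptions.
import math

def percentile(samples, pct):
    """Nearest-rank percentile of a list of numeric samples."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

def recommend_request(usage_samples, pct=95, headroom=0.15):
    """Recommend a request: the pct-th percentile of usage plus headroom."""
    return percentile(usage_samples, pct) * (1 + headroom)

# Observed per-pod CPU usage in cores, versus a hand-set request of 2.0 cores:
cpu_usage = [0.21, 0.25, 0.31, 0.28, 0.44, 0.52, 0.38, 0.47, 0.61, 0.35]
print(f"recommended CPU request: {recommend_request(cpu_usage):.2f} cores")
```

Here the recommendation lands around 0.7 cores, a fraction of the hand-set 2.0 cores, which is the "reduce consumption without sacrificing reliability" effect described above.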
We love that we don't know your application, we don't know what the SLA and SLO requirements are for your app; you do. And so in our world, it's about empowering the developer into the process, not automating them out of it, and I think sometimes AI and machine learning sort of gets a bad rap from that standpoint. And so, at this point, the company's been around since 2016, kind of from the very early days of Kubernetes. We've always been squarely focused on Kubernetes, using our core machine learning engine to optimize metrics at the application layer that people care about and need to go after. And the truth of the matter is, today and over time, setting a cluster up on Kubernetes has largely been solved. And yet with the promise of Kubernetes around portability and flexibility, downstream, when you operationalize, the complexity smacks you in the face, and that's where StormForge comes in. And so we're a kind of vertically oriented solution that's absolutely focused on solving that problem. >> Well, I don't want to play, actually, I want to play the devil's advocate here and-- >> You wouldn't be a good analyst if you didn't. >> So the problem is, when you talk with clients, users, there are many of them still working with Java, something that is really tough. I mean, all of us loved Java. >> Yeah, absolutely. >> Maybe 20 years ago. Yeah, but not anymore. But still, they have developers, they have ported applications, microservices, yes, but not very optimized, et cetera, et cetera. So it's becoming tough. So how can you interact with these kinds of old, hybrid, or anyway not-well-engineered applications? >> Yeah. >> We do that today. Actually, part of our platform is we offer performance testing in a lower environment, in stage, and, like Matt was saying, we can use any metric that you care about and we can work with any configuration for that application.
So a perfect example is Java. You have to worry about your heap size, your garbage collection tuning, and one of the things that really struck me very early on about the StormForge product is, because it is true machine learning, you remove the human bias from that. So, like, a lot of what I did in the past, especially around SRE and performance tuning, we were only as good as our humans were, because of what they knew. And so we kind of got stuck in these paths of making the same configuration adjustments, making the same changes to the application, hoping for different results. But then, when you apply machine learning capability to that, the machine will recommend things you never would've dreamed of. And you get amazing results out of that. >> So both me and Enrico have been doing this for a long time. Like, I have battled to my last breath the argument, when it's bare metal or a VM: look, I cannot give you any more memory. >> Yeah. >> And the argument going all the way up to the CIO, and the CIO basically saying, you know what, Keith, you're cheap, my developer resources are expensive, buy a bigger box. >> Yeah. >> Yep. >> Buying a bigger box in the cloud, to your point, is no longer an option, because it's just expensive. >> Yeah. >> Talk to me about the carrot or the stick, as developers are realizing that they have to be more responsible. Where's the culture change coming from? Is it the shift in responsibility? >> I think the center of the bullseye for us is within those sets of decisions, not in a static way, but in an ongoing way, especially as the development of applications becomes more and more rapid, and the management of them. Our charge and our belief, wholeheartedly, is that you shouldn't have to choose. You should not have to choose between cost or performance. You should not have to choose where your applications live, in a public, private, or hybrid cloud environment.
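The point about machine learning removing human bias can be illustrated with a toy automated search: score candidate configurations, say heap size and GC pause target, against a measured objective and keep the best, instead of iterating on the handful of settings an engineer already believes in. The scoring function below is entirely synthetic, a stand-in for real load-test measurements, and the random search is only a simple proxy for StormForge's actual ML-driven approach:

```python
# Toy automated configuration search, illustrating "remove the human bias":
# score candidate configs against a measured objective and keep the best.
# The latency model below is synthetic; real scores would come from load tests.
import random

def measured_p99_latency_ms(heap_mb, gc_pause_target_ms):
    """Synthetic stand-in for a load-test measurement: small heaps and
    overly aggressive GC pause targets both get penalized."""
    return 50 + 80_000 / heap_mb + 200 / gc_pause_target_ms

def search_best_config(trials=200, seed=7):
    """Randomly sample (heap, GC pause target) pairs and keep the best score."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("inf")
    for _ in range(trials):
        cfg = (rng.choice([512, 1024, 2048, 4096, 8192]),  # heap size, MB
               rng.choice([10, 50, 100, 200]))             # GC pause target, ms
        score = measured_p99_latency_ms(*cfg)
        if score < best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

cfg, p99 = search_best_config()
print(f"best config: heap={cfg[0]}MB, pause target={cfg[1]}ms, p99={p99:.1f}ms")
```

The search is free to try combinations a tuner "stuck in their paths" would never reach, which is the behavior described above, just in miniature.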
And so we want to empower people to be able to sit in the middle of all of that chaos, and for those trade-offs and those difficult interactions to no longer be a thing. We're at a place now where we've done hundreds of deployments, and never once have we met a developer who said, "I'm really excited to get out of bed and come to work every day and manually tune my application." That's one side. Secondly, we've never met a manager or someone with budget that said, please don't increase the value of the investment that I've made to lift and shift us over to the cloud or to Kubernetes or some combination of both. And so what we're seeing is the converging of these groups; their happy place is the lack of needing to make those trade-offs, and that's been exciting for us. >> So, I'm listening, and it looks like your solution is right in the middle of application performance management, observability. >> Yeah. >> And monitoring. >> Yeah. >> So it's a little bit of all of this. >> Yeah, so we want to be the Intel Inside of all of that. We often get lumped into one of those categories. It used to be APM a lot; we sometimes get, are you observability? And we're really not any of those things in and of themselves. Instead we've invested in deep integrations and partnerships with a lot of that tooling, 'cause in a lot of ways the tool chain is hardening in a cloud native and Kubernetes world. And so, integrating in intelligently, staying focused and great at what we solve for, but then seamlessly partnering and not requiring switching for our users who have likely already invested in an APM or observability. >> So to go a little bit deeper, what does integration mean? I mean, do you provide data to these other applications in the environment, or are they supporting you in the work that you do? >> Yeah, we're a data consumer for the most part. In fact, one of our big taglines is take your observability and turn it into actionability, right?
Like, how do you take that? It's one thing to collect all of the data, but then how do you know what to do with it, right? So to Matt's point, we integrate with folks like Datadog, we integrate with Prometheus today. So we want to collect that telemetry data and then do something useful with it for you. >> But we also want Datadog customers, for example. We have a very close partnership with Datadog, so that in your existing Datadog dashboard, now you have-- >> Yeah. >> The StormForge capability showing up in the same location. >> Yep. >> And so you don't have to switch out. >> So I was just going to ask, is it a push or a pull? What is the developer experience? When you say you provide the developer these ML learnings about performance, how do they receive them? Like, what's the developer experience? >> They can receive it... For a while we were CLI-only, like any good developer tool. >> Right. >> And we have our own UI. And so it is a push in a lot of cases, where I can come to one spot, I've got my applications, and every time I'm going to release, or plan for a release, or I have released and I want to pull in observability data from a production standpoint, I can visualize all of that within the StormForge UI and platform and make decisions. We allow you to set the kind of comfort level of automation that you're okay with. You can be completely set-and-forget, or you can be somewhere along that spectrum and you can say, as long as it's within these thresholds, go ahead and release the application or go ahead and apply the configuration. But we also allow you to experience a lot of the same functionality right now in Grafana, in Datadog, and a bunch of others that are coming. >> So I've talked to Tim Crawford, who talks to a lot of CIOs, and he's saying one of the biggest challenges, if not the biggest challenge, CIOs are facing is resource constraints. >> Yeah. >> They cannot find the developers to begin with to get this feedback.
How are you hoping to address this biggest pain point for CIOs-- >> Yeah. >> And developers? >> You should take that one. >> Yeah, absolutely. So, like my background, like I said, at UnitedHealth Group, right, it's not always just about cost savings. In fact, the way that I look at some of these tech challenges, especially when we talk about scalability, there's kind of three pillars that I consider, right? There's the tech scalability: how am I solving those challenges? There's the financial piece, 'cause you can only throw money at a problem for so long. And it's the same thing with the human piece: I can only find so many bodies, and right now that pool is very small. And so we are absolutely squarely in that footprint of, we enable your team to focus on the things that matter, not manual tuning, like Matt said. And then there are other resource constraints that I think a lot of folks don't talk about too. Like, you were talking about private cloud, for instance, and so having a physical data center. I've worked with physical data centers that companies I've worked for have owned where it is literally full, wall to wall. You can't rack any more servers in it, and so their biggest option is, well, I could spend $1.2 billion to build a new one if I wanted to. Or, if you had a capability to truly optimize your compute to what you needed and free up 30% of the capacity of that data center, you could deploy additional namespaces into your cluster. Like, that's a huge opportunity. >> So I have another question. I mean, maybe it doesn't sound very intelligent at this point, but is it an ongoing process, or is it something that you do at the very beginning, I mean, when you start deploying this? >> Yeah. >> And maybe as a service. >> Yep. >> Once a year I say, okay, let's do it again and see if something changed. >> Sure. >> So one spot, one single... >> Yeah, would you recommend somebody performance test just once a year?
Like, so that's my thing: at previous roles, my role was to do performance testing on every single release, and that was at a minimum once a week. And if your thing did not get faster, you had to have an executive exception to get it into production. And that's the space that we want to live in as well, as part of your CI/CD process. Like, this should be continuous verification: every time you deploy, we want to make sure that we're recommending the perfect configuration for your application in the namespace that you're deploying into. >> And I would be as bold as to say that we believe we can be a part of adding, actually adding, a step in the CI/CD process that's connected to optimization, and that no application should be released, monitored, and sort of analyzed on an ongoing basis without optimization being a part of that. And again, not just from a cost perspective, but for cost and performance. >> There are almost a couple hundred vendors on this floor. You mentioned some of the big ones, Datadog, et cetera, but what happens when one of the up-and-comers comes out of nowhere with a completely new data structure, some imaginative way to collect telemetry data? >> Yeah. >> How do you react to that? >> Yeah, to us it's zeros and ones. >> Yeah. >> And we really are data agnostic. From the design of our algorithm standpoint, we're fortunate enough that it doesn't get caught up on data structure issues, as long as you can capture the data and make it available through one of a series of inputs: one would be load or performance tests; it could be telemetry; it could be observability, if we have access to it. Honestly, the messier the better from time to time. From a machine learning standpoint, it's pretty powerful to see. We've never had a deployment where we saved less than 30% while also improving performance by at least 10%. But the typical results for us are 40 to 60% savings and 30 to 40% improvement in performance.
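The release gate described above ("if your thing did not get faster, you had to have an executive exception") can be sketched as a CI/CD step. The 5% tolerance below is a made-up threshold for illustration, not a StormForge default or anyone's real policy:

```python
# Sketch of a CI/CD performance gate: block the release unless the
# candidate build's latency is within tolerance of the baseline.
# The 5% tolerance is an invented example threshold.

def performance_gate(baseline_p95_ms: float, candidate_p95_ms: float,
                     tolerance: float = 0.05) -> bool:
    """Pass if candidate p95 latency is within tolerance of baseline."""
    return candidate_p95_ms <= baseline_p95_ms * (1.0 + tolerance)

print(performance_gate(200.0, 190.0))  # -> True (faster: ship it)
print(performance_gate(200.0, 230.0))  # -> False (15% slower: needs an exception)
```

In a real pipeline this check would run after an automated load test in a staging environment, with the pipeline failing the build when the gate returns False.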
>> And what happens if the application, I mean, yes, Kubernetes is the best thing in the world, but sometimes we have to use external data sources, or we have to connect with external services anyway. >> Yeah. >> So, can you provide an indication also on this particular application, like, where the problem could be? >> Yeah. >> Yeah, and that's absolutely one of the things that we look at too, 'cause especially when you talk about resource consumption, it's never a flat line, right? Like, depending on your application, depending on the workloads that you're running, it varies sometimes from minute to minute, day to day, or it could be week to week even. And so, especially with some of the products that we have coming out, what we want to do is integrate heavily with the HPA and be able to handle some of those bumps, and not necessarily bumps, but bursts, and be able to do it in a way that's intelligent, so that we can make sure that, like I said, it's the perfect configuration for the application regardless of the time of day that you're operating in, or what your traffic patterns look like, or what your disk looks like, right? Like, 'cause with our lower-environment testing, any metric you throw at us, we can optimize for. >> So Matt and Patrick, thank you for stopping by. >> Yeah. >> Yes. >> We could go all day, because day two is, I think, the biggest challenge right now, not just in Kubernetes, but application re-platforming and transformation. Very, very difficult. Most CTOs and EAs that I talked to, this is the challenge space. From Valencia, Spain, I'm Keith Townsend, along with my host Enrico Signoretti, and you're watching "theCube," the leader in high-tech coverage. (whimsical music)
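For reference, the HPA (Horizontal Pod Autoscaler) that the burst-handling discussion builds on uses a scaling rule documented in Kubernetes: desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric). A small sketch of that rule; tuning tools like the one discussed adjust the inputs this formula consumes (requests and targets), not the formula itself:

```python
import math

# The Kubernetes HPA's core scaling rule (from the Kubernetes docs):
#   desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric)

def desired_replicas(current_replicas: int, current_metric: float,
                     target_metric: float) -> int:
    """Replica count the HPA would converge toward for the observed metric."""
    return math.ceil(current_replicas * current_metric / target_metric)

# CPU utilization bursts to 90% against a 60% target: scale 4 -> 6 pods.
print(desired_replicas(4, 90.0, 60.0))  # -> 6
```

This is why rightsizing requests matters for bursts: the utilization ratio the HPA reacts to is measured against the requested resources, so badly tuned requests skew every scaling decision downstream.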

Published Date : May 19 2022



Matt Provo & Patrick Bergstrom, StormForge | Kubecon + Cloudnativecon Europe 2022


 

>> theCube presents KubeCon and CloudNativeCon Europe 2022, brought to you by the Cloud Native Computing Foundation. >> Welcome to Valencia, Spain. We're at KubeCon + CloudNativeCon Europe 2022. I'm Keith Townsend, and my co-host is Enrico Signoretti. Enrico's really proud of me; I've called him Enrico, and said it right, every session. Senior IT analyst at GigaOm, we're talking to fantastic builders at KubeCon + CloudNativeCon about the projects and the efforts. Enrico, up to this point it's been all about provisioning and security. What conversation have we been missing? >> Well, I think that we passed the point of having the conversation of deployment, of provisioning. Everybody's very skilled; actually, everything is done. At day two, they are discovering that, well, there is a security problem, there is an observability problem. And in fact, we are meeting with a lot of people, and there are a lot of conversations with people really needing to understand what is happening in their clusters, why it is happening, and all the questions that come with it. The more I talk with people on the show floor here, or even in the various sessions, it's about, you know, we are growing, our clusters are becoming bigger and bigger, applications are becoming bigger as well. So we need to understand better what is happening. It's not only about cost; it's about everything, in the end. >> So I think that's a great setup for our guests: Matt Provo, founder and CEO of StormForge, and Patrick Bergstrom. Yeah, I spelled it right; I didn't say it right. Bergstrom, CTO. We're at KubeCon + CloudNativeCon, where projects are discussed and built, and StormForge. I've heard the pitch before, so forgive me, and I'm kind of torn. I have service mesh. What do I need more? Like, what problem is StormForge solving?
>> You wanna take it? >> Sure, absolutely. So it's interesting, because my background is in the enterprise, right? I was an executive at UnitedHealth Group; before that I worked at Best Buy. And one of the issues that we always had was, especially as you migrate to the cloud, it seems like the CPU dial or the memory dial is your reliability dial. So it's like, oh, I just turn that all the way to the right and everything's hunky-dory, right? But then we run into the issue, like you and I were just talking about, where it gets very, very expensive very quickly. And so in my first conversations with Matt and the StormForge group, they were telling me about the product and what we're dealing with, and I said, that is the problem statement that I have always struggled with, and I wish this existed 10 years ago when I was dealing with EC2 costs, right? And now with Kubernetes, it's the same thing. It's so easy to provision. So realistically, what it is, is we take your raw telemetry data and we essentially monitor the performance of your application. And then we can tell you, using our machine learning algorithms, the exact configuration that you should be using for your application to achieve the results that you're looking for without over-provisioning. So we reduce your consumption of CPU and memory in production, which ultimately nine times out of 10, actually I would say 10 out of 10, reduces your cost significantly without sacrificing reliability. >> So can your solution also help to optimize the application in the long run? Because, yes, of course, the low-hanging fruit is to optimize the deployment. But actually the long term is optimizing the application, which is the real problem. >> Yep. So we're fine with the former of what you just said, but we exist to do the latter. And so we're squarely and completely focused at the application layer.
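A hypothetical illustration of what such a configuration recommendation amounts to: rightsizing a workload's resource requests against observed usage, with a safety margin. The field names, numbers, and 30% headroom factor are all invented for illustration, not StormForge output:

```python
# Hypothetical rightsizing recommendation: requests were provisioned far
# above observed peak usage; recommend observed peak plus safety headroom.
# All names and numbers are illustrative.

provisioned = {"cpu_millicores": 2000, "memory_mib": 4096}
observed_peak = {"cpu_millicores": 600, "memory_mib": 1100}
headroom = 1.3  # keep a 30% safety margin above observed peak

recommended = {k: int(observed_peak[k] * headroom) for k in observed_peak}
savings = {k: 1 - recommended[k] / provisioned[k] for k in provisioned}

print(recommended)  # e.g. {'cpu_millicores': 780, 'memory_mib': 1430}
print({k: f"{v:.0%}" for k, v in savings.items()})  # roughly 61% / 65% reductions
```

The "without sacrificing reliability" part is the headroom: the recommendation is not observed usage itself, but observed usage plus a margin sized from the workload's variability.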

Published Date : May 18 2022


SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Tim CrawfordPERSON

0.99+

Keith TownsendPERSON

0.99+

30QUANTITY

0.99+

40QUANTITY

0.99+

1.2 billionQUANTITY

0.99+

MattPERSON

0.99+

Matt ProvoPERSON

0.99+

DatadogORGANIZATION

0.99+

storm for forgeORGANIZATION

0.99+

Patrick BergstromPERSON

0.99+

2016DATE

0.99+

JavaTITLE

0.99+

10QUANTITY

0.99+

Melissa SpainPERSON

0.99+

nine timesQUANTITY

0.99+

Valencia SpainLOCATION

0.99+

40%QUANTITY

0.99+

less than 30%QUANTITY

0.99+

10 years agoDATE

0.98+

United health groupORGANIZATION

0.98+

bothQUANTITY

0.98+

20 years agoDATE

0.98+

oneQUANTITY

0.98+

KeithPERSON

0.98+

once a yearQUANTITY

0.98+

once a weekQUANTITY

0.98+

HPAORGANIZATION

0.98+

2022DATE

0.98+

CoonORGANIZATION

0.98+

30%QUANTITY

0.98+

first conversationsQUANTITY

0.97+

CloudnativeconORGANIZATION

0.97+

60%QUANTITY

0.97+

KubernetesTITLE

0.97+

EttiPERSON

0.97+

todayDATE

0.96+

Patrick BrittonPERSON

0.96+

KubeconORGANIZATION

0.96+

StormForgeORGANIZATION

0.95+

data dogORGANIZATION

0.94+

PrometheusTITLE

0.94+

three pillarsQUANTITY

0.94+

secondlyQUANTITY

0.94+

RicoORGANIZATION

0.93+

Q con cloudORGANIZATION

0.93+

hundreds of deploymentsQUANTITY

0.92+

day twoQUANTITY

0.92+

EuropeLOCATION

0.92+

KubernetesORGANIZATION

0.92+

IntelORGANIZATION

0.92+

one spotQUANTITY

0.89+

at least 10%QUANTITY

0.87+

one thingQUANTITY

0.85+

hundred vendorsQUANTITY

0.83+

Once in a yearQUANTITY

0.83+

cuon cloud native conORGANIZATION

0.81+

RicoLOCATION

0.81+

BrookstoneORGANIZATION

0.8+

GrafanaORGANIZATION

0.8+

Berg storm CTOORGANIZATION

0.8+

SRETITLE

0.79+

SLATITLE

0.79+

BergstromORGANIZATION

0.79+

cloud native conORGANIZATION

0.78+

single releaseQUANTITY

0.77+

storm forge groupORGANIZATION

0.75+

1QUANTITY

0.75+

One sideQUANTITY

0.74+

EC twoTITLE

0.74+

1 singleQUANTITY

0.74+

PatrickPERSON

0.74+

Johnny Dallas, Zeet | AWS Summit SF 2022


 

>>Hello, and welcome back to theCUBE's live coverage here in San Francisco, California. Two days, day two of AWS Summit 2022, with AWS Summit New York City coming up in the summer; we'll be there as well. Events are back. I'm the host, John Furrier, and theCUBE's got a great guest here: Johnny Dallas with Zeet. We're gonna talk about his background. A little trivia here: he was the youngest engineer ever to work at Amazon, at the age of 17, and had to get escorted into re:Invent in Vegas because he was underage <laugh>, with security. All good stories. Now he's the CEO of a company called Zeet, a DevOps-focused managed service. A lot of cool stuff. Johnny, welcome to theCUBE. >>Thanks, John. Great to be here. >>So tell the story. You were the youngest engineer at AWS. >>I was, yes. So I used to work at a company called Bebo. I got started very young; I started working when I was about 14, kind of as a software engineer, and when I was about 16 I graduated out of high school early and worked at Bebo, running all of the DevOps at that company. I went to re:Invent in about 2018 to give a talk about some of the DevOps software I wrote there. But as many folks are probably familiar, re:Invent happens in a casino, and I was 16, so I was not able to actually go into the casino on my own <laugh>, so I had <inaudible> security as well as casino security escort me in to give my talk. >>Did Andy Jassy know about this? >>You know, that's a great question. I don't know. <laugh> >>I'll ask him. Great story. So obviously you started at a young age. I mean, it's so cool to see you jump right in.
I mean, you never grew up with the old school that I grew up in: loading packaged software onto the server, deploying it, plugging the cables in. You're just rocking and rolling with DevOps. As you look back now, what's the big generational shift? Because now you've got Gen Z coming in and millennials in the workforce; it's changing. Like, no one's putting packaged software on servers. >>Yeah, the tools keep getting better, right? We keep creating more abstractions that make it easier and easier. When I started doing DevOps, I could go straight into the EC2 APIs. I had APIs from the get-go, and my background was as a software engineer; I never went through the sysadmin stack, and I never had to, like you said, rack servers myself. I was immediately able to scale: I was managing, I think, 2,500 concurrent servers across every AWS region, through software. It was a fundamental shift. >>Did you know what an SRE was at that time? You were kind of an SRE. >>Yeah, I was basically our first SRE. I was familiar with the phrasing, but I really thought of myself as a software engineer who knows cloud APIs, not an SRE. >>All right. So let's talk about what's going on now. As you look at the landscape today, what's the coolest thing going on, in your mind, in cloud? >>I think the coolest thing is that we're seeing the next layer of those abstraction tools, and that's what we're doing with Zeet: we're building an app platform that deploys onto your cloud. So if you're familiar with something like Heroku, where you just click a GitHub repo, we actually make it that easy: you click a GitHub repo and it'll deploy on AWS using all AWS tools. >>So, right. So this is Zeet, the company. How old's the company? >>About a year and a half old now. >>Right. So explain what it does. >>Yeah.
So we make it really easy for any software engineer to deploy on AWS. That's not SREs; these are the actual application engineers doing the business logic. They don't really want to think about YAML, and they don't really want to configure everything super deeply. They want to say: run this API on AWS in the best way possible. We've encoded all the best practices into software, and we set it up for you. >>Yeah. So I think the problem you're solving is that there are a lot of want-to-be DevOps engineers who then realize, oh shit, I don't wanna do this, and there are the people who want to do it, who love being under the hood, who love that infrastructure. But the average developer needs to actually be that agile at scale. That seems to be the problem you solve, right? >>Yeah. We give way more productivity to each individual engineer. >>All right, so let me ask you a question. Let me just say I'm a developer. Cool, I built this new app; it's a streaming app or whatever, I'm making it up here. Let's just say I deploy it; I need your service. But what happens when my customers say, hey, what's your SLA? The CDN went down, it's flaky. How do you handle all that SLA reporting that Amazon provides? Because they do a good job with SOC reports all through the console. But as you start getting into DevOps and selling your app, you have customer issues. How do you view that? >>Well, I think you make a great point: AWS has all this stuff already. AWS has SLAs, AWS has contracts, AWS has a lot of the tools that are expected, so we don't have to reinvent the wheel here. What we do is help people get to those SLAs more easily. So, hey, this is the AWS SLA as a default; hey, we'll configure your services, and this is what you can expect here. But we can really leverage AWS reliability: you don't have to trust us.
You have to trust AWS, and trust that the setup there is good. >>Do you handle the recovery and mitigation between identification and remedy? Say there's downtime: the server's not at 99%, it went down for an hour, something's going on. Is there a service dashboard? What's the remedy? How does all that work? >>Yeah, so we have some built-in remediation. We basically say we're gonna do as much as we can to keep your endpoint up 24/7. If it's something in our control, we'll do it. If it's a disk failure, that's on us. If you push bad code, we won't put out that new version until it's working. So we do a lot to make sure your endpoint stays up, and then we alert you if there's a problem we can't fix: hey, AWS has some downtime, this thing's going on, you need to take this action. We'll let you know. >>All right. So what do you do for fun? >>For fun, a lot of side projects. <laugh> >>What's your side hustle right now? What have you got going on? >>A lot of it's playing >>With serverless. >>Yeah, playing with a lot of serverless stuff. I think there's a lot of really cool Lambda stuff going on right now. I love tools, is the truest answer: I love building something that I can give to somebody else, and they're suddenly twice as productive because of it. >>That's a good feeling, isn't it? >>Oh yeah. There's nothing >>Like that. Tools versus platforms. You know the expression: too many tools in the toolshed. Ultimately, tools become platforms. What's your view on that? Because if a good tool works and starts to get traction, you need to either add more tools or start building a platform. Platform versus tool: what's your reaction to that concept debate? >>Yeah, it's a good question.
We've basically started as a platform. First off, we've really focused on developers who don't wanna get deep into DevOps, and so we've built all the pieces of the stack: we do CI/CD management, we do container orchestration, we do monitoring. And now we're splitting those up into individual tools so they can be used in conjunction. >>Right. So what are some of the use cases you see for your service? It's basically DevOps as a managed service. Do clients have a DevOps person? One person, two people? What are the requirements to run Zeet? >>So we've got teams that start with no DevOps, and then we've had teams grow up to about five- to ten-person DevOps teams. As more structured people come in, because we're in your cloud, you're able to go in and configure on top of it; we can't block you. You wanna use some new AWS service? You're welcome to use that alongside the stack we deploy for you. >>How many customers do you have now? >>We've got about 40 companies that are using us for all of their infrastructure, kind of across the board. >>What's the pricing model? >>Our pricing model is that we charge, basically, similar to an engineer's salary, as a monthly rate. We have plans at 300 bucks a month, a thousand bucks a month, and then an enterprise plan >>Based on the requirements and scale. So, backing into the people cost, you must offer discounts; it's not a fully loaded thing, is it? >>Yeah, there are discounts at scale. >>Then you pass through the Amazon bill. >>Yeah, our customers actually pay the Amazon bill themselves. >>Oh, so they have their own account? >>Right. There's no margin on top; you're linking your AWS account in, which is huge, because we're now able to help our customers get better deals with Amazon. >>Got it.
We're incentivized, on their behalf, to drive your cost down. >>And what's your main unit of economics as the software scales? >>We think of things as projects: how many services you have to deploy as that scales up. >>Awesome. All right, you're 20 years old now; you can't even drink legally. <laugh> What are you gonna do when you're 30? We're gonna be there. >>Well, we're making it better. >>Better than the old guy on theCUBE here. <laugh> >>I think we're seeing a big shift. We've got these major clouds, and AWS is obviously the biggest cloud, constantly coming out with new services. But we're starting to see other clouds build many of the common services; Kubernetes is a great example, it exists across all the clouds. And we're starting to see new platforms come up on top that allow you to leverage tools from multiple clouds at the same time. Many of our customers actually have AWS as their primary cloud, and they'll have secondary clouds, or they'll pull features from other clouds into AWS through our software. I'm very excited by that, and I expect to be working on that when I'm 30. >>Awesome. Well, you're gonna have a good future. I gotta ask you this question, because I was a computer science undergraduate in the eighties, and computer science back then was hardcore, mostly systems: OS stuff, databases, compilers. Now there's so much more, right? So how do you look at the high school and college curriculum experience, and the folks who are nerding out on computer science? It's not one or two things anymore; you've got a lot of things. I mean, look at Python, and data engineering emerging as a huge skill. What's it like for college kids and high school kids now?
What do you think they should be doing, if you had to give advice to your 16-year-old self, now in college? I mean, Python's not a great language, but it's super effective for coding, the data work is really relevant, and you've got other language opportunities, you've got tools to build. So you've got a whole culture of young builders out there. What should people gravitate to, in your opinion, or stay away from? >>That's a good question. I think, first of all, you're very right that the number of developers is increasing so quickly, and so we see more specialization. That's why we also see these SREs that are different from typical application engineering; you get more specialization in job roles. What I'd say to my 16-year-old self is: do projects. Most of what I've learned came on the job, or online, trying things, playing with different technologies, actually getting stuff out into the world. That's way more useful than what you'll learn in a college classroom. I think classrooms are great to get a basis, but you need to go out and experiment, actually try things. >>I think that's great advice. In fact, I'd just say, from my experience of doing it all the hard way: cloud is so great for just saying, okay, I'm done, I'm abandoning this project, move on, because you know it's not gonna work. In the old days you had to build the data center, you bought all this gear, and people would hang on to the old project and try to force it out there. >>You can launch a project, >>See instant gratification, and if it ain't working <laugh>, shut it down and move on to something new. >>Yeah, exactly. You should be able to do that much more quickly. >>So you're saying: do those projects, and don't be afraid to shut them down. Do you agree with that? >>Yeah.
I think it's: experiment. You're probably not gonna hit it rich on the first one; it's probably not gonna be that first idea. So don't be afraid to get rid of things and just try over and over again. It's the number of reps that wins. >>I was commenting online when Elon Musk was gonna buy Twitter, that whole Twitter thing, and someone said, hey, look at the product group at Twitter: it's been so messed up because they actually did get it right the first time <laugh>, and it became such a great product that they could never change it, because people would freak out, given the utility of Twitter. I mean, they've gotta add some things, the edit button, we all know what they need to add, but there was just this internal dysfunction on the product team: what are we gonna work on? Don't change the product. So there are opportunities out there where you might get the lucky strike right out of the gate. You don't know. >>It's almost a curse, too. With a Twitter, you're not gonna hit it rich a second time. So yeah. >><laugh> Johnny Dallas, thanks for coming on theCUBE. Really appreciate it. Give a plug for your company; take a minute to explain what you're working on, what you're looking for: hiring, funding, customers. Just give a plug, last minute, and have the last word. >>Yeah. So, Johnny Dallas from Zeet. If you need any help with your DevOps, if you're an early startup and don't have a DevOps team, or you're trying to deploy across clouds, check us out at zeet.co. We are actively hiring, so if you're a software engineer excited about tools and cloud, or you're interested in helping get this message out there, hit me up. Find us at Zeet. >>Yeah. LinkedIn, Twitter handle, GitHub handle? >>I'm the only Johnny Dallas on LinkedIn and GitHub, and underscore Johnny Dallas underscore on Twitter. >>Johnny Dallas, the youngest engineer to work at Amazon, now 20, working on a great new project. The cube builders are all young; they're growing into the business, and they've got cloud at their back as a tailwind. I wish I was 20 again. This is theCUBE. I'm John Furrier, your host. Thanks for watching. >>Thanks.

Published Date : Apr 21 2022


Saket Saurabh, Nexla | CUBE Conversation


 

>>Hey everyone. Welcome to this CUBE Conversation featuring Nexla. I'm your host, Lisa Martin, and today we are joined by Saket Saurabh, CEO and founder of Nexla. Saket, great to have you on the program. >>Lisa, thank you so much for having me here. Really excited about this. >>Tell us a little bit about Nexla. What is it that you guys do? >>Yeah. You know, we are in the world of data, and one of the biggest challenges that we face as an industry is that there is so much data, so much variety. How do we really get it into the hands of the people who use data? And the users of data are all across the board; they shouldn't have to be engineers, and they are in different functions. So Nexla's purpose and mission has always been ready-to-use data in the hands of the users. What Nexla does today is make it possible for users across the board, whether those are data scientists, data analysts, or people in various business functions, to get the data they need in the tools they work with. We make that possible in a very no-code way, and very uniquely, we actually do that by automating a lot of the data engineering process. We'll talk more about that, but it's an exciting space to be in. >>It is an exciting space to be in. And of course the volumes of data just continue to explode, and that will not be slowing down anytime soon. As we know, one of the things we saw in businesses over the last two years was businesses pivoting so many times, really needing to go from survival mode to thriving mode, and the ability to harness the power and the insights in data is critical for businesses to be successful these days. As consumers, we just expect that, whether it's our business life or personal life, whoever we're interacting with is going to know what we want and display that to us quickly.
Let's think about data mesh. It's a relatively new concept, right? Talk to me about data mesh and what differentiates Nexla from your competition. >>Yeah. So data mesh is essentially, I would say, in the lineage of the concept of democratizing data. The idea has always been that data should get to the users. Now, for a long time these users were dependent on IT and engineering to get the data to them. What the data mesh does is bring a framework by which the users of data, we call them the domains, the different functions that use data, can have the data to use themselves. They can manage things on their own, and I think that allows for a framework in which teams can truly scale. That bottleneck of depending on engineering to do everything for you is just not going to work, and in the last two years, even more so, we saw that as companies tried to move fast, it started to break down. I think there is a lot of momentum around this concept of data mesh for this reason; people are finding that this concept is what can help them scale. >>And how does Nexla deliver that single tool, so that you can really democratize data and give people with varying levels of technical fluency the access that they need? I can imagine finance folks with ERP data, marketing folks with CRM data. How do you do that with a single tool? >>Yeah. So I think the key thing about getting data into the hands of users, as we think about data democratization, has been: how does it actually happen? How do you give people access to the data? Simply giving them passwords to systems is not enough. Now, the data mesh concept comes with the understanding that there should be an entity, which we call a data product. A data product becomes that sort of common entity, something that people can get access to, use, and collaborate on.
Now, what a data product is becomes an important question, of course, and so does how we get a data product. Where Nexla comes in, in a very key way, is that we automatically generate these data products. Again, going back to the thinking that, look, there are not going to be enough engineers to write code for everything, what we are able to do is connect to data systems, look at the data, understand it, and package it up as a product, a data product. And that data product is a core element of the data mesh. I'm happy to share what a data product is, if it helps people understand. >>Yeah, let's double-click into that a little bit. I was noticing Nexsets on your website, and I wanted to know what that is and how it reimagines data product creation. >>Yeah. So let me break down a little bit what a data product is in the first place. As consumers, we use products all the time. My laptop here on the desk is a product, made from raw materials like wood and metal and screws. Somebody designed the product, somebody built it, and I'm using it. If we think of the same parallel in the world of data, then APIs and files and database tables are the raw materials. If somebody takes those and packages them up into something that other people can use easily, that is the concept of the data product. Now, how is it different from data? Well, you take the core data and you put things around it: what is the distribution of the data? What is the structure of it? What are the validations that make it work? How do you better manage it? Who has access to it? When you take that raw material and put all of those other structures around it, that's when it becomes a data product. And the Nexset concept in Nexla is essentially a manifestation of that.
It is the concept that these data products do not need to be new copies of data, which is a huge pain, by the way. Instead, they can be logical entities. If I can take us back to the world of compute, where we understand the concept of containers: these containers are basically a logical entity that gives us access to the computation resources. Think of a Nexset as a very similar thing, a logical entity that gives us access to the data resources. And this is something we have been able to innovate and automate in such a way that today, when people think of the data mesh and they want to build it, they see us as a component in that whole framework. So data mesh is a much broader framework, but we are sort of the building block for it, through this concept.
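The container analogy above can be made concrete with a small sketch. This is a hypothetical illustration of a "logical data product," not Nexla's implementation: the product wraps a source instead of copying it, and carries schema and validation metadata around the raw data, the way a container wraps compute without owning the hardware. All names here are made up for the example.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, Iterable, Iterator, List

@dataclass
class DataProduct:
    """A logical data product: schema + validations around a deferred source."""
    name: str
    source: Callable[[], Iterable[dict]]   # deferred access; no copy is made
    schema: Dict[str, type] = field(default_factory=dict)
    validators: List[Callable[[dict], bool]] = field(default_factory=list)

    def read(self) -> Iterator[dict]:
        """Yield only records that match the schema and pass every validation."""
        for record in self.source():
            if all(isinstance(record.get(k), t) for k, t in self.schema.items()) \
               and all(v(record) for v in self.validators):
                yield record

# A hypothetical 'orders' product over an in-memory source:
orders = DataProduct(
    name="orders",
    source=lambda: [{"id": 1, "qty": 2}, {"id": 2, "qty": -1}],
    schema={"id": int, "qty": int},
    validators=[lambda r: r["qty"] > 0],
)
print(list(orders.read()))  # [{'id': 1, 'qty': 2}]
```

The design choice worth noticing is that `source` is a callable, not a dataset: consumers always read through the product's schema and validations, but the underlying data stays wherever it already lives.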
And one of the things which is very interesting with data, um, compared to other technologies is that it touches almost every aspect of business. It's not limited to engineering. It is your person in HR who is doing HR analytics, source candidates, and profiles are reviewing and all that stuff to finance, to operations, every aspect of business does touch data. So this has to be done in a language and a mechanism that's much more approachable. >>It's gotta be horizontal for all of those different types of users, right. To be able to understand so that ultimately not only did they get access to the data, but they can pull out those insights faster than their competition, whether it's to develop new revenue, streams, new products, new services, you know, the, the person on the other end or the companies on the other end are expecting that real-time interaction. >>Yeah. Yeah. But that's >>No longer a nice to have >>No longer likes to have. And to clarify, right. I mean, the use of data is in multiple ways, right? So analytics is a big use of data, which is how is my business doing and running. Um, we have customers like, um, you know, um, Marchex and Poshmark and bed bath beyond and so on. We'll use us heavily to bring data for the analytics use cases as our companies, for example, like a door dash or Instacart, but that data feeds operational purposes, operational purposes, meaning, understanding the availability of inventory or products across different stores. Not that data has functioned to say, well, if I know what products are available, then I can list them. Then I can go pick them up. And that's not a analytics use case alone. It has a, um, you know, it has an operational use case, right? Um, similarly we see that in audit tracking, we have customers, for example, like Narvar that use us to connect to different shipping tracking system. So the applications of data are in analytics. There are certainly also in operations, which is core business. 
And they're also, of course, in data science. There's no question that the extension of analytics from looking back on how business is doing to data science, which is, you know, what should we be doing and how should we be more intelligent? So it's across the board, >>Across the board, horizontal, all industries really need to do this, but one of the things that pop into my mind as you were walking through that example was the supply chain challenges that we're all experiencing right now. How can next help organizations mitigate some of the challenges that are going on? >>I think what happens is, you know, technologies like ours, which are the data layer are at a fundamental foundational level. One of the things about next slide is that we are able to bring a data into a much more real time usage. So where in companies, but traditionally moving data on a much more sort of periodic basis. We are our plumbing under the hood. We are completely in real time, which means that we are allowing companies to now get access to that data in a faster way where possible. So again, this is not something that can be fixed overnight, but the role that data can play in is better visibility. Um, and better visibility means those business decisions are being made earlier at the right time. It's more insight. And hopefully that eventually leads to sort of, um, much more efficient, actual on the ground, some movement of products and so on. >>Yeah. That visibility is absolutely critical regardless of the global climate. Right. Talk to me last question here, since we're almost out of time, give me a little bit about your AWS partnership and then talk to me about what's next for next time. >>Um, you know, as, as a technology provider, we ended up, um, running a lot of our own infrastructure on AWS as do many of our customers. 
And we have been an AWS partner for multiple years, but very recently we actually made our product available on AWS Marketplace, which means that access to our technology has become so much easier for companies. Now, Nexla started its journey focusing on mid-to-large enterprises and some of the most complex use cases out there, from some of the biggest banks, to some of the biggest companies in marketing, to some of the core companies in retail, logistics, and so on. What is happening now is that, given the powerful nature of our product and the ease of use, that need is coming further and further earlier in the life cycle of companies. Today, new companies are starting up which are saying, we need to make that sort of investment in data infrastructure earlier. That's why we have seen even some very small, early startups come to us for our technology. So we are very much partnered up with AWS, because AWS covers the whole gamut, from companies that were started yesterday to extremely large enterprises, and that makes our technology accessible to them. >>Excellent. Well, thank you so much for joining me. It sounds like a tremendous amount of momentum and opportunity at Nexla. We appreciate your insights, and best of luck to you. We look forward to hearing more. >>Thank you, Lisa. It's a pleasure talking to you; it's an exciting space, so time flies when we talk about it. >>Doesn't it? It really does. For Saket Saurabh, I'm Lisa Martin. You're watching theCUBE; leave it here for more coverage from the leader in live tech hybrid events.

Published Date : Mar 24 2022


Unpacking IBM's Summer 2021 Announcement | CUBEconversation


 

(soft music) >> There are many constants in the storage business: relentlessly declining cost per bit, innovations that perpetually battle the laws of physics, a seemingly endless flow of venture capital despite the intense competition. And there's one other constant in the storage business, Eric Hertzog, and he joins us today in this CUBE video exclusive to talk about IBM's recent storage announcements. Eric, welcome back to theCUBE. >> Great, Dave, thanks very much, we love being on theCUBE and you guys do a great job of informing the industry about what's going on in storage and IT in general. >> Well, thank you for that. >> Great job. >> We're going to cover a lot of ground today. IBM Storage made a number of announcements this past month around data resilience and a new as-a-service model, which a lot of folks are doing in the industry, and you've made performance enhancements. Can you give us the top line summary of the hard news, Eric? >> Sure, the top line summary is, of course, cybersecurity is top of mind for everybody. In the recent Fortune 500 list that came out, you probably saw there was a survey of CEOs of Fortune 500 companies, and they named cybersecurity as their number one concern, not war, not pandemic, but cybersecurity. So we've got an announcement around data resilience and cyber resiliency built on our FlashSystem family with our new offering, Safeguarded Copy. And the second thing is the move to a new method of storage consumption: Storage-as-a-Service, a pay-as-you-go model, cloud-like, the way people buy cloud storage. That's what you can do now from IBM Storage with our Storage-as-a-Service. Those are the two key takeaways, Dave.
Yeah, and I want to stay on the trends that we're seeing in cyber for a moment. The work-from-home pivot and the hybrid work approach have really created new exposures; people aren't as secure outside of the walled garden of the offices, and we've seen a dramatic escalation in adversaries' capabilities and techniques, not the least of which is island hopping, in other words, putting code fragments in the digital supply chain; they reform once they're inside the company, and it's almost like this organic creepy thing that occurs. They're also living, as you know, stealthily for many, many months, sometimes years, exfiltrating data, and then just waiting, and then when companies respond, the incident response triggers a ransomware incident. So they escalate the cyber crime, and it's just a really, really bad situation for victims. What are you seeing in that regard and the trends? >> Well, one of the key things we see is everyone is very concerned about cybersecurity. The Biden administration has issued (indistinct) not only to the government sector, but to the private sector; cybersecurity is a big issue. Other governments across the world have done the same thing. So at IBM Storage, what we see is taking a comprehensive view. Many people think that cybersecurity is the moat with the alligators, the castle wall, and then of course the sheriff of Nottingham to catch the bad guys. And we know the sheriff of Nottingham doesn't do a good job of catching Robin Hood. So it takes a while, as you just pointed out, sitting there for months or even longer. So one of the key things you need to do in an overall cybersecurity strategy is don't forget storage. Now our announcement around Safeguarded Copy is very much about rapid recovery after a malware or ransomware attack. We have a much broader set of cybersecurity technology inside of IBM Storage. For example, with our FlashSystem family, we can encrypt data at rest with no performance penalty.
So if someone steals that data, guess what? It's encrypted. We can do anomalous pattern detection with our backup product, Spectrum Protect Plus. Why would you care? Well, if theCUBE's backup was taking two hours on particular datasets and all of a sudden it was taking four hours, hmm, maybe someone is encrypting those backup data sets. And so we notify. So what we believe at IBM is that an overarching cybersecurity strategy has to keep the bad guys out: threat detection, anomalous pattern behavior on the network, on the servers, on the storage and all of that, chasing the bad guy down once they breach the wall, 'cause that does happen. But if you don't have cyber and data resilience built into your storage technology, you are leaving a gap that the bad guys can exploit, whether that be the malware and ransomware guys. Oh, by the way, Dave, there still is internal IT theft; there was a case about 10 years ago now where 10 IT guys stole $175 million. I kid you not, $175 million from a bunch of large banks across the country, and that was an internal IT theft. So between the internal IT issues and the malware and ransomware that could approach you, a comprehensive cybersecurity strategy must include storage. >> So I want to come back to Safeguarded Copy, and you mentioned some features and capabilities, encrypting data at rest, your anomalous pattern recognition, inferring you're taking a holistic approach, but of course you've got a storage centricity. What's different about your cyber solution? What's your unique value proposition, your (indistinct)? >> Well, when you look at Safeguarded Copy, what it does is it creates immutable copies that are logically air-gapped, but logically air-gapped locally.
So what that means is if you have a malware or ransomware attack and you need to do a recovery, whether it be a surgical recovery or a full-on recovery because they attacked everything, then we can do recovery in a couple hours versus a couple of days or a couple of weeks. Now, in addition to the logical local air-gapping with Safeguarded Copy, you also could do remote logical air-gapping by snapping out to the cloud, which we also have on our FlashSystem products, and you also, of course, could take our FlashSystem products and back up to tape, giving you a physical air gap. In short, we give our customers three different ways to help with malware and ransomware. >> Let me ask you- >> Are air-gapped locally. >> Yeah, please continue, I'm sorry. >> So air-gapping locally for rapid recovery; air-gapping remotely, which again then puts it on the cloud provider network, so hopefully they can't breach that; and then clearly a physical air gap going out to tape. All three, and on the mainframe we have Safeguarded Copy already, Dave, and several of our mainframe customers actually do two of those things: they'll do Safeguarded Copy for rapid recovery locally, but they'll also take that Safeguarded Copy and either put it out to tape or put it out to a cloud provider with a remote logical air gap using a snapshot. >> I want to ask you a question about management, 'cause when you ask CSOs what's your number one challenge, they'll say lack of talent: we've got all these tools and we lack the skills to really do all this stuff; can't hire people fast enough and they don't have the skills. So when you think about it, what you do is bring a lot of automation into the orchestration and management.
My question is this: when you set up air gaps, what do you recommend, or what do you see, in terms of not only physically separating the data, but also the management, orchestration and automation? Does that have to be logically air-gapped as well, or can you use the same management system? What's best practice there? >> Ah, so what we do is we work with our copy management software, which will manage regular copies as well, but Safeguarded Copies are immutable. You can't write to them, you can't get rid of them, and they're logically air-gapped from the local hosts. So for the Safeguarded Copies, that immutable copy you just made, the hosts don't even know that it's there. So you manage that with our copy management software, which, by the way, will manage regular snapshots and replicas as well, but what that allows you to do is automate. For example, you can automate recovery across multiple FlashSystem arrays; the copy services manager will allow you to set different parameters for different Safeguarded Copies. So for a certain Safeguarded Copy, you could say, make me a copy every four hours. And then on another volume, on a different data set, you could say, make me a copy every 12 hours. Once you set all that stuff up, it's completely automated, completely automated. >> So, I want to come back to something you mentioned about anomalous pattern recognition and how you help with threat detection. So a couple of quick multi-part questions here. First of all, the backup corpus is an obvious target, so that's an area that you have to protect. And so, you're saying, you've used the example of your backup taking too long; how do you do that? What's the technology behind that? And then, can you go beyond, should you go beyond, just the backup corpus, with primary data or copies on-prem, et cetera? Two-part question.
>> So when we look at it, the anomalous pattern detection is part of our backup software, Spectrum Protect, and what it does is use AI-based technology to recognize a pattern. So it knows that the backup dataset for theCUBE takes two hours, and it recognizes that and sees that as the normal state of events. So if all of a sudden that backup that theCUBE was doing used to take two hours and starts taking four, that's an anomalous pattern, not a normal pattern. It'll send a note to the backup admin, the storage admin, whoever you designate, and say the backup data set for theCUBE that used to take two hours is taking four hours; you probably ought to check that. So when we view cyber resiliency from a storage perspective, it's broad. We just talked about anomalous pattern detection in Spectrum Protect. We spent most of the conversation on our Safeguarded Copy, which has been available on the mainframe for several years and is now available on FlashSystem, making immutable, logically air-gapped local copies that can be rapidly recovered and can help you recover from a malware or ransomware attack. Our data-at-rest encryption comes with no performance penalty. So when you look at it, you need to create an overarching strategy for cybersecurity, and then, when you look at your storage estate, you need to look at your secondary storage, backup, replicas, snaps, archive, and have a strategy there to protect that, and then you need a strategy to protect your primary storage, which would be things like Safeguarded Copy and encryption. So then you put it all together, and in fact, Dave, one of the things we offer is a free cyber resilience assessment. It's not only for IBM Storage; it happens to be a cyber resilience assessment that conforms to the NIST Framework, and it's heterogeneous. So if you're a big company and you've got IBM, EMC and HP storage, guess what? It's all about the data sets, not about the storage.
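The duration-based check Herzog describes (a backup that normally takes two hours suddenly taking four, prompting a note to the admin) can be sketched as a simple baseline comparison. This is an illustrative sketch only, not Spectrum Protect's actual implementation; the function name and the 1.5x alert threshold are assumptions.

```python
from statistics import mean

def check_backup_duration(history_hours, latest_hours, threshold=1.5):
    """Flag a backup run whose duration deviates sharply from its baseline.

    history_hours: past run durations for one dataset (the learned "normal").
    threshold: ratio over the baseline mean that triggers an alert; a
    2-hour backup jumping to 4 hours is a 2.0x ratio, well past 1.5x.
    """
    baseline = mean(history_hours)
    ratio = latest_hours / baseline
    if ratio >= threshold:
        return (f"ALERT: backup took {latest_hours:.1f}h vs a "
                f"{baseline:.1f}h baseline, possible ransomware encryption")
    return "OK"

# A dataset that normally backs up in about 2 hours suddenly takes 4.
print(check_backup_duration([2.0, 1.9, 2.1, 2.0], 4.0))
```

In a real product the baseline would be learned continuously per dataset and the notification routed to whichever backup or storage admin was designated, as described in the interview.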
So we say, you said these 10 data sets are critical, why are you not encrypting them? These data sets are XYZ, why are you not air-gapping them? So we come up, based on the NIST Framework, with a set of recommendations that are not IBM specific, but they are storage specific: here's how you make your storage more resilient, both your secondary storage and your primary storage. That's how we see the big picture, and Safeguarded Copy of course fits in on the primary storage side: A, on the mainframe, which we've had for several years now, and B, in the Linux world, the Unix world and the Windows Server world on our FlashSystem portfolio, with the announcement we did on July 20th. >> Great, thank you for painting that picture. Eric, are you seeing any use case patterns emerge in this space? >> Well, we see a couple of things. First of all, most resellers and most end-users don't see storage as part of an overarching cybersecurity strategy, and that's starting to change. Second thing we're seeing is more and more storage companies are trying to get into this bailiwick of offering cyber and data resilience. The value IBM brings, of course, is much longer experience at that, and we even integrate with other products. So for example, IBM offers a product called QRadar from the security division; not a storage product, a security product, and it helps you with early data breach recognition. So it looks at servers, network access, it looks at the storage, and it actually integrates now with our Safeguarded Copy. So part of the value that we bring is this overarching strategy of comprehensive data and cyber resilience across our whole portfolio, including Safeguarded Copy, our July 20th announcement, but also integration beyond storage now with our QRadar product from the IBM security division.
And there will be future announcements coming in both Q4 and Q1 of additional integration with other security technologies, so you can see how storage can be a vital cog in the corporate cybersecurity strategy. >> Got it, thank you. Let's pivot to as-a-service; the cloud obviously brought in that as-a-service model, and now it seems like everybody has one. You guys have announced; obviously HPE, Dell, Lenovo, Cisco, Pure, everybody's got an as-a-service model out there. What do we need to know about your as-a-service solution, and why is it different from the others? >> Sure. Well, one of the big differences is we actually charge on actual storage, not effective storage. When you look at effective storage, which most of them use, that includes counting the (indistinct) data sets and other things, so you're basically paying for that. Second thing we do is we have a bigger margin. We sell it by the SLA, Dave: SLA-1, two and three. So let's say theCUBE needs SLA-3, and the minimum capacity is 100 terabytes, but let's say you think you need 300 terabytes. No problem. You also have a variable portion. One of the key differences is, unlike many of our competitors, the rate for the base and the rate for the variable are identical. Several of our competitors, when you're in the base, you pay a certain amount; when you go into the variable, they charge you a premium. The other key differentiator is around data reduction. Some of our competitors, and all storage companies, have data reduction technology: block-level dedupe, thin provisioning, compression, we all offer those features. The difference is, with IBM's pay-as-you-grow Storage-as-a-Service model, if you have certain data sets that are not very dedupable, not very compressible, we absorb that. With most of our competitors, if the dataset is not easily dedupable or compressible and they don't see the reduction, they actually charge you a premium for that. So that is a huge difference.
And then the last big difference is our 100% availability guarantee. We have that on our FlashSystem product line; we're the only one offering a 100% availability guarantee. We also, against many of the competitors, offer better base availability characteristics, better nines. We offer six nines of availability, which is five minutes and 26 seconds of downtime a year, plus the 100% availability offering. Some of our competitors only offer four nines of availability, and if you want five or six, they charge you extra. We give you six nines base, which is only five minutes and change of downtime in a year. So those are the key differences between us and the other as-a-service models out there. >> So, the basic concept, I think, is if you commit to more and buy more, you pay less per unit. I mean, that's the basic philosophy of these things, right? So, if- >> Yes. >> I commit to you X, let's say. I want to just sort of start small, and I commit to you X, and great, I'm now in. Maybe I sign up for a multi-year term; I commit this much, whatever, 100 terabytes or whatever the minimum is. And then I can say, hey, you know what? This is working for me. The CFO likes it, the IT guys can provision more seamlessly, our chargeback or showback model is working. I want to now make a bigger commitment, and I can. And can I break my three-year term and come back and then renegotiate, kind of like reserved instances, maybe bigger and pay less? How do you approach that? >> Well, we do a couple of things. First of all, you could always add additional capacity, and you just call up. We assign a technical account manager to every account, in addition to what you get from the regular sales team and what you get from our value business partners. By the way, we did factor the business partners, Dave, into this, so business partners will have a great pay-as-you-go Storage-as-a-Service solution that includes partners and their ability to leverage it.
In fact, several of our partners that have both MSP and MHP businesses are working right now to leverage our Storage-as-a-Service and then add on their own value with their own MSP and MHP capability. >> And they can white label that? Is that right, or? >> Well, you'd still have Storage-as-a-Service from IBM. They would resell that to theCUBE, and then they'd add in their own MHP or MSP. >> Got it. >> That said, for partners interested in doing a white label, we would certainly entertain that capability. >> Got it. I interrupted you, carry on please. >> Yeah, you can go ahead and add more capacity, not a problem. You also can change the SLA. So theCUBE, one of the leading industry analyst firms, has bought every analyst firm in the world, and you're using IBM Storage-as-a-Service, the pay-as-you-go, cloud-like model. So what you do is call up the technical account manager and say, Eric, we bought all these other companies, they're using on-prem storage, and we'd like to move to Storage-as-a-Service for all the companies we acquire. We can do that, so that would up your capacity. And then you could say, now, we've been at SLA-2, but because we're adding all these new applications and workloads from our acquired companies, we want some of it to be at SLA-1. So we can have some of your workloads on SLA-2 and others on SLA-1, or you could switch everything to SLA-1; you just call your technical account manager, and they'll make that happen for you, or your business partner, obviously, if you bought through the channel. >> I get it; the hard question is, what if all those other companies theCUBE acquired are also IBM Storage-as-a-Service customers? What's that discussion like? Hey, can I consolidate those and get a better deal? >> Yeah, if they're all Storage-as-a-Service customers, and Dave, I love that thought, we would just figure out a way to consolidate the agreements. The agreements are one through five years.
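The pricing mechanics laid out above (a committed base capacity billed at an SLA-tier rate, with any variable overage billed at the same rate rather than at a premium) reduce to a simple linear model. This is a minimal sketch with invented per-terabyte rates; IBM's actual prices are not stated in the conversation.

```python
# Hypothetical $/TB/month rates per SLA tier -- not IBM's real prices.
RATES = {"SLA-1": 30.0, "SLA-2": 22.0, "SLA-3": 15.0}

def monthly_cost(sla, base_tb, used_tb):
    """Committed base plus variable usage, both at the SAME rate.

    The differentiator Herzog describes: overage is billed at the
    identical per-TB rate as the base, not at a premium.
    """
    rate = RATES[sla]
    overage_tb = max(0.0, used_tb - base_tb)  # usage beyond the commitment
    return base_tb * rate + overage_tb * rate

# 100 TB minimum commitment at SLA-3, actually consuming 130 TB:
print(monthly_cost("SLA-3", 100, 130))
```

A competitor charging a premium on the variable portion would multiply `overage_tb` by a higher rate; here one flat rate covers both terms, which is the whole point of the comparison.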
Something else I think is very unique: let's say, for whatever reason, and we all love finance people, the IT guys have called finance and said, we did a one-year contract, we'd now like to do a three-year contract. The one year is coming up and guess what? Finance is delayed for whatever reason; the PO doesn't go through. So the IT guy calls up the technical account manager: we love your service, it's delayed in finance. We will let them stay on their Storage-as-a-Service, even though they don't have a contract. Now, of course they've told us they want to do one, but if they exceed the contract by a quarter or two, because the finance guys are messing with the IT guys, that's fine. What's the key differentiator? Exactly the same price. Several of our competitors will also extend without a contract, but until you do a contract they charge you a premium; we do not. Whatever you are, if you're on SLA-3, you're SLA-3; we'll extend you, no big deal. And then you do your contract when the finance guys get their act together and you're ready to go. So that is something we can do, and we'll do on a continual basis. >> Last question. Let's go way out. So, we're not doing any near-term forecasts; I'm trying to understand how popular you think as-a-service is going to be. I mean, if you think about the end of the decade, let's think industry total, IBM specific: how popular do you think as-a-service models will be? Do you think it will be the majority of the transacted business, or kind of more of a, just one of many? >> So I think there will be many. Some people will still have bare metal on-premises. Some people will still do virtualization on-premises or in a hybrid cloud configuration. What I do think, though, is Storage-as-a-Service will be over 50% by the end. Remember, we're sitting in 2021, so we're talking now 2029. >> Right. >> So I think Storage-as-a-Service will be over 50%.
I think most of that Storage-as-a-Service will be in a hybrid cloud model. I think the days of 100% cloud, which is the way it started, are behind us; a lot of people realize that 100% cloud actually is more expensive than a hybrid cloud or fully on-prem. I was at a major university in New York; they are in the healthcare space, and I know their CIO from one of my past lives. I was talking to him; they did a full-on analysis of all the cloud providers, going 100% cloud. And their analysis showed that 100% cloud, particularly for highly transactional workloads, was 50% more expensive than buying it, paying the maintenance and paying their employees. So they did an all-in view. So what I think it's going to be is Storage-as-a-Service will be over 50%. I think most of that Storage-as-a-Service will be in a hybrid cloud configuration with storage on-prem or in a colo, like what our IBM pay-as-you-go service will do, and then it will be accessed and available through a hybrid cloud configuration with IBM Cloud, Google, Amazon or whoever the cloud provider is. So I do think that you're looking at over 50% of the storage being as-a-service, but I do think the bulk of that as-a-service will be through someone like IBM or our competitors, and then part of it will be from the cloud providers. But I do think you're going to see a mix, because right now the expense of going 100% cloud storage is dramatically understated, and when someone does an analysis like that major university in New York did, they had a guy from finance help them do the analysis, and it was 50% more expensive than doing it on-premises; either on-prem or on-prem as-a-service, both were way cheaper. >> But you own the asset, right? >> Yes. >> As-a-service model. >> We, right, we own the asset.
>> And I would bet, >> I would bet that over the lifetime value of the spend in an as-a-service model, just like the cloud, if you do this with IBM or any of your competitors, overall you're going to spend more, just like you've seen in the cloud, but the benefit is the flexibility that you get. >> Yeah, yeah. If you compare it, so obviously the number one model would be to buy; that's probably going to be the least expensive. >> Right. >> But it's also the least flexible. Then you also have leasing: more flexibility, but leasing usually is more expensive. Just like when you lease your car, if you add up all the lease payments and then you, at the end, pay that balloon payment to buy, it's cheaper to buy the car up front than it is to lease it. Same thing with any IT asset; storage, network, servers all are available on leasing, and the net at the bottom line is more than buying it upfront. And then Storage-as-a-Service will also be more expensive than buying it, my friend, but you get ultimate capability: altering SLAs, adding new capacity, being able to handle an app very quickly. We can provision the storage, as you mentioned; the IT guys can easily provision. We provision the storage in 10 minutes. If you bought from IBM Storage or any competitor and you need more storage, A, you've got to put a PO through your system, and if you're not theCUBE but a giant global Fortune 500, sometimes it takes weeks to get the PO done. Then the PO has to go to the business partner, the business partner has got to give a PO to the distributor and a PO to IBM. So it can take you weeks to actually get the additional storage that you need. With Storage-as-a-Service from IBM, with our pay-as-you-go, cloud-like model, all you have to do is provision and you're done. And by the way, we provide a 50% overage for free.
So if they end up needing more storage, that 50% is actually sitting on-prem already, and if they get to 75% utilization of the total amount of storage, we then call them up; the technical account manager would call them up, and their business partner, and say, Dave, do you know that you guys are at 75% full? We'd like to come add some additional storage to get you back down to a 50% margin. And by the way, most of our competitors only do a 25% margin. So again, another differentiator for IBM Storage-as-a-Service. >> What about, I said last question, but I have another question. What about day one? Like how long does it take, if I want to start fresh with as-a-service? >> Got it. >> How long does it take to get up and running? >> Basically you put the PO through, whatever it takes on your side or through your business partner; we then assign the technical account manager, who will call you up, because you need to tell us: do you want it in a colo facility that you're working with, or do you want to put it on-prem? And then once we do that, we just schedule a time for your IT guys to do the install. So, probably two weeks. >> Yeah. >> It all depends, because you've got to call back and say, Eric, we'd like it at our colo partner; our colo partner is ABC, so we've got to call ABC and then get back to you. Or on-prem: we're going to have guys in the office on a good day when it's not going to be too busy, could you come two weeks from Thursday? Which now would be three weeks, for the sake of argument. But that would be, we interface with the customer, with the technical account manager, to do it on your schedule, on your time, whether you do it in your own facility or use a colo provider. >> Yeah, but once you tell, once I tell you, once we get through all that stuff, it's two weeks from when that's all agreed. >> Yeah. >> It's like the Xerox copier salesman, (Dave chuckles) where are you going to put it? Once you decide where you're going to put it, then it's a couple of weeks.
It's not a month or two months. >> Yeah, it's not. And if you need additional capacity, remember, there's a 50% margin sitting there. So if you need to go into the variable and use it, when you hit 75%, we actually track it with our Storage Insights Pro. So we'll call you up and say, Dave, you're at 76%, we'd like to add more storage to give you a better margin of extra storage, and you would say, great, when can we do it? So yeah, we're proactive about that, to make sure that you stay at that 50% margin. Again, our competitors all only have a 25% margin. So we're giving you that better margin, a larger margin, in case you really have a high capacity demand for that quarter, and we proactively will call you up if we think you need more, based on monitoring your storage usage. >> Great. Eric, got to go; thank you so much for taking us through that great detail, I really appreciate it. Always good to see you. >> Great, thanks Dave, really appreciate it. >> Alright, thank you for watching this CUBE conversation. This is Dave Vellante, and we'll see you next time. (soft music)
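The proactive monitoring Herzog closes with (ship a 50% overage buffer on top of the committed base, then call the customer once total installed capacity reaches 75% utilization) amounts to a threshold check. This is a minimal sketch with assumed names; Storage Insights Pro's real interface is not shown in the conversation.

```python
def utilization_check(base_tb, used_tb, overage_pct=0.5, call_at=0.75):
    """Return an action once usage crosses 75% of total installed capacity.

    Installed capacity = committed base plus the free overage buffer
    (50% of base, per the interview) that ships with the system up front.
    """
    total_tb = base_tb * (1 + overage_pct)
    utilization = used_tb / total_tb
    if utilization >= call_at:
        return (f"call customer: {utilization:.0%} of {total_tb:.0f} TB used, "
                f"add capacity to restore the 50% buffer")
    return f"ok: {utilization:.0%} used"

# 100 TB base ships with 150 TB installed; at 114 TB used (76%), call.
print(utilization_check(100, 114))
```

Note that the 75% trigger applies to the whole installed pool, so the customer still has roughly a quarter of their capacity free when the call happens, which is what keeps the quoted 50% buffer intact after the expansion.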

Published Date : Aug 19 2021

SUMMARY :

in the storage business, and you guys do a great job of the hard news, Eric? that's what you can do now of the offices and we've So one of the key things you need to do and you mentioned some and you also of course, could and either put it out to tape and so what you do is you So you manage that with our and how you help with threat detection. and then you need a strategy Eric, are you seeing any use case patterns and it helps you with early and why is it different from the others? So when you look at effective storage, is if you commit to more and and I commit to you to X and great. So in addition to what you get theCUBE and then they'd add in we would certainly entertain I interrupted you, and you just call your And then you do your contract, I mean, if you think about So I think there will be many, of the storage being as-a-service, the flexibility that you get. If you compare it to the, the additional storage that you need. if I want to start fresh will call you up because then get back to you Yeah, but once you Where are you going to put it? So if you need to go into you so much for taking us really appreciate it. Alright, thank you for

Ajay Singh, Pure Storage | CUBEconversation


 

(upbeat music) >> The Cloud essentially turned the data center into an API and ushered in the era of programmable infrastructure. No longer do we think about deploying infrastructure in rigid silos with a hardened outer shell; rather, infrastructure has to facilitate digital business strategies. And what this means is putting data at the core of your organization, irrespective of its physical location. It also means infrastructure generally and storage specifically must be accessed as sets of services that can be discovered, deployed, managed, secured, and governed in a DevOps model or OpsDev, if you prefer. Now, this has specific implications as to how vendor product strategies will evolve and how they'll meet modern data requirements. Welcome to this Cube conversation, everybody. This is Dave Vellante. And with me to discuss these sea changes is Ajay Singh, the Chief Product Officer of Pure Storage. Ajay, welcome. >> Thank you, David, glad to be on. >> Yeah, great to have you, so let's talk about your role at Pure. I think you're the first CPO, what's the vision there? >> That's right, I just joined Pure about eight months ago from VMware as the chief product officer, and you're right, I'm the first chief product officer at Pure. And at VMware I ran the Cloud management business unit, which was a lot about automation and infrastructure as code. And it's just great to join Pure, which has a phenomenal all flash product set. I kind of call it the iPhone of flash storage, super easy to use. And how do we take that same ease of use, which is at the heart of the Cloud operating principle, and how do we actually take it up to really deliver a modern data experience, which includes infrastructure and storage as code, but then even more beyond that, how do you do modern operations and then modern data services. So super excited to be at Pure.
And the vision, if you may, at the end of the day, is to provide, leveraging this modern data experience, a connected and effortless data experience which allows customers to ultimately focus on what matters for them, their business, by really leveraging, managing and winning with their data. Because ultimately data is the new oil, if you may, and if you can mine it, get insights from it, you can really drive a competitive edge in the digital transformation ahead, and that's what we intend to help our customers do. >> So you joined earlier this year, kind of, I guess, middle of the pandemic really. I'm interested in kind of your first 100 days, what that was like, what key milestones you set, and now you're into your second 100 plus days. How's that all going? What can you share with us? And that's interesting timing, because the effects of the pandemic, you came in kind of post that, so you had experience from VMware and then you had to apply that to the product organization. So tell us about that sort of first 100 days and the sort of mission now. >> Absolutely, so as we talked about the vision, around the modern data experience, it kind of has three components to it: modernizing the infrastructure, and really it's kudos to the team for the work we've been doing, a ton of work in modernizing the infrastructure, I'll briefly talk to that; then modernizing the data and, much more, modernizing the operations, I'll talk to that as well; and then of course, down the pike, modernizing data services. So if you think about it from modernizing the infrastructure, if you think about Pure for a minute, Pure is the first company that took flash to mainstream, essentially bringing what we call consumer simplicity to enterprise storage. The manual for the product fits on the front and back of a business card, that's it, you plug it in, boom, it's up and running, and then you get proactive AI driven support, right? So that was kind of the heart of Pure.
Now you think about Pure again, what's unique about Pure: a lot of our competition has dealt with flash at the SSD level, hey, because guess what? All this software was built for hard drives. And so if I can treat NAND as a solid state drive, an SSD, then my software would easily work on it. But with Pure, because we started with flash, we really went straight to the NAND level, as opposed to kind of the SSD layer, and what that does is it gives you greater efficiency, greater reliability and greater performance compared to an SSD, because you can optimize at the chip level as opposed to at the SSD module level. That's one big advantage that Pure has going for itself. And if you look at the physics in the industry for a minute, there's recent data put out by Wikibon early this year, effectively showing that by the year 2026, flash on a dollar per terabyte basis, just on the economics of the semiconductor versus the hard disk, is going to be cheaper than hard disk. So this big inflection point is slowly but surely coming that's going to disrupt the hard disk industry; already the high end has been taken over by flash, but hybrid is next and then even the long tail is coming up over there. And so to that extent, our lead, if you may, is the introduction of QLC NAND, which our competition is barely introducing; we've been at it for a while. We just recently this year, in my first 100 days, introduced the FlashArray//C C40 and C60 drives, which really start to open up our ability to go after the hybrid storage market in a big way. It opens up a big new market for us. So great work there by the team. Also at the heart of it, if you think about it on the NAND side, we have our FlashArray, which is a scale-up latency centric architecture, and FlashBlade, which is a scale-out throughput architecture, all operating with NAND. And what that does is it allows us to cover both structured data, unstructured data, tier one apps and tier two apps.
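The dollar-per-terabyte inflection Ajay cites is compound-decline arithmetic, and it can be sketched in a few lines. The starting prices and annual decline rates below are purely illustrative assumptions for the sketch, not figures from Wikibon or from the interview:

```python
# Illustrative sketch of a $/TB crossover projection. The starting prices
# (flash $80/TB, disk $25/TB) and annual declines (25% vs 5%) are assumed
# numbers, chosen only to show the shape of the curve.
def crossover_year(flash_price, disk_price, flash_decline, disk_decline, start_year=2021):
    """Return the first year flash $/TB drops below disk $/TB."""
    year = start_year
    while flash_price >= disk_price:
        flash_price *= (1 - flash_decline)
        disk_price *= (1 - disk_decline)
        year += 1
    return year

print(crossover_year(80.0, 25.0, 0.25, 0.05))  # → 2026 under these assumptions
```

Different assumed curves shift the year, which is why the projection reads as a "slowly but surely" inflection rather than a fixed date.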
So pretty broad data coverage in that journey to the all flash data center; slowly but surely we're heading over there to the all flash data center based on the demand economics that we just talked about, and we've done a bunch of releases. And then the team has done a bunch of things around introducing NVMe over fabrics, the kind of thing that you expect them to do. A lot of recognition in the industry for the team from the likes of TrustRadius and Gartner: FlashArray was named a Gartner Peer Insights customers' choice in primary storage, and in the MQ we were the leader. So a lot of kudos and recognition coming to the team as a result. FlashBlade just hit a billion dollars in cumulative revenue, kind of a leader by far in kind of the unstructured data, fast file and object marketplace. And then of course, all the work we're doing around what we call ESG, environmental, social and governance, around reducing carbon footprint, reducing waste, our whole notion of evergreen and non-disruptive upgrades. We also kind of did a lot of work in that, where we actually announced that over 2,700 customers have actually done non-disruptive upgrades over the technology. >> Yeah, a lot to unpack there. And a lot of this, sometimes people say, oh, it's the plumbing, but the plumbing is actually very important too. 'Cause we're in a major inflection point, when we went from spinning disk to NAND. And it's all about volumes, you're seeing this all over the industry now, you see your old boss, Pat Gelsinger, is dealing with this at Intel. And it's all about consumer volumes in my view anyway, because thanks to Steve Jobs, NAND volumes are enormous, and what, two hard disk drive makers left on the planet? I don't know, maybe there's two and a half. But so those volumes drive costs down. And so you're on that curve, and you can debate as to when it's going to happen, but it's not an if, it's a when. Let me shift gears a little bit.
Because Cloud, as I was saying, has ushered in this API economy, this as a service model, a lot of infrastructure companies have responded. How are you thinking at Pure about the as a service model for your customers? What's the strategy? How is it evolving and how does it differentiate from the competition? >> Absolutely, a great question. It kind of segues into the second part of the modern data experience, which is how do you modernize the operations? And that's where automation as a service comes in, because ultimately, the Cloud has validated the as a service model, right? People are looking for outcomes. They care less about how you get there. They just want the outcome. And the as a service model actually delivers these outcomes. And this whole notion of infrastructure as code is kind of the start of it. Imagine if my infrastructure for a developer is just a line of code, in a Git repository, in a program that goes through a CICD process and automatically kind of is configured and set up; it fits in with Terraform, Ansible, all the different automation frameworks. And so what we've done is we've gone down the path of really building out what I think is modern operations, with this ability to have storage as code. In addition, modern operations is not just storage as code; we've also recently introduced comprehensive ransomware protection, that's part of modern operations. There's all the threat you hear in the news around ransomware. We introduced what we call SafeMode snapshots that allow you to recover in literally seconds when you have a ransomware attack. We also have in modern operations Pure1, which is maybe the leader in AI driven support to prevent downtime. We actually call you 80% of the time and fix the problems without you knowing about it. That's what modern operations is all about.
And then also modern operations says, okay, you've got flash on your on-prem side, but maybe you're even using flash in the public Cloud; how can I have a seamless multi-Cloud experience? Our Cloud Block Store, which we've introduced on Amazon AWS and Azure, allows one to do that. And then finally, for modern applications, if you think about it, this whole notion of infrastructure as code, as a service, software driven storage, the Kubernetes infrastructure enables one to really deliver a great automation framework that reduces the labor required to manage the storage infrastructure and delivers it as code. And we have, kudos to Charlie and the Pure Storage team before my time with the acquisition of Portworx, Portworx today truly delivers storage as code, orchestrated entirely through Kubernetes, in a multi-Cloud hybrid situation. So it can run on EKS, GKE, OpenShift, Rancher, Tanzu; recently named the leader by GigaOm for enterprise Kubernetes storage. We were really proud about that asset. And then finally, the last piece is Pure as a Service. That's also all outcome oriented, SLAs. What matters is you sign up for SLAs, and then you get those SLAs, very different from our competition, right? Our competition tends to be a lot more around financial engineering, hey, you can buy it OPEX versus CapEx, but you get the same thing with a lot of professional services. We've really got, I'd say, a couple of years of lead on actually delivering and managing with SRE engineers for the SLAs. So a lot of great work there. We recently also introduced Cisco FlashStack, again, FlashStack as a service, again, as a service, a validation of that. And then finally, we also recently did an announcement with Equinix, with their bare metal as a service, where we are a key part of their bare metal as a service offering, again, pushing the kind of the as a service strategy.
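The storage-as-code idea described here, a volume request that lives in a Git repository and passes a CI gate before anything is provisioned, can be reduced to a minimal sketch. The spec fields, the policy rules, and the immutability flag below are assumptions for illustration, not Pure's actual API:

```python
from dataclasses import dataclass

@dataclass
class VolumeSpec:
    """Declarative volume request, as it might live in a Git repository."""
    name: str
    size_gb: int
    snapshot_schedule: str            # e.g. a cron expression for scheduled snapshots
    immutable_snapshots: bool = True  # SafeMode-style: snapshots cannot be deleted

def validate(spec):
    """CI gate: collect policy violations before the spec reaches production."""
    errors = []
    if spec.size_gb <= 0:
        errors.append("size_gb must be positive")
    if not spec.immutable_snapshots:
        errors.append("policy requires immutable snapshots for ransomware recovery")
    return errors

spec = VolumeSpec(name="orders-db", size_gb=512, snapshot_schedule="0 */4 * * *")
assert validate(spec) == []  # an empty error list lets the pipeline proceed
```

The point of the pattern is that the merge request, not a ticket queue, becomes the provisioning interface: review, policy checks, and rollback all come for free from the CI/CD machinery.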
So yes, it's big for us, that's where the puck is skating, with half the enterprises, even on prem, wanting to consume things in the Cloud operating model. And so that's where we're putting a lot. >> I see, so your contention is, it's not just this CapEx to OPEX shift; that's kind of what, during the economic downturn of 2007, 2008, the economic crisis, was the big thing for CFOs. So that's kind of yesterday's news. What you're saying is you're creating a Cloud-like operating model, as I was saying upfront, irrespective of physical location. And I see that as your challenge, the industry's challenge: if I'm going to effect the digital transformation, I don't want to deal with the Cloud primitives. I want you to hide the underlying complexity of that Cloud. I want to deal with higher level problems. But so that brings me to digital transformation, which is kind of the now initiative, or I even sometimes call it the mandate. There's not a one size fits all for digital transformation, but I'm interested in your thoughts on the must take steps, universal steps that everybody needs to think about in a digital transformation journey. >> Yeah, so ultimately the digital transformation is all about how companies can gain a competitive edge in this new digital world, where the companies and the competition are changing the game, right? So you want to make sure that you can rapidly try new things, fail fast, innovate and invest; speed is of the essence, and agility and the Cloud operating model enable that agility. And so what we're also doing is not only are we driving agility in a multicloud kind of data infrastructure, data operation fashion, but we're also taking it a step further. We are also on the journey to deliver modern data services.
Imagine on a Pure on-prem infrastructure, along with the different public Clouds that you're working on with the Kubernetes infrastructures, you could, with a few clicks, run Kafka as a service, TensorFlow as a service, Mongo as a service. So you as a technology team can truly become a service provider, and not just an on-prem service provider, but a multi-Cloud service provider. Such that these services can be used to analyze the data that you have, not only your data, but your partner data and third party public data, and you can marry those different data sets and analyze them to deliver new insights that ultimately give you a competitive edge in the digital transformation. So you can see data plays a big role there. The data is what generates those insights. Your ability to match that data with partner data, public data, your own data, with the analysis on it and services ready to go, is how you get to the insights. You can really start to separate yourself from your competition and get on the leaderboard a decade from now when this digital transformation settles down. >> All right, so bring us home, Ajay, summarize, what does a modern data strategy look like and how does it fit into a digital business or a digital organization? >> So look, at the end of the day, data and analysis both play a big role in the digital transformation. And it really comes down to how do I leverage this data, my data, partner data, public data, to really get that edge. And that links back to our vision: how do we provide that connected and effortless, modern data experience that allows our customers to focus on their business and get the edge in the digital transformation, by easily leveraging, managing and winning with their data. And that's the heart of where Pure is headed. >> Ajay Singh, thanks so much for coming inside theCUBE and sharing your vision. >> Thank you, Dave, it was a real pleasure. >> And thank you for watching this Cube conversation.
This is Dave Vellante and we'll see you next time. (upbeat music)

Published Date : Aug 18 2021


Michele Goetz, Forrester Research | Collibra Data Citizens '21


 

>> From around the globe, it's theCUBE, covering Data Citizens '21. Brought to you by Collibra. >> For the past decade organizations have been effecting very deliberate data strategies and investing quite heavily in people, processes and technology, specifically designed to gain insights from data, better serve customers, drive new revenue streams; we've heard this before. The results quite frankly have been mixed, as much of the effort is focused on analytics and technology designed to create a single version of the truth, which in many cases continues to be elusive. Moreover, the world of data is changing. Data is increasingly distributed, making collaboration and governance more challenging, especially where operational use cases are a priority. Hello, everyone. My name is Dave Vellante and you're watching theCUBE coverage of Data Citizens '21. And we're pleased to welcome Michele Goetz, who's the vice president and principal analyst at Forrester Research. Hello, Michele. Welcome to theCUBE. >> Hi, Dave. Thanks for having me today. >> It's our pleasure. So I want to start, you serve a wide range of roles including enterprise architects, CDOs, chief data officers that is, analysts, et cetera, and many data-related functions. And my first question is what are they thinking about today? What's on their minds, these data experts? >> So there's actually two things happening. One is what is the demand that's placed on data for our new intelligent digital systems. So we're seeing a lot of investment and interest in things like edge computing, and then how does that intersect with artificial intelligence to really run your business intelligently and drive new value propositions, to be both adaptive to the market as well as resilient to changes that are unforeseen. The second thing is then you create this massive complexity to managing the data, governing the data, orchestrating the data, because it's not just a centralized data warehouse environment anymore.
You have a highly diverse and distributed landscape that you both control internally, as well as taking advantage of third party information. So really what the struggle then becomes is how do you trust the data? How do you govern it, and secure, and protect that data? And then how do you ensure that it's hyper contextualized to the types of value propositions that our intelligence systems are going to serve? >> Well, I think you're hitting on the key issues here. I mean, you're right. The data, as I sort of referred to as well, is sort of out there, it's distributed at the edge. But generally our data organizations are actually quite centralized. And as well, you talk about the need to trust the data, obviously that's crucial. But are you seeing the organization change? I know you're talking about this to clients, your discussion about collaboration. How are you seeing that change? >> Yeah, so as you have to bring data into the context of the insights that you're trying to get, or the intelligence that's automating and scaling out the value streams and outcomes within your business, we're actually seeing a federated model emerge in organizations. So while there's still a centralized data management and data services organization, led by typical enterprise architects for data, a data engineering team that's managing warehouses and data lakes, they're creating this great platform to access and orchestrate information. But we're also seeing data, and analytics, and governance teams come together under chief data officers or chief data and analytics officers. And this is really where the insights are being generated from either BI and analytics or from data science itself, with dedicated data engineers and stewards that are helping to access and prepare data for analytic efforts. And then lastly, this is the really interesting part, when you push data into the edge the goal is that you're actually driving an experience and an application.
And so in that case we are seeing data engineering teams starting to be incorporated into the solutions teams that are aligned to lines of business or divisions themselves. And so really what's happening is if there is a solution consultant who is also overseeing value-based portfolio management when you need to instrument the data to these new use cases and keep up with the pace of the business it's this engineering team that is part of the DevOps work bench to execute on that. So really the balances we need the core, we need to get to the insights and build our models for AI. And then the next piece is how do you activate all that? And there's a team over there to help. So it's really spreading the wealth and expertise where it needs to go. >> Yeah, I love that. You took a couple of things that really resonated with me. You talked about context a couple of times and this notion of a federated model, because historically the sort of big data architecture, the team, they didn't have the context, the business context, and my inference is that's changing and I think that's critical. Your talk at Data Citizens is called how obsessive collaboration fuels scalable DataOps. You talk about the data, the DevOps team. What's the premise you put forth to the audience? >> So the point about obsessive collaboration is sort of taking the hubris out of your expertise on the data. Certainly there's a recognition by data professionals that the business understands and owns their data. They know the semantics, they know the context of it and just receiving the requirements on that was assumed to be okay. And then you could provide a data foundation, whether it's just a lake or whether you have a warehouse environment where you're pulling for your analytics. The reality is that as we move into more of AI machine learning type of model, one, more context is necessary. 
And you're kind of balancing between what are the things that you can ascribe to the data globally which is what data engineers can support. And then there's what is unique about the data and the context of the data that is related to the business value and outcome as well as the feature engineering that is being done on the machine learning models. So there has to be a really tight link and collaboration between the data engineers, the data scientists, and analysts, and the business stakeholders themselves. You see a lot of pods starting up that way to build the intelligence within the system. And then lastly, what do you do with that model? What do you do with that data? What do you do with that insight? You now have to shift your collaboration over to the work bench that is going to pull all these components together to create the experiences and the automation that you're looking for. And that requires a different collaboration model around software development. And still incorporating the business expertise from those stakeholders, so that you're satisfying, not only the quality of the code to run the solution, but the quality towards the outcome that meets the expectation and the time to value that your stakeholders have. So data teams aren't just sitting in the basement or in another part of the organization and digitally disconnected anymore. You're finding that they're having to work much more closely and side by side with their colleagues and stakeholders. >> I think it's clear that you understand this space really well. Hubris out context in, I mean, that's kind of what's been lacking. And I'm glad you said you used the word anymore because I think it's a recognition that that's kind of what it was. They were down in the basement or out in some kind of silo. And I think, and I want to ask you this. 
I come back to organization because I think a lot of organizations think the most cost effective way to serve the business is to have a single data team with hyper specialized roles. That'll be the cheapest way, the most efficient way that we can serve them. And meanwhile, the business, which as you pointed out has the context, is frustrated. They can't get to the data. So this notion of a federated governance model is actually quite interesting. Are you seeing actual common use cases where this is being operationalized? >> Absolutely, I think the first place that you were seeing it was within the operational technology use cases, the use cases around a lot of the manufacturing and industrial devices. Any sort of IoT based use case really recognized that without applying data and intelligence to whatever process was going to be executed, it was really going to be challenging to know that you're creating the right foundation, meeting the SLA requirements, and then ultimately bringing the right quality and integrity to the data, let alone any sort of data protection and regulatory compliance that is necessary. So you already started seeing the solution teams coming together with the data engineers, the solution developers, the analysts, and data scientists, and the business stakeholders to drive that. But that is starting to come back down into more of the IT mindset as well. And so DataOps starts to emerge from that paradigm into more of the corporate types of use cases, and sort of parallels that, because there are customer experience use cases that have an IoT or edge component to them too. We live on our smart phones, we live on our smart watches, we've got our laptops. All of us have been put into virtual collaboration. And so we really need to take into account not just the insight of analytics but how do you feed that forward.
And so this is really where you're seeing sort of the evolution of DataOps as a competency, not only to engineer the data and collaborate, but to ensure that there's sort of an activation and alignment where the value is going to come out, while still being trusted and governed. >> I got kind of a weird question, but I'm going to ask it. I was talking to somebody in Israel the other day and they told me masks are off, the economy's booming. And he noted that Israel said, hey, we're going to pay up for the price of a vaccine. The cost per dose was about 28 bucks or whatever it was. And he pointed out that the EU haggled big time and they don't want to pay $19. And as a result they're not as far along. Israel understood that the real value was opening up the economy. And so there's an analogy here which I want to bring back to organization, and it relates to DataOps: if the real metric is, hey, I have an idea for a data product, how long does it take to go from idea to monetization? That seems to me to be a better KPI than how much storage I have, or how many petabytes I'm managing. So my question is, and it relates to DataOps, can that DataOps, should that DataOps individual, and maybe even the data engineer, live inside of the business? And is that even feasible technically with this notion of federated governance? Are you seeing that, and maybe talk a little bit more about this DataOps role. Is it... >> Yeah. >> Fungible. >> Yeah, it's definitely fungible. And in fact, when I talked about sort of those three units, there's your core enterprise data services, there's your BI and data, and then there's your line of business. All of those, the engineering and the ops is the DataOps, which is living in all of those environments and being as close as possible to where the value proposition is being defined and designed. So absolutely being able to federate that.
And I think the other piece on DataOps that is really important is recognizing how the practices around continuous integration and continuous deployment, using agile methodologies, are really reshaping a lot of the waterfall approaches that were done before, where data was lagging 12 to 18 months behind any sort of insights. A lot of the platforms today assume that you're moving into a standard, mature software development life cycle, and you can start seeing returns on investment within a quarter, really, so that you can iterate and then speed that up so that you're delivering new value every two weeks. But it does change the mindset. This DataOps team, aligned to solution development, aligned to a broader portfolio management of business capabilities and outcomes, needs to understand how to appropriately scope the data products that they're delivering to incremental value-based milestones, so the business feels that they're getting improvements over time and not just waiting. So there's an MVP, you move forward on that and optimize, optimize, extend, scale. So again, that CICD mindset is helping to not bottleneck and wait for the complete field of dreams to come from your data and your insights.
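The CI/CD mindset Michele describes typically shows up as automated gates in the data pipeline: a batch that fails a quality check never reaches the next milestone. A minimal sketch of such a gate, where the field names and the 5% null threshold are illustrative assumptions:

```python
# A minimal data-quality gate of the kind a DataOps pipeline might run on
# every batch before promoting it; fields and threshold are assumed examples.
def quality_report(rows, required_fields, max_null_rate=0.05):
    """Return per-field null rates and whether the batch passes the gate."""
    counts = {f: 0 for f in required_fields}
    for row in rows:
        for f in required_fields:
            if row.get(f) is None:
                counts[f] += 1
    n = max(len(rows), 1)
    rates = {f: counts[f] / n for f in required_fields}
    return {"null_rates": rates, "passed": all(r <= max_null_rate for r in rates.values())}

batch = [{"order_id": 1, "amount": 9.5}, {"order_id": 2, "amount": None}]
report = quality_report(batch, ["order_id", "amount"])
# half the 'amount' values are null, so this batch fails the 5% gate
```

Wired into CI, a failing report blocks the merge the same way a failing unit test would, which is what turns data quality from an after-the-fact audit into an incremental, short-cadence deliverable.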
But if you're working from different processes, collaborating is really challenging. And I think the one thing that's really come out of this move to machine learning and AI is recognizing that you need processes that reinforce collaboration. So that's number one. So you see agile development and CI/CD not just for DataOps, not just for DevOps, but also encouraging and propelling these projects and iterations for the data science teams as well, or even for machine learning engineers where they're incorporated. And then certainly the business stakeholders are inserted within there as appropriate to accept what it is that is going to be developed. So process is number one. And number two is: what is the platform that's going to reinforce those processes and collaboration? And it's really about what's being shared, and how do you share. So certainly what we're seeing within the platforms themselves is everybody contributing into some sort of a library where their components and products are ascribed to, and that's able to help different teams grab those components and build out what those solutions are going to be. And in fact, what gets really cool about that is you don't always need hardcore data scientists anymore, as you have this social platform for data product and analytic product development. This is where a lot of the AutoML begins, because those who are less data-science-oriented but can build an insight pipeline can grab all the different components, from the pipelines, to the transformations, to capture mechanisms, to bolting into the model itself, and allow that to be delivered to the application. So it's really about balancing between process and platforms that enable and encourage, and almost force you to collaborate and manage through sharing. >> Thank you for that. I want to ask you about the role of data governance. You've mentioned trust, and that's data quality, and you've got teams, and specialists, focused on data quality.
There's the data catalog. Here's my question. You mentioned edge a couple of times, and I can see a lot of that. I mean, today most AI, or at least where a lot of the value is, I would say most of it, is modeling. And in the future, you mentioned edge, it's going to be a lot of inferencing in real time, and people maybe are not going to have the time to be involved in that decision. So what are you seeing in terms of data governance? We talked about federated governance, this notion of a data catalog, and maybe automating data quality without necessarily having it be so labor-intensive. What are you seeing as the trends there? >> Yeah, so I think our new environment, our new normal, is that you have to be composable, interoperable, and portable. Portability is really the key here. So from a cataloging and governance perspective, we would bring everything together into our catalogs and business glossaries, and it would be a reference point; it was like a massive wiki. Well, that's wonderful, but why just house it in a museum? You really want to activate that. And I think what's interesting about the technologies today for governance is that you can turn those rules, and business logic, and policies into services that are composable components, and bring those into the solutions that you're defining. And in that way, what happens is that creates portability: you can drive them wherever they need to go. But from the composability and the interoperability portion of that, you can put those services in the right place at the right time for what you need for an outcome, so that you start to become behaviorally driven on executing on governance, rather than trying to write all of the governance down into transformations and controls where the data lives. You can have quality, and observability of that quality and performance, right at the edge, in the context of behavior and use of that solution.
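The "governance rules as composable services" idea described above can be sketched in a few lines of code. This is a hypothetical illustration only, not any vendor's API: each quality or policy rule is a plain function, rules compose into one service, and that same composed service could in principle run at the edge, on a gateway, or during synchronization into the lake.

```python
# Hypothetical sketch: governance and quality rules as composable services.
# None of these names come from a real product; they only illustrate the idea.

def not_null(field):
    """Rule factory: the field must be present and non-empty."""
    return lambda record: bool(record.get(field))

def in_range(field, lo, hi):
    """Rule factory: a numeric field must fall within [lo, hi]."""
    return lambda record: lo <= record.get(field, lo - 1) <= hi

def compose(*rules):
    """Bundle rules into one portable service that keeps passing records."""
    def service(records):
        return [r for r in records if all(rule(r) for rule in rules)]
    return service

# The same composed service is portable: edge, gateway, or data lake sync.
crm_quality = compose(not_null("customer_id"), in_range("age", 0, 120))

records = [
    {"customer_id": "c1", "age": 34},
    {"customer_id": "", "age": 40},     # fails not_null
    {"customer_id": "c3", "age": 150},  # fails in_range
]
clean = crm_quality(records)
```

Because the rules are values rather than logic baked into one pipeline, the composition can be shipped to whichever execution point needs it, which is the portability point being made here.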
You can run those services and governance on gateways that are managing and routing information in those edge solutions; and when synchronization between the edge and the cloud comes up, if it's appropriate, you can run those services during synchronization of the data back into the data lake. So there's a lot more flexibility and elasticity in today's modern approaches to cataloging, and glossaries, and governance of data than we had before. And that goes back to what we talked about earlier: this is the new wave of DataOps. This is how you bring data products to fruition now. Everything is about activation. >> So how do you see the future of DataOps? I mean, I've kind of been pushing you toward a more decentralized model where the business has more control, 'cause the business has the context. I mean, I feel as though, hey, we've done a great job of contextualizing our operational systems. The sales team, they know when the data is crap within my CRM. But our data systems are context-agnostic, generally. And you obviously understand that problem well. So how do you see the future of DataOps? >> So I think what's kind of interesting about that is we're going to go to governance on read versus governance on write, more so. What do I mean by that? From a business perspective there's two sides of it. There's ensuring that governance is run, as we talked about before, executing at the appropriate place at the appropriate time. It's semantically and domain-centrically driven, not logical and systems-centric. So that's number one. Number two is also recognizing that business owners, or business operations, actually play a role in this, because as you're working within your CRM systems, like a Salesforce, for example, you're using an iPaaS like MuleSoft to connect to other applications, connect to other data sources, connect to other analytics sources.
And what's happening there is that the data is being modeled and personalized to whatever view, insight, or task has to happen within those processes. So even CRM environments, which we think of as the traditional technologies we're used to, are getting a lift, both in terms of intelligence from the data, but also in your flexibility in how you execute governance and quality services within that environment. And that actually opens up the data foundations a lot more, and keeps you from having to do a lot of moving, copying, and centralizing of data and creating an over-weighted business application, both in terms of the data foundation, and in terms of the types of business services, and status updates, and processes that happen in the application itself. You're drawing those tasks back down to where they should be, and where performance can be managed, rather than trying to over-customize your application environment. And that gives you a lot more flexibility later, too, for any sort of upgrades or migrations that you want to make, because all of the logic is contained back down in a service layer instead. >> Great perspectives, Michelle. You obviously know your stuff, and it's been a pleasure having you on. My last question is, when you look out there, is there anything that really excites you, or any specific research that you're working on that you want to share, that you're super pumped about? >> I think there's two things. One is, it's truly incredible the amount of insight and growth that is coming through data profiling and observation: really understanding and contextualizing data anomalies, so that you understand whether data is helping or hurting the business value, and tying it very specifically to processes and metrics, which is fantastic, as well as models themselves, really understanding how data inputs and outputs are making a difference in whether the model performs or not.
And then I think the second thing is really the emergence of more active data, active insights. And, as we talked about before, your ability to package up services, for governance and quality in particular, that allow you to scale your data out towards the edge, or wherever it's needed, and to do so not just so that you can run analytics, but so that you're also driving overall processes and value. So the research around the operationalization and activation of data is really exciting. And looking at the networks and service mesh to bring those things together is kind of where I'm focusing right now, because what's the point of having data in a database if it's not providing any value? >> Michele Goetz, Forrester Research, thanks so much for coming on theCUBE. Really awesome perspectives. You're in an exciting space, so appreciate your time. >> Absolutely, thank you. >> And thank you for watching Data Citizens '21 on theCUBE. My name is Dave Vellante. (upbeat music)

Published Date : Jun 17 2021



LIVE Panel: "Easy CI With Docker"


 

>>Hey, welcome to the live panel. My name is Brett. I am your host, and indeed we are live. In fact, if you're curious about that, if you don't believe us, let's just show a little bit of the browser real quick to see. Yup. There you go. We're live. So, all right. So how this is going to work is I'm going to bring in some guests in one second, and we're going to basically take your questions on the topic du jour, that is, continuous integration and testing. Thank you so much to my guests for coming onto the panel. I've got Carlos, Nico and Mandy. Hello everyone. >>Hello? All right, >>Let's go. Let's go around the room and all pretend we don't know each other and that the internet didn't read below the video who we are. Hi, my name is Brett. I am a Docker captain, which means I'm supposed to know something about Docker. I'm streaming here from Virginia Beach, Virginia, and I make videos on the internet and courses on Udemy. Carlos? >>Hey, what's up? I'm Carlos Nunez. I am a solutions architect at VMware. I do solution things with computers. It's fun. I live in Dallas, and I'm moving to Houston in a month, which is where I'm currently streaming. I've been all over the Northeast this whole week, so it's been fun, and I'm excited to meet with all of you and talk about CI and Docker. Sure. >>Yeah. Hey everyone. Nico Khobar here. I'm a solution engineer at HashiCorp. I am streaming to you from the beautiful Austin, Texas. Ignore the Golden Gate Bridge here; this is from my old apartment in San Francisco, just keeping that to remember all the good days when I lived there. But anyway, I work at HashiCorp, and I work on all things automation, cloud, and DevOps. I'm excited to be here. And Mandy? >>Hi. Yeah, Mandy Hubbard. I am streaming from Austin, Texas. I am currently a DX engineer at ShipEngine.
I've worked in QA, and that's kind of where I got my Docker experience, and I moved into DX to try and help developers better understand and use our products and be an advocate for them. >>Nice. Well, thank you all for joining me. I really appreciate you taking the time out of your busy schedules to be here. And for those of you in chat, the reason we're doing this live is because it's always harder to do things live. The reason we're here is to answer your questions, so we didn't come with a bunch of slides and demos or anything like that. We're here to talk amongst ourselves about ideas, and really we're here for you. This is obviously about easy CI, so we're going to try to keep the conversation around testing and continuous integration and all the things that that entails with containers. But we may go down rabbit holes, we may veer off and start talking about other things, and that's totally fine if it's in the realm of DevOps and containers and developer and ops workflows. Hey, it's fair game. >>And these people have a wide variety of expertise. They haven't done just testing, right? We live in a world where you all kind of have to wear many hats. So feel free to ask what you think is on the top of your mind, and we'll do our best to answer. It might not be the best answer or the correct answer, but we're going to do our best. Well, let's get started. Let's get a couple of topics to start off with. "Easy CI" was one of my ideas, because one of the things I'm most excited about is the innovation we're seeing around easier testing, faster testing, automated testing. Because as much as we've all been doing this stuff for 15 or 20 years, since the sort of Jenkins early days, it seems like it's still really hard and it's still a lot of work. >>So let's go around the room real quick, and everybody can just kind of talk for a minute about your experience with testing and maybe some of your pain points, like what you don't like about our testing world. And we can talk about some pains, because I think that will lead us to talk about what we're seeing now that might be better ideas about how to do this. I know for me, with testing, obviously there's the code part, and just getting it automated, but mostly it's getting it in the hands of developers so that they can control their own testing, and don't have to go talk to a person to run that test again, or to the mysterious Jenkins platform somewhere. I keep mentioning Jenkins because it is still the dominant player out there. I don't like it when I'm walking into a room and there's only one or two people that know how the testing works, or know how to make the new tests go into the testing platform, and stuff like that. So I'm always trying to free those things up so that all of the developers are enabled and empowered to do that stuff. So, someone else? Carlos? Anybody? >>Oh, I have a lot of opinions on that, having been a QA engineer for most of my career. The shift that we're seeing is everyone is DevOps and everyone is QA. The issue I see is that no one asked developers if they wanted to be QA. And so, being the former QA on the team, when there's a problem, even though I'm a developer and we're all doing QA, they always tend to come to one of the former QA engineers. And they're not really owning that responsibility and digging in. So that's kind of what I'm seeing: we're all expected to test now, and some people don't know how. For me it was kind of an intuitive skill.
It just kind of fit with my personality. But not knowing what to look for, not knowing what to automate, not even understanding how your API endpoints are used by your front end so you know what to test when a change is made: it's really overwhelming for developers. And we're going to need to streamline that, and hold their hands a little bit, until they get their feet wet with also being QA. >>Right. Right. So, Carlos? >>Yeah, testing is one of my favorite subjects to talk about when I'm pairing with developers. And a lot of it is because of what Mandy said, right? A lot of developers used to write a test and say, hey, QA, go; I wrote my unit tests, now write the rest of the tests. Now developers are expected to be able to understand how testing methodologies work in their local environments, right? They're supposed to understand how to write an integration test, an end-to-end test, a component test, and of course how to write unit tests that aren't just, you know, assert true is true, right? More comprehensive, more high-touch unit tests, which include things like mocking and stubbing and spying and all that stuff. And, you know, it's not so much getting those tests written; I've had a lot of challenges with developers getting those tests to run in Docker, usually because of dependency hell. But getting developers to understand how to write tests that matter and mean something, that can be difficult. It's also where I find a lot of the enjoyment of my work comes into play. So yeah. I mean, that's the difficulty I've seen around testing. Big subject, though. Lots to talk about there. >>Yeah. We've already got so many questions coming in; you've already got an hour's worth of stuff. So, Nico, your thoughts on that? >>Yeah, I definitely agree with the other folks here on the panel about the shift from a skillset perspective that's needed to adopt the new technologies. But aside from the organizational changes and the key responsibilities that developers have to adapt to and inherit now, there's also a technical side: more developers are owning the full stack, including the infrastructure piece. So that adds a lot more to the plate, in terms of also testing that component that they were not even responsible for before. And the second challenge that I'm seeing is the long list of added tooling. There's a new tool every other day, and that requires more customization of the testing that each individual team, and by extension each individual developer, has to learn. So the customization, as well as the scope, which now encompasses the infrastructure piece, both add to the challenges that we're seeing right now for CI and overall testing that developers are facing in the market today. >>Yeah. We've got a lot of questions about all the different parts of this, so let me just go straight to them, because that's why we're here: for the people. A lot of people are asking about your favorite tools, and this is one of the challenges with integration, right? There are dominant players, but there is such a variety. I mean, every one of my customers seems like they're using a different workflow and a different set of tools. And hey, we're all here to just talk about what we're using, you know, your favorite tools. So a lot of the repeated questions are: what are your favorite tools?
Like if you could create it from scratch, uh, what would you use? Pierre's asking, you know, GitHub actions sounds like they're a fan of GitHub actions, uh, w you know, mentioning, pushing the ECR and Docker hub and, uh, using vs code pipeline, I guess there may be talking about Azure pipelines. Um, what, what's your preferred way? So, does anyone have any, uh, thoughts on that anyone want to throw out there? Their preferred pipeline of tooling? >>Well, I have to throw out mine. I might as Jenkins, um, like kind of a honorary cloud be at this point, having spoken a couple of times there, um, all of the plugins just make the functionality. I don't love the UI, but I love that it's been around so long. It has so much community support, and there are so many plugins so that if you want to do something, you don't have to write the code it's already been tested. Um, unfortunately I haven't been able to use Jenkins in, uh, since I joined ship engine, we, most of our, um, our, our monolithic core application is, is team city. It's a dotnet application and TeamCity plays really well with.net. Um, didn't love it, uh, Ms. Jenkins. And I'm just, we're just starting some new initiatives that are using GitHub actions, and I'm really excited to learn, to learn those. I think they have a lot of the same functionality that you're looking for, but, um, much more simplified in is right there and get hubs. So, um, the integration is a lot more seamless, but I do have to go on record that my favorite CICT tools Jenkins. >>All right. You heard it here first people. All right. Anyone else? You're muted? I'm muted. Carlin says muted. Oh, Carla says, guest has muted themselves to Carlos. You got to unmute. >>Yes. I did mute myself because I was typing a lot, trying to, you know, try to answer stuff in the chat. And there's a lot of really dark stuff in there. That's okay. Two more times today. So yeah, it's fine. Yeah, no problem. So totally. And it's the best way to start a play more. 
So I'm just going to go ahead and light it up. Um, for enterprise environments, I actually am a huge fan of Jenkins. Um, it's a tool that people really understand. Um, it has stood the test of time, right? I mean, people were using Hudson, but 15 years ago, maybe longer. And, you know, the way it works, hasn't really changed very much. I mean, Jenkins X is a little different, but, um, the UI and the way it works internally is pretty familiar to a lot of enterprise environments, which is great. >>And also in me, the plugin ecosystem is amazing. There's so many plugins for everything, and you can make your own if you know, Java groovy. I'm sure there's a perfect Kotlin in there, but I haven't tried myself, but it's really great. It's also really easy to write, um, CIS code, which is something I'm a big fan of. So Jenkins files have been, have worked really well for me. I, I know that I can get a little bit more complex as you start to build your own models and such, but, you know, for enterprise enterprise CIO CD, if you want, especially if you want to roll your own or own it yourself, um, Jenkins is the bellwether and for very good reason now for my personal projects. And I see a lot on the chat here, I think y'all, y'all been agreed with me get hub actions 100%, my favorite tool right now. >>Um, I love GitHub actions. It's, it's customizable, it's modular. There's a lot of plugins already. I started using getting that back maybe a week after when GA and there was no documentation or anything. And I still, it was still my favorite CIA tool even then. Um, and you know, the API is really great. There's a lot to love about GitHub actions and, um, and I, and I use it as much as I can from my personal project. So I still have a soft spot for Travis CAI. Um, you know, they got acquired and they're a little different now trying to see, I, I can't, I can't let it go. I just love it. But, um, yeah, I mean, when it comes to Seattle, those are my tools. 
So light me up in the comments I will respond. Yeah. >>I mean, I, I feel with you on the Travis, the, I think, cause I think that was my first time experiencing, you know, early days get hub open source and like a free CIA tool that I could describe. I think it was the ammo back then. I don't actually remember, but yeah, it was kind of an exciting time from my experience. There was like, oh, this is, this is just there as a service. And I could just use it. It doesn't, it's like get hub it's free from my open source stuff. And so it does have a soft spot in my heart too. So yeah. >>All right. We've got questions around, um, cam, so I'm going to ask some questions. We don't have to have these answers because sometimes they're going to be specific, but I want to call them out because people in chat may have missed that question. And there's probably, you know, that we have smart people in chat too. So there's probably someone that knows the answer to these things. If, if it's not us, um, they're asking about building Docker images in Kubernetes, which to me is always a sore spot because it's Kubernetes does not build images by default. It's not meant for that out of the gate. And, uh, what is the best way to do this without having to use privileged containers, which privileged containers just implying that yeah, you, you, it probably has more privileges than by default as a container in Kubernetes. And that is a hard thing because, uh, I don't, I think Docker doesn't lie to do that out of the gate. So I don't know if anyone has an immediate answer to that. That's a pretty technical one, but if you, if you know the answer to that in chat, call it out. >>Um, >>I had done this, uh, but I'm pretty sure I had to use a privileged, um, container and install the Docker Damon on the Kubernetes cluster. And I CA I can't give you a better solution. Um, I've done the same. 
So, >>Yeah, uh, Chavonne asks, um, back to the Jenkins thing, what's the easiest way to integrate Docker into a Jenkins CICB pipeline. And that's one of the challenges I find with Jenkins because I don't claim to be the expert on Jenkins. Is there are so many plugins because of this, of this such a huge ecosystem. Um, when you go searching for Docker, there's a lot that comes back, right. So I, I don't actually have a preferred way because every team I find uses it differently. Um, I don't know, is there a, do you know if there's a Jenkins preferred, a default plugin? I don't even know for Docker. Oh, go ahead. Yeah. Sorry for Docker. And jacon sorry, Docker plugins for Jenkins. Uh, as someone's asking like the preferred or easy way to do that. Um, and I don't, I don't know the back into Jenkins that well, so, >>Well, th the new, the new way that they're doing, uh, Docker builds with the pipeline, which is more declarative versus the groovy. It's really simple, and their documentation is really good. They, um, they make it really easy to say, run this in this image. So you can pull down, you know, public images and add your own layers. Um, so I don't know the name of that plugin, uh, but I can certainly take a minute after this session and going and get that. Um, but if you really are overwhelmed by the plugins, you can just write your, you know, your shell command in Jenkins. You could just by, you know, doing everything in bash, calling the Docker, um, Damon directly, and then getting it working just to see that end to end, and then start browsing for plugins to see if you even want to use those. >>The plugins will allow more integration from end to end. Some of the things that you input might be available later on in the process for having to manage that yourself. But, you know, you don't have to use any of the plugins. You can literally just, you know, do a block where you write your shell command and get it working, and then decide if, for plugins for you. 
Um, I think it's always under important to understand what is going on under the hood before you, before you adopt the magic of a plugin, because, um, once you have a problem, if you're, if it's all a lockbox to you, it's going to be more difficult to troubleshoot. It's kind of like learning, get command line versus like get cracking or something. Once, once you get in a bind, if you don't understand the underlying steps, it's really hard to get yourself out of a bind, versus if you understand what the plugin or the app is doing, then, um, you can get out of situations a lot easier. That's a good place. That's, that's where I'd start. >>Yeah. Thank you. Um, Camden asks better to build test environment images, every commit in CII. So this is like one of those opinions of we're all gonna have some different, uh, or build on build images on every commit, leveraging the cash, or build them once outside the test pile pipeline. Um, what say you people? >>Uh, well, I I've seen both and generally speaking, my preference is, um, I guess the ant, the it's a consultant answer, right? I think it depends on what you're trying to do, right. So if you have a lot of small changes that are being made and you're creating images for each of those commits, you're going to have a lot of images in your, in your registry, right? And on top of that, if you're building those images, uh, through CAI frequently, if you're using Docker hub or something like that, you might run into rate limiting issues because of Docker's new rate, limiting, uh, rate limits that they put in place. Um, but that might be beneficial if the, if being able to roll back between those small changes while you're testing is important to you. Uh, however, if all you care about is being able to use Docker images, um, or being able to correlate versions to your Docker images, or if you're the type of team that doesn't even use him, uh, does he even use, uh, virgins in your image tags? 
Then I would think that that might be a little, much you might want to just have in your CIO. You might want to have a stage that builds your Docker images and Docker image and pushes it into your registry, being done first particular branches instead of having to be done on every commit regardless of branch. But again, it really depends on the team. It really depends on what you're building. It really depends on your workflow. It can depend on a number of things like a curse sometimes too. Yeah. Yeah. >>Once had two points here, you know, I've seen, you know, the pattern has been at every, with every, uh, uh, commit, assuming that you have the right set of tests that would kind of, uh, you would benefit from actually seeing, um, the, the, the, the testing workflow go through and can detect any issue within, within the build or whatever you're trying to test against. But if you're just a building without the appropriate set of tests, then you're just basically consuming almond, adding time, as well as all the, the image, uh, stories associated with it without treaty reaping the benefit of, of, of this pattern. Uh, and the second point is, again, I think if you're, if you're going to end up doing a per commit, uh, definitely recommend having some type of, uh, uh, image purging, um, uh, and, and, and garbage collection process to ensure that you're not just wasting, um, all the stories needed and also, um, uh, optimizing your, your bill process, because that will end up being the most time-consuming, um, um, you know, within, within your pipeline. So this is my 2 cents on this. >>Yeah, that's good stuff. I mean, those are both of those are conversations that could lead us into the rabbit hole for the rest of the day on storage management, uh, you know, CP CPU minutes for, uh, you know, your build stuff. I mean, if you're in any size team, more than one or two people, you immediately run into headaches with cost of CIA, because we have now the problem of tools, right? 
We have so many tools. We could have the CI system burning CPU cycles all day, every day, if we really wanted to. And so, very quickly, especially if you're building on every commit on every branch, that gets you into a world of cost mitigation. You'll probably have to settle somewhere in the middle, between the budget people saying you're spending way too much money on the CI platform because of all these CPU cycles, and the developers who would love to have everything now, as fast as possible, with the biggest CPUs and the biggest servers, because the builds can never go fast enough, right?
>>There's no end to optimizing your build workflow. We have another question, on another topic we'll all probably have different takes on: version tags. On images, that is. We have a very established workflow in Git for how we make commits: we have commit SHAs, we have Git tags, all these things are there. And then we go into images, and it's this whole new world that's opened up, with no real consensus. So what are your thoughts on an image-tagging strategy for teams? Again, another culture thing. Mandy?
>>I'm a fan of CalVer when we have no other option. It's just clean, and I like that with the timestamp you know exactly when it was built. I don't really see much reason to use ordinary incremental numbering, and I love the fact that you can pull any tag and know exactly when it was created. So I'm a big fan of CalVer, if you can make that work for your organization.
>>Yep. People are mentioning that in chat.
>>And I like SemVer as well. I'm a big fan of it.
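As a concrete sketch of that timestamp idea, here is one hypothetical way to mint a tag that records both when the image was built and which commit produced it. The format is an illustration of the pattern, not a standard.

```shell
# Hypothetical tag scheme: UTC build time plus short commit SHA.
# Falls back to "nogit" when run outside a Git repository.
short_sha=$(git rev-parse --short HEAD 2>/dev/null || echo nogit)
stamp=$(date -u +%Y%m%d-%H%M%S)
tag="${stamp}-${short_sha}"
echo "$tag"
```

You could then tag a build with something like `docker build -t myapp:$tag .`, where `myapp` is a placeholder image name.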
I think it makes it easy to signify what's a major change versus a minor change versus just a hotfix or a bug fix. The problem I've found with having teams adopt SemVer becomes answering those questions and being able to really define what is a major change, what is a minor change, what is a patch, right? And this becomes a bit of an overhead, or not so much an overhead, but a large concern for teams who have never done versioning before, or have never been responsible for their own versioning. In fact, I'm running into that right now with a client I'm working with, where I'm working with a lot of teams, helping them move their applications from a legacy production environment into a new one.
>>And in doing so, versioning comes up, because Docker images have tags, and usually the tags correlate to versions. But some of the teams I'm working with are only maintaining a script, and others are maintaining a fully fledged three-tier application with lots of dependencies. So telling the team that maintains a script, hey, you should use SemVer and start thinking about what's major, what's minor, what's a patch: that might be a lot for them. For a team like that, I might just suggest using commit SHAs as your versions until you figure it out, or maybe using dates as your version. But the team with the larger application probably already knows the answers to those questions, in which case they're either already using SemVer, or they're using some other versioning strategy and SemVer might suit them better. So you're going to hear me say it a lot, and I'm just going to say it here: it depends. Because it really does.
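To make the major/minor/patch distinction concrete, here's a toy SemVer bump helper. It assumes plain MAJOR.MINOR.PATCH strings; real SemVer also allows pre-release and build-metadata suffixes, which this sketch ignores.

```shell
# Toy SemVer bump: handles only bare MAJOR.MINOR.PATCH strings.
bump() {
  IFS=. read -r major minor patch <<< "$1"
  case "$2" in
    major) echo "$((major + 1)).0.0" ;;                # breaking change
    minor) echo "${major}.$((minor + 1)).0" ;;         # backwards-compatible feature
    patch) echo "${major}.${minor}.$((patch + 1))" ;;  # bug fix
  esac
}
bump 1.4.2 major   # -> 2.0.0
bump 1.4.2 minor   # -> 1.5.0
bump 1.4.2 patch   # -> 1.4.3
```

The mechanics are trivial; as the discussion here shows, the hard part is deciding which bump a given change deserves.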
>>I think you hit on something interesting beyond just how to version: when to consider it a major release, and who makes those decisions. If you leave versioning to engineers, you're kind of pushing business decisions down the pipe. Whether it's a minor or a major should be a business decision, and someone closer to the business should be making the call as to when we want to call it major.
>>That's a really good point, and I absolutely agree. And again, it depends on the team and the scope of what they're maintaining, right? For a business application, of course you're going to have a product manager who's going to want to make that call, because that version is going to be out in marketing, people are going to use it, they're going to refer to it in support calls. They're going to need to make those decisions, and SemVer works really, really well for that. But for a team that's maintaining scripts, having them say, okay, you must tell me what a major version is? It's a lot.
>>A lot, but if they want to use SemVer, great too, which is why, going back to what you originally said, CalVer in the absence of other options: I think that's a good strategy.
>>Yeah. I'm catching up on chat; I'm not sure if I'm ever going to catch up. There are a lot of people commenting on their favorite CI systems, and it just goes to show, for the testing and deployment community, how many tools there are out there, and how many tools there are to support the tools you're using. It can be a crazy wilderness, and I think that's part of the art of it: these things allow us to build our workflows to fit the team's culture.
But I do think, getting into maybe what we hope comes next, that I do hope we get to figure out some of these harder problems of consistency. One of the things that led me to Docker in the beginning was the fact that it created a consistent packaging solution for me to get my code off of my local system, really, and onto the server.
>>And that whole workflow meant that the thing I was making at each step was the same thing being used, right? And that was huge. It also took us a long time to get there. Docker was one of those decade kind of ideas: let's solidify it, get the consensus of the community around this idea. And it's not perfect; the Dockerfile is not the most perfect way to describe how to make your app, but it is there and we're all using it. And now I'm looking for that next piece, hopefully the next step where we can all arrive at a consensus, so that once you hop teams: okay, we all knew Docker, and now we're all starting to get to know the manifests, but then there's this big gap in the middle where it might be one of a dozen things.
>>Yeah, to that, Bret: maybe more of a shameless plug here, wanting to talk about one of the things I'm really excited about. I work at HashiCorp; I don't know if many people have heard of us. We tend to focus a lot on workflows versus technologies, right? Because, as you can see even just looking at the chat, there's a ton of opinions on the different tooling. And imagine: I'm working with clients that have 10,000 developers.
So imagine taking the folks in the chat, partnering them with one organization or one company, and having to make decisions on how to build software. There's no way you can converge on one way or one tool, and that's what we're facing in the industry.
>>So one of the things I'm pretty excited about, and I don't know if it's getting as much traction as we'd like, is Waypoint, an open source project I believe we released last year. It aims to address exactly what Bret just said: a tool that makes it extremely easy and simple to describe how you want to build, deploy, or release your application in a consistent way, regardless of the tools. Similar to how you can think of Terraform, with the pluggability to run terraform apply or plan against any cloud infrastructure without really having to know the details of how to do it, that's what Waypoint is doing. And it can be applied within a CI framework, with pluggability into, say, CircleCI, tests, Docker, Helm, Kubernetes. It's a hard problem to solve, but I'm hopeful that's the path we'll eventually get to. You can find the information and docs on the HashiCorp site; I'm personally excited about it.
>>Yeah, I'm going to have to check that out. And like I told you on my live show, man, we could talk about it for a whole hour.
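For reference, a minimal waypoint.hcl is roughly this shape. Treat it as an illustrative sketch rather than authoritative syntax, since the project was young at the time and plugin names and options may differ between releases:

```hcl
# Illustrative waypoint.hcl: one app built and deployed with the Docker plugin.
project = "example-project"

app "web" {
  build {
    use "docker" {}
  }

  deploy {
    use "docker" {}
  }
}
```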
So there's another question here; this one is a little more detailed, but it's one I think a lot of people deal with, and I deal with it a lot too. The question, from Cameron, is essentially: do you use Docker Compose in your CI or not? Because yes, I do. It solves so many problems, and not every CI can. There are some problems with a CI trying to do it for me. So there are pros and cons, and I feel like I'm still on the fence about it, because I use it all the time, but it's not perfect. It's not always meant for CI, and CI sometimes tries to do things for you, like starting things up before you start other parts, and you get that whole ordering problem. Thoughts?
>>Yes, I love Compose. It's one of my favorite tools of all time. And the reason why: actually, let me walk that back, because Jack in the chat asked a really interesting question about what the hardest thing about CI is for a lot of teams. In my experience, the hardest thing is getting teams to build an app that is the same app as what's built in production. A lot of CI does things that are totally different from what you would do in your local dev, and as a result you get this application that either doesn't work locally, or it does work but it's a completely different animal than what you would get in production, right? So what I've found, in trying to get teams to bridge that gap, is to basically shift the CI left. I hate the shift-left term, but I'll use it.
>>Shifting the CI left into your local development means asking: okay, how do we build the app? How do we build the dependencies of that app so that we can test the app? How do we run tests, right?
How do we get test data? And what I've found is that trying to get teams to do all of this in Docker, which is normally a first for a lot of the teams I'm working with, means running docker build a lot, running docker run a lot, running docker rm a lot; you run a lot of disparate Docker commands. And then, on top of that, trying to bridge all of those containers together into a single network can be challenging without Compose.
>>So I like using Compose to be able to really easily categorize and compartmentalize a lot of the things that are going to be done in CI, like building a Docker image and running tests, which you're going to do in CI anyway. Running tests, building the image, maybe not pushing it to the registry, but doing all the things you would do in local dev, in the same network where you might have a mock database or a mock S3 instance or something else. It's just easy to take all those Docker Compose commands and move them into your YAML file if you're using GitHub Actions, or your Jenkinsfile if you're using Jenkins, or what have you. It's really portable that way. But it doesn't work for every team. For example, going back to my script example: if it's a really simple script that does one thing on a somewhat routine basis, then Compose might be a lot of overhead. In that case you can get away with just Docker commands; it's not a big deal. But the way I look at it: if I'm building something similar to a Makefile or a Rakefile, then I'm probably going to want to use Docker Compose if I'm working with Docker. That's a philosophy of values, right?
>>So I'm also a fan of Docker Compose.
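As an illustration of using the same Compose file locally and in CI, a file of roughly this shape (the service names, images, and credentials here are all hypothetical) starts the app on the same network as a mock database in both places:

```yaml
# Hypothetical compose file shared by local dev and CI.
services:
  app:
    build: .
    depends_on:
      - db
    environment:
      DATABASE_URL: postgres://postgres:postgres@db:5432/app_test
  db:
    image: postgres:14
    environment:
      POSTGRES_PASSWORD: postgres
```

Locally a developer runs it with docker-compose up; in CI the same file drives the build-and-test steps, so the two environments stay in sync.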
And to your point, Carlos, I'm also a fan of shifting CI left, and testing left. If you put all that logic in your CI, it makes the local development experience different from the CI experience. Versus, if you put everything in a Compose file, then what you build locally is the same as what you build in CI, and you're going to have a better experience, because you're testing something closer to what you're going to release. It's also very easy to look at a Compose file and understand what the dependencies are and what's happening; it's very readable. Once you move that stuff into CI, I think a lot of developers are going to be intimidated by the CI, by whatever the scripting language is; it's going to be something they'll have to wrap their heads around.
>>And they're not going to be able to use it locally; you'd have to have another local solution. So I love the idea of a Compose file used locally, especially if you can mount the local workspace so they can do real-time development and see their changes in the exact same way it's going to be built and tested in CI. It gives developers a high level of confidence, and you're less likely to have issues from discrepancies between how it was built in your local test environment versus how it's built in CI. Docker Compose really lets you do all of that in a way that makes your solution portable between local dev and CI, and reduces the number of CI cycles needed to get the test data you need. So that's why I like it for local dev.
>>It'll be interesting. I don't know if you all were able to see the keynote, but there was a little bit of talk, not a whole lot, about Docker Compose v2, which is now built into the Docker command line.
And so now we're shifting from the Python-built Compose, which was a separate package. One of the challenges was getting it into your CI solution: if you didn't have pip, you had to download the binary, and the binary wasn't available for every platform; it was built with PyInstaller, and it gets a little nerdy into how that works. Now that it's written in Go and plugged right into the Docker command line, it hopefully will be easier to distribute and easier to use.
>>And you won't have to have dependencies inside of wherever you're running it, because it'll be a statically compiled binary. So I've been playing with that this year, training myself to go from docker-compose to docker compose with a space. I'm almost to the point of needing a shell alias for it. But I'm excited to see where it's going, because there are already new features in it, and it uses BuildKit by default. There are all these things; I love BuildKit, we could do a whole session on BuildKit. In fact, right around this time there's a session from Solomon Hykes, the co-founder and former CTO of Docker, on using another tool on top of BuildKit.
>>So that would be interesting. For those of you not watching that one because you're here, go check it out later. All right, another good question was caching: another area where there are probably no wrong answers, and everyone has a different story. So the question is: what are your thoughts on CI build caching? This is from Quentin; thank you for this great question. There's often a debate between security, reproducibility, and build speed.
I haven't found a good answer so far. I will just throw my hat in the ring and say that the more times you build, if you're building on every commit, if you're building many times a day, the more caching you need. So the more times you're building, the more caching you're likely going to want, and in most cases caching doesn't bite you. But it can; can we get into that bit?
>>I'm going to quote Carlos again and say it depends on what you're trying to build, and I'm quoting you, Carlos. It's going to depend on the frequency with which you're building and how you're building. There are some instances where you definitely want to take advantage of caching functionality for the build itself. But, as you mentioned, there could be instances where you would want to disable caching, because you actually want to pull new packages, or there could be security disadvantages: using a cached version of an image layer, for example, could be a problem. And if you have a fleet of build engines and you don't have a good grasp of where things are being cached, you would have to disable caching in those instances. So it would depend.
>>Yeah, it's funny, you have that problem on both sides of caching. There are things, especially in the Docker world, that will cache automatically, and then you maybe don't realize that some of that caching could be bad; it's actually using old assets, old artifacts. And then there are times where you would expect it to cache, and it doesn't cache.
And then you have to do something extra to enable that caching, especially when you're dealing with a cluster of CI servers, right? In the cloud, the whole clustering problem with caching is even more complex. But yeah.
>>That's when, well, ever since you got me to start using BuildKit and enabling BuildKit, it's been very robust in detecting where in the build process it needs to cache, as well as the process itself. I don't think I've seen any other approach that comes close to how efficient that process can become, how much time it can actually save. So that's been my default approach, unless I actually need to intentionally disable caching for some purpose. For me, the benefits of how BuildKit processes my builds, using the cache and detecting the differences in the assets within the Dockerfile, have pretty much outweighed the disadvantages. So take it case by case, and based on that determine if you want to use it, but I definitely recommend enabling it.
>>In the absence of a reason not to, I definitely think it's a good approach in terms of speed. I say you cache until you have a good reason not to, personally.
>>Cache by default. There you go. I think you cache by default. And the trick is, well, one, it's not always enabled by default, especially when you're talking about caching across servers. So that's a complexity for your sysadmins, or, if you're on the cloud, it's usually just an option.
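One concrete BuildKit feature behind that efficiency is the cache mount. As a sketch, with a Python dependency step standing in for whatever your build actually installs, a Dockerfile can keep a package cache across builds without baking it into any image layer:

```dockerfile
# syntax=docker/dockerfile:1
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
# BuildKit cache mount: pip's download cache persists between builds
# on the same builder, but never ends up inside the image itself.
RUN --mount=type=cache,target=/root/.cache/pip \
    pip install -r requirements.txt
COPY . .
```

This sidesteps one of the trade-offs above: repeated builds get fast dependency installs without the cache becoming part of the shipped artifact.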
I think this also veers a little bit into: the more you cache, in a lot of cases with Docker, like with FROM images being checked every single time. If you're not pinning every single thing, if you're not pinning your app versions, your npm versions, to the exact lock-file definition, there are a lot of these places where I get very grouchy with teams that just let it all float: yeah, we'll just build two images, and they're totally going to have different dependencies, because someone happened to update that thing in apt or npm. And so I get grouchy about that, because I want to lock it all down, but I also know that's going to create administrative burden.
>>The team is now going to have to manage versions in a much more granular way. Do we need to version curl? Do we need to care about curl? You know, all that stuff. So that's kind of tricky. But when you get to certain caching problems, you don't want those caches, because if your FROM image changes and you're not constantly checking for a new image, and you're not pinning that version, then you don't know whether you're getting the latest version of Debian or whatever. So I think there's an art form to it: the more you pin, the less you have to worry about things changing, but the more you pin all your versions of everything all the way down the stack, the more administrative work, because you're going to have to manually change every one of those.
>>So I think it's a balancing act for teams. And as you mature, I tend to find, teams pin more, until they get to a point of being more comfortable with their testing.
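The locked-down end of that spectrum looks something like this sketch (the specific versions are arbitrary examples): pin the base image, install from the lock file, and order the layers so the dependency layer is rebuilt only when the lock file actually changes:

```dockerfile
# Pinned base image: builds won't silently pick up a new OS or runtime.
FROM node:18.17.1-alpine3.18
WORKDIR /app
# Copy only the manifests first, so this layer's cache is invalidated
# when dependencies change, not on every source edit.
COPY package.json package-lock.json ./
RUN npm ci          # installs exactly what the lock file specifies
COPY . .
CMD ["node", "server.js"]
```

The trade-off discussed above still applies: every pinned version here is something a human now has to bump deliberately.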
So the other side of this argument is: if you trust your testing, and you have better testing, the less the subtle little differences in versions have to be pinned, because you can get away with minor or patch-level version changes if you're thoroughly testing your app, because you're trusting your testing. And this gets us into a whole other rant, but yeah.
>>Talking about pinning versions: if you've got a lot of dependencies, isn't that when you would want to use the cache the most, and not have to rebuild all those layers? Yeah.
>>But if you're not pinning to the exact patch version and you are caching, then you're not technically getting the latest versions, because it's not checking all the time. There's a lot of this subtle nuance that people don't realize until it's a problem. And that's part of the tricky part of all this: sometimes Docker can be almost so much magic out of the box that it all just works, and then day two happens, you build it a second time, you've got a new version of OpenSSL in there, and suddenly it doesn't work. So anyway, that was a great question. I've got another question from the chat: where do you put testing in your pipeline? Testing the code, that is, because there are lots of types of testing, and this pipeline gets longer and longer with Docker building images as part of it. So: before staging, or after staging but before production? Where do you put it?
>>Oh man. Okay. So my main thought on this, and of course this is kind of religious flame bait, so sure, people are going to think I've got it wrong, but here's how I like to think about it. Pretty much in every stage or every environment that you're going to be deploying your app into, or that your application is going to touch:
My idea is that there should be a build of a Docker image that has all your application code along with its dependencies, there's testing that tests your application, and then there's a deployment into whatever infrastructure there is, right? The testing can get tricky, though, and the type of testing you do, I think, depends on the environment you're in. So let's say, for example, your team has a main branch, and feature branches that merge into the main branch.
>>You don't have a pre-production branch or anything like that. So in those feature branches, whenever I'm doing CI that way, I want to know, when I cut my pull request that I'm going to merge into main, that everything's going to work. In my feature branches, I'm probably just going to run unit tests and maybe some component tests, which are just testing that your app can talk to another component or another dependency, like maybe a database; tests like that, which don't take a lot of time. A lot of that would be done at the feature-branch level, in my opinion. But when you're going to merge that feature branch into main as part of a release, in that activity you're going to want to run integration tests, to make sure your app can actually talk to all the other dependencies it talks to.
>>You're going to want an end-to-end test or a smoke test, just to make sure that someone who actually touches the application, if it's a website, can actually use it as intended and it meets the business cases. And you might even have performance testing, load testing, or security and compliance testing, which, in my opinion, would happen when you're about to go into production with a release, because those are going to take a long time. Those are very expensive.
You're going to have to stand up new infrastructure and run those tests, and it can become quite arduous. You're not going to want to run them all the time: you won't have the resources, builds will be slower, releases will be slower, and it will just become a mess. So I would save those for when I'm about to go into production, instead of running them every time I make a commit, or every time I merge a feature branch into a non-main branch. That's the way I look at it, but everyone does it differently, and there are other philosophies around it.
>>Well, I don't disagree with your build-test-deploy. I think if you're going to deploy the code, it needs to be tested at some level. I hate the term smoke test, because it gives a false sense of security, but you have some minimal amount of tests. And I would expect the developer on the feature branch to add new tests that test that feature; that would be part of the PR, and those tests would need to pass before you can merge it to master. So I agree there are tests you want to run at different stages, but the earlier you can run a test, the fewer issues you have and the easier it is to troubleshoot. And I kind of agree with what you said, Carlos, about the longer-running tests, like performance tests, waiting until the end.
>>The only problem is that when you wait until the end to run those performance tests, you kind of end up deploying with whatever performance you have. It becomes just information gathering. So if you don't run your performance tests early on, and I don't want to go down a rabbit hole here, but performance tests can be really useless if you don't have a goal; it's just an information gap. This is the performance. Well, what did you expect it to be? Is it good? Is it bad? It can get really nebulous.
So if performance is really important, you're going to need to come up with some expectations, preferably set at the business level, like what our SLAs are, what our response times are, and have something to shoot for. Then, before you get to production, if you have targets, you can test before staging, tweak the code before staging, and move that performance initiative, sorry, Carlos, a little to the left. But if you don't have performance targets, then it's just a checkbox. So those are my thoughts: I like to test before every deployment, right?
>>Yeah, and you know what, I'm glad you brought up SLAs and performance, and the definition of performance, because one of the things I've seen when I work with teams is that oftentimes another team runs the performance and load tests, and the development team doesn't really have much insight into what's going on there. And usually when I go to the performance team and ask, hey, how do you run your performance tests, it's just a generic solution for every single application they support, which may or may not be applicable to the application team I'm working with specifically. So I'm not going to dig into the SRE rabbit hole, but it is a good bridge into SRE, when you start trying to define what reliability means, right?
>>Because the reason you test performance is to test reliability: to make sure that when you cut that release, customers who go to your site or use your application aren't going to see regressions in performance, and aren't going to either go to another website, or lodge an SLA violation, or something like that. It bridges really well with defining reliability and what SRE means.
And when you start talking about that, you start talking about how often you test the reliability of your application. Do I have nightly tests in CI that ensure my main branch, or some other important branch, is meeting SLOs, service level objectives? Do I run tests that ensure my SLAs are being met in production?
>>Do I do things like game days, where I test: hey, if I turn something off, or if I deploy this small piece of broken code to production, what happens to my performance? What happens to my security and compliance? You can go really deep into creating really robust tests that cover a lot of different domains. But I like just using build-test-deploy as the overall answer, because I find that you're going to have to build your application first, you're going to want to test what you built, and then you're going to want to deploy it after you test it. That order generally ensures that you're releasing software that works.
>>Right. I was going to ask one last question; it's going to have to be a one-sentence answer for each of you. Do you lint? And if you lint, do you lint all the things? And if you do, do you fail the linters during your testing? Yes or no?
>>I think it's going to depend on the culture. I really do. Sorry about it.
>>If we have a hook on the Git commit, then theoretically the developer can't get code in without running the linter anyway.
>>Right, true. Anyone else? Any thoughts on linting?
>>Nice. I saw an additional question as well.
And in the chat: would you introduce it in a multi-stage build? You know, I was wondering also what others think about that. Typically what I've seen with multi-stage is that the most common use case is just to produce the final image, to minimize the image size and produce a final thin image. So if it's not for that, I haven't seen a lot of, you know, teams or individuals who are actually testing within a multi-stage build. There's nothing really against that, but I think the number one purpose of doing multi-stage has been just producing the most minimal image. So I just wanted to combine those two answers in one. >>Yeah, sure. And with that, thank you all for the great questions. We are going to have to wrap this up; we could go for another hour if we all had the time, and if DockerCon was a 24-hour-long event, but sadly it's not. So we've got to make room for the next live panel, which will be Peter coming on and talking about security with some developer and security experts. And I wanted to thank again all three of you for being here. Real quick, go around the room: where can people reach out to you? >>I am at Bret Fisher on Twitter. You can find me there. Carlos? >>I'm at dev Mandy, with a Y, D-E-N-D-Y, that's me. >>Easiest name ever on Twitter: Carlos and DFW on LinkedIn. And I also have a LinkedIn Learning course, so you can check me out on LinkedIn Learning. >>I'm at Nicola Quebec, one word. I'll put it in the chat as well, on LinkedIn as well as Twitter. Thanks for having us, Brett. >>Yeah, thanks for being here. >>And you all stay around.
So if you're in the room with us chatting and you want to go see the next live panel, you've got to go back to the beginning and do that whole thing and find the next one, because this one will end. But we'll still be in chat for a few minutes; I think the chat keeps going. I don't actually know, I haven't tried it yet, so we'll find out here in a minute. But thanks, you all, for being here. I will be back a little bit later, but coming up next on the live stream is Peter with security. Ciao. Bye.

Published Date : May 28 2021



Caitlin Gordon promo v2


 

(upbeat music) >>Announcer: From theCube studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a Cube Conversation. >>Hi, Lisa Martin here with Caitlin Gordon, the VP of product marketing for Dell Technologies. Caitlin, welcome back to theCube, we are excited to see you again. >>I'm very excited to be here again. >>So data protection in the news, what's going on? >>Yeah, it's been a busy year. We had, obviously, our PowerProtect DD appliance launch last year. And then this year we had announcements on the software side; we had announcements at VMworld, some more at Dell Technologies World. And now today we're announcing even more, which is the new PowerProtect DP series appliances, the new integrated appliances. It's really exciting. So we now have our PowerProtect DD, the next generation of Data Domain, and we have our PowerProtect DP appliances, integrated appliances. And that's all about combining both protection storage and protection software in a single converged, all-in-one offering. It's really popular with our customers today because of the simplicity, the ability to really modernize your data protection in a very simple way and get up and running quickly. And in fact, it's really the fastest growing part of the backup appliance market. >>I have read that the integrated appliance market is growing twice as fast as the target appliance market. So give us a picture of what customers can expect from the new DP series. >>Yeah, it's not that dissimilar to our DD series from last year: there are four models in the new DP series. And it's really all about getting better performance, better efficiency. We've got new hardware-assisted compression, denser drives, and all that gives us the ability to get faster backups, faster recovery. In fact, you get 38% faster backups, 45% faster recovery, 30% more logical capacity, and 65-to-1 deduplication, which is just incredible.
And 60,000 IOPS for instant access, so it really ups the game both in performance and in efficiency. >>Those are big numbers. You mentioned the DD launch last year; contrast it with what you're announcing now. What's the significance of the DP series? >>This is exciting for us because it does a couple of things. It expands our PowerProtect appliance family with the new DP series of integrated appliances. But at the same time, we're also announcing other important PowerProtect enhancements. On the software side, PowerProtect Data Manager, which we've been enhancing and continuing to talk about all year, also has some new improvements: the ability to deploy it in Azure and in AWS GovCloud for in-cloud protection, and the enhancements that we've done with VMware that we talked about not that long ago at VMworld, about being able to integrate with storage-based policy management, really automating and simplifying VMware protection. And it's really all about Kubernetes, right? And the ability to support Kubernetes as well. So not only is this an exciting appliance launch for us, but it's also the marking of yet even more enhancements on the PowerProtect Data Manager side. And all that together means that with PowerProtect, you really have a one-stop shop for all of your data protection needs, no matter where the data lives, no matter what SLA, whether it's a physical or virtual appliance, whether it's target or integrated. You've got them all in the PowerProtect family now. >>Excellent, all right, last question for you, Caitlin. We know Dell Technologies is focused on three big waves: cloud, VMware, and cyber recovery. Anything else you want to add here? >>Cyber resiliency, cyber recovery, ransomware has really risen to the top of the list, unfortunately, for many organizations, and PowerProtect Cyber Recovery is really an important enhancement that we also have with this announcement today.
We've had this offering in market for a couple of years, but there's an exciting new enhancement here: it is the first cyber recovery solution endorsed by Sheltered Harbor. And if you're not familiar with PowerProtect Cyber Recovery, it provides an automated air-gapped solution for data isolation, and then CyberSense provides the analytics and the forensics for discovering, diagnosing and remediating those attacks. So it's really all about ransomware: protecting from or recovering from those attacks, which unfortunately have become all too common for our customers today. >>Excellent news, Caitlin, thanks for sharing what's new. Congratulations to you and the Dell team. >>Thank you so much, Lisa. >>For Caitlin Gordon, I'm Lisa Martin. You're watching theCube. (upbeat music)

Published Date : Oct 27 2020



Caitlin Gordon 10 21 Promo V1


 

>> Announcer: From theCUBE Studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a theCUBE conversation. >> Hi, Lisa Martin here with Caitlin Gordon, the VP of Product Marketing for Dell Technologies. Caitlin, welcome back to theCUBE, I'm excited to see you again. >> I'm very excited to be here again. >> So data protection in the news, what's going on? >> Yeah, you know, it's been a busy year. We had, obviously, our PowerProtect DD appliance launch last year, and then this year we've had announcements on the software side. We had announcements at VMworld, some more at Dell Technologies World. And now today we're announcing even more, which is the new PowerProtect DP series appliances, the new integrated appliances. And it's really exciting. So we now have our PowerProtect DD, the next generation of Data Domain, and we have our PowerProtect DP appliances, integrated appliances. And that's all about combining both protection storage and protection software in a single converged, all-in-one offering. It's really popular with our customers today because of the simplicity, the ability to really modernize your data protection in a very simple way and get up and running quickly. And in fact, it's really the fastest growing part of the backup appliance market. >> I have read that the integrated appliance market is growing twice as fast as the target appliance market. So give us a picture of what customers can expect from the new DP series. >> Yeah, and it's not that dissimilar to our DD series from last year: there are four models in the new DP series. There's the 4,400, which is actually now taking the PowerProtect brand and putting that on the existing DP 4,400, and then three new appliances: the 5,900, the 8,400 and then the 8,900. And it's really all about getting better performance, better efficiency.
We've got new hardware-assisted compression, denser drives, and all that gives us the ability to get faster backups, faster recovery. In fact, you get 38% faster backups, 45% faster recovery, 30% more logical capacity, 65-to-1 deduplication, which is just incredible, and 60,000 IOPS for instant access. So it really ups the game, both in performance and in efficiency. >> Those are big numbers. You mentioned the DD launch last year; contrast it with what you're announcing now. What's the significance of the DP series? >> This is exciting for us because it does a couple of things. It expands our PowerProtect appliance family with the new DP series of integrated appliances. But at the same time, we're also announcing other important PowerProtect enhancements. On the software side, PowerProtect Data Manager, which we've been enhancing and continuing to talk about all year, also has some new improvements: the ability to deploy it in Azure and in AWS GovCloud for in-cloud protection, and the enhancements that we've done with VMware that we talked about not that long ago at VMworld, about being able to integrate with storage-based policy management, really automating and simplifying VMware protection. And it's really all about Kubernetes, right? And the ability to support Kubernetes as well. So not only is this an exciting appliance launch for us, but it's also the marking of yet even more enhancements on the PowerProtect Data Manager side. And all that together means that with PowerProtect, you really have a one-stop shop for all of your data protection needs, no matter where the data lives, no matter what SLA, whether it's a physical or virtual appliance, whether it's target or integrated. You've got them all in the PowerProtect family now. >> Excellent. All right. Last question for you, Caitlin. We know Dell Technologies is focused on three big waves: cloud, VMware and Cyber Recovery. Anything else you want to add here?
Yeah, I'll pick up especially on that last one. We talked a little bit about the enhancements we've done with cloud: in-cloud data protection, long-term recovery, disaster recovery, as well as what we've done on the VMware front; it's really important that we continue to have that automation and simplicity with VMware. But cyber resiliency, cyber recovery, ransomware has really risen to the top of the list, unfortunately, for many organizations, and PowerProtect Cyber Recovery is really an important enhancement that we also have with this announcement today. We've had this offering in market for a couple of years, but there's an exciting new enhancement here: it is the first cyber recovery solution endorsed by Sheltered Harbor. And if you're not familiar with PowerProtect Cyber Recovery, it provides an automated air-gapped solution for data isolation, and then CyberSense provides the analytics and the forensics for discovering, diagnosing and remediating those attacks. So it's really all about ransomware: protecting from or recovering from those attacks, which unfortunately have become all too common for our customers today. >> Excellent news, Caitlin. Thanks for sharing what's new. Congratulations to you and the Dell team. >> Thank you so much, Lisa. >> For Caitlin Gordon, I'm Lisa Martin. You're watching theCUBE. (calm music)

Published Date : Oct 21 2020



Mohit Lad, ThousandEyes | CUBEConversations, November 2019


 

our Studios in the heart of Silicon Valley Palo Alto California this is a cute conversation hey welcome back they're ready Jeff Rick here with the cube we're in our Palo Alto studios today to have a conversation with a really exciting company they've actually been around for a while but they've raised a ton of money and they're doing some really important work in the world in which we live today which is a lot different than the world was when they started in 2010 so we're excited to welcome to the studio he's been here on before Mohit ladee is the CEO and co-founder of Thousand Eyes mode great to see you great to see you as well as pretty to be here yeah welcome back but for people that didn't see the last video or not that familiar with Thousand Eyes tell them a little bit kind of would a thousand eyes all about absolutely so in today's world the cloud is your new data center the Internet is your new network and SAS is your new application stack and thousand eyes is built to be the the only thing that can really help you see across all three of these like it's your own private environment I love that I love that kind of setup and framing because those are the big three things and as you said all those things have moved from inside your control to outside of your control so in 2010 is that was that division I mean when you guys started the company UCLA I guess a while ago now what was that the trend what did you see what yes what kind of started it so it's really interesting right so our background as a founding company with two founders we did our PhD at UCLA in computer science and focused on internet and we were fascinated by the internet because it was just this complex system that nobody understood but we knew even then that it would meaningfully change our lives not just as consumers but even as enterprise companies so we had this belief that it's going to be the backbone of the modern enterprise and nobody quite understood how it worked because everyone was 
focused on your own data center your own network and so our entire vision at that point was we want people to feel the power of seeing the internet like your network that's sort of where we started and then as we started to expand on that vision it was clear to us that the Internet is what brings companies together what brings the cloud closer to the enterprise what brings the SAS applications closer to the enterprise right so we expanded into into cloud and SAS as well so when you had that vision you know people had remote offices and they would set up they would you know set up tunnels and peer-to-peer and all kinds of stuff why did you think that it was gonna go to that next step in terms of the internet you know just kind of the public Internet being that core infrastructure yes we were at the at the very early stages of this journey to cloud right and at the same time you had companies like Salesforce you had office 365 they were starting to just make it so much easier for companies to deploy a CRM you don't have to stand up these massive servers anymore its cloud-based so it was clear to us that that was gonna be the new stack and we knew that you had to build a fundamentally different technology to be able to operate in that stack and it's not just about visibility it's about making use of collective information as well because you're going from a private environment with your own data center your own private network your own application stack to something that's sitting in the cloud which is a shared environment going over the Internet which is the same network that carries cat videos that your kids watch it's carrying production traffic now for your core applications and so you need a different technology stack and you need to really sort of benefit from this notion of collective intelligence of knowing what everybody sees together as one view so I'm here I think I think Salesforce was such an important company in terms of getting enterprises to trust a 
SAS application for really core function which just sales right I think that was a significant moment in moving the dial was there a killer app for you guys that was you know for your customers the one where they finally said wait you know we need a different level of his ability to something that we rely on that's coming to us through an outside service so it's interesting right when we started the company we had a lot of advisors that said hey your position should be you're gonna help enterprises enforce SLA with Salesforce and we actually took a different position because what we realized was Salesforce did all the right stuff on their data centers but the internet could mess things up or enterprise companies that were not ready to move to cloud didn't have the right architectures would have some bottlenecks in their own environment because they are backhauling traffic from their London office to New York and then exiting from New York they're going back to London so all this stuff right so we took the position of really presenting thousand eyes as a way to get transparency into this ecosystem and we we believe that if we take this position if we want to help both sides not just the enterprise companies we want to help sales force we want to have enterprise companies and just really present it as a means of finding a common truth of what is actually going on it works so much better right so there wasn't really sort of one killer application but we found that anything that was real-time so if you think about video based applications or any sort of real-time communications based so the web access of the world they were just very sensitive to network conditions and internet conditions same with things that are moving a lot of data back and forth so these applications like Salesforce office 365 WebEx they just are demanding applications on the infrastructure and even if they're done great if the infrastructure doesn't it doesn't give you a great experience right and 
and and you guys made a really interesting insight too it's an it's an all your literature it's it's a really a core piece of what you're about and you know when you owned it you could diagnose it and hopefully you could fix it or call somebody else to fix it but when you don't own it it's a very different game and as you guys talked about it's really about finding the evidence or everyone's not pointing fingers back in and forth a to validate where the actual problem is and then to also help those people fix the problem that you don't have direct control of so it's a very different you know kind of requirement to get things fixed when they have to get fixed yeah and the first aspect of that is visibility so as an example right you generally don't have a problem going from one part of your house to another part of your house because you own the whole place you know exactly what sits between the two rooms that you're trying to get to you don't you don't have run into surprises but when you're going from let's say Palo Alto to San Francisco and you have two options you can take the 101 or 280 you need to know what you expect to see before you get on one of those options right and so the Internet is very similar you have these environments that you have no idea what to expect and if you don't see that with the right level of granularity that you would in your own environments you would make decisions that you have you know you have no control over right the visibility is really important but it's giving that lens like making it feel like a google maps of the internet that gives you the power to look at these environments like it's your private network that's the hard part right and then so what you guys have done as I understand is you've deployed sensors basically all over the Internet all at an important pops yeah an important public clouds and important enterprises etc so that you now have a view of what's going on it I can have that view inside my enterprise by 
leveraging your infrastructure is that accurate correct and so this is where the notion of being able to set up this sort of data collection environment is really difficult and so we have created all of this over years so enterprise companies consumer companies they can leverage this infrastructure to get instant results so there's zero implementation what right but the key to that is also understanding the internet itself and so this is where a research background comes in play because we studied we did years of research on actually modeling the internet so we know what strategic locations to put these probes that to give good coverage we know how to fill the gaps and so it's not just a numbers game it's how you deploy them where you deploy them and knowing that connectivity we've created this massive infrastructure now that can give you eyes on the internet and we leverage all of their data together so if let's say hypothetically you know AT&T has an issue that same issue is impacting multiple customers through all our different measurements so it's like ways if you're using ways to get from point A to point B if Waze was just used by your family members and nobody else it would give you completely useless information values in that collective insight right and then now you also will start to be able to until every jamel and AI and you know having all that data and apply just more machine learning to it to even better get out in front of problems I imagine as much as as is to be able to identify it so that's a really interesting point right so the first thing we have to tackle is making a complex data set really accessible and so we have a lot of focus into essentially getting insights out of it using techniques that are smarter than the brute-force techniques to get insights out and then present it in manners that it's accessible and digestible and then as we look into the next stages we're going to bring more and more things like learning and so on to take it 
even further. >>Right. It's funny, on the accessible and digestible piece: I was at a presentation the other day, and there was a CSO from a big bank, and she talked about the problem of false positives. In the early days, their biggest issue was just too much data coming in from too many sensors, and too many false positives basically burying people, so they didn't have time to actually service the things that were a priority. So a nice presentation of a whole lot of data makes a big difference in making it actionable. >>It is absolutely true. The example I'll give you is that, oftentimes, companies that operate with a strong network core like we do are in the weeds, which is important, but what is really important is tying that intelligence to business impact. So the entire product portfolio we've built is all about business impact and user experience, and then connecting the dots on the network side. We've seen some really interesting events, and as much as we know the internet, every day I wake up and see something that surprises me. We've had customers whose migrations to cloud have gone horribly wrong. The latest one I was troubleshooting with a customer was where we saw they migrated from their own data center to Amazon, and the user experience was 10x worse than what it was on their own data center once they moved to Amazon. What had happened was the whole migration to Amazon included a smart sort of CDN, where they were fronting the traffic at local sites, but the traffic was going all over the place. So if a user was in London, instead of going to the London instance of Amazon, they were going to Atlanta, they were going to Los Angeles, and the whole migration created a worse user experience. And you don't have that lens, because you don't see that internet portion of the path. We caught it instantly, and we were
able to showcase that, hey, this is actually a really bad migration. It's not that Amazon is bad; it's just been implemented incorrectly. >>Right, so you fix these things, and those are all configuration, all connectivity issues, which are so easy to get wrong. All the issues you hear about with Amazon often go back to misconfiguration: missed settings, something suboptimal, leaving something open. So having that visibility makes a huge impact. >>And it's more challenging because you're trying to configure different components of this environment. You have a cloud component, you have the internet component, your own network, your own firewalls. You used to have a closed environment; now it's hybrid, it involves multiple parties and multiple skill sets, so a lot of things can really go wrong. >>Yeah, and I think what you guys crystallize very cleanly is the inside-out and outside-in approach, both. As a service consumer, I'm using Salesforce, I'm using maybe S3, I'm using these things that I need, and I want to focus on that and have a good experience; I want my people to be able to get on their Salesforce account and book business. But don't forget the other way, because as people are experiencing my service, that service might be connecting through and aggregating many other services along the way, and I've got to make sure my customer experience is good. You guys separate those two things out and really make sure people are focusing on both of them. >>Correct. And it's the same technology, but you can use it for your production services, which are revenue-generating, or you can use it for your employee productivity. The visibility we provide is across a common stack. But on the production side, for example, because of the way the internet works, your job is not just to ensure great performance and user experience; your job is also to make sure people are actually reaching your site. And we've seen several instances
where, because of the way the internet works, somebody else could announce that they're google.com and suck a bunch of traffic away from the internet. This happens quite routinely, in what is now known as BGP hijacks, or sometimes DNS hijacks. The one I remember very well is when a small ISP in Nigeria announced the address block for Google. That announcement was picked up by China Telecom, which was picked up by a Russian telco, and now you have Russia, China, and Nigeria in the path for traffic to Google, traffic that is actually not even reaching Google. Those kinds of things are very possible because of the way the internet works. >>How fast do those things rise up, get identified, and get shut off? Is it hours, days, weeks, in this kind of example? >>It really depends, because if you were, let's say, Google in this situation, you're not seeing a denial-of-service attack at your data centers; you're just not seeing traffic arriving, because somebody else is taking it away. It's like identity theft: if somebody takes your identity, you don't get a mail in your inbox saying, hey, your identity has been taken. You have to find out some other way, and usually, by the time you realize your identity has been stolen, you have a nightmare ahead of you. >>All right, so you've got some specific news. This has been a great conversation, and it's super insightful to talk to people who are in the weeds of how all this stuff works, but today you have a new announcement, some new offerings, so tell us what's going on. >>We have a couple of announcements today, and coming back to this notion of the cloud being your new data center and the internet your new network, we're announcing two things today. One, we're announcing the second version of our cloud benchmark performance comparison, and what this is about is really helping people understand the nuances, the
performance differences, and the architecture differences between Amazon, Google, Azure, IBM Cloud, and Alibaba Cloud, so that as you make decisions, you actually understand what the right solution is for you from a performance and architecture standpoint. It's a fascinating report; we found some really interesting findings that surprised us as well, and we're releasing that. We're also touching on the internet component by releasing a new product, which we call Internet Insights, and that gives you the power to look at the internet more holistically, like you own the entire internet. That is really something we're all excited about, because it's the first time somebody can actually see the internet, see all these connections, see what is going on between major service providers, and feel like you completely own the environment. >>So are people using information like that to dynamically reroute the way they handle their traffic, or is it more just a general health overview? How much of it do I have control over, how much should I have control over, and how much do I just need to know what's going on? >>Great question. The best way I can answer that is with what I heard a CIO say at a CIO forum where we were presenting. It was a large financial-services customer, and somebody asked the CIO what the value of ThousandEyes was, and the way he explained it was really fascinating. He said phase one of ThousandEyes, when they started using it, was getting rid of technical debt, because they would keep identifying issues, and they could fix the underlying root cause so it doesn't happen again. That cleared the technical debt they had and made their environment much better. And then they started to optimize their environments, to just get better, get more proactive. So that's a good way to think about it. When you think about our customers, most of the time
they're trying to just not have their hair on fire. That's the first step. Once we can help them with that, then they go on to tuning, optimizing, and so on. But knowing what is going on is really important. For example, if you're providing a .com service, like thecube.com, it's live, and you're providing it from your data center here. You have two upstreams, say AT&T and Verizon, and Verizon is having issues. You can turn off that connection and route all your customers back to a live, full experience, if you know that's the issue. The remediation is actually, quite a few times, very straightforward, if you know what you're trying to solve. >>Right. So do you think Internet Insights is going to be used just for better remediation, or is it a step toward getting a little more proactive, a little more prescriptive, and getting out ahead of the issues, given these things are kind of ephemeral and come and go? >>I think it's all of the above. One thing Internet Insights will help you with is planning, because as you expand into new geos, if you're a company launching a service in a new market, it immediately gives you a landscape of who to connect with and where to host. Now you can actually visualize the entire network: how do you best reach your customer base? That's the planning aspect, and if you plan right, you actually reduce a lot of the trouble you see later. We had a customer of ours that was deploying SD-WAN in their Asia offices, and they used ThousandEyes to evaluate two different ISPs they were looking at. One of them had massive time-of-day congestion, so every day at nine o'clock the latency would double because of congestion; it's common in Asia. The other did not have time-of-day congestion, and with that view they could implement the entire
SD-WAN on the ISP that actually worked well for them. So planning is an important part of this. And then the other aspect, the thing folks often don't realize, is that the internet is not static; it's constantly changing. AT&T may connect to Verizon one way today, connect differently tomorrow, or connect to somebody else. So having that live map matters as you're troubleshooting customer-experience issues. Let's say your customers from China are having a ton of issues all of a sudden, or you see a drop in traffic from China. Now you can relate that information, where these customers are coming from, with our view of the health of the Chinese internet and which specific ISPs are having issues. That's the kind of information merger that simply doesn't happen today. >>Mohit, this is a fascinating discussion, and we could go on and on, but unfortunately we do not have all day. I really like what you guys are doing. The other thing I just want to close on, which I thought was really interesting: a lot is said about digital transformation, we always talk about digital transformation, everybody wants to digitally transform, but you really boiled it down into three critical places where you guys play. The digital experience, in terms of what the customers experience; getting to cloud, because everybody wants to get to cloud, so one can argue how much and what percentage, but everybody's going to cloud; and then, as you said in this last example, the modern WAN, as you connect all these remote sites. You guys have a play in all of those places. So whatever you thought about in 2010 worked out pretty well. >>Thank you. We had a really strong vision, but kudos to the team we have in place that has stretched it and really made the most of it. >>Excellent. Good job, and thanks for stopping by and sharing the story. >>Thank you for hosting; always fun to be here. >>Absolutely. All right, well, he's Mohit and I'm Jeff.
You're watching theCUBE from our Palo Alto studio, where we're having a CUBE Conversation. Thanks for watching; we'll see you next time. [Music]
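The BGP hijack Mohit describes, a small ISP announcing Google's address space, is at heart an origin mismatch: a prefix shows up in routing announcements with an origin AS that isn't its registered owner. Here is a minimal sketch of that detection idea in Python; the prefix table, ASNs, and monitor names are illustrative assumptions, not ThousandEyes' actual data model or pipeline.

```python
# Minimal origin-hijack detector: flag BGP announcements whose origin AS
# does not match the expected owner of the prefix. Real monitoring, as
# described above, correlates observations from many vantage points.

EXPECTED_ORIGINS = {
    "8.8.8.0/24": {15169},        # Google's ASN
    "203.0.113.0/24": {64500},    # documentation prefix / private-use ASN
}

def detect_hijacks(announcements):
    """announcements: iterable of (prefix, origin_asn, monitor_name)."""
    alerts = []
    for prefix, origin, monitor in announcements:
        expected = EXPECTED_ORIGINS.get(prefix)
        if expected is not None and origin not in expected:
            alerts.append({"prefix": prefix,
                           "rogue_origin": origin,
                           "monitor": monitor})
    return alerts

observed = [
    ("8.8.8.0/24", 15169, "monitor-nyc"),    # legitimate announcement
    ("8.8.8.0/24", 37282, "monitor-lagos"),  # unexpected origin AS -> alert
]
print(detect_hijacks(observed))
```

As in the Waze analogy earlier, one monitor seeing a rogue origin is weak evidence; many monitors agreeing is what makes the alert trustworthy.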

Published Date : May 4 2020


The Road to Autonomous Database Management: How Domo is Delivering SLAs for Less


 

>>Hello everybody, and thank you for joining us today at the virtual Vertica Big Data Conference 2020. Today's breakout session is entitled "The Road to Autonomous Database Management: How Domo is Delivering SLAs for Less." My name is Sue LeClair, I'm the director of marketing at Vertica, and I'll be your host for this webinar. Joining me is Ben White, senior database engineer at Domo. Before we begin, I want to encourage you to submit questions or comments during the virtual session. You don't have to wait; just type your question or comment in the question box below the slides and click Submit. There will be a Q&A session at the end of the presentation, and we'll answer as many questions as we're able to during that time. Any questions we aren't able to address, we'll do our best to answer offline. Alternatively, you can visit the Vertica forums to post your questions there after the session; our engineering team is planning to join the forum to keep the conversation going. Also, as a reminder, you can maximize your screen by clicking the double-arrow button in the lower right corner of the slide. And yes, this virtual session is being recorded and will be available to view on demand this week; we'll send you a notification as soon as it's ready. Now let's get started. Ben, over to you. >>Greetings, everyone, and welcome to our virtual Vertica Big Data Conference 2020. Had we been in Boston, the song you would have heard playing in the intro would have been "Boogie Nights" by Heatwave. If you've never heard of it, it's a great song. To fully appreciate that song the way I do, you have to believe that I am a genuine database whisperer. Then you have to picture me at 3 a.m., on my laptop, tailing a Vertica log, getting myself all psyched up. Now, as cool as they may sound, 3 a.m. boogie nights are not sustainable; they don't scale. In fact, today's discussion is really all about how Domo engineered the end of 3 a.m.
boogie nights. Again, I am Ben White, senior database engineer at Domo, and as we heard, the topic today is the road to autonomous database management, and how Domo is delivering SLAs for less. The title is a mouthful; in retrospect, I probably could have come up with something snazzier, but it is, I think, honest. For me, the most honest word in that title is "road." When I hear that word, it evokes thoughts of the journey, and how important it is to just enjoy it. When you truly embrace the journey, often you look up and wonder: how did we get here? Where are we? And, of course, what's next? Now, I don't intend to come across as too deep, so I'll submit there's nothing particularly prescient in simply noticing the elephant in the room. When it comes to database autonomy, my opinion is merely, and perhaps more accurately, my observation. For context, imagine a place where thousands and thousands of users submit millions of ad-hoc queries every hour. Now imagine someone promised all these users that we could deliver BI leverage at cloud scale in record time. I know what many of you must be thinking: who in the world would do such a thing? Of course, that news was well received, and after the cheers from executives and business analysts everywhere, and the chants of "keep calm and query on," finally started to subside, someone turns and asks: that's possible? We can do that, right? Except this is no imaginary place. This is the very real challenge we face at Domo, and through imaginative engineering, Domo continues to redefine what's possible. The beautiful minds at Domo truly embrace the database engineering paradigm that one size does not fit all. That little philosophical nugget is one I picked up while reading the white papers and books of some guy named Stonebraker. So, to understand how I, and by extension Domo, came to truly value analytic database administration, look no further than that philosophy and what embracing it would mean. It meant, really, that while others were engineering
skyscrapers, we would endeavor to build data neighborhoods with a diverse topology of database configurations. This is where our journey at Domo really gets under way. Without any purposeful intent to define our destination, not necessarily thinking about database-as-a-service or anything like that, we had planned an ecosystem of clusters capable of efficiently performing varied workloads. We achieved this with custom configurations for node count, resource pool configuration, parameters, et cetera. But it also meant concerning ourselves with the unintended consequences of our ambition: the impact of increased DDL activity on the catalog, and system overhead in general. What would be the management requirements of an ever-evolving infrastructure? We would be introducing multiple points of failure; what are the advantages, the disadvantages? Those types of discussions and considerations really helped define the basic characteristics of our system. The databases themselves needed to be trivial, redundant, potentially ephemeral, customizable, and, above all, scalable, and we'll get more into that later. With this knowledge of what we were getting into, automation would have to be an integral part of development; one might even say automation became the first point of interest on our journey. Using popular DevOps tools like SaltStack, Terraform, and ServiceNow, everything would be automated, and I mean everything: from larger multi-step tasks like database designs, database cluster creation, and reboots, to smaller routine tasks like license updates, moveouts, and projection refreshes. All of this cool automation certainly made it easier for us to respond to problems within the ecosystem, but these methods alone still left our database administration reactionary, and reacting to an unpredictable stream of slow-query complaints is not a good way to manage a database. In fact, that's exactly how 3 a.m.
boogie nights happen. And again, I understand there was a certain appeal to them, but ultimately, managing that level of instability is not sustainable. Earlier I mentioned an elephant in the room, which brings us to the second point of interest on our road to autonomy: analytics, and more specifically, analytic database administration. Why are analytics so important, not just in this case but generally speaking? I mean, we have a whole conference set up to discuss it; Domo itself is self-service analytics. The answer is curiosity. Analytics is the method by which we feed the insatiable human curiosity, and that really is the impetus for analytic database administration. Analytics is also the part of the road I like to think of as a bridge: the bridge, if you will, from automation to autonomy. With that in mind, I say to you, my fellow engineers, developers, and administrators, that as conductors of the symphony of data we call analytics, we have proven to be capable producers of analytic capacity. Take pride in that, and rightfully so. The challenge now is to become more conscientious consumers. In some way, shape, or form, many of you already employ some level of analytics to inform your decisions. Far too often, though, we are using data that would be categorized as lagging. Perhaps you're monitoring slow queries in the Management Console. Better still, maybe you consult the Workload Analyzer. How about a logging and alerting system like Sumo Logic? If you're lucky, you have Domo, where you can monitor and alert on query metrics like these. All are examples of analytics that help inform our decisions. At Domo, the incorporation of analytics into database administration is very organic; in other words, pretty much company-mandated. As a company that provides BI leverage at cloud scale, it makes sense that we would want to use our own product to be better at the business of Domo. Adoption stretches across the entire company, and everyone uses Domo to deliver insights into the hands of the people that need them, when they
need them most. So it should come as no surprise that we have, from the very beginning, used our own product to make informed decisions as they relate to the application back end. In engineering, we call our internal system Domo for Domo. Domo for Domo, in its current iteration, uses a rules-based engine with elements of machine learning to identify and eliminate conditions that cause slow query performance. Pulling data from a number of sources, including our own, we can identify all sorts of issues: global query performance, actual query count, success rate (for instance, as a function of query count), and of course environment timeout errors. This was a foundation: the recognition that we should be using analytics to be better conductors of curiosity. These types of real-time alerts were a legitimate step in the right direction, though as an engineering team we saw ourselves in an interesting position. With Domo for Domo, we started exploring the dynamics of using the platform not only to monitor and alert, of course, but also to triage and remediate. Just how much autonomy could we give the application? What were the pros and cons? Trust is a big part of that equation: trust in the decision-making process, trust that we can mitigate any negative impacts, and trust in the very data itself. Still, much of the data came from systems that interacted directly, and in some cases indirectly, with the database. By its very nature, much of that data was past tense and limited: things that had already happened, without any reference or correlation to the conditions that led to those events. Fortunately, the Vertica platform holds a tremendous amount of information about the transactions it has performed, its configurations, and the characteristics of its objects, like tables, projections, containers, resource pools, et cetera. This treasure trove of metadata is collected in the Vertica system tables and the appropriately named data collector tables. As of version 9.3, there are over 190
tables that define the system tables, while the data collector is a collection of 215 components. A rich collection can be found in the Vertica system tables. These tables provide a robust, stable set of views that let you monitor information about your system resources, background processes, workload, and performance, allowing you to more efficiently profile, diagnose, and correlate historical data such as load streams, query profiles, Tuple Mover operations, and more. Here you see a simple query to retrieve the names and descriptions of the system tables, and an example of some of the tables you'll find. The system tables are divided into two schemas: the catalog schema contains information about persistent objects, and the monitor schema tracks transient system states. Most of the tables you find there can be grouped into the following areas: system information, system resources, background processes, and workload and performance. The Vertica data collector extends system-table functionality by gathering, retaining, and aggregating information about your database, complementing the information available in the system tables. A moment ago I showed you how to get a list of the system tables and their descriptions; here we see how to get that information for the data collector tables. With data from the data collector tables and the system tables, we now have enough to analyze what we would describe as conditional, or leading, data that allows us to be proactive in our system management. This is a big deal for Domo, and particularly for Domo for Domo, because from here we took the critical next step: we analyze this data for conditions we know, or suspect, lead to poor performance, and then we can suggest the recommended remediation. Really, for the first time, we were using conditional data to be proactive, and in record time. We track many of the same conditions that Vertica Support analyzes via scrutinize, like tables with too many projections, or non-partitioned
fact tables, which can negatively affect query performance and life in Vertica. Like Vertica, we will suggest, if the table has a date or timestamp column, partitioning by month. We also track catalog size as a percentage of total memory, with alert thresholds that trigger remediation. Requests per hour is a very important metric in determining when to trigger our scaling solution, and tracking memory usage over time allows us to adjust resource pool parameters to achieve optimal performance for the workload. Of course, the Workload Analyzer is a great example of analytic database administration, and from there one can easily see the logical next step, where we are able to execute these recommendations, manually or automatically, via some configuration parameter. Now, when I started preparing for this discussion, this slide made a lot of sense as the logical next iteration of the Workload Analyzer. I left it in because, together with the next slide, it really illustrates how firmly Vertica has its finger on the pulse of the database engineering community. In the 10.0 Management Console, ta-da, we have the updated Workload Analyzer. A column has been added to show tuning commands, and the Management Console allows the user to select and run certain recommendations, currently a couple of tuning commands such as analyzing statistics, but you can see where this is going. For us, using Domo with our Vertica connector, we were able to pull the metadata from all of our clusters. We constantly analyze that data for any number of known conditions, and we build the recommendations into scripts. We can then execute the actions immediately, or save them for later manual execution, and as you would expect, those actions are triggered by thresholds that we can set. From the moment Eon mode was released to beta, our team began working on a serviceable auto-scaling solution. The elastic nature of Eon mode, with its separation of storage and compute, clearly lent itself to our ecosystem's
requirement for scalability. In building our system, we worked hard to overcome many of the obstacles that came with the more rigid architecture of Enterprise mode, but with the introduction of Eon mode, we now have a practical way of giving our ecosystem at Domo the architectural elasticity our model requires. Using analytics, we can now scale our environment to match demand. What we've built is a system that scales without adding management overhead or unnecessary cost, all the while maintaining optimal performance. Well, really, this is just our journey up to now, which begs the question: what's next for us? We'll expand the use of Domo for Domo within our own application stack. Maybe more importantly, we'll continue to build logic into the tools we have by bringing machine learning and artificial intelligence to our analysis and decision making. To further illustrate those priorities, we announced support for Amazon SageMaker Autopilot at our Domopalooza conference just a couple of weeks ago. For Vertica, the future must include in-database autonomy, and the enhanced capabilities in the new Management Console are, to me, a clear nod to that future. In fact, with a streamlined and lightweight database design process, all the pieces would be in place for Vertica to deliver autonomous database management itself. We'll see. Well, I would like to thank you for listening, and now, of course, we will have a Q&A session, hopefully a very robust one. Thank you. [Applause]
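The checks Ben walks through, too many projections on a table, non-partitioned fact tables with a date or timestamp column, threshold-triggered remediation, amount to a small rules engine over metadata pulled from the system tables. Here is a minimal sketch in Python; the metadata shape and the projection threshold are illustrative assumptions, not Domo's actual Domo-for-Domo implementation.

```python
# Sketch of a rules-based engine over table metadata (as one might pull
# from Vertica's system tables via a connector). Each rule inspects a
# table's metadata and emits a human-readable tuning recommendation.

PROJECTION_LIMIT = 4  # "too many projections" threshold (assumed value)

def recommend(tables):
    """tables: list of dicts describing table metadata. Returns recommendations."""
    recs = []
    for t in tables:
        # Rule 1: too many projections can slow loads and bloat the catalog.
        if t["projection_count"] > PROJECTION_LIMIT:
            recs.append(f"{t['name']}: review/drop unused projections "
                        f"({t['projection_count']} found)")
        # Rule 2: non-partitioned fact table with a timestamp column ->
        # suggest partitioning by month, as described in the talk.
        if t["is_fact"] and not t["is_partitioned"] and t.get("ts_column"):
            recs.append(f"{t['name']}: PARTITION BY month of {t['ts_column']}")
    return recs

tables = [
    {"name": "sales_fact", "projection_count": 2, "is_fact": True,
     "is_partitioned": False, "ts_column": "sale_date"},
    {"name": "dim_region", "projection_count": 7, "is_fact": False,
     "is_partitioned": True},
]
for r in recommend(tables):
    print(r)
```

In the talk's terms, the output of a pass like this is either surfaced as an alert or rendered into a script for immediate or deferred execution, gated by configurable thresholds.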

Published Date : Mar 31 2020


Ashok Ramu, Actifio | CUBEConversation January 2020


 

>> From the SiliconAngle media office in Boston, Massachusetts, it's theCUBE! Now, here's your host, Stu Miniman. >> Hi, I'm Stu Miniman, and welcome to theCUBE's Boston-area studio. Welcome back to the program, CUBE alum, Ashok Ramu, Vice President and General Manager of Cloud at Actifio, great to see you. >> Happy New Year, Stu, happy to be here. >> 2020, hard to believe; it feels like we're in the future here. And talking about the future, we've watched Actifio for many years; we remember when copy data management, the category, was created. Really, Actifio, we were talking a lot before Cloud was the topic that we spent so much time talking about, but Actifio has been on this journey with its customers in Cloud for many years, and of course, that is your role: building the product, with the team working all over it. So give us a little bit of the history, if you would, and the path that led to the 10C announcement. >> Sure thing. We started the Cloud journey early on, in 2014 or 2013-ish, when Amazon was the only Cloud that really worked. We built our architecture; in fact, we took our enterprise architecture and put it on the Cloud and realized, "Oh my god," you know, it's a world of difference. The economics don't work, the security model is different, the scale is different. So, I think with the 8.0 version that came out in 2017, we really figured out the architecture that worked for large enterprises, particularly enterprises that have diverse data sets and have requirements around marrying different applications to data sets anywhere they want. So we came up with efficient use of object storage, and we came up with the capability of migrating workloads: taking VMware VMs, bringing them up on Azure, bringing them up on GCP, et cetera. So that was the first foray into Actifio's Cloud, and since then, we've been just building strength after strength, you know.
It's been a building block, understanding our customers, and thank you to the customers and the hyperscalers that actually led us to the 10C release. So this, I believe, we've taken it up a notch wherein, we understand the Cloud, we understand the infrastructure, the software auto-tunes itself to know where it's running on, taking the guessing game out of the equation. So 10C really represents what we see as a launchpad for the rest of the Cloud journey that Actifio's going to embark upon. We have enabled a number of new use cases like AI and ML, data transformation is key, we tackled really complicated workloads like HANA and Sybase and MySQL, et cetera, and in addition to that, we also adopt different native Cloud technologies, like Cloud snapshots, like recovery orchestration of the Cloud, et cetera. >> Yeah, I think it's worth reminding our audience there that Actifio's always been software. And when you talk about, you know, I think back to 2013, 2014, it was the public Cloud versus the data center, and we have seen the public Cloud in many ways looks more and more like what the enterprise has been used to. >> Absolutely. >> And the data centers have been trying to Cloud-ify for a number of years, and things like containerization and Kubernetes is blurring the line, and of course, every hyperscaler out there now has something that reaches their public Cloud into the data center and of course, technologies like VMware are also extending into the public Cloud, or, SAP now, of course is all of the Cloud environment. So with hybrid Cloud and multi-Cloud as kind of the waves of driving, help us understand that Actifio lives in all of these environments, and they're all a little bit different, so how does Actifio make sure that it can provide the functionality and experience that users want, regardless of where it is? >> Absolutely, you said it right. Actifio has always been a software company. 
And it is our customers that showed us, by Cloudifying their data centers, that we had to operate in the Cloud. So we had on-premises VMware Clouds even before we had Amazon and Azure and Google. So that evolution started much earlier on. And so, you know, Actifio's a very customer-driven company, all segments of the company are driven by the customers, and in 2019, and even before, when you see a strong trend to migrate workloads, to move workloads, we realized there is a significant opportunity, because the hardest thing to migrate is the volume of data, because it's ever-changing, and it is ever-growing. So, the key element of neutrality was the application itself. Microsoft SQL's a SQL no matter how you run it. It could be on a big Windows machine in your data center or in GCP, it makes no difference. So Actifio's approach, to start application-down, basically gave us the freedom to say, we're going to treat SQL as SQL. I don't care if you're running in Azure, Google, your own data center, or AliCloud, it makes no difference to me. I understand SQL, I understand SQL's availability groups, I understand logs, I can capture it and give it back to you, so when we took that approach, it kind of automatically gave us infrastructure neutrality, we really didn't care. So when we have a conversation with a customer, it basically goes along the lines of, "Okay, Mr. Customer, how much data do you have? And what are your key applications? Can you categorize them in terms of priority?" It usually comes out to be databases are the crown jewels, so they're the number one priority in terms of data management, migration, test/dev, et cetera. And then, we basically drill down into the ecosystem the databases live in. So, because we work application-down, the conversation is the same whether the customer is in the data center, or in the Cloud. 
So that is how we've evolved, and that's how we're thinking from a product standpoint, from a support standpoint, and then the overall company is built that way. So it makes it easy for us to adapt a new platform that comes in. So, when you talked about, you know, how does, each Cloud is different, you're absolutely right, the security concepts are different, right? Microsoft is built on active directory, Google is built on something very different. So how do you utilize and how do you make this work? We do have an infrastructure layer that basically provides Cloud-specific capabilities for various Cloud platforms. And that has gotten to a point where it understands and tunes itself from a security standpoint and a performance standpoint. Once that's taken care of, the rest of the application stack, which is over 90% of our software, stays the same, there's no change. And so that is how we kind of tackle this. Because the ecosystem we live in, we have to keep up with two people. We have to keep up with the infrastructure people who are making it bigger, faster, and we also have to keep up with the application people who are making it fancier and more complicated. So that's unfortunately the ecosystem we live in, and taking this approach has given us a mechanism to insulate us from a lot of the complexities of these two environments. >> Yeah, that's great, 'cause when you talk to customers and you say, "What's going on in your environment," change is difficult. So, how many different pieces of what I'm doing do I need to move to be able to take advantage of the modern economics. On the one hand, you know, if I have an application and I like it, well, maybe I could just lift and shift it, but if I'm just lifting, shifting, I'm not necessarily taking advantage of the full Cloud native environments, but I need to make sure that my data is protected, backup, you mentioned security, are of course the top concerns, so. 
It sounds like, in many ways, you're talking, helping customers work through some of those initiatives, being able to take advantage of new environments, but not need to completely change everything. Maybe, I'd love to hear a little bit, when you talk about the developers and DevOps initiatives that are happening inside customers, where does that impact, where does that connect with what Actifio's doing? >> Well, that's a great question. So, let me start with a real customer example. We have this customer, SEI Investments, who basically, their business model is to grow by acquisition, so they're adding on tens, hundreds of developers every quarter. So it's impossible to keep up with infrastructure needs when you grow at that pace. They decided to adopt a Cloud platform. And with each Cloud platform comes some platform-specific piece that all these developers now have to re-tool themselves for. So, I'm a developer, I used to come in the morning, open up my machine and start working away on the application, now I have to do something different, and if there is 300 of me, the cost of moving to the Cloud was a lot less than training the developers. It was much harder to train the developers because it's an ongoing process. So we were presented the challenge of how do you avoid it? So, when we are able to separate the application layer from the data layer, because of the way we operate, what we presented as a solution was to say, just move your, what is the heaviest layer you have? That's the database, okay. And what are the copies you're creating? I'm creating hundreds of copies of my Oracle database, okay. Let's just move that to the Cloud. All of the front-end application doesn't see a change, thanks to the great infrastructure work the Cloud providers do, you've got 10 gigabit everywhere. So network is not a problem, compute's not a problem, it's just available on an API call, so you provision that. 
All they did was a data movement, moved it from Point A to Point B, which gives you the flexibility to spin up any number of copies you want in the Cloud. Now, your developer tool sets haven't changed, so there's no training required for developers, but from an operations standpoint, you've completely eased the burden of creating a hundred more copies every month, because Cloud is built for that. So you take the elasticity of the Cloud, take advantage of that, and provide the data in the last mile to the Cloud, thereby developers will access the application with the same level of ease. So, that is the paradigm we're seeing. We're seeing, you know, in some of our customers, there is faster and better storage provisioned for Actifio because there are 190 developers working off Actifio, where there's only about a handful of people running production. So, it's a paradigm shift is where we see it. And the pace at which we bring up the application, wherein we're able to bring up a 150 terabyte Oracle database in three hours. Before Actifio, it used to be, maybe, 30 days, if you were lucky. So it's not just an order of magnitude, it's what you can do with that data, is where we're seeing the shift going to. >> Yeah, it's interesting, when you go back and look at some of the changes that have happened in the Cloud, Cloud storage was one of the earliest discussed use cases there, and backup to the Cloud was one of the earlier pieces of the Cloud storage discussion. Yet, we've seen changes and maturation into what can actually be done, explain a little bit how Actifio enables even greater functionality when you're talking about backup to the Cloud. >> Absolutely. You know, the object storage technology, it's probably the most scalable and stable piece of storage known to mankind, because nobody can build that level of scale that Amazon, Azure, and Google have put into it. From a security standpoint, performance standpoint, and scale standpoint. 
So I'm able to drop my data in Boston and pick it up in Tokyo seamlessly, right? That's unheard of before. And the biggest impediment to that was a lot of legacy application data didn't know how to consume this object storage. So what Actifio came up with, our OnVault technology, was to light up the object storage for everybody, and basically make it a performance-neutral platform, wherein you take the guessing game away from the customer. The customer doesn't need to go research S3 or Google Nearline or Google Persistent Disk and say I want ten copies there versus five copies there, Actifio figures it out for you. You give us your SLA, you give us your RTOs and RPOs, and we tell you, okay, this is the most cost effective way to store your data. You get the multi-year retention for free, you get the GDPR compliance and protection for free, you get the geo-redundancy for free. All this is built into the platform. In addition, you also can run DevOps off the object store. You can run DR off the object store. So we enabled a lot of the legacy use cases using this new technology, so that is kind of where we see the cusp, wherein, in the Cloud, there's always a question and a debate, does dedupe make sense? Dedupe consumes a lot of compute, takes a lot of memory, you need to have that memory and compute whether you want it or not. We're seeing a lot more adoption of encryption, where the data is encrypted at source. When you encrypt data, dedupe is just a big compute-churning platform, it doesn't do much for you. So we went through this debate actively, I think four or five years ago, and we figured out, object store's the way to go. You cannot beat that storage, I mean, it's a buck a terabyte in Google, and dropping. How can you get storage that's reliable, scalable, at a lower cost? All we had to do was actuate the use of that storage, which is what we did. >> Yeah. 
I'm just laughing a little bit because, you know, gosh, I think back a dozen years ago, the industry knew that the future of storage would be object, yet it's taken a long time to really be able to leverage it and use it, and the Cloud, the hyperscalers of course, have been a huge enabler on that, but we don't want customers to have to think about that it's object underneath, and that's the bridging of the gap that I think we've been looking for. There, what else. We talk about really being able to extract the value out of Cloud, you know, data protection, disaster recovery, migrations are all things that are top of mind. >> Yeah, absolutely. All those use cases, and we're seeing some of the top-ranking CIOs talk about AI and ML. We've had a couple of customers who want to basically take their manufacturing data from remote sites and pump it into Google BigQuery. Now we all know manufacturing happens in Taiwan and Singapore and all those locations, now how do you take data from all those applications, normalize it, and pump it into Google BigQuery and get your predictable results on a quarterly basis, it's a challenge. Because the data volumes are large. So with our Cloud technology and our OnVault capability, we're able to funnel data directly into Google Nearline, and on a quarterly basis, on a scheduled basis, transform it, push it into BigQuery, and bring out the results for the end user. So that journey is pretty transformative, from a customer standpoint. What they used to have five people do maybe once a year, now with a push of a button happens every quarter. So it's a change in how the AI and ML analytics evolve. The other element is also, you know, our partnership with IBM, we're working very closely with their Cloud Pak for Data. Cloud Pak for Data is an awesome platform built to analyze any kind of data that you might have. 
With Actifio's normalization platform, you basically can feed any data into Actifio and it presents a unified interface into the Cloud Pak, so you can build your analytics workloads very quickly and easily. >> So we've talked a lot about Cloud, one of the other C's of course in 10C is containers, if we look at containerization, when it first started, it was stateless applications, most applications that are running in containers are running for very short periods of time, so help us understand where Actifio fits there, what's the problem statement that you are solving? >> Oh, absolutely. So containers are up and coming and are now a reality, and as we see more applications flow into containers, you see the data lives outside the container. Because containers are short-lived, they're microservices, they come up and they go down, and the state is maintained in a storage platform outside the container, so Actifio tackles containers by taking the data protection strategy we have for the storage platform, already well defined, but enhancing the data presentation into the container as it comes up. So a container can be brought up in seconds, maybe less. But the container is only brought to life when it can read its data and start working again, so that's the bridge Actifio actuates. So we understand, you know, the architecture of how a container is put together, how the container system is put together, and basically, we marry the storage, and the application consistency in the storage, into the container so that the container's databases, or applications, come to life. >> And that could be in a customer's data center, in a public Cloud, Kubernetes enabled, all of that? 
>> Absolutely, it can be anywhere, and with 10C, what we have done is we've also integrated with Cloud-native snapshots, so if you talk about neutrality for the container platform, if it's on premises, we have all kinds of access to the storage, the infrastructure, and the platforms, so our processing is very different. If you take it to the Cloud, let's say Google, the Google Kubernetes platform is fairly, it's a black box. You get some storage, and you get containers. And you have API access to the storage. So in Google, we automatically autotune and start taking the Google snapshots to do the storage protection, so that's the other way we've kind of neutralized the platform. >> Yeah, thinking about it just from a customer's standpoint, one of the big challenges there is they've got everything from their big monoliths, their big databases, through these microservice Cloud native architectures there, and it sounds like, you know, is that just one of the fundamental architectural designs, to make sure that you can span across those environments and give customers a common look and feel between those environments? >> Absolutely. The single pane of glass is a big ask and a big focus for us, not just across infrastructure, it's across geos and across all platforms. So you could have workloads running on AIX, VMware, in the Cloud, all the way through containers, and manage it all through a single console, to know when was the last good backup, how many copies of the database am I running, and each of these databases could have their own security constructs. So we normalize all of those elements and put them in a single console. >> Okay, 10C, shipping today? >> 10C shipping today, we have early access for a few customers, the general availability release is possibly in the February timeframe. >> Okay, and if I'm an existing Actifio customer, what's the path for me to get to 10C? 
Our support will reach out and do a simple software upgrade, it's available on all Cloud platforms, it's available everywhere, so you will see that on all the marketplaces, and the regular upgrade process will get you that. >> Okay, and if I'm not an Actifio customer today, how easy is it for me to try this out? >> Oh, it is very easy, with our Actifio GO SaaS platform, it's a one-click download, you can download and try it out, try all the capabilities of the platform, it's also available on all the Cloud marketplaces for you to go and access that. >> All right, well, Ashok, a whole lot of pieces inside of 10C, congratulations to you and the team for building that, and definitely look forward to hearing more about the customer deployments. >> Thank you, we have exciting times ahead. >> All right. Lots more coverage from theCUBE throughout 2020, be sure to check out theCUBE.net, I'm Stu Miniman, thanks for watching theCUBE. (techno music)
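Ramu's point about SLA-driven placement — "you give us your RTOs and RPOs, and we tell you the most cost-effective way to store your data" — boils down to picking the cheapest tier that still meets the recovery objective. A minimal sketch of that selection logic follows; the tier names, restore times, and prices are hypothetical illustrations, not Actifio's or any cloud provider's actual catalog or algorithm.

```python
# Illustrative sketch of SLA-driven storage tier selection.
# Tiers, restore times, and prices are hypothetical, not a real catalog.
from dataclasses import dataclass


@dataclass
class Tier:
    name: str
    restore_hours: float      # worst-case time to bring data back (bounds RTO)
    usd_per_tb_month: float   # monthly storage cost per terabyte


TIERS = [
    Tier("block-ssd", restore_hours=0.25, usd_per_tb_month=80.0),
    Tier("object-standard", restore_hours=2.0, usd_per_tb_month=20.0),
    Tier("object-nearline", restore_hours=8.0, usd_per_tb_month=10.0),
    Tier("object-archive", restore_hours=24.0, usd_per_tb_month=1.0),
]


def pick_tier(rto_hours: float) -> Tier:
    """Cheapest tier whose worst-case restore time still meets the RTO."""
    eligible = [t for t in TIERS if t.restore_hours <= rto_hours]
    if not eligible:
        raise ValueError(f"no tier can meet an RTO of {rto_hours}h")
    return min(eligible, key=lambda t: t.usd_per_tb_month)


print(pick_tier(4.0).name)  # object-standard: cheapest tier meeting a 4h RTO
```

The same shape extends to the other SLA dimensions Ramu mentions — RPO and retention would each prune the eligible set before the cost comparison.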

Published Date : Jan 6 2020



Matthew Magbee, Sonic Healthcare | Commvault GO 2019


 

>>Live from Denver, Colorado, it's theCUBE, covering Commvault GO 2019. Brought to you by. >>Hey, welcome back to theCUBE, Lisa Martin with Stu Miniman. We are covering Commvault GO 19 in Colorado, day two of our coverage, and we're excited to welcome a successful Commvault customer to theCUBE. We have, from the main stage this morning, Matthew Magbee, data center director of Sonic Healthcare. Matthew, welcome. >>Thank you for having me. This is so exciting. >>Oh good, we're excited to have you. So you are, as your pin says, a Commvault customer champion. >>I am a customer champion, I've kind of prided myself on that for the last few years. Uh, I like to get involved in the community and kind of help the other newcomers to Commvault, as well as better my understanding, and try to give the guys on the other end of the support line a break. >>So before we dig into Sonic and what you guys are doing and how you're working with Commvault, give our audience an overview of Sonic Healthcare, what you guys do, where you're based, all that good background stuff. >>Okay. So I work for Sonic Healthcare USA, so that's obviously in the United States. Uh, we are an anatomical and clinical pathology laboratory company. Um, we are based, uh, West coast, central, and East coast of the United States, and we work with hospitals and doctors' offices to provide, you know, quick and reliable laboratory results. >>So this is patient data. >>Yes. >>We think of data, as I'm sure you do as well, it's the lifeblood. It's the new oil. It's all the things, right? And you hear the new bacon. It's the new bacon, that was like your quote? I saw that at Commvault GO last year. >>Yeah, they had t-shirts last year with that: data is the new bacon. >>Well, it's critical, you know, regardless of whether you're comparing it to bacon, I do like that. But there's also the proliferation of it, which is hard to manage. 
Tell us a little bit about the IT environment at Sonic. You guys have been using Commvault for about four years, but give us an overview of what you were working with before, and what maybe some of the compelling events were. >>So coming on board with Sonic, uh, the Commvault rollout was relatively new. I didn't really come into a preexisting environment. It was like, okay, this is what we're going to use, I need you to learn it and run with it, make sure that it works. Right. And um, you know, coming from other companies that had different software applications, I was always in charge of the disaster recovery. That's always been kind of like a beating heart for me. >>You're the DR guy. >>It is, apparently. It's really hard to find someone who's excited about backups. So it's like, yes, please take it. So coming in and being able to mold this application to kind of how I wanted it was a little touch and go at first, because we had people out of our, um, overseas office that were handling it already, and they kind of set the stage of how they wanted it to go. But, you know, things change, we've got to kind of move things as we go, but I owe a lot to them for really introducing me to Commvault. >>So Matthew, one of the things that we've really enjoyed talking about at this show is everybody's ready, they're born ready, they know what they're doing and what they're preparing for when things do fail. So you talked a little bit on stage about some of those times when things fail, and how today you're able to be here, and the other person in the DR group is here, and you don't have to worry about walking away from the office and, you know, having, I guess, not a pager anymore, but getting that call. >>Yeah, they'd need to reach me, so, my cell phone. >>But yeah, so bring us through some of those, you know, failure scenarios. >>We are always trying different things. 
You know, Commvault does offer a wide array of different solutions in their plans, and one of them is their Active Directory plan. And I'm leaning towards this cause this is my most recent failure. You know, I've always had issues with Active Directory testing. The failover, my first attempt at it, was a failure. But I learned so much off the bat that I'm actually comfortable now that, with a few tweaks that we have to do, worst case scenario, we'd definitely be able to get it back online without any issue. But if we would've gone into it without testing, without that failure, who knows what could have happened. It could've been just a resume-generating event, you know? >>Well, you, Stu, alluded to it, and what you mentioned in the keynote was, hey, my only other DR guy is here in the audience. >>So I actually, I have a data center team and we're all in charge. It's eight people, and we're in charge of the disaster recovery. But, uh, the other gentleman who's with me is the only other one who's done a lot of the Commvault training. He comes, too, he's been to all three Commvault GOs with me, and uh, he's probably, if I'm not around, he's the next in line to take that. So if there's a major issue, it would be one of us that they would contact. But we're both here. >>And you're both here. Well, that actually speaks volumes. >>It does. And we're comfortable, and you know, we've been checking email for things, but you know, everything's smooth sailing so far. >>I think I saw a quote from you, I think it was in a video, where you said before, it was like having a newborn. >>Absolutely, absolutely. I used to check in at like 10 o'clock every single night for the first year that I worked for Sonic, cause I was petrified. Cause you know, I knew that I was backing stuff up, but I don't know, was it still running, was it still being backed up? Did it pause? 
Was it causing performance issues on the other end? There were so many what-ifs, and I was a mess, I was a nervous wreck constantly, you know, working till one or two in the morning, then going to bed, then eagerly getting up and checking stuff even before I left the house, you know? And I'm like, oh, okay, that's finished. But now it's like, yeah, I know it finished, I'm not worried about it. >>Matthew, I think back to early in my career, it was the dreaded backup window, you know, when am I going to be able to get that in there? Can I finish the backup in the window that I have? And we've mostly gotten beyond that. But you know, there's so much new now; we were just talking with Sandy Hamilton, who was on stage before you, about some of that automation. Automation sounds good, but there's gotta be a little bit of fear. It's like, you know, texting: we've all texted the wrong thing or the wrong person. So tell us your thoughts about how automation is impacting your world, and how Commvault helps. >>I actually have very little automation workflow running through Commvault right now. A lot of the stuff that we do automation-wise lies on the VMware side. Um, so that's been good. I haven't really implemented a lot just because I personally am not comfortable with it yet. I'm not against it. It's just something that I haven't really trained myself enough on to say I'm going to leave and let this run by itself. I'm still like, oh no, this could be better. This could be better. This can be better. 
So until I'm 100% comfortable with that, I think we'll just leave it at a semiautomated task. >>You said something about down the road, though. >>Absolutely. Even sitting in the keynote yesterday and listening about the Alexa automation and SMS texts, I was writing on a piece of paper to test that, because it's something that I've always wanted, and ever since Commvault GO last year, when they were using Alexa to check SLA and RPO and RTO, I'm like, I want to be able to do that. So that's definitely down the road, but it's on the back burner right now. >>So give us a landscape view, distributed organization. You talked about your base in the US, but with all of the different clinics and organizations that you work with, are you living in this multi-cloud world? >>So, uh, we are pretty much a zero-cloud-initiative company. Yeah, I'm actually trying to work on a slogan, "zero cloud and proud" or something like that, but I'm not 100% sure. It's definitely not out of the question. But with FedRAMP compliance and HIPAA, there's just a lot of regulation around the data that we have for the US that transmits back and forth to, let's say, Australia or Ireland or something like that. There's certain regulations that we have to deal with, and uh, in the cloud, there's very few options of where you can actually have those servers. So right now, you know, on-prem is kind of our jam. >>So as a lot of organizations are going through FedRAMP certification, I was just at one of Dell's events the other week, they're going through it, I know some other, like, e-signature companies are doing it, a lot of companies are. Are you paying attention to that? Is that something that you think in the future might provide more confidence? >>Completely transparent, it's something I should be paying more attention to. I just really haven't done as much research as I should have, and you know, I take full responsibility for that. 
But at the same time, you know, there's a lot of other things going on in the US, so until we implement something of that nature, I don't really think I'm too concerned about it. >>So Matthew, you've been to a few of these events. Last year there was a lot of talk about the coming change, and this year, lot of new faces, new Hedvig and Metallic announcements. So we want to get your impression on the executive changes; are you seeing any indications of organizational changes in the products? >>What I'm seeing is new life in a product that I've always been told is a dinosaur, which I kind of laugh at, cause I'm like, well, this dinosaur is doing things that, you know, the latest and greatest things aren't really doing. So to see this new life, the new rebranding of the logo, the new leadership, the new acquisitions and everything, is just like feeding fuel to the fire that is Commvault. And I'm pretty excited. I will say that I'm a little bit more excited about the new additions like Orchestrate and Activate, since stuff like Metallic I won't really be implementing, just because of our business practices. >>Let's talk, in our last few minutes here, cause they actually talked about some of the new technologies with Orchestrate and Activate yesterday and today, but in terms of support: as you heard us mention, we just interviewed Sandy Hamilton, and she's come on board in the last, I think she said, four and a half months, owning professional services, systems engineering, support, customer success, throughout the entire life cycle. Tell us a little bit, in our closing minute or so, about the support and training that you've gotten from Commvault that give you the confidence for you and one of your other guys to be here and not tied to your phone. >>I don't think I'd still be with Commvault if it wasn't for their support. I owe so much to their support. 
They've brought me through some pretty dark times, with deployment, with troubleshooting, with failures where I thought that I had things right and it just didn't work. I've called in at one in the morning and got great support, I've called in at 10 in the morning and got great support, phenomenal follow-up. Um, their community impact, like their forums and their customer champions, so much additional information that helps you not have to call in, and not make you feel like, oh, that failure. So I owe a lot to their support and their training, because without it, I wouldn't be on stage. >>I wonder if you could put a point on that: the forums, your participation as a customer champion, you're spending your own time. Why is that so important, and what is the vibrancy of this community? >>You know, it belongs to the world, you know, sharing the things that you learn. Somebody taught me, so why shouldn't I teach somebody else? And if that makes someone else be able to go out and ride mountain bikes or cook with their daughter or do anything like that, then I'm all for it, because it got me through all that. So I mean, if I have 10, 15 minutes on the customer forum to answer you, oh yeah, I know that, I've seen that. I had a gentleman the first morning at breakfast, like, I've had a ticket open for two weeks and they can't figure it out. And we worked together and actually got his problem solved, you know? And it was only because I'd seen that, and I worked with Commvault and they showed me how to fix it, and I retained that knowledge. >>That's awesome. That takes paying it forward to a whole new level. And it also speaks volumes about how you followed Jimmy Chin this morning and nailed it. >>I tried. It was very difficult. You know, I'm sure that when he was filming that solo climber, he had sweaty palms. I definitely had sweaty palms. 
It was, well, Matthew, what a pleasure to have you on the program. So much fun. Thank you. Congratulations on your success and we look forward to hearing it. Many more great things out of Sonic. Thank you. All right. First to a minimum I and Lisa Martin, you're watching the cube from combo go 19.

Published Date : Oct 16 2019


Steve Newman, Scalyr | Scalyr Innovation Day 2019


 

>> From San Mateo, it's the Cube, covering Scalyr Innovation Day, brought to you by Scalyr. >> Welcome to the special Innovation Day with the Cube, here in San Mateo, California, heart of Silicon Valley. John Furrier with the Cube. Our next guest is Steve Newman, the co-founder of Scalyr. Congratulations. >> Thanks for having us. >> You guys got a great company here. >> Thanks. >> Yeah, glad to have you here. So tell the story. What's the backstory? You guys founded it, an interesting pedigree of founders, all tech entrepreneurs, tech savvy, tech athletes as we say. Tell the backstory. How'd it all start, and how'd it all come together? >> So I usually trace the story back to, I was part of the team that built the original Google Docs, and a lot of the early people here at Scalyr either were part of that Google Docs team or they're people we met while we were at Google. And really, Scalyr is an outgrowth of, it's a solution to, problems we were having trying to run that system at Google. Google Docs, of course, became part of a whole ecosystem with Google Drive and Google Sheets, all these applications working together. It's a very complicated system, and keeping that humming behind the scenes became a very complicated problem. >> Well, congratulations. Google Docs is used by a lot of people, so that's been a great success. Scalyr is different, though. You guys are taking a different approach than the competition. What's unique about it? Can you share kind of the history of where it came from and where it's going? >> Yeah, so maybe it'd be helpful just to kind of set the context a little bit. To the blackboard. So, you know, to put a little flesh on what I was saying about there being a very complicated system that we're trying to run in the whole Google Drive ecosystem: there are all these trends in the industry nowadays, the move to the cloud and microservices and Kubernetes and serverless and continuous
deployment. These are all great innovations. People are building more complex applications, they're evolving faster, but it's making things a lot more complicated. And to make that concrete, imagine that you're running an e-commerce site back in the web 1.0 era. So you're gonna have a web server, maybe Apache. You've got a MySQL database behind that with your inventory and your shopping carts. Maybe an email gateway and some kind of payment gateway, and that's about it. That's your system. Each one of these pieces involved, you know, going to Fry's, buying a computer, driving it over to the data center, slotting it into a rack. A lot of sweat went into every one of those boxes, but there's only about four boxes. That's your whole system. >> If you wanted to go faster, you threw more hardware at it. More RAM. >> Exactly. And, you know, not literally threw, but you literally brought in more hardware. And so it took a lot of work just to run that simple system. Fast forward a couple of decades. If you're running an e-commerce site today, well, you're certainly not seeing the inside of a data center. Stripe will run the payments for you, somebody will run the database server for you, and, you know, one guy can get this going in an afternoon, literally. But nobody's running just this today. This is not a competitive operation today. If you're in e-commerce today, you also have personalization and advertising based on the search history or purchase history, and there's a separate flow for gifts, and then there's interfacing to your delivery service, and you've got 150 blocks on this diagram. And maybe your engineering team doesn't have to be so much larger, because each one of those boxes is so much easier to run, but it's still a complicated system, and trying to actually understand what's
working, what's not working, why isn't it working, and tracking that down and fixing it, this is the challenge today. And this is where we come in. >> And that's the main focus for today, is that you can figure it out, but the complexity of the moving parts is the problem. >> Exactly. So, you know, you see, oh, 10% of the time that somebody comes in to open their shopping cart, it fails. Well, the problem pops out here, but the root cause turns out to be a problem with your database system back here, and figuring that out, that's the challenge. >> Okay, so with cloud, technology economics has changed. How is cloud changing the game? >> So it's interesting. It changes the game for our customers, and it changes the game for us. For a customer, kind of like we touched on a little bit, things are a lot easier. People run stuff for you, you're not running your own hardware, often you're not even running your own software, you're just consuming a service. It's a lot easier to scale up and down, so you can do much more ambitious things and you can move a lot faster, but you have these complexity problems. For us, it presents an economy of scale opportunity. So, you know, we step in to help you on the telemetry side: what's happening in my system, why is it happening, when did it start happening, what's causing it to happen. That all takes a lot of data, log data and other kinds of data. Every one of those components is generating data, and by the way, for our customers, now that they're running a hundred and fifty services instead of four, they are generating a lot more data. So traditionally, if you're trying to manage that yourself, running your own log management cluster or whatever solution, it's a real challenge as you scale up and your system gets more complex. You've got so much data to manage. We've taken an approach where we're able to service all of our customers out of a single
centralized cluster, meaning we get an economy of scale. Each one of our customers gets to work with a log management engine that's scaled to our scale, rather than the individual customer's scale. >> So the older versions of log management had the same kind of complexity challenges you just drew for e-commerce. As the data types increase, so does the complexity? >> So the complexity increases, but you also get into just a data scale problem. Suddenly you're generating terabytes of data, but you only want to devote a certain budget to the computing resources that are gonna process that data. Because we can share our processing across all of our customers, we fundamentally change the economics. It's a little bit like when you go and run a search on Google. Literally thousands of servers are involved; in that tenth of a second that Google is processing the query, 3,000 servers on the Google side may have been involved. Those aren't your 3,000 servers, you're sharing those with 50 million other people in your data center region, but for a millisecond there, those 3,000 servers are all for you. And that's a big part of how Google is able to give such amazing results so quickly, but still economically. >> Yeah, economically for them. >> And on a smaller scale, that's basically what we're doing: taking the same hardware and making all of it available to all of the customers. >> People talk about metrics as the solution to scaling problems. Is that correct? >> So this is a really interesting question. Metrics are great. If you look up the definition of a metric, it's basically just a measurement, a number. And it's a great way to boil things down. You know, I've had 83 million people visit my website today, and they did 163 million things. You can't make sense of that, but you can boil it down to, this is the amount of traffic on the
site, this was the error rate, this was the average response time. So these are a great summarization to give you an overall flavor of what's going on. The challenge with metrics is that they tend to be a great way to measure your symptoms: site's up, it's down, it's fast, it's slow. When you want to get to the cause of that problem, exactly why is the site down, I know something's wrong with the database, but what's the error message, what's the exact detail here, a metric isn't going to give that to you. And in particular, when people talk about metrics, they tend to have in mind a specific approach, where this flood of events and data is distilled down very early: let's count the number of requests, measure the average time, and then throw away the data and keep the metric. That's efficient. Throwing away data means you don't have to pay to manage the data, and it gives you this summary. But then as soon as you want to drill down, you don't have any more data. So if you want to look at a different metric, one that you didn't set up in advance, you can't do it, and if you need to go into the details, you can't. >> An interesting story about that. When you were at Google, you mentioned the problem statements came from Google, but one of the things I love about Google is they really nailed the SRE model. They clearly decoupled roles, developers and site reliability engineers, who are essentially in a one-to-many relationship with all the massive hardware, and that's a nice operating model. It had a lot of efficiencies, was tied together. But you guys are kind of saying that as developers use the cloud, they become their own SREs in a way, because the cloud can give them that kind of Google-like scale. In smaller ways, not at Google size, but a similar dynamic, where there's a lot of compute and a lot of things happening on behalf of the
application or the engineers. As developers become the operator through their role, what challenges do they have, and how do you see that happening? Because that's an interesting trend: as applications become larger and cloud can service them at scale, developers become their own SREs. How does that roll out? >> Yes, this is something we see happening at more and more of our customers, and one of the implications is you have all these people, these developers, who are now responsible for operations, but they're not that specialist SRE team. They're specialists in developing code, not in operations. They minor in operations, and they don't think of it as their real job. It's a distraction. Something goes wrong, all right, they're called upon to help fix it, and they want to get it done as quickly as possible so they can get back to their real job. So they're not gonna make the same mental investment in becoming an expert at operations and at the operations tools and the telemetry tools. They're not gonna be a log management expert or a metrics expert. So they need tools that have a gentle learning curve and are gonna make it easy for them to get in, not really knowing what they're doing on this side of things, find an answer, solve the problem, and get back out. >> And that's kind of a concept you guys have of speed to truth. >> Exactly, and we mean a couple of things by that. Most literally, our tool is a high performance solution. You hand us your terabytes of log data, you ask some question, you know, what's the trend on this error in this service over the last day, and we scan through that big data and give you a quick answer. But really, that's just part of the overall chain of events, which goes from the developer with a
problem until they have a solution. They have to figure out even how to approach the problem, what question to ask us. They have to pose the query in our interface, and so we've done a lot of work to simplify that learning curve, where instead of a complicated query language, you can click a button, get a graph, and then start breaking that down visually. Okay, here's the error rate, but how does that break down by server or user or whatever dimension, and be able to drill down and explore in a very straightforward way. >> How would you describe the culture at Scalyr? I mean, you guys have been around for a while, still a fast-growing startup. You haven't done the B round yet, you guys self-funded it, got customers early, they pushed you, and now 300-plus customers. What's the culture like here? >> So this has been a fun company to build, in part because the heart of this company is the engineering team, and our customers are engineers. So we're kind of the same group, and that keeps the inside and the outside very close together. I think that's been a part of the culture we've built: we all know why we're building this and what it's for. We use Scalyr extensively internally, but even if we weren't, it's the kind of thing we've used in the past and we're gonna use in the future. And so I think people are really excited here, because we understand why. >> And you have an opinion of the future on how it should roll out. What's the big problem statement you guys are solving as a company? How would you boil that down, if asked by a customer or an engineer out there: what real problem are you solving, the core problem, the big problem that's gonna be helping me? >> You know, at the end of the day, it's giving people the confidence to keep building these kind of
complicated systems and move quickly. Because, and this is the business pressure everyone is under, whatever business you're in, it has a digital element, and your competitors are doing the same thing. They are building these sophisticated systems, they're adding functionality, and they're moving quickly. You need to be able to do the same thing, but it's easy then to get tangled up in this complexity. So at the end of the day, we're giving people the ability to understand those systems. >> And the software's getting stronger and stronger, more complicated, with service meshes and microservices, as applications start to have the ability to stand up and tear down services on the fly. That'll yield even more data. >> Exactly, you get more data, it gets more complicated. Actually, if you don't mind, there's a little story I'd like to tell, so hold on while I clear this out. This is going back to Google, and again, kind of part of the inspiration of how we came to build Scalyr. >> And this would be a story of frustration, of the operation and motivation you found yourselves in. >> Yep. So we were working on this project, building a file system that could tie together Google Docs, Google Sheets, Google Drive, Google Photos, and the block diagram looks kind of like the thing I just erased. But there was one particular problem we had that took us literally months and months to track down. You'd like to solve a problem in a few minutes or a few hours, but this one took months, and it had to do with the indexing system. So you have all these files in Google Drive, you want to be able to search, and so we had modeled out how we were gonna build this search engine. You'd think Google search is a solved problem, but actually, Google web search is for things the whole world can see. There's also, like, Gmail search,
which is for things that only one person can see, so it's lots of separate little indexes. Those are both solved problems at Google. Google Drive is for things a few people can see. You share it with your coworker or whoever, and it's actually a very different problem. We looked at the statistics, and we found that the average document, our average file, was shared with about 1.1 people. In other words, things were mostly private, or maybe shared with one or two people. So we said, if something's shared to three people, we're just gonna make three copies of it, and now we have just the Gmail problem: each copy is for one person. And we did the math on how much work it was going to be to build these indexes. In round numbers, at the time, this would be so much larger now, we had maybe one billion documents and files in the system. Each one was shared to about 1.1 people, maybe it was a thousand words long on average, and maybe it would be edited once per day on average. Multiply all that together and we had about a trillion word updates per day. So we put in a request and purchased machines to handle that much traffic, and we started bringing up the system, and it immediately collapsed. It was completely overloaded. We checked our numbers and we checked them again. Yeah, 1.1, about a billion, whatever. But the work in the system was just way beyond that. And we looked at our metrics, measuring the number of documents, measuring each of these things, and all the metrics looked right. To make a months-long story short, these metrics and averages were hiding some funny business. It turned out there was this type of use case, occasional documents that were shared to thousands of people. There was a specific example: the signup sheet for the Google company picnic. This is a spreadsheet, shared to about 5,000 people, so it wasn't the whole
company, but a big chunk of Mountain View. Which meant it was, I don't know, let's say 20 thousand words long, because it had the name and a couple other things for each person. This is one document, but shared to 5,000 people, and during the period people were signing up, maybe it was changing a couple thousand times per day. So you multiply out just this document, and you get 200 billion word updates for that one document in a day, where we were estimating a trillion for the whole earth. And there were something like a hundred documents in this category. >> Google was hamstringing your own thing. >> We were hamstrung by our own thing. There were about a hundred examples like this, so now we're up to 20 trillion, and that was the whole problem: these hundred files. And we would have never found that until we got way down into the details of the logs, which in this case took months. >> Because we didn't have the tools. Because we didn't have Scalyr. >> Yeah. >> And I think this is the kind of anomaly you might see with web services evolving with microservices, where someone has an API interface with some other SaaS. As apps start to rely on each other, this is a new dynamic we're seeing, as SLAs are also tied together. So the question is, whose fault is it? >> Exactly, you have to figure out whose fault it is. And also, things get so much more varied now. Again, in web 1.0 e-commerce, you buy a thing, and it's all the same. Now you're building a social media site or whatever: you've got 8 followers, or you've got 8 million followers. This person has three movies rented on Netflix, this person has three thousand movies. Everything's different, and so then you get these funny things hiding. >> Yeah, you're flying blind if you don't get all the data exposed. It's like a blind person trying to read Braille, as we heard earlier. Steve, thanks so much for sharing the insight. Great story. I'm John Furrier. You're here for the Cube Innovation Day at Scalyr's headquarters. Thanks for watching.
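The indexing war story above boils down to a heavy-tail distribution hiding behind clean-looking averages. A minimal sketch of the arithmetic (the numbers are illustrative, loosely mirroring the ones Newman quotes in the talk):

```python
# Sketch of the anomaly Newman describes: averages hide a heavy tail.
# All numbers are illustrative, loosely mirroring the ones quoted in the talk.

num_docs = 1_000_000_000          # ~1B documents and files
typical_shares = 1.1              # average audience per document
typical_words = 1_000             # average document length
typical_edits_per_day = 1         # average edit rate

# Naive capacity estimate built from the averages alone:
baseline = int(num_docs * typical_shares * typical_words * typical_edits_per_day)
print(f"baseline estimate: {baseline:.2e} word-updates/day")   # ~1.1e12, "a trillion"

# One "company picnic" outlier: 5,000 shares x 20,000 words x ~2,000 edits/day
outlier = 5_000 * 20_000 * 2_000
print(f"one outlier doc:   {outlier:.2e}")                     # ~2e11 from a single file

# ~100 such documents swamp the estimate the averages produced
print(f"100 outliers:      {100 * outlier:.2e}")               # ~2e13, roughly 20x baseline
```

The averages all check out, yet a hundred files generate twenty times the load the averages predicted, which is exactly why summary metrics alone could not surface the problem and the raw logs could.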

Published Date : May 30 2019

Chadd Kenney, PureStorage | CUBEConversation, November 2018


 

(bright instrumental music) >> Hi everyone, I'm John Furrier, here in the Cube Studios in Palo Alto for a special Cube conversation on some big news from PureStorage. We're here with Chadd Kenney, who's the Vice President of Product and Solutions at PureStorage. Big Cloud news, a historic announcement from PureStorage, one of the fastest growing startups in the storage business. Went public, I've been following these guys since creation. Great success story in Silicon Valley and certainly innovative products. Now announcing a Cloud product, Cloud Data Services, now in market. Chadd, this is huge. >> It's an exciting time. Thank you so much for having us. >> So you guys are obviously a storage success story, but now the reality has changed. You know we've been saying in the Cube, nothing changes: you get storage, compute, networking, old way, new way, in the Cloud. The game is still the same. Storage isn't going away. You've got to store the data somewhere, and the data tsunami is coming, still coming, with Edge and a bunch of other things. Cloud is more important than ever, and getting it right is super important. So, what is the announcement of Cloud Data Services? Explain what the product is, why you guys built it, why now. >> Awesome. So, a couple different innovations are part of this launch. To start with, we have Cloud Block Store, which is taking Purity, which is our operating system found on-prem, and actually moving it to AWS. And we spent a bunch of time optimizing these solutions so that we could actually take on tier one, mission critical applications. A key differentiator is that most folks were really chasing after test-dev and leveraging the Cloud for that type of use case, whereas Cloud Block Store is really kind of industry strength and ready for mission critical applications.
We also took protection mechanisms from FlashArray on-premises and made it so that you can use CloudSnap to move and protect data into the public Cloud via portable snapshot technology, which we can dig into a little bit later. And then the last part is, we thought it was really ripe to change data protection as a whole. Most people are doing kind of disk to disk to tape, and then moving tape offsite. We believe the world has shifted. There's a big problem in data protection: restoring data is not happening in the time frame that it's needed, SLAs aren't being met, and users are not happy with the overall solution as a whole. We believe that restorations from Flash are incredibly important to the business, but in order to get there, you have to offset the economics. So what we're building is a Flash to Flash to Cloud solution, which enables folks to take advantage of the economics of Cloud and then have a caching mechanism of Flash on-premises, so that they can restore things relatively quickly for the predominant set of data that they have out there. >> And just so I get everything right here: you guys have been on-premises only, and this is now a cloud solution. It's software. >> Correct. >> Why now? Why wait 'til now, is the timing right? What's the internal conversation, and why should customers believe this is the right time? >> So, the evolution of cloud has been pretty interesting as we've gone through it. Most customers went from kind of 100% on-premises, then the Cloud came out and they said, hey, I'm going to move everything to the Cloud. They found that didn't work great for enterprise applications, and so they kind of moved back and realized that hybrid cloud was going to be a world where they wanted to leverage both. We're seeing a lot of other shifts in the market, VMware already having RDS in the platform. Now it's true hybrid cloud kind of playing out there, Amazon running on AWS.
It's a good mixture, just to showcase where people really want to be able to leverage the capabilities of both. >> So it's a good time, because the customers are re-architecting as well. >> Hybrid applications are definitely what people want. >> 100%, and the application stack, I think, was the core focus that really shifted over time. Instead of just focusing on hybrid cloud infrastructure, it was really about how applications could leverage multiple types of clouds to take advantage of the innovation and services that they could provide. >> You know, I've always been following the IT business, for 30 years or so, and it's always been an interesting trend. You buy something from a vendor and there are trade-offs, and there are always the payback periods. But now, I think what's interesting with this announcement is you've got the product mix that allows customers to have choice and pick what they want. There are no more trade-offs. If they like cloud, they go to cloud. If they like on-premises, they go on-premises. >> It sounds like an easy concept, but the crazy part is that the Cloud divide is real. They are very different environments. As we've talked to customers, they were very lost on how to take an enterprise application and actually leverage the innovations within the Cloud. They wanted it, they needed it, but at the same time, they weren't able to deliver on it. And so we realized that the data layer, fundamentally, was the area that could give them that bridge between those two environments. And we could add some core values to the Cloud even for the next generation developer who's developing in the Cloud, bringing in better overall resiliency, management, and all sorts of new features that they weren't able to take advantage of in traditional public cloud. >> You know, I want to riff a minute on the serverless trend and how awesome that is. It's just, look at the resource pool as a serverless pool of resources.
So is this storageless? >> So it's still backed by storage, obviously. >> No, I was just making a joke. No wait, you're looking at it as what serverless is to the user. You guys are providing that same kind of storage pool, addressable through the application, >> Correct. >> as if it's storageless. And what's great about taking a 100% software platform and moving it to the Cloud is, a customer can spin this up in like minutes. And what's great about it is, they can spin up many, many, many instances of these things for various different use cases that they have out there, and get true utility out of it. So they're getting the agility that they really want while not having to offset the values that they really come to love about PureStorage on-premises. Now they can actually get it all in the public cloud as well. >> I want to dig into the products a little bit. Before we get there, I want you to answer the question that's probably on people's minds. I know you've been at Pure, really from the beginning. So, you've seen the history. Most people look at you guys and say, well you're a hardware vendor. I have Pure boxes everywhere, you guys are doing a great job. You've pioneered the Flash speed game in storage. People want to kill latency, as they say. You guys have done a great job. But wait a minute, this is software. Explain how you guys did this, why it's important. People might not know that this is a software solution. They might know you for hardware. What's the difference? Is there a difference? Why should they care and what's the impact? >> So, great question. Since we sell hardware products, most people see us as a hardware company. But at the end of the day, the majority of engineering and dev is software. We're building software to make, originally, off-the-shelf components enterprise worthy. Over time we decided to optimize the hardware too, and that pairing between the software and hardware gives them inherently great value.
And this is why we didn't just take our software and kind of throw it into every cloud and say, have at it, to customers, like a lot of folks did. We spent a lot of time, just like we did on our hardware platform, optimizing for AWS to start with, so that we could truly leverage the inherent technologies that they have, but build software to make it even better. >> It's interesting, I interviewed Andy Bechtolsheim at VMworld, and he's the chairman of Arista. He's, as Les Peckgesem calls him, the Rembrandt of motherboards. And he goes, "John, we're in the software business." And he goes, "Let me tell ya, hardware's easy. Software's hard." >> I agree. >> So everyone's pretty much in the software business. This is not a change for Pure. >> No, this is the same game we've been in. >> Great. Alright, let's get into the products. The first one is Cloud Block Store for AWS. Which is the way Amazon does the branding. So it's on Amazon, or for Amazon as they say. They use different words. So this is Pure software in the Cloud. Your company, technically Pure software. >> Yup. >> In the Cloud as software, no hardware. >> Yup. >> 100% storage, API support, always encrypted, seamless management and orchestration, DR, backup, migration between clouds. >> Yup. >> That's kind of the core premise. So what does the product do, what's the purpose of the product? On the Amazon piece, if I'm a customer of Pure or a prospect for Pure, what does the product give me? What are the capabilities? >> Great. I would say that the biggest thing that customers get is just leverage for their application stack to be able to utilize the Cloud. And let me give you a couple of examples 'cause they're kind of fun. So first off, Cloud Block Store is just software that sits in the Cloud that has many of the same utilities that run on-premises. And by doing so, you get the ability to do stuff like, I want to replicate, as a DR target.
So maybe I don't have a secondary site out there, and I want to have a DR target that I spin up in the event of a disaster. You can easily set up bi-directional replication to the instance that you have running in the Cloud. It's the exact same experience. The exact same APIs, and you get our cloud data management with Pure1 to be able to see both sites. One single pane of glass, and make sure everything is up and running and doing well. You could also, though, leverage a test-dev environment. So let's say I'm running production on-premises, I can then go ahead and replicate to the Cloud, spin up an instance for test-dev, and run reporting, run analytics. Run anything else that I wanted on top of that. And spin up compute relatively quickly. Maybe I don't have it on-prem. Next, we could focus on replicating for protection. Let's say for compliance, I want to have many instances to be able to restore back in the event of a disaster, or in the event that I just want to look back during a period of time. The last part is, not just on-prem to the Cloud, but leveraging the Cloud for even better resiliency, to take enterprise applications and actually move them without having to do massive re-architecture. If you look at what happens, Amazon typically recommends that you have data in two different availability zones, so that when you put an application on top of it, it can be resilient to any sort of failures within an AZ. What we've done is we've taken our ActiveCluster technology, which is active-active replication between two instances, and made it so that you can actually replicate between two availability zones. And your application now doesn't need to be re-architected whatsoever. >> So you basically, if I get this right, you had core software that made all that Flash work on the box, which is on-premises, which is a hardware solution. Which sounds like it was commodity boxes, commodity components. >> Just like the Cloud.
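The active-active pattern described above, two instances in different availability zones both acknowledging every write so the application survives an AZ loss without re-architecture, can be sketched in a few lines of Python. The class and method names here are purely illustrative, not Pure's actual API:

```python
class ArrayInstance:
    """Stand-in for one storage instance living in a single availability zone."""
    def __init__(self, az):
        self.az = az
        self.blocks = {}     # logical block address -> data
        self.healthy = True

    def write(self, lba, data):
        if not self.healthy:
            raise IOError(f"instance in {self.az} is down")
        self.blocks[lba] = data

    def read(self, lba):
        if not self.healthy:
            raise IOError(f"instance in {self.az} is down")
        return self.blocks[lba]


class ActiveActiveVolume:
    """Synchronously mirrors writes to two AZs; reads survive one AZ failure."""
    def __init__(self, a, b):
        self.replicas = [a, b]

    def write(self, lba, data):
        # A write lands on every healthy replica before it is acknowledged,
        # so the application needs no re-architecture to tolerate an AZ loss.
        healthy = [r for r in self.replicas if r.healthy]
        if not healthy:
            raise IOError("no healthy replicas")
        for r in healthy:
            r.write(lba, data)

    def read(self, lba):
        for r in self.replicas:
            if r.healthy:
                return r.read(lba)
        raise IOError("no healthy replicas")


az_a = ArrayInstance("us-east-1a")
az_b = ArrayInstance("us-east-1b")
vol = ActiveActiveVolume(az_a, az_b)

vol.write(0, b"app data")
az_a.healthy = False                 # simulate losing an availability zone
assert vol.read(0) == b"app data"    # the application keeps running unchanged
```

The application only ever talks to the volume, never to an individual zone, which is what makes the failover invisible to it.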
>> You take it to the Cloud, and there's an amazing amount of boxes out there. They have tons of data centers. So you treat the Cloud as if it's a virtual device, so to speak. >> Correct. I mean the Cloud functionally is just compute and storage, and the networking on the back end has been abstracted by some sort of layer in front of it. We're leveraging compute resources for our controllers and we're leveraging persistent storage media for our storage. But what we've done in software is optimize a bunch of things. Just one example: in the Cloud, when you procure storage, you pay for all of it, whether you leverage it or not. We incorporate de-dupe, compression, thin provisioning, AES-256 encryption on all data at rest. These are data services that are just embedded in, that aren't found in a traditional cloud. >> This makes so much sense. If you're an application developer, you focus on building the app. Not worrying about where the storage is and how it's all managed. 'Cause you want persistent data and you need managed state, and all this stuff going on. And I just need a dashboard, I just need to know where the storage is. Is it available? And bring it to the table. >> And make it easy, with the same APIs that you were potentially running on-premises. And the last part that I would say is that the layered services that are built into Purity, like our snapshot technology and being able to refresh test-dev environments or create 10 sandboxes for 10 developers in the Cloud and add compute instances to them, is not only instantaneous, but it's space saving as you actually do it. Whereas in the normal cloud offerings, you're paying for each one of those instances. >> And the agility is off the charts, it's amazing. Okay, final question on this one is, how much is it going to cost? How does a customer consume it? Is it in the marketplace? Do I just click a button, spin up things? How's the interface?
What's the customer interaction and engagement with the product? How do they buy it, how much does it cost? Can you share the interaction with the customer? >> So we're just jumping into beta, so a lot of this is still being worked out. But what I will tell you is it's the exact same experience that customers have come to love with Pure. You can go download the CloudFormation template into your catalog within AWS, so you can spin up instances. The same kind of consumption models that we've built on-prem will be applied to cloud. So it will be a very similar consumption model, which has been super consumer friendly and which customers have loved from us over the years. And it will be available in the mid part of next year, so people will be able to beta it today, test it out, see how it works, and then put it into full production in the mid part of next year. >> And operationally, in the workflows, the customers don't skip a beat. It's the same kind of format, the same language, the same workflow. It feels like Pure all the way through. >> Correct. And not only are we 100% built on a REST API, but all of the things we've built, with Python libraries that automate this for developers, to PowerShell toolkits, to Ansible playbooks, all the stuff we've built on codeupyourstorage.com, are all applicable to both sites, and you get Pure1, our Cloud-based management system, to be able to see all of it in one single pane of glass. >> Okay, let's move on. So the next piece I think is interesting, I'll get your thoughts on this, is the whole protection piece. On-premises really kind of held back from the Cloud, mainly to protect the data. So you guys got CloudSnap for AWS. What does this product do? Is this the protection piece? How does this work? What is the product? What are the features and what's the value? >> So, StorReduce was a recent acquisition that we did that enables de-duplication on top of an S3 target.
And so it allows you to store in S3, de-duplicated into a smaller form factor, and we're pairing that with an on-premises edition which will have a FlashBlade behind it for super fast restores. So think of that as a caching tier for your backups, but then also be able to replicate that out to the public cloud and leverage StorReduce natively in the public cloud as well. >> So that's the StorReduce product. So StorReduce is that piece. As an object store? >> It is, yes. And we pair that with CloudSnap, which is natively integrated within FlashArray, so you can do snapshots to a FlashBlade for fast restores over NFS, and you can send it also to S3 in the public cloud. And so you get the inherent abilities to even do VM-level granularity or volume-level granularity as well from a FlashArray directly, without needing to have any additional hardware. >> Okay, so the data services are: Cloud Block Store, StorReduce and CloudSnap, on or for AWS. >> Correct. >> How would you encapsulate this from a product and solution standpoint? How would you describe that to a customer in an elevator, or just a quick value statement? What's in it for them? >> Sure. So Pure's been seen by customers as an innovation engine that optimized applications and allowed them to do, I would say, amazing things in the enterprise. What we're doing now is we're evolving that solution out of just an on-premises solution and making it available in a very agile Cloud world. We know this world is evolving dramatically. We know people really want to be able to take advantage of the innovations within the Cloud, and so what we're doing is we're finally bridging the gap between on-premises and the Cloud. Giving them the same user experience that they've come to love with Pure in all of the Clouds that they potentially need to develop in. >> Okay, so from the announcement standpoint, you guys got Cloud Block Store in limited public beta right out of the gate, GA in mid 2019.
CloudSnap is GA at announcement, and StorReduce is going into beta in the first half of 2019. >> Correct, we're excited about it. >> So for the skeptics out there who are- hey you know, Chadd, I got to tell ya, I love the Cloud, but I'm a little bit nervous. How do I test and get a feeling for- this is going to be simple, if I'm going to jump in and look at this. What should I look at first? In what sequence should I try this? Do you guys have a playbook for them to kick the tires, or how should they explore to get proficient with the new solution? >> Good question. Right, so for one, if you're a FlashArray customer, CloudSnap gives you the ability to take this new entity, called a portable snapshot, which is data paired with metadata, and allows you to move data off of a FlashArray. You can put it to an NFS target or you can send it to the Cloud. And so that's the most logical one that folks will probably leverage first, because it's super exciting for them to be able to leverage the Cloud and spin up instances if they'd like to, or protect back to their own prem. Also, Cloud Block Store, great because you can spin it up relatively quickly and test out applications between the two. One area that I think customers are going to be really excited about is you could run an analytics environment in the Cloud and spin up a bunch of compute from your production instance by just replicating it up into the Cloud. The last part is, I think backup is not super sexy. Nobody likes to talk about it, but it's a significant pain point that's out there, and I think we can make some major inroads in helping businesses get better SLAs. We're very, very interested to see the great solutions people bring with- >> So, I'm going to put you on the spot here and ask you, there's always the, I love the cliche, is it a vitamin or is it an aspirin? Is there a pain point? So obviously backup, I would agree.
Backup and recovery, certainly with disasters, you see the wildfires going on here in California, you can't stop thinking about the disaster recovery plan. And then you've got top-line growth with application developers. That's kind of the vitamin, if you will. What are the use cases, the low-hanging fruit, for someone to test this out from a pain point standpoint? Is it backup, and what's the growth angle? If I wanted to test out this new solution, what should I look at first? What would you recommend? >> It's a very tough question. So, CloudSnap is obviously the easy one. I'd say Cloud Block Store is the one that I think people will gravitate to. I look at my customers' biggest challenges out there, and it's, how do I get applications portable? So I think Cloud Block Store really gives you the application portability. So I think it's finally achieving that whole hybrid cloud world. But at the end of the day, backup is a really big pain point that the enterprise deals with, like right this second. So there's areas where we believe we can add inherent value with being able to do fast restores from Flash. That meets SLAs very quickly and is an easy fix. >> And you guys feel good about the data protection aspect of this? >> Yes, very much so. >> Awesome. I want to get your personal take on this. You were early on in Pure. What's the vibe inside the company? This is Cloud, and people love Cloud. There's benefits to Cloud, as well as on-premises. What's the mood like inside PureStorage? You've seen it from the beginning, now you're a public company and growing up really, really fast. What's the vibe like inside PureStorage? >> It's funny, it hasn't really changed all that much on the cultural side of the business. I love where I work because of the people. The people bring so much fun to the business, so much innovation, and we have a mindset that's heavily focused on customer first. And that's one of the things.
I always tell this kind of story: when we first started, we sat in a room at a whiteboard and wrote up everything that sucks about storage. And instead of trying to figure out how we make a 2.0 version of some storage array, we actually figured out what are all the customer pain points that we needed to satisfy, and then we built innovations to go do that. Not go chase the competition, but actually go alleviate customer challenges. And we just continue to focus on customer first, and so the whole company kind of rallies around that. And I think you see a very different motion than what you see in most companies, because we love hearing about customer results with our products. Engineering will just rally around when a customer shows up, just to hear exactly their experience with it. And so with this, I think what they see is a continued evolution of the things we've been doing, and they love seeing and providing customer solutions in areas that they were challenged to deal with in the past. >> What was some of the customer feedback when you guys started going, hey, you've got a new product, you're doing all of that early work. And you've got to go talk to some people and knock on doors, hey, what do you think, would you like the Cloud, a little bit of the Cloud? How would you like the Cloud to be implemented? What were some of the things you heard from customers? >> A lot of them said, if you can take your core tenets, which were simplicity, efficiency, reliability, and customer focus around consumption, and if you could give that to me in the Cloud, that would be the Nirvana. So when we looked at this model, that's exactly what we did. We said, let's take what people love about us on-prem, and give 'em the exact same experience in the Cloud. >> That's great and that's what you guys have done. Congratulations. >> Thanks so much. >> Great to hear the Cloud story here from Chadd Kenney, Vice President of Products and Solutions at PureStorage.
Taking the formula of success on-premises with Flash and the success there, and bringing it to the Cloud. That's the big deal in this announcement. I'm John Furrier here in the Palo Alto studios, thanks for watching. (upbeat instrumental music)
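The StorReduce piece discussed in the segment, deduplication in front of an S3 target, comes down to content-addressed chunking: keep one copy of each unique chunk and a per-object manifest of chunk hashes. Here is a toy Python sketch of that general technique; it illustrates the idea only and is not StorReduce's actual format:

```python
import hashlib


class DedupObjectStore:
    """Toy content-addressed store: repeated chunks are stored only once,
    which is the basic idea behind deduplicating backups into S3."""
    def __init__(self, chunk_size=4096):
        self.chunk_size = chunk_size
        self.chunks = {}     # sha256 digest -> chunk bytes (one copy each)
        self.manifests = {}  # object name -> ordered list of digests

    def put(self, name, data):
        digests = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)  # store new chunks only
            digests.append(digest)
        self.manifests[name] = digests

    def get(self, name):
        # Restores reassemble the object from its manifest, byte for byte.
        return b"".join(self.chunks[d] for d in self.manifests[name])

    def stored_bytes(self):
        return sum(len(c) for c in self.chunks.values())


store = DedupObjectStore()
backup = b"A" * 4096 * 9 + b"B" * 4096        # highly repetitive data
store.put("monday.bak", backup)
store.put("tuesday.bak", backup)              # near-identical second backup

assert store.get("tuesday.bak") == backup     # restores are exact
# Two 40 KiB backups collapse to two unique 4 KiB chunks (8 KiB stored).
print(len(backup) * 2, "logical bytes ->", store.stored_bytes(), "stored")
```

This is why daily backups of mostly unchanged data get so much smaller once deduplicated: only the chunks that actually changed consume new capacity.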

Published Date : Nov 26 2018


AI-Powered Workload Management


 

>> From the Silicon Angle Media Office in Boston, Massachusetts, it's the Cube. Now here's your host Stu Miniman. >> Hi, I'm Stu Miniman and welcome to the Cube's Boston area studio. This is a Cube conversation. Happy to welcome to the program first time guest Benjamin Nye, CEO of Turbonomic, a Boston-based company. Ben, thanks so much for joining us. >> Stu, thanks for having me. >> Alright Ben, so as we say, we are fortunate to live in interesting times in our industry. Distributed architectures are what we're all working on, but at the same time, there's a lot of consolidation going on. You know, just to put this in context: just in the recent past, IBM spent 34 billion dollars to buy Red Hat. And the reason I bring that up is a lot of people talk about, you know, it's a hybrid multi-cloud world. What's going on? The thing I've been saying for a couple of years is, with users, two things you need to watch. They care about their data an awful lot; that's what drives businesses. And what really drives the data? It's their applications. >> Perfect. >> And that's where Turbonomic sits. Workload automation is where you are. And that's really the important piece of multi-cloud. Maybe give our audience a little bit of context as to why, really, IBM buying Red Hat fits into the general premise of why Turbonomic exists. >> Super. So the IBM Red Hat combination I think is really all about managing workloads. Turbonomic has always been about managing workloads, and actually Red Hat was an investor, is an investor, in Turbonomic, particularly for OpenStack, but more importantly OpenShift now. When you think about the plethora of workloads, we're gonna have 10 to one the number of workloads relative to VMs and so forth when you look at microservices and containers. So when you think about that combination, it's really, it's an important move for IBM and their opportunity to play in hybrid and multi-cloud.
They just announced the IBM multi-cloud manager, and then they said, wait a minute, we gotta get this thing to scale. Obviously OpenShift and Red Hat is scale: 8.9 million developers in their community, and the opportunity to manage those workloads across on-prem and off in a cloud-native format is critical. So relate that to Turbo. Turbo is really about managing any workload in any environment anywhere at all times. And so we make workloads smart, which is self-managing anywhere in real time, which allows the workloads themselves to care for their own performance assurance, policy adherence, and cost effectiveness. And when you can do that, then they can run anywhere. That's what we do. >> Yeah, Ben, bring us inside of customers. When people hear applications and multi-cloud, there was the original thing: oh well, I'm gonna be able to burst to the cloud, I'm gonna be moving things all the time. Applications usually have data behind them. There's gravity, it's not easy to move them. But I wanna be able to have that flexibility of, if I choose a platform, if I move things around. I think back to the storage world. Migration was one of the toughest things out there and something that I spent the most time and energy to constantly deal with. What do you see today when it comes to those applications? How do they think about them? Do they build them one place and they're static? Is it a little bit more modular now when you go to microservices? What do you see and hear? >> Great, so we have over 2,100 accounts today including 20% of the Fortune 500, so a pretty good sample set to be able to describe this. What I find is that CIOs today, and I meet with many of them, want either born in the cloud, migrate to the cloud, or run my infrastructure as cloud. And what they mean is they want, they're seeking, greater agility and elasticity than they've ever had. And workloads thrive in that environment.
So as we decompose the applications and decompose the infrastructure and open it up, there's now more places to run those different workloads, and they seek the flexibility to be able to create applications much more quickly, set up environments a lot faster, and then they're more than happy to pay for what they use. But they get tired of the waste, candidly, of the traditional legacy environments. And so there's a constant evolution for, how do I take those workloads and distribute them to the proper location for them to run most performantly, most cost effectively, and obviously with all the compliance requirements of security and data today. >> Yeah, I'm wondering if you could help connect the dots for us. In the industry, we talk a lot about digital transformation. >> Yeah. >> If we said two or three years ago there was a lot of buzz around this, when I talk to end users today, it's reality. Absolutely, it's not just, oh I need to be mobile and online and everything. What do you hear, and how do my workloads fit into that discussion? >> So it's an awesome subject. When you think about what's going on in the industry today, it's the largest and fastest re-platforming of IT ever. Okay, so when you think about, for example, the end of 2017, take away dollars and focus on workloads. There were 220 million workloads. 80% were still on-prem. For all the growth in the cloud, it was still principally an on-prem market. When you look now forward, the differential growth rates: 63% average growth across the cloud vendors, alright, in the IaaS market, and I'm principally focused on AWS and Azure. And only 3% growth rate in the on-premises market. Down from five years ago and continuing to decline because of the expense, fragility, and poor performance that customers are receiving. So the re-platforming is going on, and customers' number one question is, can you help me run my workloads in each of these three environments?
So to your point, we're not yet where people are bursting these workloads in between one environment and another. My belief is that will come. But in today's world, you basically re-platform those workloads. You put them in a certain environment, but now you gotta make sure that you run them performantly and cost effectively in those environments. And that's the digital transformation. >> Okay. So Ben, I think back to my career. If I turn back the clock even two decades, intelligence, automation, things we were talking about, it's different today. When I talk to the people building software, re-platforming, doing these things today, machine learning and AI, whatever favorite buzzword you have in that space, is really driving significant changes into this automation space. I think back to the early days of Turbonomic. I think about kinda the virtualization environments and the like. How does automation intelligence, how is it different today than it was, say, when the company was founded? >> Wow. Well so for one, we've had to expand to this hybrid and multi-cloud world, right? So we've taken our data model, which is AIOps, and driven it out to include Azure and AWS. But then you would say, why? Why is that important? And ultimately, when people talk about AIOps, what they really mean, whether it's on-prem or off, is resource-aware applications. I can no longer affect performance by manually running around and doing the care and feeding and taking these actions. It's just wasteful. And in the days where people got around that by over-provisioning on-prem, sometimes as much as 70 or 80% if you look at the resource actually used, it was far too expensive. Now take that to the cloud, to the public cloud, which is a variable cost environment, and I pay for that over-provisioning every second of the rest of my life, and it's just prohibitive. So if I want to leverage the elasticity and agility of the cloud, I have to do it in a smarter measure, and that requires analytics.
And that's what Turbonomic provides. >> Yeah, and actually I really like the term AIOps. I wonder if you can put a little bit of a point on that, because there are many admins and architects out there that, when they hear automation and AI, say, oh my gosh, am I gonna be put out of a job? I'm doing a lot of these things. Most people we know in IT, they're probably doing way more than they'd like to and not necessarily being as smart with it. So how does the technology plus the people, how does that dynamic change? >> So what's fascinating is, if you think about the role of tech, it was to remove some of the labor intensity in business. But when you then looked inside of IT, it's the most labor intensive business you can find, right? So the whole idea was, let's not have people doing low value things. Let's have them do high value things. So today when we virtualize an on-premises estate, we know that we can share it. Run two workloads side by side, but when a workload spikes or there's a noisy neighbor, we congest the physical infrastructure. What happens then is that it gets so bad that the application SLA breaks. Alerts go off, and we take super expensive engineers to go hopefully troubleshoot and find root cause. And then do a non-disruptive action to move a workload from one host to another. Imagine if you could do that through pure analytics and software. And that's what our AIOps does. What we're allowing is the workloads themselves to pick the resources that are least congested on which to run. And when they do that, rather than waiting for it to break and then trying to fix it with people, we just let it take that action on its own and trigger a vMotion and put it into a much happier state. That's how we can assure performance.
We'll also check all the compliance and policies that govern those workloads before we make a move, so you can always know that you're in keeping with your affinity and anti-affinity rules, your HA/DR policies, your data sovereignty, all these different myriad regulations. Oh and by the way, it'll be a lot more cost effective. >> Alright, Ben, you mentioned vMotion. So people that know virtualization, this was kind of magic when we first saw it, to be able to give me mobility with my workloads. Help modernize us with Kubernetes. Where does that fit in your environment? In the multi-cloud world, as far as I see, Kubernetes does not break the laws of physics and allow me to do vMotion across multi-clouds. So where does Kubernetes fit in your environment? And maybe you can give us a little bit of compare and contrast of kinda the virtualization world and Kubernetes, where that fits. >> Sure, so we look at containers, or pods, a grouping of containers, as just another form of liquidity that allows workloads to move, alright? And so again we're decomposing applications down to the level of microservices. And now the question you have to ask yourself is, when demand increases on an application or indeed on a container, am I to scale up that container, or should I clone it and effectively scale it out? And that seems like a simple question, but when you're looking at it at huge amounts of scale, hundreds of containers or pods per workload or per VM, now the question is, okay, whichever way I choose, it can't be right unless I've also factored the imposition I'm putting on the VM in which that container and/or pod sits. Because if I'm adding memory in one, I have to add it to the other, 'cause I'm stressing the VM differentially, right? Or should I actually clone the VM as well and run that separately? And then there's another layer, the IaaS layer. Where should that VM run?
In the same host and cluster and data center if it's on-prem, or in the same availability zone and region if it's off-prem? Those questions all the way down the stack are what need to be answered. And no one else has an answer for that. So what we do is we instrument a Kubernetes or an OpenShift, or even on the other side a Cloud Foundry, and we actually make the scheduler live and what we call autonomic. Able to interrelate the demand all the way down through the various levels of the stack to assure performance, check the policy, and make sure it's cost effective. And that's what we're doing. So we actually allow the interrelationship between the containers and their schedulers all the way down through the virtual layer and into the physical layer. >> Yeah, that's impressive. You really just did a good job of explaining all of those pieces. One of the challenges when I talk to users, they're having a real hard time keeping up. (laughing) They say, I've just started to figure out my cloud environment. Oh wait, I need to do things with containers. Oh wait, I hear about the serverless thing. What are some of the big challenges you're hearing from customers? Who do they turn to to help them stay on top of the things that are important for their business? >> So I think finding the sources of information now in the information age, when everything has gone to software or virtual or cloud, has become harder. You don't get it all from the same one or two monolithic vendors, strategic vendors. I think they have to come to the Cube as an example of where to find this information. That's why we're here. But I think in thinking about this, there's some interesting data points. First on the skills gap, okay, Accenture did a poll of their customer base and found that only 14% of their customers thought they had the requisite skills on staff to warrant their moves to the cloud. Think about that number, so 86% don't. And here's another one.
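Going back to the scale-up-versus-scale-out cascade he walked through a moment ago, the layered check might look something like this (the 80% threshold and the three layers are illustrative assumptions, not Turbonomic's real policy):

```python
def scale_decision(container_load, vm_load, host_load, limit=0.8):
    """Walk the stack: container -> VM -> host (IaaS), acting at each
    layer whose load (0..1) exceeds the threshold."""
    actions = []
    if container_load > limit:
        actions.append("clone container (scale out)")
        # Cloning stresses the hosting VM differentially, so check it too.
        if vm_load > limit:
            actions.append("clone VM")
            # And a new VM has to land somewhere at the IaaS layer.
            if host_load > limit:
                actions.append("place new VM on a less congested host")
    return actions or ["no action"]

print(scale_decision(0.95, 0.85, 0.90))
# ['clone container (scale out)', 'clone VM', 'place new VM on a less congested host']
print(scale_decision(0.95, 0.40, 0.90))  # ['clone container (scale out)']
```

Each answer at one layer changes the question at the layer below it, which is why he argues the decision can't be made at the container level alone.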
When you get this wrong, there's some fascinating data that says 80% of customers receive a cloud bill north of three times what they expected to spend. Now just think about that. Now I don't know which number's bigger frankly, Stu. Is it the 80% or the three times? But there's the conversation. Hey, boss, I just spent the entire annual budget in a little over a quarter. You still wanna get that cup of coffee? (laughing) So the costs of being wrong are enormously expensive. And then imagine if I'm not governing the policies and my workloads wind up in a country that they're not meant to per data sovereignty. And then we get breached. We have a significant problem there from a compliance standpoint. And the beauty is software can manage all this, and automation can help alleviate the constraint of the skills gap that's going on. >> Yeah, you're totally right. I think back to five years ago, I was at Amazon re:Invent. And they had a tool that started to monitor a little bit of, are you actually using the stuff that you're paying for? And there were customers walking out and saying, I can save 60 to 70% over what I was doing. Thank you Amazon for helping to point that out. When I lived on the data center side with vendors that sold stuff, I couldn't imagine if your sales rep came and said, hey, we deployed this stuff and we know you spent millions of dollars. It seems like we over-provisioned you by two to three x what you expected. You'd be fired. So it's like, Wall Street treats Amazon a little bit differently than they do everybody else. So on the one hand, we're making progress. There's lots of software companies like yourself. There's lots of companies helping people to optimize their cost out there. But still, this seems like there's a long way to go to get multi-cloud and the cost of what's going on there under control. Remember the early days? They said cloud was supposed to be simple and cheap and turned out to be neither of those.
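The "annual budget in a little over a quarter" anecdote is just run-rate arithmetic, and that makes it easy to automate as an early-warning check (the budget figure below is made up for illustration):

```python
def projected_overrun(annual_budget, spend_to_date, months_elapsed):
    """Project annual cloud spend from the current run rate.
    Returns the ratio of projected spend to budget (1.0 = on plan)."""
    projected = spend_to_date / months_elapsed * 12
    return projected / annual_budget

# "Spent the entire annual budget in a little over a quarter":
ratio = projected_overrun(annual_budget=1_200_000,
                          spend_to_date=1_200_000,
                          months_elapsed=4)
print(round(ratio, 1))  # 3.0 -- the 3x bill the data point describes
```

Checking this monthly, instead of at the end of the year, is the difference between a course correction and the awkward conversation with the boss.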
So Ben, I want to give you the opportunity. What do you see both as an industry and for Turbonomic, what's the next kinda six to 12 months bring? >> Good, can I hit your cloud point first? It's just, when you think of Amazon, just to see how it's changed. If I go and provision a workload in Amazon EC2 alone, there's 1.7 million different combinations from which I can choose across all the availability zones, all the regions, and all the services. There's 17 families in the compute service alone, as just one example. So Amazon looks at Turbonomic and says, you're almost a customer control plane for us. You're gonna understand the demand on the workload, and then you can help the customer, advise the customer which service, which instance types, all the way down through not just compute and memory, but down into network and storage, are the ones that we should do. And the reason we can do this so cost effectively is we're doing it on the basis of a consumption plan, not an allocation plan. And Amazon, as a retailer in their origin, has cut prices 62 times, so they're very interested in using us as a means of making their customers more cost effective, so that they're indeed paying for what they use, but not paying for what they don't use. They've recognized us as giving us the migration tools competency, as well as the third party cloud management competencies that frankly are very rare in the marketplace. And recognize that those are because production apps are now running at Amazon like never before. Azure, Microsoft Azure is not to be missed on this one, right? So they've said we too wanna make sure that we have cost effective operations. And what they've described is when a customer moves to Azure, that's an Azure customer at ACA. But then they need to make sure that they're growing inside of Azure, and there's a magic number of 5,000 dollars a month. If they exceed that, then they're Azure for life, okay?
The problem becomes if they pause and they say, wow this is expensive or this isn't quite right. Now they just lost a year of growth. And so the whole opportunity with Azure is they actually resell our assessment products for migration planning, as well as the optimization thereafter. And the whole idea is to make sure, again, customers are only paying for what they use. So both of these platforms in the cloud are super aggressive with one another, but also relative to the on-prem legacy environments, to make sure that the workloads are coming into their arena. And if you look at the value of that, it's round numbers about three to six thousand dollars a year per workload. We have three million smart workloads that we manage today at Turbonomic. Think what that's worth in the realm of the prize at the public cloud vendors, and it's a really interesting thing. And we'll help the customers get there as cost effectively as they can. >> Alright, so back to looking forward. Would love to hear your thoughts on just what customers need broadly and then some of the areas that we should look for Turbonomic in the future. >> Okay, so I think you're gonna continue to see customers look for outlets for this decomposed application as we've described it. So microservices, containers, and VMs running in multiple different environments. We believe that the next one, so today in market we have SDDC, the software-defined data center, and virtualization. We have IaaS and PaaS in the public and hybrid cloud worlds. The next one, we believe, will be as applications at the edge become less pedestrian, more strategic and more operationally intensive, then you're talking about Amazon Prime delivery or your driverless cars or things along those lines. You're going to see that the edge really is gonna require the cell tower to become the next generation data center.
You're gonna see compute, memory, storage, and networking on the cell tower because I need to process and I can't take the latency of going back to the core, be it cloud core or on-premise core. And so you'll do both, but you'll need that edge processing. Okay, what we look at is, if that's the modern data center, and you have processing needs there that are critical for those applications that are yet to be born, then our belief is you're gonna need workload automation software, because you can't put people on every single cell tower in America or the rest of the world. So, this is sort of a confirming trend to us that we know we're headed in the right direction. Always focus on the workloads, not the infrastructure. If you make the application workloads perform, then the business will run well regardless of where they perform. And in some environments, like a modern day cell tower, there's just not gonna be the opportunity to put people in manual response to a break-fix problem set at the edge. So that's kinda where we see these things headed. >> Alright, well Ben Nye, pleasure to catch up with you. Thanks so much for giving us the update on where the industry is and Turbonomic specifically. And thank you so much for watching. Be sure to check out theCube.net for all of our coverage. Of course we're at all the big cloud shows including AWS re:Invent and KubeCon in Seattle later this year. So thank you so much for watching the Cube. (gentle music)

Published Date : Nov 1 2018



Mayumi Hiramatsu, Infor | Inforum DC 2018


 

>> Live from Washington, D.C., it's theCUBE. Covering Inforum DC 2018. Brought to you by Infor. >> Good afternoon and welcome back to Inforum 2018. Our coverage here on theCUBE as we start to wrap up our two days of coverage here at the show. We're in Washington, D.C. at the Walter Washington Convention Center, along with Dave Vellante, John Walls here. We're joined now by Mayumi Hiramatsu, who is the SVP of Cloud Operations Engineering and Security at Infor. Mayumi, how are you doing? >> Great to be here, thanks for coming. >> And a recent honoree by the way, Woman of the Year at the Women in IT Awards, so congratulations on that. (clapping) >> Awesome! >> Thank you. >> Very nice honor. >> Great. >> Tell us... big picture here, cloud strategy as far as Infor is concerned and why that separates you from the pack. What makes that stand out, you think, from your peers? >> I think there are a couple of things. One is that when I think of cloud, a lot of people will think about cloud as, it's a software running in the cloud, but it's more than that. It's about the solution and the capabilities that we're building on the cloud. And Infor is perfect, in that we're building enterprise software solutions. So if you look at Infor and compare us to the competition, we may have multiple of competition wrapped together in a solution. And that's really powerful, and you can only do that, really well, in the cloud because it's already built for that. It's integrated and the power of data is really amazing, because when you think about cloud, it's not just the software, it's the data, what you can do with it. And with the latest technologies around artificial intelligence and machine learning, there is so much insight we can give to our enterprise customers to make them successful in their business. 
So, I think of cloud as not only the technology, which I love, because I'm actually an engineer, but it's really the business transformation, digital transformation that the cloud enables, with the technologies like artificial intelligence, data analytics, data science, machine learning. There's just so much bolted on, that you can really only do in the cloud. >> Can you help us understand that competitive nuance? >> Yeah. >> I'm not sure I fully understand, 'cause others will say, well, we have cloud too. What's different between the way in which you provide solutions in the cloud and... pick a company. >> Yeah. >> Another company says, we have cloud, all of our SaaS is in the cloud. >> Right, so I think the first thing is, Infor's always focused on solutions, which means that our competition may have one of, let's say, a dozen things that we put together. So, if you're using our competition, they may have a cloud and some of them were born in the cloud, but then you have to figure out, how do I integrate it with the rest of the world? Because if you think about it, ERP. It's running your business. And it might be your HR and about your employees. It might be CRM and customer information. It could be supply chain and figuring out what parts I need to buy. It could be billing and figuring out how do I bill my customers. All these different solutions today, if you look at our competition, they may solve one, two, three different portions, but certainly not a dozen of these all together and then tailored towards the industry. So, we can pretty much bolt on and get started pretty quickly, if you think about, for example, healthcare. We already have a healthcare solution ready to go, so you don't have to figure out how do I put 12, 15 different software, glue it together and make it work? And maybe some of it is running in the cloud, maybe some of it is not running in the cloud, then the integration and making it work gets really complex. 
But ours is already pre-built, ready for that, whether it's healthcare, manufacturing, food & beverage, fashion. We have a lot of these already ready to go, so then you just have to customize it, as opposed to starting from scratch, figuring out how to integrate all these different software, making sure they work together and then harnessing the data, and then adding all these different artificial intelligence and machine learning capabilities that are so powerful today. You can't do that without the cloud and you certainly can't do it if you're trying to glue together different solutions. It's just really not easy. And I'll add one more thing, I was talking to a customer about this today, which I thought was brilliant. The other thing is security. Most people worry about security in the cloud, and I run our security as well, the Chief Information Security Officer reports in to me and the whole security team does. And I can tell you, if you're combining 12, 15 different types of software and trying to have consistent security all across? Oh, that's a very difficult thing to do. But we've already figured it out. So all you have to do is buy the package, the solution, it's already working together. You already have security overlay on it. We have consistency in terms of how we manage the security, whether it's single sign-on and who has access, and making sure that that gets all the way through, all the way up to the data lake, where all of the data gets captured, all the way up to the artificial intelligence. So, if you think about security and how important that is, and how difficult it might be to do on one software, let alone a dozen software, the fact that we've already built that is a big differentiator. >> So it's all there, and when you talked about, all you have to do is customize it, you're talking about, you're not talking about hardcore coding, you're talking about things like naming and setting it up. Is that right?
>> Yeah, and-- >> Or are you talking about deeper levels of custom mods? >> In our multi-tenant cloud, we don't do mods, but instead, we have extensions. And extensibility is really important because now those are, again, essentially plug-and-play. We already built it for you, so it's so much easier than creating each piece of code every single time. Again, it's about, how do you make sure that you can integrate these very important sets of business processes together. Not only how quickly can you use it, how secure is it? And ensuring that you can actually focus on your business value, right, because trying to assemble all of this together and making it work, it's an enormous amount of work and I think, as an enterprise, you want to focus on actually giving customer value instead of trying to figure out, the mechanics underneath the hood. >> I mean, you certainly get the value of cloud software, right, and cloud ERP. Who doesn't? Like out of the industries that you're trying to, get in front of or whose attention you're trying to get. Where's the, if there's someone that's kicking and screaming a little bit, who might that be or what might that be? >> I don't think that there's a specific industry, if you will, I think some industries, in fact, and when I think about it, all industries are getting disrupted, right? If they don't, they're actually getting left behind. So, I think some industries feel it more, as in, they might be behind the curve. And I wouldn't necessarily say industry, maybe some of the companies in that industry. >> Companies within? >> Yeah, are waking up to it. I went to a Gartner Supply Chain Conference a couple years ago and they were talking about bimodal supply chain, right. You have the teams that are doing the old way and then companies that are doing the new way. And companies are literally going through this shift. And I had this interesting conversation that it's really not bimodal. 
Companies are essentially somewhere in that spectrum and what they need to do is figure out from point A to point B and how you make that transition. It's a huge transition. I would also say that there's a cultural element as well, and so one of the key things that, especially for companies that are moving from on-prem to cloud. As a provider, it's really important to realize it's a completely different business model. And it's not always talked about, again, a lot of times people think, oh well, you know, Infor, you just moved the software into AWS and you're calling it SaaS. It's more than that. Besides the capabilities, its a huge cultural shift that even Charles talked about on-stage, which is that, software companies you focus on the product, versus, as a SaaS, the last 'S', Software as a Service, you are focusing on the service. So, the analogy I use a lot is, maybe we were actually a food company, we'd build beautiful food, delicious food, nutritious food, maybe it was a rotisserie chicken, right? But now I switch to a restaurant. Food is only table stakes. And you know, restaurant reviews is about services, the ambiance, how quickly you respond, how clean it is, all these other elements matter. And if you think about Infor or any other company for that matter, that we're focused on product and software, to then becoming a SaaS service provider, it's a huge transformation for a company, and I can tell you we're going through that, right? Infor as an on-prem company moving to the SaaS, and there's so much focus now on customer experience, is because realizing that we're no longer a software company, we're a Software as a Service company. And there's a lot more we need to put in, in terms of making sure the customer experience is good. 
As our customers go through the same journey, they also need to realize, it's no longer about providing that product, but the experience that they're providing to the customers, and we see our customers actually going through that journey. Some might be harder to move within whatever industry, because maybe they have legacy product, legacy machines, right, to be able to lift and ship to quickly. But there's definitely a path, and if you think about some of these industries that's been around for a long time, they're definitely going through this transition, and in fact, I think they have to. >> So how did you set priorities in terms of, you come to that recognition that we're services, in the cloud. Luckily, you don't have to manage data centers, so you could take that off the table, so what were your priorities and where did you start, and what are you focused on now? >> One of the first things that I did was really pushing this cultural shift for the company, because a lot of people, some people may think, okay, it's software, I'm putting in the AWS, it's cloud. But all the other service elements, like that restaurant analogy, it wasn't mature in terms of where we needed to be and therefore you hear a lot about customer experience and customer success and a lot of these elements that we really have to put more emphasis on. But the other areas that I focused, so I came in, I focused on cloud operations, security, tooling, and architecture, that was the set that I was focused on. What I did was essentially transformation, right, it's People Process Technology in addition to culture, so culture we already talked about, the sense of urgency is very different as well. On-prem, maybe you don't have to respond in two seconds, but in cloud, you do, and so making sure that we had crisp KPIs, which are different than on-prem, making sure that processes were completely redefined. 
I've actually done benchmark with our competition to see that our SLAs and KPIs are either on par or better. I'm a big proponent of engineering and technology, so we built a lot of technology monitoring, tooling, so that we can do a lot more in terms of self-service and automation, that's really the only way to scale, and execute consistently. Spent a lot of time over the last year, literally redefining the identity of our jobs to how do we make sure we have the right skillset, and retraining some of the folks who may have a new identity and they need to learn new skills, to coming up with new tools and technologies that they can use, to changing our processes so we can up our SLA and make sure that we're either meeting or beating our customers' SLAs, complete transformation in the last year. >> You must be exhausted. (laughs) >> When do you sleep? >> I don't sleep much, but... >> You must not. >> So, new metrics, this is intriguing to me. Can you give us an example of sort of this, new KPIs as a result of this cloud, SaaS world? >> Yeah, for sure. I think every company has their own sort of core KPIs that are public, and cloud is usually uptime, right? If you have support, it could be how quickly you respond, we call it mean time to respond. Underneath the hood, I've created key KPIs for, what I call, critical cloud qualities. One is, of course, reliability, so that would be in addition to uptime, like 99.7%, which is two hours and 11 minutes, by the way, per month downtime, so making sure that we're actually meeting that. >> Sorry, just to interrupt. >> Yeah. >> You're measuring from the application view, right, not the green light on the server, is that fair? >> That's a great question, because that is exactly the evolution we want as well, so when I talk about the transformation at my organization, we were measuring the hardware first. We are now measuring, essentially, outages.
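That shift from watching hardware to watching what customers experience can be sketched as a probe against the user-facing path (the probe callable and the timeout are hypothetical, not Infor's actual monitoring):

```python
import time

def check_outage(probe, timeout_s=5.0):
    """An outage is 'customers can't log in', not 'server light is off'.
    `probe` is any callable exercising the user-facing path (for example
    an HTTP login request); it returns True on success."""
    start = time.monotonic()
    try:
        ok = probe()
    except Exception:
        ok = False
    elapsed = time.monotonic() - start
    # Too slow counts as down too, even if the probe eventually succeeds.
    return {"outage": (not ok) or elapsed > timeout_s,
            "elapsed_s": elapsed}

def broken_login():
    # The server is up, but the login process behind it has died.
    raise ConnectionError("login service not responding")

print(check_outage(broken_login)["outage"])  # True
print(check_outage(lambda: True)["outage"])  # False
```

The hardware view would report both cases green; the application view correctly flags the first one as an outage.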
So I don't care if the server's still running, but if the customers can't log in, it's an outage, right? But that's not something you can monitor by looking at a server because sometimes the server's up and running. But maybe a process went down. >> System's fine. >> Exactly. So that's the monitoring-- >> Okay, so slight adjustment in the typical metrics, sorry to interrupt, but please carry on. >> That was a perfect question. >> Okay. >> So KPIs, so underneath the hood, so here are some examples of metrics for availability. Mean time to detect, that's an internal metric, and my internal metric is five minutes, meaning, if you don't know we have an issue in five minutes, it's probably not automated and monitored, so we better hook up some additional monitoring as an example. Mean time to respond, that's a very public one, a lot of times, customers demand that, and if you look at competition, that is the only metric that's actually public, potentially even on a contract, right? So we have mean time to respond, we also have mean time to resolution, that's usually an internal metric. I'm sure competition has that as well, but making sure that we have that response right away, because it's one thing to respond, but if it's not resolved as quickly, it's not good. Other metrics when it comes to reliability, mean time to communicate. And this is really interesting. One of the things that I found was, we could be working on something but we're not telling the customers, so they're wondering if we're actually sleeping on the job, even though we're actually actively working in the background, right? >> Did they get the message, right? >> Right, so mean time to communicate, as an example of reliability metrics. So reliability is one of the core tenets. The other tenets? Performance, how quickly do you respond, right? And I always say that if performance is too long, it's equivalent of being down. 
Imagine if you're using Google and you put a search in, and it takes you three minutes to get a response time, you probably have left by then. So that performance, page load time, page response time, these response times actually matter. So we have actually metrics around that and we monitor and manage them. Security, we have a boatload of security KPIs, whether it's number of critical vulnerabilities, how quickly we respond to security incidents, a boatload of those as well, and then, last but not least, agility. So how quickly we can respond if we have to do a deployment. So what that means is, let's say, every software company has a bug, and let's say we actually had to quickly respond to that, can we do it within 24 hours if we needed to? Security is a perfect example. A mature company should be able to say, okay, there was a security alert that got to the industry, right? We should be able to quickly respond to that and apply a patch immediately and address it. A company that may not be so mature, it might take them months to go through thousands of machines. So I call that time to market, how quickly can we actually deploy something, and that's not just deploying it, but testing it and making sure it's not going to break anything and be able to test it and verify it. So these are examples of metrics-- >> Great examples. Are your SLAs... for a SaaS company, your SLAs presumably have to be more strict than you'll contractually agree to, but maybe not, then your typical SLA out of AWS or Google, or Microsoft Azure. Is that true? >> Yes. >> So you guys will commit contractually to these types of SLAs that you would expect in an enterprise, versus kind of the standard, off-the-shelf AWS SLA, and how do you reconcile the gap or do you have a different agreement with AWS? >> We do have a... The SLA is pretty much standard when it comes to AWS specifically, right? >> 'Cause they want-- >> Yeah. >> Homogeneity. >> Exactly. 
So I think the challenge is, every SaaS provider needs to architect around it and when you think about it, hardware failure rate is usually 4% industry-wide. You can expect the hardware will go down, right? >> Yep. >> Network goes down, various things go down. So then it's our job that sits on top of it, to make sure that we build it for reliability. Perhaps we actually have redundancy built-in, and we can actually go from one side to the other, we have that, for example. So if AWS goes down, and they do, all right? I ran data centers for many, many years, it happens. It's our job to make sure that we can fail over it, and not have that customer experience, so it's an overlay availability that we have to build-- >> You're architecting recovery into the system, I know we're tight on time, but I got to ask you, 'cause Pam couldn't make it today. You're part of the WIN, the Women Infor Network, I presume, right? >> Yes. >> So maybe we can just talk a little about that-- >> Yeah. >> It's a great topic. >> Women in technology, right? >> I got some of the best interviews at Infor shows with women, Deborah Norville came on, Naomi Tutu, Lara Logan. Just some awesome folks, but so-- >> So your thoughts, we know you're passionate about the role of women in technology, so how you feel about that, if you want, and Infor, what's being done, or what can be done about that? >> Great questions. So I'm a big proponent of women in technology. Partly because I went through my pain, right, I've always been a small percentage in terms of engineering role as female in technology. I'm also a board member of Girls in Tech, and I channel my energy that way as well as I try to mentor and help others, for example, mentoring engineering students at Berkeley. I'm a Berkeley alum. And I think it's really important that we get more women in technology and keep in them in technology, and candidly, our latest trend is actually going down. 
So the reason why I think it's important, besides making sure that everybody has a chance, and all those good reasons, we have statistics that actually show, the more diversity you have, the better your product is going to be, and the better it's actually going to hit your top line revenue. And over and over again, whether it's women in the board seat, or women executives, or women engineers, no matter where, by getting women's input into technology, you're actually representing 50% of the consumer base. >> The user base, right. >> Right and so, if we don't do that as a company, we're actually not going to be able to get the user base feedback and I think it's so really important, not only for the economy to have those wonderful workforce in the job, but also for the company products to actually reflect the user's needs and actually improve the revenue, right? So from that perspective, I think it's really important, I love the fact that at Infor, we do a couple of things when it comes to diversity. So one, is WIN, as you know, Women Infor Network. I think it's a fabulous program, and in fact, I get a lot of male colleagues saying they want to join WIN, and they do. My last session, there were actually women and men joining it, because it's really about leadership and how do we cultivate our next, next talented workforce to be successful. The other one is EAP, the Infor Education Alliance Program, so that not only looks at women, but just diversity, right, and bringing students into this workforce. I think it's a great way to help the economy, help the products, help the company. And at the end of the day, why not? >> You're awesome, super impressive and articulate, and really self-confident, and hopefully an inspiration for young women out there watching, so thank you so much, really appreciate it. >> And hope you get some sleep sometime too. (laughing) >> Thank you. >> Busy, busy schedule. All right, thank you. Thank you Mayumi. 
We're back with more here on theCUBE, you are watching us live in Washington, D.C., and we'll be right back. (upbeat music)

Published Date : Sep 27 2018


Josh Rogers, Syncsort | theCUBE NYC 2018


 

>> Live from New York, it's theCUBE, covering theCUBE New York City 2018. Brought to you by SiliconANGLE Media and its ecosystem partners. >> Okay, welcome back, everyone. We're here live in New York City for CUBE NYC. This is our ninth year covering the big data ecosystem, now it's AI, machine-learning, used to be Hadoop, now it's growing, ninth year covering theCUBE here in New York City. I'm John Furrier, with Dave Vellante. Our next guest, Josh Rogers, CEO of Syncsort. I'm going back, long history in theCUBE. You guys have been on every year. Really appreciate chatting with you. Been fun to watch the evolution of Syncsort and also get the insight. Thanks for coming on, appreciate it. >> Thanks for having me. It's great to see you. >> So you guys have constantly been on this wave, and it's been fun to watch. You guys had a lot of IP in your company, and then just watching you guys kind of surf the big data wave, but also make some good decisions, made some good calls. You're always out front. You guys are on the right parts of the wave. I mean now it's cloud, you guys are doing some things. Give us a quick update. You guys got a brand refresh, so you got the new logo goin' on there. Give us a quick update on Syncsort. You got some news, you got the brand refresh. Give us a quick update. >> Sure. I'll start with the brand refresh. We refreshed the brand, and you see that in the web properties and in the messaging that we use in all of our communications. And, we did that because the value proposition of the portfolio had expanded so much, and we had gained so much more insight into some of the key use cases that we're helping customers solve that we really felt we had to do a better job of telling our story and, probably most importantly, engage with the more senior level within these organizations. 
What we've seen is that when you think about the largest enterprises in the world, we offer a series of solutions around two fundamental value propositions that tend to be top of mind for these executives. The first is how do I take the 20, 30, 40 years of investment in infrastructure and run that as efficiently as possible. You know, I can't make any compromises on the availability of that. I certainly have to improve my governance and security of that environment. But, fundamentally, I need to make sure I could run those mission-critical workloads, but I need to also save some money along the way, because what I really want to do is be a data-driven enterprise. What I really want to do is take advantage of the data that gets produced in these transactional applications that run on my AS/400 or IBM i environment, my mainframe environment, even in my traditional data warehouse, and make sure that I'm getting the most out of that data by analyzing it in a next-generation set of-- >>
And so, what we see in terms of use cases is how do we help customers understand how to monitor that, the performance of those applications. If I have a tier that's sitting on the cloud, but it's transacting with the mainframe behind the firewall, how do I get an end-to-end view of application performance? How do I take the data that ultimately gets logged in a DB2 database on the mainframe and make that available in a next-generation repository, like Hadoop, so that I can do advanced analytics? When you think about solving both the optimization and the integration challenge there, you need a lot of expertise on both sides, the old and the new, and I think that's what we uniquely offer. >> You guys done a good job with integration. I want to ask a quick question on the integration piece. Is this becoming more and more table stakes, but also challenging at the same time? Integration and connecting systems together, if they're stateless, is no problem, you use APIs, right, and do that, but as you start to get data that needs state information, you start to think about some of the challenges around different, disparate systems being distributed, but networked, in some cases, even decentralized, so distributed networking is being radically changed by the data decisions on the architecture, but also integration, call it API 2.0 or this new way to connect and integrate. 
Because really that's the fundamental partner we're trying to be to our customers is we will help you solve the integration challenge between this infrastructure you've been building for 30 years and this next-generation technology that lets you get the next leg of value out of your data. >> So Josh, when you think about the evolution of this whole big data space, the early narrative in the trade press was, well, NoSQL is going to replace Oracle and DB2, and the data lake is going to replace the EDW, and unstructured data is all that matters, and so forth. And now, you look at what's really happened is the EDW is a fundamental component of making decisions and insights, and SQL is the killer app for Hadoop. And I take an example of say fraud detection, and when you think and this is where you guys sit in the middle from the standpoint of data quality, data integration, in order to do what we've done in the past 10 years take fraud detection down from well, I look at my statement a month or two later and then call the credit card company, it's now gone to a text that's instantaneous. Still some false positives, and I'm sure working on that even. So maybe you could describe that use case or any other, your favorite use case, and what your role is there in terms of taking those different data sources, integrating them, improving the data quality. >> So, I think when you think about a use case where I'm trying to improve the SLA or the responsiveness of how I manage against or detect fraud, rather than trying to detect it on a daily basis, I'm trying to detect it at transaction time. The reality is you want to leverage the existing infrastructure you have. So if you have a data warehouse that has detailed information about transaction history, maybe that's a good source. 
If you have an application that's running on the mainframe that's doing those transactions in realtime, the ultimate answer is how do I knit together the existing infrastructure I have and embed the additional intelligence and capability I need from these new capabilities, like, for example, using Kafka, to deliver a complete solution. What we do is we help customers kind of tie that together. Specifically, we announced this integration I mentioned earlier where we can take a changed data element in a DB2 database and publish it into Kafka. That is a key requirement in delivering this real-time fraud detection if I in fact am running transactions on a mainframe, which most of the banks are. >> Without ripping and replacing >> Why would you want to rip out an application >> You don't. >> your core customer file when you can just extend it. >> And you mentioned the Cloudera 6 certification. You guys have been early on there. Maybe talk a little about that relationship, the engineering work that has to get done for you to be able to get into the press release day one. >> We just mentioned that my first time on theCUBE was in 2013, and that was on the back of our initial product release in the big data world. When we brought the initial DMX-h release to market, we knew that we needed to have deep partnerships with Cloudera and the key platform providers. I went and saw Mike Olson, I introduced myself, he was gracious enough to give me an hour, and explain what we thought we could do to help them develop more value proposition around their platform, and it's been a terrific relationship. Our architecture and our engineering and product management relationship is such that it allows us to very rapidly certify and work on their new releases, usually within a couple of days. 
Not only can customers take advantage of that, which is pretty unique in the industry, but we get some visibility from Cloudera as evidenced by Tendu's quote in the press release that was released this week, which is terrific. >> Talk about your business a little bit. You guys are like a 50-year old startup. You've had this really interesting history. I remember you from when I first started in the industry following you guys. You've restructured the company, you've done some spin outs, you've done some M and A, but it seems to be working. Talk about growth and progress that you're making. >> We're the leader in the Big Iron to Big Data market. We define that as allowing customers to optimize their traditional legacy investments for cost and performance, and then we help them maximize the value of the data that gets generated in those environments by integrating it with next-generation analytic environments. To do that, we need a broad set of capability. There's a lot of different ways to optimize existing infrastructure. One is capacity management, so we made an acquisition about a year ago in the capacity management space. We're allowing customers to figure out how do I make sure I've got not too much and not too little capacity. That's an example of optimization. Another area of capability is data quality. If I'm maximizing the value of the data that gets produced in these older environments, it would be great that when it lands in these next-generation repositories it's as high quality as possible. We acquired Trillium about a year ago, or actually coming up >> How's that comin'? >> on two years ago and we think that's a great capability for our customers. It's going terrific. We took their core data quality engine, and now it runs natively on a distributed Hadoop infrastructure. We have customers leveraging it to deliver unprecedented volume of matching, so not only breakthrough performance, but this whole notion of write once, run anywhere. 
I can run it on an SMP environment. I can run it on Hadoop. I can run it on Hadoop in the cloud. We've seen terrific growth in that business based on our continued innovation, particularly pointing it at the big data space. >> One of the things that I'm impressed with you guys is you guys have transformed, so having a transformation message to your customers is you have a lot of credibility, but what's interesting is that the world with containers and Kubernetes now and multi-cloud, you're seeing that you don't have to kill the legacy to bring in the new stuff. You can see you can connect systems, what you guys have done with legacy systems, look at connecting the data. You don't have to kill that to bring in the new. >> Right >> You can do cloud-native, you can do some really cool things. >> Right. I think there's-- >> This rip and replace concept is kind of going away. You put containers around it too. That helps. >> Right. It's expensive and it's risky, so why do that. I think that's the realization. The reality is that when people build these mission-critical systems, they stay in place for not five years, but 25 years. The question is how do you allow the customers to leverage what they have and the investment they've made, but take advantage of the next wave, and that's what we're singularly focused on, and I think we're doing a great job of that, not just for customers, but also for these next-generation partners, which has been a lot of fun for us. >> And we also heard people doing analytics they want to have their own multi-tenant, isolated environments, which goes to don't screw this system up, if it's doing a great job on a mission-critical thing, don't bundle it, just connect it to the network, and you're good. >> And on the cloud side, we're continuing to look at our portfolio and say what capabilities will customers want to consume in a cloud-delivery model. We've been doing that in the data quality space for quite a while. 
We just launched and announced, about three months ago, capacity management as a service. You'll continue to see, both on the optimization side and on the integration side, us continuing to deliver new ways for customers to consume the capabilities they need. >> That's a key thing for you guys, integration. That's pretty much how you guys put the stake in the ground and engineer your activities around integration. >> Yeah, we start with the premise that you're going to need to continue to run these older investments that you made, and you're going to need to integrate the new stuff with that. >> What's next? What's goin' on the rest of the year with you guys? >> We'll continue to invest heavily in the realtime and changed-data capture space. We think that's really interesting. We're seeing a tremendous amount of demand there. We've made a series of acquisitions in the security space. We believe that the ability to secure data in the core systems and its journey to the next-generation systems is absolutely critical, so we'll continue to invest there. And then, I'd say governance, that's an area that we think is incredibly important as people start to really take advantage of these data lakes they're building, they have to establish real governance capabilities around those. We believe we have an important role to play there. And there's other adjacencies, but those are probably the big areas we're investing in right now. >> Just continuing to move the ball down the field in the Syncsort cadence of acquisitions, organic development. Congratulations. Josh, thanks for comin' on. To Josh Rogers, CEO of Syncsort, here inside theCUBE. I'm John Furrier with Dave Vellante. Stay with us for more big data coverage, AI coverage, cloud coverage here. Part of CUBE NYC, we're in New York City live. We'll be right back after this short break. Stay with us. (techno music)
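The changed-data capture flow Josh describes — a changed element in an older system of record captured and published as an event for consumers like Kafka or Kinesis — can be sketched roughly like this. The in-memory `bus` list and the function names are illustrative stand-ins, not Syncsort's actual API; a plain dict stands in for a DB2 table.

```python
import json

bus = []  # stand-in for a Kafka or Kinesis topic

def capture_change(table: dict, key: str, new_row: dict) -> None:
    """Apply an update and publish the before/after images as a change event."""
    before = table.get(key)
    table[key] = new_row
    bus.append(json.dumps({"key": key, "before": before, "after": new_row}))

# A dict stands in for a DB2 table of accounts.
accounts = {"42": {"balance": 100}}
capture_change(accounts, "42", {"balance": 25})  # a withdrawal hits the system of record

# A downstream fraud detector consuming the topic sees the change at
# transaction time, rather than in a nightly batch.
event = json.loads(bus[0])
print(event["after"]["balance"])  # → 25
```

In a real deployment the `bus.append` would be a producer send against a topic fed by log-based change data capture on the source database; the pattern — before/after images delivered as they happen — is the same.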

Published Date : Sep 17 2018


Dave Rensin, Google | Google Cloud Next 2018


 

>> Live from San Francisco, it's The Cube. Covering Google Cloud Next 2018 brought to you by Google Cloud and its ecosystem partners. >> Welcome back everyone, it's The Cube live in San Francisco. At Google Cloud's big event, Next 18, GoogleNext18 is the hashtag. I'm John Furrier with Jeff Frick, our next guest, Dave Rensin, director of CRE and network capacity at Google. CRE stands for Customer Reliability Engineering, not to be confused with SRE which is Google's heralded program Site Reliability Engineering, category changer in the industry. Dave, great to have you on. Thanks for coming on. >> Thank you so much for having me. >> So we had a meeting a couple months ago and I was just so impressed by how much thought and engineering and business operations have been built around Google's infrastructure. It's a fascinating case study in history of computing, you guys obviously power yourselves and the Cloud is just massive. You've got the Site Reliability Engineer concept that now is, I won't say is a boiler plate, but it's certainly the guiding architecture for how enterprise is going to start to operate. Take a minute to explain the SRE and the CRE concept within Google. I think it's super important that you guys, again pioneered, something pretty amazing with the SRE program. >> Well, I mean, like everything it was just formed out of necessity for us. We did the calculation 12 or 13 years ago, I think. We sat down with a piece of paper and we said, well, the number of people we need to run our systems scales linearly with the number of machines, which scales linearly with the number of users, and the complexity of the stuff you're doing. Alright, carry the two divide by six, plot line. In ten years, now this is 13 or 14 years ago, we're going to need one million humans to run Google. And that was at the growth and complexity of 10 years ago or 12 years ago. >> Yeah, Search. (laughs) >> Search, right? 
We didn't have Android, we didn't have Cloud, we didn't have Assistant, we didn't have any of these things. We were like, well that's not going to work. We're going to have to do something different and so that's kind of where SRE came from. It's like, how do we automate, the basic philosophy is simple, give to the machines all the things machines can do. And keep for the humans all the things that require human judgment. And that's how we get to a place where like 2,500 SREs run all of Google. >> And that's massive and there's billions and billions of users. >> Yeah. >> Again, I think this is super important because at that time it was a telltale sign for you guys to wake up and go, well I can't get a million humans. But it's now becoming, in my opinion, what this enterprise is going through in this digital transformation, whatever we call it these days, consumerization of IT, now it's digital transfor-- Whatever it is, the role of the human-machine interaction is now changing, people need to do more. They can collect more data than ever before. It doesn't cost them that much to collect data. >> Yeah. >> We just heard from the BigQuery guys, some amazing stuff happening. So now enterprises are almost going through the same changeover that you guys had to go through. And this is now super important because now you have the tooling and the scale that Google has. And so it's almost like it's a level up fast. So, how does an enterprise become SRE-like, quickly, to take advantage of the Cloud? >> So, you know, I would like to say this is all sort of a deliberate march of a multi-year plan. But it wasn't, it was a little accidental. Starting two or three years ago, companies were asking us, they were saying, we're getting mired in toil. Like, we're not being able to innovate because we're spending all of our budget and effort just running the things and turning the crank. How do you have billions of users and not have this problem? We said, oh we use this thing called SRE. 
And they're like please use more words. And so we wrote a book. Right? And we expected maybe 20 people would read the book, and it was fine. And we didn't do it for any other reason other than that seemed like a very scalable way to tell people the words. And then it all just kind of exploded. We didn't expect that it was going to be true and so a couple of years ago we said, well, maybe we should formalize our interactions of, we should go out proactively and teach every enterprise we can how to do this and really work with them, and build up muscle memory. And that's where CRE comes from. That's my little corner of SRE. It's the part of SRE that, instead of being inward focused, we point out to companies. And our goal is that every firm from five to 50 thousand can follow these principles. And they can. We know they can do it. And it's not as hard as they think. The funny thing about enterprises is they have this inferiority complex, like they've been told for years by Silicon Valley firms in sort of this derogatory way that, you're just an enterprise. We're the innovate-- That's-- >> Buy our stuff. Buy our software. Buy IT. >> We're smarter than you! And it's nonsense. There are hundreds and hundreds of thousands of really awesome engineers in these enterprises, right? And if you just give them a little latitude. And so anyway, we can walk these companies on this journey and it's been, I mean you've seen it, it's just been snowballing the last couple of years. >> Well the developers certainly have changed the game. We've seen with Cloud Native the role of developers doing toil and, or specific longer term projects at an app related IT would support them. So you had this traditional model that's been changed with agile et cetera. And dev ops, so that's great. So you know, golf clap for that. Now it's like scale >> No more than a golf clap it's been real. >> It's been a high five. Now it's like, they got to go to the next level. 
The next level is how do you scale it, how do I get more apps, how am I going to drive more revenue, not just reduce the cost? But now you got operators, now I have to operate things. So I think the persona of what operating something means, what you guys have hit with SRE, and CRE is part of that program, and that's really I think the aha moment. So that's where I see, and so how does someone read the book, put it in practice? Is it a cultural shift? Is it a reorganization? What are you guys seeing? What are some of the successes that you guys have been involved in? >> The biggest way to fail at doing SRE is to try to do all of it at once. Don't do that. There are a few basic principles, that if you adhere to, the rest of it just comes organically at a pace that makes sense for your business. The easiest thing to think of, is simply-- If I had to distill it down to a few simple things, it's just this. Any system involving people is going to have errors. So any goal you have that assumes perfection, 100% uptime, 100% customer satisfaction, zero error, that kind of thing, is a lie. You're lying to yourself, you're lying to your customers. It's not just unrealistic, it's, in a way, kind of immoral. So you got to embrace that. And then that difference between perfection and the amounts, the closeness to perfection that your customers really need, cuz they don't really need perfection, should be just a budget. We call it the error budget. Go spend the budget because above that line your customers are indifferent, they don't care. And that unlocks innovation. >> So this is important, I want to just make sure I slow down on this, error budget is a concept that you're talking about. Explain that, because this is, I think, interesting. Because you're saying it's bs that there's no errors, because there's always errors, right? >> Sure. 
>> So you just got to factor in and how you deal with them is-- But explain this error budget, because this operating philosophy of saying deal with errors, so explain this error budget concept. >> It comes from this observation, which is really fascinating. If you plot reliability and customer satisfaction on a graph what you will find is, for a while as your reliability goes up, your customer satisfaction goes up. Fantastic. And then there's a point, a magic line, after which you hit this really deep knee. And what you find is if you are much under that line your customers are angry, like pitchforks, torches, flipping cars, angry. And if you operate much above that line they are indifferent. Because the network they connect with is less reliable than you. Or the phone they're using is less reliable than you. Or they're doing other things in their day than using your system, right? And so, there's a magic line, actually there's a term, it's called an SLO, Service Level Objective. And the difference between perfection, 100%, and the line you need, which is very business specific, we say treat as a budget. If you overspend your budget your customers aren't happy cuz you're less reliable than they need. But if you consistently underspend your budget, because they're indifferent to the change and because it is exponentially more expensive for incremental improvement, that's literally resources you're wasting. You're wasting the one resource you can never get back, which is time. Spend it on innovation. And just that mental shift that we don't have to be perfect, lets people do open and honest, blameless postmortems. It lets them embrace their risk in innovation. We go out of our way at Google to find people who accidentally broke something, took responsibility for it, redesigned the system so that the next unlucky person couldn't break it the same way, and then we promote them and celebrate them. 
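The arithmetic behind the error budget Dave describes can be sketched in a few lines: the budget is simply the gap between perfection and the SLO, and downtime "spends" it. The 99.9% SLO and 30-day window below are illustrative numbers, not Google's.

```python
def error_budget_minutes(slo: float, window_minutes: int = 30 * 24 * 60) -> float:
    """Allowed 'badness' per 30-day window: the gap between 100% and the SLO."""
    return (1.0 - slo) * window_minutes

def budget_remaining(slo: float, downtime_minutes: float) -> float:
    """What's left to spend on risky launches and experiments this window."""
    return error_budget_minutes(slo) - downtime_minutes

# A 99.9% SLO buys about 43.2 minutes of downtime per 30 days...
print(round(error_budget_minutes(0.999), 1))  # → 43.2
# ...so after 10 minutes of outages, 33.2 minutes remain for innovation.
print(round(budget_remaining(0.999, 10), 1))  # → 33.2
```

Positive remaining budget is the license to ship; a consistently untouched budget is, as Dave puts it, wasted time.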
>> So you push the error budget but then it's basically a way to do some experimentation, to do some innovation >> Safely. >> Safely. And what you're saying is, obviously the line of unhappy customers, it's like Gmail. When Gmail breaks people are like, the world freaks out, right? But, I'm happy with Gmail right now. It's working. >> But here's the thing, Gmail breaks very, very little. Very, very often. >> I never noticed it breaking. >> Will you notice the difference between 10 milliseconds of delivery time? No, of course not. Now, would you notice an hour or whatever? There's a line, you would for sure notice. >> That's the SLO line. >> That's exactly right. >> You're also saying that if you try to push above that, it costs more and there's not >> And you don't care >> An incremental benefit >> That's right. >> It doesn't affect my satisfaction. >> Yeah, you don't care. >> I'm at nirvana, now I'm happy. >> Yeah. >> Okay, and so what does that mean now for putting things in practice? What's the ideal error budget, that's an SLO? Is that part of the objective? >> Well that's part of the work to do as a business. And that's part of what my team does, is help you figure out is, what is the SLO, what is the error budget that makes sense for you for this application? And it's different. A medical device manufacturer is going to have a different SLO than a bank or a retailer, right? And the shapes are different. >> And it's interesting, we hear SLA, the Service Level Agreement, it's an old term >> Different things. >> Different things, here objective, if I get this right, is not just about speed and feeds. There's also qualitative user experience objectives, right? So, am I getting that right? >> Very much so. SLOs and SLAs get confused a lot because they share two letters. But they don't mean anywhere near the same thing. An SLA is a legal agreement. It's a contract with your user that describes a penalty if you don't meet a certain performance. 
Lawyers, and sometimes sales or marketing people, drive SLAs. SLOs are different things, driven by engineers. They are quantitative measures of your users' happiness right now. And exactly to your point, it's always from the user's perspective. Like, your user does not care if the CPU in your fleet spiked, or the memory usage went up x. They care: did my mail delivery slow down? Or is my load balancer not serving things? So, focus from your user backwards into your systems and then you get much saner things to track. >> Dave, great conversation. I love the innovation, I love the operating philosophy, cuz you're really nailing it in terms of you want to make people happy but you're also pushing the envelope. You want to use these error budgets so we can experiment and learn, and not repeat the same mistake. That sounds like automation to me. But I want you to take a minute to explain what SRE is; that's an inward facing thing for Google. You are called a CRE, Customer Reliability Engineer. Explain what that is, because I heard Diane Greene saying, we're taking a vertical focus. She mentioned healthcare. Seems like Google is starting to get in, and applying a lot of resources to the field, to customers. What is a CRE? What does that mean? How is that a part of SRE? Explain that. >> So a couple of years ago, when I was first hired at Google, I was hired to build and run Cloud support. And one of the things I noticed, which you notice when you talk to customers a lot, is, you know, the industry's done a really fabulous job of telling people how to get to Cloud. I used to work at Amazon. Amazon does a fantastic job telling people, how do you get to Cloud? How do you build a thing? But we're awful, as an industry, about telling them how to live there. How do you run it? Cuz it's different running a thing in a Cloud than it is running it On-Prem. And you find that's the cause of a lot of friction for people.
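Dave's "from the user backwards" point can be made concrete: an SLI counts good user-visible events over total events, and the error budget is spent by the bad ones. A hypothetical sketch (function names invented for illustration, not any real Google or Stackdriver API):

```python
def availability_sli(outcomes):
    # Good events over total events, measured from the user's side:
    # a request either worked for the user or it didn't. CPU or memory
    # spikes never show up here unless the user actually felt them.
    good = sum(1 for ok in outcomes if ok)
    return good / len(outcomes)

def budget_spent(slo, outcomes):
    # Fraction of the error budget consumed by the observed failures.
    allowed_bad = len(outcomes) * (1.0 - slo)
    bad = sum(1 for ok in outcomes if not ok)
    return bad / allowed_bad

requests = [True] * 999 + [False]   # 1 failed request out of 1,000
sli = availability_sli(requests)    # 0.999 from the user's perspective
spent = budget_spent(0.999, requests)  # the whole 99.9% budget is gone
```

Tracking `budget_spent` rather than raw machine metrics is what makes the resulting alerts and dashboards "saner things to track": they only fire when users are measurably less happy than the business agreed to tolerate.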
Not that they built it wrong, but they're just operating it in a way that's not quite compatible. It's a few degrees off. And so we have this notion of, well, we know how to operate these things at scale; that's what SRE is. What if, what if, we did a crazy thing? We took some of our SREs and instead of pointing them in at our production systems, we pointed them out at customers? Like, what if we genetically screened our SREs for "can talk to human" instead of "can talk to machine," which is what you optimize for when you hire an engineer? And so we started CRE; it's this part of our SRE org that we point outwards to customers. And our job is to walk that path with you and really do it together; sometimes we go so far as even to share a pager with you, and really get you to that place where your operations look a lot like ours and we're talking that same language. >> It's custom too, you're looking at their environment. >> Oh yeah, it's bespoke. And then we also try to do scale things. We did the first SRE book. At the show just two days ago we launched the companion volume to the book, which is like-- cheap plug segment-- the implementation details. The first book's sort of a set of principles; these are the implementation details. Anything we can do to close that gap. I don't know if I ever told you the story, but when I was a little kid, when I was like six, like 1978, my dad, who's always loved technology, decided he was going to buy a personal computer. So he went to the largest retailer of personal computers in North America, Macy's in 1978, (laughs) and he came home with two things. He came home with a huge box and a human named Fred. And Fred the human unpacked the big box and set up the monitor, and the tape drive, and the keyboard, and told us about hardware and software and booting up, because who knew any of these things in 1978? And it's a funny story that you needed a human named Fred. My view is, I want to close the gap so that CREs are the Freds.
Like, in a few years it'll be funny that you would ever need humans, from Google or anyone else, to help you learn how-- >> It's really helping people operate their new environment as a whole. It's a new first generation problem. >> Yeah. >> Essentially. Well, Dave, great stuff. Final question, I want to get your thoughts. Great that we can have this conversation. You should come to the studio and go deeper on this; I think it's a super important and new role with SREs and CREs. But the show here, if you zoom out and look at Google Cloud, look down on the stage at what's going on this week, what's the most important story that should be told coming out of Google Cloud? Across all the announcements, what's the most important thing that people should be aware of? >> Wow, I have a definite set of biases, I won't lie. To me, the three most exciting announcements were GKE On-Prem, the idea of managed Kubernetes you can actually run in your own environment. People have been saying for years that hybrid wasn't really a thing. Hybrid's a thing, and it's going to be a thing for a long time, especially in enterprises. That's one. I think the introduction of machine learning to BigQuery; anything we can do to bring those machine learning tools into these petabytes-- I mean, you mentioned it earlier, we are now collecting so much data that not only can we not manage it as companies, we can't even hire enough humans to figure out the right questions. So that's a big thing. And then, selfishly, in my own view of it because of reliability, the idea that Stackdriver will let you set up SLO dashboards and SLO alerting; to me that's a big win too. Those are my top three. >> Dave, great to have you on. Our SLO at The Cube is to bring the best content we possibly can, the most interviews at an event, and get the data and share that with you live. It's The Cube here at Google Cloud Next 18. I'm John Furrier with Jeff Frick.
Stay with us, we've got more great content coming. We'll be right back after this short break.

Published Date : Jul 26 2018



Jayme Williams, TenCate | ZertoCON 2018


 

>> Announcer: Live from Boston, Massachusetts, it's the CUBE, covering ZertoCON 2018. Brought to you by Zerto. >> This is the CUBE, I'm Paul Gillin. We're here at ZertoCON 2018, the final day of ZertoCON, in Boston at the Hynes Convention Center. On the stage this morning with John Morency from Gartner was my next guest, Jayme Williams, Senior Systems Engineer at TenCate, talking about your experience with Zerto. Jayme, welcome, thanks for joining us. >> Jayme: Thank you very much. >> I'm sure a lot of people haven't heard of TenCate although it's a very big company. Tell us what the company does. >> We are a multi-national company; we are developers of processes and products. One business entity is protective fabrics; we're also in artificial turf and advanced composites, things like the Mars Lander, so TenCate actually has material on the planet Mars right now. So we're a multi-national, diverse company, based in textiles and textile processes. >> Very cool, and you're also a multi-cloud company from an IT perspective. One of the things you talked about this morning was moving to a current federation of, I think you said, seven different cloud providers you use. What is the strategy and the thinking behind that? >> So, we're shifting our model right now; we call it disentanglement. We're going from regional setups, where we were in the AMER, EMEA and APAC regions, to shifting each business entity to a global model. So each one of those global business units we had to disentangle, moving from our current infrastructure to a new infrastructure. We guide them, we try and help them and tell them what would be best suited to them, but some of them went with private cloud and some of them are using public clouds, and we have to disperse that infrastructure amongst whatever they so chose and help them along their journey to become a stand-alone business entity across the globe.
So, that could be AWS, it could be Azure; all of them are going to Office 365, but leveraging the technology to best serve the purposes of that specific business unit globally rather than regionally. >> And then it's your job at the back end to federate all these services. Many companies are just now beginning to think about adding a second cloud to their portfolio. What advice would you give a company that's looking at moving to multi-cloud? >> Very strong, knowledgeable partners that you can actually become friends with and have on speed-dial on your hip. Conferences like this are where you meet those people; if you come to something here you're going to run into somebody who has the same struggle as you, or you can help someone who's going to have the same struggle along the pathway. So, I think we should disseminate the information amongst ourselves in IT to help each other. It's a community of people; we've got to keep ourselves motivated and vital and relevant, and the only way to do that is by building up these partnerships. How did you do it, how did I do it, share that information so we don't all have to struggle through the same exact issues as we go along the journey or the path, whatever the business dictates. >> A lot of talk at the conference about resilience. What does resilience mean to TenCate?
>> So, it's gone from "we can do without this data for 24 hours, that's acceptable," to "12 hours, that's acceptable." Now it's an always-on world, with more and more millennials coming into the workplace too; it's a given that I can do work from anywhere, anytime, anyplace. So you've got to be resilient in your infrastructure and in your processes to make those things available to them. They're basically our customers; as an IT organization we're saying, "Here's the services we're offering to you," whether it's Office 365 or an on-prem business process. We've still got to guarantee that workers and colleagues can get to these services. So resilience is always having that service on, whatever SLA has to be implemented in order to meet those things and make them available to the workplace. The business flows, we're making money, we're profitable, and we're in good shape with the P&L. >> Now, obviously Zerto has been important to your IT strategy. Talk about your use of Zerto and what value it's delivered to your organization. >> So, we were an early adopter of Zerto; we weren't the first by any means, but we were an early adopter. When we started our cloud strategy we had a meeting, globally; the CIO says we are going to the cloud, to the cloud and beyond. I called Zerto, which was implemented just for the Americas at that time, and said, "What's the cloud? What do you recommend for the cloud?" And they actually came at that point in time and said, "We have some partners we're working with; one of them happens to be the data center that you're in." So they got me linked up. That was my first step into discovering what the cloud is, using Zerto as the reference, those partners again, those friendships that say utilize these guys. That's how we started getting our feet wet with the cloud. It was private, it was more controlled, and it also gave us a lot of comfort.
We could go to the guys there and say, "How do you do this, what happens if?", all of the what-if scenarios that really are easy and simple to answer, and it was put in front of us by Zerto. And as their product evolved, they started supporting replication into Azure. Let's go to Azure then, so we started replicating to Azure, we went to Office 365, and we of course still used those third-party private Zerto partners and used resources in their data centers. I think I've tried about every offering that Zerto has come out with, whether it be off-site backup or 30-day journaling, if only just to see what it is. When I find out that it works, I just keep it; it's a value-add any time they come out with something. You turn-key it, you get additional benefits. They evolve, they're agile as a company, so they can provide for and support us to be agile and pass that on down the line. >> Tell me about the journaling feature that you mentioned, how do you put that to use? >> So, we had all of our VPGs set up for 30 days, so I've got enough storage on-prem to give up to do 30-day journaling. Like Crypto-locker: unfortunately we were a recipient of Crypto-locker, so with the journaling feature, >> Paul: Crypto-locker being a prominent form of ransomware, >> Absolutely. Unfortunately, it's not one I want to raise my hand to having been witness to, but with Zerto, going back into the journal, I recovered to, I think it was, 10 seconds before the first hit, brought the environment back up: everybody, access your files, are you good to go, we're good to go. The end user doesn't know the technology, it's not their problem, but the feeling of morale, the team, the esprit de corps from being able to say, "We've just gotten hit by Crypto, let's fall back to ten seconds before it happened and let's go back to work." >> Paul: Phenomenal. How big was the attack?
>> Jayme: So, it took out a file server. We have a DFS file server infrastructure and it had rapidly worked its way all the way down through the DFS infrastructure, so we had to recover about a terabyte file server, scale it back, bring it back up. I won't say no one was the wiser, but when you say, "Let me reboot the server, try it now," it's back up. We're not calling for tapes, we're getting it back up instantly. >> Ransomware, of course, was the fastest growing malware of 2017. What have you done internally since then to prevent a recurrence of the attack? >> One thing that we absolutely did is go back and review who has access to what. Where did it come in at, where was the entry point, what can we do to remediate these things? Do specific production machines need access to talk to each other? Maybe they needed it once, but not now; we remediate those type of things. You extend the use of a product like Zerto to say, okay, we thought this was relevant; with this new information, with what happened to us as the scope widened, what else do we need to include that we can fall back on for journaling? And there's also a credibility hit and a morale hit to the team. So there's some PR that has to be done to the corporation, to the company, to say we are doing something. You know, we took a valid hit, but we are going to keep your confidence, and this is how we are going to do it: we're going to leverage a product and the knowledge we gained and fix it. When you show what you are doing, you keep the corporation's confidence in you. So it's not always just technical; there's PR, there's confidence from the business that you can do your job. There's a lot more behind ransomware than simply decrypting. >> I do understand that you spent eight years in the Marine Corps. >> Yes, sir. >> How did this prepare you for a job in IT? >> Oh, man, always charge towards the battle.
(both laughing) I don't like to wait, to my detriment perhaps. So if something new comes out, chances are I'm going to try it and ask forgiveness rather than permission. But I just like to get stuff done, and if I can get it done and then move onto something else and find new and interesting things to do, I'm going to play with that. If that solves the business purpose, so be it; let's implement it, let's move to the next one. So I like change; that's why I like IT. The job is never boring, because as we speak right here it's changing. Someone smart is thinking of something that's going to change and disrupt, and next week I get to go home and discover that myself, play with it, and possibly implement it. So I don't want to be sitting there dormant; this is the job for me. >> Great attitude. Jayme Williams, thank you so much for joining us. >> Yes, sir, thank you very much. >> Jayme Williams from TenCate. We'll be back from ZertoCON 2018. I'm Paul Gillin. This is the CUBE.

Published Date : May 24 2018



Day One Afternoon Keynote | Red Hat Summit 2018


 

[Music] [Music] [Music] [Music] ladies and gentlemen please welcome Red Hat senior vice president of engineering Matt Hicks [Music] welcome back I hope you're enjoying your first day of summit you know for us it is a lot of work throughout the year to get ready to get here but I love the energy walking into someone on that first opening day now this morning we kick off with Paul's keynote and you saw this morning just how evolved every aspect of open hybrid cloud has become based on an open source innovation model that opens source the power and potential of open source so we really brought me to Red Hat but at the end of the day the real value comes when were able to make customers like yourself successful with open source and as much passion and pride as we put into the open source community that requires more than just Red Hat given the complexity of your various businesses the solution set you're building that requires an entire technology ecosystem from system integrators that can provide the skills your domain expertise to software vendors that are going to provide the capabilities for your solutions even to the public cloud providers whether it's on the hosting side or consuming their services you need an entire technological ecosystem to be able to support you and your goals and that is exactly what we are gonna talk about this afternoon the technology ecosystem we work with that's ready to help you on your journey now you know this year's summit we talked about earlier it is about ideas worth exploring and we want to make sure you have all of the expertise you need to make those ideas a reality so with that let's talk about our first partner we have him today and that first partner is IBM when I talk about IBM I have a little bit of a nostalgia and that's because 16 years ago I was at IBM it was during my tenure at IBM where I deployed my first copy of Red Hat Enterprise Linux for a customer it's actually where I did my first professional Linux development 
as well you and that work on Linux it really was the spark that I had that showed me the potential that open source could have for enterprise customers now iBM has always been a steadfast supporter of Linux and a great Red Hat partner in fact this year we are celebrating 20 years of partnership with IBM but even after 20 years two decades I think we're working on some of the most innovative work that we ever have before so please give a warm welcome to Arvind Krishna from IBM to talk with us about what we are working on Arvind [Applause] hey my pleasure to be here thank you so two decades huh that's uh you know I think anything in this industry to going for two decades is special what would you say that that link is made right Hatton IBM so successful look I got to begin by first seeing something that I've been waiting to say for years it's a long strange trip it's been and for the San Francisco folks they'll get they'll get the connection you know what I was just thinking you said 16 it is strange because I probably met RedHat 20 years ago and so that's a little bit longer than you but that was out in Raleigh it was a much smaller company and when I think about the connection I think look IBM's had a long long investment and a long being a long fan of open source and when I think of Linux Linux really lights up our hardware and I think of the power box that you were showing this morning as well as the mainframe as well as all other hardware Linux really brings that to life and I think that's been at the root of our relationship yeah absolutely now I alluded to a little bit earlier we're working on some new stuff and this time it's a little bit higher in the software stack and we have before so what do you what would you say spearheaded that right so we think of software many people know about some people don't realize a lot of the words are called critical systems you know like reservation systems ATM systems retail banking a lot of the systems run on IBM software 
and when I say IBM software names such as WebSphere and MQ and db2 all sort of come to mind as being some of that software stack and really when I combine that with some of what you were talking about this morning along hybrid and I think this thing called containers you guys know a little about combining the two we think is going to make magic yeah and I certainly know containers and I think for myself seeing the rise of containers from just the introduction of the technology to customers consuming at mission-critical capacities it's been probably one of the fastest technology cycles I've ever seen before look we completely agree with that when you think back to what Paul talks about this morning on hybrid and we think about it we are made of firm commitment to containers all of our software will run on containers and all of our software runs Rell and you put those two together and this belief on hybrid and containers giving you their hybrid motion so that you can pick where you want to run all the software is really I think what has brought us together now even more than before yeah and the best part I think I've liked we haven't just done the product in downstream alignment we've been so tied in our technology approach we've been aligned all the way to the upstream communities absolutely look participating upstream participating in these projects really bringing all the innovation to bear you know when I hear all of you talk about you can't just be in a single company you got to tap into the world of innovation and everybody should contribute we firmly believe that instead of helping to do that is kind of why we're here yeah absolutely now the best part we're not just going to tell you about what we're doing together we're actually going to show you so how every once you tell the audience a little bit more about what we're doing I will go get the demo team ready in the back so you good okay so look we're doing a lot here together we're taking our software and we 
are begging to put it on top of Red Hat and openshift and really that's what I'm here to talk about for a few minutes and then we go to show it to you live and the demo guard should be with us so it'll hopefully go go well so when we look at extending our partnership it's really based on three fundamental principles and those principles are the following one it's a hybrid world every enterprise wants the ability to span across public private and their own premise world and we got to go there number two containers are strategic to both of us enterprise needs the agility you need a way to easily port things from place to place to place and containers is more than just wrapping something up containers give you all of the security the automation the deploy ability and we really firmly believe that and innovation is the path forward I mean you got to bring all the innovation to bear whether it's around security whether it's around all of the things we heard this morning around going across multiple infrastructures right the public or private and those are three firm beliefs that both of us have together so then explicitly what I'll be doing here number one all the IBM middleware is going to be certified on top of openshift and rel and through cloud private from IBM so that's number one all the middleware is going to run in rental containers on OpenShift on rail with all the cloud private automation and deployability in there number two we are going to make it so that this is the complete stack when you think about from hardware to hypervisor to os/2 the container platform to all of the middleware it's going to be certified up and down all the way so that you can get comfort that this is certified against all the cyber security attacks that come your way three because we do the certification that means a complete stack can be deployed wherever OpenShift runs so that way you give the complete flexibility and you no longer have to worry about that the development lifecycle 
is extended all the way from inception to production and the management plane then gives you all of the delivery and operation support needed to lower that cost and lastly professional services through the IBM garages as well as the Red Hat innovation labs and I think that this combination is really speaks to the power of both companies coming together and both of us working together to give all of you that flexibility and deployment capabilities across one can't can't help it one architecture chart and that's the only architecture chart I promise you so if you look at it right from the bottom this speaks to what I'm talking about you begin at the bottom and you have a choice of infrastructure the IBM cloud as well as other infrastructure as a service virtual machines as well as IBM power and IBM mainframe as is the infrastructure choices underneath so you choose what what is best suited for the workload well with the container service with the open shift platform managing all of that environment as well as giving the orchestration that kubernetes gives you up to the platform services from IBM cloud private so it contains the catalog of all middle we're both IBM's as well as open-source it contains all the deployment capability to go deploy that and it contains all the operational management so things like come back up if things go down worry about auto scaling all those features that you want come to you from there and that is why that combination is so so powerful but rather than just hear me talk about it I'm also going to now bring up a couple of people to talk about it and what all are they going to show you they're going to show you how you can deploy an application on this environment so you can think of that as either a cloud native application but you can also think about it as how do you modernize an application using micro services but you don't want to just keep your application always within its walls you also many times want to access different cloud 
services from this and how do you do that and I'm not going to tell you which ones they're going to come and tell you and how do you tackle the complexity of both hybrid data data that crosses both from the private world to the public world and as well as target the extra workloads that you want so that's kind of the sense of what you're going to see through through the demonstrations but with that I'm going to invite Chris and Michael to come up I'm not going to tell you which one's from IBM which runs from Red Hat hopefully you'll be able to make the right guess so with that Chris and Michael [Music] so so thank you Arvind hopefully people can guess which ones from Red Hat based on the shoes I you know it's some really exciting stuff that we just heard there what I believe that I'm I'm most excited about when I look out upon the audience and the opportunity for customers is with this announcement there are quite literally millions of applications now that can be modernized and made available on any cloud anywhere with the combination of IBM cloud private and OpenShift and I'm most thrilled to have mr. 
Michael Elder, a Distinguished Engineer from IBM, here with us today. You know, Michael, would you maybe describe for the folks what we're actually going to go over today? Absolutely. So when you think about how do I carry forward existing applications, and how do I build new applications as well, you're creating microservices that always need a mixture of data and messaging and caching. So this example application shows Java-based microservices running on WebSphere Liberty, each of which is leveraging things like IBM MQ for messaging, IBM Db2 for data, and Operational Decision Manager, all of which is fully containerized and running on top of the Red Hat OpenShift Container Platform. And in fact, we're even going to enhance Stock Trader to help it understand how you feel. Okay, hang on, I'm a little slow to the draw sometimes: you said we're going to have an application tell me how I feel? Exactly, exactly. Think about your enterprise apps: you want to improve customer service, and understanding how your clients feel can help you do that. Okay, well, I'd like to see that in action. All right, let's do it. Okay, so the first thing we'll do is actually take a look at the catalog. Here in the IBM Cloud Private catalog, this is all of the content that's available to deploy into this hybrid solution. So we see workloads for IBM, we'll see workloads for other open source packages, et cetera. Each of these is packaged up as a Helm chart that deploys a set of images certified for Red Hat Linux. And in this case, we're going to start with a simple example with Node.js. We'll click a few actions here, we'll give it a name. Now, do you have your console up over there? I certainly do. All right, perfect. So we'll deploy this into the namespace, and we'll deploy Node.js. Okay, all right, anything happening? Of course, it's come right up. And so, you know, what I really like about this is, regardless of whether I'm used to using IBM Cloud Private or I'm used to working with
OpenShift, the experience fits whatever tool I'm used to dealing with on a daily basis. But, I mean, I've got to tell you, we deploy Node.js ourselves all the time. What about, when was the last time you deployed MQ on OpenShift? Never? Maybe never. All right, let's fix that. So MQ obviously is a critical component for messaging for lots of highly transactional systems. Here we'll deploy it as a container on the platform. Now, I'm going to deploy this one again into the same namespace, I'm going to disable persistence, and for my application I'm going to need a queue manager, so I'm going to have it automatically set up my queue manager as well. Now, this will deploy a couple of things. What do you see? I see IBM MQ. All right, so there's your StatefulSet running MQ, and of course there are a couple of other components that get stood up as needed here, including things like credentials and secrets and the service, et cetera. But all of this is there, out of the box. Okay, so impressive, right? But what I'm really looking at is: how well is this running? What else does this partnership bring when I look at IBM Cloud Private? Well, that's a key reason why it's not just about IBM middleware running on OpenShift, but also IBM Cloud Private, because ultimately you need that common management plane. When you deploy a container, the next thing you have to worry about is: how do I get its logs, how do I manage its health, how do I manage license consumption, how do I have a common security plane? Right, so Cloud Private is that enveloping wrapper around IBM middleware to provide those capabilities in a common way. And so here we'll switch over to our dashboard. This is our Grafana and Prometheus stack that's deployed, also now on Cloud Private running on OpenShift, and we're looking at a different namespace: the stock trader namespace. We'll go back to this app here momentarily, and we can
see all the different pieces. What if you switch over to the stock trader workspace on OpenShift? Yeah, I think we might be able to do that here. Hey, there it is. All right, so what you're going to see here are all the different pieces of this app, right? There's Db2 over here, I see the portfolio Java microservice running on WebSphere Liberty, I see my Redis cache, I see MQ. All of these are the components we saw in the architecture picture a minute ago. Yeah, so this is really great. So maybe let's take a look at the actual application. I see we have a fine stock trader app here. Now, we mentioned understanding how I feel. Exactly. Well, I feel good that this is a brand new stock trader app, versus the one from ten years ago that it feels like we used forever. So the key thing is, this app is actually all of those microservices, in addition to things like business rules, et cetera, to help understand the loyalty program. And one of the things we could do here is actually enhance it with an AI service from Watson. This is Tone Analyzer; it helps me understand how that user actually feels, and we'll be able to go through and submit some feedback to understand that user. Okay, well, let's see if we can take a look at that. So, I tried to click on it. Clearly you're not very happy right now. Here, I'll do one quick thing over here. Go for it. We'll clear a cache for our sample lab. So look, you guys don't actually know this: Michael and I just wrote this Node.js front end backstage while Arvind was actually talking with Matt, and we deployed it in real time using the continuous integration and continuous delivery that we have available with OpenShift. Well, the great thing is, it's a live demo, right? So we're going to do it all live, all the time. All right, so you mentioned it'll tell me how I'm feeling, right? So if we look at it, right there it looks like they're pretty angry, probably because our cache hadn't been cleared before we started the demo. Maybe. Well, that would make me angry, but I should be happy
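The tone-to-loyalty decision the demo describes, Tone Analyzer scores feeding a business rule that may award a free trade, might look like the following minimal sketch. The tone names, thresholds, and function are invented for illustration; the real demo calls the Watson Tone Analyzer service and delegates the decision to IBM Operational Decision Manager rules.

```python
# Hypothetical sketch: map Tone Analyzer-style scores (0.0-1.0 per tone)
# to a loyalty action. Names and thresholds are invented for illustration.

def loyalty_action(tones: dict) -> str:
    """Pick a loyalty action from per-tone scores for the submitted feedback."""
    dominant = max(tones, key=tones.get)       # tone with the highest score
    if dominant == "anger" and tones[dominant] > 0.5:
        return "free-trade"                    # appease a clearly angry client
    if dominant == "joy":
        return "standard"                      # happy client, no intervention
    return "review"                            # ambiguous tone, route to a human

print(loyalty_action({"anger": 0.86, "joy": 0.04, "sadness": 0.10}))  # free-trade
```

In the demo, the "y'all done a real nice job" feedback flips the dominant tone from anger to something positive, which is why the free trade appears only on the angry submission.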
because, I mean, I have a lot of money. Well, it's more than I get today, for sure. But you know, again, I don't want to remain angry. So does Watson actually understand Southern? I know it speaks like eighty different languages, but... well, you know, I'm from South Carolina, so it'll understand South Carolina Southern; I don't know about your North Carolina Southern. All right, well, let's give it a go here. "Y'all done a real, real..." (no profanity now, this is live) "...y'all done a real, real nice job on this here fancy demo." All right. Hey, it likes me now. All right, cool. And the key thing is, just a quick note, right: it's showing you've got a free trade. So we can integrate those business rules and then decide, do I give you one free trade if you're angry, or give you more. It's all bringing it together into one platform, all running on OpenShift. Yeah, and I can see the possibilities, right? We've not only deployed services, but we're getting that feedback from our customers to understand how well the services are being used, and whether people are really happy with what they have. Hey, listen, Michael, this was amazing. I appreciate you joining us today. I hope you guys enjoyed this demo as well. So, all of you know who this next company is. As I look out through the crowd, based on what I can actually see with the sun shining down on me right now, I can see their influence everywhere. You know, sports is in our everyday lives, and these guys are equally innovative in that space as they are with hybrid cloud computing, and they use that to help maintain and spread their message throughout the world. Of course, I'm talking about Nike. I think you'll enjoy this next video about Nike and their brand, and then we're going to hear directly from Mike Wittig about what they're doing with Red Hat technology. "New developments in the top story of the day: the world has stopped turning on its axis. Top scientists are currently racing to come up with a solution. Everybody going this way... [Music] ...the wrong way." [Music] Please welcome Nike Vice
President of Infrastructure Engineering, Mike Wittig. [Music] Hi, everybody. Over the last five years at Nike, we have transformed our technology landscape to allow us to connect more directly to our consumers: through our retail stores, through Nike.com, and through our mobile apps. The first step in doing that was redesigning our global network to allow us to have direct connectivity into both Azure and AWS, in Europe, in Asia, and in the Americas. Having that proximity to those cloud providers allows us to make decisions about application workload placement based on our strategy, instead of having to design around latency concerns. Now, some of those workloads are very elastic, things like our sneakers app, for example, that needs to burst out during certain hours of the week. There are certain moments of the year when we have our high-heat product launches, and for those types of workloads we write that code ourselves, and we use native cloud services. But being hybrid has allowed us to not have to write everything that would go into that app, but rather just the parts that are in that consumer-facing experience. And there are other back-end systems, certain core functionalities like order management, warehouse management, finance, ERP, and those are workloads that are third-party applications that we host on RHEL. Over the last 18 months we have started to deploy certain elements of those core applications into both Azure and AWS, hosted on RHEL. At first we were pretty cautious, so we started with development environments, and what we realized after those first successful deployments is that the impact of those cloud migrations on our operating model was very small. And that's because the tools that we use for monitoring, for security, for performance tuning didn't change, even though we moved those core applications into Azure and AWS, because of RHEL under the covers. And getting to the point where we have that flexibility is a real enabler. As an infrastructure team, it allows us to just
be in the "yes" business. And it really doesn't matter where we want to deploy a different workload, with either cloud provider or on-prem, anywhere on the planet; it allows us to move much more quickly and stay much more connected to our consumers. So having RHEL at the core of our strategy is a huge enabler for that flexibility, and for allowing us to operate in this hybrid model. Thanks very much. [Applause] What a great example. It's really nice to hear a Nike story of using RHEL as that foundation to enable their hybrid cloud, to enable their infrastructure. And there's a lot to that story; we spent over ten years making it possible for RHEL to be that foundation, and we've learned a lot in that. But let's circle back for a minute to the software vendors, and what kicked off the day today with IBM. IBM has one of the largest software portfolios on the planet, but we learned through our journey on RHEL that you need thousands of vendors to be able to support you across all of your different industries, to solve any challenge that you might have, and you need those vendors aligned with your technology direction. This is doubly important when the technology direction is changing, like with containers. We saw that: two years ago Red Hat introduced our container certification program. Now, this program was focused on allowing you to identify vendors that had those shared technology goals. But identification by itself wasn't enough in this fast-paced world, so last year we introduced trusted content: we introduced our Container Health Index, publicly grading Red Hat's images that form the foundation for those vendor images. And that was great, because those of you that are familiar with containers know that you're taking software from vendors, you're combining that with software from companies like Red Hat, and you're putting those into a single container. For you to run those in a mission-critical capacity, you have to know that we can both stand by and support those deployments. But even trusted content
wasn't enough. So this year I'm excited that we are extending once again, to introduce trusted operations. Last week at KubeCon, the Kubernetes conference, we announced the Kubernetes Operator SDK. The goal of Kubernetes Operators is to allow any software provider on Kubernetes to encode how that software should run. This is a critical part of a container ecosystem: not just being able to find the vendors that you want to work with, not just knowing that you can trust what's inside the container, but knowing that you can efficiently run that software. Now, the exciting part is, because this is so closely aligned with the upstream technology, today we already have four partners that have functioning Operators: specifically Couchbase, Dynatrace, Crunchy, and Black Duck. So right out of the gate you have security, monitoring, and data store options available to you. These partners are really leading the charge in terms of what it means to run their software on OpenShift. But behind these four we have many more; in fact, this morning we announced over 60 partners that are committed to building Operators. They're taking their domain expertise, and the software that they wrote and that they know, and extending that into how you are going to run it on containers in environments like OpenShift. This really brings together the power of being able to find the vendors, being able to trust what's inside, and knowing that you can run their software as efficiently as anyone else on the planet. But instead of just telling you about this, we actually want to show you this in action. So why don't we bring back up the demo team to give you a little tour of what's possible with it. Guys? Thanks, Matt. So Matt talked about the concept of Operators, and when I think about Operators and what they do, it's taking OpenShift-based services and making them even smarter, giving you insight into how they do things. For example, had we had an Operator for the Node.js service that I was running earlier, it would have detected the problem
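The pattern Matt describes, an Operator encoding how software should run, rests on two Kubernetes pieces: a CustomResourceDefinition that registers a new resource type, and a custom resource whose spec the Operator continuously reconciles toward. A minimal sketch, with invented group and kind names (the real Couchbase Operator defines its own schema), might look like:

```yaml
# Hypothetical CRD an Operator would register (names are illustrative only).
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: couchbaseclusters.example.com
spec:
  group: example.com
  version: v1
  scope: Namespaced
  names:
    kind: CouchbaseCluster
    plural: couchbaseclusters
---
# The resource a user creates; the Operator watches it and drives the
# actual cluster state (pods, services, rebalancing) toward this spec.
apiVersion: example.com/v1
kind: CouchbaseCluster
metadata:
  name: demo-cluster
  namespace: demo
spec:
  size: 3                        # keep three members, replacing failed ones
  authSecret: demo-cluster-auth  # secret controlling access to the admin UI
```

The demo later shows exactly this loop in action: killing a member pod and watching the Operator detect the failure and rebalance the data.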
and fixed itself. But when I look at what Operators really do from an ecosystem perspective, for ISVs it's going to be a catalyst that allows them to make their services as manageable, as flexible, and as maintainable as any public cloud service, no matter where OpenShift is running. And to help demonstrate this, I've got my buddy Rob here. Rob, are we ready on the demo front? We're ready. Awesome. Now, I notice this screen looks really familiar to me, but I think we want to give folks here a dev preview of a couple of things. What we want to show you is the first substantial integration of the CoreOS Tectonic technology with OpenShift, and then the other thing is, we're going to dive a little bit more into Operators and their usefulness. So, Rob? Yeah, so what we're looking at here is the Service Catalog that you know and love in OpenShift, and we've got a few new things in here. We've actually integrated Operators into the Service Catalog, and I'm going to take this filter and give you a look at some of the ones we have today. So you can see we've got a list of Operators exposed, and this is the same way that your developers are already used to integrating with products: they're right in your catalog. And so now these are actually smarter services. But how can we maybe look at that? I mentioned that there's maybe a new view; I'm used to seeing this as a developer, but I hear we've got some really cool stuff if I'm the administrator of the console. Yeah, so we've got a whole new side of the console, for cluster administrators to get a look at the infrastructure, versus this dev-focused view that we're looking at today. So let's go take a look at it. The first thing you see here is a really rich set of monitoring and health status: we can see that we've got some alerts firing, our control plane is up, and we can even do capacity planning, anything that you need to do to maintain your cluster. Okay, so it's
not only for the services in the cluster, doing things that maybe I as a human operator would normally have to do, but this console view also gives me insight into the infrastructure itself, right? Like maybe the nodes, and maybe handling the security context. Is that true? Yes. So these are new capabilities that we're bringing to OpenShift: the ability to do node management, things like draining and unscheduling nodes for day-to-day maintenance, as well as handling security constraints and things like role bindings, for example. And the exciting thing about this is, this is a view that you've never been able to see before; it's cross-cutting, across namespaces. So here we've got a number of admin bindings, and we can see that they're connected to a number of namespaces, and these would represent our engineering teams, all the groups that are using the cluster. We've never had this view before; this is a perfect way to audit your security. You know, it actually is pretty exciting. I mean, I've been fortunate enough to be on the OpenShift team since day one, and I know that operations view is something that we've strived for, so it's really exciting to see that we can offer that now. But really, we want to get into what Operators do and what they can do for us, so maybe you show us what the Operator console looks like. Yeah, so let's jump on over and see all the Operators that we have installed on the cluster. You can see that these mirror what we saw in the Service Catalog earlier. Now, what we care about, though, is this Couchbase Operator, and we're going to jump into the demo namespace; as I said, a number of different teams can share a cluster, so we're going to jump into this namespace. Okay, cool. So now, what we want to show you guys, when we think about Operators: we're going to have a scenario here where there are going to be multiple replicas of a Couchbase service running in the cluster, and then we're going to have a
StatefulSet. And what's interesting is, those two things are not enough if I'm really trying to run this as a true service where it's highly available and persistent. There are things that, as a DBA, I would normally have to do if there's some sort of node failure, and so what we want to demonstrate to you is how Operators, combined with the power that was already within OpenShift, are coming together to keep this particular database service highly available, and something that we can continue using. So, Rob, what have you got there? Yeah, as you can see, we've got our Couchbase demo cluster running here, and we can see that it's up and running; we've got three members, and we've got an auth secret, which is what's controlling access to a UI that we're going to look at in a second. But what really shows the power of the Operator is looking at this view of the resources that it's managing: you can see that we've got a service that's doing load balancing into the cluster, and then, like you said, we've got our pods that are actually running the software itself. Okay, so that's cool. So maybe, for everyone's benefit, so we can show that this is happening live, could we bring up the Couchbase console, please, and keep up the OpenShift console, both side by side? There we go. What we see on the right-hand side is obviously the same console Rob was working in; on the left-hand side, as you can see by the actual names of the pods that are there, are the Couchbase services that are available. And so, Rob, maybe let's kill something; that's always fun to do on stage. Yeah, this is the power of the Operator: it's going to recover it. So let's browse on over here and kill node number two. We're going to forcefully kill this and kick off the recovery. And I see right away that, because of the integration we have with Operators, the Couchbase console immediately picked up that something has changed in the environment. Now, why is that important? Normally a
human being would have to get that alert, right? And so with Operators, we've taken that capability, and it has realized that there has been a new event within the environment. This is not something that Kubernetes or OpenShift by itself would be able to understand. Now, I'm presuming we're going to end up doing something else; it's not just seeing that it failed. And sure enough, there we go. Remember, when you have a stateful application, rebalancing that data and making it available is just as important as ensuring that the disk is attached. So, Rob, thank you so much for driving this for us today and being here. And not only Couchbase: as was mentioned by Matt, we also have Crunchy, Dynatrace, and Black Duck. I would encourage you all to go visit their booths out on the floor today and understand what they have available, which are all here as a dev preview, and then talk to the many other partners we have that are also looking at Operators. So again, Rob, thank you for joining us today. Matt, come on out. Okay, this is going to make for an exciting year of just what it means to consume container-based content. I think containers change how customers can get that content; I believe Operators are going to change how much they can trust running that content. Let's circle back to one more partner. This next partner has changed the landscape of computing, specifically with their work on hardware design and their work on core Linux itself. In fact, I think they've become so ubiquitous with computing that we often overlook the technological marvels they've been able to overcome. Now, for myself, I studied computer engineering, so in the late '90s I had the chance to study processor design. I actually got to build one of my own processors. In my case, it was the most trivial processor that you could imagine: an 8-bit subtractor, which means it can subtract two numbers 256 or smaller. But in that process, I learned the sheer complexity that
goes into processor design: things like wire placements so close that electrons can cut through the insulation and short, and then doing those wire placements across three dimensions, over multiple layers, jamming in as many logic components as you possibly can. And again, in my case, this was to make a processor that could subtract two numbers. But once I was done with this, the second part of the course was studying the Pentium processor. Now, I'll remember that moment forever, because looking at what the Pentium processor was able to accomplish, it was like looking at alien technology. And the incredible thing is that Intel, our next partner, has been able to keep up that alien-like pace of innovation twenty years later. So we're excited to have Doug Fisher here. Let's hear a little bit more from Intel. "For business: wide-open skies, an open mind. No matter the context, the idea of being open almost always suggests the potential of infinite possibilities. And that's exactly the power of open source. Whether it's expanding what's possible in business, in science and technology, or for the greater good, open source requires the involvement of a truly diverse community of contributors to scale and succeed, creating infinite possibilities for technology and, more importantly, for what we do with it." [Music] You know, at Intel one of our core values is risk-taking, and I'm going to go just a bit off script for a second and say: I was just backstage, and I saw a gentleman who looked a lot like Scott Guthrie, who runs all of Microsoft's cloud and enterprise efforts, wearing a red shirt, talking to Paul Cormier. I'm just saying. I don't know, maybe I need some more sleep, but that's what I saw. As we approach Intel's 50th anniversary, these words spoken by our co-founder Robert Noyce are as relevant today as they were decades ago: "Don't be encumbered by history; go off and do something wonderful." This is about breaking boundaries in technology; it's about innovation, and driving innovation in our industry. And at
Intel, we're constantly looking to break boundaries to advance our technology, and in the cloud and enterprise space that is no different. So I'm going to talk a bit about some of the boundaries we've been breaking and the innovations we've been driving at Intel, starting with our Intel Xeon platform. Our Intel Xeon Scalable platform, which we launched several months ago, marked the biggest and most advanced movement in this technology in over a decade. We were able to drive critical performance capabilities, unmatched agility, and the necessary and sufficient security into that platform. And I couldn't be happier with the work we do with Red Hat in ensuring that the hero features we drive into our platform are fully exposed to all of you, to drive that innovation, to go off and do something wonderful. Whether it's performance and agility features like our Advanced Vector Extensions, AVX-512, or Intel QuickAssist, those technologies are fully embraced by Red Hat Enterprise Linux; or whether it's security technologies like TXT, Trusted Execution Technology, they're fully incorporated. And we look forward to working with Red Hat on their next release to ensure that our advancements continue to be exposed in their platform. All these workloads that are driving the need for us to break boundaries in our technology are also driving more and more need for flexibility in computing, and that's why we're excited about Intel's family of FPGAs, to help deliver that additional flexibility for you to build those capabilities into your environment. We have a broad set of FPGA capabilities, from our power-efficient MAX product line all the way to our performance product line, the Stratix 10. And as I've been talking to customers, what's really exciting is to see the combination of our Intel Xeon Scalable platform used together with FPGAs, in addition to the acceleration development capabilities we've given to software developers,
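As a CPU-only illustration of the kind of workload being offloaded, here is a small sketch of compression with a round-trip integrity check, using Python's zlib as a stand-in codec. In the data-center example described next, this codec and the encryption work would run on the FPGA instead, which is precisely what frees the CPU cycles to meet the SLA.

```python
# CPU-side sketch of a compress-then-verify pipeline; zlib stands in for
# the accelerated codec that an FPGA offload would provide.
import zlib

payload = b"stock-trader feedback " * 1024   # illustrative data, ~22 KB

compressed = zlib.compress(payload, level=9)
assert zlib.decompress(compressed) == payload   # round-trip integrity check

ratio = len(compressed) / len(payload)
print(f"compressed to {ratio:.1%} of original size")
```

On highly repetitive data like this the ratio is tiny; the point of the offload is that real traffic at line rate would otherwise consume the very cycles the server needs for its primary workload.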
combining all that together to deliver better and better solutions, whether it's helping to accelerate data compression, pattern recognition, or data encryption and decryption. One of the things I saw in a data center recently was taking our Intel Xeon Scalable platform and utilizing the capabilities of an FPGA to do data encryption between servers behind the firewall. By using the FPGA to do that, they preserved those precious CPU cycles to ensure they delivered the SLA to the customer, yet provided more security for their data in the data center. One of the edges in cybersecurity is innovation, and the root of trust starts at the hardware. We recently renewed our commitment to security with our Security-First Pledge. There are really three elements to our Security-First Pledge. First is customer-first urgency: we have now completed the release of the microcode updates for protection on our Intel platforms going back nine-plus years since launch, to protect against things like the side-channel exploits. Second, transparent and timely communication: we are going to communicate timely and openly on our intel.com website, whether it's about our patches, performance, or other relevant information. And then, ongoing security assurance: we drive security into every one of our products. We redesigned a portion of our processor to add partition capabilities, adding additional walls between applications and user-level privileges to further secure that environment from bad actors. I want to pause for a second and thank everyone in this room involved in helping us work through our Security-First Pledge. This isn't something we do on our own; it takes everyone in this room to help us do that. The partnership and collaboration was second to none; it's the most amazing thing I've seen since I've been in this industry, so thank you. And we don't stop there; we continue to advance our security capabilities with cross-platform solutions. We recently had a discussion at RSA where we talked about Intel Security
Essentials, where we deliver a framework of capabilities that are in our silicon, available for our customers and the security ecosystem to innovate on the platform in a consistent way, delivering the assurance that those capabilities will be on that platform. We also talked about our Threat Detection Technology, something that we believe in and that we launched at RSA. It incorporates several elements. One is the ability to utilize our integrated graphics to accelerate some of the memory-scanning capabilities; we call this Accelerated Memory Scanning. It allows you to use the integrated graphics to scan memory, again preserving those precious cycles on the core processor. Microsoft adopted this, has incorporated it into their Defender product, and is shipping it today. We also launched our threat SDK, which allows partners like Cisco to utilize telemetry information to further secure their environments for cloud workloads. So we'll continue to drive differentiated experiences into our platform for our ecosystem to innovate on and deliver more and more capabilities. One of the key aspects you have to protect is data. By 2020, the projection is that 44 zettabytes of data will be available; by 2025, they project that will grow to 180 zettabytes. A massive amount of data, and what you want to do is drive value from that data. Driving value from that data is absolutely critical, and to do that you need to have that data closer and closer to your computation. This is why we've been working at Intel to break the boundaries in memory technology. With our investment in 3D NAND, we're reducing costs and driving up density in that form factor to ensure we get warm data closer to the computing. We're also innovating on form factors. We have here what we call our ruler form factor; it's designed to pack as much density as you can into a 1U rack. We're going to continue
to advance the capabilities to drive one petabyte of data, at low power consumption, into this ruler form factor SSD. So our innovation continues. The biggest breakthrough in memory media technology in the last 25 years was done by Intel: we call it our 3D XPoint technology. And our 3D XPoint technology is now going to be driven into SSDs, as well as into a persistent memory form factor that sits on the memory bus, giving you the speed and characteristics of memory as well as the characteristics of storage, a new tier of memory for developers to take full advantage of. And as you can see, Red Hat is fully committed to integrating this capability into their platform, so I want to thank Paul and team for engaging with us to make sure that it's available for all of you to innovate on. So we're breaking boundaries in technology across a broad set of elements that we deliver. That's what we're about, and we're going to continue to do that. Don't be encumbered by the past: your role is to go off and do something wonderful with that technology. All ecosystems are embracing this and driving it, including open source. Open source is a hub of innovation; it's been that way for many, many years. The innovation being driven in open source is starting to transform many, many businesses; it's driving business transformation. We're seeing this come to light in the transformation of 5G. Driving 5G into the network environment is a transformational moment, and open source is playing a pivotal role in it. With OpenStack, ONAP, OPNFV, and other open source projects we're contributing to and participating in, we're helping drive that transformation in 5G as you build software-defined networks on our barrier-breaking technology. We're also seeing this transformation rapidly occurring in the cloud and the enterprise. Cloud and enterprise are growing rapidly, and innovation continues with our work with
virtualization and KVM. We continue to be aggressive in adopting technologies to advance and deliver more capabilities in virtualization. As we look at this with Red Hat, we're now working on KubeVirt to help move virtualized workloads onto these platforms, so that they can be managed in an open platform environment; KubeVirt provides that, and between Intel, Red Hat, and the community, we're investing resources to make certain it comes to product. As containers, a critical feature in Linux, become more and more prevalent across the industry, the growth of container deployments continues at a rapid pace. One of the things we wanted to bring to that is the ability to provide isolation without impairing the flexibility, the speed, and the footprint of a container. With our Clear Containers efforts, combined with Hyper's runV, we were able to create what we call Kata Containers. We launched this at the end of last year. Kata Containers is designed to keep that container model available while adding elements like isolation. Both of these efforts need an orchestration and management capability, and Red Hat's OpenShift provides that capability for these workloads, whether containerized or using KubeVirt with virtual environments. Red Hat OpenShift is designed to take that commercial capability to market, and we've been working with Red Hat for several years now to develop what we call our Intel Select Solutions. Intel Select Solutions are Intel technology optimized for downstream workloads: as we see growth in a workload, we work with a partner to optimize a solution on Intel technology, to deliver the best solution that can be deployed quickly. Our effort here is to accelerate the adoption of these types of workloads in the market, working with Red Hat. So now we're going to be deploying an Intel Select Solution designed and optimized around Red Hat OpenShift. We expect the industry to start deploying this capability very rapidly, and I'm excited to announce
today that Lenovo is committed to be the first platform company to deliver this solution to market. The Intel Select Solution will be delivered to market by Lenovo. Now, I've talked about what we're doing in industry and how we're transforming businesses. Our technology is also utilized for greater good, and there's no better example of this than the work done with Dr. Stephen Hawking. It was a sad day on March 14th of this year when Dr. Stephen Hawking passed away, but not before Intel had a 20-year relationship with Dr. Hawking, driving breakthrough capabilities, innovating with him, and driving those robust capabilities to the rest of the world. One of our Intel engineers, an Intel Fellow, which is the highest technical achievement you can reach at Intel, got to spend 10 years with Dr. Hawking looking at innovative things they could do together with our technology and his breakthrough innovative thinking. So I thought it'd be great to bring up our Intel Fellow Lama Nachman to talk about her work with Dr. Hawking and what she learned in that experience. Come on up. [Music] Great to see you. Thanks. So we've been talking about breaking boundaries with Intel technology. Talk about how you used that in your work with Dr.
Hawking. Absolutely. So the most important part was to really make that technology contextually aware, because for people with disability, every single interaction takes a long time. So whether it was adapting, for example, the language model of his word predictor to understand whether he's going to talk to people or whether he's writing a book on black holes, or even understanding what specific application he might be using, and then making sure that we're surfacing only the actions that were relevant, to reduce that amount of interaction. So the tricky part is really to make all of that contextual awareness happen without totally confusing the user, because it's constantly changing underneath him. So how does your work involve open source? So, you know, the problem with assistive technology in general is that it needs to be tailored to the specific disability, which really makes it very hard and very expensive, because it can't utilize the economies of scale. So basically, with the system that we built, what we wanted to do is really enable unleashing innovation in the world, right? So you could take that framework and tailor it to a specific sensor, for example a brain-computer interface or something like that, where you could then support a different set of users. So that makes open source a perfect fit, because you could actually build and tailor it. And you spoke with Dr.
Hawking. What was his view of open source? Was it relevant to him? So yeah, Stephen was adamant from the beginning that he wanted a system to benefit the world and not just himself. So he spent a lot of time with us to actually build this system, and he was adamant from day one that he would only engage with us if we committed to actually open sourcing the technology. That's fantastic. And you had the privilege of working with him for 10 years. I know you have some amazing stories to share, so thank you so much for being here. Thank you so much. In order for us to scale, and that's what we're about at Intel, really scaling our capabilities, it takes this community. It takes this community of diverse capabilities. It takes diverse thought, and the diverse thought of Dr. Hawking couldn't be more relevant. But we're also proud at Intel about leading efforts of diverse thought, like Women in Linux and Women in Big Data and other areas like that, where Intel feels that that diversity of thinking and engagement is critical for our success. So as we look at Intel, not being encumbered by the past but breaking boundaries to deliver the technology that you all will go off and do something wonderful with, we're going to remain committed to that, and I look forward to continuing to work with you. Thank you, and have a great conference. [Applause] Thank you, Doug. Now we have one more customer story for you today. When you think about customers' challenges in the technology landscape, it is hard to ignore the public cloud these days. Public cloud is introducing capabilities that are driving the fastest rate of innovation that we've ever seen in our industry. And our next customer actually had that same challenge. They wanted to tap into that innovation, but they were also making bets for the long term. They wanted flexibility in providers, and they had to integrate with the systems that they already have. And they have done a phenomenal job in executing on this. So please give a warm welcome to Kerry Pierce from Cathay
Pacific. Kerry, come on up. Thanks very much, Matt. Hi, everyone. Thank you for giving me the opportunity to share a little bit about our cloud journey. Let me start by telling you a little bit about Cathay Pacific. We're an international airline based in Hong Kong, and we serve a passenger and cargo network to over 200 destinations in 52 countries and territories. In the last seventy years, we've made substantial investments to develop Hong Kong as one of the world's leading transportation hubs. We invest in what matters most to our customers, to you, focusing on our exemplary service and our great product, both on the ground and in the air. We're also investing in expanding our network beyond our multiple frequencies to financial districts such as Tokyo, New York and London, and we're connecting Asia and Hong Kong with key tech hubs like San Francisco, where we have multiple flights daily. We're also connecting Asia and Hong Kong to places like Tel Aviv and our upcoming destination of Dublin. In fact, 2018 is actually going to be one of our biggest years in terms of network expansion and capacity growth, and in September we will be launching our longest flight, from Hong Kong direct to Washington, DC, using a state-of-the-art Airbus A350-1000 aircraft. So that's a little bit about Cathay Pacific. Let me tell you about our journey through the cloud. I'm not going to go into technical details; there are far smarter people out in the audience who can do that for you. I'll just focus a little bit on what we were trying to achieve and the people side of it that helped us get there. A couple of years ago we had, no doubt, the same issues that many of you do. I don't think we're unique. We had a traditional, on-premise, non-standardized, fragile infrastructure. It didn't meet our infrastructure needs and it didn't meet our development needs. It was costly to maintain, it was costly to grow, and it really inhibited innovation. Most importantly, it slowed
the delivery of value to our customers. At the same time, you had the hype of cloud over the last few years: cloud this, cloud that, cloud's going to fix the world. We were really keen on making sure we didn't get wound up in that, so we focused on what we needed. We started bottom-up with a strategy. We knew we wanted to be cloud agnostic. We wanted to have active-active on-premise data centers with a single network and fabric, and we wanted public clouds that were trusted and acted as an extension of that environment, not independently. We wanted to avoid single points of failure, and we wanted to reduce interdependencies by having loosely coupled designs. And finally, we wanted to be scalable; we wanted to be able to cater for sudden surges of demand. In a nutshell, we kind of just wanted to make everything easier. At a management level, we wanted to be a broker of services. So not one size fits all, because that doesn't work, but also not one of everything. We wanted to standardize on a pragmatic range of services that met our development and support needs and worked in harmony with our public cloud, not against it. So we started on a journey with Red Hat. We implemented Red Hat CloudForms and Ansible to manage our hybrid cloud. We also implemented Red Hat Satellite to maintain and manage our environment. We built a Red Hat OpenStack on-premise environment to give us an alternative, and at the same time we migrated a number of customer applications to a production public cloud OpenShift environment. But it wasn't all Red Hat. You'll have heard today that Red Hat fits within an overall ecosystem. We looked at a number of third-party tools and services and looked at developing those into our core solution. I think at last count we had tried and tested somewhere past eighty different tools, and at the moment we still have around 62 in our environment that helped us through that journey. But let me put the technical solution aside a little bit, because it doesn't matter how good your technical solution
is if you don't have the culture and the people to get it right. As a group, we needed to be aligned for delivery, and we focused on three core behaviors: accountability, agility and collaboration. Now, I was really lucky. We've got a pretty fantastic team, for whom that was actually pretty easy. But again, don't underestimate the importance of getting the culture and the people right, because all the technology in the world doesn't matter if you don't have that right. I asked the team what we did differently, because in our situation we didn't go out and hire a bunch of new people, and we didn't go out and hire a bunch of consultants. We had the staff that had been with us for 10, 20 and in some cases 30 years. So what did we do differently? It was really simple: we just empowered and supported our staff. We knew they were the smart ones. They were the ones dealing with the legacy environment, and they had the passion to make the change. So as a team, we encouraged suggestions and contributions from our overall IT community, from the bottom up. We started small, we proved the case, we told the story, and then we got buy-in, and only then did we implement wider. The benefits for our staff were a huge increase in staff satisfaction, a reduction in application and platform outage support incidents, risk-free and failsafe application releases, work-life balance (no more midnight deployments), and our application and infrastructure people could really focus on delivering customer value, not on firefighting. And for our end customers, the people that travel with us, it was really, really simple: we could provide a stable service that allowed for faster releases, which meant we could deliver value faster. In terms of stats, we migrated 16 production B2C applications to a public cloud OpenShift environment in 12 months. We decreased provisioning time from weeks, or occasionally months when we were waiting for hardware, to two minutes, and we had a hundred percent availability of our key
customer-facing systems. But most importantly, it was about people. We'd built a culture, a culture of innovation, that was built on a foundation of collaboration, agility and accountability, and that permeated throughout the IT organization, not just those people that were involved in the project. Everyone within IT could see what good looked like and could see what it looked like in terms of working together, and that was a key foundation for us. The future for us? You will have heard today that everything's changing. So we're going to continue to develop our open hybrid cloud, onboard more public cloud service providers, continue to build more modern applications and leverage the emerging technology, integrate and automate everything we possibly can, and leverage more open source products with the great support of the open source community. So there you have it. That's our journey. I think we succeeded by not being overawed and by starting with the basics. The technology was key, obviously; it's a core component. But most importantly, it was the way we approached our transition. We had a clear strategy that was actually developed bottom-up by the people that were involved day to day, and we empowered those people to deliver, and that provided benefits to both our staff and our customers. So thank you for giving me the opportunity to share, and I hope you enjoy the rest of the summit. [Applause] Thanks, Kerry. What a great story, what a great customer story to close on. And we have one more partner to come up, and this is a partner that all of you know: Microsoft. Microsoft has gone through an amazing transformation, and we've built an incredibly meaningful partnership with them, all the way from our open source collaboration to what we do on the business side. We started with support for Red Hat Enterprise Linux on Hyper-V, and that was truly just the beginning. Today we're announcing one of the most exciting joint product offerings on the market today. Let's please give a
warm welcome to Paul Cormier and Scott Guthrie to tell us about it. Guys, come on out. You know, Scott, welcome. Welcome to the Red Hat Summit. Thanks for coming, really appreciate it. Great to be here. You know, it surprised a lot of people when we published the list of speakers and you were on it, and now you and I are on stage here. It's a really, really important and exciting new partnership for us. We've worked together a long time, from the hypervisor up to common support, and now around hybrid cloud. Maybe, from your perspective, a little bit of what led us here? Well, you know, I think the thing that's really led us here is customers. At Microsoft we've been on kind of a transformation journey the last several years, where we really try to put customers at the center of everything that we do. And as part of that, you quickly learn from customers, including everyone here, that you've got a hybrid estate, both in terms of what you run on premises, where there's a lot of Red Hat software and a lot of Microsoft software, and then, as they take the journey to the cloud, a hybrid estate in terms of how do you run that between on-premises and a public cloud provider. And so I think the thing that both of us recognized, and certainly our focus here at Microsoft, has been how do we really meet customers where they're at and where they want to go, and make them successful in that journey. And it's been fantastic working with Paul and the Red Hat team over the last two years in particular. We've spent a lot of time together, and we're really excited about the journey ahead. So maybe you can share a bit more about the announcement we're about to make today? Yeah, so it's a really exciting announcement, and really, I think, the first of its kind, in that we're delivering a Red Hat OpenShift on Azure service that we're jointly
developing and jointly managing together. So this is different than a traditional offering where it's just running inside VMs and it's two vendors working separately; this is really a jointly managed service that we're providing, with full enterprise support and a full SLA, where there's a single throat to choke, if you will, although collectively it's both our throats to choke in terms of making sure that it works well. And it's really uniquely designed around this hybrid world, in that it will support both Windows and Linux containers, and it's the same OpenShift that runs both in the public cloud on Azure and on-premises. It's something that we hear a lot about from customers. I know there are a lot of people here that have asked both of us for this, and we're super excited to be able to talk about it today, and we're going to show off the first demo of it in just a bit. Okay, well, I'm going to ask you to elaborate a bit more about how this fits into the bigger Microsoft picture, and I'll get out of your way. So thanks again. Thank you for coming. Here we go. Thanks, Paul. So I thought I'd spend just a few minutes talking about some of the work that we're doing with Microsoft Azure and the overall Microsoft cloud, then go deeper into the new offering that we're announcing today together with Red Hat and show a demo of it actually in action in a few minutes. At a high level, in terms of some of the work that we've been doing at Microsoft the last couple of years, it's really been around this journey to the cloud that we see every organization going on today. And specifically with Microsoft Azure, we've been providing a cloud platform that delivers the infrastructure, the application and the core computing needs that organizations have as they want to take advantage of what the cloud has to offer. And in terms of our focus with Azure, we deliver
lots and lots of different services and features, but we've focused in particular on four key themes, and we see these four key themes aligning very well with the journey Red Hat has been on, and it's partly why we think the partnership between the two companies makes so much sense. For us, the thing that we've been really focused on with Azure has been, first, how do we deliver a really productive cloud, meaning how do we enable you to take advantage of cutting-edge technology and how do we accelerate the successful adoption of it, whether it's around the integration of managed services that we provide, in the application space, the data space, the analytics and AI space, but also in terms of the end-to-end management and development tools and how all those services work together so that teams can adopt them and be super successful. We also deeply believe in hybrid, and believe that the world is going to be a multi-cloud and multi-distributed world, and how do we enable organizations to take the existing investments that they already have and easily integrate them with a public cloud environment and get immediate ROI on day one, without having to rip and replace tons of solutions. We're moving very aggressively in the AI space and are looking to provide a rich set of AI services, both finished AI models, things like speech detection, vision detection, object motion, etc., that any developer, even a non-data scientist, can integrate to make applications smarter, and then a rich set of AI tooling that enables organizations to build custom models and integrate them as part of their applications and with their data. And then we invest very, very heavily in trust. Trust is at the core of Azure, and we now have more compliance certifications than any other cloud provider, we run in more countries than any other cloud provider, and we
really focus on unique promises around data residency, data sovereignty and privacy that are really differentiated across the industry. In terms of where Azure runs today, we're in 50 regions around the world. A region for us is typically a cluster of multiple data centers that are grouped together, and you can see we're pretty much on every continent today, with the exception of Antarctica. And the beauty is that you're going to be able to take the Red Hat OpenShift service and run it on Azure in each of these different locations and really have a truly global footprint as you look to build and deploy solutions. We've seen this focus on productivity, hybrid, intelligence and trust really resonate in the market, and about 90 percent of Fortune 500 companies today are deployed on Azure. You heard Nike talk a little bit earlier this afternoon about some of their journey as they've moved to the public cloud, and this is a small logo wall of just a couple of the companies that are on Azure today. What I'll do, actually, even before we dive into the OpenShift demo, is just show a quick video about one of those companies; there are actually several people from that organization here today. Deutsche Bank has been working with both Microsoft and Red Hat for many years, with Microsoft on the Azure side and with Red Hat both on the RHEL side and on the OpenShift side, and it's one of the customers that have helped bring the two companies together to deliver this managed OpenShift service on Azure. So I'm just going to play a quick video of some of the folks at Deutsche Bank talking about their experiences and what they're trying to get out of it. If we could roll the video, that'd be great. Technology is at the absolute heart of Deutsche Bank. We recognized that the cost of running our infrastructure was particularly high. There was an enormous amount of underutilization. We needed a platform which was open to polyglot architecture, supporting
any kind of application workload across the various business lines of the firm. We analyzed over 60 different vendor products, and we ended up with Red Hat OpenShift. I'm super excited about Microsoft supporting Linux so strongly and adopting a hybrid approach. We chose Azure because Microsoft was the ideal partner to work with on constructs around security, compliance and business continuity, and Azure is in all the places geographically that we need to be. We now have applications able to go from a proof of concept to production in three weeks; that is already breaking records. OpenShift, with containers, allows us to apply the same sets of processes and automation across a wide range of our application landscape. On any given day, we run between seven and twelve thousand containers across three regions. We're starting to see huge levels of cost reduction because of the level of multi-tenancy that we can achieve through containers. OpenShift gives us an abstraction layer which allows us to move our applications between providers without having to reconfigure or recode those applications. What's really exciting for me about this journey is the way that both Red Hat and Microsoft have embraced not just what we're doing, but what each other are doing, and have worked together to make OpenShift a first-class citizen with Microsoft. [Applause] In terms of what we're announcing today: it's a new, fully managed OpenShift service on Azure, and it's really the first fully managed service provided end-to-end across any of the cloud providers. It's jointly engineered, operated and supported by both Microsoft and Red Hat, and that means one service, one SLA, and both companies standing firmly behind it, focusing on how we make customers successful. And as part of that, we're providing enterprise-grade, not just SLAs, but also support and integration testing, so you can take advantage of all your RHEL and Linux-based containers and all of
your Windows Server-based containers, and run them in a joint way with a common management stack, taking advantage of one service to get maximum density, get maximum code reuse, and take advantage of a containerized world in a better way than ever before. And that customer focus is very much at the center of what both companies are centered around. So what I thought would be fun, rather than just talking about OpenShift, is to show off a little bit of the journey in terms of what this move to take advantage of it looks like. So I'd like to invite Brendan and Chris on stage, who are going to show off a live demo of OpenShift on Azure in action and really walk through how to provision the service and how to start taking advantage of it using the full OpenShift ecosystem. So please welcome Brendan and Chris, who are going to join us on stage for a demo. Thanks, Scott. Thanks, man. It's been a good afternoon. So, what we want to get into right now: first I'd like to thank Brendan Burns for joining us from Microsoft Build. It's a busy week for you; I'm sure you're on stage there a few times as well. You know, what I like most about what we just announced is not only the business and technical aspects, but the operational aspect: the uniqueness, the expertise that Red Hat has for running OpenShift combined with the expertise that Microsoft has within Azure, and customers are going to get this joint offering, if you will, with Red Hat OpenShift on Microsoft Azure. And so, with that, Brendan, I really appreciate you being here. Maybe talk to the folks about what we're going to show? Yeah, so we're going to take a look at what it looks like to deploy OpenShift onto Azure via the new OpenShift service, and the real selling point, the really great part of this, is the deep integration with the cloud-native Azure API. So the same tooling that you would use to create virtual machines, to create disks, to create
databases, is now the tooling that you're going to use to create an OpenShift cluster. So to show you this, first we're going to create a resource group. We're going to create that resource group in East US using the az tool, the Azure command-line tooling. A resource group is sort of a folder on Azure that holds all of your stuff, so that's going to come back in a second. I've created my resource group in East US, and now we're going to use that exact same tool, calling into Azure APIs, to provision an OpenShift cluster. So here we go: we have az openshift, that's our new command-line tool, putting it into that resource group in East US. All right, so it's going to take a little bit of time to deploy that OpenShift cluster. It's doing a bunch of work behind the scenes, provisioning all kinds of resources as well as credentials to access a bunch of different Azure APIs. So are we actually able to see this? Yeah, in just a second we can cut over to that resource group and do a reload. So, Brendan, while we're loading, the beauty of what the teams have been doing together already is the fact that now OpenShift is a first-class citizen, as it were. Yeah, absolutely, within the Azure CLI. So I presume not only can I do a deployment, but I can do things like scale and check my credentials and pretty much everything that I could do with any other service? That's exactly right. So we can... anything that you were used to doing via the... my computer has locked up. There we go, the demo gods are totally with me. Oh, there we go. Oh no, I hit reload. Yeah, that was just evil timing on the house. This is another use for operators, as we talked about earlier today. That's right. My dashboard should be coming up. Do I dare click on something? That's awesome. It was there. There we go. Good job. So what's really interesting about this: I've also heard that it deploys in as little as five to six minutes, which is really
good for customers who want to get up and running with it. But all right, there we go. There it is. We managed to make it; see, that shows that it's real, right? You can see the sweat coming off of me there. But there you can see the various resources that are being created in order to create this OpenShift cluster: virtual machines, disks, all of the pieces, provisioned for you automatically via that one single command-line call. Now, of course, it takes a few minutes to create the cluster, so in order to show the other side of that integration, the integration between OpenShift and Azure, I'm going to cut over to an OpenShift cluster that I already have created. All right, so here you can see my OpenShift cluster that's running on Microsoft Azure. I'm going to log in over here, and the first sign of the integration you're going to see is that it's actually using my credentials, my login, going through Active Directory and any corporate policies that I may have around smart cards, two-factor auth, anything like that, to authenticate me to that OpenShift cluster. So I'll accept that it can access my account, and now we're going to load up the OpenShift web console. So now this looks familiar to me. Oh yeah, if anybody's used OpenShift out there, this is the exact same console. What we're going to show, though, is how this console, via the Open Service Broker and the Open Service Broker implementation for Azure, integrates natively with OpenShift. All right, so we can go down here, and we can actually see... I want to deploy a database. I'm going to deploy Mongo as the key-value store that I'm going to use. But as we talk about management and having an OpenShift cluster that's managed for you, I don't really want to have to manage my database either. So I'm actually going to use Cosmos DB. It's a native Azure service, a multi-model database that offers me the ability to access my data in a variety of different formats, including MongoDB, fully managed, replicated around the world,
a pretty incredible service. So I'm going to go ahead and create that. So now, Brendan, what's interesting to me is that we talked about the operational aspects, and clearly it's not you and I running the clusters, but you do need that way to interface with it. And so when customers deploy this, all of this is out of the box; there's no additional componentry. That's right, this is what you get when you use that tool to create that OpenShift cluster. This is what you get, with all of that integration. Okay, great. I'll step through here and go ahead... I don't have any IP ranges. There we go. All right, and we create that binding. And so now, behind the scenes, OpenShift is integrated with the Azure APIs, with all of my credentials, to go ahead and create that distributed database. Once it's done provisioning, all of the credentials necessary to access the database are automatically populated into Kubernetes, available for me inside of OpenShift via service discovery, to access from my application without any further work. So I think that really shows not only the power of integrating OpenShift with the Azure APIs, but the power of integrating the Azure APIs inside of OpenShift, to make a truly seamless experience for managing and deploying your containers across a variety of different platforms. Yeah. Hey, Brendan, this is great. I know you've got a flight to catch, because I think you're back on stage in a few hours, but really appreciate you joining us today. Absolutely. I look forward to seeing what else we do. Yeah, absolutely. Thank you so much. Thanks, guys. Matt, you want to come back on up? Thanks a lot, guys. If you have never had the opportunity to do a live demo in front of 8,000 people, it'll give you a new appreciation for standing up there and doing it, and that was really good. You know, every time I get the chance to take a step back and think about the technology that we have at our command today, I'm in awe. Just
the progress over the last 10 or 20 years is incredible, and to think about what might come in the next 10 or 20 years really is unthinkable. Forget 10 years; think what might come in the next five years, even the next two years. This can create a lot of uncertainty about what's to come, but I am certain about one thing, and that is: if ever there was a time when any idea is achievable, it is now. Just think about what you've seen today, every aspect of open hybrid cloud. You have the world's infrastructure at your fingertips, and it's not stopping. You've heard about the innovation of open source and how fast that's evolving and improving this capability. You've heard this afternoon from an entire technology ecosystem that's ready to help you on this journey. And you've heard from customer after customer that have already started their journey, and the successes that they've had. One of the neat parts about this event: later this week, you will actually get to put your hands on all of this technology together in our live audience demo. You know, this is what Summit's all about for us. It's a chance to bring together the technology experts that you can work with to help formulate how to pull off those ideas. We have the chance to bring together technology experts, our customers and our partners, and really create an environment where everyone can experience the power of open source, that same spark that I talked about when I was at IBM, where I understood the potential that open source had for enterprise customers. We want to create the environment where you can have your own spark, where you can have that same inspiration. In tomorrow's keynote, actually, you will hear a story about how open source is changing medicine as we know it and literally saving lives. It is a great example of expanding the ideas that might be possible that we came into this event with. So let's make this the best Summit ever. Thank you
very much for being here let's kick things off right head down to the Welcome Reception in the expo hall and please enjoy the summit thank you all so much [Music] [Music]
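The credential flow in Brendan's demo, where provisioning drops database credentials into Kubernetes and the application picks them up via service discovery, typically surfaces to application code as environment variables or mounted files. A minimal sketch of the application side, assuming hypothetical variable names (the demo's actual secret keys are not shown in the transcript):

```python
import os

def build_connection_url(env=None):
    """Assemble a database URL from credentials injected by the platform.

    In the demo, the provisioned database's credentials land in a
    Kubernetes Secret; a pod typically exposes them to the app as
    environment variables like the (hypothetical) ones below.
    """
    env = os.environ if env is None else env
    user = env.get("DB_USER", "app")
    password = env.get("DB_PASSWORD", "")
    host = env.get("DB_HOST", "localhost")
    port = env.get("DB_PORT", "5432")
    name = env.get("DB_NAME", "app")
    auth = f"{user}:{password}@" if password else f"{user}@"
    return f"postgresql://{auth}{host}:{port}/{name}"
```

No further wiring is needed in the application itself; in this model the platform owns creating, populating, and rotating the secret.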

Published Date : May 9 2018


Bobby Allen, CloudGenera | CUBE Conversations



>> Speaker: From the SiliconANGLE Media Office in Boston, Massachusetts, it's TheCube. Now, here's your host, Stu Miniman. >> I'm Stu Miniman, and this is a special Cube conversation here in our Boston area studio. Happy to welcome to the program Bobby Allen, who's the chief technology officer and chief evangelist at CloudGenera. Bobby, thanks so much for joining us. >> Thank you, Stu, thanks for having us. >> Alright, so Bobby, we had a great conversation with your CEO Brian Kelly talking about CloudGenera, helping customers with, in my own words I'll say, this great mess of the cloud and service providers and data centers, where things are changing all the time. And here's a great tool to help people understand this. Now, I've had people asking me for years, it's like, "Hey, I've got my app, or I'm building a new app, where do I do this?" And I've always said, well, there are certain things that are really easy. If it's going to be up for a really short period of time, you're not going to spend the time to rack and stack and build it yourself. Hey, cloud was great for that. And on the other end of the spectrum, now, the public clouds might disagree, but if I have something that's just cooking along and it's not changing, the rent-versus-buy analogy once again goes towards kind of doing it in a hosted or my own data center. But there's a whole lot of stuff in the middle that is, well, it depends. There's this uncertainty in the world, and that's where you live, so bring us in a little bit as to some of the thinking as to how CloudGenera helps, and let's get into it. >> That's a great question, Stu. So, we feel like the market has actually changed, in the sense that information is coming faster and faster; there's more and more information that people are inundated and honestly overwhelmed by.
And so when people ask us for more information, we typically tell them you don't need more information; in our opinion, you really want to move from information to clarity to insight: "What should I actually do?" And so, to go back to the real estate analogy you talked about, I think people think of cloud as a house. Cloud is at least a neighborhood, if not a state, and you need to figure out where you should live within that state or that neighborhood. So, let's take AWS for example. AWS is a vendor that has many, many, many services, but also different flavors of how you can run things. So before, people would look at CloudGenera as a company that can compare different execution venues: do I want to run this in Amazon or Azure or Google? But we increasingly get people that want to understand which flavor of Amazon they should do. Do I do the multi-tenant, do I do the dedicated, do I do the VMware Cloud on AWS? And those are all valid choices for us. And so for us, we don't really care where a customer wants to evaluate. Let's define what you need and map that to the relevant or interesting options in the marketplace, and then take the guesswork out of it, so you have some data-driven decision making. >> Yeah, I love that, because I have been covering Amazon for many years, and boy, I go to the show and it was like, "Alright, I thought I got my arms around Aurora, and now there's the serverless-based Aurora, and there's 17 different database options inside of Amazon, so, oh boy," and then, right, let's not even talk about all the compute instances. I think it's more complicated to pick a compute instance in the public cloud than it is if I was going to put something in my own rack these days. >> Bobby: Yes, yes it is. >> So, that being said, I want to, for a second before we talk about the public cloud, talk to your viewpoint: how are you helping customers in kind of the service provider to data center world?
And that's a complicated and, I have to say, very fragmented space. >> It is. >> How does CloudGenera help there? >> So CloudGenera deals with the consumers, the ones who actually want to benefit from the technology themselves, but also with the service provider side. So if you're Joe's Cloud Shack, or a regional cloud provider, or a VMware service provider, anyone who is offering technology services, you may want to know, number one, how do you compare with the large hyperscale providers, and then number two, how can you showcase your value proposition next to those? So maybe Amazon and Azure and Google are on the top of people's minds, but how do your services compare to those? So in our platform you can actually show a Joe's Cloud Shack next to an Amazon, next to something like a Synergy or SimpliVity. So options inside and outside the data center, both the ones that you thought about and the ones that you didn't, can all be presented in a fair way, so you take the guesswork out of how they compare to each other. >> Yeah, it's interesting. One of the big raging debates we've had out there is, "Oh, I wish I had a cloud concierge." And it's like, well, it's not a utility, and therefore I could stand up something in my data center, or I could put a PaaS in my environment, and there's so many layers in the stack and so much nuance that it's the paradox of choice, I think, that most people have. So, maybe walk us through a customer. When do they tend to come to you, what are some of those patterns, and what are the things that really get accelerated when they use a platform like yours? >> So, some of the things that people think about are: they have workloads that they want to move, maybe they want to exit a data center, or, what really happens commonly, there's a new leader in town. A new CIO comes in: "We're going to have a cloud-first strategy." And we're not opposed to that.
The biggest principle for us is: do you understand the why of what you're doing, and whether this is the right time, the when? Because if you don't do the right thing at the right time for the right reason, there's a hole in your strategy. And so what we look at is, okay, what is it that you're trying to move or change or transform, what are the things that are interesting to you or strategic, and then let's look at putting those things together. Now, when you define what you need, you shouldn't define what you need in terms of where you're going, right? I don't decide my venue based on the airline I want to get on; I decide I need to be in Vegas for this conference at this time, and then I pick the airline that can get me there on time for the best price, hopefully. And we take that same approach when it comes to helping customers. Let's talk about what you need in a vendor-agnostic way that's divorced from the options in the market, because your needs are not impacted by Amazon or Azure or HPE or Dell. And so then, after we define your expectations and your requirements, let's map those to the things that you're curious about, or that your leadership says are strategic, and then let's make sure that we understand what we call the concept of logical equivalence. The spirit of your requirement may be called x in one provider and y in a different one. Are they really the same, a tomato/to-mah-to thing, or are they really two different types of, excuse me, services or entities altogether? So let's evaluate, then, how well your needs are met by these different vendors. Is it just a semantics issue, or are these really two different things? Yes, they're both different types of block storage, but the requirements are different. The latency is different, the redundancy is different, the pricing is certainly different. How close are these things to meeting the spirit of what you asked for?
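Bobby's method, define requirements vendor-agnostically, then score how well each venue meets their spirit, can be sketched as a simple weighted match. The requirement names, weights, and capability scores below are invented for illustration; CloudGenera's actual model is certainly far richer.

```python
# Hypothetical vendor-agnostic requirements: name -> business weight.
REQUIREMENTS = {
    "block_storage_latency": 2.0,
    "hipaa_compliance": 3.0,
    "managed_database": 1.0,
}

# How fully each venue satisfies each requirement, on a 0..1 scale.
VENUES = {
    "ProviderA": {"block_storage_latency": 0.9, "hipaa_compliance": 1.0, "managed_database": 1.0},
    "ProviderB": {"block_storage_latency": 1.0, "hipaa_compliance": 0.0, "managed_database": 0.5},
}

def score(venue):
    """Weighted fraction of the requirements a venue meets (0..1)."""
    caps = VENUES[venue]
    total_weight = sum(REQUIREMENTS.values())
    met = sum(w * caps.get(req, 0.0) for req, w in REQUIREMENTS.items())
    return met / total_weight

def rank():
    """Venues ordered best fit first."""
    return sorted(VENUES, key=score, reverse=True)
```

In this toy example the HIPAA gap sinks ProviderB despite its faster storage, which is exactly the semantics-versus-substance distinction behind the logical-equivalence idea.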
And the other part, too, that I'll just offer, that we see a lot, is people are overly concerned about cost. How much does it cost? And we feel like the problem is not a problem of cost, it's a problem of value. People go to look for cost calculators, but really what they need are value calculators, right? Take a Porsche and an F-150. An F-150 is a bigger vehicle, but the Porsche is more expensive for a reason. There's a different experience than just space. And so the reality is people don't mind paying more if they know what they're paying for. Transparency is really the key. >> On that cost piece though, how much of the total equation do you look at? So I think about, in my data center there's everything like the power, space, and all those pieces; if I go to a service provider, if it's my stuff, if I still have to manage it, versus some of the operational expenses. How much of kind of the, I hate to say total cost, but how much of that spread do you look at? >> We try to be pretty comprehensive, Stu. So, if you go to a public provider, for example, you're not paying for power, but you're paying a certain hourly charge, typically on an (mumbling) basis, that accommodates a lot of the things that I'll say are platform or hypervisor and below. Now, where I think a lot of the other people that are in this space maybe fall short, in our opinion, is that they don't look at things above the hypervisor. If I move a workload to an AWS, they may have some great services I can take advantage of. The labor and the licensing and the other considerations that we consider to be carryover costs are things that I still need to accommodate. If I put a workload in Amazon, someone still needs to patch the OS, maybe manage the database, maybe audit security. Those are things that have labor and licensing and software considerations that we try to look at.
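The "above the hypervisor" point, that per-hour infrastructure pricing misses labor, licensing, and other carryover costs, can be made concrete with a toy annual cost model. Every figure below is invented for illustration, not a real quote from either venue.

```python
HOURS_PER_YEAR = 24 * 365

def annual_tco(infra_per_hour, labor_per_year, licensing_per_year,
               managed_services_per_year=0.0):
    """Toy total-cost-of-ownership model: infrastructure is one line item."""
    infra = infra_per_hour * HOURS_PER_YEAR
    return infra + labor_per_year + licensing_per_year + managed_services_per_year

# A workload that looks cheap on the hourly meter can still carry the
# same patching/DBA/security labor it had before the move:
cloud = annual_tco(infra_per_hour=1.20, labor_per_year=40_000, licensing_per_year=12_000)
onprem = annual_tco(infra_per_hour=0.45, labor_per_year=95_000, licensing_per_year=25_000)
```

Comparing only the `infra_per_hour` column would reverse the conclusion the full model reaches, which is the trap the cost-calculator approach falls into.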
So we try to be as comprehensive as possible, but we also look at SLA, we also look at security, so you may need to bring in other managed services or consulting or software packages to fill those gaps. So we try to be as holistic and comprehensive as possible. >> What other kinds of patterns and data do you bring to CloudGenera? I'm thinking either from a vertical standpoint or kind of size of company. There have been certain movements in virtualization and containers and the like, where there's been kind of that data, and how do I understand what's going to make sense for me. Does CloudGenera get into any of that? >> We do get into some of that. So we try, again, not to force anything down someone's throat. We try to look at where you are, but also understand that there are some patterns. So for example, when we talk about different industry verticals, it's very aligned to security and compliance. So we know that there are certain providers that are interesting but not ready for primetime because they don't have HIPAA, HITECH, HITRUST, things that are typically relevant for the healthcare industry, so we're very quickly able to say this is something that may not be right for you just yet. Or if you have certain regional concerns, maybe you're looking at GDPR in Europe, you're looking at IRAP in Australia, we can, again, typically guide them: this provider has some very interesting services, but they don't have the security or the SLA that you need. So we try to do that to kind of whittle it down. The other thing that we're seeing though, Stu, is that honestly, many enterprises are biting off more than they can chew. They try to do too much at once, and so, one of the things that we talked about, even off camera, is I would ask the question, "Does the industry have a POT problem? Are we trying to do too much at once?"
And when I say POT, I'm using that to represent an acronym of, to me, three pieces that we need to break this down into. Number one is parity, number two is optimization, and number three is transformation. Many enterprises, in our opinion, are trying to eat an elephant with a spoon. They have no idea how to get there, and they really don't understand what is too much in terms of the cost. And so, when they're evaluating how much they can handle, how much change is too much in terms of people, process, and technology, the question for us is: what does parity look like? That may mean a lift and shift in some cases, it may not, but you at least have to define what success looks like if you take what you're doing in your data center and move that somewhere else. Then the middle ground is optimization: how do I take the spirit of what I'm doing, move it to that venue, and then kind of clean it up or optimize it a little bit? And then, once I'm there and I can evaluate the unintended consequences of change, what are the things that I didn't think about? The impacts to my people, the retraining, the other software packages I need to put in place for monitoring and management, and so forth. Once I have a handle on that, then I can finally move from optimization to transformation. But that's not glamorous. That's not interesting. People don't want to talk about that. They want to go whole hog and change everything all at once, and we get into trouble doing that. >> Bobby, you've given me flashbacks. I worked in the storage industry for a decade, and migrations, you still kind of wake up in the middle of the night screaming a little bit, because it's always challenging; there's always all of those things to work through. You think you've gone through all of your checklists and then, oh wait, something didn't work. Database migrations, big discussion going on there. From Wikibon, David Floyer has just been like, there are so many horror stories.
People get there, but if you don't have to, maybe you don't want to, and yet there are so many reasons why you would want to. So I guess I want to highlight: we're not telling people not to change. Moving faster and getting on board, some modernization, is a good thing, everywhere. You've got a virtualization environment, there's lots you can do today that we couldn't do two or four years ago. So, how do we get over this POT problem then? >> I think part of it is, again going back to the moving analogy: if I'm going to move, Stu, it would be foolish for me to move without getting an estimate. And there are times when an estimator should be able to come into my house and tell me, "It's actually better for you to sell that piano than to try to move it, 'cause it's not worth it." I would want someone, if I were CIO in an enterprise today, to tell me, "Don't waste your time focusing on this; this is really where you need to focus your time, because this is going to be the Pareto principle that saves you the time and the money." The reality is bringing in someone who's benefited from the land mines and the pitfalls. So, in our opinion, bring in, whether that's an SI, a consultancy, or a data service company like CloudGenera, someone that's benefited from a lot of the things we've seen in the industry. Don't hit things on your own that other people have stumbled on, right? Benefit from others' mistakes, to allow you to take a look at the whole thing. The challenge that I think we're having, Stu, is that we're proficient in talking about these things, but there aren't enough use cases, in terms of mature cloud transformations, to really look back at anecdotal data this comprehensive. We're still figuring a lot of this stuff out, and I know people don't want to hear that, but that's my opinion.
>> So, Bobby, is there some place, when I'm filling out these forms, that I put in here's the skill set my team has, and a little alarm goes off and says, "Hey, time to do some retraining, some reskilling, maybe bring on some new people to handle some of these new areas"? How do you handle that side of it? >> I think part of it is, honestly, and this may sound a little trite, I think people that are willing to raise their hand and say that we need some help, or that "We don't have this all figured out," or that "There are some things we need to bring in a little bit of help with, to get that estimate before we look to move everything," that's really the skill set you want to have. People that are not saying "I'm the (mumbles) juggernaut of everything cloud," because those people don't exist yet, in my opinion. There are people that have pockets of expertise in things that they have really deep knowledge about, but we need to mix that with, I think, a healthy appreciation for the fact that there are still a lot of things that we're learning about together. The other part of that, Stu, is it's a community and it's a network. You may know storage migrations, I may know database migrations; let's put our heads together about how we can work together as an enterprise and make sure that we minimize impact to the users. Because at the end of the day, that's really the challenge: it's not to do a cool project, it's to deliver value to the business, and that's what I think we're losing sight of with all this cool technology sometimes. >> Alright, so Bobby, you've got over a thousand people using the tool. What are some of the big areas where people are like, "Oh wow, this is the stuff that's saving me either lots of time, lots of money, saving my business, and heck, if I'm running the show, keeps my job"? >> I think storage is a big one. So people are oftentimes unaware that there are so many different ways that you can run storage in a given provider.
So Amazon, for example, has four to six different ways you can just run block storage in their particular multi-tenant cloud, and people aren't aware of that. So there's a case that we did for a major bank. We showed them that a terabyte of storage in Amazon can run from 300 dollars up to 26 thousand dollars, depending on the level of performance that you want to hit. Egress is another one: what does the network behavior look like in those applications? Because people often will estimate the resources but not the traffic. What are the estimates to have a level of parity around security? If I don't have HIPAA compliance or SOC compliance in this particular provider, what is it going to take me to get to that level of parity that I need to have? Because if I save money, Stu, but I have to spend all that on my lawyer because my data got accessed, then I've still got a problem; I've just kind of moved it down the road. So lots of things out there that I believe are hiding in plain sight. Again, the information is out there; we just don't have the filters to find it. What I would say is a lot of people think that cloud is a commodity; we're not there yet. There are providers to this day, I can't give any names, to protect the innocent, but the same service is literally triple in one provider what it costs in another one, for almost exactly the same service. And there are examples like that that have been out there for years; we just can't see them. >> So, Bobby, last question: if somebody wanted to get started with CloudGenera, is there like a trial version, or how would somebody get involved? >> Yeah, so a couple of things that are really interesting. There's a "try now" button on our website that lets you answer a few questions and actually get a sample mini-assessment, download a sample report, and see the type of analysis that we provide, number one. Number two, CloudGenera is a software company but also a services company.
If you want to purchase the software, great, and we actually have trials that we can set up for you to do that. We also do what we call proofs of value. If you want to engage our team to come in and do five to ten applications, to see how those might look with our analysis, and then go at scale and look at your whole CMDB. We want to make sure we're meeting the needs of the business and not trying to boil the ocean if they're not ready for that yet. >> Bobby Allen, CTO and chief evangelist of CloudGenera, thanks so much for joining me. So much happening in the cloud world. Be sure to check out thecube.net for all of our coverage, as well as wikibon.com for all the research. Thanks for watching theCUBE, I'm Stu Miniman.
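Bobby's storage example earlier in the conversation, a terabyte in the same cloud ranging from roughly 300 dollars to 26 thousand dollars depending on performance, comes down to multiplying a tier price by capacity, plus per-IOPS charges at the high end. The tier names and prices below are invented for illustration, not actual AWS list prices.

```python
# Hypothetical $/GB-month prices by block-storage tier (not real list prices).
TIER_PRICES = {
    "cold_hdd": 0.025,
    "throughput_hdd": 0.045,
    "general_ssd": 0.10,
    "provisioned_iops_ssd": 0.125,
}

def monthly_cost_per_tb(tier, provisioned_iops=0, price_per_iops=0.065):
    """Cost of 1 TB for one month on a given tier; IOPS billed separately."""
    return TIER_PRICES[tier] * 1024 + provisioned_iops * price_per_iops

# The spread between the cheapest tier and a heavily provisioned volume:
low = monthly_cost_per_tb("cold_hdd")                       # ~25.6 / month
high = monthly_cost_per_tb("provisioned_iops_ssd", 32_000)  # ~2208 / month
```

The order-of-magnitude spread comes almost entirely from the per-IOPS term, which is the kind of line item a naive cost calculator misses.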

Published Date : Apr 25 2018


Lewis Kaneshiro & Karthik Ramasamy, Streamlio | Big Data SV 2018



(upbeat techno music) >> Narrator: Live, from San Jose, it's theCUBE! Presenting Big Data Silicon Valley, brought to you by SiliconANGLE Media and its ecosystem partners. >> Welcome back to Big Data SV, everybody. My name is Dave Vellante, and this is theCUBE, the leader in live tech coverage. You know, this is our 10th big data event. When we first started covering big data, back in 2010, it was Hadoop, and everything was a batch job. About four or five years ago, everybody started talking about real time and the ability to affect outcomes before you lose the customer. Lewis Kaneshiro is here. He's the CEO of Streamlio, and he's joined by Karthik Ramasamy, who's the chief product officer. They're both co-founders. Gentlemen, welcome to theCUBE. My first question is, why did you start this company? >> Sure. We came together around a vision that enterprises need to access the value around fast data. And so, as you mentioned, enterprises are moving out of the slow data era and looking for fast data value, to really deliver that back to their users or their use cases. And so, coming together around that idea of real-time action, what we realized was that enterprises can't access this data with projects that are not meant to work together, that are very difficult, perhaps, to stitch together. So what we did was create an intelligent platform for fast data that's really accessible to enterprises of all sizes. What we do is unify the core components needed to access fast data, which are messaging, compute, and stream storage, drawing on best-of-breed open-source technology out of Twitter and Yahoo! >> It's a good thing, because I was going to ask why the world needs another, you know, streaming platform, but Lewis kind of touched on it, 'cause it's too hard. It's too complicated, so you guys are trying to simplify all that.
>> Yep. The main reason we wanted to simplify it is because, based on all our experiences at Twitter and Yahoo!, one of the key aspects was to simplify it so that it's consumable by a regular enterprise. Companies in Twitter's and Yahoo!'s position can afford the talent and the expertise to build these real-time platforms, but normal enterprises don't have access to that expertise, or to the costs they might have to incur. So, because of that, we wanted to take these open-source projects that Twitter and Yahoo! provided, combine them, and make sure that you have a simple, easy, drag-and-drop kind of interface, so that it's easily consumable by any enterprise. Essentially, what we are trying to do is reduce the (mumbles) to real time for all enterprises. >> Dave: Yeah, enterprises will pay up... >> Yes. >> For a solution. The companies that you used to work for, they all gladly throw engineering at the problem. >> Yeah. >> Sure. >> To save time, but most organizations, they don't have the resources, and so. Okay, so how would it work prior to Streamlio? Maybe take us through sort of how a company would attack this problem, the complexities of what they have to deal with, and what life is like with you guys. >> So, the current state of the world is a fragmented solution. It's a state of the world where you take multiple pieces of different projects and assemble them together so that you can do (mumbles), right? And the reason why people end up doing that is each of these big data projects that people use was designed for a completely different purpose: messaging is one, compute is another one, and the third one is storage.
So, essentially, what we have done as a company is to simplify this by integrating well-known, best-of-breed projects: for messaging we use Apache Pulsar, for compute we use Apache Heron, from Twitter, and similarly for storage, for real-time storage, we use Apache BookKeeper, and we unify them, so that, under the hood, it may be three systems, but, as a user, when you are using it, it functions as a single system. So you install the system, ingest your data, express your computation, and get the results out, in one single system. >> So you've unified or converged these functions. If I understand it correctly, we were talking off camera a little bit, the team, Lewis, that you've assembled actually developed a lot of these, or hugely committed to these open-source projects, right? >> Absolutely, co-creators of each of the projects, and what that allows us to do is to really integrate each project at a deep level. For example, Pulsar is actually a pub/sub system that is built on BookKeeper, and BookKeeper, in our minds, is the purest best-of-breed stream storage solution. So, fast and durable storage. That storage is also used in Apache Heron to store state. So, as you can see, enterprises, rather than stitching together multiple different solutions for queuing, streaming, compute, and storage, now have one option that they can install in a very small cluster, and operationally it's very simple to scale up. We simply add nodes if you get data spikes. And what this allows is enterprises to access new and exciting use cases that really weren't possible before. For example, machine learning model deployment to real time. So I'm a data scientist, and what I've found is that in data science you spend a lot of time training models in batch mode.
It's a legacy type of approach, but once the model is trained, you want to put that model into production in real time, so that you can deliver that value back to a user in real time. Let's call it under a two-second SLA. So that has been a great use case for Streamlio, because we are a ready-made intelligent platform for fast data, for ML/AI deployment. >> And the use cases are typically stateful and you're persisting data, is that right? >> Yes. It can be used for stateless use cases also, but the key advantage that we bring to the table is stateful storage. And since we ship along with the storage, (mumbles) stateful storage becomes much easier, because it can be used to store the intermediate state of the computation, or it can be used for staging (mumbles) data: when it spills over from memory, it's automatically stored to disk. Or you can even keep the data for as long as you want, so that you can unlock the value later, after the data has been processed as fast data. You can access the lazy data later, in time. >> So give us the run-down on the company: funding, you know, VCs, head count. Give us the basics. >> Sure. We raised a Series A from Lightspeed Venture Partners, led by John Vrionis and Sudip Chakrabarti. We've raised seven and a half million, and emerged from stealth back in August. That allowed us to ramp up our team to 17, now, mainly engineers, in order to really have a very solid product, but we launched post-rev, pre-launch, and some of our customers are really looking at geo-replication across multiple data centers, and so active-active geo-replication is an open-source feature in Apache Pulsar, and that's been a huge draw compared to some other solutions that are out there. As you can see, this theme of simplifying architecture is where Streamlio sits, so unifying queuing and streaming allows us to replace a number of different legacy systems. So that's been one avenue to help growth.
The other, obviously, is on the compute piece. As enterprises are finding new and exciting use cases to deliver back to their users, the compute piece needs to scale up and down. We also announced Pulsar Functions, which is stream-native compute that allows very simple function computation in native Python and Java, so you spin up the Apache Pulsar cluster or Streamlio platform, and you simply have compute functionality. That allows us to access edge use cases, so IoT is a huge, kind of exciting set of POCs for us right now, where we have connected-car examples that don't need a heavyweight scheduler deployment at the edge. It's Pulsar Functions. What that allows us to do are things like fraud detection, anomaly detection at the edge, model deployment at the edge, interpolation, observability, and alerts. >> And so how do you charge for this? Is it usage based? >> Sure. What we found is enterprises are more comfortable on a per-node basis, simply because we have the ambition to really scale up and help enterprises use Streamlio as their fast data platform across the entire enterprise. We found that having a per-data charge rate would actually limit that growth, and so: per node, and shared architecture. So we took an early investment in optimizing around Kubernetes. And so, as enterprises are adopting Kubernetes, we are the most simple installation on Kubernetes: on-prem, multicloud, at the edge. >> I love it. I mean, for years we've been talking about the complexity headwinds in this big data space. We certainly saw that with Hadoop. You know, Spark was designed to solve some of those problems, but. Sounds like you're doing some really good work to take that further. Lewis and Karthik, thank you so much for coming on theCUBE. I really appreciate it. >> Thanks for having us, Dave. >> All right, thank you for watching. We're here at Big Data SV, live from San Jose. We'll be right back. (techno music)
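Two of the use cases discussed above, Pulsar-Functions-style per-message compute and real-time scoring of a batch-trained model, can be sketched together. The function below only mimics the shape of a Pulsar Function; it does not depend on the real Pulsar SDK, and the model weights and feature names are invented for illustration.

```python
import math

# Coefficients assumed to come from an offline, batch training job.
WEIGHTS = {"amount": 0.004, "num_failed_logins": 0.9}
BIAS = -3.0

def fraud_score(event, context=None):
    """A Pulsar-Functions-style transform: one event in, one score out."""
    z = BIAS + sum(w * event.get(feat, 0.0) for feat, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))  # logistic output in (0, 1)

def run_locally(fn, events):
    # Toy harness standing in for the Functions runtime: feed each
    # consumed message through the function and collect the outputs.
    return [fn(e) for e in events]

scores = run_locally(fraud_score, [
    {"amount": 1000, "num_failed_logins": 0},  # suspicious-looking
    {"amount": 100, "num_failed_logins": 0},   # routine
])
flags = [s > 0.5 for s in scores]
```

In a real deployment the runtime, not `run_locally`, would invoke the function once per consumed message, comfortably inside the sub-two-second budget Lewis mentions, since each call is a handful of multiplications.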

Published Date : Mar 9 2018
