Andrew Hillier, Densify | AWS re:Invent 2020


 

>> Announcer: From around the globe, it's theCUBE, with digital coverage of AWS re:Invent 2020, sponsored by Intel, AWS and our community partners.

>> Hey, I'm Keith Townsend, the CTO Advisor on the Twitter, and we have yet another CUBE alum for this AWS re:Invent 2020 virtual coverage. AWS re:Invent 2020 is unlike any other, I think it's safe to say unlike any other virtual event. AWS draws nearly 60, 70,000 people in person to every conference, there's hundreds of thousands of people tuning in to watch the coverage, and we're talking to builders. No exception to that is our friend at Densify, co-founder and CTO Andrew Hillier. Welcome back to the show.

>> Thanks, Keith, it's great to be with you again.

>> So we're recording this right before it gets cold in Toronto. I hope you're enjoying some of this break in the cold weather?

>> Yeah, no, we're getting the same weather you are right now, it's fantastic. We're ready for the worst, I think, and the shorter days, but we'll get through it.

>> So for those of you that haven't watched any of the past episodes of theCUBE in which Andrew has appeared: Andrew, can you recap Densify, what do you guys do?

>> Well, we're analytics, you can think of us as very advanced cost analytics for cloud and containers. And when I say advanced, what I mean is, there's a number of different aspects of cost: there's understanding your bill, there's how to purchase. And we do those, but we also focus heavily on the resources that you're buying, and try to change that behavior. So it basically boils down to a business value of saving a ton of money, but by actually changing what you're using in the cloud, as well as providing visibility. So it's, again, a form of cost optimization, but combined with resource optimization.

>> So cost and resource optimization. We understand this stuff on-premises, we understand network, compute, storage, heating, cooling, etc.
All of that is abstracted from us in the public cloud. What are the drivers for cost in the public cloud?

>> Well, I think you directly or indirectly pay for all of those things. The funny thing about it is that it happens in a very different way. And I think everybody's aware, of course, of on-demand, and being able to get resources when you need them. But the flip side of on-demand, the not-so-good side, is it causes what we call micro-purchasing. So when you're buying stuff, if you go and turn on, like, an Amazon cloud instance, you're paying for that instance, you're paying for storage as well, and, implicitly, for some networking, a few dollars at a time. And that really kind of creates a new situation and scale, because all of a sudden what was a controlled purchase on-prem becomes a bunch of possibly junior people buying things in a very granular way, and that adds up to a huge amount of money. So the very thing that makes cloud powerful, the on-demand aspects, the elasticity, also causes a very different form of purchasing behavior, which I think is one of the causes of the cost problem.

>> So we're about 10, 12 years into this cloud movement, where public cloud has really become mainstream inside of traditional enterprises. What are some of the common themes you've seen when it comes to good cloud management, cost management hygiene, across organizations?

>> Yeah, and hygiene is a great word for that. I think it's evolved. You're right, it's been around, this is nothing new. I mean, we've probably been going to cloud expos for over a decade now. But it's kind of come in waves as far as the business problem. I think the initial problem was more around, I don't understand this bill. 'Cause to your point, all those things that you purchase on-prem, you're still purchasing in some way, plus a bunch of other services. And it all shows up in this really complicated bill. And so you're trying to figure out, well, who in my organization owes what.
And so that was a very early driver years ago, and we saw a lot of focus on slicing and dicing the bill, as we like to call it. And then that led to, well, now I know where my costs are going, can I purchase a little more intelligently? And so that was the next step. And that was an interesting step, because the problem is, the people that care about cost can't always change what's being used, but they can buy discounts and coupons, and RIs and Savings Plans. So there then started to be a focus on, I'm going to come up with ways of buying it where I can get a bit of a discount. It's like having a phone bill where I can't stop people making long distance calls, but I can get on a better phone plan. That was kind of the second wave. And what we're seeing as the next big wave now is that, okay, I've done that, now I actually should just change what I'm using, because there's a lot of inefficiency in there. I've got a handle on those other problems, now I need to actually, hopefully, make people not buy giant instances all the time, for example.

>> So let's talk about that feedback loop: to understand what's driving the cost, the people consuming those services need to understand those costs. How does Densify bridge that gap?

>> Well, again, we have aspects of our product that line up with basically all three of those business problems I mentioned. So there's a cloud cost intelligence module that basically lets you look at the bill in different ways, by different tags, and look for anomalies, which we find very important, so you can say, well, something unusual happened in my bill. So there's an aspect that just focuses on accountability of what's happening in the cost world. And then, one of the strengths of our product is that when we do our analytics, we look at a whole lot of things at once.
So we look at the instances and their utilization, and what the catalog is, and the RIs and Savings Plans, and everything all together. So if you want to purchase more intelligently, that can be very complicated. We see a lot of customers that say, well, I do want to buy Savings Plans, but man, it's difficult to figure out exactly what to do. So we like to think of ourselves as almost an analytics engine that's got an equation with a lot of terms in it. There's a lot of detail in what we're taking into account when we tell you what you should be doing. And that helps you buy more intelligently, and it also helps you consume more intelligently, 'cause they're all interrelated. I don't want to change an instance I'm using if there's an RI on it, that would take you backwards. I don't want to buy RIs for instances that I shouldn't be using, that takes you backwards. So it's all interconnected. And we feel that looking at everything at once is the path to getting the right answer. And having the right answer is the path to having people actually make a change.

>> So when I interviewed you a few years ago, we talked at a very high level about containers, and how containers are changing the way that we can consume cloud services. Containers introduced this concept of oversubscription in the public cloud. We couldn't really oversubscribe an instance back then, but we can now with containers. How are containers in general complicating cloud costing?

>> So it's interesting, because they do allow overcommit, but not in the same way that a virtual environment does. So in a virtual environment, if I say I need two CPUs for job X and I need two CPUs for job Y, I can put them both on a machine that has two CPUs, and they will be overcommitted. So overcommit in a virtual environment is a very well-established operation. It lets you get past people asking for too much, effectively.
Containers don't quite do that in the same way. When they refer to overcommit, they refer to the fact that you can ask for one CPU but use up to four, and that difference is the overcommit. But the fact that I'm asking for one CPU is actually a pretty big problem. So let me give an example. If I look at my laptop here, I've got Outlook and Word and all these things on it, and I had to tell you how many millicores I had to give each one. Let's say I'm running Zoom. Well, I want Zoom to work well, so I want to give it 4,000 millicores, four CPUs, because it uses that when it needs it. But my PowerPoint, I also want to give 4,000 or 2,000 millicores. So I add all these things up of what I need based on the actual more granular requirements, and it might add up to four laptops. Containers don't overcommit the same way: if I ask for those requests using containers, I actually will use four laptops. So it's those request values that are the trick. If I say I need a CPU, I get a CPU; it's not the same as a virtual CPU would be in a virtual environment. So we see that as the cause of a lot of the problem: people quite rationally say, I need these resources for these containers. But because containers are much more granular, I'm asking for a lot of individual resources that, when you add them up, are a ton of resources. So almost every container running, we see that they're at very low utilization, because everybody, rightfully so, asked for individual resources for each container, but they're the wrong resources, or in aggregate it's not creating the behavior you wanted. So we find with containers, people think they're going to magically cause problems to go away. But in fact, what happens is, when you start running a lot of them, you end up with a ton of cost. And people are just starting to get to that point now.
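The request-versus-usage gap Andrew describes can be sketched with some toy numbers. This is a minimal illustration, not Densify's analytics: the container names, millicore figures, and the 4,000-millicore node size are all invented for the example.

```python
# Toy illustration: container CPU *requests* drive cluster size,
# because the scheduler reserves the full request even when actual
# usage is far lower. All numbers below are made up.

containers = [
    # (name, requested_millicores, typical_usage_millicores)
    ("zoom",       4000, 900),
    ("powerpoint", 2000, 150),
    ("word",       1000, 100),
    ("outlook",    1000, 200),
]

requested = sum(req for _, req, _ in containers)   # what the cluster must reserve
used = sum(u for _, _, u in containers)            # what actually runs

# Nodes get provisioned against requests, not usage.
node_capacity = 4000                               # millicores per node (assumed)
nodes_needed = -(-requested // node_capacity)      # ceiling division

print(f"requested={requested}m, typical use={used}m, "
      f"nodes={nodes_needed}, utilization={used / requested:.0%}")
```

The aggregate utilization comes out very low even though every individual request seemed reasonable, which is exactly the "I actually will use four laptops" effect.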
>> Yeah, I can see how that could easily be the case. Inside of a virtual environment, I can easily say my VM needs four CPUs, four vCPUs, and I can do that across 100 applications. And that really doesn't cost me a lot in the private data center; tools like VMware DRS and all of that kind of fix that for me on the back-end, it's magical. In the public cloud, if I ask for four CPUs, I get four CPUs, and I'm going to pay for four CPUs even if I don't utilize them, there's no auto-balancing. So how does Densify help actually solve that problem?

>> Well, there's multiple aspects to that problem. One of the biggest ones is that people don't necessarily ask for the right thing in the first place. So, I gave the example of, I need to give Zoom 4,000 millicores. That's probably not true at all. If I analyze what it's doing, maybe for a second it uses that, but most of the time it's not using nearly those resources. So the first step is to analyze the container behavior patterns and say, well, those numbers should be different. And so, for example, one thing we do with that is, we say, if a developer is using Terraform templates to stand up containers, instead of putting a number in that template, a thousand millicores or 400 millicores, just put a variable that references our analytics, and let the analytics figure out what that number should be. And so it's a very elegant solution to say, the machine learning will actually figure out what resources that container needs, 'cause humans are not very good at it, especially when there's tens of thousands of containers. So that's one of the big things, to optimize the container requests. And then once you've done that, the nodes that you're running on can be optimized, because now they start to look different. Maybe you don't need as much memory or as much CPU.
So again, it's all interrelated, but it's a methodical step that's based on analytics. And people are too busy to figure this out; they can't figure it out for thousands of things. Again, if I asked you, on your laptop, how many millicores do you need to give PowerPoint? You don't know. But in containers, you have to know. So we're saying, let the machine figure it out.

>> Yes, kind of like when you're asked how many millicores you need to give Zoom, the answer's yes.

>> Yeah, exactly.

>> (laughs) So at the end of the day, you need some way to quantify that. So you guys are doing two things. One, you're quantifying, you're measuring how much this application typically takes. And then when I go to provision it, using a tool like Terraform, instead of me answering the question, the answer is: go ask Densify, and Densify will tell you, and then I'll optimize my environment. So I get both ends of that equation, if I'm summarizing it correctly.

>> Absolutely. And that last part is extremely important, because in a legacy environment, like a virtual environment, I can call an API and change the size of a VM, and it will stay that way. So that's a viable automation strategy for those types of environments. In the cloud, when you're using Terraform, or in containers, things will go right back to what's in the Terraform template. That's one of the powerful things about Terraform: it always matches what's in the code. So I can't go and change the cloud; it'll just go back to whatever is in the Terraform template the next time it's provisioned.
So we have to go upstream, you have to actually do it at the source. When you're provisioning applications, the actual resource specifications should be coming through at that point. You don't want to change them after the fact; you update the Terraform and redeploy with a new value. That's the way to do automation in a container environment. You can't do it like you did in a VMware environment, because it won't stick, it just gets undone the next time the DevOps pipeline triggers. So it's a big opportunity for a whole new generation of automation. We call it CICDCO: Continuous Integration, Continuous Delivery, Continuous Optimization. It's just part of the fabric of the way you deploy apps, and it's a much more elegant way to do it.

>> So you hit a few trigger terms: DevOps, CICD, and Continuous Optimization. What is the typical profile of a Densify customer?

>> Well, usually they're a mix of a bunch of different technologies. So I don't want to make it sound like you have to be a DevOps shop to benefit from this. Most of our customers have some DevOps teams, but they also have a lot of legacy workloads, they have virtual environments, they have cloud environments. So they don't necessarily have 100% of all of these things. But usually it's a mix: there might be some newer born-in-the-cloud apps being deployed, where this whole CICDCO concept really makes sense, and they might have another few thousand cloud instances that they stood up, not as part of a DevOps pipeline, but just to run apps, or maybe even migrated from on-prem. So it's a pretty big mix. We see almost every company has a mix, unless you just started a company yesterday: some EC2 instances that are kind of standalone and static, maybe some scale groups running, or containers running in scale groups.
And there's generally a mix of these things. So the things I'm describing do not require DevOps. The notion of optimizing cloud instances by changing the marching orders when they're provisioned, not after the fact, applies to anybody using the cloud. And our customers tend to be a mix: some, again, very new-school processes and born in the cloud, and some more legacy applications that look a little more like an on-prem environment would, where they're not turning on and off dynamically, they're just running transactional workloads.

>> So let's talk about the kinds of industries, because you hit on a key point: we kind of associate a certain type of company with born in the cloud, et cetera. What type of organizations or industries are we seeing Densify deployed in?

>> So we don't really have a specific market vertical that we focus on; we have a wide variety. We find we have a lot of customers in financial services, banks, insurance companies. And I think that's because those are very large, complicated environments, where analytics really pay dividends. If you have a lot of business services doing different things, at different criticality levels, the things I'm describing are very important. But we also have logistics companies, software companies. So again, complexity plays a part, and I think elasticity plays a part, for the organization that wants to be able to make use of the cloud in a smart way, where they're more elastic, and obviously drive costs down. So again, we have customers across all different types of industries, manufacturing, pharmaceutical. It's a broad range. We have partners as well, like IBM, that use our product with their customers. So there's no one type of company that we focus on, certainly. But we do see, again, environments that are complicated, or mission critical, or that really want to run in a more elastic way; those tend to be very good customers for us.
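The CICDCO pattern described earlier, optimizing at the source so a change survives every redeploy, can be reduced to a small conceptual sketch. The resource names, instance sizes, and the recommendations dict below are all hypothetical; a real pipeline stage would pull recommendations from the analytics service and commit the updated spec.

```python
# Sketch: an optimization stage that rewrites the *declared* spec
# before deploy, rather than resizing live resources (which the next
# Terraform apply would undo). All names and sizes are invented.

desired_spec = {
    "web":    {"instance_type": "m5.4xlarge"},
    "worker": {"instance_type": "m5.2xlarge"},
}

# Hypothetical output of an analytics engine for this environment.
recommendations = {
    "web":    "m5.xlarge",    # low utilization: downsize
    "worker": "m5.2xlarge",   # already right-sized: keep
}

def optimize(spec, recs):
    """Return a new spec with recommended sizes folded in. Changing the
    spec, not the running instance, is what makes the change stick."""
    return {
        name: {**cfg, "instance_type": recs.get(name, cfg["instance_type"])}
        for name, cfg in spec.items()
    }

optimized = optimize(desired_spec, recommendations)
print(optimized["web"]["instance_type"])
```

Because the rewritten spec is what the pipeline deploys from, every subsequent trigger re-applies the optimized sizes instead of reverting them, which is the difference from VMware-style after-the-fact resizing.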
>> Well, CUBE alum Andrew Hillier, thank you for joining us on theCUBE's coverage of AWS re:Invent 2020 virtual. Say goodbye to a couple hundred thousand of your closest friends.

>> Okay, and thanks for having me.

>> That concludes our interview with Densify. We really appreciate the folks at Densify having us again to have this conversation around workload analytics and management. To find out more, or to find more great CUBE coverage, visit us on the web at SiliconANGLE TV. Talk to you on the next episode of theCUBE. (upbeat music)

Published Date : Dec 8 2020
