Andrew Hillier, Densify | AWS re:Invent 2020


 

>> Announcer: From around the globe, it's theCUBE, with digital coverage of AWS re:Invent 2020, sponsored by Intel, AWS and our community partners. >> Hey, I'm Keith Townsend, The CTO Advisor on the Twitter, and we have yet another CUBE alum for this AWS re:Invent 2020 virtual coverage. AWS re:Invent 2020, unlike any other, I think it's safe to say unlike any other virtual event. AWS, nearly 60, 70,000 people in person at every conference, hundreds of thousands of people tuning in to watch the coverage, and we're talking to builders. No exception to that is our friends at Densify, co-founder and CTO of Densify, Andrew Hillier, welcome back to the show. >> Thanks, Keith, it's great to be with you again. >> So we're recording this right before it gets cold in Toronto. I hope you're enjoying some of this before the cold weather breaks? >> Yeah, no, we're getting the same weather you are, right now it's fantastic. We're ready for the worst, I think, and the shorter days, but we'll get through it. >> So for those of you that haven't watched any of the past episodes of theCUBE in which Andrew has appeared: Andrew, can you recap Densify, what do you guys do? >> Well, we're analytics, you can think of us as very advanced cost analytics for cloud and containers. And when I say advanced, what I mean is, there's a number of different aspects of cost: there's understanding your bill, there's how to purchase. And we do those, but we also focus heavily on the resources that you're buying, and try to change that behavior. So it basically boils down to a business value of saving a ton of money, but by actually changing what you're using in the cloud, as well as providing visibility. So it's, again, a form of cost optimization, but combined with resource optimization. >> So cost and resource optimization, we understand this stuff on-premises, we understand network, compute, storage, heating, cooling, etc. 
All of that is abstracted from us in the public cloud, what are the drivers for cost in the public cloud? >> Well, I think you directly or indirectly pay for all of those things. The funny thing about it is that it happens in a very different way. And I think everybody's aware, of course, of on-demand, and being able to get resources when you need them. But the flip side of on-demand, the not so good side, is it causes what we call micro-purchasing. So when you're buying stuff, if you go and turn on, like, an Amazon Cloud instance, you're paying for that instance, you're paying for storage as well, and, implicitly, for some networking, a few dollars at a time. And that really kind of creates a new situation and scale, because all of a sudden now what was a controlled purchase on-prem becomes a bunch of possibly junior people buying things in a very granular way, that adds up to a huge amount of money. So the very thing that makes cloud powerful, the on-demand aspects, the elasticity, also causes a very different form of purchasing behavior, which I think is one of the causes of the cost problem. >> So we're about 10, 12 years into this cloud movement, where public cloud has really become mainstream inside of traditional enterprises. What are some of the common themes you've seen when it comes to good cloud management, the cost management hygiene across organizations? >> Yeah, and hygiene is a great word for that. I think it's evolved, you're right, it's been around, this is nothing new. I mean, we've probably been going to cloud expos for over a decade now. But it's kind of come in waves as far as the business problem. I think the initial problem was more around, I don't understand this bill. 'Cause to your point, all those things that you purchase on-prem, you're still purchasing in some way, and a bunch of other services. And it all shows up in this really complicated bill. And so you're trying to figure out, well, who in my organization owes what. 
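The micro-purchasing effect Hillier describes can be made concrete with a minimal sketch. The hourly rate and fleet size below are invented for illustration, not real AWS prices:

```python
# Hypothetical illustration of "micro-purchasing": many small hourly
# charges, made independently, that add up to a large monthly bill.
# The rate and counts are invented for the example, not AWS list prices.

def monthly_cost(hourly_rate, instance_count, hours=730):
    """Cost of running `instance_count` instances for a ~730-hour month."""
    return hourly_rate * instance_count * hours

# One instance feels cheap on its own...
single = monthly_cost(0.096, 1)      # about $70 a month

# ...but 500 of them, launched a few at a time by different teams,
# is a very different number.
fleet = monthly_cost(0.096, 500)     # about $35,000 a month

print(f"one instance: ${single:,.2f}/mo, fleet: ${fleet:,.2f}/mo")
```

Each individual decision is a few dollars at a time; the aggregate is what shows up on the bill.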
And so that was a very early driver years ago, we saw a lot of focus on slicing and dicing the bill, as we like to call it. And then that led to, well, now I know where my costs are going, can I purchase a little more intelligently. And so that was the next step. And that was an interesting step, because the problem is, the people that care about cost can't always change what's being used, but they can buy discounts and coupons, and RIs and Savings Plans. So we saw that there then started to be a focus on, I'm going to come up with ways of buying it where I can get a bit of a discount. And it's like having a phone bill where I can't stop people making long distance calls, but I can get on a better phone plan. And that was kind of the second wave. And what we're seeing is, the next big wave now is that, okay, I've done that, now I actually should just change what I'm actually using, because there's a lot of inefficiency in there. I've got a handle on those other problems, I need to actually, hopefully, make people not buy giant instances all the time, for example. >> So let's talk about that feedback loop: understanding what's driving the cost, the people that are consuming those services and need to understand those costs. How does Densify bridge that gap? >> Well, again, we have aspects of our product that line up with basically all three of those business problems I mentioned. So there's a cloud cost intelligence module that basically lets you look at the bill in different ways, by different tags. Look for anomalies, we find that very important, to say, well, something unusual happened in my bill. So there's an aspect that just focuses on kind of accountability of what's happening in the cost world. And then, now, one of the strengths of our product is that when we do our analytics, we look at a whole lot of things at once. 
So we look at the instances and their utilization, and what the catalog is, and the RIs and Savings Plans, and everything all together. So if you want to purchase more intelligently, that can be very complicated. So we see a lot of customers that say, well, I do want to buy Savings Plans, but man, it's difficult to figure out exactly what to do. So we like to think of ourselves as kind of, it's almost like an analytics engine that's got an equation with a lot of terms in it. It's got a lot of detail of what we're taking into account when we tell you what you should be doing. And that helps you buy more intelligently, it also helps you consume more intelligently, 'cause they're all interrelated. I don't want to change an instance I'm using if there's an RI on it, that would take you backwards. I don't want to buy RIs for instances that I shouldn't be using, that takes you backwards. So it's all interconnected. And we feel that looking at everything at once is the path to getting the right answer. And having the right answer is the path to having people actually make a change. >> So when I interviewed you a few years ago, we talked about, at a very high level, containers, and how containers are changing the way that we can consume cloud services. Containers introduced this concept of oversubscription in the public cloud. We couldn't really oversubscribe a large instance back then, but we can now with containers. How are containers in general complicating cloud costing? >> So it's interesting, because they do allow overcommit, but not in the same way that a virtual environment does. So in a virtual environment, if I say I need two CPUs for job X, I need two CPUs for job Y, I can put them both on a machine that has two CPUs, and they will be overcommitted. So overcommit in a virtual environment is a very well established operation. It lets you get past people asking for too much, effectively. 
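The interrelated rightsizing-and-RI point above can be sketched as a toy decision rule. All rates here are invented, and a real engine weighs far more terms than this:

```python
# Toy sketch of why rightsizing and RI purchases must be analyzed
# together. The hourly rates below are invented placeholders, not
# real AWS prices, and the decision rule is deliberately simplistic.

ON_DEMAND = {"m5.large": 0.096, "m5.xlarge": 0.192}  # $/hr, assumed
RI_RATE   = {"m5.large": 0.040, "m5.xlarge": 0.080}  # $/hr, assumed

def best_action(current, recommended, has_ri):
    """Compare staying on the current size against switching.

    An existing RI discounts the current size only, so switching away
    from it can take you backwards, exactly as described above."""
    stay = RI_RATE[current] if has_ri else ON_DEMAND[current]
    move = ON_DEMAND[recommended]
    return ("stay", stay) if stay <= move else ("switch", move)

# Without an RI, downsizing the oversized instance wins...
print(best_action("m5.xlarge", "m5.large", has_ri=False))  # ('switch', 0.096)
# ...but with an RI covering it, changing the instance would cost more.
print(best_action("m5.xlarge", "m5.large", has_ri=True))   # ('stay', 0.08)
```

The same structure explains the reverse case too: buying an RI for an instance you should not be running locks in the wrong resource.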
Containers don't quite do that in the same way. When they refer to overcommit, they refer to the fact that you can ask for one CPU but you can use up to four, and that difference is where you overcommit. But the fact that I'm asking for one CPU is actually a pretty big problem. So let me give an example. If I look at my laptop here, and I've got Outlook and Word and all these things on it, and I had to tell you how many millicores I had to give each one... or with Zoom, let's say I'm running Zoom. Now, well, I want Zoom to work well, I want to give it 4,000 millicores, I want to give it four CPUs, because it uses that when it needs it. But my PowerPoint, I also want to give 4,000 or 2,000 millicores. So I add all these things up of what I need based on the actual more granular requirements, and it might add up to four laptops. But containers don't overcommit the same way: if I ask for those requests using containers, I actually will use four laptops. So it's those request values that are the trick. If I say I need a CPU, I get a CPU, it's not the same as a virtual CPU would be in a virtual environment. So we see that as the cause of a lot of the problem, in that people quite rationally say, I need these resources for these containers. But because containers are much more granular, I'm asking for a lot of individual resources, and when you add them up, it's a ton of resources. So almost every container running, we see that they're at very low utilization, because everybody, rightfully so, asked for individual resources for each container, but they are the wrong resources, or in aggregate, it's not creating the behavior you wanted. So we find with containers, people think they're going to magically cause problems to go away. But in fact, what happens is, when you start running a lot of them, you end up just with a ton of cost. And people are just starting to get to that point now. 
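His laptop analogy can be sketched in a few lines. The applications and millicore numbers below are invented stand-ins for what a real environment would show:

```python
# Sketch of the request-vs-usage gap described above. CPU *requests*
# are reserved by the scheduler whether or not they're used, so they
# add up fast. All numbers here are made up for illustration.

requests_m    = {"zoom": 4000, "powerpoint": 2000, "word": 1000, "outlook": 1000}
typical_use_m = {"zoom": 300,  "powerpoint": 100,  "word": 50,   "outlook": 150}

total_requested = sum(requests_m.values())     # 8000 millicores = 8 CPUs
total_used      = sum(typical_use_m.values())  # 600 millicores

utilization = total_used / total_requested
print(f"requested {total_requested}m, typically using {total_used}m "
      f"({utilization:.0%} utilization)")
```

Each individual request is rational; the aggregate is the "four laptops" problem.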
>> Yeah, I can see how that could easily be the case inside of a virtual environment. I can easily say my VM needs four CPUs, four vCPUs, and I can do that across 100 applications, and that really doesn't cost me a lot in the private data center; tools like VMware DRS and all of that kind of fix that for me on the back-end, it's magical. In the public cloud, if I ask for four CPUs, I get four CPUs, and I'm going to pay for four CPUs even if I don't utilize it, there's no auto-balancing. So how does Densify help actually solve that problem? >> Well, so there's multiple aspects to that problem. One of the biggest ones is that people don't necessarily ask for the right thing in the first place. So, I give the example of, I need to give Zoom 4,000 millicores, that's probably not true at all. If I analyze what it's doing, maybe for a second it uses that, but for most of the time, it's not using nearly those resources. So the first step is to analyze the container behavior patterns, and say, well, those numbers should be different. And so for example, the one thing we do with that is, we say, if a developer is using Terraform templates to stand up containers, instead of putting the number 1,000 in that template, a thousand millicores, or 400 millicores, just put a variable that references our analytics, just let the analytics figure out what that number should be. And so it's a very elegant solution to say, the machine learning will actually figure out what resources that container needs, 'cause humans are not very good at it, especially when there's tens of thousands of containers. So that's one of the big things, to optimize the container requests. And then once you've done that, the nodes that you're running on can be optimized, because now they start to look different. Maybe you don't need as much memory or as much CPU. 
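A minimal sketch of that idea: derive the request value from observed usage, and feed that number into the template variable instead of a hard-coded 1,000. The percentile-plus-headroom rule below is an invented stand-in for the machine-learning analysis he describes, not Densify's actual method:

```python
# Hypothetical recommendation rule: size a container's CPU request
# from observed usage samples rather than a human guess. The 95th
# percentile and 20% headroom are assumptions for illustration only.

import math

def recommended_millicores(samples, percentile=0.95, headroom=1.2):
    """Pick a CPU request (millicores) from observed usage samples."""
    ordered = sorted(samples)
    idx = min(len(ordered) - 1, math.ceil(percentile * len(ordered)) - 1)
    return int(ordered[idx] * headroom)

# A day of made-up usage samples for one container: mostly idle,
# with a couple of short bursts.
usage = [40] * 90 + [120] * 8 + [900, 950]

# This is the value a template variable would reference, instead of
# a hand-written 1000 or 400.
print(recommended_millicores(usage))  # 144
```

In a Terraform workflow, a value like this would be rendered into the template variable at deploy time rather than edited by hand.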
So again, it's all interrelated, but it's a methodical set of steps that's based on analytics. And people are too busy to figure this out; they can't figure it out for thousands of things. Again, if I asked you, on your laptop, how many millicores do you need to give PowerPoint? You don't know. But in containers, you have to know. So we're saying, let the machine figure it out. >> Yes, kind of like when you're asked how many millicores do you need to give Zoom, the answer's yes. >> Yeah, exactly. >> (laughs) So at the end of the day, you need some way to quantify that. So you guys are doing the two things. One, you're quantifying, you're measuring how much this application typically takes. And then when I go to provision it, we're using a tool like Terraform, though then instead of me answering the question, the answer is, go ask Densify, and Densify will tell you, and then I'll optimize my environment. So I get both ends of that equation, if I'm kind of summarizing it correctly. >> Absolutely. And that last part is extremely important, because in a legacy environment, like in a virtual environment, I can call an API and change the size of a VM, and it will stay that way. And so that's a viable automation strategy for those types of environments. In the cloud, or when you're using Terraform, or in containers, they will go right back to what's in the Terraform template. That's one of the powerful things about Terraform, it always matches what's in the code. So I can't go and change the cloud, it'll just go back to whatever is in the Terraform template next time it's provisioned. 
So we have to go upstream, you have to actually do it at the source. When you're provisioning applications, the actual resource specifications should be coming through at that point. You don't want to change them after the fact; you can update the Terraform and redeploy with a new value, that's the way to do automation in a container environment. You can't do it like you did in a VMware environment, because it won't stick, it just gets undone the next time the DevOps pipeline triggers. So it's a big opportunity for kind of a whole new generation of automation. We call it CICDCO: Continuous Integration, Continuous Delivery, Continuous Optimization. It's just part of the fabric of the way you deploy Ops, and it's a much more elegant way to do it. >> So you hit a few trigger terms: one, DevOps, two, CICD, and Continuous Operations. What is the typical profile of a Densify customer? >> Well, usually they're a mix of a bunch of different technologies. So I don't want to make it sound like you have to be a DevOps shop to benefit from this. Most of our customers have some DevOps teams, they also have a lot of legacy workloads, they have virtual environments, they have cloud environments. So they don't necessarily have 100% of all of these things. But usually it's a mix of things where there might be some newer born-in-the-cloud apps being deployed, and this whole CICDCO concept really makes sense for them. They might just have another few thousand cloud instances that they stood up, not as part of a DevOps pipeline, but just to run apps, or maybe even migrated from on-prem. So it's a pretty big mix. We see almost every company has a mix; unless you just started a company yesterday, you're going to have a mix of some EC2 services that are kind of standalone and static, maybe some scale groups running, or containers running in scale groups. 
And there's generally a mix of these things. So the things I'm describing do not require DevOps. The notion of optimizing the cloud instances by changing the marching orders when they're provisioned, not after the fact, that applies to anybody using the cloud. And our customers tend to be a mix: some, again, very new, new-school processes and born in the cloud, and some more legacy applications that are running, that look a little more like an on-prem environment would, where they're not turning on and off dynamically, they're just running transactional workloads. >> So let's talk about the kind of industries, because you hit on a key point: we kind of associate a certain type of company with born in the cloud, et cetera. What type of organizations or industries are we seeing Densify deployed in? >> So we don't really have a specific market vertical that we focus on, we have a wide variety. So we find we have a lot of customers in financial services, banks, insurance companies. And I think that's because those are very large, complicated environments, where analytics really pay dividends; if you have a lot of business services that are doing different things, at different criticality levels, the things I'm describing are very important. But we also have logistics companies, software companies. So again, complexity plays a part. I think elasticity plays a part, in the organization that wants to be able to make use of the cloud in a smart way, where they're more elastic, and obviously drive costs down. So again, we have customers across all different types of industries, manufacturing, pharmaceutical. So it's a broad range. We have partners as well, like IBM, that use our product with their customers. So there's no one type of company that we focus on, certainly. But we do see, again, environments that are complicated or mission critical, or that really want to run in a more elastic way, those tend to be very good customers for us. 
>> Well, CUBE alum Andrew Hillier, thank you for joining us on theCUBE's coverage of AWS re:Invent 2020 virtual. Say goodbye to a couple hundred thousand of your closest friends. >> Okay, and thanks for having me. >> That concludes our interview with Densify. We really appreciate the folks at Densify having us again to have this conversation around workload analytics and management. To find out more, or to find just more great CUBE coverage, visit us on the web at SiliconANGLE TV. Talk to you next episode of theCUBE. (upbeat music)

Published Date : Dec 8 2020



Andrew Hillier, Densify | AWS re:Invent


 

>> Announcer: Live from Las Vegas, it's theCUBE, covering AWS re:Invent 2017, presented by AWS, Intel, and our ecosystem of partners. >> Hi, I'm Stu Miniman. I'm here with my co-host Keith Townsend, and you're watching theCUBE's live coverage of AWS re:Invent 2017 here in the heart of the Sands Convention Center in Las Vegas, 43,000 people in attendance, spread out across many of the facilities here in Vegas. So lots of lines, lots of things going on. Happy to welcome back to the program Andrew Hillier, who is the CTO and co-founder of Densify. Great to see you, how have you been? >> It's good to be back. It's been great, been loving the show. It's a huge show. >> All right, so we're pretty excited, because we've got a double set here at Amazon's show for the first time, it's our fifth year doing the show. There's another show that we interviewed at where we've had a double set for a few years. That, of course, is VMworld. We've been watching this change, as AWS says, from the old guard to, you know, that cloud native, if you will. Talk to us a little bit about what Densify's doing, how you fit into the ecosystem here? >> Sure, yeah, it's a very different show than the one two months ago, also right here, I think. Even this morning at the keynote, they were referring to that as what you've been doing the last 10 years, and this is all very forward looking, much more vision. And what we're finding is that a lot of the challenges of the past are coming right back again. When you start moving to this new operational model, how do you optimize it, how do you save money, how do you keep pace with change? So today's keynote was a very different thing than you would have seen two months ago. It's all about innovation, innovation, innovation, not just new feature, new feature. >> Andrew, I want you to talk about the customers that you talk to, the mindset. Are there VMware customers and AWS customers? How are they approaching things like innovation and strategy? 
>> Well, I think everybody's kind of caught in between, and you know, people talk about hybrid a lot, and what we find is that a lot of people's mentality is, there's really the cloud, and then all that other stuff, and one is just something that I divest or get rid of, and one is really where the mind share is. So even though people might have 10 times as much in VMs as they do in the cloud, they're just thinking about the cloud, that's what they... And then there's a lot of questions about, how do I move there, how do I run there, how do I get that bill down, because the bills are very, very high, you know, when you start running there. And there are constantly new services, like we saw this morning, that also can help make that bill even higher. So how do you get there in a safe way? Safety and efficiency. >> Densify focuses on that cloud optimization piece. We used to talk about VMware when it first started, it was like, oh great, utilization efficiencies, to be able to kind of consolidate, but we had VM sprawl. And now there's cloud sprawl and containers, people are trying to figure out all of these things. What are some of the key challenges that you see from customers, what are some of the big places that they can really save a lot of money? >> Sure, yeah, you know, in the virtual world it's all about what we would call playing Tetris, where you look at the workload patterns, we do a lot of workload pattern analysis, and say, that's busy in the morning, that's busy at night, put them on the same host, it's cheaper, and it runs better. And you can get huge efficiency gains in those environments. As you move to the cloud, it starts to look a bit different, it's buying small, medium and large. So we do the same pattern analysis, but say, yeah, you're on an M3, you should be on a T2, you're on the wrong thing. And we're seeing around 40% savings on average just by doing that. So we're seeing massive... 
You know, it's a different kind of opportunity, but it's equal in magnitude, the savings. We saw one customer last week, it was 57% savings, by just getting the CloudWatch data, analyzing the patterns and saying, you're buying the wrong stuff. Now, where this is all going, interestingly, is that when you start to move to containers, that game of Tetris comes back into play again. It's much more advanced analytics to say, how do I combine my workloads to make them fit on the smallest footprint? You know, hence Densify, that's what we do. And we're seeing savings of upwards of 80% when you do that. So there's huge savings with the right analytics. >> So with the analytics, much different conversation, as you mentioned, than it was a few years ago to now. You wouldn't go to a data center manager and say, you know what, I can really save you on infrastructure costs by optimizing your efficiency. You know what, sunk costs, I don't care, you know, if I'm oversubscribed, not a big deal. In the cloud, that is a tangible thing. Someone has to pay that op ex bill. But with that said, even with the optimization, that can sometimes go into reverse, especially with all the announcements today. You gotta figure out, you know what, am I optimal in the cloud, or can I use some of these older assets in my data center, move workloads back there? Do you guys help with that decision matrix sort of thing: you know what, do I run what's existing in the cloud that's not elastic back in the data center? >> Yeah, you can do that and whatever else, absolutely. People do ask that question. And again, it comes down to overcommit. If I can actually take multiple workloads and stack them up, some workloads, that's cheaper than others, and it really depends on the workload. So if you're running in the cloud, we can say, those are good where they are, and we're working on reports to say, that's better off in a Docker container in the cloud. 
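The "playing Tetris" analysis he describes, dovetailing complementary workload patterns, can be sketched with made-up numbers:

```python
# Toy version of the "Tetris" analysis: two workloads with
# complementary patterns (one busy in the morning, one at night)
# need far less capacity together than sized separately.
# All numbers are invented for illustration.

morning_job = [90, 85, 80, 10, 10, 10]  # CPU% by time slot
night_job   = [10, 10, 15, 85, 90, 80]

# Sized separately, each needs capacity for its own peak.
separate = max(morning_job) + max(night_job)                   # 180

# Dovetailed on one host, we only need the peak of the *sum*.
combined = max(a + b for a, b in zip(morning_job, night_job))  # 100

print(f"separate: {separate}%, combined: {combined}%")
```

The peak of the sum is smaller than the sum of the peaks, which is where the density gains come from.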
Or, as you say, what if I put them back on-prem? You can do that too. Now, that's kind of regressive. We don't see a lot of people... they're curious about that number, but we don't see people moving backwards. But in the cloud there are so many different ways you can run the workloads, and there are wildly different cost structures. Again, if you have very peaky workloads that are just busy for a short period of time, to put them in a standard large instance is very expensive. >> So let's talk about that cloud Tetris in the cloud. Because with containers, obviously, I can now oversubscribe a bit, 'cause I couldn't do that before. You know, if I had an EC2 instance, an M2 large, if I was too big, I'm just too big, and I can't oversubscribe that, and I pay for what I use; there's advantages and disadvantages. How does that impact the conversations, the design conversations customers are having, over stuff like we heard, EKS this morning, Fargate, and all these container orchestration management tools. How does that complicate the conversation? >> Yeah, it's interesting, because you have a workload that's not doing very much, but you can't turn it off, it's doing something, and then it peaks once a day, and you put it in a large M4 or whatever, you're gonna pay for the area, you're gonna pay for it all even though you're only doing a little bit of work, right? So that can be very expensive. That same type of workload, if you put it in a container with other ones that have a similar pattern but at different times, they all stack and aggregate nicely. So that's where we see a huge opportunity to run much higher density and lower cost, but the big challenge we're seeing, that affects all the technologies introduced this morning, is that from a development group, they don't know how to say what those containers need. >> Right. 
>> So we're seeing a lot of Kubernetes environments, a lot of ECS environments, that are running very low utilization, 'cause the developers are asking for two CPUs for each container, and it scales based on the number of CPUs or the amount of memory, the resources, not the utilization. So we're finding it's like history repeating itself: I've got the scale group running on containers, very low utilization. And we come in and say, well, wait a minute now, that's all wrong; if you do this, and our analytics give you a bunch of very prescriptive actions, you're gonna run much higher utilization, your bill goes way down again. So it's the same, you know, lack of visibility, lack of analytics to figure out how to optimize that equation. >> Andrew, one of the biggest challenges coming to a show like this is things change so fast. In the last year or two, we heard a lot of grumbling from customers about, oh, reserved instances kinda lock me in, it was inflexible, Google was better; wait, Amazon changed what they're doing. This year, a lot of new things on spot instances. The spot market's been around for years, but didn't seem to have a lot of utilization; Amazon was like, no, this is gonna be it: if you don't need to have it now, we're gonna save you 90% if you do that. So you help with that. What are you seeing and hearing from customers, and how do they take advantage and, you know, not get locked into some huge bill? >> Yeah, absolutely. Well, in general, we find that the pace of change is fantastic for everybody, if you know how to figure it out. So what we find in customers is that just keeping track of it... you know, we have a lot of customers that haven't seen this keynote this morning. They may not be aware of all this stuff yet. And even if you are aware, do you know how it impacts you? You know, can I actually leverage that M5, how does that affect me? So that's one of the things, the strength of us is that we deliver a service, not a product, not a tool. 
So it's analytics, SaaS-hosted analytics, very powerful analytics, with a densification advisor, a human that comes with it. So we're on top of these things. So when new stuff comes out, for example, we're sending a message out to our customers right now saying, we're on it, there are three new instance types that came out this morning, we're analyzing your environment to tell you if you can use them. So what we're seeing is kind of a conversion, to say that customers can't figure it all out anymore. It used to be that in the old world I could buy gear once every three years; I could understand that gear, and I could understand my apps. Now you have to pick: do you want to follow all the news in the cloud, or do you want to work on your apps? And what we do is say, work on your business services, your differentiation, we will tell you how it maps to whatever Amazon's selling today, and don't worry about that; our advisors just do that for you. >> Yeah, Andrew, I laugh, I think how much of my career was it like, oh well, we're managing that on a spreadsheet and we... (mumbles) >> No, not anymore. >> Forget about it. I'm curious, the bare metal instance is one there's been a little bit of buzz about. It's what they designed for the VMware on AWS environment and offered it, but it's a big honking machine, it's super-expensive. But I have to think, if you're working with them, you could probably help customers optimize, get great utilization. What do you see that being used for? Is that something that you think your customers are gonna be interested in? >> Absolutely, I think it goes back to that playing Tetris discussion. So if you take one of these big bare metal nodes and run Docker containers stacked properly, we see that, in one study we did, it's 82% cheaper than if you put them in small, medium and large instances. >> Stu: Wow. 
So there is a place for these big monstrous machines, like the X1 32xlarge, all of those things. If you use them right, you can save a ton of money 'cause you get economies of scale. So bare metal is great, it's just another way to host things that, for certain apps, certain workloads, makes a lot of sense. For other ones it makes no sense. >> So, we're at a conference full of developers. They don't care about infrastructure, more or less. And the Tetris works well when we're talking about containers, EC2-sized instances, even bare metal. What about concepts such as serverless, in which we're just running code? Obviously, we can't make every application based on microservices, and it's not practical to take Lambda and build an entire stack. However, there's obviously some opportunity for some really incredible savings if we choose Lambda for certain functions. How do you guys help customers make that determination? >> So I mean, Lambda's very interesting because there is a break-even point. If I'm charged for every hundred milliseconds of what I run, and it doesn't happen very often, that's a much better way than running an instance. Now once that gets beyond a certain point, it might be cheaper to actually just run it on an instance. If you have a constant workload that's taking up many servers' worth of capacity, there's a break-even point where it'll become more expensive to run that, 'cause again, you're paying by the hundred milliseconds for the resources that you're being allocated. So if you can run that workload with other workloads and get economies of scale, it might be cheaper. So if we picture Lambda, it's almost like the area under the curve. What work am I doing in my app or my service? We turn that into time slices, and we use benchmarks to normalize everything, so we understand that that running there is the equivalent of that running over here. So that workload would require this many time slices, and cost you this much.
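The break-even arithmetic he's describing can be sketched roughly as follows. All prices here are illustrative placeholders, not actual AWS rates, and this is not Densify's model, just the shape of the comparison: per-invocation pricing wins at low volume, an always-on instance wins once the workload is constant.

```python
# Rough break-even sketch between per-invocation (Lambda-style) pricing and
# an always-on instance. Every rate below is a made-up placeholder.

GB_SECOND_PRICE = 0.0000166667   # $ per GB-second of function runtime
REQUEST_PRICE   = 0.0000002      # $ per invocation
INSTANCE_HOURLY = 0.10           # $ per hour for a comparable instance
HOURS_PER_MONTH = 730

def lambda_monthly_cost(invocations_per_month, duration_s, memory_gb):
    """Monthly cost under per-invocation pricing."""
    compute = invocations_per_month * duration_s * memory_gb * GB_SECOND_PRICE
    requests = invocations_per_month * REQUEST_PRICE
    return compute + requests

def instance_monthly_cost():
    """Monthly cost of leaving one instance running."""
    return INSTANCE_HOURLY * HOURS_PER_MONTH

# A rarely-invoked function: per-invocation pricing wins easily.
low = lambda_monthly_cost(100_000, duration_s=0.2, memory_gb=0.5)

# A constantly-busy function: the always-on instance becomes cheaper.
high = lambda_monthly_cost(50_000_000, duration_s=0.2, memory_gb=0.5)

print(f"instance:           ${instance_monthly_cost():.2f}/mo")
print(f"low-volume lambda:  ${low:.2f}/mo")
print(f"high-volume lambda: ${high:.2f}/mo")
```

The crossover point moves with duration and memory, which is why the "area under the curve" framing matters: total work done, not just invocation count, drives the comparison.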
And so that's something we're working on. It's not released yet, it's kinda coming, but we see that as being able to analyze the equivalency between different models. You know, that workload, very expensive in Lambda; that other one, perfect candidate. >> Andrew, while you're there, bring us inside a little bit. What's it like to be a partner in the AWS ecosystem? You know, do any announcements surprise you, do you guys get some preview on this, how fast can you ramp your team up to take advantage of the umpteen billion new announcements, 1,300 announcements a year? So, you know, take us inside how that works for you, and then how you help your customers take advantage of that. >> Sure, yeah, so I mean, we stay pretty plugged in. We are partnered with all the major providers, of course. And in fact, Amazon provides the API to get all the latest and greatest stuff; if you're constantly hitting that, you get all the latest stuff anyway. So that pace is just built into what we do. You know, we had a customer say, hey, you guys really, while we were evaluating your software, did a lot of things to show up with the latest stuff. Is that just you selling us, and you'll stop? We said, no, that's just the way we operate. That is the new model: everything is just up to date all the time. For customers long term, you are getting new stuff every day, and we're sending out the notes. So we try and stay on top, and again, the key here is interpreting what that means for you. To know that there's a new M5 is one thing. To know that it's got a different hypervisor, and you might need to rebuild your AMIs to run on it, is something where we can say, okay, we're gonna help you with this and help interpret this delta. And I think that's very important, because Amazon has so many priorities with all the breadth and everything else that they're doing.
They're not really focused on helping you shake your bill down. They're just not doing that to date. And so I think this fills a very important role in the ecosystem. >> Andrew, I want to give you the final word. What are the things your customers are doing that are really helping to transform their businesses, and the biggest challenges and opportunities they're seeing? >> Well, I mean, clearly I think containers are gonna be a very big part. I know we talked about them a lot already, but I think that's one of the most exciting areas. I think everybody's moving to the cloud or starting to leverage it, and doing it in various ways, and really Kubernetes is a big part of that. You know, we saw different options right there, really centered on Kubernetes; everybody's doing that, and I think that's really gonna be the transformative technology. You can host, you know, born-in-the-cloud microservices, and you can host legacy apps in it. You can get to a really high level of efficiency all in the cloud with that one technology. I think it's a real game changer and just needs to roll through these environments. >> All right, Andrew Hillier, always a pleasure to catch up. Thank you for giving us all the updates on Densify. For Keith Townsend, I'm Stu Miniman. You're watching theCUBE.

Published Date : Nov 29 2017


Andrew Hillier, Densify | VMworld 2017


 

>> Announcer: Live from Las Vegas, it's theCUBE! Covering VMworld 2017, brought to you by VMware and its ecosystem partners. >> I'm Stu Miniman, here with my co-host John Troyer, and you're watching theCUBE, SiliconANGLE Media's production of VMworld 2017. We're the worldwide leader in live tech coverage. Happy to welcome to the program not only a first-time guest but a first-timer for the company, Andrew Hillier, who is the CTO and co-founder of Densify. Not only is it the first time we've had Densify, we didn't even have Cirba on, so I'm not sure what the problem was, but appreciate you joining us, and looking forward to learning about you and the company. >> Glad to be here, it's good. >> All right, Andrew, tell us a little bit about it. You're a co-founder, so bring us back to the early days, what the idea was, and then there's some rebranding recently, so I know that's relevant to the conversation. >> Sure, I'll tell you the story. So, we're all about analytics. We started off by looking at all the data that's available and saying, if you really do the math on it, you can make a lot of very important decisions and not leave them to opinion or chance. So we built out a very powerful analytics engine; a lot of big customers adopted it, running on-prem, driving huge savings in virtual environments, de-risking. And what we found is that everybody's interested in those outcomes of the analytics, but not necessarily wanting to adopt software products. I mean, it's kind of the basis of all SaaS. So we went and made a SaaS version of that product; it's like a brain in the cloud, to give the same outcomes, and we've now really taken that to the extreme, where it's as-a-service. And it's called Densify; we rebranded around that in the June timeframe to really capture the simplicity and the outcome of what we do, which is to drive down cloud costs, drive down the amount of infrastructure you need on-prem, and make it all work better.
>> Yeah, I'm wondering if you could give us just a little, from a macro standpoint, on software and the different consumption models you just walked through. What are customers looking for, why has it been challenging before, and do we have it right this time? >> Yeah, well, from our perspective, traditionally in the past you would have to deploy the product, you would have to provision servers to run it on, a database server, train people, you know, maybe have a center of excellence around using it, and that's worked really well. But I think the novelty of running software has worn off for most organizations. They want to move on, and we see the cloud being adopted. People just want to get out of the business of running anything, really, and have it all done for them. And so we support the on-prem model, and SaaS as well as on-prem, but really, this new model is where everybody's going, because it's just so simple. It means you can just adopt it and get results right away without reading any manuals or doing anything. >> Andrew, we've been talking about cloud for years now, right? It was almost a joke; it's much more real now. Your customers and the people you talk with, hybrid cloud, multicloud, we have a choice of many different platforms. On-prem is not going away anytime soon, at least I don't think so, but I'd love your opinion on that. Your customer base, the people you talk with, how many platforms are they on, what kind of platforms, and how does Densify pull all that together? >> Yeah, it's funny, because it's a bit of everything, and that's IT, right? You always have one of everything you've ever had, plus all the new stuff. So we support these huge virtual footprints out there; a lot of companies have big VMware environments. But there's definitely a big focus on the cloud.
So almost every customer we have is looking at it in some form; they really see that as the future: the cloud, containers, some mix of on- and off-prem. So I think it's going to be hybrid for quite some time. I don't think you're going to see on-prem go away, that would just be unrealistic, but again, a lot of energy is being put into the public cloud, and it shows. So, you know, one's almost in maintain mode in some cases, and one's kind of the invest mode, where we're investing in new technology and where a lot of the excitement is. So even our most conservative customers are looking at cloud in some way, and some of our newer customers are 100% cloud, there's no on-prem. >> Andrew, talk to us about the relationship with VMware that you've had and have today. And I guess one of the questions: VMware announced like seven SaaS services, one of which was Cost Insight. Does that compete at all against what you are doing? >> Well, it's a hugely complicated space, with a lot of different things, and a lot of the same words used for all the same things. So we have a very good relationship with VMware. We integrate with the whole product line, vRA, vROps, DRS, Predictive DRS, we have integrations with all these things, and it works with that. But I think there's some confusion sometimes around everybody using the same words, like, we optimize, or we do this or that. What we find is that the core of what we do is analyzing workload patterns. And it's like playing a game of Tetris. It's like saying, that workload's busy in the morning, that one's busy at night, we combine them together, and we get a lot of efficiency. And nothing in the VMware product line does that. So it really plugs in very nicely with DRS, and again, vROps, but there is confusion in the words people use. You might think that this does that, and there's some cost angle, you know, there's a lot of products that do cloud costs.
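The Tetris idea he describes, workloads that peak at different times dovetailing on shared infrastructure, can be shown in a few lines. The hourly profiles below are invented for illustration; this is the general bin-packing intuition, not Densify's actual algorithm.

```python
# Toy illustration of workload "Tetris": two workloads that peak at
# different hours need less capacity combined than hosted separately.
# Hourly demand profiles (in arbitrary capacity units) are invented.

morning_app = [8 if 8 <= h < 12 else 2 for h in range(24)]   # busy mornings
night_batch = [8 if 0 <= h < 4 else 1 for h in range(24)]    # busy overnight

# Hosted separately, each workload must be sized for its own peak.
separate_capacity = max(morning_app) + max(night_batch)

# Hosted together, the host only needs to cover the peak of the *sum*.
combined_capacity = max(a + b for a, b in zip(morning_app, night_batch))

print(f"separate hosts need {separate_capacity} units of capacity")
print(f"one shared host needs {combined_capacity} units")
```

The gap between the two numbers is exactly the efficiency he's pointing at: the shapes of the workloads, not just their totals, determine how tightly they pack.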
Every product that starts with the word Cloud does cloud cost, but that's not really where you get the cost savings; really analyzing the workloads in the cloud is where you get those cost savings. >> Yeah, I'm curious, you must have a really good view as to utilization. I think back, there's a lot of argument as to how much utilization we're actually getting, 'cause with VMware in the early days, it was like, oh, I'll consolidate servers, I'll get greater utilization. But we still kind of stink at utilization, even in the cloud today. I've seen lots of companies where you could take a huge amount of cost out of what they're doing. So how are customers doing, what are they good at, what do they suck at, and where are some of the things you're helping with really well? >> Well, I mean, you struck a nerve there, because people are doing a terrible job in the cloud, and often, a lot of times, they throw things up there and they don't even really look at what they're doing. It's kind of primitive in terms of the data collection and the tooling around that. So a lot of times these people don't even know what the workload is or what the utilization is. So we see some pretty big opportunity to carve that down. On-prem, I think people have gotten better. When they run our product, it's designed to get at the optimal utilization, and that might be 90%, it might be 50%, it might be 30, depending on your requirements. If you have a mission-critical environment that is active-active and redundant, with all these things going on, then maybe your utilization won't be very high, but that's as high as you can make it and still meet all your obligations. In a test environment, you can run it a lot higher. So there is no one right answer for what the best utilization is; it depends on your workloads and what the environment's supposed to be doing. But universally in the cloud, we find it's just terrible.
'Cause they rush things into the cloud without having all the maturity around it to figure out how to optimize it. >> Right, Andrew, does that mean then the common mistake is under-utilization? Are people just running a lot of instances without actually knowing what's running in them, or what it's costing them? >> Yep, there's underutilized, and there's deadwood, for starters. And that's kind of a different problem. It's not that they don't know what they're doing; somebody forgot them, so there's no process around that, no ITSM process to turn these things off, and we find a lot of that. And there's the stuff that's not utilized very well at all, that you could just be running better, 'cause somebody set it to extra large and they never revisited it. The last thing that we do, and we find this quite a lot lately, is what we call modernization. So, you look okay, but you're on an old instance. There's a newer one that's a lot cheaper. You know, you're on an R3, you could be on an R4 on Amazon, and we find a ton of those. And it's because people deployed an app six months ago, a year ago, and it looked great, and it still looks great, but they don't have the ability to analyze and use benchmarks to say, I have a new instance that's as powerful as that one, that's cheaper. You need the benchmarks. >> That's something that really doesn't happen when you have hardware, right? It's not like the server vendor calls you up and says, I have a new version I can swap out if you just tell me. >> Yeah, I mean, in the cloud I give the analogy of a cell phone company. They don't phone you and tell you that they have a new plan that'll be cheaper for you. You've got to kind of do that on your own. And so we do that for customers; it's one of the things that we do, and we kind of do it for you.
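The modernization check he's describing, flagging instances that have a newer, cheaper equivalent, reduces to a lookup over a price catalog. A minimal sketch: the family mappings and hourly prices below are placeholders, not real AWS price-list data, and this is not Densify's implementation.

```python
# Sketch of a "modernization" pass: flag fleet members with a newer,
# cheaper successor. Mappings and prices are hypothetical placeholders.

NEWER_EQUIVALENT = {"r3.large": "r4.large", "m3.large": "m4.large"}
HOURLY_PRICE = {"r3.large": 0.166, "r4.large": 0.133,
                "m3.large": 0.133, "m4.large": 0.100}
HOURS_PER_MONTH = 730

def modernization_candidates(fleet):
    """Yield (old_type, new_type, monthly_saving) for cheaper successors."""
    for instance_type in fleet:
        newer = NEWER_EQUIVALENT.get(instance_type)
        if newer and HOURLY_PRICE[newer] < HOURLY_PRICE[instance_type]:
            saving = (HOURLY_PRICE[instance_type] - HOURLY_PRICE[newer])
            yield instance_type, newer, round(saving * HOURS_PER_MONTH, 2)

fleet = ["r3.large", "m4.large", "m3.large"]
for old, new, saving in modernization_candidates(fleet):
    print(f"{old} -> {new}: save ~${saving}/month")
```

The point he adds about benchmarks matters here: a real version of this also has to verify the successor is at least as powerful, which is where normalized performance data comes in.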
So we just tell you: make this move, it's a lateral move to a new instance, and save a ton of money. Customers are just too busy to become experts and read the news every morning to figure out if new instances have come up; it's just too much, right? >> But wait, they haven't seen these 17 announcements that Amazon had today that might have affected them? Does your tool make the change, recommend the change, how does that kind of workflow work? >> Yeah, it depends on the platform how it works, but we have a very high degree of automation that we enable, and there's a few reasons. One is that the analysis is so precise that when it says do this, you can just do that. So, for example, on-prem, if we say move the VM, we know it's not supposed to go with those other ones for PCI compliance. We know that it won't drive up the over-commit; there are a lot of terms in our equation, you know. That means it's very precise. And that means when we say to do something, you can just do it, so you drive a very high level of automation as a result. >> What kind of granularity? Is this happening minute-by-minute or hour-by-hour, or day-by-day, or? >> Well, there's two levels of granularity. There's predictive and there's real time. So one of the main things we do is gather all the workload history, learn the patterns of it, and come up with a strategic plan saying, for tomorrow, do this, put the VMs in these places. And then leave DRS turned on; it'll do its thing, but it won't do very much, because we've anticipated all the workload patterns. So a lot of times we will do the kind of daily optimization, and DRS and vROps can do their things all day long; they just don't do as much. But we also have real time, so if we see something getting hot, we can do a hot add on it as well. So we have the combination of both predictive and reactive at the same time. >> Okay.
How do you handle your pricing of the solution? I've heard some offerings out there where it's like, oh, we're going to save you millions and we're just going to take a fraction of that. Is it that, or are you more traditional licensing, how does that all work? >> It's funny, the gainshare, we found, is very hard to structure. It sounds great until you actually make the contracts for it. What we do for on-prem is price by target, and that's a physical or virtual system, and that's worked really well. That's the way a lot of our customers go. In the cloud, that doesn't work, because an instance could be anything from a tiny Docker container to a giant X1, so it's a percentage of spending. That's kind of what a lot of vendors have settled on in the cloud world. But we don't make it infinitely variable. We know people want predictability, so we say, you're in this band, it's just going to cost this, and you can do whatever you want.
And in fact, next month's bill is the biggest thing. You know, on-prem is sunk cost; you can optimize it a lot, but until the next refresh you may not realize the gains. But in the cloud, next month's bill will actually be smaller. So we find there's a lot of urgency to do it in the cloud. >> Um, I'm curious, what have you seen from customers these days between their on-premises environment and the public cloud? One thing that struck me for years is, you know, if I bought gear and I'm not getting the results, the utilization, out of it, that kind of got a lot of attention. When I go see the public cloud, there's plenty of customers like, oh, you know what, I was overspending 3x more than I expected, haha, I guess I'll fix it later. And I was like, wait, if you were buying hardware, you would have fired somebody and, like, beat up your sales rep and things like that, but public cloud seems to be less mature from that standpoint. Are you seeing that changing, or what are you seeing from customers? >> Yeah, I think there is a realization. There's that kind of sticker shock for these people, where it is kind of three times more than they thought it would be, but to your point, a lot of times it's also not really anybody's problem. So we do see that becoming someone's problem. We see more cloud architect roles that are about financial optimization in the cloud, so people do care. I think that's a very positive thing. When a lot of DevOps groups start using Amazon for the first time, it's a bit of a Wild West; they get agility, but nobody's really looking over their shoulder. I think that's starting to change pretty quickly. >> Yeah, I wonder. One of the problems I've heard, I've talked to plenty of customers that are like, I have to dedicate an engineer to pricing when it comes to the cloud. Do you solve that? Do they still need to have, like, a dedicated person, or part of a person, or is that part of the value that you offer?
>> Well, and that's a good question. It depends on the customer's size, I think. So we see really small organizations, and again, the beauty of our new offering is that we can go to really small companies or really huge companies. We have customers with a hundred thousand systems, and some with 500. The smaller ones may not have a big team, so they may not have those roles. So for some of our smallest ones, we're just that role for them. They don't have a person dedicated to that kind of thing; they just rely on our advisors, 'cause we actually have a human advisor as part of our service who gives you advice or insight into what's happening. So, for the small ones, that can be that person. For larger companies, like the big banks that are customers of ours, we kind of become one of the team. They probably still have people with lots of expertise, but maybe they don't need to rely on it so much, or maybe they don't need it at all; it's more like we're someone who makes their life easier. So they can go on and focus on what they should be doing, which is not looking at cloud pricing every morning. >> Nice, I see that more and more, right, that it's a service you're delivering. It's not just bits and bytes, it's customer success, and you have people there that can help. This stuff is crazy complicated, especially if you are, say, a VMware admin just getting into Amazon. The pricing, like we just said, the pricing is very complicated. So, can you talk a little bit about the admin standpoint, the vRealize integration and some things like that? Or is there an admin-facing piece, and then I suppose there's a cost-facing piece, or? >> Yeah, I think there's several ways it can be used. You can use us almost like middleware, and the admin doesn't necessarily need to interact.
For some of our customers, we run as an engine that just sits there, getting data, analyzing it, and making changes. You're still using vRA for blueprints, and vRO, and that kind of comes through us, but it's kind of behind the scenes. So it's a nice use case, because it just adds value without making anybody's life more difficult. We do have consoles that are very powerful, so if you're a capacity manager or a data scientist or a cloud architect, you can actually start logging in and seeing workload curves and stuff. So we have some use cases where our interface is used quite heavily, and some where it kind of sits behind the scenes. And so for administrators, again, it tends to make your life better without making it worse. You know, they're really busy as well, and they don't have to spend the time looking at that, so. >> If you have a big investment in vRealize, right? That's great, it just sits behind the scenes. Tools you already know.
Some of our newer customers are 100% in the cloud, so that's kind of more because it's a newer offering, and Densify's quite new, I think that's a smaller number right now. But as far as what we're chasing down, it's big. It's very large portion of it. So I think it's really where we see where things are going. Again, it's, we usually do both, but the cloud stuff really captures our imagination. That's what they want to be doing. >> Yeah, any commentary on the VMware on AWS, you know, stuff that we've heard so far? >> Well I mean, I think it's cool, it's great. It's another option. What I like about is that what we find is when we analyze, there's technologies that over-commit and ones that don't. So I could take a workload and put it in the VMware environment and over-commit it, and the patterns match up, and get efficiency. If I put it in Amazon in like a large instance, I might be wasting my money 'cause I'm not using the whole instance, so. And I can't run a hypervisor in one of those. What we found is that for certain transactional applications it's much better to stack them together. For like batch workloads, it's better to run them in, rent a large for an hour. So I think it's a great offering, because for certain workloads it is quite efficient. For other workloads, it's not. And we have, you know, we're showing you here today, the ability to analyze and compare the two, saying if you took that app and put it in the new VMware on Amazon versus standard small, medium, and large, what's the cost difference? It's a cool analysis, 'cause it's different for each app, right? >> If I saw right, there are free trials available on your site. Is there anybody that ever comes up, tries your stuff, and then doesn't have something that saves them money? >> No, we have a very good success when you try it because it's a, partly because it's so easy. 
It's just, it's 15 minutes, you pull down a connector for VMware and you plug it into vCenter or vROps, and the data goes up and then we just do it all for you. And it'll always find something you didn't know or some savings, or some hidden risks in there. Usually a lot of savings, hardware savings or software savings. We will optimize the software licensing. And in the cloud it uncovers all kinds of stuff. We see all kinds of crazy stuff, utilization's very low, so it's a, yeah. >> I've run across people that do this similar sorts of thing, at least at a high level on the virtualized side and on the cloud side. I'm not sure that I've seen anybody that does it at both, is that one of your differentiators, how do you line up, what's the competitive landscape look like? >> Yeah, doing both is a big part. I think on each individual, we also do it much deeper. So like I said, in the virtual environments our ability to play Tetris with the workloads, there's nothing else really like it. We put a lot of R and D on that, and in the cloud there's a lot of focus on the cost. But not necessarily digging deeper into what's the cause of that cost? Or your Kubernetes environment, the utilization of those nodes, it requires deeper analytics than a lot of vendors actually have, so. >> Do you give any advice as to them saying I'm trying to decide if I want to do it on-premises or in the cloud, do you give any guidance that way? >> I don't think there's any standard answer. We don't try and take sides, like, the data talks. And it's not, in my opinion, it's not an area for opinion, it's just that numbers will tell you what's best for your app and everybody's different. >> You were talking certain, you know, I've got this batch application, oh well heck, I can run this in, you know, some extra large thing in the cloud, and therefore it would cost me this versus standing up some server farm. 
>> Yeah, and what we find is that the only real trick is that, absolutely, if you have something that's live for 12 hours and then off for a week, renting an instance for 12 hours is the way to go. But the other consideration, it goes back to one of your earlier questions, is multicloud, and how many providers do you want? 'Cause we'll analyze the environment and that app might be cheaper in Azure and that one might be cheaper in Google, but you're not going to put each app in each, so you're going to choose one or two and kind of send them all there. So, the analytics understand that as well. They're saying, well, you're not going to spread this stuff everywhere, we're going to find the best overall answer for your portfolio of workloads. And that's an important thing. >> Okay, so last question for ya. The virtualization admins out there, is there anything that they're still doing kind of very wrong that would make their environment more efficient? >> Well I think, I mean, it's funny that we still see an awful lot of spreadsheets out there. It's funny when people try and do the numbers, like to figure out where to put a new app. And they'll still kind of figure that out in a very rudimentary way, when again, science will tell you that. So you can make that happen automatically. So, there's still certain things people are doing manually that don't need to be done manually anymore, and it maybe it's their comfort zone. Maybe sometimes maybe it's other groups. But I think, again, our focus is saying that's great, let's take your policies and your rules, we'll just embed them, encode them, codify them, and then you can move on to better things than updating a spreadsheet or generating reports to send to your team every, you know, like, it's, we have very powerful reporting, so you can just make that happen automatically to people. 
And so, it's getting out of those kinds of tasks that people have done for years and moving up the value chain and saying, now I'm going to focus on cloud, or on VSAN, or whatever it is people want to be doing next. >> All right, Andrew Hillier, appreciate you giving us all the updates on your company, and we look forward to hearing more in the future. John Troyer and I will be back with lots more coverage here from VMworld 2017. You're watching theCUBE. (cheerful music)
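The multicloud point Andrew makes above — each app might be cheapest on a different provider, but you won't spread workloads everywhere, so the analytics must find the best overall answer for the portfolio — can be sketched in a few lines. The apps, providers, and prices below are made up for illustration; the real analysis would obviously weigh far more than a single monthly cost figure.

```python
# Illustrative sketch of the portfolio constraint: pick the single provider
# with the lowest *total* cost, not the cheapest provider per individual app.

monthly_cost = {
    "app1": {"aws": 120, "azure": 100, "gcp": 130},
    "app2": {"aws": 300, "azure": 340, "gcp": 310},
    "app3": {"aws": 80,  "azure": 90,  "gcp": 70},
}

providers = {p for costs in monthly_cost.values() for p in costs}
totals = {p: sum(costs[p] for costs in monthly_cost.values())
          for p in providers}
best = min(totals, key=totals.get)
print(best, totals[best])  # aws 500
```

Note the distinction: the per-app winners here are Azure, AWS, and GCP respectively, summing to a theoretical 470, but since the apps must land together, the best single-provider answer is AWS at 500. That gap is exactly why portfolio-level analytics differ from per-app price comparisons.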

Published Date : Aug 28 2017

SUMMARY :

Covering VMworld 2017, brought to you by VMware. Densify co-founder and CTO Andrew Hillier discusses cost and resource analytics for virtual environments and the cloud: densifying workloads by "playing Tetris" with placements, digging into the causes of cloud cost rather than just the bill, choosing providers at the portfolio level rather than per app, and replacing manual spreadsheet-based capacity planning with automated analytics and reporting.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
John Troyer | PERSON | 0.99+
Andrew Hillier | PERSON | 0.99+
Andrew | PERSON | 0.99+
Amazon | ORGANIZATION | 0.99+
12 hours | QUANTITY | 0.99+
15 minutes | QUANTITY | 0.99+
35 | QUANTITY | 0.99+
Stu Miniman | PERSON | 0.99+
15% | QUANTITY | 0.99+
100% | QUANTITY | 0.99+
3x | QUANTITY | 0.99+
Densify | ORGANIZATION | 0.99+
50% | QUANTITY | 0.99+
17 announcements | QUANTITY | 0.99+
90% | QUANTITY | 0.99+
VMware | ORGANIZATION | 0.99+
two | QUANTITY | 0.99+
30 | QUANTITY | 0.99+
AWS | ORGANIZATION | 0.99+
a year ago | DATE | 0.99+
a week | QUANTITY | 0.99+
six months ago | DATE | 0.99+
both | QUANTITY | 0.99+
one | QUANTITY | 0.99+
40% | QUANTITY | 0.99+
three times | QUANTITY | 0.99+
One | QUANTITY | 0.99+
each | QUANTITY | 0.99+
next month | DATE | 0.99+
500 | QUANTITY | 0.99+
Las Vegas | LOCATION | 0.99+
each app | QUANTITY | 0.99+
SiliconANGLE Media | ORGANIZATION | 0.98+
Tetris | TITLE | 0.98+
June | DATE | 0.98+
today | DATE | 0.98+
two levels | QUANTITY | 0.98+
tomorrow | DATE | 0.98+
first-time | QUANTITY | 0.97+
first time | QUANTITY | 0.97+
millions | QUANTITY | 0.97+
DRS | ORGANIZATION | 0.97+
theCUBE | ORGANIZATION | 0.96+
Azure | TITLE | 0.95+
Google | ORGANIZATION | 0.95+
One thing | QUANTITY | 0.94+
each individual | QUANTITY | 0.93+
Serva | ORGANIZATION | 0.93+
an hour | QUANTITY | 0.92+
vMotion | TITLE | 0.9+
VMworld 2017 | EVENT | 0.89+
Kubernetes | PERSON | 0.85+
seven SaaS services | QUANTITY | 0.83+
VSAN | ORGANIZATION | 0.82+
Cost Insight | ORGANIZATION | 0.8+
vRealize | TITLE | 0.8+