

Amir Sharif, Opsani | CUBE Conversation


 

>> Hello and welcome to this special CUBE Conversation here in Palo Alto. I'm John Furrier, host of theCUBE. We're here talking about Kubernetes, cloud native, and all things cloud and cloud enterprise. Amir Sharif, VP of Product at Opsani, is with me, and we're great to have you on theCUBE. Thanks for coming on; I appreciate you taking the time.

>> I appreciate it, John. Good to be here.

>> You know, cloud native is obviously super hot right now as the edge is around the corner. You're seeing people looking at 5G, looking at Amazon's Wavelength and Outposts; you've got a lot of cloud companies really pushing distributed computing. And I think one of the things people are really getting into is, okay, how do I take the cloud and refactor my business? That's the business side, and then there's the technical side: okay, how do I do it? It's not that easy, right? It sounds really easy to just move to the cloud, but this is something that's been a big problem. So I know you guys are in the center of all this, and you've got microservices and Kubernetes at the core of it. Take a minute to introduce the company and what you guys do, and then I want to get into some specific questions.

>> Of course. Well, Opsani is a startup, a Silicon Valley startup, and what we do is automate system configuration. That's typically work that an engineer does; it's lengthy and, if done incorrectly, it leads to a lot of errors, cost overruns, and user experience problems. We completely automate that using an AI and ML back end, so that engineering can focus on writing code and not worry about having to tune all the little pieces working together.

>> You know, I was talking to a really prominent VC on our last cloud startup showcase, and he was talking about down-stack and up-stack benefits. He said if you're going to be a down-stack provider, you've got to solve a problem, and it has to be a big problem that people don't want to deal with. So you start getting into systems configuration, and when you have automation at the center of this as a table-stakes item, problems are cropping up as new use cases emerge. Can you talk about some of the problems that you see and solve for developers and companies?

>> Of course. The problem expresses itself in a number of domains. The first one is that he who pays the bills is separate from he who consumes the resources. It's the engineers that consume the resources, and their incentives are to deliver code rapidly and deliver code that works well, but they don't really care about paying the bills. Then the CFO's office sees the bills, and there's a disparity between the two. The reason that creates a business problem is that the developers will over-provision things to make sure everything works, because they don't want to get called in the middle of the night. The bill comes due at the end of the month or the end of the quarter, and then the CFO has smoke coming out of his ears because there have been cloud overruns. Then the reaction happens: all right, let's cut costs. An edict comes down that says reduce everything by 30%, so people go across the board and give everything a haircut. So what happens next? The systems are out of balance, there's resource misallocation, and systems start suffering. So the customers become unhappy.
And ironically, or maybe understandably, if you're not provisioned correctly, customers start suffering, and that leads to a revenue problem down the line if you have too many unhappy customers. So you have to be very careful about how you cut costs and how you apportion resources, so that both the revenue side and the cost side are happy, because it all comes down to product experience and what the customers consume.

>> You know, that's something that everyone who's done cloud development knows. Whose fault is it? You know, it's always somebody's fault. But now you can actually see the services: you leave a switch on, or, I'm oversimplifying it, but you experiment with services and the bills can just have massive overruns. Then you've got to call the cloud company, and you've got to call the engineers and ask why they did this, and you've got to get a refund, or one bad apple can ruin it for everyone, as you highlighted with the bigger companies. So I have to ask you, because everyone lives this: how do companies get cost overruns? Are there patterns that you see, that you wrote software for, to automate the obvious ones? Are there certain things that always happen, areas that give some indication? First of all, why do companies have cloud cost overruns?

>> That's a great question, and let's start with a bit of history. We came from a pre-cloud world where you built your own data centers, which meant you had an upfront capex cost, you spent the money, and you were forced to live within the capacity that your data center provided. You really couldn't spend any more. That provided a kind of predictable expenditure model: it came in big chunks, but you knew what your budget was going to be three or four years from now, and you built for that. With cloud computing, your consumption is now on an on-demand basis and it's API-enabled, so the developer can just ask for more resources. Without any kind of tool that tells the developer, here is the amount of CPU or memory that you need for this particular service to deliver the right performance for the customer, the developer is incentivized to give it a lot more than the application needs. Why? Because the developer doesn't want to pick up service tickets. He's incentivized to deliver functionality quickly and move on to the next project, not to optimize costs. So that creates kind of an agency problem: the person that actually controls how resources are consumed is not incentivized to control the consumption of those resources. We see that across the board in every company; the engineering organization is separate from the financial organization, so the control point is different from the consumption point, and it breaks down. The pattern is over-provisioning. What we want to do is give engineers the tools to consume precisely the right amount of resources for the service level objectives they have: given that you want a transaction rate of X and a latency of Y, here's how you configure your cloud infrastructure so the application delivers according to the SLOs with the least possible resources consumed.
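As a rough illustration of that kind of SLO-driven sizing, here is a small back-of-the-envelope sketch (my own example, not Opsani's algorithm; the request rates, per-replica capacity, and headroom figures are assumptions) that turns a target transaction rate into a replica count and compares it with what a team actually provisioned:

```python
import math

# Hypothetical sketch: estimate the right-sized replica count for a target
# transaction rate, then compare it with what was actually provisioned.
# Numbers are illustrative; a real system would measure per-replica capacity.

def replicas_needed(target_rps: float, per_replica_rps: float, headroom: float = 0.2) -> int:
    """Replicas required to serve target_rps while keeping `headroom` spare
    capacity so latency stays inside the SLO during normal spikes."""
    usable_rps = per_replica_rps * (1.0 - headroom)
    return max(1, math.ceil(target_rps / usable_rps))

# SLO target: 1,200 req/s; load tests show one replica sustains about
# 150 req/s before p95 latency breaches the objective.
needed = replicas_needed(target_rps=1200, per_replica_rps=150)
provisioned = 16  # what the team deployed "to be safe"

print(f"needed={needed}, provisioned={provisioned}, "
      f"surplus={provisioned - needed} replicas")
```

In practice the per-replica capacity and the headroom would come from measurement rather than guesswork, which is exactly the feedback loop the ML back end is meant to close.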
>> So on this tool you have, the software you have, how do you go to market with that? Do you target the business buyer or the developers themselves? And how do you handle the developer who says, I don't want anyone looking over my shoulder, I'm going to have a blank check to do whatever it takes? How do you roll that out? Because the business benefits are significant, controlling the budget, I get that. How do you roll this out, how do people engage with you, and what's your strategy?

>> Right. Our buyer is the application owner, the person that owns the P&L for the application. It tends to be a VP-level or senior-director person that owns a SaaS platform, and he or she is responsible for delivering a good product to the market and delivering good financial results to the CFO. Everything rolls up to that person, but that person will always favor the revenue side, which means consuming more resources than you need in order to maximize customer happiness and therefore faster growth, while sacrificing the cost side. So by giving the product owner the autonomous optimization tools that Opsani has, we allow him or her to deliver the right experience to the customer with just sufficient resources, and address both the performance and the cost side of the equation simultaneously.

>> Awesome. Can you talk about the impact CI/CD is having, in cloud native computing, on the optimization cycle? Obviously we hear a lot about shifting left for security, you're seeing a lot more microservices being spun up and spun down automatically, and Kubernetes clusters are going mainstream. You start to see a lot more dynamic activity in these new workflows. What is the impact of CI/CD and cloud native computing on the optimization cycle?

>> CI/CD is there to enable fast delivery of software features, basically. We have a combination of Git and GitOps where you can just pull down repositories, libraries, and open source projects left and right, and using glue code developers can deliver functionality really quickly. In fact, microservices are there in service of that capability: deliver functionality quickly by building functional blocks and then putting everything together through APIs. So CI/CD just accelerates software delivery. Between the time the boss says "give me an application" and the time the application team, plus the DevOps team, plus the SRE team, puts it out in production, we can now move really quickly. The problem is, though, nobody optimizes in the process. So when we deliver 1.0 in six months or less, we've done zero in terms of optimization, and 1.0 becomes the way we go through QA in many cases, unfortunately. It also becomes the way we go through optimization: the customer screams that the UI is laggy, the throughput is really slow, and we tinker and tinker and tinker, and it typically takes a 12-month cycle of maturation to get system stability and the right performance. With the AI and machine learning that Opsani has enabled, we can shrink that time down considerably. In fact, what we're going to announce at KubeCon, something we call Kite Storm, is the ability to install our product in a Kubernetes environment in roughly 20 minutes, and within two days you get the results.
So before, you had this optimization cycle that went on for a very long time; now it's shrunk down, and because of CI/CD you don't have the luxury of waiting. And the system itself can become part of the contributing system: the AI/ML service that Opsani delivers can be part and parcel of the CI/CD pipeline, so it optimizes the configuration of the code as it ships, and you're good to go.

>> So you're really getting down in there and injecting some instrumentation for metadata around key areas, is that right? Is that kind of how it's working? Are you getting in there with code that's going to watch? How does it work under the hood? Can you give me a quick example of how this would play out and what people might expect?

>> Of course. The way we optimize application performance is that we have to have a metric against which we measure performance. That metric is an SLO, a service level objective. In a Kubernetes environment we typically tap into Prometheus, which is the metrics-gathering database for Kubernetes workloads, and we really focus on RED metrics: the rate of transactions, the error rate, and the duration, or latency. We focus on those three metrics, and what we do is inject a small container, an open source container, into the application workspace. We call that container Servo. Servo interacts with Prometheus to get the metrics, and then it talks to our back end to tell the ML engine what's happening. The ML engine does its analysis and comes back with a new configuration, which Servo then implements in a canary instance. The canary instance is where we run our experiments, and we compare it against the mainline, which is what the application is actually doing. After roughly 20 generations or so, the ML engine learns what part of the problem space to focus on in order to deliver optimal results, and then it very quickly comes to the right set of solutions to try. It tries those inside the canary instance, and when it finds the optimal solution, it gives the recommendation back to the application team; or alternatively, when you have enough trust in the tuning, you can auto-promote it into the mainline.
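To make the RED-metrics and canary comparison concrete, here is a minimal sketch of that measurement loop. It is an illustration only, not Opsani's Servo code: the Prometheus URL, the metric names, and the `deployment` label are assumptions about how the application might be instrumented.

```python
# Minimal sketch of RED-metric collection for a canary-vs-mainline comparison.
# Not Opsani's Servo implementation; the Prometheus URL, metric names, and
# 'deployment' label below are assumptions for illustration.
import requests

PROM = "http://prometheus.monitoring:9090/api/v1/query"

def prom_scalar(query: str) -> float:
    """Run an instant PromQL query and return the first value (0.0 if empty)."""
    resp = requests.get(PROM, params={"query": query}, timeout=10)
    resp.raise_for_status()
    result = resp.json()["data"]["result"]
    return float(result[0]["value"][1]) if result else 0.0

def red_metrics(deployment: str) -> dict:
    """Rate (req/s), Errors (ratio), Duration (p95 seconds) for one deployment."""
    base = f'{{deployment="{deployment}"}}'
    return {
        "rate": prom_scalar(f'sum(rate(http_requests_total{base}[5m]))'),
        "errors": prom_scalar(
            f'sum(rate(http_requests_total{{deployment="{deployment}",status=~"5.."}}[5m]))'
            f' / sum(rate(http_requests_total{base}[5m]))'),
        "duration_p95": prom_scalar(
            f'histogram_quantile(0.95, '
            f'sum(rate(http_request_duration_seconds_bucket{base}[5m])) by (le))'),
    }

mainline, canary = red_metrics("app-main"), red_metrics("app-canary")
better = (canary["duration_p95"] <= mainline["duration_p95"]
          and canary["errors"] <= mainline["errors"])
print("promote canary settings" if better else "keep mainline settings")
```

In the product described above, the choice of which configuration to try next would come from the ML engine across many generations rather than a single comparison, but the measurement side of the loop looks conceptually like this.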
>> That gets the learning in there; it's a great example of some cloud native action. I want to get into some examples with your customers, but before we get there, I want to ask you, since I have you here: what does cloud native mean these days? Because cloud native has become kind of synonymous with cloud computing, which is essentially "go move to the cloud," but as people start developing in the cloud, there are real new benefits. People talk about the term cloud native; could you take a quick minute to define it? What does cloud native even mean?

>> I'll try to give you my understanding of it; we could get into a bit of philosophy. But basically, cloud native means your application is built for the cloud and takes advantage of the inherent benefits a cloud environment can give you. That means you can grow and shrink resources on the fly; if you've built your application correctly, you can scale the number of instances up and down very quickly, and everything takes advantage of APIs. Initially that was done inside a VM environment; AWS EC2 is a perfect example of that. Kubernetes shifted cloud native to containerized workloads, because it allows for more rapid deployment and takes advantage of a more rapid development cycle. As we look forward, cloud native is more likely to be a serverless environment, where you write functions and the back-end systems of the cloud service provider just give you that capability, and you don't have to worry about maintaining and managing a fleet of any sort, whether it's VMs or containers. That's where it's going to go; currently we're in the container space.

>> So as you start getting into serverless land, which we've been playing with and love, that's going to accelerate more data. So I've got to ask you: as you get into more of this, call it monitoring or observability, however we want to look at it, you've got to get at the data. That becomes a critical part of solving a lot of problems, and also of making sure the machine learning is learning the right thing. How do you view that at Opsani? Because everyone gets that cloud native is good, it's not a hard sell, but you know the expression: shipbuilding created ships, and then you had shipwrecks. There's always a double-edged sword here. So what's the downside if you don't get the data right?

>> Well, for us the problem is not too much data, it's lack of data. If you don't get data right, it means you don't have enough data. The places where optimization cannot be automated are where the transaction rates are low, where you don't have enough throughput coming into the application, and it becomes really difficult to optimize that application with any kind of speed. You have to be able to profile the application long enough to know what moves its needle in order to hit the SLO targets. So it's not too much data; not enough data tends to be the problem. And there are a lot of applications that are expensive to run but have low throughput. In every customer environment I've been in where that's been the case, the application is just over-provisioned. If you have a low-throughput environment and it's costing too much, don't use ML to solve it; that's the wrong application of the technology. Just take a sledgehammer and hack your resources by 50% and see what happens. If that breaks things, back it off again; iterate until you find the breaking point.

>> Exactly. If you're over-provisioned, you bang it back down again. It's the old-school way, now with the cloud. Take me through some examples where you've had success. Obviously you're in the right area right now; a lot of people are looking at this area, in some cases rethinking the whole data center aspect of their business. But as you get in with customers on the application side, what successes can you share? What are some of the use cases where you've been successful? Can you give some customer examples?

>> Yeah. A well-known financial software vendor for midsize businesses that does accounting. They're a customer running a large fleet, and this product has been around for a while. It's not a containerized product; it runs on VMs, and Java is a large component of it. The problem for this particular vendor is that they run a heterogeneous fleet: the application has been around for a very long time, and as new instance types on AWS have come in, developers have used them.
So the fleet itself is quite heterogeneous, and depending on the time of day and what kind of reports organizations are running, the mix of resources the application needs is different. When we started analyzing the stack, we looked at three different tiers: the database level, the Java mid-tier, and the web front end. One of the things that turned out to be counterintuitive is that the ML discovered that, for the mid-tier, using larger instances but fewer of them allowed for better performance and lower cost. Typically your gut feel is to go with smaller instances and a larger fleet, if you would, but in this case what the ML produced was completely counterintuitive, and the net result for the customer was a 78% cost reduction while latency went down by 10%. So think about it: the response time is 10% lower, but your costs are down almost 80%, 78% in this case. The other artifact, in the Java mid-tier, is that we improved garbage collection significantly. Whenever garbage collection happens on a JVM it takes a pause, and from a customer's perspective that shows up as downtime, because the machines are not responding. So by tuning garbage collection on the JVMs across this very large fleet, we were able to recover over 5,000 minutes a month across the entire fleet. These are substantial savings, and this is what the right application of machine learning on a large fleet can do for a SaaS business.

>> So talk about this fleet dynamic; you mentioned serverless. How do you see the future evolving for you? Where are you skating to where the puck is going, as the expression goes? Obviously with serverless you're going to have essentially unlimited fleets, potentially. That's going to put a lot of power in the hands of developers and people building experiences. What do the next five years look like for you?

>> Looking at it from a product perspective, the serverless market depends on the mercy of the cloud service provider and, typically, the algorithms they use. Basically, they keep very few instances warm for you until the rate of API calls goes up, and then they start turning on VMs or containers for you, and the system becomes more responsive over time. One place where we can optimize the serverless environment is to give predictability about the cyclicality of the load, so we can pre-provision those instances and warm up the engine before the load comes in, and the system always stays responsive. You may have noticed that some of the apps on your phone, when you start them up, have a startup lag of a minute or two. What's happening in those cases is that as you start it, an API call goes in and containers are being started up to serve that instance; not enough of them are warm to give you that rapid response. And that can lead to customer churn. So by analyzing the overall load on the system and pre-provisioning it, we can prevent that startup lag. And on the downside, when usage goes down, it doesn't make sense to keep that many instances up, so we can talk to the back-end infrastructure and decommission those VMs in order to prevent cost creep, basically. So that's one place where we're thinking about extending our technology.
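As a sketch of what that kind of cyclicality-driven pre-warming could look like (an illustration of the idea rather than Opsani's implementation; the hourly profile, capacity figure, and `scale_to` stub are hypothetical), you might compute a warm-capacity floor for the coming hour from a learned load profile and hand it to whatever scaling API the platform exposes:

```python
# Illustrative sketch of cyclicality-based pre-warming. The hourly profile,
# capacity figure, and scale_to() stub are hypothetical; a real system would
# learn the profile from history and call the provider's scaling API.
from datetime import datetime, timedelta, timezone

# Learned average requests/second per hour of day (0-23), e.g. from weeks of metrics.
HOURLY_RPS_PROFILE = [40, 30, 25, 20, 20, 30, 80, 200, 450, 600, 650, 620,
                      580, 560, 540, 520, 500, 480, 420, 350, 260, 180, 110, 60]

PER_INSTANCE_RPS = 50      # measured capacity of one warm instance
SAFETY_MARGIN = 1.3        # keep 30% spare so latency holds during ramp-up
MIN_WARM = 2               # never scale below this floor

def warm_instances_for(when: datetime) -> int:
    """Instances to keep warm for the hour that `when` falls into."""
    expected_rps = HOURLY_RPS_PROFILE[when.hour]
    return max(MIN_WARM, round(expected_rps * SAFETY_MARGIN / PER_INSTANCE_RPS))

def scale_to(count: int) -> None:
    # Placeholder: in practice this would call the cloud provider's or the
    # cluster autoscaler's API to set the minimum warm capacity.
    print(f"setting minimum warm capacity to {count} instances")

# Pre-warm 10 minutes ahead of the coming hour so instances are ready
# before the load actually arrives.
next_hour = datetime.now(timezone.utc) + timedelta(minutes=10)
scale_to(warm_instances_for(next_hour))
```

The decommissioning side is the same calculation in reverse: when the learned profile predicts a trough, the floor drops and idle capacity can be released to stop cost creep.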
>> So it's like the classic example where people say, during Cyber Monday everyone goes to do e-commerce. You're thinking about it on a level that's a user-centric kind of use case, where you look at the application, are smart about what the expectation is in any given situation, and then flex the resources on that. Is that right? In your example the app is a good one: if I want it to load fast, that's the expectation, and it had better load fast.

>> Yes, exactly, but more romantic: I use Valentine's Day and flowers as my example. But it doesn't have to be annual cycles; it can be daily cycles or hourly cycles, and all those patterns are learned by the ML back end.

>> Alright, so I've got to ask you about this new concept, because most people think auto scaling, right? That's a server concept: I can auto scale a server or a database. As you scale up, you're getting down to the point where, okay, we'll keep the engines warm; it's getting more detailed. How do you explain this versus a concept like auto scaling? Is it the same, or are they cousins?

>> They're basically the same technology, but the way they're expressed is different. In a Kubernetes-native environment, the HPA is your autoscaler: in response to need, it spins up more instances and you get more containers going. What happens in a serverless environment is that you're unaware of the underpinnings that do that scale-up for you, but there is an autoscaler in place that does it. So the question becomes where in the stack, from a customer's perspective, you're operating. If you're managing your own instances, we're dealing with the HPA. If you're managing at the function level, we have to make API calls against the service provider's infrastructure to pre-warm the engine before the load comes.

>> I love this under-the-hood stuff. It's kind of new dynamics, kind of the same wine in a new bottle, but still computer science, still coding, still cool and relevant to making these experiences great. Thanks for coming on this CUBE Conversation, I really appreciate it. Take a minute to put in a plug for the company. What are you doing in terms of status, funding, scale, employees? What are you looking for? And if someone watching this should be a customer of yours, what's going on in their world that tells them they need to be calling you?

>> Yeah, in terms of status, we've had the privilege of very good success with large enterprises; if you go to our website, you'll see the logos of who we have. We will be at KubeCon, and there we're going to be actively targeting the mid-market, or smaller Kubernetes installations. As I mentioned, it's going to take about 20 minutes to get started, and we'll show the results in two hours. Our goal is for our customers to deliver the best user experience in terms of performance and reliability, so that they delight their customers in return, and to do so without breaking the bank. Deliver excellent products, do it in the most efficient way possible, and deliver good financial results for your stakeholders; that's what we do. So we encourage anybody who is running a SaaS company to come take a look at us, because we think we can help them and we can accelerate their growth at a lower cost.
>> And the last thing people need is someone breathing down their necks saying, hey, we're getting overcharged, why are you screwing up, when they're not; they're trying to make a great experience. I think this is where people really want to push the envelope and not have to go back and revisit the cost overruns. It's actually a good sign if you get some cost overruns here and there, because you're experimenting, but again, you don't want it to get out of control.

>> You don't want it to be visible, like the U.S. debt.

>> Exactly. Amir, thank you for coming on. We'll see you at KubeCon; theCUBE will be there in person, and it's a hybrid event. So KubeCon is going to be awesome, and thanks for coming on theCUBE. Appreciate it.

>> John, it's a pleasure. Thank you for having me on.

>> Okay, I'm John Furrier with theCUBE, here in Palo Alto, California, with a remote interview in our hot startup series with Opsani. I'm sure they're going to do well; they're in the right spot in the market and really well positioned in cloud native. Thanks for watching.

Published Date : Sep 13 2021

SUMMARY :

Amir Sharif of Opsani explains how the company automates system and Kubernetes configuration with an AI/ML back end so engineers can focus on writing code. He walks through why cloud cost overruns happen (the people consuming resources are not the people paying the bills, so services get over-provisioned), how Opsani's Servo container works with Prometheus, RED metrics, and canary instances to tune workloads against their SLOs inside the CI/CD cycle, customer results such as a 78% cost reduction with 10% lower latency and over 5,000 minutes a month recovered through JVM garbage-collection tuning, and how pre-warming capacity ahead of predictable load cycles points toward serverless optimization.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Dave | PERSON | 0.99+
Amir Sharif | PERSON | 0.99+
john fryer | PERSON | 0.99+
50% | QUANTITY | 0.99+
john Kerry | PERSON | 0.99+
12 month | QUANTITY | 0.99+
10% | QUANTITY | 0.99+
two | QUANTITY | 0.99+
apple | ORGANIZATION | 0.99+
Silicon Valley | LOCATION | 0.99+
78% | QUANTITY | 0.99+
amazon | ORGANIZATION | 0.99+
AWS | ORGANIZATION | 0.99+
two hours | QUANTITY | 0.99+
30% | QUANTITY | 0.99+
SAS | ORGANIZATION | 0.99+
bob Sani | PERSON | 0.99+
John | PERSON | 0.99+
six months | QUANTITY | 0.99+
over 5000 minutes | QUANTITY | 0.98+
Angela | PERSON | 0.98+
both | QUANTITY | 0.98+
Sandy | PERSON | 0.98+
three metrics | QUANTITY | 0.98+
one point | QUANTITY | 0.97+
first one | QUANTITY | 0.97+
Palo alto California | LOCATION | 0.97+
valentine's day | EVENT | 0.97+
One place | QUANTITY | 0.97+
one | QUANTITY | 0.96+
about 20 minutes | QUANTITY | 0.96+
amir | PERSON | 0.96+
Palo alto | LOCATION | 0.96+
zero | QUANTITY | 0.95+
Prometheus | TITLE | 0.95+
20 minutes | QUANTITY | 0.95+
three different tiers | QUANTITY | 0.95+
john | PERSON | 0.95+
two days | QUANTITY | 0.94+
1.0 | QUANTITY | 0.93+
black monday | EVENT | 0.93+
one business side | QUANTITY | 0.93+
a minute | QUANTITY | 0.93+
this month | DATE | 0.92+
four years | QUANTITY | 0.92+
Opsani | PERSON | 0.91+
almost 80% | QUANTITY | 0.9+
Q khan | ORGANIZATION | 0.88+
20 generations | QUANTITY | 0.85+
U. S. | ORGANIZATION | 0.85+
this fall | DATE | 0.81+
next five years | DATE | 0.78+
Laghi | PERSON | 0.77+
c I C D | TITLE | 0.72+
years | QUANTITY | 0.7+
double | QUANTITY | 0.67+
Stanley | PERSON | 0.66+
C i c D | TITLE | 0.66+
khan | ORGANIZATION | 0.65+
4-1 | OTHER | 0.64+
CFO | ORGANIZATION | 0.64+
C | EVENT | 0.59+
three | DATE | 0.59+
Ec two | TITLE | 0.58+
Cloud | COMMERCIAL_ITEM | 0.56+
first | QUANTITY | 0.54+
Servo | ORGANIZATION | 0.52+
Capex | ORGANIZATION | 0.48+
five G | ORGANIZATION | 0.48+
Servo | TITLE | 0.43+
Andrzej | PERSON | 0.4+
Kubernetes | TITLE | 0.37+

Mike Evans, Red Hat | Google Cloud Next 2019


 

>> Live from San Francisco, it's theCUBE, covering Google Cloud Next 2019, brought to you by Google Cloud and its ecosystem partners.

>> We're back at Google Cloud Next 2019. You're watching theCUBE, the leader in live tech coverage. I'm Dave Vellante with my co-host Stu Miniman; John Furrier is also here. This is day two of our coverage, hashtag GoogleNext19. Mike Evans is here. He's the vice president of technical business development at Red Hat. Mike, good to see you. Thanks for coming back on theCUBE.

>> Great to be here.

>> So, you know, we're talking hybrid cloud, multi-cloud. You've been on this OpenShift journey for half a decade. There were a lot of deniers, and now it's a real tailwind for you; the whole world is jumping on that bandwagon. It's got to make you feel good.

>> Yeah, it's nice to see everybody echoing a similar message, which we believe is what the customers' demand and interest is. So that's a great validation.

>> So how does that tie into what's happening here? What's going on with the show?

>> It's interesting, and let me take a step back, because I've been working with Google on their cloud efforts for almost ten years now. It started back when Google was about to get into the cloud business and they had to decide what they were going to use as their hypervisor. That was a time when we had just switched to, and made a big bet on, KVM because of its alignment with the Linux kernel. It was controversial, and we helped them do that. I looked back on my email recently, and that was 2009; that was ten years ago, and those were early stages. Since that time the cloud market has obviously boomed. Again, I was looking back ahead of this discussion: 2006 and 2007 is when we started working with Amazon, with RHEL on their cloud, back when everyone thought there was no way a bookseller was going to make an impact in the world, etcetera. And as I play it forward to today, looking at thirty thousand people here and what has evolved, I'm just fascinated that open source is now obviously fully mainstream. There are no more doubters, and it's the engine for everything.

>> Maybe bring us inside. KVM, the underpinning, we know well is core to the multi-cloud strategy of Red Hat, and there's a lot that you've built on top of it. Speak a little bit about some of the engineering relationships going on, joint customers that you have, and kind of the value proposition. Red Hat in general is agnostic to where workloads live, but there's got to be special work that gets done in a lot of places.

>> RHEL on Google? Yeah, yeah. Through the years we've done a lot of work to make sure that the RHEL foundation works really well on GCP. That's been a really consistent effort, whether it's around optimization for performance or security elements, so that it provides a nice base for anybody who wants to move any workload or application from on-prem over there, or from another cloud. And that's been great. And then, you know, we've also worked with them upstream: the community dynamics have been really productive between Red Hat and Google, and Google has been one of the most productive and positive contributors and participants in open source. So we've worked together on probably ten or fifteen different projects, and it's a constant interaction between our upstream developers, where we share ideas and agree on approaches.

>> So obviously Kubernetes is a big one; when you see the list, it's Google and Red Hat right there. Give us a couple of examples of some of the other ones.

>> I mean, again, KVM is also a foundation, one that people kind of forget about these days, but it's still a very pervasive technology and continuing to gain ground. Then there's the Knative stuff, there's the Istio stuff, and the AI and ML side, which is a whole fascinating category in my mind as well.

>> I'm kind of a real student of industry history, and so I like to talk to folks who have been there and try to get it right. There was this sort of gestation period from 2006 to 2009 in cloud. Like you said, it's a bookseller. And then even in the downturn, a lot of CFOs said, hey, capex to opex, boom. Then coming out of the downturn there was shadow IT, around that 2009 time frame. But it was, like you say, a hypervisor discussion: we're going to put VMware in our cloud, and homogeneity, and a lot of traditional companies were fumbling with their cloud strategies. Then you had the big data craze, and obviously open source was a huge part of that. And then containers, which of course have been around in Linux for a long time, and I guess with Docker the boom started to go crazy. Now it's like this curve is reshaping with AI and sort of a new era of data. Thoughts on the accuracy of that little historical narrative, and why that big uptick with containers?

>> Well, a couple of things there. One, the data, the whole data evolution; this is a fascinating one. I've been at Red Hat nineteen years, so I've seen a lot of the elements of that history, and one of the constant questions we would always get, sometimes from investors, was: why don't you guys buy a database company? You know, years ago. Or why aren't you doing a Hadoop distribution, and when that became more Spark, etcetera. We always looked at it and said, you know, we're a platform company, and if we were to pick any one database it would only cover some percentage, and there are so many, and then it just kind of upsets the others. So we decided not to focus on the data layer; we're going to focus on the infrastructure and the application layer, work down from there, and support the things underneath. Now that's consistent with the AI/ML explosion. Google was a pioneer of AI/ML and they've got some of the best services, and we've been doing a lot of work with NVIDIA in the last two years to make sure that all the GPUs, wherever they run, hybrid, private cloud, or multiple clouds, are enabled: RHEL-enabled and enabled in OpenShift. Because what we see happening, and NVIDIA does also, is that right now all the applications being developed for AI and ML are written by extremely technical people. When you write to TensorFlow and things like that, you've kind of got to be able to write at a C-compiler level. So we're working with them to bring OpenShift up to become the more mass, mainstream tool to develop AI- and ML-enabled apps. And the value of having RHEL underneath OpenShift is that every piece of hardware in the world is supported, and every cloud. Then, when we add that GPU enablement to OpenShift and middleware and our storage, everything inherits it. So to me, the most valuable piece of real estate that we own in the industry is actually RHEL, and then everything builds upon that.

>> It's interesting, what you said about the database; of course, we had a long discussion about that this morning. You're right, though, Mike: you either have to be really good at one thing, like a DataStax or Cassandra or a Mongo, and there are a zillion others I'm not mentioning, or you've got to do everything, like the cloud guys are doing out there. Every one of them has an operational database, an analytics database, a NoSQL database, one of each, and then you have to partner with them. So I would imagine you looked at that as well and said, how are we going to do all that?

>> Right. And there are so many competitive dynamics coming at us. We've always been in the mode where we're the little guy battling against the big guys, whoever it was, whether it was Sun, IBM, and the HP Unixes in the early days. Oracle was our friend for a while; then they became, not an enemy, but a competitor on the Linux side. Amazon was an early friend, and then they did their own Linux. So that's the normal operating model for us, to have this big competitive dynamic alongside a partnering dynamic.

>> You've got to win it in the marketplace, as the customers say. Come on, guys.

>> Right, we'll figure it out.

>> Figure it out together. We talked earlier about hybrid cloud, and we talked about multi-cloud, and some people treat those as the same thing, but I think they're actually different. Hybrid, you think of on-prem and public and, hopefully, some level of integration and a common data plane and control plane; multi-cloud has sort of evolved from multi-vendor. How do you look at it? Is multi-cloud a strategy? How do you look at hybrid?

>> Yeah, it's simple in my mind, but I know the terms get used by a lot of different people in different ways. Hybrid cloud, to me, is just that straightforward: being able to run something on premises and being able to run something in any public cloud, and have it be somewhat consistent, or shareable, or movable. Multi-cloud is being able to do that same thing with multiple public clouds. And then there's a third variation on that: wanting to run an application that runs in both and shares information, which I think you saw in the Google Anthos announcement, where they're talking about their service running on the other two major public clouds. That's the first from any sizable company. I think that's going to become the norm, because wherever the infrastructure is that a customer is using, if Google has a great service, they want to be able to tell the user to run it on the infrastructure of their choice.

>> Yeah, so you brought up Anthos, and at the core it's GKE, so it's the Kubernetes we've been talking about, and, as was said, it works with AWS and works with Azure. But it's GKE on top of those public clouds. Maybe give us a little compare and contrast with OpenShift. OpenShift lives in all of these environments too, but they're not fully compatible. How does that work?

>> So on Anthos, which was announced yesterday, two high-level comments. One is, as we talked about at the beginning, it's a validation of what our message has been: hybrid cloud is of value, multi-cloud is of value, and that's a productive element that helps promote that vision and that concept. At a macro level it also puts us in a more competitive environment with Google than it was yesterday or two days ago. But again, that's our normal world: we partnered with IBM and HP and competed against them on Unix; we partner with Microsoft and compete with them. So that's normal. That said, we believe that with OpenShift having five-plus years in market, over a thousand customers, very wide deployments, and already running in Google's, Amazon's, and Microsoft's clouds, already there and solid with people doing real things on it, plus being in the position of an independent software vendor, we're in a more valuable position for multi-cloud than a single cloud vendor. So, you know, welcome to the party, in a sense. And going on-prem, I say welcome to the jungle for all these public cloud companies. Going on-prem means a lot of complexity when you have to deal with American Express's infrastructure, Bank of Hong Kong's infrastructure, Ford Motors' infrastructure.

>> Right, right. Google before only had to run on Google servers in Google data centers; everything was a very clean environment, one temperature.

>> And enterprise customers have a little different demands in terms of versions, when to upgrade, and how long they keep things alive; there are a lot of differences.

>> Actually, that was one of the things Cory Quinn was doing some analysis with us on. Google, for the most part, says if we decide to pull something, you've got kind of a one-year window. How does Red Hat look at that?

>> My guess is they'll evolve over time as they get deeper into it. Or maybe they won't; maybe they have a model where they think they'll gain enough share on their own. But we were built on enterprise DNA, and we've evolved to cloud and hybrid multi-cloud DNA. We love it when people say, I'm going to the cloud, because when they say they're going to the cloud, it means they're doing new apps or modifying old apps, and we have a great shot at landing that business when they're doing something new.

>> Well, right. Whether it's on-prem or in the public cloud, when they say they'll go to the cloud, they're talking about the cloud experience, right? And that's really what your strategy is: to bring that cloud experience to wherever your data lives. Exactly. So talking about that multi-cloud, or omni-cloud, when we look at the horses on the track: you've got VMware going after that, you've got IBM and Red Hat going after that, and now Google, a huge cloud provider, doing that. Wherever you look, there's Red Hat. Of course, I know you can't talk much about the IBM integration, but an IBM executive once said to me, Stu, we're like a recovering alcoholic: we learned our lesson from mainframe, we are open, we're committed to open. So we'll see. But Red Hat is everywhere, and your strategy presumably has to stay that sort of open and neutral going forward.

>> I'll give you a couple of examples from long ago, probably five, six years ago, when the cloud stuff was still early. I had two CEO conference calls in one day. One was with a big Hollywood graphics company, with the CEO. After we explained all of our cloud stuff (we had nine people on the call explaining all our cloud), the guy said, okay, let me just tell you: the biggest value you bring to me is having RHEL as my single point of sanity, so I can move this stuff wherever I want. I just attach all my applications, I attach third-party apps and everything, and then I can move it wherever I want. So realize that's your big thing. And I still think that's true. Then there was another large gaming company that was trying to decide how to move forty thousand servers from their own cloud to a public cloud. They had the head of servers, the head of security, the head of databases, the head of networking, the heads of nine different functions there, and they were all in disagreement at the end. And the CEO said at the end of the day, Mike, I've got a headache, I need some vodka and Tylenol; give me one simple piece of advice, how do I navigate this? I said, if you just write every app to RHEL and JBoss (and this was before OpenShift), no matter where you want to run them, RHEL and JBoss will be there. And he said, excellent advice, that's what we're doing. So there's something really beautiful about the simplicity of that, which a lot of people overlook with all the hand-waving about Kubernetes and containers and fifty versions of Kubernetes certified, etcetera. We see a lot of value in that single point of sanity, and it allows people flexibility at a pretty low cost: use RHEL as your foundation.

>> Open source, hybrid cloud, multi-cloud, omni-cloud: all tailwinds for Red Hat. Mike, we'll give you the final word, your bumper sticker on Google Cloud Next or any other final thoughts.

>> To me, it's great to see thirty thousand people at this event. It's great to see Google getting more and more invested in the cloud and more and more invested in the enterprise. I think they've had great success in a lot of non-enterprise accounts, probably more so than the other clouds, and now they're coming this way. They've got great technology, our engineers love working with their engineers, and now we've got a more competitive dynamic. And like I said, welcome to the jungle.

>> We've got Red Hat Summit coming up, Stu, in early May.

>> Absolutely, back in Beantown, Dave.

>> That's nice. Okay, I'll be in London then.

>> Right, at Summit in Boston in May.

>> Good deal. Mike, thanks very much for coming on.

>> Thank you.

>> It's great to see you.

>> Good to see you.

>> All right, everybody, keep it right there. Stu and I will be back; John Furrier is also in the house. You're watching theCUBE at Google Cloud Next 2019. We'll be right back.

Published Date : Apr 10 2019

SUMMARY :

Mike Evans, VP of technical business development at Red Hat, joins theCUBE at Google Cloud Next 2019 to discuss roughly ten years of collaboration with Google, from the early KVM hypervisor decision through joint upstream work on projects such as Kubernetes, Knative, and Istio. He explains why Red Hat stays focused on the platform rather than the data layer, the GPU and AI/ML enablement work with NVIDIA across RHEL and OpenShift, and how Google's Anthos announcement both validates the hybrid and multi-cloud message and makes Google more of a competitor, with RHEL and OpenShift positioned as the customer's consistent foundation across on-prem and public clouds.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
IBM | ORGANIZATION | 0.99+
Microsoft | ORGANIZATION | 0.99+
HP | ORGANIZATION | 0.99+
Amazon | ORGANIZATION | 0.99+
Google | ORGANIZATION | 0.99+
Mike Evans | PERSON | 0.99+
Oracle | ORGANIZATION | 0.99+
London | LOCATION | 0.99+
Mike | PERSON | 0.99+
American Express | ORGANIZATION | 0.99+
Ford Motors | ORGANIZATION | 0.99+
San Francisco | LOCATION | 0.99+
five plus years | QUANTITY | 0.99+
one year | QUANTITY | 0.99+
ten | QUANTITY | 0.99+
Two | QUANTITY | 0.99+
nine people | QUANTITY | 0.99+
yesterday | DATE | 0.99+
Hollywood Graphics | ORGANIZATION | 0.99+
Red Hat | ORGANIZATION | 0.99+
thirty thousand people | QUANTITY | 0.99+
John Farriers | PERSON | 0.99+
eight | QUANTITY | 0.99+
last year | DATE | 0.99+
Dave | PERSON | 0.99+
first | QUANTITY | 0.99+
Terrell | PERSON | 0.99+
Ralph | PERSON | 0.99+
Stew | PERSON | 0.99+
two thousand | QUANTITY | 0.99+
Six years ago | DATE | 0.99+
thirty thousand people | QUANTITY | 0.99+
two days ago | DATE | 0.99+
Lenox | ORGANIZATION | 0.99+
Bank of Hong Kong | ORGANIZATION | 0.99+
Boston | LOCATION | 0.99+
one | QUANTITY | 0.99+
Cassandra | PERSON | 0.98+
John Furry | PERSON | 0.98+
both | QUANTITY | 0.98+
today | DATE | 0.98+
ten years ago | DATE | 0.98+
Andrzej | PERSON | 0.98+
half a decade | QUANTITY | 0.98+
over a thousand customers | QUANTITY | 0.98+
Red Hot | ORGANIZATION | 0.98+
one day | QUANTITY | 0.97+
forty thousand observers | QUANTITY | 0.97+
Google Cloud | TITLE | 0.97+
Hatton | PERSON | 0.96+
third variation | QUANTITY | 0.96+
Cory Quinn | PERSON | 0.95+
one simple piece | QUANTITY | 0.95+
two thousand nine | QUANTITY | 0.95+
fifty versions | QUANTITY | 0.94+
Raylan J. Boss | PERSON | 0.93+
single point | QUANTITY | 0.93+
next twenty nineteen | DATE | 0.93+
Lennox | ORGANIZATION | 0.92+
Unix | ORGANIZATION | 0.92+
KK Veum Thie | PERSON | 0.92+
two thousand seven | QUANTITY | 0.92+
stew | PERSON | 0.91+