

Chris Jones, Platform9 | Finding your "Just Right” path to Cloud Native


 

(upbeat music) >> Hi everyone. Welcome back to this Cube conversation here in Palo Alto, California. I'm John Furrier, host of "theCUBE." Got a great conversation around Cloud Native, Cloud Native Journey, how enterprises are looking at Cloud Native and putting it all together. And it comes down to operations, developer productivity, and security. It's the hottest topic in technology. We got Chris Jones here in the studio, director of Product Management for Platform9. Chris, thanks for coming in. >> Hey, thanks. >> So when we always chat about, when we're at KubeCon. KubeConEU is coming up and in a few, in a few months, the number one conversation is developer productivity. And the developers are driving all the standards. It's interesting to see how they just throw everything out there and whatever gets adopted ends up becoming the standard, not the old school way of kind of getting stuff done. So that's cool. Security Kubernetes and Containers are all kind of now that next level. So you're starting to see the early adopters moving to the mainstream. Enterprises, a variety of different approaches. You guys are at the center of this. We've had a couple conversations with your CEO and your tech team over there. What are you seeing? You're building the products. What's the core product focus right now for Platform9? What are you guys aiming for? >> The core is that blend of enabling your infrastructure and PlatformOps or DevOps teams to be able to go fast and run in a stable environment, but at the same time enable developers. We don't want people going back to what I've been calling Shadow IT 2.0. It's, hey, I've been told to do something. I kicked off this Container initiative. I need to run my software somewhere. I'm just going to go figure it out. We want to keep those people productive. At the same time we want to enable velocity for our operations teams, be it PlatformOps or DevOps. >> Take us through in your mind and how you see the industry rolling out this Cloud Native journey. Where do you see customers out there? Because DevOps have been around, DevSecOps is rocking, you're seeing AI, hot trend now. Developers are still in charge. Is there a change to the infrastructure of how developers get their coding done and the infrastructure, setting up the DevOps is key, but when you add the Cloud Native journey for an enterprise, what changes? What is the, what is the, I guess what is the Cloud Native journey for an enterprise these days? >> The Cloud Native journey or the change? When- >> Let's start with the, let's start with what they want to do. What's the goal and then how does that happen? >> I think the goal is that promise land. Increased resiliency, better scalability, and overall reduced costs. I've gone from physical to virtual that gave me a higher level of density, packing of resources. I'm moving to Containers. I'm removing that OS layer again. I'm getting a better density again, but all of a sudden I'm running Kubernetes. What does that, what does that fundamentally do to my operations? Does it magically give me scalability and resiliency? Or do I need to change what I'm running and how it's running so it fits that infrastructure? And that's the reality, is you can't just take a Container and drop it into Kubernetes and say, hey, I'm now Cloud Native. I've got reduced cost, or I've got better resiliency. There's things that your engineering teams need to do to make sure that application is a Cloud Native. 
And then there's what I think is one of the largest shifts of virtual machines to containers. When I was in the world of application performance monitoring, we would see customers saying, well, my engineering team have this Java app, and they said it needs a VM with 12 gig of RAM and eight cores, and that's what we gave it. But it's running slow. I'm working with the application team and you can see it's running slow. And they're like, well, it's got all of its resources. One of those nice features of virtualization is over provisioning. So the infrastructure team would say, well, we gave it, we gave it all the RAM it needed. And what's wrong with that being over provisioned? It's like, well, Java expects that RAM to be there. Now all of a sudden, when you move to the world of containers, what we've got is that's not a set resource limit, really, like it used to be in a VM, right? When you set it for a container, your application teams really need to be paying attention to your resource limits and constraints within the world of Kubernetes. So instead of just being able to say, hey, I'm throwing over the fence and now it's just going to run on a VM, and that VM's got everything it needs. It's now really running on more, much more of a shared infrastructure where limits and constraints are going to impact the neighbors. They are going to impact who's making that decision around resourcing. Because that Kubernetes concept of over provisioning and the virtualization concept of over provisioning are not the same. So when I look at this problem, it's like, well, what changed? Well, I'll do my scale tests as an application developer and tester, and I'd see what resources it needs. I asked for that in the VM, that sets the high watermark, job's done. Well, Kubernetes, it's no longer a VM, it's a Kubernetes manifest. And well, who owns that? Who's writing it? Who's setting those limits? To me, that should be the application team. But then when it goes into operations world, they're like, well, that's now us. Can we change those? So it's that amalgamation of the two that is saying, I'm a developer. I used to not pay attention, but now I need to pay attention. And an infrastructure person saying, I used to just give 'em what they wanted, but now I really need to know what they want, because it's going to potentially have a catastrophic impact on what I'm running. >> So what's the impact for the developer? Because infrastructure as code is what everybody wants. The developer just wants to get the code going and they got to pay attention to all these things, or don't they? Is that where you guys come in? How do you guys see the problem? Actually scope the problem that you guys solve? 'Cause I think you're getting at the core issue here, which is, I've got Kubernetes, I've got containers, I've got developer productivity that I want to focus on. What's the problem that you guys solve? >> Platform operation teams that are adopting Cloud Native in their environment, they've got that steep learning curve of Kubernetes plus this fundamental change of how an app runs. What we're doing is taking away the burden of needing to operate and run Kubernetes and giving them the choice of the flexibility of infrastructure and location. Be that an air gap environment like, let's say, a telco provider that needs to run a containerized network function and containerized workloads for 5G.
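To make the requests-and-limits point in the exchange above concrete, here is a minimal, illustrative Kubernetes manifest for the kind of Java service described there. The names, image, and numbers are hypothetical; the point is that the application team now declares the resource envelope in the manifest, and the JVM has to be told about it explicitly rather than inheriting a generously over-provisioned VM.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-api                 # hypothetical service name
  namespace: production
spec:
  replicas: 2
  selector:
    matchLabels:
      app: orders-api
  template:
    metadata:
      labels:
        app: orders-api
    spec:
      containers:
        - name: orders-api
          image: registry.example.com/orders-api:1.0.0   # placeholder image
          resources:
            requests:              # what the scheduler reserves on a node
              cpu: "2"
              memory: 4Gi
            limits:                # hard ceiling; exceeding the memory limit gets the container OOM-killed
              cpu: "4"
              memory: 6Gi
          env:
            # Assumption: a JVM flag so the heap is sized off the container limit;
            # without something like this, older JVMs size themselves off the node's total RAM.
            - name: JAVA_TOOL_OPTIONS
              value: "-XX:MaxRAMPercentage=75.0"

Unlike the over-provisioned VM in that story, nothing here is padded on the application's behalf; if the limit is set wrong, both the app and its neighbors feel it.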
That's one thing that we can deploy and achieve in a completely inaccessible environment all the way through to Platform9 running traditionally as SaaS, as we were born, that's remotely managing and controlling your Kubernetes environments on-premise AWS. That hybrid cloud experience that could be also Bare Metal, but it's our platform running your environments with our support there, 24 by seven, that's proactively reaching out. So it's removing a lot of that burden and the complications that come along with operating the environment and standing it up, which means all of a sudden your DevOps and platform operations teams can go and work with your engineers and application developers and say, hey, let's get, let's focus on the stuff that, that we need to be focused on, which is running our business and providing a service to our customers. Not figuring out how to upgrade a Kubernetes cluster, add new nodes, and configure all of the low level. >> I mean there are, that's operations that just needs to work. And sounds like as they get into the Cloud Native kind of ops, there's a lot of stuff that kind of goes wrong. Or you go, oops, what do we buy into? Because the CIOs, let's go, let's go Cloud Native. We want to, we got to get set up for the future. We're going to be Cloud Native, not just lift and shift and we're going to actually build it out right. Okay, that sounds good. And when we have to actually get done. >> Chris: Yeah. >> You got to spin things up and stand up the infrastructure. What specifically use case do you guys see that emerges for Platform9 when people call you up and you go talk to customers and prospects? What's the one thing or use case or cases that you guys see that you guys solve the best? >> So I think one of the, one of the, I guess new use cases that are coming up now, everyone's talking about economic pressures. I think the, the tap blows open, just get it done. CIO is saying let's modernize, let's use the cloud. Now all of a sudden they're recognizing, well wait, we're spending a lot of money now. We've opened that tap all the way, what do we do? So now they're looking at ways to control that spend. So we're seeing that as a big emerging trend. What we're also sort of seeing is people looking at their data centers and saying, well, I've got this huge legacy environment that's running a hypervisor. It's running VMs. Can we still actually do what we need to do? Can we modernize? Can we start this Cloud Native journey without leaving our data centers, our co-locations? Or if I do want to reduce costs, is that that thing that says maybe I'm repatriating or doing a reverse migration? Do I have to go back to my data center or are there other alternatives? And we're seeing that trend a lot. And our roadmap and what we have in the product today was specifically built to handle those, those occurrences. So we brought in KubeVirt in terms of virtualization. We have a long legacy doing OpenStack and private clouds. And we've worked with a lot of those users and customers that we have and asked the questions, what's important? And today, when we look at the world of Cloud Native, you can run virtualization within Kubernetes. So you can, instead of running two separate platforms, you can have one. So all of a sudden, if you're looking to modernize, you can start on that new infrastructure stack that can run anywhere, Kubernetes, and you can start bringing VMs over there as you are containerizing at the same time. 
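As a rough sketch of what "bringing VMs over" alongside containers looks like in practice, this is the general shape of a KubeVirt VirtualMachine object; the VM name, sizing, and disk image below are assumptions for illustration, not a recommended configuration.

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: legacy-billing-vm          # hypothetical VM being moved over as-is
  namespace: production
spec:
  running: true                    # KubeVirt starts the VM once the object is applied
  template:
    metadata:
      labels:
        kubevirt.io/vm: legacy-billing-vm
    spec:
      domain:
        cpu:
          cores: 2
        resources:
          requests:
            memory: 4Gi
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
      volumes:
        - name: rootdisk
          containerDisk:
            image: registry.example.com/disks/legacy-billing:1.0   # placeholder disk image

The appeal described above is that the same cluster, namespaces, and operational tooling now cover both this VM and the containers that may eventually replace it.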
So now you can keep your application operations in one environment. And this also helps if you're trying to reduce costs. If you really are saying, we put that Dev environment in AWS, we've got a huge amount of velocity out of it now, can we do that elsewhere? Is there a co-location we can go to? Is there a provider that we can go to where we can run that infrastructure or run the Kubernetes, but not have to run the infrastructure? >> It's going to be interesting too, when you see the Edge come online, you start, we've got Mobile World Congress coming up, KubeCon events we're going to be at, the conversation is not just about public cloud. And you guys obviously solve a lot of do-it-yourself implementation hassles that emerge when people try to kind of stand up their own environment. And we hear from developers consistency between code, managing new updates, making sure everything is all solid so they can go fast. That's the goal. And then people can get standardized on that. But as you get public cloud and do it yourself, kind of brings up like, okay, there's some gaps there as the architecture changes to be more distributed computing, Edge, on-premises cloud, it's cloud operations. So that's cool for DevOps and Cloud Native. How do you guys differentiate from, say, some of the public cloud opportunities and the folks who are doing it themselves? How do you guys fit in that world and what's the pitch or what's the story? >> The fit that we look at is that third alternative. Let's get your team focused on what's high value to your business and let us deliver that public cloud experience on your infrastructure or in the public cloud, which gives you that ability to still be flexible if you want to make choices to run consistently for your developers in two different locations. So as I touched on earlier, instead of saying go figure out Kubernetes, how do you do an in-place upgrade of a hundred worker nodes? We've solved that problem. That's what we do every single day of the week. Don't go and try to figure out how to upgrade a cluster and then upgrade all of the, what I call, Kubernetes friends, your CoreDNS, your Metrics Server, your Kubernetes dashboard. These are all things that we package, we test, we version. So when you click upgrade, we've already handled that entire process. So it's saying don't have your team focused on that lower level piece of work. Get them focused on what is important, which is your business services. >> Yeah, the infrastructure and getting that stood up. I mean, I think the thing that's interesting, if you look at the market right now, you mentioned cost savings and recovery, obviously kind of a recession. I mean, people are tightening their belts for sure. I don't think the digital transformation and Cloud Native spend is going to plummet. It's going to probably be on hold and be squeezed a little bit. But to your point, people are refactoring, looking at how to get the best out of what they got. It's not just open the tap and spend the cash like it used to be, yeah, a couple months, even a couple years ago. So okay, I get that. But then you look at what's coming, AI. You're seeing all the new data infrastructure that's coming. The containers, Kubernetes stuff, got to get stood up pretty quickly and it's got to be reliable. So to your point, the teams need to get done with this and move on to the next thing. >> Chris: Yeah, yeah, yeah. >> 'Cause there's more coming.
I mean, there's a lot coming for the apps that are being built in Data Native, AI Native, Cloud Native. So it seems that this Kubernetes thing needs to get solved. Is that kind of what you guys are focused on right now? >> So, I mean, to use a customer, we have a customer that's in AI/ML and they run their platform at customer sites and that's hardware bound. You can't run AI machine learning on anything anywhere. Well, with Platform9 they can. So we're enabling them to deliver services into their customers that's running their AI/ML platform in their customer's data centers anywhere in the world on hardware that is purpose-built for running that workload. They're not Kubernetes experts. That's what we are. We're bringing them that ability to focus on what's important and just delivering their business services, whilst they're enabled by our team and our 24 by seven proactive management and always-on assurance to keep that up and running for them. So when something goes bump in the night at 2:00am, our guys get woken up. They're the ones that are reaching out to the customer saying, your environments have a problem, we're taking these actions to fix it. Obviously sometimes, especially if it is running on Bare Metal, there's things you can't do remotely. So you might need someone to go and do that. But even when that happens, you're not by yourself. You're not sitting there like I did when I worked for a bank in one of my first jobs, three o'clock in the morning saying, wow, our end of day processing is stuck. Who else am I waking up? Right? >> Exactly, yeah. Got to get that cash going. But this is a great use case. I want to get to the customer. What do some of the successful customers say to you? For the folks watching that aren't yet a customer of Platform9, what are some of the accolades and comments or anecdotes that you guys hear from customers that you have? >> It just works, which I think is probably one of the best ones you can get. Customers coming back and being able to show to their business that they've delivered growth, like business growth and productivity growth, and keeping their organization size the same. So we started on our containerization journey. We went to Kubernetes. We've deployed all these new workloads and our operations team is still six people. We're doing way more with less, and I think that's also talking to the strength that we're bringing, 'cause we're, we're augmenting that team. They're spending less time on the really low level stuff and automating a lot of the growth activity that's involved. So when it comes to being able to grow their business, they can just focus on that, not-
If you can't do something remotely at the Edge, you are using a human being, that's not Edge. Our Edge management capabilities, being in the market for over two years, are a hundred percent remote. You want to stand up a store, you just ship the server in there, it gets racked, the rest of it's remote. Imagine a store manager in, I don't know, KFC, just plugging in the server, putting in the ethernet cable, pressing the power button. The rest of all that provisioning for that Cloud Native stack, Kubernetes, KubeVirt for virtualization, is done remotely. So we're continuing to focus on that. The next piece that is related to that is allowing people to run Platform9 SaaS in their data centers. So we do air gap today, and we've had a really strong focus on telecommunications and the containerized network functions that come along with that. So this next piece is saying, we're bringing what we run as SaaS into your data center, so then you can run it. 'Cause there are many people out there that are saying, we want these capabilities and we want everything that the Platform9 control plane brings and simplifies, but unfortunately, regulatory compliance reasons mean that we can't leverage SaaS. So they might be using a cloud, but they're saying that's still our infrastructure, we've still closed that network down, or they're still on-prem. So those are two big priorities for us this year. And that on-premises experience is paramount, even to the point that we will be delivering a way that, when you run on-premises, you can still say, wait a second, well I can send outbound alerts to Platform9. So their support team can still be proactively helping me as much as they could, even though I'm running Platform9's control plane. So it's sort of giving that blend of two experiences. They're big, they're big priorities. And the third pillar is all around virtualization. It's saying if you have economic pressures, then I think it's important to look at what you're spending today and realistically say, can that be reduced? And I think hypervisors and virtualization is something that should be looked at, because if you can actually reduce that spend, you can bring in some modernization at the same time. Let's take some of those nodes that exist that are two years into their five year hardware life cycle. Let's turn that into a Cloud Native environment, which is enabling your modernization in place. It's giving your engineers and application developers the new toys, the new experiences, and then you can start running some of those virtualized workloads with KubeVirt there. So you're reducing cost and you're modernizing at the same time with your existing infrastructure. >> You know Chris, the topic of this content series that we're doing with you guys is finding the right path, trusting the right path to Cloud Native. What does that mean? I mean, if you had to kind of summarize that phrase, trusting the right path to Cloud Native, what does that mean? Does it mean in terms of architecture, is it deployment? Is it operations? What's the underlying main theme of that quote? How would you talk to a customer and say, what does that mean if someone said, "Hey, what does that right path mean?" >> I think the right path means focusing on what you should be focusing on.
I know I've said it a hundred times, but if your entire operations team is trying to figure out the nuts and bolts of Kubernetes and getting three months into a journey and discovering, ah, I need Metrics Server to make something function, I want to use Horizontal Pod Autoscaler or Vertical Pod Autoscaler and I need this other thing, now I need to manage that. That's not the right path. That's literally learning what other people have been learning for the last five, seven years that have been focused on Kubernetes solely. So the why- >> There's been a lot of grind. People have been grinding it out. I mean, that's what you're talking about here. They've been standing up the, when Kubernetes started, it was all the promise. >> Chris: Yep. >> And essentially manually kind of getting in the weeds and configuring it. Now it's matured up. They want stability. >> Chris: Yeah. >> Not everyone can get down and dirty with Kubernetes. It's not something that people want to generally do unless you're totally into it, right? Like I mean, I mean ops teams, I mean, yeah. You know what I mean? It's not like it's heavy lifting. Yeah, it's important. Just got to get it going. >> Yeah, I mean if you're deploying with Platform9, your Ops teams can tinker to their hearts' content. We're completely compliant with upstream Kubernetes. You can go and change an API server flag, let's go and mess with the scheduler, because we want to. You can still do that, but don't, don't have your team investing all this time to figure it out. It's been figured out. >> John: Got it. >> Get them focused on enabling velocity for your business. >> So it's not build, but run. >> Chris: Correct. >> Or run Kubernetes, not necessarily figure out how to kind of get it all, consume it out. >> You know we've talked to a lot of customers out there that are saying, "I want to be able to deliver a service to my users." Our response is, "Cool, let us run it. You consume it, therefore deliver it." And we're solving that in one hit versus figuring out how to first run it, then operate it, then turn that into a consumable service. >> So the alternative to Platform9 is what? They got to do it themselves or use the Cloud or what's the, what's the alternative for the customer for not using Platform9? Hiring more people to kind of work on it? What's the? >> People, or building that kind of PaaS experience. Something that I've been very passionate about for the past year is looking at that world of sort of GitOps and what that means. And if you go out there and you sort of start asking the question, what's happening, just generally with Kubernetes as well and GitOps in that scope, then you'll hear some people saying, well, I'm making it a PaaS, because Kubernetes is too complicated for my developers and we need to give them something. There's some great material out there from the likes of Intuit and Adobe, who are two big contributors to Argo and the Argo projects. They almost have, well they do have, different experiences. One is saying, we went down the PaaS route and it failed. The other one is saying, well we've built a really stable PaaS and it's working. What are they trying to do? They're trying to deliver an outcome to make it easy to use and consume Kubernetes. So you could go out there and say, hey, I'm going to build a Kubernetes cluster. Sounds like Argo CD is a great way to expose that to my developers so they can use Kubernetes without having to use Kubernetes and start automating things.
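For reference, the Argo CD approach being discussed typically comes down to an Application object along these lines; the repository URL, path, and namespaces are hypothetical, and this is only a sketch of one way it might be written.

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: orders-api                 # hypothetical app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/platform/orders-api.git   # placeholder repository
    targetRevision: main
    path: deploy/overlays/production
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  syncPolicy:
    automated:
      prune: true      # remove cluster objects that were deleted from Git
      selfHeal: true   # revert manual drift back to what Git declares

Developers push changes to Git, Argo CD reconciles the cluster to match, and they never have to touch kubectl directly.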
That is an approach, but you're going to be going completely open source and you're going to have to bring in all the individual components, or you could just lay that, lay it down, and consume it as a service and not have to- >> And you mentioned Intuit. They were the ones who kind of brought that into the open. >> They did. Intuit is the primary contributor to the Argo set of products. >> How has that been received in the market? I mean, they had the event at the Computer History Museum last fall. What's the momentum there? What's the big takeaway from that project? >> Growth. To me, growth. I mean go and track the stars on that one. It's just, it's growth. It's unlocking machine learning. Argo Workflows can do more than just make things happen. Argo CD, I think the approach they're taking is, hey, let's make this simple to use, which I think can be lost. And I think credit where credit's due, they're really pushing to bring in a lot of capabilities to make it easier to work with applications and microservices on Kubernetes. It's not just that, hey, here's a GitOps tool. It can take something from a Git repo and deploy it and maybe prioritize it and help you scale your operations from that perspective. It's taking a step back and saying, well how did we get to production in the first place? And what can be done down there to help as well? I think it's growth, expansion of features. They had a huge release just come out, I think it was 2.6, that brought in things that, as a product manager, I don't often look at, like really deep technical things, and say wow, that's powerful. But they have, they've got some great features in that release that really do solve real problems. >> And as the product, as the product person, who's the target buyer for you? Who's the customer? Who's making that? And you got decision maker, influencer, and recommender. Take us through the customer persona for you guys. >> So that Platform Ops, DevOps space, right, the people that need to be delivering Containers as a service out to their organization. But then it's also important to say, well who else are our primary users? And that's developers, engineers, right? They shouldn't have to say, oh well I have access to a Kubernetes cluster. Do I have to use kubectl or do I need to go find some other tool? No, they can just log in to Platform9. It's integrated with your enterprise ID. >> They're the end customer at the end of the day, they're the user. >> Yeah, yeah. They can log in. And they can see the clusters you've given them access to as a Platform Ops Administrator. >> So job well done for you guys. And in your mind the developers are moving fast, coding and happy. >> Chris: Yeah, yeah. >> And from a customer standpoint, you reduce the maintenance cost, because you keep the Ops smoother, so you got efficiency and maintenance costs kind of reduced, or is that kind of the benefits? >> Yeah, yep, yeah. And at two o'clock in the morning when things go inevitably wrong, they're not there by themselves, and we're proactively working with them. >> And that's the uptime issue. >> That is the uptime issue. And Cloud doesn't solve that, right? Everyone's experienced that Clouds can go down, entire regions can go offline. That's happened to all Cloud providers. And what do you do then? Kubernetes isn't your recovery plan. It's part of it, right, but it's that piece. >> You know Chris, to wrap up this interview, I will say that "theCUBE" is 12 years old now. We've been to OpenStack early days.
We had you guys on when we were covering OpenStack and now Cloud has just been booming. You got AI around the corner, AI Ops, now you got all this new data infrastructure, it's just amazing Cloud growth, Cloud Native, Security Native, Cloud Native, Data Native, AI Native. It's going to be all, this is the new app environment, but there's also existing infrastructure. So going back to OpenStack, rolling our own cloud, building your own cloud, building infrastructure cloud, in a cloud way, is what the pioneers have done. I mean this is where we're at. Now we're at this scale next level, abstracted away and made operational. It seems to be the key focus. We look at CNCF at KubeCon and what they're doing with CloudNativeSecurityCon, it's all about operations. >> Chris: Yep, right. >> Ops and you know, that's going to sound counterintuitive 'cause it's a developer open source environment, but you're starting to see that Ops focus in a good way. >> Chris: Yeah, yeah, yeah. >> Infrastructure as code way. >> Chris: Yep. >> What's your reaction to that? How would you summarize where we are in the industry relative to, am I getting, am I getting it right there? Is that the right view? What am I missing? What's the current state of the next level, NextGen infrastructure? >> It's a good question. When I think back to sort of late 2019, I sort of had this aha moment as I saw what really truly is delivering infrastructure as code happening at Platform9. There's an open source project, Ironic, which is now also available within Kubernetes as Metal Kubed, that automates Bare Metal as code, which means you can go from an empty server, lay down your operating system, lay down Kubernetes, and you've just done everything delivered to your customer as code with a Cloud Native platform. That to me was sort of the biggest realization that I had as I was moving into this industry: wait, it's there. This can be done. And the evolution of tooling and operations is getting to the point where that can be achieved, and it's focused on by a number of different open source projects. Not just Ironic and Metal Kubed, but that's a huge win. That is truly getting your infrastructure as code. >> John: That's an inflection point, really. >> Yeah. >> If you think about it, 'cause that's one of the problems we had with the Bare Metal piece, was the automation and also making it Cloud Ops, cloud operations. >> Right, yeah. I mean, one of the things that I think Ironic did really well was saying let's just treat that piece of Bare Metal like a Cloud VM or an instance. If you've got a problem with it, just give the person using it, or whatever's using it, a new one and reimage it. Just tell it to reimage itself and it'll just (snaps fingers) go. You can do self-service with it. In Platform9, if you log in to our SaaS Ironic, you can go and say, I want that physical server to myself, because I've got a giant workload, or let's turn it into a Kubernetes cluster. That whole thing is automated. To me that's infrastructure as code. I think one of the other important things that's happening at the same time is we're seeing GitOps, we're seeing things like Terraform. I think it's important for organizations to look at what they have and ask, am I using tools that are fit for tomorrow or am I using tools that are yesterday's tools to solve tomorrow's problems? And especially when it comes to modernizing infrastructure as code, I think that's a big piece to look at. >> Do you see Terraform as old or new? >> I see Terraform as old.
It's a fantastic tool, capable of many great things, and it can work with basically every single provider out there on the planet. It is able to do things. Is it best fit to run in a GitOps methodology? I don't think it is quite at that point. In fact, if you went and looked at Flux, Flux has ways that make Terraform GitOps compliant, which is absolutely fantastic. It's using two tools, the best of breeds, which is solving that tomorrow problem with tomorrow's solutions. >> So the new solutions, old versus new. I like this old way, new way. I mean, Terraform is not that old and it's been around for about eight years or so, whatever. But HashiCorp is doing a great job with that. I mean, so okay with Terraform, what's the new address? Is it more complex environments? Because Terraform made sense when you had basic DevOps, but now it sounds like there's a whole other level of complexity. >> I got to say. >> New tools. >> That kind of amalgamation of that application into infrastructure. Now my app team is paying way more attention to that manifest file, which is what GitOps is trying to solve. Let's templatize things. Let's version control our manifest, be it Helm, Kustomize, or just a straight up Kubernetes manifest file, plain and boring. Let's get that version controlled. Let's make sure that we know what is there, why it was changed. Let's get some auditability and things like that. And then let's get that deployment all automated. So that's predicated on the cluster existing. Well, why can't we do the same thing with the cluster, the inception problem. So even if you're in public cloud, the question is like, well what's calling that API to call that thing to happen? Where is that file living? How well can I manage that in a large team? Oh my God, something just changed. Who changed it? Where is that file? And I think that's one of the big pieces to be solved. >> Yeah, and you talk about Edge too and on-premises. I think one of the things I'm observing, and certainly when DevOps was rocking and rolling and infrastructure as code was like the real push, it was pretty much the public cloud, right? >> Chris: Yep. >> And you did Cloud Native and you had stuff on-premises. Yeah you did some lifting and shifting in the cloud, but the cool stuff was going in the public cloud and you ran DevOps. Okay, now you got on-premise cloud operation and Edge. Is that the new DevOps? I mean 'cause what you're kind of getting at with the old new, old new Terraform example is an interesting point, because you're pointing out potentially that that was good DevOps back in the day, or it still is. >> Chris: It is, I was going to say. >> But depending on how you define what DevOps is. So if you say, I got the new DevOps with public, on-premise and Edge, that's just not all public cloud, that's essentially distributed Cloud Native. >> Correct. Is that the new DevOps in your mind or is that? How would you, or is that oversimplifying it? >> Or is that that term where everyone's saying Platform Ops, right? Has it shifted? >> Well you bring up a good point about Terraform. I mean Terraform is well proven. People love it. It's got great use cases and now there seems to be new things happening. We call things like super cloud emerging, which is multicloud and abstraction layers. So you're starting to see stuff being abstracted away for the benefits of moving to the next level, so teams don't get stuck doing the same old thing. They can move on.
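The "version control our manifest" point a little earlier, whether the manifests are Helm charts, Kustomize overlays, or plain YAML, can be as small as a kustomization file kept in Git next to the manifests it references. The file names, image, and namespace here are assumptions, purely for illustration.

# kustomization.yaml, stored in the same Git repository as the manifests
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: production
resources:
  - deployment.yaml
  - service.yaml
images:
  - name: registry.example.com/orders-api
    newTag: "1.4.2"                # the kind of change a reviewed pull request would carry
commonLabels:
  app.kubernetes.io/part-of: orders

Every resourcing or version change then lands as an auditable commit, which answers the "who changed it, and where is that file" questions raised above.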
Like what you guys are doing with Platform9 is providing a service so that teams don't have to do it. >> Correct, yeah. >> That makes a lot of sense, So you just, now it's running and then they move on to the next thing. >> Chris: Yeah, right. >> So what is that next thing? >> I think Edge is a big part of that next thing. The propensity for someone to put up with a delay, I think it's gone. For some reason, we've all become fairly short-tempered, Short fused. You know, I click the button, it should happen now, type people. And for better or worse, hopefully it gets better and we all become a bit more patient. But how do I get more effective and efficient at delivering that to that really demanding- >> I think you bring up a great point. I mean, it's not just people are getting short-tempered. I think it's more of applications are being deployed faster, security is more exposed if they don't see things quicker. You got data now infrastructure scaling up massively. So, there's a double-edged swords to scale. >> Chris: Yeah, yeah. I mean, maintenance, downtime, uptime, security. So yeah, I think there's a tension around, and one hand enthusiasm around pushing a lot of code and new apps. But is the confidence truly there? It's interesting one little, (snaps finger) supply chain software, look at Container Security for instance. >> Yeah, yeah. It's big. I mean it was codified. >> Do you agree that people, that's kind of an issue right now. >> Yeah, and it was, I mean even the supply chain has been codified by the US federal government saying there's things we need to improve. We don't want to see software being a point of vulnerability, and software includes that whole process of getting it to a running point. >> It's funny you mentioned remote and one of the thing things that you're passionate about, certainly Edge has to be remote. You don't want to roll a truck or labor at the Edge. But I was doing a conversation with, at Rebars last year about space. It's hard to do brake fix on space. It's hard to do a, to roll a someone to configure satellite, right? Right? >> Chris: Yeah. >> So Kubernetes is in space. We're seeing a lot of Cloud Native stuff in apps, in space, so just an example. This highlights the fact that it's got to be automated. Is there a machine learning AI angle with all this ChatGPT talk going on? You see all the AI going the next level. Some pretty cool stuff and it's only, I know it's the beginning, but I've heard people using some of the new machine learning, large language models, large foundational models in areas I've never heard of. Machine learning and data centers, machine learning and configuration management, a lot of different ways. How do you see as the product person, you incorporating the AI piece into the products for Platform9? >> I think that's a lot about looking at the telemetry and the information that we get back and to use one of those like old idle terms, that continuous improvement loop to feed it back in. And I think that's really where machine learning to start with comes into effect. As we run across all these customers, our system that helps at two o'clock in the morning has that telemetry, it's got that data. We can see what's changing and what's happening. So it's writing the right algorithms, creating the right machine learning to- >> So training will work for you guys. You have enough data and the telemetry to do get that training data. 
>> Yeah, obviously there's a lot of investment required to get there, but that is something that ultimately could be achieved with what we see in operating people's environments. >> Great. Chris, great to have you here in the studio. Great wide-ranging conversation on Kubernetes and Platform9. I guess my final question would be how do you look at the next five years out there? Because you got to run the product management, you got to have that 20 mile steer, you got to look at the customers, you got to look at what's going on in the engineering and you got to kind of have that arc. This is the right path kind of view. What's the five year arc look like for you guys? How do you see this playing out? 'Cause KubeCon is coming up and we're seeing Kubernetes kind of break away with security. They had, they didn't call it KubeCon Security, they call it CloudNativeSecurityCon, they just had the inaugural event in Seattle and it seemed to go well. So security is kind of breaking out and you got Kubernetes. It's getting bigger. Certainly not going away, but what's your five year arc of how Platform9 and Kubernetes and Ops evolve? >> To stay on that theme, it's focusing on what is most important to our users and getting them to a point where they can just consume it, so they're not having to operate it. So it's finding those big items and bringing that into our platform, something that's consumable, that's just taken care of, that's tested with each release. So it's simplifying operations more and more. We've always said freedom in cloud computing. Well we started on, we started on OpenStack and made that simple. Stable, easy, you just have it, it works. We're doing that with Kubernetes. We're expanding out that user base, right, we're saying bring your developers in, they can download their kubeconfig. They can see those Containers that are running there. They can access the events, the log files. They can log in and build a VM using KubeVirt. They're self servicing. So it's alleviating pressures off of the Ops team, removing the help desk systems that people still seem to rely on. So it's like, what comes into that field that is the next biggest issue? Is it things like CI/CD? Is it simplifying GitOps? Is it bringing in security capabilities to talk to that? Or is that a piece that is a best of breed? Is there a reason that it's been spun out to its own conference? Is this something that deserves a focus, that should be a specialized capability instead of tooling, and vendors that we work with, that we partner with, that could be brought in as a service? I think it's looking at those trends and making sure that what we bring in has the biggest impact to our users. >> That's awesome. Thanks for coming in. I'll give you the last word. Put a plug in for Platform9 for the people who are watching. What should they know about Platform9 that they might not know about it, or what should? When should they call you guys and when should they engage? Take a minute to give the plug. >> The plug. I think it's, if your operations team is focused on building Kubernetes, stop. That shouldn't be in the cloud. That shouldn't be in the Edge, that shouldn't be at the data center. They should be consuming it. If your engineering teams are all trying different ways and doing different things to use and consume Cloud Native services and Kubernetes, they shouldn't be. You want consistency. That's how you get economies of scale.
Provide them with a simple platform that's integrated with all of your enterprise identity, where they can just start consuming instead of having to solve these problems themselves. It's those, it's those two personas, right? Where the problems manifest. What are my operations teams doing, and are they delivering to my company or are they building infrastructure again? And are my engineers sprinting or crawling? 'Cause if they're not sprinting, you should be asking the question, do I have the right Cloud Native tooling in my environment and how can I get them back? >> I think it's developer productivity, uptime, security are the tell signs. You get that done. That's the goal of what you guys are doing, your mission. >> Chris: Yep. >> Great to have you on, Chris. Thanks for coming on. Appreciate it. >> Chris: Thanks very much. >> Okay, this is "theCUBE" here, finding the right path to Cloud Native. I'm John Furrier, host of "theCUBE." Thanks for watching. (upbeat music)

Published Date : Feb 17 2023



Mathew Ericson, Commvault and David Ngo, Metallic | KubeCon + CloudNativeCon NA 2020


 

>> From around the globe, it's theCUBE with coverage of KubeCon and CloudNativeCon North America 2020 virtual, brought to you by Red Hat, the Cloud Native Computing Foundation and ecosystem partners. >> Hi, and welcome back to theCUBE. I'm Joep Piscaer, I'm covering KubeCon CloudNativeCon here remotely from the Netherlands. And I'm joined by Commvault's Mathew Ericson, he's a Senior Product Manager, as well as David Ngo, Vice President of Metallic Products and Engineering, to talk about the cloud native space and data protection in the Cloud Native space. So both, welcome to the show. And I want to start off with kind of the why question, right? Why are we here obviously, but also why are we talking about data protection? I thought we had that figured out. So David, can you shed some light on how data protection is totally different in the cloud native container space? >> Sure, absolutely, thank you. I think the thing to keep in mind is that containers are an evolution and a revolution actually in the virtualization space and the cloud space. What we're seeing is that customers are turning more and more to SaaS based applications and infrastructure in order to modernize their data centers and their data estate and their compute environments. And when they do that, they're looking for solutions that match how they deploy their applications. And SaaS for us is an important area of that space. So, Metallic is Commvault's portfolio of SaaS delivered and SaaS native data protection capabilities and offerings to allow customers to take advantage of the best of SaaS, that is easy to try, easy to buy, easy to deploy, no infrastructure required, and combine that with the technology and experience Commvault has built over the last 20 years to deliver an enterprise grade data protection solution delivered as SaaS. And so, with Kubernetes and deploying in the cloud and modernizing applications, I think that's very appealing to customers to also be able to modernize their data protection. >> Yeah, so I get the SaaS part. I mean, SaaS is an important way of delivering services. It is, especially in the mid-market, something customers prefer, they want to have that simplicity, that easy onboarding, as well as the OPEX of paying a subscription fee instead of longer term fees. So, the delivery model makes sense, it fits into the paradigm of making it simple, getting started easily. I get that, but Metallic isn't a traditional backup solution in that sense, right? It's not backing up necessarily just physical machines or just virtual machines. It has a relevance in the cloud native space. And the way I understand it, and please, if you can shed some light on that, Matt, is how is it different? What does it do that kind of makes it stand apart? >> Yeah, look, what we've found is the application developers can be in control now. So it's not like a traditional backup, that's what's changed. At this point, the application developer is free to create the infrastructure that he or she needs. And that freedom has meant that a bunch of stateful applications, the apps that we didn't think were going to live in Kubernetes, have made their way to Kubernetes and they're making their way fast. So why is Metallic different? Because it's taking its lead from the developer. So it's using things like namespaces and label selectors. So basically take input from the developer on what information is important and needs to be protected, and then protecting it.
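As an illustrative sketch of the namespace-and-label approach just described, the objects below carry a common label that a protection policy could select on. The label key, names, and sizes are assumptions for illustration, not Metallic's actual configuration syntax.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: orders-db-data
  namespace: production
  labels:
    app.kubernetes.io/part-of: orders    # hypothetical label the developer applies in CI/CD
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-db
  namespace: production
  labels:
    app.kubernetes.io/part-of: orders
spec:
  replicas: 1
  selector:
    matchLabels:
      app: orders-db
  template:
    metadata:
      labels:
        app: orders-db
        app.kubernetes.io/part-of: orders
    spec:
      containers:
        - name: postgres
          image: postgres:15               # placeholder database image
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: orders-db-data

A protection selection scoped to the production namespace, or to the label app.kubernetes.io/part-of=orders, would then pick up the workload definition, the claim, and the volume data behind it, without the developer filing a ticket with the operations team.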
So it's your easy button to keep that Kubernetes development protected while you keep pace with the innovation within the organization. >> So you raise a valid point, cloud native has many advantages. It also has an extra challenge to account for, which is fragmentation, right? In the olden days, let's call it that, we had a virtual machine, maybe a couple dozen, that made up an application. And it was fairly easy to pinpoint the kind of, the sort of confines of an application. This is my application. But now with cloud native, an application's data can basically live anywhere. In a single cloud vendor, in many different cloud accounts, across different services, even across the public clouds themselves, like in a true multi-cloud scenario. And figuring out what is part of an application in that enormous fragmentation is a challenge I think is understated and underestimated in a lot of operational environments, with customers, with their applications in production. And that's where I think a product needs to figure out how to make sure an application is still backed up, is still protected in the way that is necessary for that given application. So I wonder how that works with Metallic. How do you kind of figure out what part of that enormous fragmentation is part of a single application? >> Yeah, so Metallic effectively integrates and speaks natively with the kube-apiserver. So it's taking its lead from the system of truth, which is the orchestrator, which is Kubernetes itself. So for example, if you say everything in your production namespace needs protection, every night or every four hours, whatever that may be, it steps out and asks Kubernetes what applications exist there. It then maps all of the associated API resources associated with that application, including the persistent volumes and persistent volume claims, mounts up and grabs the data from them as well. And that allows us to then reapply or reschedule that application either back to that original cluster or to another one for application mobility, wherever they are. >> So how do you make sure you, it kind of, what's the central point where everything comes together for that given application? Is that something the developer does as part of their release process or as part of their CICD? How do you figure out what components are part of an application? >> That is definitely a big challenge in the industry today. So, today we use label selectors predominantly. We find developers have been educating us on what works for them. And they've said, "Our CICD system is going to label everything associated with this app, as namespaced, then non-namespaced resources. So just here, take my label, grab everything under that, and you will be good." The reality is that doesn't work for every business. Some businesses drop things into a specific namespace. And then you've got the added challenge that all of your data doesn't actually just live in Kubernetes. What about your image registries? What about etcd? What about your Source Code Control and CICD systems? So we're finding that even VMs as well are playing a part in this ecosystem right now until applications can fully migrate. >> Yeah, and then let's zoom out on that a little bit. I mean, I think it's great that developers now kind of have flipped the paradigm where backup and data protection used to be something squarely in the OPS domain.
It's now made its way into the .dev domain, where it's become fairly easy to tag resources as application X, application Y, and then it automatically gets pulled into the backup based on policies. I mean, that's great, but let's zoom out a little bit and figure out, why is this happening? Why are developers even being put in a position of backing up their applications? So David, do you want to shed some light on that for me? >> Sure, I think data protection is always going to be a requirement and you'll have persistent data, right? There are other elements of applications that will always need to be protected, and data protection is often something that is an afterthought, but it's something that needs to be considered from the beginning. And Metallic, in being able to support deployments not just in the cloud but on-premises as well, across any number of certified distributions of Kubernetes, gives you the flexibility to make sure that those apps and that data are protected no matter where they live. Being able to do that from a single pane of glass, being able to manage your Kubernetes deployments in different environments, is very important there. >> So let's dive into that a little bit. I hear you say, Certified Kubernetes Distributions. So what's kind of the common denominator we need to use Metallic in an environment? Because I hear On-Prem, I hear public cloud. So it seems to me like this is a pretty broad product in terms of what it supports in its scope. But what's the lowest common denominator, for instance, in the On-Prem environment? >> Sure, so we support all CNCF certified distributions of Kubernetes today. And in the cloud, we support Azure with AKS and AWS with EKS. So you can really use the one Metallic environment, the one interface, to be able to manage all of those environments. >> And so what about that storage underneath? Is that all through CSI? >> Yes. So we support CSI on the backend of the Kubernetes applications, and we can then protect all the data stored there. >> And so how does this, I mean, you acquired Hedvig about a year ago, I want to say. Not sure on the exact date, but you acquired Hedvig a little while ago. So how does that come into play in the Metallic offering? >> Sure, the Hedvig distributed storage platform is a fantastic platform on which to provision and scale Kubernetes applications and clusters. And that, having full integration with Kubernetes on the storage side, we support that natively, and it really builds on the value that Commvault can bring as a whole with all of its offerings as a platform to Kubernetes. >> All right. So, zooming out just a little more, I want to get a feel for the coverage of the portfolio of Commvault, as we're ushering into this cloud native era, as we're helping customers make that move and make that transition. What's the positioning of Metallic basically in the transformation customers are going through from On-Prem, kind of lift and shift cloud, into the cloud native space? >> Yeah, so with today's announcements, our hybrid cloud support and our hybrid cloud initiatives really help customers manage data wherever it lives, as I've mentioned earlier. Customers can start with workloads On-Prem and start protecting workloads that they either have migrated or are starting to build in the cloud natively, and really cover the gamut of infrastructure and hypervisors and file systems and storage locations amongst all of these locations. So from our perspective, we think that hybrid is here to stay, right?
There are very few customers who are either going to be all on-premises or all in the cloud. Most customers have some requirement that keeps them in a hybrid configuration, and we see that being prevalent for quite some time. So supporting customers in their transformation, right? Where they are moving applications from on-premises to the cloud, either refactoring or lift and shift, or what have you. It's very important to them, it's very important for us to be able to support that motion. And we look forward to helping them along the way. >> Awesome, so one last question for Matt. I mean, Metallic is a set of servers, right? That means you run it, you operate it, you build it. So I wonder, is Metallic itself cloud native? How does it scale? What are kind of the big components that Metallic has made up of? >> So Metallic itself is absolutely cloud native. It is sitting inside Azure today. I won't go into all the details. In fact, David could probably provide far more detail there. But I think Metallic is cloud native with respect to the fact that it's speaking natively to your applications, your cloud instances, your Vms. And then it's giving you the agility and the ability to move them where you need them to be. And that's assisting people in that migration. So in the past, we helped people get from P to V. Now that there are virtualized, applications like Metallic can protect you wherever you are and get you to wherever you need to be, especially into your next cloud of choice. And there's always another cloud. What I'm interested to see and what I'm hoping to see out of KubeCon is how are we doing with KubeVirt and Kubernetes becoming the orchestrator of the data center. And how are we doing with some of these other projects like application CRDs and hierarchical namespaces that are truly going to build a multi-tenanted software defined, distributed application ecosystem, that Metallic I can speak natively to via Kubernetes. >> Awesome. Well, thank you both for being with me here today. I certainly learned a ton about Metallic. I learned a lot about the challenges in cloud native that'll certainly be an area of development in the next couple of years. As you know, that the CNCF will continue to support projects in this space and vendors to work with us in that space as well. So that's it for now. I'm Joep Piscaer, I'm covering for KubeCon here remotely from the Netherlands. I will see you next time, thanks. (bright upbeat music)
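To make the label-selector approach described in this conversation concrete, here is a minimal sketch; it is not Metallic-specific, all names are hypothetical, and it assumes a cluster with the VolumeSnapshot API enabled and a CSI driver that supports snapshots. The idea is simply that if the CI/CD pipeline stamps every resource of an application with a common label, a protection tool can discover the whole application by selector and then use the CSI snapshot primitive to capture its persistent data.

```yaml
# Discover everything the pipeline labeled as part of the "billing" app, e.g.:
#   kubectl get deploy,svc,cm,secret,pvc -n production -l app.kubernetes.io/part-of=billing
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: billing-db-data
  namespace: production
  labels:
    app.kubernetes.io/part-of: billing   # hypothetical label stamped by CI/CD
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
---
# A point-in-time copy of that claim's data via the CSI snapshot API,
# the kind of primitive a backup product could schedule every night or every four hours.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: billing-db-data-snap
  namespace: production
  labels:
    app.kubernetes.io/part-of: billing
spec:
  volumeSnapshotClassName: csi-snapclass   # assumes the installed CSI driver provides one
  source:
    persistentVolumeClaimName: billing-db-data
```

Non-namespaced resources and anything living outside the cluster, such as image registries, etcd, or source control, would still need to be handled separately, which is exactly the gap the speakers call out above.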

Published Date : Nov 19 2020

Steve Gordon, Red Hat | KubeCon + CloudNativeCon Europe 2020 – Virtual


 

>> Voice over: From around the globe, it's theCUBE with coverage of KubeCon and CloudNativeCon Europe 2020 virtual, brought to you by Red Hat, the Cloud Native Computing Foundation and Ecosystem Partners. >> Hi, I'm Stu Mittleman, and welcome back to theCUBE's coverage of KubeCon CloudNativeCon Europe for 2020. Get to talk to the participants in this great community and ecosystem where they are around the globe. And when you think back to the early days of containers, it was, containers, they're lightweight, they're small, going to obliterate virtualization, is often the headline that we had. Of course, we know everything in IT tends to be additive. And here we are in 2020, and containers and virtual machines are living side by side, and often we'll see the back and forth that happens when we talk about virtualization and containers. To talk about that topic specifically, happy to welcome to the program, first time guest, Steve Gordon. He's the director of product management at Red Hat. Steve, thanks so much for joining us. >> Thanks so much Stu, it's great to be here. >> All right, as I teed up, of course, virtualization was a wave that swept through the data center. It is a major piece, not only of what's in the data center, but even if you look at the public clouds, often it was virtualization underneath there. Certain companies like Google, of course, really drove container adoption. And often you hear when people talk about, I built something CloudNative, that underlying piece of being containerized and then using an orchestration layer like Kubernetes is what they talk about. So maybe stop for a sec, Red Hat of course, heavily involved in virtualization and containers, how do you see that landscape, and what's the general conversation you have with customers as to how they make the choice and how the lines blur between those worlds? >> Yeah, so at Red Hat, I think we've been working on certainly the current iteration of virtualization with KVM for around 12 years, and myself a large portion of that. I think one thing that's always been constant is, while from the outside-in virtualization looks like it's been a fairly stable marketplace, it's always changing, it's always evolving. And what we're seeing right now is, as people are adopting containers and even constructs built on top of containers into their workflows, there is more interest and more desire around how can I combine these things, recognizing that still an enormous percentage of my workloads are out there running in virtual machines today, but I'm building new things around them that need to be able to interact with them and springboard off of that. So I think for the last couple of years, I'm sure you yourself have seen a number of different projects pop up in the open source community around this intersection of containers and virtualization and how these technologies can complement each other. And certainly KubeVirt is one of the projects that we've started in this space, in reaction to both that general interest, but also the real customer problems that people have as they try and meld these two worlds. >> So Steve, at Red Hat Summit earlier this year, there was a lot of talk around container native virtualization. If you could just explain what that means, how that might be different from just virtualization in general, and we'll go from there. >> Sure, so back in, I think early 2017, late 2016, we started playing around with this idea.
We'd already seen the momentum around Kubernetes and, as a result, the way we architected OpenShift 3 at the time around Kubernetes. Kubernetes has this strength as an orchestration platform, but also as a shared provider of storage, networking, et cetera, resources. And really thinking about, when we look at virtualization and containers, some of these problems are very common regardless of what footprint the workload happens to fit into. So leveraging that strength of Kubernetes as an orchestration platform, we started looking at, what would it look like to orchestrate virtual machines on that same platform right next to our application containers? And the extension of that, the KubeVirt project and what has ultimately become OpenShift virtualization, is based around that core idea of how can I make a traditional virtual machine, a full operating system, interact with and look exactly like a Kubernetes native construct that I can use from the same platform? I can manage it using the same constructs, I can interact with it using the same console, all of these kinds of ideas. And then on top of that, not just bring in workloads as they lie, but enable really powerful workflows for people who are building a new application in containers that still needs some backend components, say a database that's sitting in a VM, or also trying to integrate those virtual machines into new constructs, whether it's something like a pipeline or a service mesh. We're hearing a lot of questions around those things these days, where people don't want to just apply those things to brand new workloads, but figure out how do they apply those constructs to the broader majority of their fleet of workloads that exist today. >> All right, so I believe back at Red Hat Summit, OpenShift virtualization was in beta. Where does the product, that solution, sit today? >> Right, so at this year's KubeCon, we're happy to announce that OpenShift virtualization is moving to general availability. So it will be a fully supported part of OpenShift. And what that means is, you, as a subscriber to OpenShift, the platform, get virtualization as just an additional capability of that platform that you can enable as an operator from the operator hub, which is really a powerful thing for admins to be able to do. But it also is just really powerful in terms of the user experience. Like once that operator is enabled on your cluster, the little tab shows up that shows that you can now go and create a virtual machine. But you also still get all of the metrics and the shared networking and so on that goes with that cluster, that underlies it all. And you can again do some really powerful things in terms of combining those constructs for both virtual machines and containers. >> When you talk about that line between virtualization and containers, a big question is, what does this mean for developers? How is it different from what they were using before? How do they engage and interact with their infrastructure today? >> Sure, so I think the way a lot of this current wave of technology got started for people, whether it was with Kubernetes or Docker before that, was people would go and grab, the easiest way they could grab compute capacity was to go to their virtual machine farm, whether that was their local virtualization estate at their company, or whether that was taking a credit card to a public cloud, getting a virtual machine and spinning up a container platform on top of that.
What we're now seeing is, as that's transitioning into people building their workloads, almost entirely around these container constructs, in some cases when they're starting from scratch, there is more interest in, how do I leverage that platform directly? How do I, as my application group have more control over that platform? And in some cases, depending on the use case, like if they have demand for GPUs, for example, or other high-performance devices, that question of whether the virtualization layer between my physical host and my container is adding that much value? But then still wanting to bring in the traditional workloads they have as well. So I think we've seen this gradual transition where there is a growing interest in reevaluating, how do we start with container based architectures? To, okay, how has we transitioned towards more production scenarios and the growth in production scenarios? What tweaks do we make to that architecture? Does it still make sense to run all of that on top of virtual machines? Or does it make more sense to almost flip that equation as my workload mix gradually starts changing? >> Yeah, two thoughts come to mind on that. Number one is, are there specific applications out there, or I think about traditional VMs, often that Windows environments that we have there, is that some of the use case to bring them over to containers? And then also, once I've gotten it into the container environment, what are the steps to move forward? Because I have to expect that there's going to be some refactoring, some modernization to take advantage of the innovation and pace of change, not just to take it, containerize it and leave it. >> Yeah, so certainly, there is an enormous amount of potential out there in terms of Windows workloads, and people are definitely trying to work out how do they leverage those workloads in the context of OpenShift and Kubernetes based environment. And Windows containers obviously, is one way to address that. And certainly, that is very powerful in and of itself, for bringing those workloads to OpenShift and Kubernetes, but does have some constraints in terms of needing to be on a relatively recent version of Windows server and so on for those workloads to run in that construct. So where OpenShift virtualization helps with that is we can actually take an existing virtual machine workload, bring that across, even if it's say Windows server 2012, run it on top of the OpenShift virtualization platform as a VM, And then if or when you start modernizing more of that application, you can start teasing that out into actual containers. And that's actually something, it is one of our very early demos at Red Hat Summit 2018, I think was how you would go about doing that, and primarily we did that because it is a very powerful thing for customers to see how they can bring those, all the applications into this mix. And the other aspect of that I'll mention is one of our financial services customers who we've been working with, basically since that demo, they saw it from a hallway at Red Hat Summit and came and said, "Hey, we want to talk to you guys about that." One of the primary workload, is a Windows 10 style environment, that they happened to be bringing in as well. And that's more in that construct of treating OpenShift almost as a pool of compute, which you can use for many different workload types with the Windows 10 being just one aspect of that. 
And the other thing I'll say in terms of the second part of the question, what do I need to do in terms of refactoring? So we are very conscious of the fact that, if this is to provide value, you have to be able to bring in existing virtual machines with as minimal change as possible. So we do have a migration solution set, that we've had for a number of years, for bringing virtual machines to Linux virtualization stacks. We're expanding that to include OpenShift virtualization as a target, to help you bring in those existing virtual machine images. Where things do change a little bit is in terms of the operational approaches. Obviously, the admin console now is OpenShift for those virtual machines, and that does right now present a change. But we think it is a very powerful opportunity in terms of, as people get more and more production workloads into containers, for example, it's going to become a lot more appealing to have a backup solution, for example, that can cater to both the virtual machine workloads as well as any stateful container workloads you may have, which do exist in increasing numbers. >> Well, I'm glad you brought up a stateful discussion, because as an industry, we've spent a long time making sure that virtual machines have storage and networking that is reliable and performant and the like. What should customers and operators be thinking about when they move to containers? Are there things that are different to manage as you bring them into the OpenShift management plane? So what else should I be thinking about? What do I need to do differently when I've embraced this? >> Yeah, so I think in terms of the things that a virtual machine expects, the two big ones that come to mind to me are networking and storage. The compute piece is still there obviously, but I think is a little less complicated to solve just because the OpenShift and broader Kubernetes community have done such a great job of addressing that piece, and that's really what attracted us to it in the first place. But on the networking side, certainly the expectations of a traditional virtual machine are a little bit different to the networking model of Kubernetes by default. But again, we've seen a lot of growth in container based applications, particularly in the context of CloudNative network functions that have been pushing the boundaries of Kubernetes networking as well. That's resulted in projects like Multus, which allow us to give a virtual machine the kind of networking interface that it expects, but also give it the option of using the pod networking natively, for some of those more powerful constructs that are native to Kubernetes. So that's one of those areas where you've got a mix of options, depending on how far you want to go from a modernization perspective versus do I just want to bring this workload in and run it as it is. And my modernization is more built around it, in terms of the other container based things.
Then similarly in storage, it's an area where obviously at Red Hat, we've been working closely with the OpenShift container storage team, but we also work with a number of ecosystem partners on, not just how do we certify their storage plugins and make sure they work well both for containers and virtual machines, but also how do we push forward upstream efforts, around things like the container storage interface specification, to allow for these more powerful capabilities like snapshots, cloning, and so on, which we need for virtual machines, but are also very valuable for container based workloads as well. >> Steve, you've mentioned some of the reasons why customers were moving towards this environment. Now that you're GA, what learnings did you have during beta? Are there any other customer stories you could share that you've learned along this journey? >> Yeah, so I think one of the things I'll say is that, there's no feedback like direct product-in-the-hands-of-customers feedback. And it's really been interesting to see the different ways that people have applied it, not necessarily having set out to apply it, but having gotten partway through their journey and realized, hey, I need this capability. You have something that looks pretty handy and then having success with it. So in particular, in the telecommunications vertical, we've been working closely with a number of providers around the 5G rollouts and the 5G core in particular, where they've been focused on CloudNative network functions. And really what I mean by that is the wave of technology and the push they're making around 5G is to take what they started with network function virtualization a step further, and build that next generation network around CloudNative technologies, including Kubernetes and OpenShift. And as I've been doing that, I have been finding that some of the vendors are more or less prepared for that transition. And that's where, while they've been able to leverage the power of containers for those applications that are ready, they're also able to leverage OpenShift virtualization as a transitionary step, as they modernize the pieces that are taking a little bit longer. And that's where we've been able to run some applications in terms of the load balancer, in terms of a carrier grade database on top of OpenShift virtualization, which we probably wouldn't have set out to do this early in terms of our plan, but we're really able to react quickly to that customer demand and help them get that across the line. And I think that's a really powerful example where the end state may not necessarily be to run everything as a virtual machine forever, but they were still able to leverage this technology as a powerful tool in the context of a broader modernization effort. >> All right, well, Steve, thank you so much for giving us the updates. Congratulations on going GA for this solution. Definitely look forward to hearing more from the customers as they come. >> All right, thanks so much Stu. I appreciate it. >> All right, stay tuned for more coverage of KubeCon CloudNativeCon EU 2020, the virtual edition. I'm Stu Mittleman. And thank you for watching theCUBE. (upbeat music)
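As a companion to the discussion above, here is a minimal, hedged sketch of what a KubeVirt virtual machine with a Multus-attached secondary network can look like. The image, namespace, and network names are illustrative assumptions rather than a Red Hat-provided configuration, and it presumes KubeVirt (or OpenShift Virtualization) and Multus are already installed on the cluster.

```yaml
# Secondary L2 network the VM can attach to, defined as a Multus CNI configuration.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: vm-bridge-net
  namespace: vms
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "bridge",
      "bridge": "br1",
      "ipam": { "type": "dhcp" }
    }
---
# A virtual machine managed like any other Kubernetes object:
# created with kubectl apply, started with `virtctl start fedora-vm`.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: fedora-vm
  namespace: vms
spec:
  running: false
  template:
    metadata:
      labels:
        kubevirt.io/domain: fedora-vm
    spec:
      domain:
        resources:
          requests:
            memory: 2Gi
        devices:
          disks:
          - name: rootdisk
            disk:
              bus: virtio
          interfaces:
          - name: default
            masquerade: {}          # the native pod network, as discussed above
          - name: secondary
            bridge: {}              # VM-style interface on the Multus network
      networks:
      - name: default
        pod: {}
      - name: secondary
        multus:
          networkName: vm-bridge-net
      volumes:
      - name: rootdisk
        containerDisk:
          image: quay.io/containerdisks/fedora:latest   # illustrative image
```

Storage would follow the same pattern: root or data disks can come from PVCs provisioned through the CSI drivers mentioned above, which is what makes snapshot and clone capabilities useful for virtual machine workloads as well as for stateful containers.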

Published Date : Aug 18 2020

Breaking Analysis: VMworld 2019 Containers in Context


 

>> From the Silicon Angle Media Office, in Boston Massachusetts, it's theCUBE. Now, here's your host Dave Vellante. >> Hi everybody, welcome to this breaking analysis where we try to provide you some insights on theCUBE. My name is Dave Vellante. I'm here with Jim Kobielus who was up today, and Jim we were just off of the VMworld 2019. Big show, lot of energy, lot of announcements. I specifically want to focus on containers and the impact that containers are having on VMware, specifically the broader ecosystem and the industry at large. So, first of all, what was you take on VMworld 2019? >> Well, my take was that VMware is growing fast, and they're investing in the future, which is fairly clearly cloud and native computing on containers with Kubernetes and all that. But really that's the future and so, what VMware is doing is they're making significant bets that containers will rule the roost in cloud computing and application infrastructures going forward. But in fact virtual machines, VMs hypervisors are hotter than ever and that was well established last week by the fact that the core predominate announcement last week was a VMware Tanzu, which is not yet a production solution, but is in a limited preview, which is the new platform for coexistence of containers and vSphere. A container run time embedded in vSphere, so that customers can run containers in a highly-iso workloads, in a highly isolated VM environment. In other words, VMware is saying, we're saying to their customers, "You don't have to migrate away from VMs "until you're good and ready. "You can continue to run whatever containers "you build on vShpere, "but we more than encourage you to continue to run VMs "until you're good and ready "to migrate, if ever." >> All right. So, I want to come back and unpack that a little bit, but does your data, does your analysis, when you're talking to customers and the industry at large, is there any evidence from what you see that containers are hurting VMware's business? >> I don't get any sense that containers are hurting VMware's business. I get the strong sense that containers, they've just of course acquired Pivotal, a very additive to the revenue mix at VMware. And VMware, most of their announcements last week were in fact all around Kubernetes, and containers, and products that are very much for those customers who are going deep down the container road. >> So that was a setup question. >> You've got lots of products for them. >> So that was a setup question. So I have some data on this. >> Go ahead >> Right answer. So, I want to show you this. So, Alex, if you wouldn't mind bringing up that slide. And we shared this with you last week when we were prepping for VMworld. This is data from Enterprise Technology Research ETR, and they have a panel of 4500 end user customers that they go out and do spending surveys with them. So, what this shows is, this is container customers spending on VMware. So, you can see it goes back to early January. Now it's a little deceiving here. You see that big spike, but what it shows it that, A, that big spike is the number of shared customers. So, you really didn't have many customers back then that were doing both containers and VMware that ETR found. But as the N gets bigger, 186, 248, 257, 361, across those 461 customers, those are the shared customers in the green. And you can see that it's kind of a flat line. It's holding very well in the high 30's percent range, which is their sort of proprietary metric. 
So, there's absolutely no evidence, Jim, that containers, thus far anyway, are hurting VMware's business. Which of course was the narrative, containers are going to kill VMware, no evidence of that. But then why would they acquire Pivotal? Are they concerned about the future, what's your-- >> Well, they're concerned about cross selling their existing customer base who are primarily on V's, fearing the hypervisors, cross selling them on the new world of Kubernetes base products for cloud computing, and so forth and so on. In other words it's all about how do they grow their revenue base? VMware's been around for more than 20 years now. They rule the roost on the hypervisors. Where do they go from here, in terms of their product mix? Well, Kubernetes and beyond that, things like serverless will clearly be in the range of the things that they could add on. Their customers could add on to their existing deploys. I mean, look at Pivotal. Pivotal has a really strong Kubernetes distribution, which of course VMware co-developed with them. Pivotal also has a strong functions as a service backplane, the Pivotal function service for, serverless environments. So, this acquisition of Pivotal very much positions VMware to capitalize on those opportunities to sell those products when that market actually develops. But I see some evidence that virtual machines are going like gang busters in terms of customer deployments. Last week on theCUBE at VMworld, Mark Lohmeyer who's an SVP at a VMware for one of their cloud business unit, said that in the last year, for example, customers who are using a VMware cloud on AWS, VMware grew the customer base by 400% last year, and grew the number of VMs running in VMware, cloud, and AWS by 900%, which would imply that on average each customer more than doubled the number of VMs they're running on that particular cloud service. That means VMs are very much relevant now, and probably will be going forward. And why is that? That's a good question, we can debate that. >> Well, so the naysayers at VMworld in the audience were tweeting that, "Oh, I though we started Pivotal. "We launched Pivotal so that we didn't have to run VMs on, "or run containers on VMs, "so we could run them on bare metal." Are people running containers on virtual machines? >> Well, they are, yes. In fact, there's a broad range of industry initiatives, not just Tanzu at VMware, to do just that. To run containers on VMs. I mean, there is the KubeVirt, open source project over at CNCF, that's been going for a couple years now. But also, Google has Gvisor, Intel has the Kata containers initiative, I believe that there are a few others. Oh yeah, AWS with Firecracker, last year's reinvent. All this would imply, strongly indicate that these large cloud and tech vendors wouldn't be investing heavily into convergence of containers and VMs and hypervisors, if there weren't a strong demand from customers for hybrid environments where they're going to run both stacks as it were in parallel, why? Well, one of the strong advantages of VMs is workload isolation at the hardware level, which is something that typically container run times don't offer. For example, the workload isolation seems to be one of the strong features that VMware's touting for Tanzu going forward. >> So, VMware is--the centerpiece of VMware's strategy is obviously multicloud, Kubernetes as a lynch pin to enable running applications on different platforms. Will, in your opinion, and of course VMware is hard core enterprise, right? 
Will VMware, two things, will they be able to attract the developers, number one. And number two, will those developers build on top of VMware's platform or are they going to look to their cloud? >> That's a very important question. Last week at VMworld, I didn't get a sense that VMware has a strong developer story. I think that's a really open issue going forward for them. Why would a developer turn to VMware as their core solution provider when they don't offer a strong workbench for building these hybridized VM, /container/serverless applications that seem to be springing up all over? AWS and Microsoft and Google are much stronger in that area with their respective portfolios. >> So, I guess the obvious answer there is Pivotal is their answer to the developer quandary. >> Yes. >> And so, let's talk about that. So, Pivotal was struggling. I talked last week in my analysis, you saw the IPO price and then it dipped down, it never made it back up. Essentially the price that VMware paid the public shareholders for Pivotal was about half of it's initial IPO price, so, okay. So, the stock was struggling, the company didn't have the kind of momentum that, I think, that it wanted, so VMware picks it up. Can VMware fold in Pivotal, and use its go-to-market, and its largess to really prop up Pivotal and make it a leader? >> Well, possibly because Cloud Foundry, Pivotal Cloud Foundry could be the lynch pin of VMware's emerging developer story, if they position in that and really invest in the product in that regard. So yeah, in other words this could very much make VMware a go-to-vendor for the developers who are building the new generation of applications that present serverless functional interfaces, but will have containers under the cover, but also have VMs under the cover providing strong workload isolation in a multi-tenant environment. That would be the promise. >> Now, a couple things. You mentioned Microsoft, of course as you're in the clouding, and Google. The ETR data that I dug into when I wanted to understand, better understand multicloud. Who's got the multicloud momentum? Well, guess who has the most multicloud momentum? It's the cloud guys. Now, AWS doesn't specifically say they participate in multicloud. Certainly their marketing suggest that multicloud is for somebody else, that really they want to have uni-cloud. Whereas Google, and as you're kind of embracing multicloud and Kubernetes specifically, now of course AWS has a Kubernetes offering, but I suspect it's not something that they want to promote hard in the market place because it makes it easier for people to get off of AWS. Your thoughts on multicloud generally, but specifically Kubernetes, and containers as it relates to the big cloud providers. >> Yeah, well my thoughts on multicloud generally is that multicloud is the strategy of the second tier cloud vendors, obviously. If they can't dominate the entire space, at least they can maintain a strong, provide a strong connective tissue for the clouds that actually are deployed in their customer's environments. So, in other words, the Ciscos of the world, the VMwares of the world, IBM. In other words, these are not among the top tier of the public cloud players, hence where do they go to remain relevant? Well, they provide the connective tissue, and they provide the virtualized networking backbones, and they provide the AI ops that enables end-to-end automated monitoring management of the entire mesh. 
The whole notion of a mesh architecture is something that grew up with IBM and Google for lots of reasons, especially due to the fact that they themselves, as vendors, didn't dominate the public cloud. >> Well, so I agree with you. The only issue I would take is I think Microsoft is a leader in public cloud, but because it has a big On-Prem presence, it's in its best interest to push containers and Kubernetes, and so forth. But you're right about the others. Cisco doesn't have a public cloud, VMware doesn't have a public cloud, IBM has a public cloud but it's really small market share, and so it's in those companies', and Google is behind, but it's in those companies' best interest really to promote multicloud, try to use it as a bulwark against AWS, who obviously has awesome market momentum. The other thing that's interesting in the ETR data when I poke in there, it seems like there are more people looking at Google. Now maybe that's 'cause they have such strong strength in data and analytics, maybe it's 'cause they're looking for a hedge on AWS, but the spending data suggests that more and more people are kicking the tires, and more than kicking the tires, on Google, who of course is behind Kubernetes and that container movement, and open source. Your thoughts? >> Yeah, well, in many ways you have to think that Google has developed the key pieces of the new stack for application development in the multicloud. Clearly they developed Kubernetes, it's open source, and also they developed and open sourced TensorFlow, which is the predominant AI workbench essentially for the new generation of AI driven applications, which is everything. But also, if you look at Google developed Node JS for web applications and so forth. So really, Google now is the go-to-vendor for the new generation of open source application development, and increasingly DevOps in a multicloud environment, running over Istio meshes and so forth. So, I think that's, so, look at one of the announcements last weekend at VMworld. VMware and NVIDIA, their announcement of their collaboration, their joint offering to enable AI workloads, training workloads, to run on GPUs in an optimal, high performance fashion within a distributed VMware cloud end-to-end. So really, I think VMware recognizes that the new workloads in the multicloud are predominantly, increasingly, AI workloads. And in order to, as the market goes towards those kinds of workloads, VMware very much recognizes they need to have a strong developer play, and they do with NVIDIA in a sense. Very much so, because NVIDIA with the RAPIDS framework and so forth, and NVIDIA being the predominant GPU vendor, very much is a very strategic partner for VMware as they're going forward, as they hope to line up the AI developers. But Google still is the vendor to beat as regards the AI developers of the world, in that regard, so-- >> So we're entering a world we sometimes call the post-virtual machine world. John Furrier is kind of tongue in cheek on a play on Web 2.0. He calls it Cloud 2.0, which is a world of multiple clouds. As I've said many times, I'm not sure multicloud is necessarily a coherent strategy yet as opposed to sort of a multi-vendor situation, Shadow IT, >> Yes. >> Lines of business, et cetera. But Jim, thanks very much-- >> Sure. >> For coming on and breaking down the container market, and VMworld 2019. It was great to see you. >> Likewise. >> All right, thank you for watching everybody. This is Dave Vellante with Jim Kobielus.
We'll see you next time on theCUBE. (upbeat music)

Published Date : Sep 3 2019

Wikibon 2019 Predictions


 

>> Hi, I'm Peter Burris, Chief Research Officer for Wikibon Cube, and welcome to another special digital community event. Today we are going to be presenting Wikibon's 2019 trends. Now, I'm here in our Palo Alto Studios in kind of a low tech mode. Precisely because all our crews are out at all the big shows bringing you the best of what's going on in the industry, and broadcasting it over The Cube. But that is okay, because I've asked each of our Wikibon analysts to use a similar approach to present their insights into what would be the most impactful trends for 2019. Now the way we are going to do this is, first we are going to use this video as the base for getting our insights out, and then at the end we are going to utilize a crowd chat to give you an opportunity to present your insights back to the community. So, at the end of this video, please stay with us, and share your insights, share your thoughts, your experience, ask your questions about what you think will be the most impactful trends of 2019 and beyond. >> A number of years ago Wikibon predicted that cloud, while dominating computing, would not feature all data moving to the cloud but rather the cloud experience and cloud services moving to the data. We call that true private cloud computing, and nothing has occurred in the last couple of years that has suggested that we were in any way wrong about this prediction. In fact, if we take a look at what's going on with Edge, our expectation is that increasingly Edge computing and on-premises technology, or needs, would further accelerate the rate at which cloud experiences end up on premises, end up at the Edge, and that would be the dominant model for how we think about computing over the course of the next few years. That leads to greater distribution of data. That leads to greater distribution of places where data actually will be used. All under the aegis of cloud computing, but not utilizing the centralized public cloud model that so many predicted. >> A prediction we'd like to talk about is how multi-cloud and orchestration of those environments fit together. At Wikibon, we've been looking for many years at how digital businesses are going to leverage cloud, and cloud is not a singular entity, and therefore the outcomes that you are looking for often require that you use more than one cloud, especially if you are looking at public clouds. We've been seeing the ascendance of Kubernetes as a fundamental, foundational piece of enabling this multi-cloud environment. Kubernetes is not the sole thing, and of course, you don't want to overemphasize any specific tool, but you are seeing, driven by the CNCF and a broad ecosystem, that Kubernetes is getting into all the platforms, both public and private cloud, and we predict that by 2020, 90% of multi-cloud enterprise applications will use Kubernetes to lead the enablement of their multicloud strategies. >> One of the biggest challenges that the industry is going to face over the next few years is how to deal with multi-cloud. We predict, ultimately, that a sizable percentage of the marketplace, as much as 90%, will be taking a multi-cloud approach first to how they conceive, build, and operate their high strategic-value applications that are engaging customers, engaging partners, and driving their businesses forward. However, that creates a pressing need for three new classes of technology:
technology that provides multi-cloud inter-networking; technology that provides orchestration services across clouds; and finally, technologies that ensure data protection across multi-cloud. While each of these domains by themselves is relatively small today, we think that over the next decade they will each grow into markets that are tens of billions, if not hundreds of billions, of dollars in size. >> The prediction I'd like to talk about is Robotic Process Automation, RPA. So we've observed that there's a widening gap between how many jobs are available worldwide and the number of qualified candidates to fill those jobs. RPA, we believe, is going to become a fundamental approach to closing that gap, and really operationalizing artificial intelligence. Executives that we talk to on theCUBE realize they just can't keep throwing bodies at the problem, so these so-called "software robots" are going to become increasingly easy to use. And we think that low code or no code approaches to automation and automating workflows are going to drive the RPA market from its current position, which is around a billion dollars, to more than ten X, or ten billion dollars plus, by 2023. >> I predict that in 2019 what we are going to see is more containerization of AI machine learning for deployment to the Edge, throughout the multi-cloud. It's a trend that's been going on for some time. In particular, what we are going to be seeing is an increasing focus on technologies, or projects and code bases such as Kubeflow, which has been established in this year just gone by, to support that approach for containerization of AI out to the edges. In 2019, we are going to see the big guys, like Google, and AWS, and Microsoft, and others in the whole AI space begin to march around the need for a common framework stack such as Kubeflow, because really that is where many of their customers are going. The data scientists and app developers who are building these applications want to manage these over Kubernetes using these CNCF stacks of tooling and projects to enable a degree of supportability and maintainability and scalability around containerized intelligent applications. >> My prediction is around the move from linear programming and data models to matrix computing. This is a move that's happening very quickly indeed, as new types of workload come on. And these workloads include AI, VR, AR, video gaming, very much at the edge of things. And ARM is the key provider of these types of computing chips and computing models that are enabling this type of programming to happen. So my prediction is that this type of programming is gonna start very quickly in 2019. It's going to roll very rapidly, about two years from now, in 2021, into the enterprise market space, but the preparation for this type of computing and the movement of work right to the edge, very, very close to the sensors, very, very close to where the users are themselves, is going to accelerate over the next decade. >> The prediction I'd like to make in 2019 is that the CNCF, as the steward of the growing cloud native stack, will expand the range of projects to include the frontier topics, really the frontier paradigms, in microservices and cloud computing; I'm talking about serverless. My prediction is that Virtual Kubelet will become an incubating project at CNCF to address the need to provide serverless, event-driven interfaces to containerized, orchestrated microservices.
I'd also like to predict that VM and container coexistence will proceed apace in terms of projects such as, especially, KubeVirt, which I think will also become a CNCF project. And I think it will be adopted fairly widely. And one last prediction, in that vein, is that the recent working group that CNCF has established with Eclipse, around IoT, the internet of things; I think that will come to fruition. There is an Eclipse project called Ditto that uses IoT, and AI, and digital twins in a very interesting way for industrial and other applications. I think that will come under the auspices of the CNCF in the coming year. >> Security remains vexing to the cloud industry, and the IT industry overall. Historically, it's been about restricting access, largely at the perimeter, and once you were through the perimeter, a user would have access to an entire organization's resources, digital resources, whether they be files, or applications, or identities. We think that has to change, largely as a consequence of businesses now being restructured, reorganized, and re-institutionalizing work around data. What's gonna have to happen is a notion of zero trust security is going to be put in place that is fundamentally tied to the notion of sharing data. So, instead of restricting access at the perimeter, you have to restrict access at the level of data. That is going to have an enormous set of implications overall for how the computing industry works. But two key technologies are essential to making zero trust security work. One is software-defined infrastructure, so that you can make changes to the configuration of your security policies and instances by other software, and two, very importantly, high quality analytics that are bringing the network and security functions more closely together and, through the shared data, are increasing the use of AI, the use of machine learning, etc., and ensuring higher quality security models across multiple clouds. >> It's always great to hear from the Wikibon analysts about what is happening in the industry and what is likely to happen in the industry. But now, let's hear from you, so let's jump into the crowd chat as an opportunity for you to present your ideas, your insights, ask your questions, share your experience. What will be the most important trends and issues in 2019 and beyond, as far as you are concerned? Thank you very much for listening. Now let's crowd chat.

Published Date : Oct 17 2018

Jonathan Donaldson, Google Cloud | Red Hat Summit 2018


 

(upbeat electronic music) >> Narrator: Live from San Francisco, it's The Cube, covering Red Hat Summit 2018. Brought to you by Red Hat. >> Hey, welcome back, everyone. We are here live, The Cube in San Francisco, Moscone West for the Red Hat Summit 2018 exclusive coverage. I'm John Furrier, the cohost of The Cube. I'm here with my cohost, John Troyer, who is the co-founder of Tech Reckoning, an advisory and community development firm. Our next guest is Jonathan Donaldson, Technical Director, Office of the CTO, Google Cloud. Former Cube Alumni. Formerly was Intel, been on before, now at Google Cloud for almost two years. Welcome back, good to see you. >> Good to see you too, it's great to be back. >> So, had a great time last week with the Google Cloud folks at KubeCon in Denmark. Kubernetes, rocking the world. Really, when I hear the word de facto standard and abstraction layers, I start to get, my bells go off, let me look at that. Some interesting stuff. You guys have been part of that from the beginning, with the CNCF, Google, Intel, among others. Really created a movement, congratulations. >> Yeah, thank you. It really comes down to the fact that we've been running containers for almost a dozen years. Four billion a week, we launch and collapse. And we know that at some point, as Docker and containers really started to take over the new way of developing things, that everyone is going to run into that scalability wall that we had run into years and years and years ago. And so Craig and the team at Google, again, I wasn't at Google at this time, but they had a really, let's take what we know from internally here and let's take those patterns and let's put them out there for the world to use, and that became Kubernetes. And so I think that's really the massive growth there, is that people are like, "Wow, you've solved a problem, "but not from a science project. "It's actually from something "that's been running for a decade." >> Internally, that's called Borg. That's the tooling that Google used, that their SREs, site reliability engineers, used to massively provision and manage. And they're all software engineers, so it's not like they're operators. They're all Google engineers. But I want to take a minute, if you can, to explain. 'Cause you're new to Google Cloud. You're in the industry, you've been around, you helped form the CNCF, which is the Cloud Native Computing Foundation. You know cloud, you know tech. Google's changed a lot, and Google Cloud specifically has a narrative of, they're one big cloud and they have an application called Google stuff and enterprises are different. You've been there now for almost a year or more. >> Jonathan: Little over a year, yeah. >> What's Google Cloud like right now? Break the myths down around Google Cloud. What's the current status? I know personally, a lot of cloud DNA is coming in from the industry. They've been hiring, making some great progress. Take a minute to explain the Google Cloud. >> Yeah, so it's really interesting. So again, it comes back from where you started from. So Google itself started from a scale consumer SaaS type of business. And so that, they understood really well. And we still understand, obviously, uptime and scalability really, really well. And I would say if you backtrack several years ago, as the enterprise really started to look at public clouds and Google Cloud itself started to spin up, that was probably not, they probably didn't understand exactly all of the things that an enterprise would need.
Really, at that point in time, no one cloud understood any of the enterprise specifically. And so what they did is they started hiring in people like myself and others that are in the group that I'm in. They're former CIOs of large enterprise companies or former VPs of engineering, and really our job in the Office of the CTO for Google Cloud is to help with the product teams, to help them build the products that enterprises need to be able to use the public cloud. And then also work with some of those top enterprise customers to help them adopt those technologies. And so I think now that if you look at Google Cloud, they understand enterprise really, really well, certainly from the product and the technology perspective. And I think it's just going to get better. >> I interviewed Jennifer Lynn, I had a one-on-one with her. I didn't publish it, it was more of a briefing. She runs Product Management, all on security side. >> Jonathan: Yeah, she's fantastic. >> So she's checking the boxes. So the table stakes are set for Google. I know you got to do some basic things to catch up to get in the cloud. But also you have partnerships. Google Next is coming up, The Cube will be there. Red Hat's a partner. Talk about that relationship with Red Hat and partners. So you're very partner-centric with Google Cloud. >> Jonathan: We are. >> And that's important in the enterprise, but so what-- >> Well, there tends to be two main ares that we focus on, from what we consider the right way to do cloud. One of them is open source. So having, which again, aligns perfectly with Red Hat, is putting the technologies that we want customers to use and that we think customers should use in open source. Kubernetes is an example, there's Istio and others that we've put out that are examples of those. A lot of the open source projects that we all take for granted today were started from white papers that we had put out at one point in time, explaining how we did those things. Red Hat, from a partner perspective, I think that that follows along. We think that the way that customers are going to consume these technologies, certainly enterprise customers are, through those partners that they know and trust. And so having a good, flourishing ecosystem of partners that surround Google Cloud is absolutely key to what we do. >> And they love multicloud too. >> They love multicloud. >> Can't go wrong with it. >> And we do too. The idea is that we want customers to come to Google Cloud and stay there because they want to stay there, because they like us for who we are and for what we offer them, not because they're locked into a specific service or technology. And things like Kubernetes, things like containers, being open sourced allows them to take their tool chains all the way from their laptop to their own cloud inside their own data center to any cloud provider they want. And we think hopefully they'll naturally gravitate towards us over time. >> One of the things I like about the cloud is that there's a flywheel, if you will, of expertise. Like I look at Amazon, for instance. They're getting a lot of metadata of the kinds of workloads that are on their cloud, so they can learn from that and turn that into an advantage for them, or not, or for their customers, and how they could do that. That's their business decision. Google has a lot of flywheel action going on. A lot of Android devices connected in the Google system. You have a lot of services that you can bring to bear in the cloud. 
How are you guys looking at that? Say, from a security standpoint alone, that would be a very valuable service to have. I can tap into all the security goodness of Google around what spear phishing is out there, things of that nature. So are you guys thinking like that, in terms of services for customers? How does that play out? >> So we're very consistent on what we consider number one for our customers: privacy, whether they're consumer customers or whether they're enterprise customers. You had mentioned a lot of things, but where we would use some data across customer bases is typically for security, so where we would see some sort of security impact or an attack or something like that that started to impact many customers. And we would then aggregate that information. It's not really customer information. It's just like you said, metadata, themes, or trends. >> John Furrier: You're not monetizing it. >> Yeah, we're not monetizing it, but we're actually using it to protect customers. But when a customer actually uses Google Cloud, that instance is their hermetically sealed environment. In fact, I think we just came out recently with the transparency aspects of it, where it's almost like a two-key type of access: if our engineers have to help the customer with a troubleshooting ticket, that ticket actually has to be opened. That kind of unlocks one door. The customer has to say, "Yes," and that unlocks the other door. And then they can go in there and help the customer do things to solve whatever the problem is. And each one of those is transparently and permanently logged. And then the customer can, at any point in time, go in and see those things. So we are taking customer privacy from an enterprise perspective-- >> And you guys are also a whole building away from Google proper, like it's a completely different campus. So that's important to note. >> It is. And a lot of it just carries over from Google proper itself. If you understood just how crazy and fanatical they are about keeping things inside, secret and proprietary (not proprietary, but not allowing that customer data out), even on the consumer side, it would give a whole-- >> Well, you've got to amplify that, I understand. But what I also see is a good side of that, which is there's a lot of resources you're bringing to bear, or learnings. >> Yeah, absolutely. >> The SRE concept, for instance, is to me really powerful, because Google had to build that out themselves. This is now a paradigm; we're seeing a cloud scale here, with the Cloud Native market bringing in all-new capabilities at scale. Horizontally scalable, fully synchronous, microservices architecture. This future is a complete game-changer on functionality at the different scale points. So there's no longer the operator in the room, provisioning storage here. >> And this is what we've been doing for years and years and years. That's how all of Google itself, that's how search and ads and Gmail and everything runs: in containers, all orchestrated by Borg, which is our internal version of Kubernetes. And so we're really just bringing those learnings into Google Cloud and to our customers.
>> Yeah, so I think they showed some of the service broker stuff, and I think they showed some of the Kubeflow stuff, which puts machine learning on Kubernetes underneath OpenShift. I think those are very, very interesting for people that want to start getting into using AutoML, which is kind of roll-your-own machine learning, or even the voice or vision APIs to enhance their products. And I think that those are going to be keys. Easing the adoption of those, making them really, really easy to consume, is what's going to drive the significant ramp on using those types of technologies. >> One of the key touchpoints here has been the fact that this stuff is real-world and production-ready, the fact that enterprise architectures are now rolling out apps within days or weeks. One of those things that's now real is ML. And even in the opening keynote, they talked about using a little bit of it to optimize the scheduling and what sessions were in which rooms. As you talk to enterprises, it does seem like this stuff is being baked into real enterprise apps today. Can you talk a little bit about that? >> Sure, so I certainly can't give any specific examples, because I think what you're saying is that a lot of enterprises or a lot of companies are looking at that like, "Oh, this is our new secret sauce." It always used to be that they had some interesting feature that a competitor would have to keep up with or catch up with. But I think they're looking at machine learning as a way to enhance that customer experience, so that it's a much more intimate experience. It feels much more tailored to whomever is using their product. And I think that you're seeing a lot of those types of things that people are starting to bake into their products. Again, this is one of those things where we've been using machine learning for almost 10 years inside Google. Things like Gmail, even in the early days, with spam filtering, something just mundane like that. Or we even turned it on in our data centers, 'cause it does a really good job of lowering the PUE, the power usage effectiveness, in data centers. And those are very mundane things. But we have a lot of experience with that, and we're exposing that through these products, and we're starting to see customers gravitate toward and grab onto those. Instead of having to hard code something that is a one-to-many kind of thing, where I may get it right or I may have to tweak it over time but I'm still generalizing what the use cases are that my customers want to see, once they turn on machine learning inside their applications, it feels much more tailored to the customer's use cases.
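To make that "learn it rather than hard-code it" point concrete, here is a toy sketch of the mundane spam-filtering case Jonathan mentions. It is purely illustrative: it assumes scikit-learn is installed, and the messages and labels are made-up placeholder data, not anything from a real mail system.

# Toy spam filter: instead of hand-writing one-size-fits-all rules,
# train a small model on labeled examples and let it generalize.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Made-up placeholder training data.
messages = [
    "Win a free prize, click this link now",
    "Lowest price guaranteed, act today",
    "Meeting moved to 3pm, see agenda attached",
    "Can you review the design doc before Friday?",
]
labels = ["spam", "spam", "ham", "ham"]

# TF-IDF features feeding a Naive Bayes classifier, a classic spam-filter setup.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(messages, labels)

print(model.predict(["Free prize inside, click now"]))        # likely ['spam']
print(model.predict(["Agenda for Friday's design review"]))   # likely ['ham']

The same shape of idea, swapping hard-coded rules for a model trained on your own data, is what services like AutoML and the vision and voice APIs package up for teams that do not want to build the pipeline themselves.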
>> Machine learning as a service seems to be a big hot button that's coming out. How are you guys looking at the technical direction from the cloud within the enterprise? 'Cause you have three classes of enterprise. You have the early adopters, the power users at the cutting edge. Then you have the fast followers, then you have everybody else. The everybody else and the fast followers, they know about Kubernetes; some might not even know, "What is Kubernetes?" So you have kind of-- >> Jonathan: "What containers?" >> A level of progress where people are. How are you guys looking at addressing those three areas? Because you could blow them away with TensorFlow as a service. "Whoa, wowee, I'm just trying to get my storage LUNs moving to a cloud operating system." There's different parts of this journey. Is there a technical direction that addresses these? What are you guys doing? >> So typically we'll work with those customers to help them chart the path through all those things, and make it easy for them to use and consume. Machine learning is still complex; unless you are a stats major or a math major, a lot of the algorithms, the linear algebra, and things like that are still very complex topics. But then again, so were networking and BGP and things like OSPF a few years ago. Technology always evolves, and the thing you can do is help pull people along the continuum, by making it easy for them to use and by providing a lot of education. And so we work with customers on all ends of the spectrum. Even if it's just, "How do I modernize my applications, or how do I even just put them into the cloud?" We have teams that can help do that or can educate on that. There are customers that are like, "I really want to go do something special with maybe refactoring my applications. I really want to get the Cloud Native experience." We help with that. And those customers that say, "I really want to find out about this machine learning thing. How can I actually make that an impactful portion of my company's portfolio?" We can certainly help with that. And there's no one path; typically in any large enterprise you'll find some people in each one of those camps. >> Yeah, and they'll also want to put their toe in the water here and there. The question I have for you guys is, you've got a lot of goodness going on. You're not trying to match Amazon speed for speed, feature for feature; you guys are picking your shots. That is core to Google, that's clear. Is there a use case or a set of building blocks that are highly adopted with you guys now, in that as Google gets out there and gets some penetration in the enterprise, what are the key things you see as successes for you guys, out of the gate? Is there a basic building block? Amazon's got EC2 and S3. What are you guys seeing as the core building blocks of Google Cloud, from a product standpoint, that are getting the most traction today? >> So I think we're seeing the same types of building blocks that the other cloud providers are. Some of the differences are that we look at security differently, because of, again, where we grew up. We do things like live migration of virtual machines, if you're using virtual machines, because we've had to do that internally. So I think there are some differences on even some of the basic blocking and tackling type of things. But I do think that just moving to the cloud, in and of itself, is not enough. That's a stepping stone. We truly believe that artificial intelligence and machine learning, the Cloud Native style of applications, containers, things like service meshes, those things that reduce the operational burdens and improve the rate of new feature introduction, as well as the machine learning things, I think that's what people tend to come to Google for. And we think that's a lot of what people are going to stay with us for. >> I overheard a quote I want to get your reaction to. I wrote it down, it says, "I need to get away from VPNs and firewalls. I need user and application layer security with un-phishable access, otherwise I'm never safe." So this is kind of a user perspective or customer perspective. Also, with cloud there are no perimeters, so you've got phishing problems. Spear phishing's one big problem. Security, you mentioned that.
And then another quote I had was, "Kubernetes is about running frameworks, and it's about changing the way applications are going to be built over time." That's where, I think, SRE and Istio are very interesting, and Kubeflow. This is a modern architecture for-- >> There's even KubeVirt out there, where you can run a VM inside a container, which is actually what we do internally too. So there's a lot of different ways to slice and dice. >> Yeah, how relevant are those concepts? Are you hearing that as well from customers? 'Cause that's a pain point, but it's also the modern way software development is going to be done. So there's the pain point, I need some aspirin for that. And then I need some growth with the new applications being built and hiring talent. Is that consistent with how you guys see it? >> So which one should I tackle? You're talking about-- >> John Furrier: VPN, do the VPNs first. >> The VPNs first, okay. >> John Furrier: That's my favorite one. >> So, to give you the backstory, one of the most interesting things when I came to Google, having come from other large enterprise vendors before this, was that there are no VPNs. We don't even have one on our laptops. They have this thing called BeyondCorp, which is essentially now productized as the Identity-Aware Proxy. The idea is, we trust no one and nothing with anything. It's not the walled-garden approach of firewall-and-VPN security. What we do is base the decision on the resource you're requesting access to: are you on a trusted machine, one that corporate has given you? And do you have two-factor authentication, so not only what you know but also what you have? And so they take all of those things into account. Is this the laptop that's registered to you? Do you have your two-factor authentication? Have you authenticated to it and is it a trusted platform? Boom, then I can gain access to the resources. But they will also look for things like, if all of a sudden you were sitting here and I'm in San Francisco, but something from some country in Asia pops up with my credentials on it, they're going to slam the door shut, going, "There's no way that you can be in two places at one time." And so that's what the Identity-Aware Proxy or BeyondCorp does, kind of in a nutshell. And so we use that everywhere, internally and externally. And so that's one of the ways that we do security differently: without VPNs. And that's actually in front of a lot of the GCP technologies today, so you can actually leverage that. So I would say we take-- >> Just rethinking security. >> It's rethinking security, again, based upon a long history, and not only that, but upon what we use internally, from our corporate perspective. And now to get to the second question, yeah.
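For readers who want a concrete picture of the BeyondCorp and Identity-Aware Proxy model Jonathan describes, here is a minimal sketch of that kind of per-request, zero-trust access decision in Python. It is illustrative only: the device registry, the request fields, and the impossible-travel check are simplified stand-ins, not Google's actual implementation.

from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

# Hypothetical sketch of a BeyondCorp-style decision: trust is evaluated per
# request from device state, credentials, and context, never from network
# location (no VPN, no perimeter).

@dataclass
class AccessRequest:
    user: str
    device_id: str
    has_second_factor: bool   # "what you have" in addition to "what you know"
    resource: str
    country: str
    timestamp: datetime

TRUSTED_DEVICES = {"laptop-1234"}   # corporate-issued, registered devices (placeholder)

def impossible_travel(prev: AccessRequest, cur: AccessRequest) -> bool:
    # Flag credentials appearing in two distant countries within an implausible window.
    return prev.country != cur.country and cur.timestamp - prev.timestamp < timedelta(hours=1)

def allow(request: AccessRequest, previous: Optional[AccessRequest]) -> bool:
    if request.device_id not in TRUSTED_DEVICES:
        return False   # unknown or unmanaged device
    if not request.has_second_factor:
        return False   # no second factor presented
    if previous and impossible_travel(previous, request):
        return False   # "two places at one time": slam the door shut
    return True        # device, credentials, and context all check out

The point of the pattern is that network location never appears in the decision; every request is evaluated on device trust, credentials, and context, and in a real deployment each decision would also be logged, in line with the access transparency Jonathan describes earlier.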
>> Istio and Kubeflow are more about the way software gets run. One quote from one of the ex-Googlers who left Google and went out to another company: she was blown away, "This is the way you people ship software?" Like she was a fish out of water. She was like, "Oh my god, where's Borg?" "We do Waterfall." So there's a new approach that opens doors, and people come to expect it. That's this notion of Kubeflow and orchestration. So that's kind of a modern approach; it requires training and commitment. That's the upside. We've handled the aspirin, the Identity-Aware Proxy, cool. Now, the future of software development architecture. >> I think one of the strong things that you're going to see in software development is that the days of running it differently in development, then in sandbox and testing and QA, and then in prod, are over. They want to basically have that same experience, no matter where they are. They don't want to have to cross their fingers the way they did when, remember, a site got reddited or slash-dotted way back in the past and things would collapse. Those days of people being able to put up with those types of issues are over. And so I think that you're going to continue to see the development style of microservices and containers, orchestrated by something that can do auto scaling and healing, like Kubernetes. You're going to see them then start to use that base layer to add new capabilities on top, which is where we see Kubeflow, which is like, hey, how can I go put scalable machine learning on top of containers and on top of Kubernetes? And you even see, like I said, people saying, "Well, I don't really want to run two different data planes and do the inception model. If I can lay down a base layer of Kubernetes and containers, then I can run bare metal workloads against the bare metal. If I need to launch a virtual machine, I'll just launch that inside the container." And that's what KubeVirt's doing. So we're seeing a lot of this very interesting stuff pop. >> John Furrier: Yeah, creativity. >> Creativity.
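The KubeVirt pattern Jonathan describes, treating a virtual machine as just another Kubernetes workload, looks roughly like this from the API side. This is a rough sketch using the Kubernetes Python client; KubeVirt exposes VirtualMachine as a custom resource, so the generic CustomObjectsApi is used. The manifest is abbreviated, and the namespace and disk image are placeholder values, so treat it as illustrative rather than a complete, production-ready spec.

from kubernetes import client, config

# Rough sketch: create a KubeVirt VirtualMachine through the Kubernetes API,
# so the VM is scheduled and managed by the same control plane as containers.
config.load_kube_config()   # or config.load_incluster_config() when running in-cluster
api = client.CustomObjectsApi()

vm_manifest = {
    "apiVersion": "kubevirt.io/v1",
    "kind": "VirtualMachine",
    "metadata": {"name": "demo-vm"},
    "spec": {
        "running": True,   # start the VM as soon as it is created
        "template": {
            "spec": {
                "domain": {
                    "devices": {"disks": [{"name": "rootdisk", "disk": {"bus": "virtio"}}]},
                    "resources": {"requests": {"memory": "1Gi"}},
                },
                # containerDisk boots the VM from a disk image shipped as a container image.
                "volumes": [{"name": "rootdisk",
                             "containerDisk": {"image": "quay.io/containerdisks/fedora:latest"}}],
            }
        },
    },
}

api.create_namespaced_custom_object(
    group="kubevirt.io",
    version="v1",
    namespace="default",
    plural="virtualmachines",
    body=vm_manifest,
)

Because the VM is represented as a Kubernetes resource, it gets the same scheduling, monitoring, and lifecycle handling as containers, which is the single-data-plane idea in the quote above.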
>> Great, talk about your role in the Office of the CTO. I know we've got a couple of minutes left. I want to get it out there: what is the role of the CTO? Bryan Stevens, formerly a Red Hat executive. >> Yeah, Bryan's our CTO. He used to run a big chunk of the engineering for Google Cloud, absolutely. >> And so what is the office's charter? You mentioned some former CIOs are in there. Is it the think tank? Is it the command-and-control ivory tower? What's the role of the office? >> So I think a couple of years ago, Diane Greene and Bryan Stevens and other executives decided that if we want to really understand what the enterprise needs from us, from a cloud perspective, we really need some people that have walked in those shoes, and they can't just be Diane or just be Bryan, who also had a big breadth of experience there. But two people can't do that for every customer and every product. And so they instituted the Office of the CTO. They tapped Will Grannis, who had been at Boeing before and been in the military, to build this thing. And they went and looked for people that had experience: former VPs of Engineering, former CIOs. We have people from GE Oil and Gas, we have people from Boeing, we have people from Pixar. You name it, across each of the different verticals. Healthcare, we have those in the Office of the CTO. And there are probably 25 to 30 of us now; I can't remember the exact number. And really, what our day-to-day life is like is working significantly with the product managers and the engineering teams to help facilitate more and more enterprise-focused engineering into the products, and then working with enterprise customers, kind of the big enterprise customers that we want to see successful, and helping drive their success as they consume Google Cloud. So being the conduit directly into engineering. >> So in market with customers, big, known customers, getting requirements, helping facilitate the product management function as well. >> Yeah, and from an engineering perspective. So we actually sit in the engineering organization. >> John Furrier: Making sure you're making the good bets. >> Jonathan: Yes, exactly. >> Great, well, thanks for coming on The Cube. Thanks for sharing the insight. >> Jonathan: Thanks for having me again. >> Great to have you on, great insight, again. Google, always great technology, great enterprise mojo going on right now. Of course, The Cube will be at Google Next this July, so we'll be having live coverage from Google Next here in San Francisco at that time. Thanks for coming on, Jonathan. Really appreciate it, looking forward to more coverage. Stay with us for more of day three, as we start to wrap up our live coverage of Red Hat Summit 2018. We'll be back after this short break. (upbeat electronic music)

Published Date : May 10 2018
