Chris Jones, Platform9 | Finding your "Just Right" path to Cloud Native
(upbeat music) >> Hi everyone. Welcome back to this Cube conversation here in Palo Alto, California. I'm John Furrier, host of "theCUBE." Got a great conversation around Cloud Native, Cloud Native Journey, how enterprises are looking at Cloud Native and putting it all together. And it comes down to operations, developer productivity, and security. It's the hottest topic in technology. We got Chris Jones here in the studio, director of Product Management for Platform9. Chris, thanks for coming in. >> Hey, thanks. >> So when we always chat about, when we're at KubeCon. KubeConEU is coming up and in a few, in a few months, the number one conversation is developer productivity. And the developers are driving all the standards. It's interesting to see how they just throw everything out there and whatever gets adopted ends up becoming the standard, not the old school way of kind of getting stuff done. So that's cool. Security Kubernetes and Containers are all kind of now that next level. So you're starting to see the early adopters moving to the mainstream. Enterprises, a variety of different approaches. You guys are at the center of this. We've had a couple conversations with your CEO and your tech team over there. What are you seeing? You're building the products. What's the core product focus right now for Platform9? What are you guys aiming for? >> The core is that blend of enabling your infrastructure and PlatformOps or DevOps teams to be able to go fast and run in a stable environment, but at the same time enable developers. We don't want people going back to what I've been calling Shadow IT 2.0. It's, hey, I've been told to do something. I kicked off this Container initiative. I need to run my software somewhere. I'm just going to go figure it out. We want to keep those people productive. At the same time we want to enable velocity for our operations teams, be it PlatformOps or DevOps. >> Take us through in your mind and how you see the industry rolling out this Cloud Native journey. Where do you see customers out there? Because DevOps have been around, DevSecOps is rocking, you're seeing AI, hot trend now. Developers are still in charge. Is there a change to the infrastructure of how developers get their coding done and the infrastructure, setting up the DevOps is key, but when you add the Cloud Native journey for an enterprise, what changes? What is the, what is the, I guess what is the Cloud Native journey for an enterprise these days? >> The Cloud Native journey or the change? When- >> Let's start with the, let's start with what they want to do. What's the goal and then how does that happen? >> I think the goal is that promise land. Increased resiliency, better scalability, and overall reduced costs. I've gone from physical to virtual that gave me a higher level of density, packing of resources. I'm moving to Containers. I'm removing that OS layer again. I'm getting a better density again, but all of a sudden I'm running Kubernetes. What does that, what does that fundamentally do to my operations? Does it magically give me scalability and resiliency? Or do I need to change what I'm running and how it's running so it fits that infrastructure? And that's the reality, is you can't just take a Container and drop it into Kubernetes and say, hey, I'm now Cloud Native. I've got reduced cost, or I've got better resiliency. There's things that your engineering teams need to do to make sure that application is a Cloud Native. 
And then there's what I think is one of the largest shifts, of virtual machines to containers. When I was in the world of application performance monitoring, we would see customers saying, well, my engineering team have this Java app, and they said it needs a VM with 12 gig of RAM and eight cores, and that's what we gave it. But it's running slow. I'm working with the application team and you can see it's running slow. And they're like, well, it's got all of its resources. One of those nice features of virtualization is over provisioning. So the infrastructure team would say, well, we gave it all the RAM it needed. And what's wrong with that being over provisioned? It's like, well, Java expects that RAM to be there. Now all of a sudden, when you move to the world of containers, what we've got is that it's not a set resource limit like it used to be in a VM, right? When you set it for a container, your application teams really need to be paying attention to your resource limits and constraints within the world of Kubernetes. So instead of just being able to say, hey, I'm throwing it over the fence and now it's just going to run on a VM, and that VM's got everything it needs, it's now really running on much more of a shared infrastructure where limits and constraints are going to impact the neighbors. They are going to impact who's making that decision around resourcing. Because that Kubernetes concept of over provisioning and the virtualization concept of over provisioning are not the same. So when I look at this problem, it's like, well, what changed? Well, I'll do my scale tests as an application developer and tester, and I'd see what resources it needs. I asked for that in the VM, that sets the high watermark, job's done. Well, Kubernetes, it's no longer a VM, it's a Kubernetes manifest. And well, who owns that? Who's writing it? Who's setting those limits? To me, that should be the application team. But then when it goes into the operations world, they're like, well, that's now us. Can we change those? So it's that amalgamation of the two that is saying, I'm a developer, I used to pay no attention, but now I need to pay attention. And an infrastructure person saying, I used to just give 'em what they wanted, but now I really need to know what they want, because it's going to potentially have a catastrophic impact on what I'm running. >> So what's the impact for the developer? Because infrastructure as code is what everybody wants. The developer just wants to get the code going and they got to pay attention to all these things, or don't they? Is that where you guys come in? How do you guys see the problem? Actually scope the problem that you guys solve? 'Cause I think you're getting at the core issue here, which is, I've got Kubernetes, I've got containers, I've got developer productivity that I want to focus on. What's the problem that you guys solve? >> Platform operations teams that are adopting Cloud Native in their environment, they've got that steep learning curve of Kubernetes plus this fundamental change of how an app runs. What we're doing is taking away the burden of needing to operate and run Kubernetes and giving them the choice of the flexibility of infrastructure and location. Be that an air gap environment, like, let's say, a telco provider that needs to run a containerized network function and containerized workloads for 5G.
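To make the requests-and-limits point above concrete, here is a minimal sketch of the kind of resource stanza that moves from the old VM sizing conversation into the Kubernetes manifest itself. The workload name, image, and numbers are illustrative assumptions, not details from the interview.

```yaml
# Illustrative only: a Deployment fragment showing the requests/limits
# an application team now has to own (names and values are hypothetical).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: java-billing-app            # hypothetical workload name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: java-billing-app
  template:
    metadata:
      labels:
        app: java-billing-app
    spec:
      containers:
        - name: app
          image: registry.example.com/billing:1.0   # placeholder image
          resources:
            requests:                # what the scheduler reserves for the pod
              memory: "4Gi"
              cpu: "2"
            limits:                  # hard ceiling; exceeding memory gets the pod OOM-killed
              memory: "6Gi"
              cpu: "4"
```

Unlike a VM allocation, the request drives scheduling and the limit is enforced against neighbors sharing the same node, which is why ownership of these numbers ends up shared between the application team and the platform team.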
That's one thing that we can deploy and achieve in a completely inaccessible environment all the way through to Platform9 running traditionally as SaaS, as we were born, that's remotely managing and controlling your Kubernetes environments on-premise AWS. That hybrid cloud experience that could be also Bare Metal, but it's our platform running your environments with our support there, 24 by seven, that's proactively reaching out. So it's removing a lot of that burden and the complications that come along with operating the environment and standing it up, which means all of a sudden your DevOps and platform operations teams can go and work with your engineers and application developers and say, hey, let's get, let's focus on the stuff that, that we need to be focused on, which is running our business and providing a service to our customers. Not figuring out how to upgrade a Kubernetes cluster, add new nodes, and configure all of the low level. >> I mean there are, that's operations that just needs to work. And sounds like as they get into the Cloud Native kind of ops, there's a lot of stuff that kind of goes wrong. Or you go, oops, what do we buy into? Because the CIOs, let's go, let's go Cloud Native. We want to, we got to get set up for the future. We're going to be Cloud Native, not just lift and shift and we're going to actually build it out right. Okay, that sounds good. And when we have to actually get done. >> Chris: Yeah. >> You got to spin things up and stand up the infrastructure. What specifically use case do you guys see that emerges for Platform9 when people call you up and you go talk to customers and prospects? What's the one thing or use case or cases that you guys see that you guys solve the best? >> So I think one of the, one of the, I guess new use cases that are coming up now, everyone's talking about economic pressures. I think the, the tap blows open, just get it done. CIO is saying let's modernize, let's use the cloud. Now all of a sudden they're recognizing, well wait, we're spending a lot of money now. We've opened that tap all the way, what do we do? So now they're looking at ways to control that spend. So we're seeing that as a big emerging trend. What we're also sort of seeing is people looking at their data centers and saying, well, I've got this huge legacy environment that's running a hypervisor. It's running VMs. Can we still actually do what we need to do? Can we modernize? Can we start this Cloud Native journey without leaving our data centers, our co-locations? Or if I do want to reduce costs, is that that thing that says maybe I'm repatriating or doing a reverse migration? Do I have to go back to my data center or are there other alternatives? And we're seeing that trend a lot. And our roadmap and what we have in the product today was specifically built to handle those, those occurrences. So we brought in KubeVirt in terms of virtualization. We have a long legacy doing OpenStack and private clouds. And we've worked with a lot of those users and customers that we have and asked the questions, what's important? And today, when we look at the world of Cloud Native, you can run virtualization within Kubernetes. So you can, instead of running two separate platforms, you can have one. So all of a sudden, if you're looking to modernize, you can start on that new infrastructure stack that can run anywhere, Kubernetes, and you can start bringing VMs over there as you are containerizing at the same time. 
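As a rough illustration of the KubeVirt pattern Chris is describing, a virtual machine becomes just another Kubernetes object running alongside containers. The sketch below assumes the upstream kubevirt.io/v1 API; the VM name, sizing, and disk image are placeholders, not anything Platform9-specific.

```yaml
# Illustrative KubeVirt resource: a small VM defined and scheduled by Kubernetes.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: legacy-app-vm                # hypothetical VM being brought over as-is
spec:
  running: true
  template:
    spec:
      domain:
        resources:
          requests:
            memory: 4Gi              # VM memory request, handled like any pod request
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
      volumes:
        - name: rootdisk
          containerDisk:
            image: quay.io/containerdisks/fedora:latest   # example container disk image
```

Because the VM is expressed as a Kubernetes object, the same access control, GitOps, and upgrade workflow can apply to it as to the containerized services being built next to it.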
So now you can keep your application operations in one environment. And this also helps if you're trying to reduce costs. If you really are saying, we put that Dev environment in AWS, we've got a huge amount of velocity out of it, now can we do that elsewhere? Is there a co-location we can go to? Is there a provider that we can go to where we can run that infrastructure, or run the Kubernetes but not have to run the infrastructure? >> It's going to be interesting too, when you see the Edge come online, you start, we've got Mobile World Congress coming up, KubeCon events we're going to be at, the conversation is not just about public cloud. And you guys obviously solve a lot of do-it-yourself implementation hassles that emerge when people try to kind of stand up their own environment. And we hear from developers consistency between code, managing new updates, making sure everything is all solid so they can go fast. That's the goal. And then people can get standardized on that. But as you get public cloud and do it yourself, it kind of brings up like, okay, there's some gaps there as the architecture changes to be more distributed computing, Edge, on-premises cloud, it's cloud operations. So that's cool for DevOps and Cloud Native. How do you guys differentiate from, say, some of the public cloud opportunities and the folks who are doing it themselves? How do you guys fit in that world and what's the pitch or what's the story? >> The fit that we look at is that third alternative. Let's get your team focused on what's high value to your business and let us deliver that public cloud experience on your infrastructure or in the public cloud, which gives you that ability to still be flexible if you want to make choices to run consistently for your developers in two different locations. So as I touched on earlier, instead of saying go figure out Kubernetes, how do you do an in-place upgrade of a hundred worker nodes? We've solved that problem. That's what we do every single day of the week. Don't go and try to figure out how to upgrade a cluster and then upgrade all of the, what I call Kubernetes friends, your CoreDNS, your Metrics Server, your Kubernetes dashboard. These are all things that we package, we test, we version. So when you click upgrade, we've already handled that entire process. So it's saying don't have your team focused on that lower level piece of work. Get them focused on what is important, which is your business services. >> Yeah, the infrastructure and getting that stood up. I mean, I think the thing that's interesting, if you look at the market right now, you mentioned cost savings and recovery, obviously kind of a recession. I mean, people are tightening their belts for sure. I don't think the digital transformation and Cloud Native spend is going to plummet. It's going to probably be on hold and be squeezed a little bit. But to your point, people are refactoring, looking at how to get the best out of what they've got. It's not just open the tap and spend the cash like it used to be, yeah, a couple months, even a couple years ago. So okay, I get that. But then you look at what's coming, AI. You're seeing all the new data infrastructure that's coming. The containers, Kubernetes stuff, got to get stood up pretty quickly and it's got to be reliable. So to your point, the teams need to get done with this and move on to the next thing. >> Chris: Yeah, yeah, yeah. >> 'Cause there's more coming.
I mean, there's a lot coming for the apps that are being built in Data Native, AI-Native, Cloud Native. So it seems that this Kubernetes thing needs to get solved. Is that kind of what you guys are focused on right now? >> So, I mean, to use a customer example, we have a customer that's in AI/ML and they run their platform at customer sites, and that's hardware bound. You can't run AI machine learning on anything anywhere. Well, with Platform9 they can. So we're enabling them to deliver services to their customers, running their AI/ML platform in their customers' data centers anywhere in the world on hardware that is purpose-built for running that workload. They're not Kubernetes experts. That's what we are. We're bringing them that ability to focus on what's important and just delivering their business services, whilst leaning on our team and our 24 by seven proactive management and always-on assurance to keep that up and running for them. So when something goes bump in the night at 2:00am, our guys get woken up. They're the ones that are reaching out to the customer saying, your environments have a problem, we're taking these actions to fix it. Obviously sometimes, especially if it is running on Bare Metal, there's things you can't do remotely. So you might need someone to go and do that. But even when that happens, you're not by yourself. You're not sitting there like I did when I worked for a bank in one of my first jobs, three o'clock in the morning saying, wow, our end of day processing is stuck. Who else am I waking up? Right? >> Exactly, yeah. Got to get that cash going. But this is a great use case. I want to get to the customer. What do some of the successful customers say to you? For the folks watching that aren't yet a customer of Platform9, what are some of the accolades and comments or anecdotes that you guys hear from customers that you have? >> It just works, which I think is probably one of the best ones you can get. Customers coming back and being able to show to their business that they've delivered growth, like business growth and productivity growth, and keeping their organization size the same. So we started on our containerization journey. We went to Kubernetes. We've deployed all these new workloads and our operations team is still six people. We're doing way more with less, and I think that also speaks to the strength that we're bringing, 'cause we're, we're augmenting that team. They're spending less time on the really low level stuff and automating a lot of the growth activity that's involved. So when it comes to being able to grow their business, they can just focus on that, not- >> Well you guys do the heavy lifting, keep on top of the Kubernetes, make sure that all the versions are all done. Everything's stable and consistent so they can go on and do the build out and provide their services. That seems to be what you guys are best at. >> Correct, correct. >> And so what's on the roadmap? You have the product, direct product management, you get the keys to the kingdom. What is, what is the focus? What's your focus right now? Obviously Kubernetes is growing up, Containers. We've been hearing a lot at the last KubeCon about how the security of containers is getting better. You've seen verification, a lot more standards around some things. What are you focused on right now, product-wise, over there? >> Edge is a really big focus for us. And I think in Edge you can look at it in two ways. The mantra that I drive is Edge must be remote.
If you can't do something remotely at the Edge, you are using a human being, and that's not Edge. Our Edge management capabilities, having been in the market for over two years, are a hundred percent remote. You want to stand up a store, you just ship the server in there, it gets racked, the rest of it's remote. Imagine a store manager in, I don't know, KFC, just plugging in the server, putting in the ethernet cable, pressing the power button. The rest of all that provisioning for that Cloud Native stack, Kubernetes, KubeVirt for virtualization, is done remotely. So we're continuing to focus on that. The next piece that is related to that is allowing people to run Platform9 SaaS in their data centers. So we do air gap today and we've had a really strong focus on telecommunications and the containerized network functions that come along with that. So this next piece is saying, we're bringing what we run as SaaS into your data center, so then you can run it. 'Cause there are many people out there that are saying, we want these capabilities and we want everything that the Platform9 control plane brings and simplifies, but unfortunately, regulatory compliance reasons mean that we can't leverage SaaS. So they might be using a cloud, but they're saying that's still our infrastructure, we've still closed that network down, or they're still on-prem. So those are two big priorities for us this year. And that on-premise experience is paramount, even to the point that we will be delivering a way that when you run it on-premise, you can still say, wait a second, well I can send outbound alerts to Platform9, so their support team can still be proactively helping me as much as they could, even though I'm running Platform9's control plane. So it's sort of giving that blend of two experiences. They're big, they're big priorities. And the third pillar is all around virtualization. It's saying if you have economic pressures, then I think it's important to look at what you're spending today and realistically say, can that be reduced? And I think hypervisors and virtualization is something that should be looked at, because if you can actually reduce that spend, you can bring in some modernization at the same time. Let's take some of those nodes that exist that are two years into their five year hardware life cycle. Let's turn that into a Cloud Native environment, which is enabling your modernization in place. It's giving your engineers and application developers the new toys, the new experiences, and then you can start running some of those virtualized workloads with KubeVirt there. So you're reducing cost and you're modernizing at the same time with your existing infrastructure. >> You know Chris, the topic of this content series that we're doing with you guys is finding the right path, trusting the right path to Cloud Native. What does that mean? I mean, if you had to kind of summarize that phrase, trusting the right path to Cloud Native, what does that mean? Does it mean in terms of architecture, is it deployment? Is it operations? What's the underlying main theme of that quote? What's the, what's? How would you talk to a customer and say, what does that mean if someone said, "Hey, what does that right path mean?" >> I think the right path means focusing on what you should be focusing on.
I know I've said it a hundred times, but if your entire operations team is trying to figure out the nuts and bolts of Kubernetes and getting three months into a journey and discovering, ah, I need Metrics Server to make something function. I want to use Horizontal Pod Autoscaler or Vertical Pod Autoscaler and I need this other thing, now I need to manage that. That's not the right path. That's literally learning what other people have been learning for the last five, seven years that have been focused on Kubernetes solely. So the why- >> There's been a lot of grind. People have been grinding it out. I mean, that's what you're talking about here. They've been standing up the, when Kubernetes started, it was all the promise. >> Chris: Yep. >> And essentially manually kind of getting in in the weeds and configuring it. Now it's matured up. They want stability. >> Chris: Yeah. >> Not everyone can get down and dirty with Kubernetes. It's not something that people want to generally do unless you're totally into it, right? Like I mean, I mean ops teams, I mean, yeah. You know what I mean? It's not like it's heavy lifting. Yeah, it's important. Just got to get it going. >> Yeah, I mean if you're deploying with Platform9, your Ops teams can tinker to their hearts content. We're completely compliant upstream Kubernetes. You can go and change an API server flag, let's go and mess with the scheduler, because we want to. You can still do that, but don't, don't have your team investing in all this time to figure it out. It's been figured out. >> John: Got it. >> Get them focused on enabling velocity for your business. >> So it's not build, but run. >> Chris: Correct? >> Or run Kubernetes, not necessarily figure out how to kind of get it all, consume it out. >> You know we've talked to a lot of customers out there that are saying, "I want to be able to deliver a service to my users." Our response is, "Cool, let us run it. You consume it, therefore deliver it." And we're solving that in one hit versus figuring out how to first run it, then operate it, then turn that into a consumable service. >> So the alternative Platform9 is what? They got to do it themselves or use the Cloud or what's the, what's the alternative for the customer for not using Platform9? Hiring more people to kind of work on it? What's the? >> People, building that kind of PaaS experience? Something that I've been very passionate about for the past year is looking at that world of sort of GitOps and what that means. And if you go out there and you sort of start asking the question what's happening? Just generally with Kubernetes as well and GitOps in that scope, then you'll hear some people saying, well, I'm making it PaaS, because Kubernetes is too complicated for my developers and we need to give them something. There's some great material out there from the likes of Intuit and Adobe where for two big contributors to Argo and the Argo projects, they almost have, well they do have, different experiences. One is saying, we went down the PaaS route and it failed. The other one is saying, well we've built a really stable PaaS and it's working. What are they trying to do? They're trying to deliver an outcome to make it easy to use and consume Kubernetes. So you could go out there and say, hey, I'm going to build a Kubernetes cluster. Sounds like Argo CD is a great way to expose that to my developers so they can use Kubernetes without having to use Kubernetes and start automating things. 
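For context on the Argo CD pattern being discussed, a single Application object is roughly how a Git repository of manifests gets exposed to developers as something that deploys and reconciles itself. The repository URL, paths, and names below are placeholders for the sketch, not anything referenced in the conversation.

```yaml
# Illustrative Argo CD Application: sync a path in a Git repo into a cluster namespace.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: team-payments               # hypothetical application name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/platform/apps.git   # placeholder repository
    targetRevision: main
    path: payments/overlays/prod
  destination:
    server: https://kubernetes.default.svc
    namespace: payments
  syncPolicy:
    automated:
      prune: true                   # remove resources that were deleted from Git
      selfHeal: true                # revert manual drift back to the Git-declared state
```

Developers commit to the repository and the controller reconciles the cluster, which is the "use Kubernetes without having to use Kubernetes" experience being described here.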
That is an approach, but you're going to be going completely open source and you're going to have to bring in all the individual components, or you could just lay it down and consume it as a service and not have to- >> And Intuit, you mentioned them. They were the ones who kind of brought that into the open. >> They did. Intuit is the primary contributor to the Argo set of products. >> How has that been received in the market? I mean, they had the event at the Computer History Museum last fall. What's the momentum there? What's the big takeaway from that project? >> Growth. To me, growth. I mean go and track the stars on that one. It's just, it's growth. It's unlocking machine learning. Argo workflows can do more than just make things happen. Argo CD, I think the approach they're taking is, hey let's make this simple to use, which I think can be lost. And I think credit where credit's due, they're really pushing to bring in a lot of capabilities to make it easier to work with applications and microservices on Kubernetes. It's not just that, hey, here's a GitOps tool. It can take something from a Git repo and deploy it and maybe prioritize it and help you scale your operations from that perspective. It's taking a step back and saying, well how did we get to production in the first place? And what can be done down there to help as well? I think it's growth, expansion of features. They had a huge release just come out, I think it was 2.6, that brought in things that, as a product manager, I don't often look at, like really deep technical things, and say wow, that's powerful. But they have, they've got some great features in that release that really do solve real problems. >> And as the product, as the product person, who's the target buyer for you? Who's the customer? Who's making that decision? And you got decision maker, influencer, and recommender. Take us through the customer persona for you guys. >> So that Platform Ops, DevOps space, right, the people that need to be delivering Containers as a service out to their organization. But then it's also important to say, well who else are our primary users? And that's developers, engineers, right? They shouldn't have to say, oh well I have access to a Kubernetes cluster. Do I have to use kubectl or do I need to go find some other tool? No, they can just log in to Platform9. It's integrated with your enterprise ID. >> They're the end customer at the end of the day, they're the user. >> Yeah, yeah. They can log in. And they can see the clusters you've given them access to as a Platform Ops Administrator. >> So job well done for you guys. And in your mind, the developers are moving fast, coding and happy. >> Chris: Yeah, yeah. >> And from a customer standpoint, you reduce the maintenance cost, because you keep the Ops smoother, so you got efficiency and maintenance costs kind of reduced, or is that kind of the benefits? >> Yeah, yep, yeah. And at two o'clock in the morning when things go inevitably wrong, they're not there by themselves, and we're proactively working with them. >> And that's the uptime issue. >> That is the uptime issue. And Cloud doesn't solve that, right? Everyone's experienced that Clouds can go down, entire regions can go offline. That's happened to all Cloud providers. And what do you do then? Kubernetes isn't your recovery plan. It's part of it, right, but it's that piece. >> You know Chris, to wrap up this interview, I will say that "theCUBE" is 12 years old now. We've been to OpenStack early days.
We had you guys on when we were covering OpenStack, and now Cloud has just been booming. You got AI around the corner, AI Ops, now you got all this new data infrastructure, it's just amazing Cloud growth, Cloud Native, Security Native, Cloud Native, Data Native, AI Native. It's going to be all, this is the new app environment, but there's also existing infrastructure. So going back to OpenStack, rolling our own cloud, building your own cloud, building infrastructure cloud, in a cloud way, is what the pioneers have done. I mean, this is where we're at. Now we're at this next level of scale, abstracted away and made operational. It seems to be the key focus. We look at CNCF at KubeCon and what they're doing with CloudNativeSecurityCon, it's all about operations. >> Chris: Yep, right. >> Ops and you know, that's going to sound counterintuitive 'cause it's a developer open source environment, but you're starting to see that Ops focus in a good way. >> Chris: Yeah, yeah, yeah. >> Infrastructure as code way. >> Chris: Yep. >> What's your reaction to that? How would you summarize where we are in the industry relative to, am I getting, am I getting it right there? Is that the right view? What am I missing? What's the current state of the next level, NextGen infrastructure? >> It's a good question. When I think back to sort of late 2019, I sort of had this aha moment as I saw what really truly is delivering infrastructure as code happening at Platform9. There's an open source project, Ironic, which is now also available within Kubernetes as Metal Kubed, that automates Bare Metal as code, which means you can go from an empty server, lay down your operating system, lay down Kubernetes, and you've just done everything delivered to your customer as code with a Cloud Native platform. That to me was sort of the biggest realization that I had as I was moving into this industry: wait, it's there. This can be done. And the evolution of tooling and operations is getting to the point where that can be achieved, and it's focused on by a number of different open source projects. Not just Ironic and Metal Kubed, but that's a huge win. That is truly getting your infrastructure- >> John: That's an inflection point, really. >> Yeah. >> If you think about it, 'cause that's one of the problems we had with the Bare Metal piece, was the automation and also making it Cloud Ops, cloud operations. >> Right, yeah. I mean, one of the things that I think Ironic did really well was saying let's just treat that piece of Bare Metal like a Cloud VM or an instance. If you got a problem with it, just give the person using it, or whatever's using it, a new one and reimage it. Just tell it to reimage itself and it'll just (snaps fingers) go. You can do self-service with it. In Platform9, if you log in to our SaaS Ironic, you can go and say, I want that physical server to myself, because I've got a giant workload, or let's turn it into a Kubernetes cluster. That whole thing is automated. To me that's infrastructure as code. I think one of the other important things that's happening at the same time is we're seeing GitOps, we're seeing things like Terraform. I think it's important for organizations to look at what they have and ask, am I using tools that are fit for tomorrow or am I using tools that are yesterday's tools to solve tomorrow's problems? And especially when it comes to modernizing infrastructure as code, I think that's a big piece to look at. >> Do you see Terraform as old or new? >> I see Terraform as old.
It's a fantastic tool, capable of many great things, and it can work with basically every single provider out there on the planet. It is able to do things. Is it best fit to run in a GitOps methodology? I don't think it is quite at that point. In fact, if you went and looked at Flux, Flux has ways that make Terraform GitOps compliant, which is absolutely fantastic. It's using two tools, the best of breed, which is solving tomorrow's problem with tomorrow's solutions. >> So the new solutions, old versus new. I like this old way, new way. I mean, Terraform is not that old, it's been around for about eight years or so, whatever. But HashiCorp is doing a great job with that. I mean, so okay with Terraform, what does the new way address? Is it more complex environments? Because Terraform made sense when you had basic DevOps, but now it sounds like there's a whole other level of complexity. >> I got to say. >> New tools. >> That kind of amalgamation of that application into infrastructure. Now my app team is paying way more attention to that manifest file, which is what GitOps is trying to solve. Let's templatize things. Let's version control our manifest, be it Helm, Kustomize, or just a straight up Kubernetes manifest file, plain and boring. Let's get that version controlled. Let's make sure that we know what is there, why it was changed. Let's get some auditability and things like that. And then let's get that deployment all automated. So that's predicated on the cluster existing. Well why can't we do the same thing with the cluster, the inception problem. So even if you're in public cloud, the question is like, well what's calling that API to call that thing to happen? Where is that file living? How well can I manage that in a large team? Oh my God, something just changed. Who changed it? Where is that file? And I think that's one of the big pieces to be solved. >> Yeah, and you talk about Edge too, and on-premises. I think one of the things I'm observing, and certainly when DevOps was rocking and rolling and infrastructure as code was like the real push, it was pretty much the public cloud, right? >> Chris: Yep. >> And you did Cloud Native and you had stuff on-premises. Yeah you did some lifting and shifting in the cloud, but the cool stuff was going in the public cloud and you ran DevOps. Okay, now you got on-premise cloud operation and Edge. Is that the new DevOps? I mean 'cause what you're kind of getting at with the old/new Terraform example is an interesting point, because you're pointing out potentially that that was good DevOps back in the day, or it still is. >> Chris: It is, I was going to say. >> But depending on how you define what DevOps is. So if you say, I got the new DevOps with public, on-premise and Edge, that's just not all public cloud, that's essentially distributed Cloud Native. >> Correct. Is that the new DevOps in your mind or is that? How would you, or is that oversimplifying it? >> Or is it that term where everyone's saying Platform Ops, right? Has it shifted? >> Well you bring up a good point about Terraform. I mean Terraform is well proven. People love it. It's got great use cases and now there seems to be new things happening. We call things like super cloud emerging, which is multicloud and abstraction layers. So you're starting to see stuff being abstracted away for the benefits of moving to the next level, so teams don't get stuck doing the same old thing. They can move on.
Like what you guys are doing with Platform9 is providing a service so that teams don't have to do it. >> Correct, yeah. >> That makes a lot of sense. So you just, now it's running, and then they move on to the next thing. >> Chris: Yeah, right. >> So what is that next thing? >> I think Edge is a big part of that next thing. The propensity for someone to put up with a delay, I think it's gone. For some reason, we've all become fairly short-tempered, short-fused. You know, I click the button, it should happen now, type people. And for better or worse, hopefully it gets better and we all become a bit more patient. But how do I get more effective and efficient at delivering that to that really demanding- >> I think you bring up a great point. I mean, it's not just people are getting short-tempered. I think it's more that applications are being deployed faster, security is more exposed if they don't see things quicker. You got data now, infrastructure scaling up massively. So there's a double-edged sword to scale. >> Chris: Yeah, yeah. >> I mean, maintenance, downtime, uptime, security. So yeah, I think there's a tension around, on one hand, enthusiasm around pushing a lot of code and new apps. But is the confidence truly there? It's interesting, one little (snaps finger) supply chain software issue, look at Container Security for instance. >> Yeah, yeah. It's big. I mean it was codified. >> Do you agree that that's kind of an issue right now? >> Yeah, and it was, I mean even the supply chain has been codified by the US federal government saying there's things we need to improve. We don't want to see software being a point of vulnerability, and software includes that whole process of getting it to a running point. >> It's funny you mentioned remote, and one of the things that you're passionate about, certainly Edge has to be remote. You don't want to roll a truck or labor at the Edge. But I was doing a conversation at Rebars last year about space. It's hard to do break fix in space. It's hard to roll someone out to configure a satellite, right? Right? >> Chris: Yeah. >> So Kubernetes is in space. We're seeing a lot of Cloud Native stuff in apps, in space, so just an example. This highlights the fact that it's got to be automated. Is there a machine learning AI angle with all this ChatGPT talk going on? You see all the AI going to the next level. Some pretty cool stuff and it's only, I know it's the beginning, but I've heard people using some of the new machine learning, large language models, large foundational models in areas I've never heard of. Machine learning and data centers, machine learning and configuration management, a lot of different ways. How do you see, as the product person, incorporating the AI piece into the products for Platform9? >> I think that's a lot about looking at the telemetry and the information that we get back and, to use one of those like old ITIL terms, that continuous improvement loop to feed it back in. And I think that's really where machine learning, to start with, comes into effect. As we run across all these customers, our system that helps at two o'clock in the morning has that telemetry, it's got that data. We can see what's changing and what's happening. So it's writing the right algorithms, creating the right machine learning to- >> So training will work for you guys. You have enough data and the telemetry to go get that training data.
>> Yeah, obviously there's a lot of investment required to get there, but that is something that ultimately could be achieved with what we see in operating people's environments. >> Great. Chris, great to have you here in the studio. Good wide-ranging conversation on Kubernetes and Platform9. I guess my final question would be how do you look at the next five years out there? Because you got to run the product management, you got to have that 20 mile steer, you got to look at the customers, you got to look at what's going on in the engineering, and you got to kind of have that arc. This is the right path kind of view. What's the five year arc look like for you guys? How do you see this playing out? 'Cause KubeCon is coming up, and we're seeing Kubernetes kind of break away with security. They didn't call it KubeCon Security, they call it CloudNativeSecurityCon; they just had the inaugural event in Seattle, seemed to go well. So security is kind of breaking out and you got Kubernetes. It's getting bigger. Certainly not going away, but what's your five year arc of how Platform9 and Kubernetes and Ops evolve? >> To stay on that theme, it's focusing on what is most important to our users and getting them to a point where they can just consume it, so they're not having to operate it. So it's finding those big items and bringing that into our platform. It's something that's consumable, that's just taken care of, that's tested with each release. So it's simplifying operations more and more. We've always said freedom in cloud computing. Well, we started on OpenStack and made that simple. Stable, easy, you just have it, it works. We're doing that with Kubernetes. We're expanding out that user, right, we're saying bring your developers in, they can download their kubeconfig. They can see those Containers that are running there. They can access the events, the log files. They can log in and build a VM using KubeVirt. They're self servicing. So it's alleviating pressures off of the Ops team, removing the help desk systems that people still seem to rely on. So it's like, what comes into that field that is the next biggest issue? Is it things like CI/CD? Is it simplifying GitOps? Is it bringing in security capabilities to talk to that? Or is that a piece that is a best of breed? Is there a reason that it's been spun out to its own conference? Is this something that deserves a focus, that should be a specialized capability instead of tooling, and vendors that we work with, that we partner with, that could be brought in as a service? I think it's looking at those trends and making sure that what we bring in has the biggest impact to our users. >> That's awesome. Thanks for coming in. I'll give you the last word. Put a plug in for Platform9 for the people who are watching. What should they know about Platform9 that they might not know about? When should they call you guys and when should they engage? Take a minute to give the plug. >> The plug. I think it's, if your operations team is focused on building Kubernetes, stop. That shouldn't be happening in the cloud, that shouldn't be happening at the Edge, that shouldn't be happening at the data center. They should be consuming it. If your engineering teams are all trying different ways and doing different things to use and consume Cloud Native services and Kubernetes, they shouldn't be. You want consistency. That's how you get economies of scale.
Provide them with a simple platform that's integrated with all of your enterprise identity, where they can just start consuming instead of having to solve these problems themselves. It's those, it's those two personas, right, where the problems manifest. What are my operations teams doing, and are they delivering to my company or are they building infrastructure again? And are my engineers sprinting or crawling? 'Cause if they're not sprinting, you should be asking the question, do I have the right Cloud Native tooling in my environment, and how can I get them back? >> I think it's developer productivity, uptime, security are the tell signs. You get that done. That's the goal of what you guys are doing, your mission. >> Chris: Yep. >> Great to have you on, Chris. Thanks for coming on. Appreciate it. >> Chris: Thanks very much. >> Okay, this is "theCUBE" here, finding the right path to Cloud Native. I'm John Furrier, host of "theCUBE." Thanks for watching. (upbeat music)
Satish Iyer, Dell Technologies | SuperComputing 22
>> We're back at SuperComputing 22 in Dallas, winding down the final day here. A big show floor behind me. Lots of excitement out there, wouldn't you say, Dave? Just- >> Oh, it's crazy. I mean, any, any time you have NASA presentations going on and, and steampunk iterations of cooling systems, you know, it's, it's >> The greatest. I've been to hundreds of trade shows. I don't think I've ever seen NASA exhibiting at one like they are here. Dave Nicholson, my co-host. I'm Paul Gillin, and with us is Satish Iyer. He is the vice president of emerging services at Dell Technologies, and Satish, thanks for joining us on theCUBE. >> Thank you, Paul. >> What are emerging services? >> Emerging services are actually the growth areas for Dell. So it's telecom, it's cloud, it's edge. So we, we especially focus on all the growth vectors for, for the company. >> And, and one of the key areas that comes under your jurisdiction is called Apex. Now I'm sure there are people who don't know what Apex is. Can you just give us a quick definition? >> Absolutely. So Apex is actually Dell's foray into cloud, and I manage the Apex services business. So this is our way of actually bringing cloud experience to our customers, OnPrem and in colo. >> But, but it's not a cloud. I mean, you don't, you don't have a Dell cloud, right? It's, it's sort of infrastructure as >> A service. It's infrastructure and platform and solutions as a service. Yes, we don't have our own version of a public cloud, but we want to, you know, this is a multi-cloud world, so technically customers want to consume where they want to consume. So this is Dell's way of actually, you know, supporting a multi-cloud strategy for our customers. >> You, you mentioned something just ahead of us going on air, a great way to describe Apex, to contrast Apex with CapEx. There's no C, there's no cash up front necessary. Yeah, I thought that was great. Explain that, explain that a little more. >> Well, I mean, you know, one, one of the main things about cloud is the consumption model, right? So customers would like to pay for what they consume, they would like to pay in a subscription. They would like to not prepay CapEx ahead of time. They want that economic option, right? So I think that's one of the key tenets for anything in cloud. So I think it's important for us to recognize that, and I think Apex is basically a way by which customers pay for what they consume, right? So that's absolutely a key tenet for how, how we want to design Apex. So it's absolutely right. >> And, and among those services are high performance computing services. Now I was not familiar with that as an offering in the Apex line. What constitutes a high performance computing Apex service? >> Yeah, I mean, you know, I mean, this conference is great, like you said, you know, there's so many HPC and high performance computing folks here, but one of the things is, you know, fundamentally, if you look at the high performance computing ecosystem, it is quite complex, right? And when you deliver it as an Apex HPC offer or Apex offering, it brings a lot of the cloud economics and cloud, you know, experience to the HPC offer. So fundamentally, it's about our ability for customers to pay for what they consume. It's where Dell takes a lot of the day to day management of the infrastructure on our own so that customers don't need to do the grunt work of managing it, and they can really focus on the actual workload, which actually they run on the HPC ecosystem.
So it, it is, it is a high performance computing offer, but instead of them buying the infrastructure, running all of that by themselves, we make it super easy for customers to consume and manage it across, you know, proven designs, which Dell always implements across these verticals. >> So what, what makes it the high performance computing offering, as opposed to, to a rack of PowerEdge servers? What do you add in to make it HPC? >> Ah, that's a great question. So, I mean, you know, so this is a platform, right? So we are not just selling infrastructure by the drink. So we actually are, fundamentally it's based on, you know, we, we, we launched two validated designs, one for life sciences, one for manufacturing. So we actually know how these pieces work together, how they actually are a validated design, a tested solution. And we also, it's a platform. So we actually integrate the software on the top. So it's just not the infrastructure. So we actually integrate a cluster manager, we integrate a job scheduler, we integrate a container orchestration layer. So a lot of these things, customers have to do it by themselves, right, if they were to buy the infrastructure. So basically we are actually giving a platform or an ecosystem for our customers to run their workloads. So make it easy for them to actually consume those. >> That's- Now is this, is this available on premises for customers? >> Yeah, so we, we, we make it available to customers both ways. So we make it available OnPrem for customers who want to, you know, kind of, they want to take that, take that economics. We also make it available in a colo environment if the customers want to actually, you know, extend colo as that OnPrem environment. So we do both. >> What are, what are the requirements for a customer before you roll that equipment in? How do they sort of have to set the groundwork for- >> Well, I think, you know, fundamentally it starts off with what the actual use case is, right? So, so if you really look at, you know, the two validated designs we talked about, you know, one for, you know, healthcare life sciences, and one other one for manufacturing, they do have fundamentally different requirements in terms of what you need from those infrastructure systems. So, you know, the customers initially figure out, okay, do they actually require something which is going to run a lot of memory intensive loads, or do they actually require something which has got a lot of compute power. So, you know, it all depends on what they would require in terms of the workloads, and then we do have T-shirt sizing. So we do have small, medium, large, we have, you know, multiple infrastructure options, CPU core options. Sometimes the customer would also wanna say, you know what, along with the regular CPUs, I also want some GPU power on top of that. So those are determinations typically a customer makes as part of the ecosystem, right? And so those are things which would, they would talk to us about to say, okay, what is my best option in terms of, you know, kind of workloads I wanna run? And then they can make a determination in terms of how, how they would actually go. >> So this, this is probably a particularly interesting time to be looking at something like HPC via Apex, with, with this season of Rolling Thunder from various partners that you have, you know? Yep. We're, we're all expecting that Intel is gonna be rolling out new CPU sets, from a PowerEdge perspective.
You have your 16th generation of PowerEdge servers coming out, PCIe Gen 5, and all of the components from partners like Nvidia and Broadcom, et cetera, plugging into them. Yep. What, what does that, what does that look like from your, from your perch, in terms of talking to customers who maybe, maybe they're doing things traditionally, and they're likely to be on, not 15G, not generation 15 servers. Yeah. But probably more like 14. Yeah, you're offering a pretty huge uplift. Yep. What, what do those conversations look like? >> I mean, customers, so talking about partners, right? I mean, of course Dell, you know, we, we, we don't bring any solutions to the market without really working with all of our partners, whether that's at the infrastructure level, like you talked about, you know, Intel, AMD, Broadcom, right? All the chip vendors, all the way to the software layer, right? So we have cluster managers, we have Kubernetes orchestrators. So usually what we do is we bring the best in class, whether it's a software player or a hardware player, right? And we bring it together as a solution. So we do give the customers a choice, and the customers always want to pick what they know actually is awesome, right? So that, that we actually do. And, you know, and one of the main aspects of, especially when you talk about these things, is bringing it as a service, right? We take a lot of guesswork away from our customer, right? You know, one of the good examples in HPC is capacity, right? So customers, these are very, you know, I would say very intensive systems, very complex systems, right? So customers would like to buy a certain amount of capacity, they would like to grow and, you know, come back, right? So give, giving them the flexibility to actually consume more if they want, giving them the buffer, and coming down. All of those things are very important as we actually design these things, right? And that takes some, you know, customers are given a choice, but it actually, they don't need to worry about, oh, you know, what happens if I actually have a spike, right? There's already buffer capacity built in. So those are awesome things when we talk about things as a service. >> When customers are doing their ROI analysis, buying CapEx on-prem versus, versus using Apex, is there a point, is there a crossover point typically at which it's probably a better deal for them to, to go OnPrem? >> Yeah, I mean, specifically talking about HPC, right? I mean, you know, we do have, no, a lot of customers consume high performance compute in public cloud, right? That's not gonna go away, right? But there are certain reasons why they would look at OnPrem, or they would look at, for example, a colo environment, right? One of the main reasons they would like to do that purely has to do with cost, right? These are pretty expensive systems, right? There is a lot of ingress, egress, there is a lot of data going back and forth, right? Public cloud, you know, it costs money to put data in or actually pull data back, right? And the second one is data residency and security requirements, right? A lot of these things are probably proprietary sets of information. We talked about life sciences, there's a lot of research, right? Manufacturing, a lot of these things are just, just in time decision making, right? You are on a factory floor, you gotta be able to do that. Now there is a latency requirement.
So a lot of things play into this outside of just cost: data residency requirements and ingress and egress are big things. And when you're talking about massive amounts of data you want to push out and pull back in, they would like to keep it close, keep it local, and get a price >>Point. Nevertheless, we were just talking to Ian Coley from AWS, and he was talking about how customers have the need to move workloads back and forth between the cloud and on-prem. That's something they're addressing with Outposts. You are very much in the on-prem world. Do you have, or will you have, facilities for customers to move workloads back and forth? >>I wouldn't necessarily frame it that way; Dell's cloud strategy is multi-cloud, right? It falls into roughly three parts. Some workloads are always suited for public cloud; it's easier to consume. There are customers who also consume on-prem, and customers consuming colo. And we also have Dell's software IP, like our storage software, and we make some of those things available for customers to consume as software on the public cloud. So this is our multi-cloud strategy; we announced things like Project Alpine. If you look at those, customers are basically saying, I love your Dell IP in this storage product, can you make it available in this public environment, whichever of the hyperscale players it is? If we do all of that, it shows it's not always tied to an infrastructure. Customers want to consume the best of it, and if it needs to be consumed on a hyperscaler, we can make it available. >>Do you support containers? >>Yeah, we do support containers on HPC. We support two container orchestration options and give customers both options. >>What kind of customers are you signing up for the HPC offerings? Are they university research centers, or does it tend to be smaller >>Companies? You know, the last three days, this conference has been great. We probably had many, many customers talking to us, for HPC somewhere in the range of 40, 50 customers. I would say a lot of interest from educational institutions, universities and research, to your point; a lot of interest from manufacturing and factory floor automation, where a lot of customers want to do dynamic simulations on the factory floor. There is also quite a bit of interest from life sciences and pharma because, like I said, we have two designs, one on life sciences, one on manufacturing, both with different dynamics on the infrastructure. So quite a bit of interest definitely from academics, from life sciences, from manufacturing. We also have a lot of financials, big banks who want to simulate a lot of brokerage and financial data, because we announced some really optimized hardware at Dell especially for financial services. So there's quite a bit of interest from financial services as well. >>That's great. We often think of Dell as the organization that eventually democratizes all things in IT.
And, and, and, and in that context, you know, this is super computing 22 HPC is like the little sibling trailing around, trailing behind the super computing trend. But we definitely have seen this move out of just purely academia into the business world. Dell is clearly a leader in that space. How has Apex overall been doing since you rolled out that strategy, what, two couple? It's been, it's been a couple years now, hasn't it? >>Yeah, it's been less than two years. >>How are, how are, how are mainstream Dell customers embracing Apex versus the traditional, you know, maybe 18 months to three year upgrade cycle CapEx? Yeah, >>I mean I look, I, I think that is absolutely strong momentum for Apex and like we, Paul pointed out earlier, we started with, you know, making the infrastructure and the platforms available to customers to consume as a service, right? We have options for customers, you know, to where Dell can fully manage everything end to end, take a lot of the pain points away, like we talked about because you know, managing a cloud scale, you know, basically environment for the customers, we also have options where customers would say, you know what, I actually have a pretty sophisticated IT organization. I want Dell to manage the infrastructure, but up to this level in the layer up to the guest operating system, I'll take care of the rest, right? So we are seeing customers who are coming to us with various requirements in terms of saying, I can do up to here, but you take all of this pain point away from me or you do everything for me. >>It all depends on the customer. So we do have wide interest. So our, I would say our products and the portfolio set in Apex is expanding and we are also learning, right? We are getting a lot of feedback from customers in terms of what they would like to see on some of these offers. Like the example we just talked about in terms of making some of the software IP available on a public cloud where they'll look at Dell as a software player, right? That's also is absolutely critical. So I think we are giving customers a lot of choices. Our, I would say the choice factor and you know, we are democratizing, like you said, expanding in terms of the customer choices. And I >>Think it's, we're almost outta our time, but I do wanna be sure we get to Dell validated designs, which you've mentioned a couple of times. How specific are the, well, what's the purpose of these designs? How specific are they? >>They, they are, I mean I, you know, so the most of these valid, I mean, again, we look at these industries, right? And we look at understanding exactly how would, I mean we have huge embedded base of customers utilizing HPC across our ecosystem in Dell, right? So a lot of them are CapEx customers. We actually do have an active customer profile. So these validated designs takes into account a lot of customer feedback, lot of partner feedback in terms of how they utilize this. And when you build these solutions, which are kind of end to end and integrated, you need to start anchoring on something, right? And a lot of these things have different characteristics. So these validated design basically prove to us that, you know, it gives a very good jump off point for customers. That's the way I look at it, right? So a lot of them will come to the table with, they don't come to the blank sheet of paper when they say, oh, you know what I'm, this, this is my characteristics of what I want. I think this is a great point for me to start from, right? 
So I think that gives them that, and plus it's the power of validation, really. We test, validate, and integrate, so they know it works. All of those are critical when you talk to customers. >>And you mentioned healthcare, you mentioned manufacturing, other design >>Factors? We just announced a validated design for financial services as well, I think a couple of days ago at the event. So yep, we are expanding all those DVDs so that we can give our customers a choice. >>We're out of time, Satish Iyer. Thank you so much for joining us. At the center of the move to subscription, to everything as a service, everything on a subscription basis, you really are on the leading edge of where your industry is going. Thanks for joining us. >>Thank you, Paul. Thank you, Dave. >>Paul Gillin with Dave Nicholson here from Supercomputing 22 in Dallas, wrapping up the show this afternoon. Stay with us; there's more to come soon.
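The ingress and egress economics that came up repeatedly in this conversation are easy to sanity-check with a back-of-the-envelope calculation. The sketch below uses an assumed per-gigabyte rate and an assumed monthly data volume; neither figure is a quote from any provider's price list.

    # Rough egress-cost model for pulling simulation results back on-prem.
    # Both inputs are assumptions for illustration only.
    EGRESS_RATE_PER_GB = 0.09        # assumed $/GB after any free tier
    monthly_results_tb = 50          # assumed TB of results pulled back per month

    monthly_gb = monthly_results_tb * 1024
    monthly_egress_cost = monthly_gb * EGRESS_RATE_PER_GB
    annual_egress_cost = monthly_egress_cost * 12

    print(f"Monthly egress: ~${monthly_egress_cost:,.0f}")
    print(f"Annual egress:  ~${annual_egress_cost:,.0f}")
    # At these assumed rates, ~50 TB/month of egress alone runs on the order
    # of $55K per year before any compute or storage; one reason data-heavy
    # HPC workloads often stay on-prem or in a colo next to the data.

Swapping in a provider's actual published rates and your own data volumes turns this into a first-pass input for the CapEx-versus-Apex crossover analysis discussed above.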
The Truth About MySQL HeatWave
>>When Oracle acquired MySQL via the Sun acquisition, nobody really thought the company would put much effort into the platform, preferring to put all the wood behind its leading Oracle database arrow, pun intended. But two years ago, Oracle surprised many folks by announcing MySQL HeatWave, a new database as a service with a massively parallel, hybrid columnar, in-memory architecture that brings together transactional and analytic data in a single platform. Welcome to our latest database power panel on theCUBE. My name is Dave Vellante, and today we're gonna discuss Oracle's MySQL HeatWave with a who's who of cloud database industry analysts. Holger Mueller is with Constellation Research. Mark Staimer is the Dragon Slayer and Wikibon contributor. And Ron Westfall is with Futurum Research. Gentlemen, welcome back to theCUBE. Always a pleasure to have you on. Thanks for having us. Great to be here. >>So we've had a number of deep dive interviews on theCUBE with Nipun Agarwal. You guys know him? He's the senior vice president of MySQL HeatWave Development at Oracle. I think you just saw him at Oracle CloudWorld, and he's come on to describe what I'll call shock and awe feature additions to HeatWave. The company is clearly putting R&D into the platform, and I think at CloudWorld we saw the fifth major release since 2020, when they first announced MySQL HeatWave. Just listing a few: they've brought in analytics, machine learning, and Autopilot for machine learning, which layers automation onto the basic OLTP functionality of the database. And it's been interesting to watch Oracle's converged database strategy. We've contrasted that amongst ourselves. Love to get your thoughts on Amazon's "right tool for the right job" approach. >>Are they gonna have to change that? You know, Amazon's got the specialized databases; both companies are doing well. It just shows there are a lot of ways to skin a cat, 'cause you see traction in the market for both approaches. So today we're gonna focus on the latest HeatWave announcements, and we're gonna talk about multi-cloud with a native MySQL HeatWave implementation, which is available on AWS, and MySQL HeatWave for Azure via the Oracle-Microsoft interconnect, this kind of cool hybrid action they've got going. Sometimes we call it supercloud. And then we're gonna dive into MySQL HeatWave Lakehouse, which allows users to process and query data across MySQL databases and HeatWave databases, as well as object stores. HeatWave has been announced on AWS and Azure, they're available now, and Lakehouse I believe is in beta and I think is coming out in the second half of next year. All of our guests are fresh off of Oracle CloudWorld in Las Vegas, so they've got the latest scoop. Guys, I'm done talking. Let's get into it. Mark, maybe you could start us off. What's your opinion of MySQL HeatWave's competitive position? When you think about what AWS is doing, what Google is doing, we heard Google Cloud Next recently, we heard about all their data innovations; obviously Azure's got a big portfolio; Snowflake's doing well in the market. What's your take? >>Well, first let's look at it from the point of view that AWS is the market leader in cloud and cloud services. They own somewhere between 30 and 50% of the market, depending on who you read. And then you have Azure as number two, and after that it falls off.
There's GCP, Google Cloud Platform, which is further down the list, and then Oracle and IBM and Alibaba. So when you look at AWS and Azure and say, hey, these are the market leaders in the cloud, then you start looking at it and saying: if I am going to provide a service that competes with the services they have, and I can make it available in their cloud, it means that I can be more competitive. And if I'm compelling, and compelling means at least twice the performance or functionality, or both, at half the price, I should be able to gain market share. >>And that's what Oracle's done. They've taken a superior product in MySQL HeatWave, which is faster, lower cost, and does more for a lot less at the end of the day, and they make it available to the users of those clouds. You avoid this little thing called egress fees, you avoid the issue of having to migrate from one cloud to another, and suddenly you have a very compelling offer. So I look at what Oracle's doing with MySQL and it feels like, I'm gonna use the term, a flanking maneuver on their competition. They're offering a better service on the competitors' own platforms. >>All right, thank you for that. Holger, we've seen this sort of cadence, I referenced it up front a little bit: they sat on MySQL for a decade, then all of a sudden we see this rush of announcements. Why did it take so long? And more importantly, is Oracle developing the right features that cloud database customers are looking for, in your view? >>Yeah, great question. But first of all, in your intro you said it's added analytics, right? Analytics is kind of a marketing buzzword; reports can be analytics, right? The interesting thing is what they did first: they crossed the chasm between OLTP and OLAP, in the same database. That's a major engineering feat, very much what customers want, and it's all about creating value for customers, which I think is why they go into multi-cloud and why they add these capabilities. And certainly with the AI capabilities it's getting into an autonomous, self-driving field, and now with the Lakehouse capabilities they're meeting customers where they are, like Mark has talked about with the egress costs in the cloud. So that's a significant advantage, creating value for customers, and that's what matters at the end of the day. >>And I believe strongly that long term it's gonna be the ones who create better value for customers who will get more of their money. From that perspective, why did it take them so long? I think it's a great question. I think it's largely down to who leads the product; you mentioned the gentleman, Nipun. I used to build products too, so maybe I'm fooling myself a little here, but that made the difference in my view. Since he's been in charge, he's been building things faster than the rest of the competition in the MySQL space, which in hindsight, we thought it was a hot and smoking innovation space, was actually a little complacent when it comes to the traditional borders of where people think things are separated: between OLTP and OLAP, or JSON support as an example, structured documents versus unstructured documents or databases. All of that has been collapsed and brought together to build a more powerful database for customers.
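To make the "OLTP and OLAP in the same database" point concrete, here is a minimal sketch of how an existing MySQL table is exposed to the HeatWave in-memory engine and then queried in place, with no ETL pipeline. The connection details and the orders table are hypothetical placeholders; the SECONDARY_ENGINE and SECONDARY_LOAD statements are the documented HeatWave mechanism, but check the current MySQL HeatWave documentation before relying on the exact syntax.

    import mysql.connector  # pip install mysql-connector-python

    # Hypothetical connection to a MySQL HeatWave DB system endpoint.
    cnx = mysql.connector.connect(
        host="heatwave-endpoint.example.com",
        user="app_user",
        password="app_password",
        database="sales",
    )
    cur = cnx.cursor()

    # 1. OLTP as usual: the same table keeps serving transactional writes.
    cur.execute(
        "INSERT INTO orders (customer_id, order_total, order_date) "
        "VALUES (%s, %s, NOW())",
        (42, 199.99),
    )
    cnx.commit()

    # 2. One-time step: offload the table to the HeatWave (RAPID) engine.
    cur.execute("ALTER TABLE orders SECONDARY_ENGINE = RAPID")
    cur.execute("ALTER TABLE orders SECONDARY_LOAD")

    # 3. Analytic query over the same data; with use_secondary_engine=ON the
    #    optimizer pushes it to HeatWave automatically, no ETL copy needed.
    cur.execute(
        "SELECT customer_id, SUM(order_total) AS lifetime_value "
        "FROM orders GROUP BY customer_id ORDER BY lifetime_value DESC LIMIT 10"
    )
    for customer_id, lifetime_value in cur:
        print(customer_id, lifetime_value)

    cur.close()
    cnx.close()

The design point the panel keeps returning to is that step 2 is a one-time flag on the table, not a pipeline that has to be built and maintained.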
>>So I mean it's certainly, you know, when, when Oracle talks about the competitors, you know, the competitors are in the, I always say they're, if the Oracle talks about you and knows you're doing well, so they talk a lot about aws, talk a little bit about Snowflake, you know, sort of Google, they have partnerships with Azure, but, but in, so I'm presuming that the response in MySQL heatwave was really in, in response to what they were seeing from those big competitors. But then you had Maria DB coming out, you know, the day that that Oracle acquired Sun and, and launching and going after the MySQL base. So it's, I'm, I'm interested and we'll talk about this later and what you guys think AWS and Google and Azure and Snowflake and how they're gonna respond. But, but before I do that, Ron, I want to ask you, you, you, you can get, you know, pretty technical and you've probably seen the benchmarks. >>I know you have Oracle makes a big deal out of it, publishes its benchmarks, makes some transparent on on GI GitHub. Larry Ellison talked about this in his keynote at Cloud World. What are the benchmarks show in general? I mean, when you, when you're new to the market, you gotta have a story like Mark was saying, you gotta be two x you know, the performance at half the cost or you better be or you're not gonna get any market share. So, and, and you know, oftentimes companies don't publish market benchmarks when they're leading. They do it when they, they need to gain share. So what do you make of the benchmarks? Have their, any results that were surprising to you? Have, you know, they been challenged by the competitors. Is it just a bunch of kind of desperate bench marketing to make some noise in the market or you know, are they real? What's your view? >>Well, from my perspective, I think they have the validity. And to your point, I believe that when it comes to competitor responses, that has not really happened. Nobody has like pulled down the information that's on GitHub and said, Oh, here are our price performance results. And they counter oracles. In fact, I think part of the reason why that hasn't happened is that there's the risk if Oracle's coming out and saying, Hey, we can deliver 17 times better query performance using our capabilities versus say, Snowflake when it comes to, you know, the Lakehouse platform and Snowflake turns around and says it's actually only 15 times better during performance, that's not exactly an effective maneuver. And so I think this is really to oracle's credit and I think it's refreshing because these differentiators are significant. We're not talking, you know, like 1.2% differences. We're talking 17 fold differences, we're talking six fold differences depending on, you know, where the spotlight is being shined and so forth. >>And so I think this is actually something that is actually too good to believe initially at first blush. If I'm a cloud database decision maker, I really have to prioritize this. I really would know, pay a lot more attention to this. And that's why I posed the question to Oracle and others like, okay, if these differentiators are so significant, why isn't the needle moving a bit more? And it's for, you know, some of the usual reasons. One is really deep discounting coming from, you know, the other players that's really kind of, you know, marketing 1 0 1, this is something you need to do when there's a real competitive threat to keep, you know, a customer in your own customer base. 
Plus there is the usual fear and uncertainty about moving from one platform to another. But I think the traction, the momentum, is shifting in Oracle's favor. I think we saw that in the Q1 results, for example, where Oracle Cloud grew 44% and generated $4.8 billion in revenue, if I recall correctly. All of this demonstrates that Oracle is making many of the right moves, and publishing these figures for anybody to look at from their own perspective is, I think, good for the market. I think it's just gonna continue to pay dividends for Oracle down the horizon as competition intensifies. >>Dave, can I interject something on what Ron just said there? Yeah, please go ahead. A couple things here. One, discounting, which is a common practice when you have a real threat, as Ron pointed out, isn't going to help much in this situation, simply because you can't discount to the point where you improve your performance, and the performance is a huge differentiator. You may be able to get your price down, but the problem most of them have is they don't have an integrated product or service. They don't have an integrated OLTP, OLAP, ML, and data lake. Even if you cut out two of them, they don't have any of them integrated. They have multiple services that require separate integration, and that can't be overcome with discounting. And you have to pay for each one of these. And oh, by the way, as you grow, the discounts go away. So that's a minor but important detail. >>So that's a TCO question, Mark, right? And I know you look at this a lot. If I had that kind of price performance advantage, I would be pounding TCO, especially if I need two separate databases to do the job that one can do. The TCO numbers are gonna be off the chart, or maybe down the chart, which is what you want. Have you looked at this, and how does it compare with the big cloud guys, for example? >>I've looked at it in depth. In fact, I'm working on another TCO study in this arena, but you can find one on Wikibon in which I compared TCO for MySQL HeatWave versus Aurora plus Redshift plus ML plus Glue. I've compared it against GCP's services, Azure's services, Snowflake with other services. And there's just no comparison. The TCO differences are huge. More importantly, the TCO per performance is huge. We're talking in some cases multiple orders of magnitude, but at least an order of magnitude difference. So discounting isn't gonna help you much at the end of the day. It's only going to lower your cost a little, but it doesn't improve the automation, it doesn't improve the performance, it doesn't improve the time to insight, it doesn't improve all those things you want out of a database, or multiple databases, because you >>Can't discount yourself to a higher value proposition. >>So what about, I wonder, Holger, if you could chime in on the developer angle. You follow that market. How do these innovations from HeatWave play there? I think you used the term developer velocity; I've heard you use it before. Look, Oracle owns Java, okay, the most popular programming language in the world, blah, blah, blah. But does it have the minds and hearts of developers, and where does HeatWave fit into that equation? >>I think HeatWave is quickly gaining mindshare on the developer side, right?
It wasn't traditionally that way; there's a long-standing mistrust of Oracle among developers, going back to what happens to open source when it gets acquired, like in the case of Oracle and Java, and MySQL itself, right? But we know it's not a good competitive strategy to bank on Oracle screwing up, because that hasn't worked, not on Java and not on MySQL. And for developers, once you get to know a technology product and you can do more with it, it becomes kind of like a Swiss army knife: you can build more use cases, you can build more powerful applications. That's super important, because you don't have to get certified in multiple databases. You are fast at getting things done, you achieve higher developer velocity, and the managers are happy because they don't have to license more things, send you to more trainings, or carry more risk of something not being delivered, right? >>So we see the suite versus best-of-breed play happening here, which was already happening before with Oracle's flagship database versus Amazon, as an example. And the interesting thing is, Oracle was always a one-database company, there can be only one, and they're now genuinely talking about HeatWave as well: a two-database company with different market spaces, but the same value proposition of integrating more things very quickly to have a universal database, what they call the converged database, for all the needs of an enterprise to run its application use cases. And that's what's attractive to developers. >>It's ironic, isn't it? The rumor was that TK, Thomas Kurian, left Oracle because he wanted to put the Oracle database on other clouds and other places. Maybe that was the rift; I'm sure there were other things. But Oracle clearly is now trying to expand its TAM, Ron, with HeatWave into AWS and into Azure. How do you think Oracle's gonna do? You were at CloudWorld, what was the sentiment from customers and the independent analysts? Is this just Oracle trying to screw with the competition and create a little diversion, or is this serious business for Oracle? What do you think? >>No, I think it has legs. I think it's definitely, again, a credit to Oracle's overall ability to differentiate not only MySQL HeatWave but its overall portfolio. The fact that they have the alliance with Azure in place is definitely demonstrating their commitment to meeting the multi-cloud needs of their customers, as is the fact that they're now offering MySQL capabilities within AWS natively, and that it can outperform AWS's own offering. And I think this is all demonstrating that Oracle is not letting up, not resting on its laurels. Clearly we are living in a multi-cloud world, so why not make it easier for customers to use cloud databases according to their own specific needs? And to Holger's point, I think that definitely aligns with being able to bring on more application developers to leverage these capabilities.
>>I think one important announcement that's related to all this was the JSON relational duality capabilities where now it's a lot easier for application developers to use a language that they're very familiar with a JS O and not have to worry about going into relational databases to store their J S O N application coding. So this is, I think an example of the innovation that's enhancing the overall Oracle portfolio and certainly all the work with machine learning is definitely paying dividends as well. And as a result, I see Oracle continue to make these inroads that we pointed to. But I agree with Mark, you know, the short term discounting is just a stall tag. This is not denying the fact that Oracle is being able to not only deliver price performance differentiators that are dramatic, but also meeting a wide range of needs for customers out there that aren't just limited device performance consideration. >>Being able to support multi-cloud according to customer needs. Being able to reach out to the application developer community and address a very specific challenge that has plagued them for many years now. So bring it all together. Yeah, I see this as just enabling Oracles who ring true with customers. That the customers that were there were basically all of them, even though not all of them are going to be saying the same things, they're all basically saying positive feedback. And likewise, I think the analyst community is seeing this. It's always refreshing to be able to talk to customers directly and at Oracle cloud there was a litany of them and so this is just a difference maker as well as being able to talk to strategic partners. The nvidia, I think partnerships also testament to Oracle's ongoing ability to, you know, make the ecosystem more user friendly for the customers out there. >>Yeah, it's interesting when you get these all in one tools, you know, the Swiss Army knife, you expect that it's not able to be best of breed. That's the kind of surprising thing that I'm hearing about, about heatwave. I want to, I want to talk about Lake House because when I think of Lake House, I think data bricks, and to my knowledge data bricks hasn't been in the sites of Oracle yet. Maybe they're next, but, but Oracle claims that MySQL, heatwave, Lakehouse is a breakthrough in terms of capacity and performance. Mark, what are your thoughts on that? Can you double click on, on Lakehouse Oracle's claims for things like query performance and data loading? What does it mean for the market? Is Oracle really leading in, in the lake house competitive landscape? What are your thoughts? >>Well, but name in the game is what are the problems you're solving for the customer? More importantly, are those problems urgent or important? If they're urgent, customers wanna solve 'em. Now if they're important, they might get around to them. So you look at what they're doing with Lake House or previous to that machine learning or previous to that automation or previous to that O L A with O ltp and they're merging all this capability together. If you look at Snowflake or data bricks, they're tacking one problem. You look at MyQ heat wave, they're tacking multiple problems. So when you say, yeah, their queries are much better against the lake house in combination with other analytics in combination with O ltp and the fact that there are no ETLs. So you're getting all this done in real time. So it's, it's doing the query cross, cross everything in real time. 
>>You're solving multiple user and developer problems, you're increasing their ability to get insight faster, you're having shorter response times. So yeah, they really are solving urgent problems for customers. And by putting it where the customer lives, this is the brilliance of actually being multicloud. And I know I'm backing up here a second, but by making it work in AWS and Azure where people already live, where they already have applications, what they're saying is, we're bringing it to you. You don't have to come to us to get these, these benefits, this value overall, I think it's a brilliant strategy. I give Nip and Argo wallet a huge, huge kudos for what he's doing there. So yes, what they're doing with the lake house is going to put notice on data bricks and Snowflake and everyone else for that matter. Well >>Those are guys that whole ago you, you and I have talked about this. Those are, those are the guys that are doing sort of the best of breed. You know, they're really focused and they, you know, tend to do well at least out of the gate. Now you got Oracle's converged philosophy, obviously with Oracle database. We've seen that now it's kicking in gear with, with heatwave, you know, this whole thing of sweets versus best of breed. I mean the long term, you know, customers tend to migrate towards suite, but the new shiny toy tends to get the growth. How do you think this is gonna play out in cloud database? >>Well, it's the forever never ending story, right? And in software right suite, whereas best of breed and so far in the long run suites have always won, right? So, and sometimes they struggle again because the inherent problem of sweets is you build something larger, it has more complexity and that means your cycles to get everything working together to integrate the test that roll it out, certify whatever it is, takes you longer, right? And that's not the case. It's a fascinating part of what the effort around my SQL heat wave is that the team is out executing the previous best of breed data, bringing us something together. Now if they can maintain that pace, that's something to to, to be seen. But it, the strategy, like what Mark was saying, bring the software to the data is of course interesting and unique and totally an Oracle issue in the past, right? >>Yeah. But it had to be in your database on oci. And but at, that's an interesting part. The interesting thing on the Lake health side is, right, there's three key benefits of a lakehouse. The first one is better reporting analytics, bring more rich information together, like make the, the, the case for silicon angle, right? We want to see engagements for this video, we want to know what's happening. That's a mixed transactional video media use case, right? Typical Lakehouse use case. The next one is to build more rich applications, transactional applications which have video and these elements in there, which are the engaging one. And the third one, and that's where I'm a little critical and concerned, is it's really the base platform for artificial intelligence, right? To run deep learning to run things automatically because they have all the data in one place can create in one way. >>And that's where Oracle, I know that Ron talked about Invidia for a moment, but that's where Oracle doesn't have the strongest best story. Nonetheless, the two other main use cases of the lake house are very strong, very well only concern is four 50 terabyte sounds long. It's an arbitrary limitation. Yeah, sounds as big. 
So for the start, and it's the first word, they can make that bigger. You don't want your lake house to be limited and the terabyte sizes or any even petabyte size because you want to have the certainty. I can put everything in there that I think it might be relevant without knowing what questions to ask and query those questions. >>Yeah. And you know, in the early days of no schema on right, it just became a mess. But now technology has evolved to allow us to actually get more value out of that data. Data lake. Data swamp is, you know, not much more, more, more, more logical. But, and I want to get in, in a moment, I want to come back to how you think the competitors are gonna respond. Are they gonna have to sort of do a more of a converged approach? AWS in particular? But before I do, Ron, I want to ask you a question about autopilot because I heard Larry Ellison's keynote and he was talking about how, you know, most security issues are human errors with autonomy and autonomous database and things like autopilot. We take care of that. It's like autonomous vehicles, they're gonna be safer. And I went, well maybe, maybe someday. So Oracle really tries to emphasize this, that every time you see an announcement from Oracle, they talk about new, you know, autonomous capabilities. It, how legit is it? Do people care? What about, you know, what's new for heatwave Lakehouse? How much of a differentiator, Ron, do you really think autopilot is in this cloud database space? >>Yeah, I think it will definitely enhance the overall proposition. I don't think people are gonna buy, you know, lake house exclusively cause of autopilot capabilities, but when they look at the overall picture, I think it will be an added capability bonus to Oracle's benefit. And yeah, I think it's kind of one of these age old questions, how much do you automate and what is the bounce to strike? And I think we all understand with the automatic car, autonomous car analogy that there are limitations to being able to use that. However, I think it's a tool that basically every organization out there needs to at least have or at least evaluate because it goes to the point of it helps with ease of use, it helps make automation more balanced in terms of, you know, being able to test, all right, let's automate this process and see if it works well, then we can go on and switch on on autopilot for other processes. >>And then, you know, that allows, for example, the specialists to spend more time on business use cases versus, you know, manual maintenance of, of the cloud database and so forth. So I think that actually is a, a legitimate value proposition. I think it's just gonna be a case by case basis. Some organizations are gonna be more aggressive with putting automation throughout their processes throughout their organization. Others are gonna be more cautious. But it's gonna be, again, something that will help the overall Oracle proposition. And something that I think will be used with caution by many organizations, but other organizations are gonna like, hey, great, this is something that is really answering a real problem. And that is just easing the use of these databases, but also being able to better handle the automation capabilities and benefits that come with it without having, you know, a major screwup happened and the process of transitioning to more automated capabilities. >>Now, I didn't attend cloud world, it's just too many red eyes, you know, recently, so I passed. 
But one of the things I like to do at those events is talk to customers, in the spirit of the truth: you have the hallway track, you talk to customers, and they say, hey, here's the good, the bad, and the ugly. So did you guys talk to any MySQL HeatWave customers at CloudWorld, and what did you learn? Mark, did you have any luck having some private conversations? >>Yeah, I had quite a few private conversations. One thing before I get to that: I want to disagree with one point Ron made. I do believe there are customers out there buying the HeatWave service because of Autopilot, because Autopilot is really revolutionary in many ways for the MySQL developer. It auto-provisions, it auto parallel loads, it does auto data placement and auto shape prediction. It can tell you which machine learning models are going to give you your best results. And candidly, I've yet to meet a DBA who didn't wanna give up pedantic tasks that are a pain in the kahoo, which they'd rather not do, as long as they're done right for them. So yes, I do think people are buying it because of Autopilot, and that's based on some of the conversations I had with customers at Oracle CloudWorld. >>In fact, it was like: yeah, we get fantastic performance, but this really makes my life easier, and I've yet to meet a DBA who didn't want to make their life easier. And it does. So yeah, I've talked to a few of them. They were excited. I asked them if they ran into any bugs, whether there were any difficulties in moving to it, and the answer was no in both cases. It's interesting to note that MySQL is the most popular database on the planet. Well, some will argue that it's neck and neck with SQL Server, but if you add in MariaDB and Percona, which are forks of MySQL, then by far and away it's the most popular. And as a result, just about everybody typically has a MySQL database somewhere in their organization. So this is a brilliant situation for anybody going after MySQL, but especially for HeatWave. And the customers I talked to love it. I didn't find anybody complaining about it. >>What about the migration? We talked about TCO earlier. Does your TCO analysis include the migration cost, or do you kind of conveniently leave that out, or what? >>Well, when you look at migration costs, there are different kinds of migration costs. By the way, the worst job in the data center is the data migration manager. Forget it, no other job is as bad as that one. You get no attaboys for doing it right, and when you screw up, oh boy. So in real terms, anything that can limit data migration is a good thing. And when you look at the Lakehouse, that limits data migration. If you're already a MySQL user, this is pure MySQL as far as you're concerned. It's just a simple transition from one to the other. You may wanna make sure nothing broke, that all your tables are correct and your schemas are okay, but it's all the same. So it's a simple migration, pretty much a non-event, right? When you migrate data from an OLTP to an OLAP system, that's an ETL, and that's gonna take time. >>But you don't have to do that with MySQL HeatWave.
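The in-database machine learning piece Mark alludes to, and that the panel turns to next, is exposed as stored procedures, so the training data never leaves the database. The sketch below is illustrative only: the census schema, column names, and connection details are hypothetical, and the exact signatures of the sys.ML_TRAIN, sys.ML_MODEL_LOAD, and sys.ML_PREDICT_TABLE routines vary by HeatWave release, so treat this as an assumption-laden outline and verify against the current HeatWave AutoML documentation.

    import mysql.connector  # pip install mysql-connector-python

    # Hypothetical HeatWave endpoint; training data already lives in-database.
    cnx = mysql.connector.connect(
        host="heatwave-endpoint.example.com",
        user="ml_user",
        password="ml_password",
        database="census",
    )
    cur = cnx.cursor()

    # Train a classification model in-database; AutoML chooses the algorithm
    # and hyperparameters and returns a handle to the stored model.
    cur.execute(
        "CALL sys.ML_TRAIN('census.income_train', 'income', "
        "JSON_OBJECT('task', 'classification'), @model)"
    )

    # Load the trained model, then score a whole table without exporting data.
    # (The trailing options argument is omitted or NULL depending on version.)
    cur.execute("CALL sys.ML_MODEL_LOAD(@model, NULL)")
    cur.execute(
        "CALL sys.ML_PREDICT_TABLE('census.income_test', @model, "
        "'census.income_predictions', NULL)"
    )

    cur.execute("SELECT COUNT(*) FROM census.income_predictions")
    print("rows scored:", cur.fetchone()[0])

    cur.close()
    cnx.close()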
So that's gone when you start talking about machine learning, again, you may have an etl, you may not, depending on the circumstances, but again, with my SQL heat wave, you don't, and you don't have duplicate storage, you don't have to copy it from one storage container to another to be able to be used in a different database, which by the way, ultimately adds much more cost than just the other service. So yeah, I looked at the migration and again, the users I talked to said it was a non-event. It was literally moving from one physical machine to another. If they had a new version of MySEQ running on something else and just wanted to migrate it over or just hook it up or just connect it to the data, it worked just fine. >>Okay, so every day it sounds like you guys feel, and we've certainly heard this, my colleague David Foyer, the semi-retired David Foyer was always very high on heatwave. So I think you knows got some real legitimacy here coming from a standing start, but I wanna talk about the competition, how they're likely to respond. I mean, if your AWS and you got heatwave is now in your cloud, so there's some good aspects of that. The database guys might not like that, but the infrastructure guys probably love it. Hey, more ways to sell, you know, EC two and graviton, but you're gonna, the database guys in AWS are gonna respond. They're gonna say, Hey, we got Redshift, we got aqua. What's your thoughts on, on not only how that's gonna resonate with customers, but I'm interested in what you guys think will a, I never say never about aws, you know, and are they gonna try to build, in your view a converged Oola and o LTP database? You know, Snowflake is taking an ecosystem approach. They've added in transactional capabilities to the portfolio so they're not standing still. What do you guys see in the competitive landscape in that regard going forward? Maybe Holger, you could start us off and anybody else who wants to can chime in, >>Happy to, you mentioned Snowflake last, we'll start there. I think Snowflake is imitating that strategy, right? That building out original data warehouse and the clouds tasking project to really proposition to have other data available there because AI is relevant for everybody. Ultimately people keep data in the cloud for ultimately running ai. So you see the same suite kind of like level strategy, it's gonna be a little harder because of the original positioning. How much would people know that you're doing other stuff? And I just, as a former developer manager of developers, I just don't see the speed at the moment happening at Snowflake to become really competitive to Oracle. On the flip side, putting my Oracle hat on for a moment back to you, Mark and Iran, right? What could Oracle still add? Because the, the big big things, right? The traditional chasms in the database world, they have built everything, right? >>So I, I really scratched my hat and gave Nipon a hard time at Cloud world say like, what could you be building? Destiny was very conservative. Let's get the Lakehouse thing done, it's gonna spring next year, right? And the AWS is really hard because AWS value proposition is these small innovation teams, right? That they build two pizza teams, which can be fit by two pizzas, not large teams, right? And you need suites to large teams to build these suites with lots of functionalities to make sure they work together. 
They're consistent, they have the same UX on the administration side, they can consume the same way, they have the same API registry, can't even stop going where the synergy comes to play over suite. So, so it's gonna be really, really hard for them to change that. But AWS super pragmatic. They're always by themselves that they'll listen to customers if they learn from customers suite as a proposition. I would not be surprised if AWS trying to bring things closer together, being morely together. >>Yeah. Well how about, can we talk about multicloud if, if, again, Oracle is very on on Oracle as you said before, but let's look forward, you know, half a year or a year. What do you think about Oracle's moves in, in multicloud in terms of what kind of penetration they're gonna have in the marketplace? You saw a lot of presentations at at cloud world, you know, we've looked pretty closely at the, the Microsoft Azure deal. I think that's really interesting. I've, I've called it a little bit of early days of a super cloud. What impact do you think this is gonna have on, on the marketplace? But, but both. And think about it within Oracle's customer base, I have no doubt they'll do great there. But what about beyond its existing install base? What do you guys think? >>Ryan, do you wanna jump on that? Go ahead. Go ahead Ryan. No, no, no, >>That's an excellent point. I think it aligns with what we've been talking about in terms of Lakehouse. I think Lake House will enable Oracle to pull more customers, more bicycle customers onto the Oracle platforms. And I think we're seeing all the signs pointing toward Oracle being able to make more inroads into the overall market. And that includes garnishing customers from the leaders in, in other words, because they are, you know, coming in as a innovator, a an alternative to, you know, the AWS proposition, the Google cloud proposition that they have less to lose and there's a result they can really drive the multi-cloud messaging to resonate with not only their existing customers, but also to be able to, to that question, Dave's posing actually garnish customers onto their platform. And, and that includes naturally my sequel but also OCI and so forth. So that's how I'm seeing this playing out. I think, you know, again, Oracle's reporting is indicating that, and I think what we saw, Oracle Cloud world is definitely validating the idea that Oracle can make more waves in the overall market in this regard. >>You know, I, I've floated this idea of Super cloud, it's kind of tongue in cheek, but, but there, I think there is some merit to it in terms of building on top of hyperscale infrastructure and abstracting some of the, that complexity. And one of the things that I'm most interested in is industry clouds and an Oracle acquisition of Cerner. I was struck by Larry Ellison's keynote, it was like, I don't know, an hour and a half and an hour and 15 minutes was focused on healthcare transformation. Well, >>So vertical, >>Right? And so, yeah, so you got Oracle's, you know, got some industry chops and you, and then you think about what they're building with, with not only oci, but then you got, you know, MyQ, you can now run in dedicated regions. You got ADB on on Exadata cloud to customer, you can put that OnPrem in in your data center and you look at what the other hyperscalers are, are doing. I I say other hyperscalers, I've always said Oracle's not really a hyperscaler, but they got a cloud so they're in the game. 
But you can't get BigQuery on-prem; you look at Outposts, it's very limited in terms of the database support, and again, that will evolve. But now Oracle has announced Alloy: we can white-label their cloud. So I'm interested in what you guys think about these moves, especially the industry cloud. We see Walmart doing sort of their own cloud, you've got Goldman Sachs doing a cloud. What do you think about that, and what role does Oracle play? Any thoughts? >>Yeah, let me jump on that for a moment. Especially with MySQL, by making that available in multiple clouds, what they're doing follows the philosophy they've had in the past with Cloud@Customer: taking the application and the data and putting it where the customer lives. If it's on premises, it's on premises. If it's in the cloud, it's in the cloud. By making MySQL HeatWave essentially plug-compatible with any other MySQL as far as your database is concerned, and then giving you that integration with OLAP and ML and the data lake and everything else, what you've got is a compelling offering. You're making it easier for the customer to use. So when I look at the difference between MySQL and the Oracle database, MySQL is going to capture more market share for them. >>You're not gonna find a lot of new users for the Oracle database. There are always gonna be new users, don't get me wrong, but it's not gonna be huge growth. Whereas MySQL HeatWave is probably gonna be a major growth engine for Oracle going forward, not just in their own cloud, but in AWS and in Azure, and on premises over time; it'll eventually get there. It's not there now, but it will, and they're doing the right thing on that basis. They're taking the services, and when you talk about multicloud, making them available where the customer wants them, not forcing customers to go where you want them, if that makes sense. And as far as where they're going in the future, I think they're gonna take a page out of what they've done with the Oracle database. They'll add things like JSON and XML and time series and spatial; over time they'll make it a complete converged database, like they did with the Oracle database. The difference being the Oracle database will scale bigger, handle more transactions, and be somewhat faster. And MySQL will be for anyone who's not on the Oracle database; they're not stupid, that's for sure. >>They've done JSON already. Right. But I'll give you that they could add graph and time series, right. Yeah, that's absolutely right. That's >>A sort of a logical move, right? >>Right. But let's not kid ourselves, right? This has worked in Oracle's favor: ten to twenty times the amount of R&D that is in the MySQL space has been poured into trying to snatch workloads away from Oracle, starting with IBM 30 years ago, then Microsoft 20 years ago, and it didn't work, right? Database applications are extremely sticky when they run; you don't want to touch them, you just maintain and grow them. So that doesn't mean HeatWave is not an attractive offering, but it will be for net new things, right?
And what works in MySQL HeatWave's favor a little bit is that these are not the massive enterprise applications with tentacles everywhere, where you might only be running 30% on Oracle but the connections and the interfaces into that are like 70, 80% of your enterprise. >>You take that out and it's like the spaghetti ball, where you say, ah, no, I really don't want to do all that, right? You don't have that massive problem with the MySQL HeatWave kind of databases, which are smaller and more tactical in comparison. But still, I don't see them taking that much share. They will be growing because of an attractive value proposition, quickly, on the multi-cloud side, right? Though I think it's not really multi-cloud yet. If you give people the chance to run your offering on different clouds, fine, you can run it there. The multi-cloud advantage comes when the über offering comes out, which allows you to do things across those installations: I can migrate data, I can query data across clouds, something like Google has done with BigQuery Omni; I can run predictive models or even train models in different places and distribute them, right? And Oracle is paving the road for that by being available on these clouds. But the multi-cloud capability of a database that knows it's running on different clouds, that is still yet to be built. >>Yeah. And >>The problem with >>That: that's the supercloud concept that I floated, and I've always said Snowflake, with a single global instance, is sort of headed in that direction and maybe has a lead. What's the issue with that, Mark? >>Yeah, the problem with that version of multi-cloud is that clouds charge egress fees. As long as they charge egress fees to move data between clouds, it's gonna be very difficult to do a real multi-cloud implementation. Even Snowflake, which runs multi-cloud, has to pass the egress fees on to their customer when data moves between clouds, and that's really expensive. There is one customer I talked to who is beta testing MySQL HeatWave on AWS for them. The only reason they didn't want to do it until it was running on AWS is that the egress fees to move the data to OCI were so great that they couldn't afford it. Yeah, egress fees are the big issue, but >>Mark, the point might be that you want to route the query and only get the result set back, which is much tinier; that has been the answer before for low latency between the clouds, a problem which we sometimes still have but mostly don't have, right? And in general, with egress fees coming down, based on Oracle's general egress fee move, it's very hard to justify those, right? But it's not about moving data as the multi-cloud high-value use case. It's about doing intelligent things with that data: putting it into other places, replicating it, which is the same thing you said before, running remote queries on it, analyzing it, running AI on it, running AI models on it. That's the interesting thing. Administering it across clouds in the same way, taking things out, making sure compliance happens. Making sure, when Ron says, I don't want to be in an American cloud anymore, I want to be in a European cloud, that it gets migrated, right? Those are the interesting, high-value use cases which are really, really hard for an enterprise to program by hand with developers, and which they would love to have out of the box. That's the innovation yet to come; we have yet to see it.
But the first step to get there is that your software runs in multiple clouds, and that's what Oracle is doing so well with MySQL. >>Guys, amazing. >>Go ahead. >>An amazing amount of data knowledge and brain power in this market. Guys, I really want to thank you for coming on theCUBE. Ron, Holger, Marc, always a pleasure to have you on. Really appreciate your time. >>Well, with all these last names, we're very happy to have a Romance last name as moderator. Thanks, Dave, for moderating us. All right. >>We'll see you guys around. Safe travels to all, and thank you for watching this power panel, The Truth About MySQL HeatWave, on theCUBE, your leader in enterprise and emerging tech coverage.
Thijs Ebbers & Arno Vonk, ING | KubeCon + CloudNativeCon NA 2022
>>Good morning, brilliant humans. Good afternoon or good evening, depending on your time zone. My name is Savannah Peterson and I'm here live with theCUBE. We are at KubeCon in Detroit, Michigan. And joining me is my beautiful co-host, Lisa. How are you feeling, afternoon of day three? >>Afternoon of day three. We've had such great conversations. It's been fantastic. The momentum has just kept going like this. I love it. >>Yes. You know, sometimes we feel a little low at the end of a conference. Not today; I don't feel that way at all, which is very exciting. Just like the guests we have up for you next. Kind of an unexpected player when we think about technology; however, one of the themes is that every company is trying to be a software company, and I love that we're talking to ING. Joining us today are Thijs Ebbers and Arno Vonk. Welcome to the show, gentlemen. >>Thank you very much. Glad to be here. Thank you. >>Yes, it's wonderful. All the way in from Amsterdam; probably some of the farthest-flying folks here for this adventure. Starting off, what's going on with the shirts, guys? You match very well. Tell everyone. >>Well, these are our VR code shirts. VR code is basically our company's way to get people interested, as IT people, in banking. Actually, people don't think banking is a good place to work as an IT professional, but it is, and we are using this event, with these nice logos, to get IT attention. >>I love that. So let's actually talk about that for a second. Why is it such an exciting role to be working in technology at a company like ING, a traditional bank? >>ING is a challenging environment. How do you make an engineer happy? Basically, give them a problem to solve. We have lots and lots of problems to solve, so that makes it challenging, but also rewarding. And you can say a lot of things about banks, but looking at it from the IT perspective, we are doing amazing things in IT, and that's what we talked about. >>Can you tell us any of those amazing things, or are they secrets? >>We talked about them last Tuesday at the OpenShift Commons conference. We had two presentations. I presented, with my colleague, our journey over the last three years: what has ING done? Basically, building a secure container hosting platform, and how we deliver banking-grade workloads with cloud native technology. And our colleague actually showed it with a live demo, >>Awesome >>In person. So we were not just presenting. >>It's not all smoke and mirrors. >>It's not smoke and mirrors, and we're not presenting a fluffy marketing story. We are actually doing it today, and that's what we wanted to share here. >>Well, as consumers we expect we can access our banking on any device, 24 by 7. I want to be able to do all my transactions in a way that I know is secure. Obviously security is a huge thing there, but ING Bank has been around for a very long time. Talk about this financial institution as a software company; obviously a lot of challenges to solve, a lot of opportunity. What is it like working for a bank with that much history that's really now a tech company? >>Yes, it has really been changing from a bank to a tech company. We have a lot of developers and operators, and we deliver software. We run on-prem and we run in the public cloud. So we have a huge number of engineers and people around to make our software. Yes.
And I am responsible for the ING container hosting platform, and we deliver namespace as a service in a really, really secure environment. So all our developers in ING can request it, but they only get a namespace. That's very important there. >>They have resources and all sorts of things. >>Yeah. And they cannot access it directly; they can only access it through one API. >>So Lisa and I were chatting before we brought you up here. Namespace as a service: this is a newer term for us. Educate us. What does that mean? >>Basically it means we don't give a full cluster to our consumers, right? We only give them CPU, memory, and networking. That's all they need to host an application. Everything else we abstract away. And especially in a banking context, where compliance is a big thing, you don't need to do compliance for an entire Kubernetes cluster per developer. It really saves development time for the colleagues in the bank. >>It decreases the complexity of projects, which is a huge theme here, especially at scale. I can imagine; my gosh, you're serving so many different people, it probably saves you time. Let's talk about regulation. How challenging is that for you as technologists, balancing all the regulations around banking and FinTech? It's not like some of these wild-west industries where you can just go out and play and prototype and do whatever you want. There's a lot of >>Rules. There's a lot of rules. And the problem is you have legislation and you have the real world, right? And they're >>Not the same thing. >>You have to find something in between that both parties understand and can adhere to. So the challenge we had, basically, was that we had to write our own container security standards to prove that the things we were doing were the right things to be in control as a bank, because there was no market standard for container security. So we took some input from NIST, which did a lot of good work, and we added some things on top to be valid for a bank in Europe. That's what we did. And the nice thing is, today we tick all the boxes we defined back in 2019. (A minimal sketch of the namespace-as-a-service pattern appears after this exchange.) >>So I guess the rules are a little bit easier when you get to help define them. Yep. That feels like a very good strategic call. >>And they make sense, right? Because the hardest problem is trying to be compliant with something which doesn't make sense. >>Right. Arno, let's double-click on namespace as a service. You talked about what that is, but give us a little bit of information on why ING really believes this is the right approach for the company. >>It protects the security, so developers cannot do things they shouldn't. They cannot access their software anymore once it is running in production. And that is the most important thing: it is immutable, running on our platform. >>Excellent. Talk about both of you: have you both been at ING for a long time? >>I've been with ING since September 2001, so that's more than 20 years >>Now. Long time. Arno, what about you? >>Before 2000, already before. >>So both of you, comment on that; that's a long time. Talk about the culture of innovation at ING that lets you move at such speed and be groundbreaking in how you're using technology. What's the appetite like at the bank to embrace new and emerging technologies?
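Here is a minimal sketch of the "namespace as a service" idea described above, using the official Kubernetes Python client: the platform team creates only a namespace plus a resource quota, and the consuming team never receives cluster-wide rights. The team name, quota values, and labels are hypothetical placeholders, and a real platform like ING's adds much more (network policies, RBAC, admission control, audit); this shows the shape of the pattern, not ING's implementation.

```python
# Sketch: provision a namespace with a CPU/memory quota for one team.
# Assumes `pip install kubernetes` and a kubeconfig held by the platform
# automation, not by the requesting developer.
from kubernetes import client, config

def provision_namespace(team: str, cpu: str = "8", memory: str = "16Gi") -> None:
    config.load_kube_config()          # platform credentials, not the developer's
    core = client.CoreV1Api()

    # 1. The namespace is the only thing the team is given.
    core.create_namespace(
        client.V1Namespace(metadata=client.V1ObjectMeta(name=f"team-{team}"))
    )

    # 2. A quota caps what can run inside it: CPU, memory, pod count, nothing else to manage.
    quota = client.V1ResourceQuota(
        metadata=client.V1ObjectMeta(name="team-quota"),
        spec=client.V1ResourceQuotaSpec(
            hard={"requests.cpu": cpu, "requests.memory": memory, "pods": "50"}
        ),
    )
    core.create_namespaced_resource_quota(namespace=f"team-{team}", body=quota)

if __name__ == "__main__":
    provision_namespace("payments")    # hypothetical team name
```

In practice the request would come in through a pipeline or self-service API rather than a script, which is exactly what lets the developer receive the namespace, and nothing more.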
>>So basically, the mantra of the bank is to help our customers get a step ahead in life and in business. And we do that, one, with superior customer service, and secondly, with sustainability at the heart. So anything that contributes to those targets, you can go to your manager, and if you can make a good case for why it contributes, in most cases you get some time, some budget, or even some additional colleagues to help you out and give it a try. From a culture perspective, we are required to be open to trying things out before we reach production. Once you go to production, then we are back to being a bank, and you need to tick all the boxes to make really sure we are careful with our customers' data. Basically, we're still a bank, but a lot is possible. >>A lot is possible. And there's the customer on the other end who's expecting, like I said earlier, that they can access their data any time they want, do any transaction they want, and that the content delivered to them is relevant and secure. Obviously that's the biggest challenge, especially when we think about how many generations are alive today, and those who aren't tech savvy have challenges with that. Talk about the bank's dedication to ensuring, from a security perspective, that its customers don't have anything to worry about. >>There's always a thin line between security and the user experience. So ING, like every other bank, needs to make choices. Do we want real ease of use for customers and take the risk that somebody abuses it, or do we make it really, really secure and alienate part of our customer base? And that's an ongoing, that's a hard, >>It's a trade-off. >>A fine line. So it's really hard. The interesting part is that in the Netherlands we had some debates about banks closing down locations, but the moment we introduced our mobile app on iPads, the debates became a lot quieter, because a lot of elderly people couldn't work with an iPhone but turned out to be perfectly fine with a well-designed iPad app for their banking. >>Really? Okay. >>And that's already a learning from like 15 years ago. >>What was the product roadmap on that? I can imagine when you released a mobile app you weren't really thinking that. >>That was basically a happy coincidence. We just went out to design a very good mobile app, and then afterwards, looking at the statistics, we said, hey, who is using it this way? We had somebody signing on, I don't know the exact age, but it was somebody of 90-plus who signed on to use that mobile app. >>Wow. I mean, you really have all five generations living and working right now, and you're designing technology for all of them. Everybody has to go to the bank, whether we're fans of our bank or not, although now I'm thinking of ING as a bank in general, y'all have a very good attitude about it. What has kept you at the company for over 20 years? We see people move around, especially in this technology industry, every two to three years. Obviously you're in positions of leadership and they're taking good care of you, but I mean, multiple decades. Why have you stuck around? >>Well, first, I didn't have the same job at ING for two decades. I moved around the infrastructure domain.

I did storage initially, I did security, I did solution design, and in the end I ended up in enterprise architecture. So it's not like I was stuck 20 years in the same role. >>So every so many years you go up the ladder but also grow your own skill sets. Explore. Yeah. >>And basically I think that's what everybody should be thinking about these days. If you're in the cloud industry and you're good at it, you can earn quite a nice salary, but it also means you have some kind of obligation to society to make a difference. >>I wouldn't say that everybody feels that way. >>I need to make a difference with ING: a difference by being more available to our consumers and more secure for our consumers. I think that's what's driving me to stick with the company. >>What about you, Arno? >>Yes, for me it's very important that every two, three years I'm doing new things. I can work with the latest technology, so I stay really, really innovative. It is the place to be. >>Yeah, you sort of get that rotation every two to three years with the different tools you're using. Speaking of, here we are at KubeCon, we're talking cloud native, we're talking Kubernetes. Coming back to the regulations: do you think it's possible to get to banking-grade security with cloud native tech? >>Initially I said we would be at least as secure as traditional IT, but last Tuesday we proved we can get more secure than traditional IT. So yeah, definitely. Yes. >>Awesome. Sounds like you proved it to yourself too, which is really saying something. >>Well, we actually have pentest results, and of course I cannot divulge those, but they were pretty good. >>Can you define, I want to double-click on banking-grade security. Define what that is, and how could other industries aim to >>Hit that standard? I want security everywhere, especially at my bank. >>The architecture is zero privilege. You hear a lot about least privilege in all the security talks; that's not what you should be aiming for. Zero privilege is what you should be aiming for. And once you're in a zero-privilege environment, who can leak data? No natural person has access to it. Even if you have somebody invading your infrastructure, there are no privileges, so they cannot do privilege escalations. So the answer for me is really clear: if you are handling customer data and customer funds, aim for a zero-privilege architecture. >>What are you most excited about next? What's next for you guys, what's next for ING? What are we gonna be talking about when we're chatting to you right here at KubeCon next year, or in Amsterdam actually, since we're headed that way in the spring, which is fun? >>Happy to be your host in Amsterdam. >>The other way around! We're holding you to that. You've talked about how fun the culture is. Now you're gonna ask: she and I, we need the tee-shirts; we obviously need a matching outfit. >>Definitely, we'll arrange some T-shirts for you as well. Yeah, for me, two highlights from this conference. The first one was kcp. That can potentially be a paradigm change in how we deal with workloads on Kubernetes, so that's very interesting. I don't know if you'll see any implementations by next year, but it's definitely something.

>>Looks like, yes, we had them on the show as well, so it's very fun. I'm sure they'll be very flattered that you just said that. What about you, Arno, what got you most excited? >>The most important thing for me was talking to a lot of other people: hearing what they are thinking and how we go forward. So the community, talking to each other, and also the vendors and people, how we go forward. >>Yeah, that's been a big thing for us here on theCUBE, just the energy and the morale. The open source community is so collaborative; it creates an entirely different ethos. Arno, Thijs, thank you so much for being here. It's wonderful to have you and hear what ING is doing in the technology space. Lisa, always a pleasure to co-host with you. Of course. And thank you, Cube fans, for hanging out with us here on day three of KubeCon, live from Detroit, Michigan. My name is Savannah Peterson and we'll see you up next for a great chat coming soon.
Bhaskar Gorti, Platform9 | Cloud Native at Scale
>>Hey, welcome back everyone to Super Cloud 22. I'm John Furrier, host of theCUBE, here all day talking about the future of cloud. Where's it all going? Making it super. Multi-cloud is around the corner, and public cloud is winning, with private cloud on premise and edge right behind. Got a great guest here, Bhaskar Gorti, CEO of Platform9, just on the panel on Kubernetes: enabler or blocker. Welcome back. Great to have you on. >>Good to see you again. >>So "Kubernetes: blocker or enabler," with a question mark; I put that on the panel really to discuss the role of Kubernetes. Great conversation, and operations is impacted. What's interesting about what you guys are doing at Platform9? Your role there as CEO and the company's position, it's kind of like the world spun into the direction of Platform9 while you're at the helm. >>Right, absolutely. In fact, things are moving very well, and it was an insight for us to call ourselves the platform company eight years ago, right? So absolutely, whether you are doing it in public clouds or private clouds, the application world is moving very fast in trying to become digital and cloud native. There are many options for you to run the infrastructure. The biggest blocking factor now is having a unified platform, and that's where we come in. >>Bhaskar, we were talking before we came on stage here about your background, and we were kind of talking about the glory days in 2000, 2001 when the first ASPs, application service providers, came out. Kind of a SaaS vibe, but that was all kind of cloud-like. >>It wasn't. >>And web services started then too. So you saw that whole growth. Now fast forward 20, 22 years later to where we are now; when you look back from then to here, and all the different cycles? >>In fact, as we were talking offline, I was in one of those ASPs in the year 2000, where it was a novel concept to say we are providing software and a capability as a service, right? You sign up and start using it. I think a lot has changed since then. The tooling and the technology have really skyrocketed. The app development environment has taken off exceptionally well. There are many, many choices of infrastructure now, right? So I think things are in a way the same, but also extremely different. But more importantly, now, for any company, regardless of size, to be a digital native, to become a digital company, is extremely mission critical. It's no longer a nice-to-have; everybody's on the journey somewhere. >>Everyone is going through digital transformation here, even in a so-called downturn: a recession is upcoming, inflation's here. It's interesting. This is the first downturn in history where the hyperscale clouds have been pumping on all cylinders as an economic input. And if you look at the tech trends, GDP's down, but not tech. Nope. Because the pandemic showed everyone that digital transformation is here, and more spend and more growth is coming, even in tech. So this is a unique factor which proves that digital transformation is happening, and every company will need a super cloud. >>Everyone, every company, regardless of size, regardless of location, has to modernize their infrastructure. And modernizing infrastructure is not just, you know, new servers and new application tools. It's your approach, how you're serving your customers, how you're bringing agility into your organization. I think that is becoming a necessity for every enterprise to >>Survive.
I wanna get your thoughts on super cloud, because one of the things Dave, Alan, and I wanted to do with Super Cloud, and calling it that, was, I personally, and I know Dave as well, he can speak for himself, we didn't like "multi-cloud." Not because Amazon said don't call things multi-cloud; it just didn't feel right. Everyone has multiple clouds by default; if you're running productivity software, you have Azure and Office 365. But it wasn't truly distributed, it wasn't truly decentralized, it wasn't truly cloud enabled. It felt not ready for market yet. Yet public cloud is booming, and on-premise private cloud and edge are much more dynamic, more real. >>Yeah, I think that's the reason why we think super cloud is a better term than multi-cloud. Multi-cloud is more than one cloud, but they're disconnected. You have a productivity cloud, you have a Salesforce cloud, everyone has an internal cloud, right? But they're not connected. So you can say, okay, it's more than one cloud, so it's multi-cloud. But super cloud is where you are actually trying to look at this holistically. Whether it is on-prem, whether it is public, whether it's at the edge, at a store or at the branch, you are looking at this as one unit. And that's where we see the term super cloud as more applicable. Because what are the qualities you require if you're in a super cloud, right? You need choice of infrastructure, but at the same time you need a single pane, a single platform, to build your innovations on, regardless of which cloud you're doing it on, right? So I think super cloud is actually a more tightly integrated, orchestrated management philosophy, we think. >>So let's get into some of the super cloud type trends that we've been reporting on. Again, the purpose of this event is, as a pilot, to get the conversations flowing with the influencers like yourselves who are running companies and building products, and the builders. Amazon and Azure are doing extremely well; Google's coming up in third in public cloud. We see the use cases, on-premises use cases. Kubernetes has been an interesting phenomenon: it came from the developer side a little bit, but a lot of ops people love Kubernetes; it's really more of an ops thing. You mentioned OpenStack earlier; Kubernetes kind of came out of that OpenStack era, we needed an orchestration layer. And then containers had a good shot with Docker, they re-pivoted the company, and now they're all in on open source. So you've got containers booming and Kubernetes as a new layer there. What's the take on that? What does that really mean? Is that a new de facto enabler? >>It is here, it's here for sure. Every enterprise is somewhere on the journey, and most companies, 70-plus percent of them, have one, two, three container-based, Kubernetes-based applications now being rolled out. So it's very much here; it is in production at scale by many customers. The beauty of it is, yes, it's open source, but the biggest gating factor is the skill set. And that's where we have a phenomenal engineering team, right? So it's one thing to buy a tool and >>Just to be clear, you're a managed service for Kubernetes? >>We provide a software platform for cloud acceleration as a service, and it can run anywhere. It can run in public or private clouds. We have customers who do it in truly multi-cloud environments.
It runs on the edge, it runs in stores; there are thousands of stores for a retailer. So we provide that, and also, for specific segments where data sovereignty and data residency are key regulatory reasons, we offer it on-prem as an air-gapped version. >>Can you give an example of how you guys are deploying your platform to enable a super cloud experience for your customers? >>Right. So I'll give you two different examples. One is a very large public networking company. They have hundreds of products and hundreds of R&D teams that are building different products. And if you look at a few years back, each one was doing it on a different platform, but they really needed to bring agility, and they have worked with us now for over three years, where we are their build-test-dev-prod platform that all their products are built on, right? And it has dramatically increased their agility to release new products. Number two, it's actually a lights-out operation. In fact, the customer says we're like the Maytag service person, because we provide it as a service and it barely takes one or two people to maintain it for them. >>So it's kind of an SRE vibe. One person managing a >>Large platform; 4,000 engineers building infrastructure >>On their tools, whatever >>They want. They're using whatever app development tools they use, but they use our platform. >>And what benefits are they seeing? Are they seeing speed? >>Speed, definitely, and uniformity, because now they're able to build so that their customers who are using product A and product B see a similar set of tools being used. >>So a big problem that's coming out of this super cloud event, and we've heard it all here: ops and security teams, because they're kind of two parts of one thing, but ops and security specifically need to catch up. Speed-wise, are you delivering that value to ops and security? >>Right. So we work with ops and security teams and infrastructure teams, and we layer on top of that. We have like a platform team. If you think about it, depending on where you have data centers and where you have infrastructure, you have multiple teams, okay, but you need a unified platform. >>Who's your buyer? >>Our buyer is usually the product divisions of companies that are looking at it, or the CTO would be a buyer for us; functionally, the CIO definitely. So it's somewhere in the DevOps-to-infrastructure space. But the ideal one, and we are beginning to see it now, is that many large corporations are really looking at it as a platform and saying: we have a platform group on which any app can be developed and it runs on any infrastructure. So the platform engineering teams. >>So there are really two sides to that coin: you've got the dev side, and then >>The infrastructure >>Side. Okay. >>Another customer example I'd give you is kind of the edge at the store. So they have thousands of stores. Retail, a food retailer, right? They have thousands of stores around the globe, 50,000, 60,000. And they really want to enhance the customer experience that happens when you either order the product, or go into the store and pick up your product, or buy or browse or sit there. They have applications that were written in the nineties, and then they have very modern AI/ML applications today.
They want something that does not require sending an IT person to install a rack in the store, and they can't move everything to the cloud because the store operations have to be local. The menu changes based on location; it's classic edge. >>It's >>Classic edge, yeah. Right. They can't send IT people to go install racks of servers, and they can't send software people to go install the software, and any change you want to push through requires, you know, a truck roll. So they've been working with us, where all they do is ship, depending on the size of the store, one or two or three little servers with instructions. >>You say little servers; like how big? Like a box, a small little >>Box, right. And all the person in the store has to do is what you and I do at home when we get a router: connect the power, connect the internet, and turn the switch on. From there we pick it up. We provide the operating system, everything, and then the applications are put on it. And that dramatically improves the velocity for them. They manage thousands >>Of them. True plug and >>Play. Two: plug and play, thousands of stores. They manage it centrally; we do it for them, right? So that's another example, on the edge. Then we have some customers who have both a large private presence and one of the public clouds, okay, but they want to have the same platform layer of orchestration and management that they can use regardless of the >>Location. So you guys have got some success. Congratulations, you've got some traction there, it's awesome. The question I want to ask you, and it's come up, is: what is truly cloud native? Because there's lift-and-shift to the cloud, >>That's not cloud >>Native. Then there's cloud native. Cloud native seems to be the driver for the super cloud. How do you talk to customers? How do you explain, when someone asks, what's cloud native and what isn't? >>Right. Look, I think first of all, the best place to look at the definition, and the attributes and characteristics of what is truly cloud native, is the CNCF, the Cloud Native Computing Foundation. And I think it's very well documented there, where, >>KubeCon, of course; Detroit's >>Coming up. So it's already there, right? So we follow that very closely. I think just lifting and shifting your 20-year-old application onto a data center somewhere is not cloud native. You can't just put it in the cloud; you have to rewrite and redevelop your application and business logic using modern tools, hopefully more open source. And I think that's what cloud native is, and we are seeing a lot of our customers on that journey. Now everybody wants to be cloud native, but it's not that easy, okay? Because, I think, first of all, skill set is very important, and uniformity of tools; there are so many tools out there you could spend all your time figuring out which tool to use. So I think the complexity is there, but the business benefits of agility and uniformity and customer experience are truly being realized. >>And I'll give you an example. I don't know how cloud native they are, right, and they're not a customer of ours, but you order pizzas, you do, right? If you just watch the pizza industry, how Domino's actually increased their share and mind share and wallet share, it was not because they were making better pizzas or not, I don't know anything about that, but the whole experience of how you order, how you watch what's happening, how it's delivered; they were the pioneer in it.
To me, those are the kinds of customer experiences that cloud native can provide. >>It's agility, and having that flow through the application changes what the expectations >>Are >>For the customer. >>The customer's expectations change, right? Once you get used to a better customer experience, you will not go back. >>That's got to wrap it up, but I wanna just get your perspective again. One of the benefits of chatting with you here, and having you as part of Super Cloud 22, is that you've seen many cycles and you have a lot of insights. I want to ask you, given your career, where you've been and what you've done, and now as the CEO of Platform9, how would you compare what's happening now with other inflection points in the industry? You've been an entrepreneur, you sold your company to Oracle, you've seen the big companies, you've seen the different waves. What's going on right now? Put this moment in time around super cloud into context. >>Sure. As you said, a lot of battle scars: being at an ASP, being in a real-time software company, being in large enterprise software houses and through their transformations. I've been on the app side, I did infrastructure, and then tried to build our own platforms. I've gone through all of this myself, with a lot of lessons learned along the way. I think this is an event which is happening now for companies to go through, to become cloud native and digitalize. If I were to look back for some parallels to the tsunami that's going on, a couple come to mind. One is something which was forced on us, like Y2K: everybody around the world had to have a plan, a strategy, and an execution for Y2K. I would say the next big thing was e-commerce. I think e-commerce has been pervasive right across all industries. >>And disruptive. >>And disruptive, extremely disruptive. If you did not adapt and accelerate your e-commerce initiative, it was an existential question. I think we are at that pivotal moment now, with companies trying to become digital and cloud native. That is what I see >>Happening there. I think the e-commerce parallel is interesting, and just to riff with you on that, it's disrupting and refactoring the business models. I think that is something that's coming out of this: it's not just completely changing the game, it's changing how you operate. >>How you think, and how you operate. See, if you think about the early days of e-commerce, just putting up a shopping cart made you an e-commerce company or an e-retailer, right? I think it's the same thing now: this is a fundamental shift in how you're thinking about your business. How are you gonna operate? How are you gonna service your customers? It requires that; just lift and shift is not gonna work. >>Bhaskar, thank you for coming on. Thanks for spending the time to come in and share with our community and being part of Super Cloud 22. We really appreciate it. We're gonna keep this open. We're gonna keep this conversation going even after the event, to open up and look at the structural changes happening now and continue to look at it in the open, in the community. And we're gonna keep this going for a long, long time as we get answers to the problems that customers are looking for with cloud computing. I'm John Furrier with Super Cloud 22 in theCUBE. Thanks for >>Watching. Thank you. Thank you, John. >>Hello. Welcome back.
This is the end of our program, our special presentation with Platform9 on cloud native at scale, enabling the super cloud. We're continuing the theme here. You heard the interviews: super cloud and its challenges, new opportunities around solutions like Platform9 and others with Arlon. This is really about the edge, situations on the internet, and managing the edge across multiple regions while avoiding vendor lock-in. This is what this new super cloud is all about. We heard the business consequences, and the wide-ranging conversations around what it means for open source, with the complexity problem being solved. I hope you enjoyed this program. There are a lot of moving pieces and things to configure with a cloud native install, and Super Cloud, and of course Platform9, are all about making that easier for you. Thank you for watching.
Platform9, Cloud Native at Scale
>>Everyone, welcome to theCUBE here in Palo Alto, California, for a special presentation on cloud native at scale, enabling super cloud modern applications with Platform9. I'm John Furrier, your host of theCUBE. We've got a great lineup of three interviews we're streaming today. Madhura Maskasky, who's the co-founder and VP of Product of Platform9, is gonna go into detail around Arlon, the open source project, and also the value of what this means for infrastructure as code and for cloud native at scale. Bich Le, the chief architect of Platform9 and a Cube alumni going back to the OpenStack days, is gonna go into why Arlon, the infrastructure-as-code implications, what it means for customers, and the implications in the open source community and where that value is. Really great wide-ranging conversation there. And of course, Bhaskar Gorti, the CEO of Platform9, is gonna talk with me about his views on super cloud and why Platform9 has scalable solutions to bring cloud native at scale. So enjoy the program, see you soon. Hello and welcome to theCUBE here in Palo Alto, California, for a special program on cloud native at scale, enabling next generation cloud, or super cloud, for modern application cloud native developers. I'm John Furrier, host of theCUBE. Pleasure to have here with me Madhura Maskasky, co-founder and VP of Product at Platform9. Thanks for coming in today for this cloud native at scale conversation. >>Thank you for having >>Me. So cloud native at scale: something we're talking about because we're seeing the next level of mainstream success of containers, Kubernetes, and cloud native development, basically DevOps in the CI/CD pipeline. It's changing the landscape of infrastructure as code, it's accelerating the value proposition, and the super cloud, as we call it, has been getting a lot of traction, because this next generation cloud is looking a lot different, but kind of the same as the first generation. What's your view on super cloud as it fits to cloud native as it scales up? >>Yeah, you know, I think what's interesting, and I think the reason why super cloud is a really good and fitting term for this, and I know my CEO was chatting with you as well and he was mentioning this too, is that there needs to be a different term than just multi-cloud or cloud. And the reason is that as cloud native and cloud deployments have scaled, I think we've reached a point now where, instead of the traditional data-center-style model, where you have a few large distributions of infrastructure and workload at a few locations, the model has kind of flipped around, right? You have a large number of micro-sites. These micro-sites could be your public cloud deployment, your private on-prem infrastructure deployments, or your edge environments, right? And every single enterprise, every single industry is moving in that direction. And so you've gotta wrap that with terminology that indicates the scale and complexity of it. And so I think super cloud is an appropriate term for >>That. So you brought up a couple things I want to dig into. You mentioned edge nodes. We're seeing not only edge nodes being the next kind of area of innovation, mainly because it's just popping up everywhere, and that's just the beginning; we wouldn't even know what's around the corner. You've got buildings, you've got IoT and OT kind of coming together, but you've also got this idea of regions and global infrastructure as a big part of it.
I just saw some news around cloud flare shutting down a site here, there's policies being made at scale. These new challenges there. Can you share because you can have edge. So hybrid cloud is a winning formula. Everybody knows that it's a steady state. Yeah. But across multiple clouds brings in this new un engineered area, yet it hasn't been done yet. Spanning clouds. People say they're doing it, but you start to see the toe in the water, it's happening, it's gonna happen. It's only gonna get accelerated with the edge and beyond globally. So I have to ask you, what is the technical challenges in doing this? Because it's something business consequences as well, but there are technical challenge. Can you share your view on what the technical challenges are for the super cloud across multiple edges and >>Regions? Yeah, absolutely. So I think, you know, in in the context of this, the, this, this term of super cloud, I think it's sometimes easier to visualize things in terms of two access, right? I think on one end you can think of the scale in terms of just pure number of nodes that you have, deploy number of clusters in the Kubernetes space. And then on the other access you would have your distribution factor, right? Which is, do you have these tens of thousands of nodes in one site or do you have them distributed across tens of thousands of sites with one node at each site? Right? And if you have just one flavor of this, there is enough complexity, but potentially manageable. But when you are expanding on both these access, you really get to a point where that skill really needs some well thought out, well-structured solutions to address it, right? A combination of homegrown tooling along with your, you know, favorite distribution of Kubernetes is not a strategy that can help you in this environment. It may help you when you have one of this or when you, when you scale, is not at the level. >>Can you scope the complexity? Because I mean, I hear a lot of moving parts going on there, the technology's also getting better. We we're seeing cloud native become successful. There's a lot to configure, there's a lot to install. Can you scope the scale of the problem? Because we're talking about at scale Yep. Challenges here. >>Yeah, absolutely. And I think, you know, I I like to call it, you know, the, the, the problem that the scale creates, you know, there's various problems, but I think one, one problem, one way to think about it is, is, you know, it works on my cluster problem, right? So, you know, I come from engineering background and there's a, you know, there's a famous saying between engineers and QA and the support folks, right? Which is, it works on my laptop, which is I tested this change, everything was fantastic, it worked flawlessly on my machine, on production, It's not working. The exact same problem now happens and these distributed environments, but at massive scale, right? Which is that, you know, developers test their applications, et cetera within the sanctity of their sandbox environments. But once you expose that change in the wild world of your production deployment, right? >>And the production deployment could be going at the radio cell tower at the edge location where a cluster is running there, or it could be sending, you know, these applications and having them run at my customer's site where they might not have configured that cluster exactly the same way as I configured it, or they configured the cluster, right? 
But maybe they didn't deploy the security policies or they didn't deploy the other infrastructure plugins that my app relies on all of these various factors at their own layer of complexity. And there really isn't a simple way to solve that today. And that is just, you know, one example of an issue that happens. I think another, you know, whole new ball game of issues come in the context of security, right? Because when you are deploying applications at scale in a distributed manner, you gotta make sure someone's job is on the line to ensure that the right security policies are enforced regardless of that scale factor. So I think that's another example of problems that occur. >>Okay. So I have to ask about scale because there are a lot of multiple steps involved when you see the success cloud native, you know, you see some, you know, some experimentation. They set up a cluster, say it's containers and Kubernetes, and then you say, Okay, we got this, we can configure it. And then they do it again and again, they call it day two. Some people call it day one, day two operation, whatever you call it. Once you get past the first initial thing, then you gotta scale it. Then you're seeing security breaches, you're seeing configuration errors. This seems to be where the hotpot is. And when companies transition from, I got this to, Oh no, it's harder than I thought at scale. Can you share your reaction to that and how you see this playing out? >>Yeah, so, you know, I think it's interesting. There's multiple problems that occur when, you know, the, the two factors of scale is we talked about start expanding. I think one of them is what I like to call the, you know, it, it works fine on my cluster problem, which is back in, when I was a developer, we used to call this, it works on my laptop problem, which is, you know, you have your perfectly written code that is operating just fine on your machine, your sandbox environment. But the moment it runs production, it comes back with p zeros and POS from support teams, et cetera. And those issues can be really difficult to try us, right? And so in the Kubernetes environment, this problem kind of multi folds, it goes, you know, escalates to a higher degree because yeah, you have your sandbox developer environments, they have their clusters and things work perfectly fine in those clusters because these clusters are typically handcrafted or a combination of some scripting and handcrafting. >>And so as you give that change to then run at your production edge location, like say you radio sell tower site, or you hand it over to a customer to run it on their cluster, they might not have not have configured that cluster exactly how you did it, or they might not have configured some of the infrastructure plugins. And so the things don't work. And when things don't work, triaging them becomes like ishly hard, right? It's just one of the examples of the problem. Another whole bucket of issues is security, which is, is you have these distributed clusters at scale, you gotta ensure someone's job is on the line to make sure that these security policies are configured properly. >>So this is a huge problem. I love that comment. That's not not happening on my system. It's the classic, you know, debugging mentality. Yeah. But at scale it's hard to do that with error prone. I can see that being a problem. And you guys have a solution you're launching, Can you share what our lawn is, this new product, What is it all about? Talk about this new introduction. >>Yeah, absolutely. 
I'm very, very excited. You know, it's one of the projects that we've been working on for some time now, because we are very passionate about this problem of solving things at scale, on-prem, in the cloud, or in edge environments. And what Arlon is: it's an open source project, and it is a Kubernetes-native tool for complete end-to-end management of not just your clusters, but all of the infrastructure that goes within and alongside those clusters: security policies, your middleware plugins, and finally your applications. So what Arlon lets you do, in a nutshell, is handle the configuration and management of all of these components, in a declarative way, at scale. >>So what's the elevator pitch, simply put, for what this solves in terms of the chaos you guys are reining in? What's the bumper sticker? >>Yeah, what would it do? There's a perfect analogy that I love to reference in this context, which is: think of an assembly line in a traditional, let's say, auto manufacturing factory, and the level of efficiency at scale that that assembly line brings. And if you look at the logo we've designed, it's this funny little robot, and it's because when we think of Arlon, we think of these enterprise large-scale environments sprawling at scale and creating chaos, because there isn't necessarily a well-thought-through, well-structured solution that's similar to an assembly line, which takes each component, addresses it, processes it in a standardized way, then hands it to the next stage, where again it gets processed in a standardized way. And that's what Arlon really does. That's the elevator pitch. If you have problems of scale in managing your infrastructure that is distributed, Arlon brings that assembly-line level of efficiency and consistency. >>So keeping it smooth, the assembly line, things are flowing: CI/CD pipelining. Exactly. So that's what you're trying to simplify, that ops piece for the developer. I mean, it's not really ops, it's their ops, it's coding. >>Yeah. Not just the developer, the operations folks as well, right? Because developers are responsible for one picture of that layer, which is my apps, and maybe the middleware that they interface with, but then they hand it over to someone else who's then responsible to ensure that these apps are secured properly, that logs are being collected properly, that monitoring and observability are integrated. And so it solves problems for both those teams. >>Yeah. It's DevOps. So the DevOps is the cloud native developer. The ops teams have to kind of set policies. Is that where the declarative piece comes in? Is that why that's important? >>Absolutely. Yeah. And, you know, Kubernetes really introduced, or elevated, this declarative management, right? Because your specifications of the components that go into Kubernetes are defined in a declarative way, and Kubernetes always keeps the state consistent with your defined state. But when you go outside of that world of a single cluster, and when you actually talk about defining the clusters, or defining everything that's around them, there really isn't a solution that does that today. 
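To make that declarative idea concrete, here is a minimal sketch of what a single profile covering a cluster, its add-ons, its security policies, and its applications could look like when expressed as plain data. This is a hypothetical, illustrative schema written as a Python dictionary; it is not Arlon's actual API, and every field name here is an assumption made for the example.

```python
import json

# Hypothetical declarative "profile": one document that captures a cluster,
# the infrastructure add-ons and security policies around it, and the apps
# on top. Illustrative only; not Arlon's real schema.
profile = {
    "name": "edge-store-profile",
    "cluster": {
        "provider": "cluster-api",       # e.g. provisioned through Cluster API
        "kubernetesVersion": "1.24",
        "nodeCount": 3,
    },
    "addons": [
        {"name": "ingress-nginx", "version": "4.2.0"},
        {"name": "prometheus-stack", "version": "39.0.0"},
    ],
    "securityPolicies": [
        {"name": "deny-privileged-pods"},
        {"name": "restrict-egress"},
    ],
    "applications": [
        {"name": "store-frontend", "gitRepo": "https://example.com/apps.git", "path": "frontend"},
        {"name": "inventory-api", "gitRepo": "https://example.com/apps.git", "path": "inventory"},
    ],
}

# The same profile can be "stamped out" for many sites; only the name varies,
# everything else stays identical and therefore consistent across the fleet.
sites = ["store-001", "store-002", "store-003"]
rendered = [dict(profile, name=f"{profile['name']}-{site}") for site in sites]
print(json.dumps(rendered[0], indent=2))
```

The point is simply that the whole stack is described as data, so a tool can apply it uniformly to hundreds of clusters instead of a person re-running scripts per site.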
And so Arlon addresses that problem at the heart of it, and it does that using existing, well-known open source solutions. >>And I do wanna get into the benefits, what's in it for me as the customer or developer, but I want to finish this out real quick and get your thoughts. You mentioned open source. Why open source? What's the current state of the product? You run the product group over at Platform9. Is it open source? And you guys have a product that's commercial? Can you explain the open source dynamic? First of all, why open source? And what is the consumption? I mean, open source is great, people want open source, they can download it, look at the code, but maybe they wanna buy the commercial. So I'm assuming you have that thought through. Can you share the open source and commercial relationship? >>Yeah, starting with why open source: one of the things that's absolutely critical to us as a company is that we take mainstream open source technologies and components and then make them available to our customers at scale, through either a SaaS model or an on-prem model, right? So as a company, a startup, that benefits in a massive way from this open source economy, it's only right, I think, in my mind, that we do our part of the duty and contribute back to the community that feeds us. And so we have always held that strongly as one of our principles. And we have created and built independent products, starting all the way with Fission, which was a serverless product that we had built, to various other examples that I can give. But that's one of the main reasons why open source. And also open source because we want the community to really engage with us firsthand on this problem, which is very difficult to achieve if your product is behind a wall, behind a black box. >>Well, and that's what the developers want too. I mean, what we're seeing and reporting with Super Cloud is that the new model of consumption is: I wanna look at the code and see what's in there. That's right. And then also, if I want to use it, I'll do it. Great. That's open source, that's the value. But then at the end of the day, if I wanna move fast, that's when people buy in. So it's a new kind of freemium, I guess, business model. But that's the benefit of open source. This is why standards and open source are growing so fast. You have that confluence of a way for people to try before they buy, but also to kind of date the application, if you will. You know, Adrian Karo uses the dating metaphor: hey, I wanna check it out first before I get married, right? And that's what open source is. So this is the new way, this is how people are selling. This is not just open source, this is how companies are selling. >>Absolutely. Yeah. You know, two things. I think one is just that this cloud native space is so vast that if you're building a closed-source solution, sometimes there's also a risk that it may not apply to every single enterprise's use cases. And so having it open source gives them an opportunity to extend it, expand it, to make it proper to their use case if they choose to do so, right? 
But at the same time, what's also critical to us is that we are able to provide a supported version of it with an SLA that's backed by us, and a SaaS-hosted version of it as well, for those customers who choose to go that route once they have used the open source version and loved it and want to take it to scale and into production, and need a partner to collaborate with who can support them for that production environment. >>I have to ask you now, let's get into what's in it for the customer. I'm a customer: why should I be enthused about Arlon? What's in it for me? Because if I'm not enthused about it, I'm not gonna be confident, and it's gonna be hard for me to get behind this. Can you share your enthusiastic view of why I, as a customer, should be enthused about Arlon? >>Yeah, absolutely. There are multiple enterprises that we talk to, many of them our customers, where this is a very typical story that you hear: we have a Kubernetes distribution, it could be on-premise, it could be a public cloud's native Kubernetes, and then we have our CI/CD pipelines that are automating the deployment of applications, et cetera. And then there's this gray zone. And the gray zone is: well before your CI/CD pipelines can deploy the apps, somebody needs to do all of the groundwork of defining those clusters and properly configuring them. And these things start by being done by hand. And then as you scale, what enterprises would typically do today is have their homegrown DIY solutions for this. >>I mean, the number of folks that I talk to that have built Terraform automation, and then some of those key developers leave... So it's a typical open source, or typical DIY, challenge. And the reason that they're writing it themselves is not because they want to. I mean, of course technology is always interesting to everybody, but it's because they can't find a solution out there that perfectly fits the problem. And so that's the pitch. I think Spico would be delighted. The folks that we've spoken with have been absolutely excited and have shared that this is a major challenge we have today, because we have a few hundred clusters on Amazon and we wanna scale them to a few thousand, but we don't think we are ready to do that. And this will give us stability. >>Yeah, I think people are scared. I won't say scared, that's a bad word; maybe I should say that they feel nervous, because at scale small mistakes can become large mistakes. This is something that is concerning to enterprises. And I think this is gonna come up at KubeCon this year, where enterprises are gonna say, okay, I need to see SLAs, I wanna see a track record, I wanna see other companies that have used it. How would you answer that question or challenge: hey, I love this, but are there any guarantees? What's the SLA? I'm an enterprise, I love the open source and playing fast and loose, but I need hardened code. >>Yeah, absolutely. So, two parts to that, right? One is that Arlon leverages existing open source components, products that are extremely popular. Two specifically. One is that Arlon uses Argo CD, which is probably one of the highest rated and most used CD open source tools that's out there, right? 
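Since Argo CD comes up here, a quick illustration may help: its core object is an Application, a declarative record of which Git repo and path should be synced into which cluster and namespace. The sketch below builds one as a Python dictionary and prints it as JSON (Kubernetes and Argo CD accept JSON as well as YAML). The repository URL and names are placeholders, and the field layout follows the commonly documented Application spec, so verify it against the Argo CD docs for the version you run.

```python
import json

# A minimal Argo CD Application, expressed as a dict and emitted as JSON.
# Argo CD keeps the destination cluster in sync with the Git source below.
# Repo URL, paths, and names are placeholder values for this example.
application = {
    "apiVersion": "argoproj.io/v1alpha1",
    "kind": "Application",
    "metadata": {"name": "store-frontend", "namespace": "argocd"},
    "spec": {
        "project": "default",
        "source": {
            "repoURL": "https://example.com/apps.git",
            "path": "frontend",
            "targetRevision": "HEAD",
        },
        "destination": {
            "server": "https://kubernetes.default.svc",
            "namespace": "frontend",
        },
        # Automated sync: prune deleted resources and undo manual drift.
        "syncPolicy": {"automated": {"prune": True, "selfHeal": True}},
    },
}

print(json.dumps(application, indent=2))
```

A layer that sits above Argo CD, the way Arlon is described here, can generate many such Applications, plus the clusters they target, from one higher-level profile rather than having teams hand-write each one.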
It's created by folks that are now part of the Intuit team, a really brilliant team, and it's used at scale across enterprises. That's one. Second, Arlon also makes use of Cluster API, CAPI, which is a Kubernetes sub-project for lifecycle management of clusters. So there are enough community users around these two open source projects that they will find Arlon to be right up their alley, because they're already comfortable and familiar with Argo CD. Arlon just extends the scope of what Argo CD can do. And so that's one. And then the second part goes back to the point about comfort, and that's where Platform9 has a role to play, which is: when you are ready to deploy Arlon at scale, because you've been playing with it in your dev and test environments and you're happy with what you get from it, then Platform9 will stand behind it and provide that SLA. >>And what's been the reaction from customers you've talked to, Platform9 customers that are familiar with Argo and then Arlon? What's been some of the feedback? >>Yeah, I think the feedback's been fantastic. I can give you examples of customers where, initially, when you're telling them about your entire portfolio of solutions, it might not strike a chord right away. But then we start talking about Arlon, and we talk about the fact that it uses Argo CD, and they start opening up. They say, we have standardized on Argo and we have built these components homegrown, we would be very interested. Can we co-develop? Does it support these use cases? So we've had that kind of validation. We had validation all the way at the beginning of Arlon, before we even wrote a single line of code, saying this is something we plan on doing, and the customer said, if you had it today, I would've purchased it. So it's been really great validation. >>All right. So the next question is, what is the solution for the customer? If I asked you: look, I'm so busy, my team's overworked, I've got a skills gap, I don't need another project, I'm so tied up right now and I'm just chasing my tail. How does Platform9 help me? >>Yeah, absolutely. So one of the core tenets of Platform9 has always been that we try to bring that public-cloud-like simplicity by hosting this, and a lot of similar tools, in a SaaS-hosted manner for our customers, right? Our goal behind doing that is trying to take away all of that complexity from the customer's hands and offload it to our hands, giving them that full white-glove treatment, as we call it. And so from a customer's perspective, something like Arlon will integrate with what they have, so they don't have to rip and replace anything. In fact, in the next versions it may even discover the clusters that you have today and give you an inventory. >>So customers that have clusters that are growing, that's a sign to call you guys, correct? >>Absolutely. Either they have massive large clusters that they wanna split into smaller clusters, but they're not comfortable doing that today, or they've done that already on, say, public cloud or otherwise, and now they have management challenges. >>Especially operationalizing the clusters, whether they want to kind of reset everything and move things around and reconfigure, yeah, and/or scale out. >>That's right. Exactly. 
>>And you provide that layer of policy. >>Absolutely. >>Yes. That's the key value here. >>That's right. >>So policy-based configuration for cluster scale-up? >>Profile- and policy-based declarative configuration and lifecycle management for clusters. >>If I asked you how this enables Super Cloud, what would you say to that? >>I think this is one of the key ingredients of super cloud, right? If you think about a super cloud environment, there are at least a few key ingredients that come to my mind that are really critical, like life-saving ingredients at that scale. One is having a really good strategy for managing that scale, going back to the assembly line, in a very consistent, predictable way, which Arlon solves. Then you need to complement that with the right kind of observability and monitoring tools at scale, because ultimately issues are gonna happen and you're gonna have to figure out how to solve them fast. And Arlon, by the way, also helps in that direction, but you also need observability tools. And then, especially if you're running it on the public cloud, you need some cost management tools. In my mind, these three things are the most necessary ingredients to make Super Cloud successful, and Arlon falls into one of them. >>Okay, so now the next level is: okay, that makes sense. That's under the covers, so to speak, under the hood. How does that impact the app developers and the cloud native modern application workflows? Because the impact, it seems to me, is that the apps are gonna be impacted. Are they gonna be faster, stronger? I mean, what's the impact on the apps if you do all those things, as you mentioned? >>Yeah, the impact is that your apps are more likely to operate in production the way you expect them to, because the right checks and balances have gone through, and any discrepancies have been identified prior to those apps, prior to your customer, running into them, right? Because developers run into this challenge where there's a split responsibility, right? I'm responsible for my code, I'm responsible for some of these other plugins, but I don't own the stack end to end. I have to rely on my ops counterpart to do their part. And so this really gives them the right tooling for that. >>So this is actually a great, relevant point. You know, as cloud becomes more scalable, you're starting to see this fragmentation: gone are the days of the full stack developer, it's the more specialized role. But this is a key point, and I have to ask you, because if this Arlon solution takes hold, as you say, and the apps are gonna do what they're designed to do, the question is: what does the current pain look like when the apps are breaking? What are the signals to the customer that they should be calling you guys up and implementing Arlon, Argo, and all the other goodness to automate? What are some of the signals? Is it downtime? Is it failed apps? Is it latency? What are some of the things that would be indications that things are effed up a little bit? >>Yeah. More frequent downtimes, downtimes that take longer to triage, so your mean time to resolution, et cetera, is escalating or growing larger, right? Like we have environments of customers where they have a number of folks in the field who have to take these apps and run them at customer sites. 
And that's one of our partners, and they're extremely interested in this because of the rate of failures they're encountering in the field when they're running these apps on site, because the field is automating the clusters that run on those sites using their own scripts. So these are the kinds of challenges, and those are the pain points: if you're looking to reduce your mean time to resolution, if you're looking to reduce the number of failures that occur on your production site, that's one. And second, if you are looking to manage these at-scale environments with a relatively small, focused, nimble ops team, which has an immediate impact on your... >>Signals. This is the cloud native at scale situation, the innovation going on. Final thought is your reaction to the idea that if the world goes digital, which it is, and the confluence of physical and digital coming together, and cloud continues to do its thing, the company becomes the application. Not where it used to be, supporting the business, you know, the back office and the terminals and some PCs and handhelds. Now if technology's running the business, the business is the business. Yeah. The company's the application. So it can't be down. So there's a lot of pressure on CSOs and CIOs now, and boards, saying: how is technology driving the top-line revenue? That's the number one conversation. Do you see that same thing? >>Yeah. It's interesting. I think there are multiple pressures at the CXO, CIO level, right? One is that there needs to be that visibility and clarity and guarantee, almost, that the technology that's gonna drive your top line is gonna drive it in a consistent, reliable, predictable manner. And then second, there is the constant pressure to do that while always lowering your cost of doing it, right? Especially when you're talking about, let's say, retailers or those kinds of large-scale vendors: they many times make money by lowering the amount that they spend on providing those goods to their end customers. So I think both those factors come into play, and the solution to all of them is usually a very structured strategy around automation. >>Final question. What does cloud native at scale look like to you? If all the things happen the way we want 'em to happen, the magic wand, the magic dust, what does it look like? >>What that looks like to me is a CIO sipping coffee at his desk; production is running absolutely smooth, and he's running that with a nimble team of, at the most, a handful of folks who are just looking after things. >>So just taking care of things, and the CIO doesn't even exist, the CSO is at the beach. >>Yeah. >>Thank you for coming on and sharing cloud native at scale here on theCUBE. Thank you for your time. >>Fantastic. Thanks for having me. >>Okay. I'm John Furrier here for this special program presentation, special programming: cloud native at scale, enabling super cloud modern applications with Platform9. Thanks for watching. Welcome back everyone to the special presentation of cloud native at scale, theCUBE and Platform9 special presentation, going in and digging into the next generation super cloud, infrastructure as code, and the future of application development. We're here with Bich Le, who's the chief architect and co-founder of Platform9. Great to see you, Cube alumni. 
We met at an OpenStack event about eight years ago, or even earlier, when OpenStack was going. Great to see you, and congratulations on the success of Platform9. >>Thank you very much. >>Yeah. You guys have been at this for a while, and this is really the year we're seeing the crossover of Kubernetes because of what happened with containers. Everyone now has realized it, and you've seen what Docker's doing with the new Docker, the open source Docker, now just a success of containerization, right? And now the Kubernetes layer that we've been working on for years is coming, bearing fruit. This is huge. >>Exactly. Yes. >>And so as infrastructure as code comes in, we talked to Vascar about Super Cloud, and earlier about the new Arlon you guys just launched. Infrastructure as code is going to another level, and it's always been DevOps and infrastructure as code, that's been the ethos from day one: developers just code. Then you saw the rise of serverless, and you see now multi-cloud on the horizon. Connect the dots for us: what is the state of infrastructure as code today? >>So I'm glad you mentioned it. Everybody, or most people, know about infrastructure as code. But with Kubernetes, I think that concept has evolved even further. And these days, it's infrastructure as configuration, right? Which is an evolution of infrastructure as code. So instead of telling the system how you want your infrastructure by telling it, you know, do step A, B, C, and D, with Kubernetes you can describe your desired state declaratively using things called manifests, resources. And then the system kind of magically figures it out and tries to converge the state towards the one that you specify. So I think it's an even better version of infrastructure as code. >>Yeah, yeah. And that really means the developer is just accessing resources. Okay, not declaring: okay, give me some compute, stand me up some, turn the lights on, turn 'em off, turn 'em on. That's kind of where we see this going. And I like the configuration piece. Some people say composability. I mean, now with open source so popular, you don't have to write a lot of code; it's code being developed. And so it's integration, it's configuration. These are areas where we're starting to see computer science principles around automation and machine learning assisting open source, because you've got a lot of code that's being reused and software supply chain issues. So infrastructure as code has to factor in these new dynamics. Can you share your opinion on these new dynamics: as open source grows, the glue layers, the configurations, the integration, what are the core issues? >>I think one of the major core issues is that with all that power comes complexity, right? So despite its expressive power, systems like Kubernetes and declarative APIs let you express a lot of complicated and complex stacks, right? But you're dealing with hundreds if not thousands of these YAML files or resources. 
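To ground the "infrastructure as configuration" point, here is a small illustrative sketch. The desired state is plain data (this one mirrors a standard Kubernetes apps/v1 Deployment manifest), and the trivial comparison against an observed state is the kind of check a controller or GitOps tool runs continuously to decide what to converge. The names, the image, and the faked "live" state are invented for the example.

```python
import json

# Desired state, declared as data. Mirrors a standard Kubernetes apps/v1
# Deployment manifest; the names and image are example values.
desired = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "store-frontend", "namespace": "frontend"},
    "spec": {
        "replicas": 3,
        "selector": {"matchLabels": {"app": "store-frontend"}},
        "template": {
            "metadata": {"labels": {"app": "store-frontend"}},
            "spec": {
                "containers": [
                    {"name": "web", "image": "example.com/store-frontend:1.4.2"}
                ]
            },
        },
    },
}

# Pretend this is what we observed in the cluster: someone hand-edited replicas.
live = json.loads(json.dumps(desired))
live["spec"]["replicas"] = 1

def drift(want, have, path=""):
    """Yield (path, desired_value, actual_value) for every leaf that differs."""
    if isinstance(want, dict) and isinstance(have, dict):
        for key in want:
            yield from drift(want[key], have.get(key), f"{path}/{key}")
    elif want != have:
        yield (path, want, have)

for where, want, have in drift(desired, live):
    print(f"drift at {where}: desired={want!r} actual={have!r}")
# A controller's job is then to patch the live object back toward the desired
# state, over and over, which is what "converging" means in practice.
```

That continuous compare-and-correct loop is what makes the declarative approach more robust than ordered scripts: there is no step C that can fail halfway and leave the system in an unknown state.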
And so I think the emergence of systems and layers to help you manage that complexity is becoming a key challenge, and opportunity, in this space. >>I wrote a LinkedIn post today with comments about, you know, hey, the enterprise is the new breed. The trend of SaaS companies moving consumer-like thinking into the enterprise has been happening for a long time, but now more than ever you're seeing it. The old way used to be: solve complexity with more complexity, and then lock the customer in. Now with open source it's speed, simplification, and integration, right? These are the new power dynamics for developers. Yeah. So as companies are starting to deploy and look at Kubernetes, what are the things that need to be in place? Because you have some, I won't say technical debt, but maybe some shortcuts, some scripts here, that make it look like infrastructure as code. People have done some things to simulate, or make, infrastructure as code happen. Yes. But to do it at scale, yes, is harder. What's your take on this? What's your view? >>It's hard because there's a proliferation of methods, tools, technologies. So for example, today it's very common for DevOps and platform engineering teams to have to deploy a large number of Kubernetes clusters, but then apply the applications and configurations on top of those clusters. And they're using a wide range of tools to do this, right? For example, maybe Ansible or Terraform or bash scripts to bring up the infrastructure and then the clusters, and then they may use a different set of tools, such as Argo CD or other tools, to apply configurations and applications on top of the clusters. So you have this sprawl of tools. You also have this sprawl of configurations and files, because the more objects you're dealing with, the more resources you have to manage. And there's a risk of drift, as people call it, where you think you have things under control, but some people from various teams will make changes here and there, and then before the end of the day systems break and you have no way of tracking them. So I think there's a real need to kind of unify, simplify, and try to solve these problems using a smaller, more unified set of tools and methodologies. And that's something that we try to do with this new project, Arlon. >>Yeah. So we're gonna get into Arlon in a second. I wanna get into the why of Arlon. You guys announced that at ArgoCon, which was put on here in Silicon Valley by Intuit; they had their own little day over there at their headquarters. But before we get there, Vascar, your CEO, came on and he talked about Super Cloud at our inaugural event. What's your definition of super cloud? If you had to kind of explain that to someone at a cocktail party, or someone in the industry, technical, how would you look at the super cloud trend that's emerging? It's become a thing. What would be your contribution to that definition or the narrative? >>Well, it's funny, because I actually heard the term for the first time today, speaking to you earlier today. But I think based on what you said, I already get kind of the gist and the main concepts. It seems like super cloud, the way I interpret it, is: clouds and infrastructure, programmable infrastructure, all of those things are becoming commodity in a way. 
And everyone's got their own flavor, but there's a real opportunity for people to solve real business problems by perhaps trying to abstract away all of those various implementations, and then building better abstractions that are perhaps business- or application-specific, to help companies and businesses solve real business problems. >>Yeah, that's a great definition. I remember, not to date myself, but back in the old days, IBM had a proprietary network operating system, and so did DEC for the minicomputer vendors: SNA and DECnet respectively. But TCP/IP came out of the OSI, the open systems interconnect, era, and remember, Ethernet beat Token Ring. So, not to get all nerdy for all the young kids out there, just look up Token Ring, you've probably never heard of it, it was IBM's connection; at layer two, the Ethernet is kind of what Amazon is today, right? So TCP/IP could be the Kubernetes and the container abstraction that made the industry completely change at that point in history. So at every major inflection point where there's been serious industry change and wealth creation and business value, there's been an abstraction. Yes. Somewhere. Yes. What's your reaction to that? >>I think there's a saying that's been heard many times in this industry, and I forget who originated it, but I think the saying goes: there's no problem that can't be solved with another layer of indirection, right? And we've seen this over and over and over again, where Amazon and its peers have inserted this layer that has simplified computing and infrastructure management. And I believe this trend is going to continue, right? The next set of problems is going to be solved with these insertions of additional abstraction layers. I think that's really gonna continue. >>It's interesting. I wrote another post today on LinkedIn called the Silicon Wars: AMD stock is down, Arm has been on the rise. We've been pointing out for many years now that Arm was gonna be huge, and it has become true. If you look at the success of the infrastructure-as-a-service layer across the clouds, Azure, AWS, Amazon's clearly way ahead of everybody. The stuff that they're doing with the silicon and the physics and the atoms, this is where the innovation is; they're going so deep and so strong at ISAs that the more they get out of that, the more performance they have. So if you're an app developer, wouldn't you want the best performance, and wouldn't you wanna have the best abstraction layer that gives you the most ability to do infrastructure as code, or infrastructure as configuration, for provisioning, for managing services? And you're seeing that today with service meshes, a lot of action going on in the service mesh area in this community, at KubeCon, which we'll be covering. So that brings up the whole what's next. You guys just announced Arlon at ArgoCon, which came out of Intuit. We've had Marianna Tessel at our Super Cloud event, she's a CTO, you know, they're all in on the cloud. So they contributed that project. Where did Arlon come from? What was the origination? What's the purpose? Why Arlon, why this announcement? >>Yeah, so the inception of the project was the result of us realizing that problem that we spoke about earlier, which is complexity, right? 
With all of these clouds, this infrastructure, all the variations around compute, storage, networks, and the proliferation of tools we talked about, the Ansibles and Terraforms and Kubernetes itself, which you can think of as another tool, right? We saw a need to solve that complexity problem, especially for people and users who use Kubernetes at scale. So when you have hundreds of clusters, thousands of applications, thousands of users spread out over many, many locations, there needs to be a system that helps simplify that management, right? So that means fewer tools, more expressive ways of describing the state that you want, and more consistency. And that's why we built Arlon, and we built it recognizing that many of these problems, or sub-problems, have already been solved. So Arlon doesn't try to reinvent the wheel; it instead rests on the shoulders of several giants, right? So for example, Kubernetes is one building block. GitOps and Argo CD are another one, which provides a very structured way of applying configuration. And then we have projects like Cluster API and Crossplane, which provide APIs for describing infrastructure. So Arlon takes all of those building blocks and builds a thin layer which gives users a very expressive way of defining configuration and desired state. So that's kind of the inception of it. >>And what's the benefit of that? What does that give the developer, the user, in this case? >>The developers, the platform engineering team members, the DevOps engineers: they get a way to provision not just infrastructure and clusters, but also applications and configurations. They get a system for provisioning, configuring, deploying, and doing lifecycle management in a much simpler way. Okay. Especially, as I said, if you're dealing with a large number of applications. >>So it's like an operating fabric, if you will, for them. Yes. Okay, so let's get into what that means above and below this abstraction, or thin layer, and the infrastructure. We talked a lot about what's going on below it. Above it are our workloads at the end of the day, and I talk to CXOs and IT folks who are now DevOps engineers. They care about the workloads, and they want the infrastructure as code to work. They wanna spend their time getting into the weeds, figuring out what happened when someone made a push and something broke. They need observability, and they need to know that it's working. That's right. And: here are my workloads, running effectively. So how do you guys look at the workload side of it? Because now you have multiple workloads on this fabric, right? >>So workloads: Kubernetes has defined kind of a standard way to describe workloads, and you can tell Kubernetes, I want to run this container this particular way. Or you can use other projects that are in the Kubernetes cloud native ecosystem, like Knative, where you can express your application at a higher level, right? But what's also happening is that in addition to the workloads, DevOps and platform engineering teams very often need to deploy the applications with the clusters themselves. Clusters are becoming this commodity. It's becoming this host for the application, and it kind of comes bundled with it; in many cases it is like an appliance, right? So DevOps teams have to provision clusters at a really incredible rate, and they need to tear them down. Clusters are becoming more... >>It's becoming like an EC2 instance: spin up a cluster. We've heard people use words like that. >>That's right. And before Arlon, you kind of had to do all of that using a different set of tools, as I explained. So with Arlon you can express everything together. You can say: I want a cluster with a health monitoring stack and a logging stack and this ingress controller, and I want these applications and these security policies. You can describe all of that using something we call the profile. And then you can stamp out your applications and your clusters and manage them in a very consistent way. >>So it essentially creates a standard mechanism. Exactly. Standardized, declarative kind of configurations. And it's like a playbook: just deploy it. Now, what is there between that and, say, a script? Like, I have scripts, I can just automate scripts. >>Yes, and this is where that declarative API and infrastructure as configuration comes in, right? Because scripts, yes, you can automate scripts, but the order in which they run matters, right? They can break, things can break in the middle, and sometimes you need to debug them. Whereas the declarative way is much more expressive and powerful. You just tell the system what you want, and then the system kind of figures it out. And there are these things called controllers which will, in the background, reconcile all the state to converge towards your desired state. It's a much more powerful, expressive, and reliable way of getting things done. >>So infrastructure as configuration is built kind of on, it's a superset of infrastructure as code, because it's >>An evolution. >>You need infrastructure as code, but then you configure it by just declaring it: go do that. That's right. Okay, so, alright, cloud native at scale: take me through your vision of what that means. Someone says, hey, what does cloud native at scale mean? What does success look like? How does it roll out in the future, over the next couple of years? I mean, people are now starting to figure out, okay, it's not as easy as it sounds. Kubernetes has value. We're gonna hear a lot of this this year at KubeCon: what does cloud native at scale mean? >>Yeah, there are different interpretations, but if you ask me, when people think of scale, they think of a large number of deployments, right? Geographies, many locations, supporting thousands or tens of thousands or millions of users. There's that aspect of scale. There's also an equally important aspect of scale, which is also something that we try to address with Arlon, and that is just complexity for the people operating this or configuring this, right? So in order to describe that desired state, and in order to perform things like maybe upgrades or updates on a very large scale, you want the humans behind that to be able to express and direct the system to do that in relatively simple terms, right? And so we want the tools and the abstractions and the mechanisms available to the user to be as powerful but as simple as possible. So I think there's gonna be a number, and there have been a number, of CNCF and cloud native projects that are trying to attack that complexity problem as well. And Arlon kind of falls into that category. >>Okay, so I'll put you on the spot: KubeCon is coming up, and we'll be shipping this segment series out before it. What do you expect to see this year? It's the big story this year. 
What's the most important thing happening? Is it in the open source community, and also within a lot of the people jockeying for leadership? I know there are a lot of projects, and still there's some white space in the overall systems map across the different areas, like runtime and observability. Where's the action? Where's the smoke? Where's the fire? Where's the peace? Where's the tension? >>Yeah, so I think one thing that has been happening over the past couple of KubeCons, and I expect to continue, is that the word on the street is Kubernetes is getting boring, right? Which is good, right? >>Boring means simple. >>Well... >>Maybe. >>Yeah. >>Invisible. >>No drama, right? So the rate of change of Kubernetes features and all that has slowed, but in a positive way. But there's still a general sentiment and feeling that there's just too much stuff. If you look at the stack necessary for hosting applications based on Kubernetes, there are still just too many moving parts, too many components, right? Too much complexity. I keep going back to the complexity problem. So I expect KubeCon, and all the vendors and the players and the startups and the people there, to continue to focus on that complexity problem and introduce further simplifications to the stack. >>Yeah. Bich, you've had a storied career: VMware, over a decade with them, 12 years, 14 years or something like that. Big number. Co-founder here at Platform9. You've been around for a while at this game, man. We talked about OpenStack, that project; we interviewed you at one of their events. So OpenStack was the beginning of that, this new revolution. I remember the early days: it wasn't supposed to be an alternative to Amazon, but it was a way to do more cloud, more cloud native. I think we had a Cloudarati thing going at that time; we would joke about the dream. It's happening now, now at Platform9. You guys have been doing this for a while. What are you most excited about as the chief architect? What did you guys double down on? What did you guys pivot from, or did you do any pivots? Did you extend out certain areas? Because you guys are in a good position right now, a lot of DNA in cloud native. What are you most excited about, and what does Platform9 bring to the table for customers and for people in the industry watching this? >>Yeah, so I think our mission really hasn't changed over the years, right? It's always been about taking complex open source software, because open source software is powerful. It solves new problems every year, and you have new things coming out all the time, right? OpenStack was an example, and then Kubernetes took the world by storm. But there's always that complexity of just configuring it, deploying it, running it, operating it. And our mission has always been that we will take all that complexity and just make it easy for users to consume, regardless of the technology, right? So the successor to Kubernetes, you know, I don't have a crystal ball, but you have some indications that people are coming up with new and simpler ways of running applications. There are many projects out there; who knows what's coming next year or the year after. But Platform9 will be there, and we will take the innovations from the community. 
We will contribute our own innovations and make all of those things very consumable to customers. >>Simpler, faster, cheaper. Exactly. Always a good business model, technically, to make that happen. Yes. Yeah, I think reining in the chaos is key, you know. Now we have visibility into the scale. Final question before we depart this segment: what is at scale? How many clusters do you see that would be a watermark for an at-scale conversation around an enterprise? Is it workloads we're looking at, or clusters? How would you describe that, when people try to squint through and evaluate what at scale is, what the at-scale threshold is? >>Yeah. The number of clusters doesn't tell the whole story, because clusters can be small in terms of the number of nodes, or they can be large. But roughly speaking, when we say large-scale cluster deployments, we're talking about maybe hundreds to thousands. >>Yeah. And final, final question: what's the role of the hyperscalers? You've got AWS continuing to do well, but they've got their core IaaS, they've got a PaaS, they're not putting too much of a SaaS out there. They have some SaaS apps, but mostly it's the ecosystem. They have marketplaces doing over $2 billion of transactions a year, and it's just sitting there. They're now innovating on it, but that's gonna change ecosystems. What role do the clouds play in cloud native at scale? >>The hyperscalers? >>Yeah, yeah. AWS, Azure, Google. >>You mean from a business perspective? They have their own interests that they will keep catering to. They will continue to find ways to lock their users into their ecosystem of services and APIs. So I don't think that's gonna change, right? They're just gonna keep going. >>Well, they've got great performance. I mean, from a hardware standpoint, yes. That's gonna be key, right? >>Yes. I think the move away from x86 being the dominant way and platform to run workloads is happening, right? And I think the hyperscalers really want to be in the game in terms of the new RISC and Arm ecosystems, the platforms. >>Yeah. Joking aside, Paul Maritz, when he was the CEO of VMware, when he took over, once said, I remember, it was our first year doing theCUBE: the cloud is one big distributed computer. It's hardware, and you've got software, and you've got middleware. He was kind of tongue in cheek, but really you're talking about large compute and sets of services that are essentially a distributed computer. >>Yes, exactly. >>We're back in the same game. Thank you for coming on the segment. Appreciate your time. This is cloud native at scale, a special presentation with Platform9, really unpacking super cloud, Arlon, open source, and how to run large-scale applications on the cloud, cloud native development for developers. I'm John Furrier with theCUBE. Thanks for watching. We'll stay tuned for another great segment coming right up. Hey, welcome back everyone to Super Cloud 22. I'm John Furrier, host of theCUBE, here all day talking about the future of cloud. Where's it all going? Making it super. Multi-cloud is around the corner and public cloud is winning. You've got the private cloud, on-premise, and edge. Got a great guest here, Vascar Gorde, CEO of Platform9, just on the panel on Kubernetes: an enabler or a blocker? Welcome back. Great to have you on. >>Good to see you again. 
So, Kubernetes: a blocker or an enabler? With a question mark; I put that on there. The panel was really to discuss the role of Kubernetes. Now, great conversation. Operations is impacted. What's interesting about what you guys are doing at Platform9 is your role there as CEO and the company's position; it's kind of like the world spun in the direction of Platform9 while you're at the helm, right? >>Absolutely. In fact, things are moving very well, and it was an insight for us to call ourselves the platform company eight years ago, right? So absolutely, whether you are doing it in public clouds or private clouds, the application world is moving very fast in trying to become digital and cloud native. There are many options for you to run the infrastructure on. The biggest blocking factor now is having a unified platform, and that's where we come in. >>We were talking before we came on stage here about your background, and we were kind of talking about the glory days in 2000, 2001 when the first ASPs, application service providers, came out. Kind of a SaaS vibe, but that was kind of cloud-like. >>It wasn't. >>And web services started then too. So you saw that whole growth. Now fast forward 20 years later, 22 years later, to where we are now. When you look back from then to here, across all the different cycles? >>In fact, as we were talking offline, I was in one of those ASPs in the year 2000, where it was a novel concept of saying we are providing software and a capability as a service, right? You sign up and start using it. I think a lot has changed since then. The tooling, the tools, the technology have really skyrocketed. The app development environment has really taken off exceptionally well. There are many, many choices of infrastructure now, right? So I think things are in a way the same, but also extremely different. But more importantly, now, for any company, regardless of size, to be a digital native, to become a digital company, is extremely mission critical. It's no longer a nice-to-have; everybody's on the journey somewhere. >>Everyone is going through digital transformation here, even in a so-called downturn, a recession that's upcoming, inflation this year. It's interesting: this is the first downturn in the history of the world where the hyperscale clouds have been pumping on all cylinders as an economic input. And if you look at the tech trends, GDP is down, but not tech. Nope. Because the pandemic showed everyone digital transformation is here, and more spend and more growth are coming, even in tech. So this is a unique factor which proves that digital transformation is happening, and every company will need a super cloud. >>Everyone, every company, regardless of size, regardless of location, has to modernize their infrastructure. And modernizing infrastructure is not just some new servers and new application tools. It's your approach, how you're serving your customers, how you're bringing agility into your organization. I think that is becoming a necessity for every enterprise to survive. >>I wanna get your thoughts on Super Cloud, because one of the things Dave Vellante and I wanted to do with Super Cloud, and calling it that, was... I personally, and I know Dave as well, he can speak for himself, we didn't like multi-cloud. I mean, not because Amazon said don't call things multi-cloud; it just didn't feel right. I mean, everyone has multiple clouds by default. 
If you're running productivity software, you have Azure and Office 365. But it wasn't truly distributed, it wasn't truly decentralized, it wasn't truly cloud enabled. It felt like the market wasn't ready yet. Yet public cloud is booming, and on-premise, private cloud, and edge are much more dynamic, more real. >>Yeah. I think that's the reason why we think super cloud is a better term than multi-cloud. Multi-cloud is more than one cloud, but they're disconnected. Okay, you have a productivity cloud, you have a Salesforce cloud, everyone has an internal cloud, right? But they're not connected. So you can say, okay, it's more than one cloud, so it's multi-cloud. But super cloud is where you are actually trying to look at this holistically. Whether it is on-prem, whether it is public, whether it's at the edge, at a store, at the branch, you are looking at this as one unit. And that's where we see the term super cloud as more applicable, because what are the qualities that you require if you're in a super cloud, right? You need choice of infrastructure, but at the same time you need a single pane, a single platform, for you to build your innovations on, regardless of which cloud you're doing it on, right? So I think super cloud is actually a more tightly integrated, orchestrated management philosophy, we think. >>So let's get into some of the super cloud type trends that we've been reporting on. Again, the purpose of this event is, as a pilot, to get the conversations flowing with the influencers like yourselves who are running companies and building products, and the builders. Amazon and Azure are doing extremely well, Google's coming up in third in public cloud. We see the use cases, on-premises use cases. Kubernetes has been an interesting phenomenon, because it's coming from the developer side a little bit, but a lot of ops people love Kubernetes; it's really more of an ops thing. You mentioned OpenStack earlier. Kubernetes kind of came out of that OpenStack era: we need orchestration. And then containers had a good shot with Docker; they re-pivoted the company, now they're all in on open source. So you've got containers booming and Kubernetes as a new layer there. What's the take on that? What does that really mean? Is that a new de facto enabler? >>It is here, it's here for sure. Every enterprise is somewhere on that journey. Most companies, 70-plus percent of them, have one, two, three container-based, Kubernetes-based applications now being rolled out. So it's very much here. It is in production at scale by many customers. And the beauty of it is, yes, it's open source, but the biggest gating factor is the skill set. And that's where we have a phenomenal engineering team, right? So it's one thing to buy a tool... >>Just to be clear, you're a managed service for Kubernetes? >>We provide a software platform for cloud acceleration as a service, and it can run anywhere. It can run in public, in private. We have customers who do it in truly multi-cloud environments. It runs on the edge; it runs in stores, about thousands of stores for a retailer. So we provide that, and also, for specific segments where data sovereignty and data residency are key regulatory reasons, we also run on-prem as an air-gapped version. >>Can you give an example of how you guys are deploying your platform to enable a super cloud experience for your customers? 
>>Right. So I'll give you two different examples. One is a very large networking company, a public networking company. They have hundreds of products, hundreds of R&D teams that are building different products. And if you look at a few years back, each one was doing it on a different platform, but they really needed to bring in agility. And they've worked with us now for over three years, where we are their build, test, dev, prod platform that all their products are built on, right? And it has dramatically increased their agility to release new products. Number two, it actually is a lights-out operation. In fact, the customer says it's like the Maytag service person, because we provide it as a service and it barely takes one or two people to maintain it for them. >>So it's kinda like an SRE vibe. One person managing a... >>A large one: 4,000 engineers building infrastructure... >>On their tools. >>Whatever they want, on their tools. They're using whatever app development tools they use, but they use our platform. >>What benefits are they seeing? Are they seeing speed? >>Speed, definitely. Okay, definitely they're seeing speed. And uniformity, because now they're able to build so that their customers who are using product A and product B are seeing a similar set of tools being used. >>So a big problem that's coming out of this Super Cloud event, and we're seeing it and we heard it all here: ops and security teams. Because they're kind of part of one thing, but ops and security specifically need to catch up speed-wise. Are you delivering that value to ops and security? Right? >>So we work with ops and security teams and infrastructure teams, and we layer on top of that. We have like a platform team. If you think about it, depending on where you have data centers, where you have infrastructure, you have multiple teams, okay, but you need a unified platform. Who's our buyer? Our buyer is usually the product divisions of companies that are looking at it, or the CTO would be a buyer for us; functionally, the CIO, definitely. So it's somewhere in the DevOps-to-infrastructure space. But the ideal case, and we are beginning to see this now, is that many large corporations are really looking at it as a platform, saying we have a platform group on which any app can be developed and run on any infrastructure. So the platform engineering teams. >>So you're working two sides of that coin: you've got the dev side, and then... >>And then the infrastructure side. >>Okay. Another customer I can give as an example, which I would say is kind of the edge and the store: they have thousands of stores. Retail, a food retailer, right? They have thousands of stores around the globe, 50,000, 60,000. And they really want to enhance the customer experience that happens when you either order the product, or go into the store and pick up your product, or buy or browse or sit there. They have applications that were written in the nineties, and then they have very modern AI/ML applications today. They want something that will not require sending an IT person to install a rack in the store, but they also can't move everything to the cloud, because the store operations have to be local. The menu changes based on location; it's classic edge. It's classic edge, yeah. Right? They can't send IT people to go install racks of servers, and they can't send software people to go install the software, and any change you wanna push through means, you know, a truck roll. 
So they've been working with us, where all they do is ship, depending on the size of the store, one, two, or three little servers with instructions. >>You say little servers; how big, like a box, a small little box? >>Right. And all the person in the store has to do is what you and I do at home when we get a router: connect the power, connect the internet, and turn the switch on. And from there, we pick it up. >>Yep. >>We provide the operating system, everything, and then the applications are put on it. And so that dramatically increases the velocity for them. They manage thousands of them. >>True plug and play. >>True plug and play, thousands of stores. They manage it centrally; we do it for them, right? So that's another example, on the edge. Then we have some customers who have both a large private presence and one of the public clouds, okay? But they want to have the same platform layer of orchestration and management that they can use regardless of the location. >>So you guys have got some success. Congratulations. Got some traction there. It's awesome. The question I want to ask you, and it's come up, is: what is truly cloud native? Because there's lift and shift to the cloud... >>That's not cloud native. >>Then there's cloud native. Cloud native seems to be the driver for the super cloud. How do you talk to customers? How do you explain, when someone asks, what's cloud native and what isn't? >>Right. Look, I think first of all, the best place to look at the definition, and the attributes and characteristics of what is truly cloud native, is the CNCF, the Cloud Native Computing Foundation. And I think it's very well documented. >>KubeCon, of course, Detroit's coming up. >>So it's already there, right? So we follow that very closely. I think just lifting and shifting your 20-year-old application onto a data center somewhere is not cloud native. Okay? You can't just port it to the cloud; you have to rewrite and redevelop your application and business logic using modern tools, hopefully more open source. I think that's what cloud native is, and we are seeing a lot of our customers on that journey. Now, everybody wants to be cloud native, but it's not that easy, okay? Because, first of all, skill set is very important. Uniformity of tools: there are so many tools out there, thousands and thousands of tools, and you could spend your time just figuring out which tool to use. So I think the complexity is there, but the business benefits of agility, uniformity, and customer experience are truly being realized. >>And I'll give you an example. I don't know how cloud native they are, and they're not a customer of ours, but you order pizzas, right? If you just watch the pizza industry, how Domino's actually increased their share and mind share and wallet share, it was not because they were making better pizzas or not, I don't know anything about that, but the whole experience of how you order, how you watch what's happening, how it's delivered. They were a pioneer in it. To me, those are the kinds of customer experiences that cloud native can provide. >>Being that agility and having that flow to the application changes what the expectations are for the customer. 
One of the benefits of chatting with you here and having you as part of Super Cloud 22 is you've seen many cycles, you have a lot of insights. I want to ask you, given your career, where you've been and what you've done, and now as CEO of Platform9, how would you compare what's happening now with other inflection points in the industry? And again, you've been an entrepreneur, you sold your company to Oracle, you've been at the big companies, you've seen the different waves. What's going on right now? Put this moment in time around Super Cloud into context. >>Sure. As you said, a lot of battle scars. Having been at an ASP, been at a real-time software company, been in large enterprise software houses and through a transformation. I've been on the app side, I did the infrastructure, and then tried to build our own platforms. I've gone through all of this myself, with a lot of lessons learned in there. I think this is an event happening now that companies have to go through to become cloud native and digitize. If I were to look back and look for some parallels to the tsunami that's going on, a couple of parallels come to me. One is, think of something that was forced on all of us, like Y2K. Everybody around the world had to have a plan, a strategy, and an execution for Y2K. I would say the next big thing was e-commerce. I think e-commerce has been pervasive right across all industries. >>And disruptive. >>And disruptive, extremely disruptive. If you did not adapt and accelerate your e-commerce initiative, it was an existence question. Yeah. I think we are at that pivotal moment now with companies trying to become digital and cloud native. You know, that is what I see happening there. >>I think that e-commerce point is interesting, and just to riff with you on that, it's disrupting and refactoring the business models. I think that is something that's coming out of this: it's not just completely changing the game, it's changing how you operate. >>How you think and how you operate. See, if you think about the early days of e-commerce, just putting up a shopping cart made you an e-commerce company or an e-retailer, right? I think it's the same thing now: this is a fundamental shift in how you're thinking about your business. How are you gonna operate? How are you gonna service your customers? I think it requires that; just lift and shift is not gonna work. >>Vascar, thank you for coming on, spending the time to come in and share with our community and being part of Super Cloud 22. We really appreciate it. We're gonna keep this open, we're gonna keep this conversation going even after the event, to open up and look at the structural changes happening now and continue to look at it in the open, in the community. And we're gonna keep this going for a long, long time as we get answers to the problems that customers are looking for with cloud computing. I'm John Furrier with Super Cloud 22 in the Cube. Thanks for watching. >>Thank you. Thank you. >>Hello and welcome back. This is the end of our program, our special presentation with Platform9 on cloud native at scale, enabling the super cloud. We're continuing the theme here. You heard the interviews: Super Cloud and its challenges, new opportunities, and solutions from the likes of Platform9 and others with Arlon. This is really about the edge, situations on the internet, and managing the edge across multiple regions while avoiding vendor lock-in. 
This is what this new super cloud is all about: the business consequences we heard, and the wide-ranging conversations around what it means for open source and for solving the complexity problem. I hope you enjoyed this program. There are a lot of moving pieces and things to configure with cloud native installs, and Super Cloud, with Platform9 contributing, is all about making that easier for you. Thank you for watching.
Platform9, Cloud Native at Scale
>>Hello, welcome to the Cube here in Palo Alto, California for a special presentation on Cloud native at scale, enabling super cloud modern applications with Platform nine. I'm John Furr, your host of The Cube. We had a great lineup of three interviews we're streaming today. Meor Ma Makowski, who's the co-founder and VP of Product of Platform nine. She's gonna go into detail around Arlon, the open source products, and also the value of what this means for infrastructure as code and for cloud native at scale. Bickley the chief architect of Platform nine Cube alumni. Going back to the OpenStack days. He's gonna go into why Arlon, why this infrastructure as code implication, what it means for customers and the implications in the open source community and where that value is. Really great wide ranging conversation there. And of course, Vascar, Gort, the CEO of Platform nine, is gonna talk with me about his views on Super Cloud and why Platform nine has a scalable solutions to bring cloudnative at scale. So enjoy the program. See you soon. Hello everyone. Welcome to the cube here in Palo Alto, California for special program on cloud native at scale, enabling next generation cloud or super cloud for modern application cloud native developers. I'm John Furry, host of the Cube. A pleasure to have here, me Makoski, co-founder and VP of product at Platform nine. Thanks for coming in today for this Cloudnative at scale conversation. Thank >>You for having me. >>So Cloudnative at scale, something that we're talking about because we're seeing the, the next level of mainstream success of containers Kubernetes and cloud native develop, basically DevOps in the C I C D pipeline. It's changing the landscape of infrastructure as code, it's accelerating the value proposition and the super cloud as we call it, has been getting a lot of traction because this next generation cloud is looking a lot different, but kind of the same as the first generation. What's your view on super cloud as it fits to cloud native as scales up? >>Yeah, you know, I think what's interesting, and I think the reason why Super Cloud is a really good, in a really fit term for this, and I think, I know my CEO was chatting with you as well, and he was mentioning this as well, but I think there needs to be a different term than just multi-cloud or cloud. And the reason is because as cloud native and cloud deployments have scaled, I think we've reached a point now where instead of having the traditional data center style model where you have a few large distributions of infrastructure and workload at a few locations, I think the model is kind of flipped around, right? Where you have a large number of microsites, these microsites could be your public cloud deployment, your private on-prem infrastructure deployments, or it could be your edge environment, right? And every single enterprise, every single industry is moving in that direction. And so you gotta rougher that with a terminology that, that, that indicates the scale and complexity of it. And so I think supercloud is a, is an appropriate term for that. >>So you brought a couple of things I want to dig into. You mentioned edge nodes. We're seeing not only edge nodes being the next kind of area of innovation, mainly because it's just popping up everywhere. And that's just the beginning. Wouldn't even know what's around the corner. You got buildings, you got iot, ot, and IT kind of coming together, but you also got this idea of regions, global infras infrastructures, big part of it. 
I just saw some news around CloudFlare shutting down a site here. There's policies being made at scale, These new challenges there. Can you share because you can have edge. So hybrid cloud is a winning formula. Everybody knows that it's a steady state. Yeah. But across multiple clouds brings in this new un engineered area, yet it hasn't been done yet. Spanning clouds. People say they're doing it, but you start to see the toe in the water, it's happening, it's gonna happen. It's only gonna get accelerated with the edge and beyond globally. So I have to ask you, what is the technical challenges in doing this? Because there's something business consequences as well, but there are technical challenges. Can you share your view on what the technical challenges are for the super cloud or across multiple edges and regions? >>Yeah, absolutely. So I think, you know, in in the context of this, the, this, this term of super cloud, I think it's sometimes easier to visualize things in terms of two access, right? I think on one end you can think of the scale in terms of just pure number of nodes that you have deploy a number of clusters in the Kubernetes space. And then on the other axis you would have your distribution factor, right? Which is, do you have these tens of thousands of nodes in one site or do you have them distributed across tens of thousands of sites with one node at each site? Right? And if you have just one flavor of this, there is enough complexity, but potentially manageable. But when you are expanding on both these access, you really get to a point where that scale really needs some well thought out, well structured solutions to address it, right? A combination of homegrown tooling along with your, you know, favorite distribution of Kubernetes is not a strategy that can help you in this environment. It may help you when you have one of this or when you, when you scale, is not at the level. >>Can you scope the complexity? Because I mean, I hear a lot of moving parts going on there, the technology's also getting better. We we're seeing cloud native become successful. There's a lot to configure, there's a lot to install. Can you scope the scale of the problem? Because we're talking about at scale Yep. Challenges here. Yeah, >>Absolutely. And I think, you know, I I like to call it, you know, the, the, the problem that the scale creates, you know, there's various problems, but I think one, one problem, one way to think about it is, is, you know, it works on my cluster problem, right? So I, you know, I come from engineering background and there's a, you know, there's a famous saying between engineers and QA and the support folks, right? Which is, it works on my laptop, which is I tested this chain, everything was fantastic, it worked flawlessly on my machine, on production, It's not working. The exact same problem now happens and these distributed environments, but at massive scale, right? Which is that, you know, developers test their applications, et cetera within the sanctity of their sandbox environments. But once you expose that change in the wild world of your production deployment, right? >>And the production deployment could be going at the radio cell tower at the edge location where a cluster is running there, or it could be sending, you know, these applications and having them run at my customer site where they might not have configured that cluster exactly the same way as I configured it, or they configured the cluster, right? 
But maybe they didn't deploy the security policies, or they didn't deploy the other infrastructure plugins that my app relies on. All of these various factors are their own layer of complexity. And there really isn't a simple way to solve that today. And that is just, you know, one example of an issue that happens. I think another, you know, whole new ball game of issues come in the context of security, right? Because when you are deploying applications at scale in a distributed manner, you gotta make sure someone's job is on the line to ensure that the right security policies are enforced regardless of that scale factor. So I think that's another example of problems that occur. >>Okay. So I have to ask about scale, because there are a lot of multiple steps involved when you see the success of cloud native. You know, you see some, you know, some experimentation. They set up a cluster, say it's containers and Kubernetes, and then you say, Okay, we got this, we can figure it. And then they do it again and again, they call it day two. Some people call it day one, day two operation, whatever you call it. Once you get past the first initial thing, then you gotta scale it. Then you're seeing security breaches, you're seeing configuration errors. This seems to be where the hotspot is in when companies transition from, I got this to, Oh no, it's harder than I thought at scale. Can you share your reaction to that and how you see this playing out? >>Yeah, so, you know, I think it's interesting. There's multiple problems that occur when, you know, the two factors of scale, as we talked about, start expanding. I think one of them is what I like to call the, you know, it, it works fine on my cluster problem, which is back in, when I was a developer, we used to call this, it works on my laptop problem, which is, you know, you have your perfectly written code that is operating just fine on your machine, your sandbox environment. But the moment it runs production, it comes back with p zeros and pos from support teams, et cetera. And those issues can be really difficult to triage us, right? And so in the Kubernetes environment, this problem kind of multi folds, it goes, you know, escalates to a higher degree because you have your sandbox developer environments, they have their clusters and things work perfectly fine in those clusters because these clusters are typically handcrafted or a combination of some scripting and handcrafting. >>And so as you give that change to then run at your production edge location, like say your radio cell tower site, or you hand it over to a customer to run it on their cluster, they might not have not have configured that cluster exactly how you did, or they might not have configured some of the infrastructure plugins. And so the things don't work. And when things don't work, triaging them becomes nightmarishly hard, right? It's just one of the examples of the problem, another whole bucket of issues is security, which is, is you have these distributed clusters at scale, you gotta ensure someone's job is on the line to make sure that these security policies are configured properly. >>So this is a huge problem. I love that comment. That's not not happening on my system. It's the classic, you know, debugging mentality. Yeah. But at scale it's hard to do that with error prone. I can see that being a problem. And you guys have a solution you're launching. Can you share what Arlon is this new product? What is it all about? Talk about this new introduction. >>Yeah, absolutely. 
Very, very excited. You know, it's one of the projects that we've been working on for some time now because we are very passionate about this problem and just solving problems at scale in on-prem or at in the cloud or at edge environments. And what arlon is, it's an open source project, and it is a tool, it's a Kubernetes native tool for complete end to end management of not just your clusters, but your clusters. All of the infrastructure that goes within and along the site of those clusters, security policies, your middleware, plug-ins, and finally your applications. So what our LA you do in a nutshell is in a declarative way, it lets you handle the configuration and management of all of these components in at scale. >>So what's the elevator pitch simply put for what dissolves in, in terms of the chaos you guys are reigning in, what's the, what's the bumper sticker? Yeah, what >>Would it do? There's a perfect analogy that I love to reference in this context, which is think of your assembly line, you know, in a traditional, let's say, you know, an auto manufacturing factory or et cetera, and the level of efficiency at scale that that assembly line brings, right? Our line, and if you look at the logo we've designed, it's this funny little robot. And it's because when we think of online, we think of these enterprise large scale environments, you know, sprawling at scale, creating chaos because there isn't necessarily a well thought through, well structured solution that's similar to an assembly line, which is taking each component, you know, addressing them, manufacturing, processing them in a standardized way, then handing to the next stage. But again, it gets, you know, processed in a standardized way. And that's what arlon really does. That's like the deliver pitch. If you have problems of scale of managing your infrastructure, you know, that is distributed. Arlon brings the assembly line level of efficiency and consistency for >>Those. So keeping it smooth, the assembly on things are flowing. See c i CD pipe pipelining. Exactly. So that's what you're trying to simplify that ops piece for the developer. I mean, it's not really ops, it's their ops, it's coding. >>Yeah. Not just developer, the ops, the operations folks as well, right? Because developers, you know, there is, developers are responsible for one picture of that layer, which is my apps, and then maybe that middleware of applications that they interface with, but then they hand it over to someone else who's then responsible to ensure that these apps are secure properly, that they are logging, logs are being collected properly, monitoring and observability integrated. And so it solves problems for both >>Those teams. Yeah. It's DevOps. So the DevOps is the cloud needed developer's. That's right. The option teams have to kind of set policies. Is that where the declarative piece comes in? Is that why that's important? >>Absolutely. Yeah. And, and, and, and you know, ES really in introduced or elevated this declarative management, right? Because, you know, s clusters are Yeah. Or your, yeah, you know, specifications of components that go in Kubernetes are defined a declarative way, and Kubernetes always keeps that state consistent with your defined state. But when you go outside of that world of a single cluster, and when you actually talk about defining the clusters or defining everything that's around it, there really isn't a solution that does that today. 
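To make that declarative idea concrete, here is a minimal sketch, assuming Python and purely made-up resource names rather than Arlon's actual API or schema, of a fleet-wide desired-state profile and a naive drift check against what each cluster actually has:

```python
# Minimal sketch of declarative, profile-style desired state for a fleet of
# clusters. Names and fields are illustrative assumptions, not Arlon's real
# schema; a real system would keep this in Git and let controllers reconcile it.

DESIRED_PROFILE = {
    "name": "edge-retail-profile",
    "kubernetes_version": "1.25",
    "addons": ["ingress-nginx", "prometheus", "fluent-bit"],
    "policies": ["restrict-privileged-pods"],
    "apps": ["store-inventory", "pos-gateway"],
}

# Observed state per cluster, as an inventory tool might report it (made up).
OBSERVED = {
    "store-0001": {"kubernetes_version": "1.25",
                   "addons": ["ingress-nginx"], "policies": [], "apps": ["store-inventory"]},
    "store-0002": {"kubernetes_version": "1.24",
                   "addons": ["ingress-nginx", "prometheus", "fluent-bit"],
                   "policies": ["restrict-privileged-pods"],
                   "apps": ["store-inventory", "pos-gateway"]},
}

def diff_cluster(desired: dict, observed: dict) -> list[str]:
    """Return human-readable drift items for one cluster versus the profile."""
    drift = []
    if observed.get("kubernetes_version") != desired["kubernetes_version"]:
        drift.append(f"kubernetes_version {observed.get('kubernetes_version')} "
                     f"!= {desired['kubernetes_version']}")
    for key in ("addons", "policies", "apps"):
        missing = set(desired[key]) - set(observed.get(key, []))
        if missing:
            drift.append(f"missing {key}: {sorted(missing)}")
    return drift

if __name__ == "__main__":
    # A controller would act on this diff; here we only print it.
    for cluster, state in OBSERVED.items():
        for item in diff_cluster(DESIRED_PROFILE, state):
            print(f"{cluster}: {item}")
```

The point is the one being made above: the operator declares the end state once, and a reconciler converges every cluster toward it instead of scripting each change imperatively.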
And so Arlon addresses that problem at the heart of it, and it does that using existing open source well known solutions. >>And do I want to get into the benefits? What's in it for me as the customer developer? But I want to finish this out real quick and get your thoughts. You mentioned open source. Why open source? What's the, what's the current state of the product? You run the product group over at Platform nine, is it open source? And you guys have a product that's commercial? Can you explain the open source dynamic? And first of all, why open source? Yeah. And what is the consumption? I mean, open source is great, People want open source, they can download it, look up the code, but maybe wanna buy the commercial. So I'm assuming you have that thought through, can you share open source and commercial relationship? >>Yeah, I think, you know, starting with why open source? I think it's, you know, we as a company, we have, you know, one of the things that's absolutely critical to us is that we take mainstream open source technologies components and then we, you know, make them available to our customers at scale through either a SaaS model or on-prem model, right? But, so as we are a company or startup or a company that benefits, you know, in a massive way by this open source economy, it's only right, I think in my mind that we do our part of the duty, right? And contribute back to the community that feeds us. And so, you know, we have always held that strongly as one of our principles. And we have, you know, created and built independent products starting all the way with fision, which was a serverless product, you know, that we had built to various other, you know, examples that I can give. But that's one of the main reasons why opensource and also open source, because we want the community to really firsthand engage with us on this problem, which is very difficult to achieve if your product is behind a wall, you know, behind, behind a block box. >>Well, and that's, that's what the developers want too. And what we're seeing in reporting with Super Cloud is the new model of consumption is I wanna look at the code and see what's in there. That's right. And then also, if I want to use it, I'll do it. Great. That's open source, that's the value. But then at the end of the day, if I wanna move fast, that's when people buy in. So it's a new kind of freemium, I guess, business model. I guess that's the way that long. But that's, that's the benefit. Open source. This is why standards and open source is growing so fast. You have that confluence of, you know, a way for developers to try before they buy, but also actually kind of date the application, if you will. We, you know, Adrian Karo uses the dating met metaphor, you know, Hey, you know, I wanna check it out first before I get married. Right? And that's what open source, So this is the new, this is how people are selling. This is not just open source, this is how companies are selling. >>Absolutely. Yeah. Yeah. You know, I think, and you know, two things. I think one is just, you know, this, this, this cloud native space is so vast that if you, if you're building a close flow solution, sometimes there's also a risk that it may not apply to every single enterprises use cases. And so having it open source gives them an opportunity to extend it, expand it, to make it proper to their use case if they choose to do so, right? 
But at the same time, what's also critical to us is we are able to provide a supported version of it with an SLA that we, you know, that's backed by us, a SAS hosted version of it as well, for those customers who choose to go that route, you know, once they have used the open source version and loved it and want to take it at scale and in production and need, need, need a partner to collaborate with, who can, you know, support them for that production >>Environment. I have to ask you now, let's get into what's in it for the customer. I'm a customer. Yep. Why should I be enthused about Arla? What's in it for me? You know? Cause if I'm not enthused about it, I'm not gonna be confident and it's gonna be hard for me to get behind this. Can you share your enthusiastic view of, you know, why I should be enthused about Arlo? I'm a >>Customer. Yeah, absolutely. And so, and there's multiple, you know, enterprises that we talk to, many of them, you know, our customers, where this is a very kind of typical story that you hear, which is we have, you know, a Kubernetes distribution. It could be on premise, it could be public clouds, native Kubernetes, and then we have our C I C D pipelines that are automating the deployment of applications, et cetera. And then there's this gray zone. And the gray zone is well before you can you, your CS c D pipelines can deploy the apps. Somebody needs to do all of that groundwork of, you know, defining those clusters and yeah. You know, properly configuring them. And as these things, these things start by being done hand grown. And then as the, as you scale, what typically enterprises would do today is they will have their home homegrown DIY solutions for this. >>I mean, the number of folks that I talk to that have built Terra from automation, and then, you know, some of those key developers leave. So it's a typical open source or typical, you know, DIY challenge. And the reason that they're writing it themselves is not because they want to. I mean, of course technology is always interesting to everybody, but it's because they can't find a solution that's out there that perfectly fits the problem. And so that's that pitch. I think Ops FICO would be delighted. The folks that we've talk, you know, spoken with, have been absolutely excited and have, you know, shared that this is a major challenge we have today because we have, you know, few hundreds of clusters on ecos Amazon, and we wanna scale them to few thousands, but we don't think we are ready to do that. And this will give us the >>Ability to, Yeah, I think people are scared. Not sc I won't say scare, that's a bad word. Maybe I should say that they feel nervous because, you know, at scale small mistakes can become large mistakes. This is something that is concerning to enterprises. And, and I think this is gonna come up at co con this year where enterprises are gonna say, Okay, I need to see SLAs. I wanna see track record, I wanna see other companies that have used it. Yeah. How would you answer that question to, or, or challenge, you know, Hey, I love this, but is there any guarantees? Is there any, what's the SLAs? I'm an enterprise, I got tight, you know, I love the open source trying to free fast and loose, but I need hardened code. >>Yeah, absolutely. So, so two parts to that, right? One is Arlan leverages existing open source components, products that are extremely popular. Two specifically. One is Arlan uses Argo cd, which is probably one of the highest and used CD open source tools that's out there. 
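For readers who have not used it, an Argo CD Application is itself a declarative Kubernetes resource that points at a Git repo. The sketch below is illustrative only: the app name, repo URL, and path are hypothetical, PyYAML is assumed for printing, and the field layout follows the commonly documented argoproj.io/v1alpha1 Application schema, so check the Argo CD docs for your version.

```python
# Sketch of an Argo CD "Application" resource built programmatically.
# Repo URL, path, and app name are hypothetical examples.
import yaml  # PyYAML, assumed to be installed

def make_application(name: str, repo_url: str, path: str, dest_namespace: str) -> dict:
    """Return one declarative Application spec for a given app and target namespace."""
    return {
        "apiVersion": "argoproj.io/v1alpha1",
        "kind": "Application",
        "metadata": {"name": name, "namespace": "argocd"},
        "spec": {
            "project": "default",
            "source": {
                "repoURL": repo_url,
                "targetRevision": "main",
                "path": path,
            },
            "destination": {
                "server": "https://kubernetes.default.svc",
                "namespace": dest_namespace,
            },
            "syncPolicy": {"automated": {"prune": True, "selfHeal": True}},
        },
    }

if __name__ == "__main__":
    app = make_application(
        name="store-inventory",                      # hypothetical app
        repo_url="https://example.com/acme/gitops",  # hypothetical repo
        path="apps/store-inventory",
        dest_namespace="retail",
    )
    print(yaml.safe_dump(app, sort_keys=False))
```

Stamping out hundreds of these by hand is exactly the gray-zone toil described earlier; a layer on top can generate and manage them from a single higher-level definition.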
Right's created by folks that are as part of into team now, you know, really brilliant team. And it's used at scale across enterprises. That's one. Second is Alon also makes use of Cluster api cappi, which is a Kubernetes sub-component, right? For lifecycle management of clusters. So there is enough of, you know, community users, et cetera, around these two products, right? Or, or, or open source projects that will find Arlan to be right up in their alley because they're already comfortable, familiar with Argo cd. Now Arlan just extends the scope of what City can do. And so that's one. And then the second part is going back to a point of the comfort. And that's where, you know, platform line has a role to play, which is when you are ready to deploy online at scale, because you've been, you know, playing with it in your DEF test environments, you're happy with what you get with it, then Platform nine will stand behind it and provide that >>Sla. And what's been the reaction from customers you've talked to Platform nine customers with, with that are familiar with, with Argo and then rlo? What's been some of the feedback? >>Yeah, I, I think the feedback's been fantastic. I mean, I can give you examples of customers where, you know, initially, you know, when you are, when you're telling them about your entire portfolio of solutions, it might not strike a card right away. But then we start talking about Arlan and, and we talk about the fact that it uses Argo adn, they start opening up, they say, We have standardized on Argo and we have built these components, homegrown, we would be very interested. Can we co-develop? Does it support these use cases? So we've had that kind of validation. We've had validation all the way at the beginning of our land before we even wrote a single line of code saying this is something we plan on doing. And the customer said, If you had it today, I would've purchased it. So it's been really great validation. >>All right. So next question is, what is the solution to the customer? If I asked you, Look it, I have, I'm so busy, my team's overworked. I got a skills gap. I don't need another project that's, I'm so tied up right now and I'm just chasing my tail. How does Platform nine help me? >>Yeah, absolutely. So I think, you know, one of the core tenets of Platform nine has always been been that we try to bring that public cloud like simplicity by hosting, you know, this in a lot of such similar tools in a SaaS hosted manner for our customers, right? So our goal behind doing that is taking away or trying to take away all of that complexity from customers' hands and offloading it to our hands, right? And giving them that full white glove treatment, as we call it. And so from a customer's perspective, one, something like arlon will integrate with what they have so they don't have to rip and replace anything. In fact, it will, even in the next versions, it may even discover your clusters that you have today and you know, give you an inventory. And that will, >>So if customers have clusters that are growing, that's a sign correct call you guys. >>Absolutely. Either they're, they have massive large clusters, right? That they wanna split into smaller clusters, but they're not comfortable doing that today, or they've done that already on say, public cloud or otherwise. And now they have management challenges. So >>Especially operationalizing the clusters, whether they want to kind of reset everything and remove things around and reconfigure Yep. And or scale out. 
>>That's right. Exactly. And >>You provide that layer of policy. >>Absolutely. >>Yes. That's the key value here. >>That's right. >>So policy based configuration for cluster scale up, >>Well profile and policy based declarative configuration and lifecycle management for clusters. >>If I asked you how this enables supercloud, what would you say to that? >>I think this is one of the key ingredients to super cloud, right? If you think about a super cloud environment, there's at least few key ingredients that that come to my mind that are really critical. Like they are, you know, life saving ingredients at that scale. One is having a really good strategy for managing that scale, you know, in a, going back to assembly line in a very consistent, predictable way so that our lot solves then you, you need to compliment that with the right kind of observability and monitoring tools at scale, right? Because ultimately issues are gonna happen and you're gonna have to figure out, you know, how to solve them fast. And arlon by the way, also helps in that direction, but you also need observability tools. And then especially if you're running it on the public cloud, you need some cost management tools. In my mind, these three things are like the most necessary ingredients to make Super Cloud successful. And you know, our alarm fills in >>One. Okay. So now the next level is, Okay, that makes sense. Is under the covers kind of speak under the hood. Yeah. How does that impact the app developers and the cloud native modern application workflows? Because the impact to me, seems the apps are gonna be impacted. Are they gonna be faster, stronger? I mean, what's the impact if you do all those things, as you mentioned, what's the impact of the apps? >>Yeah, the impact is that your apps are more likely to operate in production the way you expect them to, because the right checks and balances have gone through, and any discrepancies have been identified prior to those apps, prior to your customer running into them, right? Because developers run into this challenge to their, where there's a split responsibility, right? I'm responsible for my code, I'm responsible for some of these other plugins, but I don't own the stack end to end. I have to rely on my ops counterpart to do their part, right? And so this really gives them, you know, the right tooling for that. >>So this is actually a great kind of relevant point, you know, as cloud becomes more scalable, you're starting to see this fragmentation gone of the days of the full stack developer to the more specialized role. But this is a key point, and I have to ask you because if this RLO solution takes place, as you say, and the apps are gonna be stupid, they're designed to do, the question is, what did does the current pain look like of the apps breaking? What does the signals to the customer Yeah. That they should be calling you guys up into implementing Arlo, Argo and, and all the other goodness to automate? What are some of the signals? Is it downtime? Is it, is it failed apps, Is it latency? What are some of the things that Yeah, absolutely would be indications of things are effed up a little bit. Yeah. >>More frequent down times, down times that are, that take longer to triage. And so you are, you know, the, you know, your mean times on resolution, et cetera, are escalating or growing larger, right? Like we have environments of customers where they're, they have a number of folks on in the field that have to take these apps and run them at customer sites. 
And that's one of our partners. And they're extremely interested in this because they're the, the rate of failures they're encountering for this, you know, the field when they're running these apps on site, because the field is automating their clusters that are running on sites using their own script. So these are the kinds of challenges, and those are the pain points, which is, you know, if you're looking to reduce your meantime to resolution, if you're looking to reduce the number of failures that occur on your production site, that's one. And second, if you are looking to manage these at scale environments with a relatively small, focused, nimble ops team, which has an immediate impact on your budget. So those are, those are the signals. >>This is the cloud native at scale situation, the innovation going on. Final thought is your reaction to the idea that if the world goes digital, which it is, and the confluence of physical and digital coming together, and cloud continues to do its thing, the company becomes the application, not where it used to be supporting the business, you know, the back office and the maybe terminals and some PCs and handhelds. Now if technology's running, the business is the business. Yeah. Company's the application. Yeah. So it can't be down. So there's a lot of pressure on, on CSOs and CIOs now and boards is saying, How is technology driving the top line revenue? That's the number one conversation. Yep. Do you see that same thing? >>Yeah. It's interesting. I think there's multiple pressures at the CXO CIO level, right? One is that there needs to be that visibility and clarity and guarantee almost that, you know, that the, the technology that's, you know, that's gonna drive your top line is gonna drive that in a consistent, reliable, predictable manner. And then second, there is the constant pressure to do that while always lowering your costs of doing it, right? Especially when you're talking about, let's say retailers or those kinds of large scale vendors, they many times make money by lowering the amount that they spend on, you know, providing those goods to their end customers. So I think those, both those factors kind of come into play and the solution to all of them is usually in a very structured strategy around automation. >>Final question. What does cloudnative at scale look like to you? If all the things happen the way we want 'em to happen, The magic wand, the magic dust, what does it look like? >>What that looks like to me is a CIO sipping at his desk on coffee production is running absolutely smooth. And his, he's running that at a nimble, nimble team size of at the most, a handful of folks that are just looking after things, but things are >>Just taking care of the CIO doesn't exist. There's no ciso, they're at the beach. >>Yep. >>Thank you for coming on, sharing the cloud native at scale here on the cube. Thank you for your time. >>Fantastic. Thanks for >>Having me. Okay. I'm John Fur here for special program presentation, special programming cloud native at scale, enabling super cloud modern applications with Platform nine. Thanks for watching. Welcome back everyone to the special presentation of cloud native at scale, the cube and platform nine special presentation going in and digging into the next generation super cloud infrastructure as code and the future of application development. We're here with Bickley, who's the chief architect and co-founder of Platform nine Pick. Great to see you Cube alumni. 
We, we met at an OpenStack event in about eight years ago, or later, earlier when OpenStack was going. Great to see you and great to see congratulations on the success of platform nine. >>Thank you very much. >>Yeah. You guys have been at this for a while and this is really the, the, the year we're seeing the, the crossover of Kubernetes because of what happens with containers. Everyone now has realized, and you've seen what Docker's doing with the new docker, the open source Docker now just the success Exactly. Of containerization, right? And now the Kubernetes layer that we've been working on for years is coming, bearing fruit. This is huge. >>Exactly. Yes. >>And so as infrastructures code comes in, we talked to Bacar talking about Super Cloud, I met her about, you know, the new Arlon, our, our lawn, and you guys just launched the infrastructures code is going to another level, and then it's always been DevOps infrastructures code. That's been the ethos that's been like from day one, developers just code. Then you saw the rise of serverless and you see now multi-cloud or on the horizon, connect the dots for us. What is the state of infrastructure as code today? >>So I think, I think I'm, I'm glad you mentioned it, everybody or most people know about infrastructures code. But with Kubernetes, I think that project has evolved at the concept even further. And these dates, it's infrastructure is configuration, right? So, which is an evolution of infrastructure as code. So instead of telling the system, here's how I want my infrastructure by telling it, you know, do step A, B, C, and D instead with Kubernetes, you can describe your desired state declaratively using things called manifest resources. And then the system kind of magically figures it out and tries to converge the state towards the one that you specified. So I think it's, it's a even better version of infrastructures code. >>Yeah. And that really means it's developer just accessing resources. Okay. That declare, Okay, give me some compute, stand me up some, turn the lights on, turn 'em off, turn 'em on. That's kind of where we see this going. And I like the configuration piece. Some people say composability, I mean now with open source so popular, you don't have to have to write a lot of code, this code being developed. And so it's into integration, it's configuration. These are areas that we're starting to see computer science principles around automation, machine learning, assisting open source. Cuz you got a lot of code that's right in hearing software, supply chain issues. So infrastructure as code has to factor in these new dynamics. Can you share your opinion on these new dynamics of, as open source grows, the glue layers, the configurations, the integration, what are the core issues? >>I think one of the major core issues is with all that power comes complexity, right? So, you know, despite its expressive power systems like Kubernetes and declarative APIs let you express a lot of complicated and complex stacks, right? But you're dealing with hundreds if not thousands of these yamo files or resources. And so I think, you know, the emergence of systems and layers to help you manage that complexity is becoming a key challenge and opportunity in, in this space. >>That's, I wrote a LinkedIn post today was comments about, you know, hey, enterprise is a new breed. 
The trend of SaaS companies moving our consumer comp consumer-like thinking into the enterprise has been happening for a long time, but now more than ever, you're seeing it the old way used to be solve complexity with more complexity and then lock the customer in. Now with open source, it's speed, simplification and integration, right? These are the new dynamic power dynamics for developers. Yeah. So as companies are starting to now deploy and look at Kubernetes, what are the things that need to be in place? Because you have some, I won't say technical debt, but maybe some shortcuts, some scripts here that make it look like infrastructure is code. People have done some things to simulate or or make infrastructure as code happen. Yes. But to do it at scale Yes. Is harder. What's your take on this? What's your view? >>It's hard because there's a per proliferation of methods, tools, technologies. So for example, today it's very common for DevOps and platform engineering tools, I mean, sorry, teams to have to deploy a large number of Kubernetes clusters, but then apply the applications and configurations on top of those clusters. And they're using a wide range of tools to do this, right? For example, maybe Ansible or Terraform or bash scripts to bring up the infrastructure and then the clusters. And then they may use a different set of tools such as Argo CD or other tools to apply configurations and applications on top of the clusters. So you have this sprawl of tools. You, you also have this sprawl of configurations and files because the more objects you're dealing with, the more resources you have to manage. And there's a risk of drift that people call that where, you know, you think you have things under control, but some people from various teams will make changes here and there and then before the end of the day systems break and you have no idea of tracking them. So I think there's real need to kind of unify, simplify, and try to solve these problems using a smaller, more unified set of tools and methodologies. And that's something that we try to do with this new project. Arlon. >>Yeah. So, so we're gonna get into Arlan in a second. I wanna get into the why Arlon. You guys announced that at AR GoCon, which was put on here in Silicon Valley at the, at the community meeting by in two, they had their own little day over there at their headquarters. But before we get there, vascar, your CEO came on and he talked about Super Cloud at our in AAL event. What's your definition of super cloud? If you had to kind of explain that to someone at a cocktail party or someone in the industry technical, how would you look at the super cloud trend that's emerging? It's become a thing. What's your, what would be your contribution to that definition or the narrative? >>Well, it's, it's, it's funny because I've actually heard of the term for the first time today, speaking to you earlier today. But I think based on what you said, I I already get kind of some of the, the gist and the, the main concepts. It seems like super cloud, the way I interpret that is, you know, clouds and infrastructure, programmable infrastructure, all of those things are becoming commodity in a way. 
And everyone's got their own flavor, but there's a real opportunity for people to solve real business problems by perhaps trying to abstract away, you know, all of those various implementations and then building better abstractions that are perhaps business or applications specific to help companies and businesses solve real business problems. >>Yeah, I remember that's a great, great definition. I remember, not to date myself, but back in the old days, you know, IBM had a proprietary network operating system, so of deck for the mini computer vendors, deck net and SNA respectively. But T C P I P came out of the osi, the open systems interconnect and remember, ethernet beat token ring out. So not to get all nerdy for all the young kids out there, look, just look up token ring, you'll see, you've probably never heard of it. It's IBM's, you know, connection for the internet at the, the layer two is Amazon, the ethernet, right? So if T C P I P could be the Kubernetes and the container abstraction that made the industry completely change at that point in history. So at every major inflection point where there's been serious industry change and wealth creation and business value, there's been an abstraction Yes. Somewhere. Yes. What's your reaction to that? >>I think this is, I think a saying that's been heard many times in this industry and, and I forgot who originated it, but I think that the saying goes like, there's no problem that can't be solved with another layer of indirection, right? And we've seen this over and over and over again where Amazon and its peers have inserted this layer that has simplified, you know, computing and, and infrastructure management. And I believe this trend is going to continue, right? The next set of problems are going to be solved with these insertions of additional abstraction layers. I think that that's really a, yeah, it's gonna >>Continue. It's interesting. I just, when I wrote another post today on LinkedIn called the Silicon Wars AMD stock is down arm has been on a rise. We remember pointing for many years now that arm's gonna be hugely, it has become true. If you look at the success of the infrastructure as a service layer across the clouds, Azure, aws, Amazon's clearly way ahead of everybody. The stuff that they're doing with the silicon and the physics and the, the atoms, the pro, you know, this is where the innovation, they're going so deep and so strong at ISAs, the more that they get that gets come on, they have more performance. So if you're an app developer, wouldn't you want the best performance and you'd wanna have the best abstraction layer that gives you the most ability to do infrastructures, code or infrastructure for configuration, for provisioning, for managing services. And you're seeing that today with service MeSHs, a lot of action going on in the service mesh area in in this community of, of co con, which will be a covering. So that brings up the whole what's next? You guys just announced our lawn at Argo Con, which came out of Intuit. We've had Mariana Tessel at our super cloud event. She's the cto, you know, they're all in the cloud. So they contributed that project. Where did Arlon come from? What was the origination? What's the purpose? Why our lawn, why this announcement? >>Yeah, so the, the inception of the project, this was the result of us realizing that problem that we spoke about earlier, which is complexity, right? 
With all of this, these clouds, these infrastructure, all the variations around and, you know, compute storage networks and the proliferation of tools we talked about the Ansibles and Terraforms and Kubernetes itself. You can, you can think of that as another tool, right? We saw a need to solve that complexity problem, and especially for people and users who use Kubernetes at scale. So when you have, you know, hundreds of clusters, thousands of applications, thousands of users spread out over many, many locations, there, there needs to be a system that helps simplify that management, right? So that means fewer tools, more expressive ways of describing the state that you want and more consistency. And, and that's why, you know, we built our lawn and we built it recognizing that many of these problems or sub problems have already been solved. So Arlon doesn't try to reinvent the wheel, it instead rests on the shoulders of several giants, right? So for example, Kubernetes is one building block, GI ops, and Argo CD is another one, which provides a very structured way of applying configuration. And then we have projects like cluster API and cross plane, which provide APIs for describing infrastructure. So arlon takes all of those building blocks and builds a thin layer, which gives users a very expressive way of defining configuration and desired state. So that's, that's kind of the inception of, And >>What's the benefit of that? What does that give the, what does that give the developer, the user, in this case, >>The developers, the, the platform engineer, team members, the DevOps engineers, they get a a ways to provision not just infrastructure and clusters, but also applications and configurations. They get a way, a system for provisioning, configuring, deploying, and doing life cycle management in a, in a much simpler way. Okay. Especially as I said, if you're dealing with a large number of applications. >>So it's like an operating fabric, if you will. Yes. For them. Okay, so let's get into what that means for up above and below the the, this abstraction or thin layer below as the infrastructure. We talked a lot about what's going on below that. Yeah. Above our workloads. At the end of the day, you know, I talk to CXOs and IT folks that are now DevOps engineers. They care about the workloads and they want the infrastructures code to work. They wanna spend their time getting in the weeds, figuring out what happened when someone made a push that that happened or something happened. They need observability and they need to, to know that it's working. That's right. And is my workloads running effectively? So how do you guys look at the workload side of it? Cuz now you have multiple workloads on these fabric, >>Right? So workloads, so Kubernetes has defined kind of a standard way to describe workloads and you can, you know, tell Kubernetes, I want to run this container this particular way, or you can use other projects that are in the Kubernetes cloud native ecosystem like K native, where you can express your application in more at a higher level, right? But what's also happening is in addition to the workloads, DevOps and platform engineering teams, they need to very often deploy the applications with the clusters themselves. Clusters are becoming this commodity. It's, it's becoming this host for the application and it kind of comes bundled with it. In many cases it is like an appliance, right? 
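As a rough illustration of that thin-layer idea, and with entirely made-up object names rather than Arlon's real model, a single shared definition can be expanded into the per-cluster resources that the lower layers, Cluster API for the cluster itself and a GitOps engine for the apps, would then act on:

```python
# Illustrative sketch only: one higher-level definition expanded into
# per-cluster resources. Kinds and names here are stand-ins, not real CRDs.

PROFILE = {
    "addons": ["cert-manager", "prometheus"],
    "apps": ["store-inventory"],
}

CLUSTERS = ["edge-east-01", "edge-east-02", "edge-west-01"]

def render_cluster(name: str, profile: dict) -> list[dict]:
    """Expand one cluster name plus a shared profile into the set of
    declarative resources a GitOps tool could sync."""
    resources = [{"kind": "Cluster", "name": name}]  # stand-in for a cluster object
    for addon in profile["addons"]:
        resources.append({"kind": "Addon", "cluster": name, "name": addon})
    for app in profile["apps"]:
        resources.append({"kind": "App", "cluster": name, "name": app})
    return resources

if __name__ == "__main__":
    manifests = [r for c in CLUSTERS for r in render_cluster(c, PROFILE)]
    print(f"{len(manifests)} resources generated for {len(CLUSTERS)} clusters")
```

One definition in, a predictable set of per-cluster resources out; that is the assembly-line consistency being described.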
So DevOps teams have to provision clusters at a really incredible rate, and they need to tear them down. Clusters are becoming more — >> It's kind of like an EC2 instance: spin up a cluster. People use words like that. >> That's right. And before Arlon you had to do all of that using a different set of tools, as I explained. With Arlon you can express everything together. You can say: I want a cluster with a health monitoring stack and a logging stack and this ingress controller, and I want these applications and these security policies. You can describe all of that using something we call a profile. And then you can stamp out your applications and your clusters and manage them in a very — >> Essentially it creates a standard mechanism. >> Exactly. >> Standardized, declarative configurations — it's like a playbook, you deploy it. Now, what's the difference between that and, say, a script? I have scripts; I could just automate scripts. >> Yes, this is where the declarative API and infrastructure-as-configuration comes in. Because with scripts — yes, you can automate scripts, but the order in which they run matters. They can break, things can break in the middle, and sometimes you need to debug them. The declarative way is much more expressive and powerful: you just tell the system what you want, and the system figures it out. And there are these things called controllers, which in the background reconcile all the state to converge toward your desired state. It's a much more powerful, expressive, and reliable way of getting things done. >> So infrastructure as configuration is built kind of as a superset of infrastructure as code, because it's — >> An evolution. >> You need infrastructure as code, but then you can configure the code by just saying, do it. You're basically declaring it and saying: go do that. >> That's right. >> Okay, so cloud native at scale — take me through your vision of what that means. Someone says, hey, what does cloud native at scale mean? What does success look like? How does it roll out over the next couple of years? People are now starting to figure out that it's not as easy as it sounds, even if it has value. We're going to hear a lot about this at KubeCon this year. What does cloud native at scale mean? >> Yeah, there are different interpretations, but if you ask me, when people think of scale they think of a large number of deployments: geographies, many locations, supporting thousands or millions of users. There's that aspect of scale. There's also an equally important aspect of scale, which is something we try to address with Arlon, and that is the complexity for the people operating or configuring all of this. In order to describe that desired state, and in order to perform things like upgrades or updates on a very large scale, you want the humans behind it to be able to express and direct the system in relatively simple terms. So we want the tools, the abstractions, and the mechanisms available to the user to be as powerful but as simple as possible. There have been, and I think there will continue to be, a number of CNCF and cloud native projects trying to attack that complexity problem as well, and Arlon falls in that category.
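As a rough illustration of the profile idea and the controller reconciliation described above — and this is a hypothetical sketch, not Arlon's actual API, schema, or field names — the desired state for a cluster and its applications might be captured as one declarative document, with a loop converging actual state toward it. The Python below only shows the shape of that pattern.

```python
# Hypothetical sketch of a "profile" plus a toy reconcile loop; the field
# names and functions here are illustrative and NOT Arlon's real schema or API.
import yaml

profile_yaml = """
name: prod-east-profile
cluster:
  provider: cluster-api        # e.g. Cluster API or Crossplane underneath
  nodes: 6
addons:
  - prometheus-stack           # health monitoring
  - loki                       # logging
  - nginx-ingress              # ingress controller
applications:
  - name: checkout
    gitRepo: https://example.com/acme/checkout.git   # placeholder repo
policies:
  - restrict-privileged-pods
"""

desired = yaml.safe_load(profile_yaml)

def observe_cluster():
    """Placeholder: a real system would query the live cluster here."""
    return {"nodes": 4, "addons": ["prometheus-stack"], "applications": [], "policies": []}

def reconcile(desired_state, actual_state):
    """Toy reconcile step: report what a controller would converge."""
    diffs = []
    if actual_state["nodes"] != desired_state["cluster"]["nodes"]:
        diffs.append(f"scale nodes {actual_state['nodes']} -> {desired_state['cluster']['nodes']}")
    for addon in desired_state["addons"]:
        if addon not in actual_state["addons"]:
            diffs.append(f"install addon {addon}")
    for app in desired_state["applications"]:
        if app["name"] not in actual_state["applications"]:
            diffs.append(f"deploy app {app['name']}")
    return diffs

for action in reconcile(desired, observe_cluster()):
    print(action)  # unlike an ordered script, the loop re-runs until no diffs remain
```

The contrast with scripts is in the last line: a controller keeps re-evaluating desired versus actual state, so ordering and mid-run failures matter far less than they do in an imperative pipeline.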
Okay, so I'll put you on the spot: KubeCon is coming up, and obviously we'll be shipping this segment out before it. What do you expect to see at KubeCon this year? What's the big story? What's the most important thing happening, in the open source community and among all the people jockeying for leadership? I know there are a lot of projects, and there's still some white space in the overall systems map across the different areas — runtime, observability, all of it. Where's the action? Where's the smoke, where's the fire? Where's the peace, where's the tension? >> Yeah, one thing that has been happening over the past couple of KubeCons, and that I expect to continue, is that the word on the street is Kubernetes is getting boring. Which is good, right? >> Boring means simple. >> Well, maybe. >> Invisible. >> No drama, right? The rate of change of Kubernetes features has slowed, but in a positive way. But there's still a general sentiment that there's just too much stuff. If you look at the stack necessary for hosting applications based on Kubernetes, there are still too many moving parts, too many components — too much complexity. I keep going back to the complexity problem. So I expect KubeCon, and all the vendors and the players and the startups and the people there, to continue to focus on that complexity problem and introduce further simplifications to the stack. >> Yeah. Vic, you've had a storied career — VMware for over a decade, 12 or 14 years or something like that, big number — and you're a co-founder here at Platform9. You guys have been around for a while in this game. We talked about OpenStack; we interviewed you at one of their events. OpenStack was the beginning of this new revolution, and I remember in the early days it wasn't supposed to be an alternative to Amazon, but it was a way to do more cloud, more cloud native. I think we had a cloud era team at that time, and we used to joke about the dream — it's happening now. Now, at Platform9, you've been doing this for a while. What are you most excited about as the chief architect? What did you double down on? What did you pivot from — did you do any pivots, did you extend out certain areas? Because you're in a good position right now, with a lot of DNA in cloud native. What are you most excited about, and what does Platform9 bring to the table for customers and for people in the industry watching this? >> Yeah, I think our mission really hasn't changed over the years. It's always been about taking complex open source software — because open source software is powerful, it solves new problems every year, and you have new things coming out all the time; OpenStack was an example, and then Kubernetes took the world by storm — but there's always that complexity of configuring it, deploying it, running it, operating it. Our mission has always been to take all that complexity and make it easy for users to consume, regardless of the technology. As for the successor to Kubernetes, I don't have a crystal ball, but there are indications that people are coming up with new and simpler ways of running applications. There are many projects out there; who knows what's coming next year or the year after that.
But Platform9 will be there, and we will take the innovations from the community, contribute our own innovations, and make all of those things very consumable for customers. >> Simpler, faster, cheaper. >> Exactly. >> Always a good business model — technically hard to make happen, but yes. I think reining in the chaos is key. Now we have visibility into the scale. Final question before we depart this segment: what is "at scale"? How many clusters do you see as the watermark for an at-scale conversation in an enterprise? Is it workloads we're looking at, or clusters? How would you describe it when people squint through and try to evaluate what the at-scale threshold is? >> Yeah, the number of clusters doesn't tell the whole story, because clusters can be small in terms of the number of nodes, or they can be large. But roughly speaking, when we say large-scale cluster deployments, we're talking about maybe hundreds to two thousand. >> Yeah. And final question: what's the role of the hyperscalers? You've got AWS continuing to do well — they've got their core IaaS, they've got a PaaS, they're not putting too much SaaS out there. They have some SaaS apps, but mostly it's the ecosystem. They have marketplaces doing over $2 billion of transactions a year, and they're now innovating on it, but that's going to change ecosystems. What role does the cloud play in cloud native at scale? >> The hyperscalers — >> Yes: AWS, Azure, Google. >> You mean from a business perspective? They have their own interests that they will keep catering to. They will continue to find ways to lock their users into their ecosystem of services and APIs, so I don't think that's going to change. They're just going to keep — >> Well, they've got great performance, I mean from a hardware standpoint. That's going to be key, right? >> Yes. I think the move away from x86 as the dominant platform for running workloads is under way, and the hyperscalers really want to be in the game in terms of the new RISC and Arm ecosystems and platforms. >> Yeah. Joking aside, Paul Maritz, when he was the CEO of VMware — I remember our first year doing theCUBE — once said the cloud is one big distributed computer: there's hardware, there's software, there's middleware. He was being a bit tongue in cheek, but really you're talking about large compute and sets of services that are essentially a distributed computer. >> Yes. >> Exactly. We're back on the same game. Vic, thank you for coming on the segment; appreciate your time. This has been the Cloud Native at Scale special presentation with Platform9, really unpacking supercloud, Arlon, open source, and how to run large-scale applications on the cloud — cloud native for developers. I'm John Furrier with theCUBE. Thanks for watching. We'll stay tuned for another great segment coming right up. Hey, welcome back, everyone, to Supercloud 22. I'm John Furrier, host of theCUBE, here all day talking about the future of cloud. Where's it all going? Making it super. Multi-cloud is around the corner and public cloud is winning. We've got the private cloud on premise, and edge.
Got a great guest here: Vascar Gorde, CEO of Platform9, just off the panel on Kubernetes — enabler or blocker? Welcome back, great to have you on. >> Good to see you again. >> So "Kubernetes: blocker or enabler," with a question mark — the point I put on that panel was really to discuss the role of Kubernetes. Great conversation; operations is impacted. What's interesting about what you're doing at Platform9, and your role there as CEO and the company's position, is that the world has kind of spun into the direction of Platform9 while you're at the helm, right? >> Absolutely. In fact, things are moving very well. It was an insight for us to call ourselves the platform company eight years ago. Whether you're doing it in public clouds or private clouds, the application world is moving very fast in trying to become digital and cloud native. There are many options for you on the infrastructure side. The biggest blocking factor now is having a unified platform, and that's where we come in. >> Vascar, we were talking before we came on stage about your background, and about the glory days in 2000, 2001, when the first ASPs — application service providers — came out. Kind of a SaaS vibe, but that was all somewhat cloud-like. >> It wasn't. >> And web services started then too. So you saw that whole growth. Now fast forward 20, 22 years later to where we are now — when you look back from then to here, across all the different cycles? >> In fact, as we were talking offline, I was in one of those ASPs in the year 2000, when it was a novel concept to say we're providing software and a capability as a service — you sign up and start using it. I think a lot has changed since then. The tooling and the technology have really skyrocketed, the app development environment has taken off exceptionally well, and there are many, many choices of infrastructure now. So things are in a way the same, but also extremely different. More importantly, now, for any company regardless of size, becoming a digital company is mission critical. It's no longer a nice-to-have; everybody's somewhere in the journey. >> Everyone is going through digital transformation here. Even in a so-called downturn — recession talk, inflation's here — it's interesting: this is the first downturn in history where the hyperscale clouds have been pumping on all cylinders as an economic input. If you look at the tech trends, GDP's down, but not tech. >> Nope. >> Because the pandemic showed everyone that digital transformation is here, and more spend and more growth are coming even in tech. So this is a unique factor which proves that digital transformation is happening, and every company will need a supercloud. >> Every company, regardless of size, regardless of location, has to modernize its infrastructure. And modernizing infrastructure is not just some new servers and new application tools; it's your approach, how you're serving your customers, how you're bringing agility into your organization. I think that is becoming a necessity for every enterprise to survive. >> I want to get your thoughts on supercloud, because one of the things Dave Vellante and I wanted to do with supercloud, and calling it that — I'll speak for myself, and Dave can speak for himself.
We didn't like multi-cloud — not because Amazon said don't call things multi-cloud; it just didn't feel right. Everyone has multiple clouds by default: if you're running productivity software, you have Azure and Office 365. But it wasn't truly distributed, it wasn't truly decentralized, it wasn't truly cloud enabled. It felt like the market wasn't ready yet. Yet public cloud is booming, and on-premise private cloud and edge are much more dynamic, more real. >> Yeah. I think the reason we think supercloud is a better term than multi-cloud is this: multi-cloud is more than one cloud, but they're disconnected. You have a productivity cloud, you have a Salesforce cloud, everyone has an internal cloud — but they're not connected. So sure, you can say it's more than one cloud, so it's multi-cloud. But supercloud is where you're actually looking at this holistically. Whether it's on-prem, whether it's public, whether it's at the edge or a store at the branch, you're looking at it as one unit. That's where we see the term supercloud as more applicable, because what are the qualities you require in a supercloud? You need choice of infrastructure, but at the same time you need a single pane, a single platform, to build your innovations on, regardless of which cloud you're doing it on. So we think supercloud is a more tightly integrated, orchestrated management philosophy. >> So let's get into some of the supercloud trends we've been reporting on. Again, the purpose of this event, as a pilot, is to get the conversations flowing with influencers like yourselves who are running companies and building products. Amazon and Azure are doing extremely well, Google's coming up in third in public cloud, and we see the use cases on premises. Kubernetes has been an interesting phenomenon: it came partly from the developer side, but a lot of ops people love Kubernetes — it's really more of an ops thing. You mentioned OpenStack earlier; Kubernetes kind of came out of that need for orchestration, and then containers had a good shot with Docker, who re-pivoted the company and are now all-in on open source. So you've got containers booming and Kubernetes as a new layer there. What's your take on that? What does it really mean? Is it a new de facto enabler? >> It is here, for sure. Every enterprise is somewhere in the journey. Most companies — 70-plus percent of them — have one, two, three container-based, Kubernetes-based applications now being rolled out. So it's very much here; it is in production at scale with many customers. And the beauty of it is, yes, it's open source, but the biggest gating factor is the skill set. And that's where we have a phenomenal engineering team. So it's one thing to buy a tool — >> And just to be clear, you're a managed service for Kubernetes. >> We provide a software platform for cloud acceleration as a service, and it can run anywhere. It can run in public or private cloud. We have customers who do it in truly multi-cloud environments. It runs on the edge; it runs in stores — thousands of stores for a retailer. So we provide that, and also, for specific segments where data sovereignty and data residency are key regulatory reasons,
we also run it on-prem as an air-gapped version. >> Can you give an example of how you're deploying your platform to enable a supercloud experience for your customer? >> Right. I'll give you two different examples. One is a very large public networking company. They have hundreds of products and hundreds of R&D teams building different products, and if you look back a few years, each one was doing it on a different platform. They really needed to bring in agility, and they've worked with us now for over three years. We are their build-test-dev-prod platform that all their products are built on, and it has dramatically increased their agility in releasing new products. Number two, it's actually a lights-out operation — in fact, the customer says it's like the Maytag service person, because we provide it as a service and it barely takes one or two people to maintain it for them. >> So it's kind of an SRE vibe — one person managing a large — >> 4,000 engineers building — >> Infrastructure, on their tools. >> Whatever they want on their tools. They're using whatever app development tools they like, but they use our platform. >> What benefits are they seeing? Are they seeing speed? >> Speed, definitely. And uniformity, because their customers who use product A and product B now see a similar set of tools being used. >> A big theme coming out of this supercloud event — and we've heard it all here — is ops and security; they're kind of two parts of one theme, but ops and security specifically need to catch up speed-wise. Are you delivering that value to ops and security? >> Right. We work with ops and security teams and infrastructure teams, and we layer on top of that. We have, in effect, a platform team. If you think about it, depending on where you have data centers and where you have infrastructure, you have multiple teams — but you need a unified platform. >> Who's your buyer? >> Our buyer is usually the product divisions of companies; the CTO would be a buyer for us, and functionally the CIO, definitely. So it's somewhere between DevOps and infrastructure. But the ideal case, which we're beginning to see now, is many large corporations really looking at it as a platform and saying: we have a platform group on which any app can be developed and run on any infrastructure. So, the platform engineering teams. >> You're working two sides of that coin: you've got the dev side and then — >> And then the infrastructure side. >> Okay. >> Another customer example, which I would say is the edge and the store: a food retailer with thousands of stores around the globe — 50,000, 60,000. They really want to enhance the customer experience when you order a product, or go into the store and pick it up, or buy or browse. They have applications that were written in the nineties, and they have very modern AI/ML applications today. They want something that won't require sending an IT person to install a rack in the store, and they can't move everything to the cloud because store operations have to be local — the menu changes based on — >> It's a classic edge. >> It's classic edge. Yeah. Right.
They can't send IT people to go install racks of servers, they can't send software people to go install the software, and any change they want to push through means a truck roll. So they've been working with us, and all they do is ship, depending on the size of the store, one, two, or three little servers, with instructions. >> When you say little servers — how big? Like a little network box, a small — >> A small box. And all the person in the store has to do is what you and I do at home when we get a router: connect the power, connect the internet, and turn the switch on. From there, we pick it up. >> Yep. >> We provide the operating system, everything, and then the applications are put on it. That dramatically improves the velocity for them. They manage — >> Thousands of them. True plug and play. >> True plug and play, thousands of stores, and they manage it centrally — we do it for them. So that's another example, at the edge. Then we have some customers who have both a large private presence and one of the public clouds, but they want the same platform layer of orchestration and management that they can use regardless of location. >> You've got some success, congratulations — you've got some traction there, it's awesome. The question I want to ask, because it keeps coming up, is: what is truly cloud native? Because there's lift-and-shift to the cloud — >> That's not cloud native. >> And then there's cloud native. Cloud native seems to be the driver for the supercloud. How do you talk to customers about it? How do you explain what's cloud native and what isn't? >> Right. Look, first of all, the best place to look for the definition — the attributes and characteristics of what is truly cloud native — is the CNCF, the Cloud Native Computing Foundation. It's very well documented there. >> And KubeCon, of course, is coming up in Detroit. >> Right, so it's already there, and we follow it very closely. I think just lifting and shifting your 20-year-old application onto a data center somewhere is not cloud native. To be cloud native you have to rewrite and redevelop your application and business logic using modern tools, hopefully more open source. That's what cloud native is, and we're seeing a lot of our customers on that journey. Now, everybody wants to be cloud native, but it's not that easy. First of all, skill set is very important, and so is uniformity of tools — there are so many tools out there, thousands and thousands, and you could spend all your time figuring out which one to use. So the complexity is there, but the business benefits of agility, uniformity, and customer experience are truly there. >> And I'll give you an example. I don't know how cloud native they are, and they're not a customer of ours, but — you order pizzas, right? If you watch the pizza industry, how Domino's increased their share, their mind share and wallet share, wasn't because they were making better pizzas or not — I don't know anything about that — it was the whole experience of how you order, how you watch what's happening, how it's delivered. They were a pioneer in it. To me, those are the kinds of customer experiences that cloud native can provide. >> Bringing agility and having that flow through to the application changes the expectations for the customer. >> The customer's expectations change, right?
Once you get used to a better customer experience, you learn. >> Vascar, to wrap it up, I want to get your perspective again. One of the benefits of chatting with you here, and having you as part of Supercloud 22, is that you've seen many cycles and you have a lot of insight. Given your career — where you've been, what you've done, and now as CEO of Platform9 — how would you compare what's happening now with other inflection points in the industry? You've been an entrepreneur, you sold your company to Oracle, you've seen the big companies, you've seen the different waves. Put this moment in time, around supercloud, into context. >> Sure. As you said, a lot of battles. I've been in an ASP, been in a real-time software company, been in large enterprise software houses and through a transformation. I've been on the app side, I've done infrastructure, and then we tried to build our own platforms. I've gone through all of this myself, with a lot of lessons learned along the way. I think what's happening now is an event companies have to go through to become cloud native and digitalize. If I look back for parallels to the tsunami that's going on, a couple come to mind. One was forced on everyone, like Y2K: everybody around the world had to have a plan, a strategy, and an execution for Y2K. I would say the next big thing was e-commerce. E-commerce has been pervasive across all industries. >> And disruptive. >> Extremely disruptive. If you did not adapt and accelerate your e-commerce initiative, it became an existential question. I think we are at that pivotal moment now, with companies trying to become digital and cloud native. That is what I see happening. >> I think e-commerce was interesting, and just to riff with you on that: it's disrupting and refactoring the business models. That's something coming out of this — it's not just completely changing the game, it's changing how you operate. >> How you think, and how you operate. If you think about the early days of e-commerce, just putting up a shopping cart didn't make you an e-commerce company or an e-retailer. I think it's the same thing now: this is a fundamental shift in how you think about your business, how you're going to operate, how you're going to service your customers. It requires that — just lift and shift is not going to work. >> Vascar, thank you for coming on, spending the time to come in and share with our community, and being part of Supercloud 22. We really appreciate it. We're going to keep this open and keep this conversation going even after the event, to look at the structural changes happening now and continue to examine them in the open, in the community. And we're going to keep this going for a long, long time as we get answers to the problems customers are looking to solve with cloud computing. I'm John Furrier with Supercloud 22 on theCUBE. Thanks for watching. >> Thank you. Thank you, John. >> Hello, and welcome back. This is the end of our program, our special presentation with Platform9 on cloud native at scale, enabling the supercloud. We're continuing the theme here. You heard the interviews on supercloud and its challenges, and the new opportunities around solutions like Platform9 and others with Arlon.
This is really about the edge — situations on the internet, managing the edge across multiple regions, and avoiding vendor lock-in. That's what this new supercloud is all about. We heard the business consequences, and wide-ranging conversations about what it means for open source and about the complexity problem being solved. I hope you enjoyed this program. There are a lot of moving pieces and things to configure with a cloud native install, and all of it is being made easier for you here with supercloud and, of course, Platform9 contributing to that. Thank you for watching.
Omer Singer, Snowflake & Julie Chickillo, Guild Education | Snowflake Summit 2022
>> Hey everyone, welcome back to theCUBE. I'm Lisa Martin, with Dave Vellante, and we're live in Vegas. This is Snowflake Summit 22, their fourth annual event. A lot of people here, a lot of news, a lot to unpack so far — and this is only day one. We've got two guests here with us to talk about cybersecurity, a very important topic. Please welcome Omer Singer, head of cybersecurity strategy at Snowflake, and Julie Chickillo, VP of security at Guild Education. Welcome. >> Thank you. >> Thank you for having us. >> One of our favorite topics. >> And it's not boring. >> You know this much, and you have so much more to learn now. So here we go. Cybersecurity is — not to say it's boring — not boring is an understatement. Omer, I want to start with you: so much news coming out today. Talk to us about what's new with the cybersecurity workload at Snowflake. The flywheel of innovation just seems to be getting bigger and faster. >> Yeah, well, I'll tell you, it's been a long road to get to where we are today. My initial role at Snowflake was to lead security engineering, so I've actually been using Snowflake as the home for security data basically from day one. And we saw that it worked — it worked really well. We started hearing from customers that they were dealing with some of the same challenges that we faced as an internal security team, and we decided, as Snowflake, that we wanted to bring the benefits of the data cloud to cybersecurity teams at all of our customers. That's what the workload is all about. >> Talk to us about the voice of the customer. Obviously we saw a lot of customer stories, and we're going to be talking about Guild Education in a minute. But in terms of the voice of the customer being influential — obviously you were the internal customer, drinking your own champagne, and it tastes really good — how has the voice of the customer influenced the cybersecurity workload, as we've seen the threat landscape change so much in the last two years alone? >> Sure, sure. You know, security is a really hard problem. We like to think of it as a data problem, and when you start thinking about it that way, Snowflake is very relevant for it. But many security teams don't yet think about their challenge as a data challenge, so they're struggling with a very fragmented data landscape. The facts are all over the place, and they're not able to ask the kinds of questions they need in order to understand: where are my risks? How are the bad guys going to try to get into my network? And they can't reflect that to leadership, to everybody that really cares about cybersecurity — this is a board-level concern today. Without the unified data and without the analytics, they really can't do any of that. And yes, representing the customer is a big part of what I do, and we have great customers like Julie, who's been with us on this journey — she's a part of the movement. I mean, Julie, what has it been like for you? >> Oh, it's been a game changer for Guild, for sure. When we first started, I didn't even know this was a concept. When I first started talking to Omer and Snowflake, I had just heard through the grapevine that this was a thing you could do — you could use the data, you could get everything you needed in one place. And it's been game changing for my team.
We were in many different security tools. They were all isolated and siloed, and we're now able to move everything into one area. We're getting close to the one pane of glass, which I just heard was a mythical concept — >> In security. >> For a long time, yeah, for a long time. So it's just been amazing, and it's brought us closer to our data ops team. I'm actually here this week with somebody from data ops to help us out. >> Can you describe that further? I'm amazed and skeptical — I'm imagining the Optiv chart that shows the eight million security tools out there. Are you actually able to consolidate your tooling? Describe how. >> One of the biggest problems we were facing initially was that our SIEM — the security information and event management tool — could not take anything from our DevSecOps tools. So any security we had in a developer pipeline was isolated to that tool, and we could never get it into the SIEM. SIEMs just aren't built to handle that; they're built to handle old-school networks and data center traffic, and everything I have is in the cloud. So everything was isolated. With Snowflake, we worked with our data ops team and we can move things — like the data from our scanning tools for the developer pipelines — into Snowflake. We can then correlate different things, such as with HR data from ADP: do you have somebody pushing code to production who's out on vacation? You can actually do that correlation with Snowflake; that was never available before. These are things we could never do before, and we're able to do correlations that you simply cannot get in a SIEM. >> Why couldn't I just throw those into any old, run-of-the-mill cloud data warehouse? >> Well, it's not just the scale, it's the complexity of the data. Snowflake has the schema-on-read approach, and all of the things that make Snowflake really good for other departments turn out to work really well for security. And it's the ecosystem, too — nobody else has this ecosystem approach. You heard in the keynote today that Snowflake is disrupting software application development, all that kind of focus. Tool consolidation doesn't need to mean that you only have one tool; you can actually have best of breed and choose the tools you want, and as long as the data is consolidated you're not building more silos. That's what our partners are doing: they're separating the application from the data, they're bringing the work to the data, and that's what you hear here. So Julie's team can still choose to use a variety of tools that get the job done, but all those tools are working off of a single source of truth, and that is unique to what Snowflake can enable. >> So we're remiss — we should have asked you about Guild Education. Explain your organization. >> What does Guild do? We're a late-stage startup. We manage education as a benefit for large companies, so we house data from very large organizations — their workforce — and help their workforce go back to school. >> Okay. So, unpacking some of the things you said: schema on read, but not necessarily no schema, right?
It's a little different, right, because you're ingesting — >> Yeah, and then you're determining the schema on read. >> That's right. >> Right. Okay. So that makes it simple and fast: you get data in, and then you figure it out. Bringing work to data — can we double-click on that a little bit? Because when I think about that, we've heard terms over the years like "bring compute to the data"; that's what Hadoop was supposed to do, and it didn't — everything just got shoved in. So what do you mean by that? What does that actually mean? >> Yeah. If you think about the traditional SaaS solution, the vendor needed to invest in a data center and in a data platform that would be scalable and robust, because their service depended on it, and they couldn't trust that the customer would have that kind of data platform on the customer's side. What Snowflake's data cloud has done is democratize the data platform. Now, from startups to the Fortune 500, the vendors and the customers are all on even footing when it comes to the data platform. So the vendors can say: bring your own Snowflake — why not? — and they can focus on building the best application to solve the real challenges that security teams have. And by the way, not only in cybersecurity; we see this in, for example, the customer data space as well. More and more SaaS industries are taking this approach, and the applications are going to come to the data platform of choice for the practitioner. >> Julie, can we talk about some of the outcomes Guild Education has achieved so far with this solution? We look at the threat landscape and how much it's changed over the last couple of years, and how it's a matter of when, not if, you get hit with an attack. What are some of the key outcomes that the Snowflake partnership and technology have enabled you to achieve? >> The biggest one, again, is around the DevSecOps program, where you see so many attacks these days happening in the code base. You really have to be careful with your pipeline — where the code is getting moved through, who has access, who can move code into production. And if you're using GitHub, or a scanning tool like Snyk, they're completely separate. The only way we can see who's moving code into production, or whether there was a vulnerability, or whether somebody turned off the security tool, is to move these logs — this data — into Snowflake. Our engineering teams were already using Snowflake, so that was an easy transition for us; I didn't have to go out and convince another team to support us somewhere else. A great example of where we're seeing real savings, not only in people time but for security: we were having problems where the engineers were turning off our secure code scanner, and we didn't find out until a little bit later. My team spent about 160 hours going through a thousand pull requests manually. And I said, no more — go figure out where this data exists, we'll put it in Snowflake, and we can create an automatic ping to the security team saying, hey, they turned off the scanner — go check and see why the scanner got turned off.
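What Julie describes — landing the pipeline logs in Snowflake and pinging the team when the scanner is disabled — could be sketched roughly like the snippet below. This is an illustrative example only: the table names, columns, and webhook URL are made up, and it simply assumes pull-request events and scanner-status events have already been loaded into Snowflake.

```python
# Illustrative sketch only: table names, columns, and the Slack webhook are
# hypothetical, not Guild's or Snowflake's actual schema.
import snowflake.connector
import requests

QUERY = """
SELECT pr.repo_name,
       pr.pr_number,
       pr.merged_by,
       ev.changed_by,
       ev.event_time
FROM   security.scanner_events ev
JOIN   engineering.pull_requests pr
       ON pr.repo_name = ev.repo_name
      AND pr.merged_at BETWEEN ev.event_time AND DATEADD(hour, 24, ev.event_time)
WHERE  ev.event_type = 'SCANNER_DISABLED'
  AND  ev.event_time >= DATEADD(day, -1, CURRENT_TIMESTAMP())
"""

def main():
    conn = snowflake.connector.connect(
        account="acme-xy12345",   # placeholder account
        user="SECURITY_SVC",
        password="***",
        warehouse="SECURITY_WH",
    )
    try:
        rows = conn.cursor().execute(QUERY).fetchall()
    finally:
        conn.close()

    for repo, pr_number, merged_by, changed_by, when in rows:
        msg = (f"Code scanner disabled on {repo} at {when} by {changed_by}; "
               f"PR #{pr_number} merged by {merged_by} afterwards - please review.")
        # Post the alert to a Slack channel via an incoming webhook (placeholder URL).
        requests.post("https://hooks.slack.com/services/T000/B000/XXXX", json={"text": msg})

if __name__ == "__main__":
    main()
```

Run on a schedule (or as a Snowflake task), a check like this turns a months-later discovery into the near-immediate Slack ping described next.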
So it's an immediate response from my team, instead of finding out two months later. And this just isn't something you could do before — you couldn't set it up. Now it's so easy: the ping goes to Slack, and we can go immediately to the engineering team and say, why did you — >> Using automation? >> Yeah — did you turn this off? Why did you turn it off? Get an exception in. So one, it helps with compliance, so we're not messing up our SOC 2 audit. And two, from a security perspective, we're able to trust but verify, which is a big part of the DevSecOps landscape, where they need code to move into production and they need a scan to run in under five minutes. My team can't be there to scan ten times a day, or a hundred times a day, so we have to automate all of that and then just get the information as it comes in. >> Is it accurate to say that you're not shutting off your tools, you're just taking advantage of them and compressing the time to get value out of them? Or are you actually reducing the tool set? >> Well, our goal wasn't to reduce the tool set — though we did actually get rid of the SIEM we were using, and we're now partnering with one of Snowflake's partners. >> But you still have a SIEM. >> We still have it; we've just minimized what goes to the SIEM, because most of what I care about isn't actually going to a SIEM. It's all the other pieces that are in a cloud, because we're a hundred percent in the cloud. I don't have servers, I don't have firewalls, we don't have routers or switches. All the things I care about live in a cloud somewhere, and I want that information. And a lot of the time, especially with the engineering tools, they were already sending the information to Snowflake, or they were also interested — so we're partnering; we're doubling up on the use of the data. >> Okay. And you couldn't get that out of your SIEM. Maybe you were asking your SIEM to do too much, or it just didn't deliver. >> No — SIEMs are built on search engines. They don't — >> They can't do it. >> You kind of knew what you were looking for: hey, where did I see this, where did I see that? That's very different from data analytics and the kinds of questions security teams really want to ask. These are emergent properties. You need context, you need SQL, you need Python — that's how you ask the questions security teams really want to ask. The legacy SIEMs don't let you ask that kind of question; they weren't built with that in mind. And they're so expensive that by moving off of them to this approach, you effectively pay for all these other solutions that you can then bring on. >> That seems to make — what you just said there was brilliant. It seems to make the customer conversation quite easy when they say, well, why should I replace my SIEM, it's doing just fine. You just nailed it with what you said there. >> Yeah, and we're seeing that happen extensively. I'm excited that we have customers here at the summit talking about their experience moving off of a legacy SIEM — where the security team was off to the side, away from the rest of the company — to a unified approach, with the SIEM and the other security solutions working on top of Snowflake, and a collaboration between security and the data team.
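The "you need SQL, you need Python" point, together with the schema-on-read idea raised earlier — land the raw logs first, decide what questions to ask later — can be illustrated with a small sketch. This is hypothetical: the table and field names are invented, but it shows the general Snowflake pattern of loading semi-structured JSON into a VARIANT column and shaping it at query time.

```python
# Hypothetical example of the schema-on-read pattern on Snowflake: raw JSON
# events go into a single VARIANT column and are queried later with SQL.
import snowflake.connector

DDL = "CREATE TABLE IF NOT EXISTS security.raw_events (raw VARIANT)"

# A question decided on after the data was loaded: failed console logins by user.
ANALYSIS = """
SELECT raw:userIdentity.userName::STRING AS user_name,
       COUNT(*)                          AS failed_logins
FROM   security.raw_events
WHERE  raw:eventName::STRING = 'ConsoleLogin'
  AND  raw:responseElements.ConsoleLogin::STRING = 'Failure'
GROUP  BY 1
ORDER  BY failed_logins DESC
"""

def run():
    conn = snowflake.connector.connect(
        account="acme-xy12345", user="SECURITY_SVC", password="***",
        warehouse="SECURITY_WH", database="SECDB", schema="SECURITY",
    )
    cur = conn.cursor()
    try:
        cur.execute(DDL)
        # In practice events would arrive via Snowpipe or COPY INTO jobs;
        # a single PARSE_JSON insert stands in for that here.
        cur.execute(
            "INSERT INTO security.raw_events (raw) "
            "SELECT PARSE_JSON('{\"eventName\": \"ConsoleLogin\", "
            "\"userIdentity\": {\"userName\": \"alice\"}, "
            "\"responseElements\": {\"ConsoleLogin\": \"Failure\"}}')"
        )
        for user_name, failed in cur.execute(ANALYSIS):
            print(user_name, failed)
    finally:
        conn.close()

if __name__ == "__main__":
    run()
```

Nothing about the question had to be decided before ingestion, which is the contrast with a search-index SIEM that Omer draws above.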
>> So what does your security ecosystem look like? You've got SIEM partners — do you have identity and access partners, endpoint partners? >> Absolutely. Compliance automation is one: we hear about companies really struggling to meet all the compliance requirements. Well, if all the data is already centralized, then I can prove to my auditors — not just once a quarter but once a day — that the environment is in compliance with whatever standard I have. We see a lot of that. Cloud security is another big one, because there are just ten times more things happening in the cloud environment than in the data center; everything is so heavily instrumented. So we see cloud security solutions as significant as well. And the identity space — the list goes on and on. We do see the future being that the entire security program uses connected applications, with a single source of truth in the company's Snowflake. >> When you say centralized — it's logically centralized, right? I mean, it's virtually centralized. It's not — >> Well, that's — >> Not shoved into one container, right? >> Right. Well, that's the beauty of the data cloud. Everybody that's on the data cloud is able to collaborate. So whether it's in the same account or table or database is really beside the point, because all of the platform investments Snowflake is making in cross-region, cross-cloud collaboration mean that once it's in Snowflake, it is unified and can be used together. >> I think people misunderstand that sometimes. Benoit made this point, as did Christian, about the global nature of Snowflake: it's globally distributed, but it's logically one data cloud. >> Yeah. I like to call it one big database in the sky. That's how I explain it to security teams that are new to the concept. >> It's not — it could be a lot of little databases, but with the same framework, the same governance structure, the same security. >> You're right; that's how it's achieved. From an outcome standpoint, what the security team needs to know is that when some breach hits the headlines and they need to go to their leadership and say, I can assure you we were not affected, they can be confident in that answer, because they have access to the data wherever it is in the world and can ask the questions they need to ask. >> And that confidence is critical these days, as the threat landscape just continues to change. Thank you both so much for joining us, talking about what's new at Snowflake from a cybersecurity perspective, what you're doing at Guild Education, and how you're transforming the organization with the data cloud. We appreciate your insights. >> Thank you for having us. >> Thank you. >> Thanks to our guests, and to Dave Vellante. I'm Lisa Martin. You're watching theCUBE live from Las Vegas, on the show floor of Snowflake Summit 22. We'll be right back with our next guest.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Lisa Martin | PERSON | 0.99+ |
Dave Valante | PERSON | 0.99+ |
Julie Chilo | PERSON | 0.99+ |
Vegas | LOCATION | 0.99+ |
Julie | PERSON | 0.99+ |
Omar | PERSON | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
two guests | QUANTITY | 0.99+ |
10 times | QUANTITY | 0.99+ |
Dave | PERSON | 0.99+ |
Python | TITLE | 0.99+ |
10 | QUANTITY | 0.99+ |
Julie Chickillo | PERSON | 0.99+ |
today | DATE | 0.99+ |
two | QUANTITY | 0.99+ |
one tool | QUANTITY | 0.99+ |
once a day | QUANTITY | 0.98+ |
this week | DATE | 0.98+ |
Guild | ORGANIZATION | 0.98+ |
two months later | DATE | 0.98+ |
Snowflake | ORGANIZATION | 0.98+ |
Guild Education | ORGANIZATION | 0.98+ |
Guild education | ORGANIZATION | 0.98+ |
once a quarter | QUANTITY | 0.97+ |
both | QUANTITY | 0.97+ |
Snowflake Summit 2022 | EVENT | 0.97+ |
under five minutes | QUANTITY | 0.97+ |
10 times a day | QUANTITY | 0.97+ |
one | QUANTITY | 0.97+ |
Omer Singer | PERSON | 0.97+ |
Hadoop | PERSON | 0.96+ |
BEWA | ORGANIZATION | 0.96+ |
about 160 hours | QUANTITY | 0.96+ |
first | QUANTITY | 0.96+ |
one container | QUANTITY | 0.96+ |
day one | QUANTITY | 0.96+ |
eight year | QUANTITY | 0.95+ |
a hundred times a day | QUANTITY | 0.94+ |
eight, 8 million security | QUANTITY | 0.92+ |
DevSecOps | TITLE | 0.92+ |
one place | QUANTITY | 0.91+ |
single source | QUANTITY | 0.91+ |
hundred percent | QUANTITY | 0.91+ |
one pane | QUANTITY | 0.9+ |
SAS | ORGANIZATION | 0.89+ |
one area | QUANTITY | 0.85+ |
fourth annual event | QUANTITY | 0.84+ |
One | QUANTITY | 0.84+ |
Reed | PERSON | 0.84+ |
Christian | ORGANIZATION | 0.83+ |
last couple of years | DATE | 0.82+ |
Flaco | ORGANIZATION | 0.79+ |
last two years | DATE | 0.79+ |
one big database | QUANTITY | 0.77+ |
ops | ORGANIZATION | 0.77+ |
a thousand pole requests | QUANTITY | 0.76+ |
snowflake | ORGANIZATION | 0.75+ |
double | QUANTITY | 0.59+ |
fortune 500 | ORGANIZATION | 0.58+ |
GitHub | TITLE | 0.57+ |
summit 22 | LOCATION | 0.48+ |
SOC | ORGANIZATION | 0.46+ |
22 | QUANTITY | 0.41+ |
Kickoff with Taylor Dolezal | Kubecon + Cloudnativecon Europe 2022
>> Announcer: "theCUBE" presents "Kubecon and Cloudnativecon Europe, 2022" brought to you by Red Hat, the Cloud Native Computing Foundation and its ecosystem partners. >> Welcome to Valencia, Spain and "Kubecon + Cloudnativecon Europe, 2022." I'm Keith Townsend, and we're continuing the conversations with amazing people doing amazing things. I think we've moved beyond a certain phase of the hype cycle when it comes to Kubernetes. And we're going to go a little bit in detail with that today, and on all the sessions, I have today with me, Taylor Dolezal. New head of CNCF Ecosystem. So, first off, what does that mean new head of? You're the head of CNCF Ecosystem? What is the CNCF Ecosystem? >> Yeah. Yeah. It's really the end user ecosystem. So, the CNCF is comprised of really three pillars. And there's the governing board, they oversee the budget and fun things, make sure everything's signed and proper. Then there's the Technical Oversight Committee, TOC. And they really help decide the technical direction of the organization through deliberation and talking about which projects get invited and accepted. Projects get donated, and the TOC votes on who's going to make it in, based on all this criteria. And then, lastly, is the end user ecosystem, that encompasses a whole bunch of different working groups, special interest groups. And that's been really interesting to kind of get a deeper sense into, as of late. So, there are groups like the developer experience group, and the user research group. And those have very specific focuses that kind of go across all industries. But what we've seen lately, is that there are really deep wants to create, whether it be financial services user group, and things like that, because end users are having trouble with going to all of the different meetings. If you're a company, a vendor member company that's selling authentication software, or something in networking, makes sense to have a SIG network, SIG off, and those kinds of things. But when it comes down to like Boeing that just joined, does that make sense for them to jump into all those meetings? Or does it make sense to have some other kind of thing that is representative of them, so that they can attend that one thing, it's specific to their industry? They can get that download and kind of come up to speed, or find the best practices as quickly as possible in a nice synthesized way. >> So, you're 10 weeks into this role. You're coming from a customer environment. So, talk to me a little bit about the customer side of it? When you're looking at something, it's odd to call CNCF massive. But it is, 7.1 million members, and the number of contributing projects, et cetera. Talk to me about the view from the outside versus the view now that you're inside? >> Yeah, so honestly, it's been fun to kind of... For me, it's really mirrored the open-source journey. I've gone to Kubecon before, gotten to enjoy all of the booths, and trying to understand what's going on, and then worked for HashiCorp before coming to the CNCF. And so, get that vendor member kind of experience working the booth itself. So, kind of getting deeper and deeper into the stack of the conference itself. And I keep saying, vendor member and end user members, the difference between those, is end users are not organizations that sell cloud native services. 
Those are the groups that are kind of more consuming, the Airbnbs, the Boeings, the Mercedes, these people that use these technologies and want to kind of give that feedback back to these projects. But yeah, very incredibly massive and just sprawling when it comes to working in all those contexts. >> So, I have so many questions around, like the differences between having you as an end user and in inter-operating with vendors and the CNCF itself. So, let's start from the end user lens. When you're an end user and you're out discovering open-source and cloud native products, what's that journey like? How do you go from saying, okay, I'm primarily focused on vendor solutions, to let me look at this cloud native stack? >> Yeah, so really with that, there's been, I think that a lot of people have started to work with me and ask for, "Can we have recommended architectures? Can we have blueprints for how to do these things?" When the CNCF doesn't want to take that position, we don't want to kind of be the king maker and be like, this is the only way forward. We want to be inclusive, we want to pull in these projects, and kind of give everyone the same boot strap and jump... I missing the word of it, just ability to kind of like springboard off of that. Create a nice base for everybody to get started with, and then, see what works out, learn from one another. I think that when it comes to Kubernetes, and Prometheus, and some other projects, being able to share best practices between those groups of what works best as well. So, within all of the separations of the CNCF, I think that's something I've found really fun, is kind of like seeing how the projects relate to those verticals and those groups as well. Is how you run a project, might actually have a really good play inside of an organization like, "I like that idea. Let's try that out with our team." >> So, like this idea of springboarding. You know, is when an entrepreneur says, "You know what? I'm going to quit my job and springboard off into doing something new." There's a lot of uncertainty, but for enterprise, that can be really scary. Like we're used to our big vendors, HashiCorp, VMware, Cisco kind of guiding us and telling us like, what's next? What is that experience like, springboarding off into something as massive as cloud native? >> So, I think it's really, it's a great question. So, I think that's why the CNCF works so well, is the fact that it's a safe place for all these companies to come together, even companies of competing products. you know, having that common vision of, we want to make production boring again, we don't want to have so much sprawl and have to take in so much knowledge at once. Can we kind of work together to create all these things to get rid of our adminis trivia or maintenance tasks? I think that when it comes to open-source in general, there's a fantastic book it's called "Working in Public," it's by Stripe Press. I recommend it all over the place. It's orange, so you'll recognize it. Yeah, it's easy to see. But it's really good 'cause it talks about the maintainer journey, and what things make it difficult. And so, I think that that's what the CNCF is really working hard to try to get rid of, is all this monotonous, all these monotonous things, filing issues, best practices. How do you adopt open-source within your organization? We have tips and tricks, and kind of playbooks in ways that you could accomplish that. So, that's what I find really useful for those kinds of situations. 
Then it becomes easier to adopt that within your organization. >> So, I asked Priyanka, CNCF executive director last night, a pretty tough question. And this is kind of in the meat of what you do. What happens when you? Let's pick on service mesh 'cause everyone likes to pick on service mesh. >> XXXX: Yeah. >> What happens when there's differences at that vendor level on the direction of a CIG or a project, or the ecosystem around service mesh? >> Yeah, so that's the fun part. Honestly, is 'cause people get to hash it out. And so, I think that's been the biggest thing for me finding out, was that there's more than one way to do thing. And so, I think it always comes down to use case. What are you trying to do? And then you get to solve after that. So, it really is, I know it depends, which is the worst answer. But I really do think that's the case, because if you have people that are using something within the automotive space, or in the financial services space, they're going to have completely different needs, wants, you know, some might need to run Coball or Fortran, others might not have to. So, even at that level, just down to what your tech stack looks like, audits, and those kinds of things, that can just really differ. So, I think it does come down to something more like that. >> So, the CNCF loosely has become kind of a standards body. And it's centered around the core project Kubernetes? >> Mm-hmm. >> So, what does it mean, when we're looking at larger segments such as service mesh or observability, et cetera, to be Kubernetes compliant? Where's the point, if any, that the CNCF steps in versus just letting everyone hash it out? Is it Kubernetes just need to be Kubernetes compliant and everything else is free for all? >> Honestly, in many cases, it's up to the communities themselves to decide that. So, the groups that are running OCI, the Open Container Interface, Open Storage Interface, all of those things that we've agreed on as ways to implement those technologies, I think that's where the CNCF, that's the line. That's where the CNCF gets up to. And then, it's like we help foster those communities and those conversations and asking, does this work for you? If not, let's talk about it, let's figure out why it might not. And then, really working closely with community to kind of help bring those things forward and create action items. >> So, it's all about putting the right people in the rooms and not necessarily playing referee, but to get people in the right room to have and facilitate the conversation? >> Absolutely. Absolutely. Like all of the booths behind us could have their own conferences, but we want to bring everybody together to have those conversations. And again, sprawling can be really wild at certain times, but it's good to have those cross understandings, or to hear from somebody that you're like, "Oh, my goodness, I didn't even think about that kind of context or use case." So, really inclusive conversation. >> So, organizations like Boeing, Adobe, Microsoft, from an end user perspective, it's sometimes difficult to get those organizations into these types of communities. How do you encourage them to participate in the conversation 'cause their voice is extremely important? >> Yeah, that I'd also say it really is the community. I really liked the Kubernetes documentary that was put out, working with some of the CNCF folks and core, and beginning Kubernetes contributors and maintainers. 
And it just kind of blew me away when they had said, you know, what we thought was success, was seeing Kubernetes in an Amazon Data Center. That's when we knew that this was going to take root. And you'd rarely hear that, is like, "When somebody that we typically compete with, its success is seeing it, seeing them use that." And so, I thought was really cool. >> You know, I like to use this technology for my community of skipping rope. You see the girls and boys jumping double Dutch rope. And you think, "I can do that. Like it's just jumping." But there's this hesitation to actually, how do you start? How do you get inside of it? The question is how do you become a member of the community? We've talked a lot about what happens when you're in the community. But how do you join the community? >> So, really, there's a whole bunch of ways that you can. Actually, the shirt that I'm wearing, I got from the 114 Release. So, this is just a fun example of that community. And just kind of how welcoming and inviting that they are. Really, I do think it's kind of like a job breaker. Almost you start at the outside, you start using these technologies, even more generally like, what is DevOps? What is production? How do I get to infrastructure, architecture, or software engineering? Once you start there, you start working your way in, you develop a stack, and then you start to see these tools, technologies, workflows. And then, after you've kind of gotten a good amount of time spent with it, you might really enjoy it like that, and then want to help contribute like, "I like this, but it would be great to have a function that did this. Or I want a feature that does that." At that point in time, you can either take a look at the source code on GitHub, or wherever it's hosted, and then start to kind of come up with that, some ideas to contribute back to that. And then, beyond that, you can actually say, "No, I kind of want to have these conversations with people." Join in those special interest groups, and those meetings to kind of talk about things. And then, after a while, you can kind of find yourself in a contributor role, and then a maintainer role. After that, if you really like the project, and want to kind of work with community on that front. So, I think you had asked before, like Microsoft, Adobe and these others. Really it's about steering the projects. It's these communities want these things, and then, these companies say, "Okay, this is great. Let's join in the conversation with the community." And together again, inclusivity, and bringing everybody to the table to have that discussion and push things forward. >> So, Taylor, closing message. What would you want people watching this show to get when they think about ecosystem and CNCF? >> So, ecosystem it's a big place, come on in. Yeah, (laughs) the water's just fine. I really want people to take away the fact that... I think really when it comes down to, it really is the community, it's you. We are the end user ecosystem. We're the people that build the tools, and we need help. No matter how big or small, when you come in and join the community, you don't have to rewrite the Kubernetes scheduler. You can help make documentation that much more easy to understand, and in doing so, helping thousands of people, If I'm going through the instructions or reading a paragraph, doesn't make sense, that has such a profound impact. And I think a lot of people miss that. It's like, even just changing punctuation can have such a giant difference. 
>> Yeah, I think people sometimes forget that community, especially community-run projects, they need product managers. They need people that will help with communications, people that will help with messaging, websites updating. Just reachability, anywhere from developing code to developing documentation, there's ways to jump in and help the community. From Valencia, Spain, I'm Keith Townsend, and you're watching "theCUBE," the leader in high tech coverage. (bright upbeat music)
SUMMARY :
brought to you by Red Hat, and on all the sessions, and the user research group. and the number of contributing Those are the groups that So, let's start from the end user lens. and kind of give everyone the I'm going to quit my job and have to take in so the meat of what you do. Yeah, so that's the fun part. So, the CNCF loosely has So, the groups that are running OCI, Like all of the booths behind us participate in the conversation I really liked the Kubernetes become a member of the community? and those meetings to What would you want people it really is the community, it's you. and help the community.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Priyanka | PERSON | 0.99+ |
Boeing | ORGANIZATION | 0.99+ |
Adobe | ORGANIZATION | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Keith Townsend | PERSON | 0.99+ |
Cisco | ORGANIZATION | 0.99+ |
Cloud Native Computing Foundation | ORGANIZATION | 0.99+ |
10 weeks | QUANTITY | 0.99+ |
Taylor Dolezal | PERSON | 0.99+ |
Taylor | PERSON | 0.99+ |
TOC | ORGANIZATION | 0.99+ |
Stripe Press | ORGANIZATION | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
CNCF | ORGANIZATION | 0.99+ |
Mercedes | ORGANIZATION | 0.99+ |
Technical Oversight Committee | ORGANIZATION | 0.99+ |
Boeings | ORGANIZATION | 0.99+ |
Prometheus | TITLE | 0.99+ |
Coball | ORGANIZATION | 0.99+ |
Valencia, Spain | LOCATION | 0.99+ |
today | DATE | 0.99+ |
7.1 million members | QUANTITY | 0.99+ |
HashiCorp | ORGANIZATION | 0.98+ |
Kubecon | ORGANIZATION | 0.98+ |
Airbnbs | ORGANIZATION | 0.98+ |
VMware | ORGANIZATION | 0.98+ |
last night | DATE | 0.97+ |
GitHub | ORGANIZATION | 0.97+ |
Fortran | ORGANIZATION | 0.97+ |
first | QUANTITY | 0.96+ |
Kubernetes | TITLE | 0.95+ |
Working in Public | TITLE | 0.93+ |
Amazon Data Center | ORGANIZATION | 0.92+ |
Dutch | OTHER | 0.92+ |
thousands of people | QUANTITY | 0.91+ |
theCUBE | TITLE | 0.91+ |
more than one way | QUANTITY | 0.9+ |
Cloudnativecon | ORGANIZATION | 0.89+ |
theCUBE | ORGANIZATION | 0.86+ |
Kubernetes | ORGANIZATION | 0.84+ |
DevOps | TITLE | 0.84+ |
CNCF Ecosystem | ORGANIZATION | 0.83+ |
one thing | QUANTITY | 0.83+ |
three pillars | QUANTITY | 0.82+ |
Europe | LOCATION | 0.79+ |
Open Container Interface | OTHER | 0.77+ |
double | QUANTITY | 0.76+ |
OCI | OTHER | 0.73+ |
Cloudnativecon Europe | ORGANIZATION | 0.69+ |
Open Storage Interface | OTHER | 0.62+ |
2022 | DATE | 0.58+ |
CIG | ORGANIZATION | 0.53+ |
2022 | TITLE | 0.46+ |
114 Release | ORGANIZATION | 0.38+ |
Michael Cade, Veeam | VeeamON 2022
(calm music) >> Hi everybody. We're here at VeeamON 2022. This is day two of the CUBE's continuous coverage. I'm Dave Vellante. My co-host is Dave Nicholson. A ton of energy. The keynotes, day two keynotes are all about products at Veeam. Veeam, the color of green, same color as money. And so, and it flows in this ecosystem. I'll tell you right now, Michael Cade is here. He's the senior technologist for product strategy at Veeam. Michael, fresh off the keynotes. >> Yeah, yeah. >> Welcome. Danny Allen's keynote was fantastic. I mean, that story he told blew me away. I can't wait to have him back. Stay tuned for that one. But we're going to talk about protecting containers, Kasten. You guys got announcements of Kasten by Veeam, you call it K10 version five, I think? >> Yeah. So just rolled into 5.0 release this week. Now, it's a bit different to what we see from a VBR release cycle kind of thing, cause we're constantly working on a two week sprint cycle. So as much as 5.0's been launched and announced, we're going to see that trickling out over the next couple of months until we get round to Cube (indistinct) and we do all of this again, right? >> So let's back up. I first bumped into Kasten, gosh, it was several years ago at VeeamON. Like, wow this is a really interesting company. I had deep conversations with them. They had a sheer, sheer cat grin, like something was going on and okay finally you acquire them, but go back a little bit of history. Like why the need for this? Containers used to be ephemeral. You know, you didn't have to persist them. That changed, but you guys are way ahead of that trend. Talk a little bit more about the history there and then we'll get into current day. >> Yeah, I think the need for stateful workloads within Kubernetes is absolutely grown. I think we just saw 1.24 of Kubernetes get released last week or a couple of weeks ago now. And really the focus there, you can see, at least three of the big ticket items in that release are focused around storage and data. So it just encourages that the community is wanting to put these data services within that. But it's also common, right? It's great to think about a stateless... If you've got stateless application but even a web server's got some state, right? There's always going to be some data associated to an application. And if there isn't then like, great but that doesn't really work- >> You're right. Where'd they click, where'd they go? I mean little things like that, right? >> Yeah. Yeah, exactly. So one of the things that we are seeing from that is like obviously the requirement to back up and put in a lot of data services in there, and taking full like exposure of the Kubernetes ecosystem, HA, and very tiny containers versus these large like virtual machines that we've always had the story at Veeam around the portability and being able to move them left, right, here, there, and everywhere. But from a K10 point of view, the ability to not only protect them, but also move those applications or move that data wherever they need to be. >> Okay. So, and Kubernetes of course has evolved. I mean the early days of Kubernetes, they kept it simple, kind of like Veeam actually. Right? >> Yeah. >> And then, you know, even though Mesosphere and even Docker Swarm, they were trying to do more sophisticated cluster management. Kubernetes has now got projects getting much more complicated. So more complicated workloads mean more data, more critical data means more protection. 
Okay, so you acquire Kasten, we know that's a small part of your business today but it's going to be growing. We know this cause everybody's developing applications. So what's different about protecting containers? Danny talks about modern data protection. Okay, when I first heard that, I'm like, eh, nice tagline, but then he peel the onion. He explains how in virtualization, you went from agents to backing up of VMware instance, a virtual instance. What's different about containers? What constitutes modern data protection for containers? >> Yeah, so I think the story that Danny tells as well, is so when we had our physical agents and virtualization came along and a lot of... And this is really where Veeam was born, right, we went into the virtualization API, the VMware API, and we started leveraging that to be more storage efficient. The admin overhead around those agents weren't there then, we could just back up using the API. Whereas obviously a lot of our competition would use agents still and put that resource overhead on top of that. So that's where Veeam initially got the kickstart in that world. I think it's very similar to when it comes to Kubernetes because K10 is deployed within the Kubernetes cluster and it leverages the Kubernetes API to pull out that data in a more efficient way. You could use image based backups or traditional NAS based backups to protect some of the data, and backup's kind of the... It's only one of the ticks in the boxes, right? You have to be able to restore and know what that data is. >> But wait, your competitors aren't as fat, dumb and happy today as they were back then, right? So it can't... They use the same APIs and- >> Yeah. >> So what makes you guys different? >> So I think that's testament to the Kubernetes and the community behind that and things like the CSI driver, which enables the storage vendors to take that CSI abstraction layer and then integrate their storage components, their snapshot technologies, and other efficiency models in there, and be able to leverage that as part of a universal data protection API. So really that's one tick in the box and you're absolutely right, there's open source tools that can do exactly what we're doing to a degree on that backup and recovery. Where it gets really interesting is the mobility of data and how we're protecting that. Because as much as stateful workloads are seen within the Kubernetes environments now, they're also seen outside. So things like Amazon RDS, but the front end lives in Kubernetes going to that stateless point. But being able to protect the whole application and being very application aware means that we can capture everything and restore wherever we want that to go as well. Like, so the demo that I just did was actually a Postgres database in AWS, and us being able to clone or migrate that out into an EKS cluster as a staple set. So again, we're not leveraging RDS at that point, but it gives us the freedom of movement of that data. >> Yeah, I want to talk about that, what you actually demoed. One of the interesting things, we were talking earlier, I didn't see any CLI when you were going through the integration of K10 V5 and V12. >> Yeah. >> That was very interesting, but I'm more skeptical of this concept, of the single pane of glass and how useful that is. Who is this integration targeting? Are you targeting the sort of traditional Veeam user who is now adding as a responsibility, the management of protecting these Kubernetes environments? 
Or are you at the same time targeting the current owners of those environments? Cause I know you talk about shift left and- >> Yeah. >> You know, nobody needs Kubernetes if you only have one container and one thing you're doing. So at some point it's all about automation, it's about blueprints, it's about getting those things in early. So you get up, you talk about this integration, who cares about that kind of integration? >> Yeah, so I think it's a bit of both, right? So we're definitely focused around the DevOps focused engineer. Let's just call it that. And under an umbrella, the cloud engineer that's looking after Kubernetes, from an application delivery perspective. But I think more and more as we get further up the mountain, CIS admin, obviously who we speak to the tech decision makers, the solutions architects systems engineers, they're going to inherit and be that platform operator around the Kubernetes clusters. And they're probably going to land with the requirement around data management as well. So the specific VBR centralized management is very much for the backup admin, the infrastructure admin or the cloud based engineer that's looking after the Kubernetes cluster and the data within that. Still we speak to app developers who are conscious of what their database looks like, because that's an external data service. And the biggest question that we have or the biggest conversation we have with them is that the source code, the GitHub or the source repository, that's fine, that will get your... That'll get some of the way back up and running, but when it comes to a Postgres database or some sort of data service, oh, that's out of the CI/CD pipeline. So it's whether they're interested in that or whether that gets farmed out into another pre-operations, the traditional operations team. >> So I want to unpack your press release a little bit. It's full of all the acronyms, so maybe you can help us- >> Sure. >> Cipher. You got security everywhere enhance platform hardening, including KMS. That's key- >> Yeah, key management service, yeah. >> System, okay. With AWS, KMS and HashiCorp vault. Awesome, love to see HashiCorp company. >> Yeah. >> RBAC objects in UI dashboards, ransomware attacks, AWS S3. So anyway, security everywhere. What do you mean by that? >> So I think traditionally at Veeam, and continue that, right? From a security perspective, if you think about the failure scenario and ransomware's, the hot topic, right, when it comes to security, but we can think about security as, if we think about that as the bang, right, the bang is something bad's happen, fire, flood, blood, type stuff. And we tend to be that right hand side of that, we tend to be the remediation. We're definitely the one, the last line of defense to get stuff back when something really bad happens. And I think what we've done from a K10 point of view, is not only enhance that, so with the likes of being able to... We're not going to reinvent the wheel, let's use the services that HashiCorp have done from a HashiCorp vault point of view and integrate from a key management system. But then also things like S3 or ransomware prevention. So I want to know if something bad's happened and Kasten actually did something more generic from a Veeam ONE perspective, but one of the pieces that we've seen since we've then started to send our backups to an immutable object storage, is let's be more of that left as well and start looking at the preventative tasks that we can help with. 
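As a rough illustration of two of the ideas Michael touches on here, pulling key material from HashiCorp Vault rather than reinventing key management, and landing backup copies in immutable object storage, the sketch below uses the hvac and boto3 clients. It is not how Veeam or Kasten actually implement this; the Vault path, bucket, object key, and retention period are all invented.

```python
# Illustrative only: read a data-encryption key from HashiCorp Vault, then write a
# backup artifact to S3 with object lock so it cannot be altered during retention.
# Vault path, bucket, key names, and retention window are hypothetical.
from datetime import datetime, timedelta, timezone

import boto3
import hvac

vault = hvac.Client(url="https://vault.example.com:8200", token="s.xxxxxxxx")
secret = vault.secrets.kv.v2.read_secret_version(path="backups/k10-dek")
dek = secret["data"]["data"]["key"]  # KV v2 response layout; key material used downstream

body = open("backup-manifest.json", "rb").read()
# In a real pipeline the body would be encrypted with `dek` before upload.

s3 = boto3.client("s3")
s3.put_object(
    Bucket="k10-backups",                      # bucket must have object lock enabled
    Key="cluster-a/2022-05-17/manifest.json",
    Body=body,
    ObjectLockMode="COMPLIANCE",
    ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=30),
)
```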
Now, we're not going to be a security company, but you heard all the way through Danny's like keynote, and probably when he is been on here, is that it's always, we're always mindful of that security focus. >> On that point, what was being looked for? A spike in CPU utilization that would be associated with encryption? >> Yeah, exactly that. >> Is that what was being looked- >> That could be... Yeah, exactly that. So that could be from a virtual machine point of view but from a K10, and it specifically is that we're going to look at the S3 bucket or the object storage, we're going to see if there's a rate of change that's out of the normal. It's an abnormal rate. And then with that, we can say, okay, that doesn't look right, alert us through observability tools, again, around the cloud native ecosystem, Prometheus Grafana. And then we're going to get insight into that before the bang happens, hopefully before the bang. >> So that's an interesting when we talk about adjacencies and moving into this area of security- >> We're talking to Zeus about that too. >> Exactly. That's that sort of creep where you can actually add value. It's interesting. >> So, okay. So we talked about shift left, get that, and then expanded ecosystem, industry leading technologies. By the way, one of them is the Red Hat Marketplace. And I think, I heard Anton's... Anton was amazing. He is the head of product management at Veeam. Is been to every VeeamON. He's got family in Ukraine. He's based in Switzerland. >> Yeah. >> But he chose not to come here because he's obviously supporting, you know, the carnage that's going on in Ukraine. But anyway, I think he said the Red Hat team is actually in Ukraine developing, you know, while the bombs are dropping. That's amazing. But anyway, back to our interview here, expanded ecosystem, Red Hat, SUSE with Rancher, they've got some momentum. vSphere with Tanzu, they're in the game. Talk about that ecosystem and its importance. >> Yeah, and I think, and it goes back to your point around the CLI, right? Is that it feels like the next stage of Kubernetes is going to be very much focused towards the operator or the operations team. The CIS admin of today is going to have to look after that. And at the moment it's all very command line, it's all CLI driven. And I think the marketplace is OpenShift, being our biggest foothold around our customer base, is definitely around OpenShift. But things like, obviously we are a longstanding alliance partner with VMware as well. So their Tanzu operations actually there's support for TKGS, so vSphere Tanzu grid services is another part of the big release of 5.0. But all three of those and the common marketplace gives us a UI, gives us a way of being able to see and visualize that rather than having to go and hunt down the commands and get our information through some- >> Oh, some people are going to be unhappy about that. >> Yeah. >> But I contend the human eye has evolved to see in color for a very good reason. So I want to see things in red, yellow, and green at times. >> There you go, yeah. >> So when we hear a company like Veeam talk about, look we have no platform agenda, we don't care which cloud it's in. We don't care if it's on-prem or Google Azure, AWS. We had Wasabi on, we have... Great, they got an S3 compatible, you know, target, and others as well. 
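A toy version of that "abnormal rate of change on the backup target" check might look like the following, counting recent writes to an S3 bucket and flagging a spike. The bucket name, baseline, and threshold are invented, and a real product would do something far more statistical than this; it is only meant to show the shape of the idea.

```python
# Toy anomaly check: count objects written to the backup bucket in the last hour and
# compare against a naive baseline. Bucket name and thresholds are illustrative.
from datetime import datetime, timedelta, timezone

import boto3

BUCKET = "k10-backups"
BASELINE_PER_HOUR = 200   # assumed normal write rate
ALERT_MULTIPLIER = 5      # flag anything 5x above baseline

s3 = boto3.client("s3")
cutoff = datetime.now(timezone.utc) - timedelta(hours=1)

recent = 0
for page in s3.get_paginator("list_objects_v2").paginate(Bucket=BUCKET):
    for obj in page.get("Contents", []):
        if obj["LastModified"] >= cutoff:
            recent += 1

if recent > BASELINE_PER_HOUR * ALERT_MULTIPLIER:
    # In practice this would be exported as a metric and alerted on via Prometheus/Grafana.
    print(f"ALERT: {recent} objects changed in the last hour (baseline {BASELINE_PER_HOUR}/h)")
else:
    print(f"OK: {recent} objects changed in the last hour")
```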
When we hear them, companies like you, talk about that consistent experience, single pane of glass that you're skeptical of, maybe cause it's technically challenging, one of the things, we call it super cloud, right, that's come up. Danny and I were riffing on that the other day and we'll do that more this afternoon. But it brings up something that we were talking about with Zeus, Dave, which is the edge, right? And it seems like Kubernetes, and we think about OpenShift. >> Yeah. >> We were there last week at Red Hat Summit. It's like 50% of the conversation, if not more, was the edge. Right, and really true edge, worst cases, use cases. Two weeks ago we were at Dell Tech, there was a lot of edge talk, but it was retail stores, like Lowe's. Okay, that's kind of near edge, but the far edge, we're talking space, right? So seems like Kubernetes fits there and OpenShift, you know, particularly, as well as some of the others that we mentioned. What about edge? How much of what you're doing with container data protection do you see as informing you about the edge opportunity? Are you seeing any patterns there? Nobody's really talking about it in data protection yet. >> So yeah, large scale numbers of these very small clusters that are out there on farms or in wind turbines, and that is definitely something that is being spoken about. There's not much mention actually in this 5.0 release because we actually support things like K3s,(indistinct), that all came in 4.5, but I think, to your first point as well, David, is that, look, we don't really care what that Kubernetes distribution is. So you've got K3s lightweight Kubernetes distribution, we support it, because it uses the same native Kubernetes APIs, and we get deployed inside of that. I think where we've got these large scale and large numbers of edge deployments of Kubernetes and that you require potentially some data management down there, and they might want to send everything into a centralized location or a more centralized location than a farm shed out in the country. I think we're going to see a big number of that. But then we also have our multi cluster dashboard that gives us the ability to centralize all of the control plane. So we don't have to go into each individual K10 deployment to manage those policies. We can have one big centralized management multi cluster dashboard, and we can set global policies there. So if you're running a database and maybe it's the same one across all of your different edge locations, where you could just set one policy to say I want to protect that data on an hourly basis, a daily basis, whatever that needs to be, rather than having to go into each individual one. >> And then send it back to that central repository. So that's the model that you see, you don't see the opportunity, at least at this point in time, of actually persisting it at the edge? >> So I think it depends. I think we see both, but again, that's the footprint. And maybe like you mentioned about up in space having a Kubernetes cluster up there. You don't really want to be sending up a NAS device or a storage device, right, to have to sit alongside it. So it's probably, but then equally, what's the art of the possible to get that back down to our planet, like as part of a consistent copy of data? >> Or even a farm or other remote locations. The question is, I mean, EVs, you know, we believe there's going to be tons of data, we just don't.. You think about Tesla as a use case, they don't persist a ton of their data. 
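The "set one global policy instead of visiting each edge cluster" idea could be sketched roughly like this with the official Kubernetes Python client, looping over kubeconfig contexts and creating the same custom resource in each. The policy group, version, and fields shown are assumptions for illustration, not the documented K10 API.

```python
# Sketch: push one hourly protection policy to every edge cluster in the kubeconfig.
# The Policy group/version/fields below are illustrative assumptions, not a documented API.
from kubernetes import client, config

policy = {
    "apiVersion": "config.kio.kasten.io/v1alpha1",   # assumed group/version
    "kind": "Policy",
    "metadata": {"name": "edge-db-hourly", "namespace": "kasten-io"},
    "spec": {                                        # assumed fields
        "frequency": "@hourly",
        "selector": {"matchLabels": {"app": "edge-db"}},
    },
}

contexts, _ = config.list_kube_config_contexts()
for ctx in contexts:
    api_client = config.new_client_from_config(context=ctx["name"])
    client.CustomObjectsApi(api_client).create_namespaced_custom_object(
        group="config.kio.kasten.io", version="v1alpha1",
        namespace="kasten-io", plural="policies", body=policy,
    )
    print(f"applied edge-db-hourly to {ctx['name']}")
```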
Maybe if a deer runs across, you know, the front of the car, oh, persist that, send that back to the cloud. >> I don't want anyone knowing my Tesla data. I'll tell you that right now. (all laughing) >> Well, there you go, that one too. All right, well, that's future discussion, we're still trying to squint through those patterns. I got so many questions for you, Michael, but we got to go. Thanks so much for coming to theCUBE. >> Always. >> Great job on the keynote today and good luck. >> Thank you. Thanks for having me. >> All right, keep it right there. We got a ton of product talk today. As I said, Danny Allan's coming back, we got the ecosystem coming, a bunch of the cloud providers. We have, well, iland was up on stage. They were just recently acquired by 11:11 Systems. They were an example today of a cloud service provider. We're going to unpack it all here on theCUBE at VeeamON 2022 from Las Vegas at the Aria. Keep it right there. (calm music)
SUMMARY :
Veeam, the color of green, I mean, that story he told blew me away. and we do all of this again, right? about the history there So it just encourages that the community I mean little things like that, right? So one of the things that I mean the early days of Kubernetes, but it's going to be growing. and it leverages the Kubernetes API So it can't... and be able to leverage that One of the interesting things, of the single pane of glass So you get up, you talk And the biggest question that we have It's full of all the acronyms, You got security everywhere With AWS, KMS and HashiCorp vault. So anyway, security everywhere. and ransomware's, the hot topic, right, or the object storage, That's that sort of creep where He is the head of product said the Red Hat team and the common marketplace gives us a UI, to be unhappy about that. But I contend the human eye on that the other day It's like 50% of the and maybe it's the same one So that's the model that you see, but again, that's the footprint. that back to the cloud. I'll tell you that right now. Thanks so much for coming to theCUBE. on the keynote today and good luck. Thanks for having me. a bunch of the cloud providers.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Nicholson | PERSON | 0.99+ |
David | PERSON | 0.99+ |
Michael | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Danny Allen | PERSON | 0.99+ |
Switzerland | LOCATION | 0.99+ |
Ukraine | LOCATION | 0.99+ |
Danny | PERSON | 0.99+ |
Michael Cade | PERSON | 0.99+ |
Tesla | ORGANIZATION | 0.99+ |
50% | QUANTITY | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
Lowe | ORGANIZATION | 0.99+ |
Anton | PERSON | 0.99+ |
VeeamON | ORGANIZATION | 0.99+ |
Veeam | ORGANIZATION | 0.99+ |
last week | DATE | 0.99+ |
Dave | PERSON | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
Two weeks ago | DATE | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
last week | DATE | 0.99+ |
two week | QUANTITY | 0.99+ |
VMware | ORGANIZATION | 0.99+ |
Veeam | PERSON | 0.99+ |
11:11 Systems | ORGANIZATION | 0.99+ |
Danny Allan | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
both | QUANTITY | 0.98+ |
SUSE | ORGANIZATION | 0.98+ |
one | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
one policy | QUANTITY | 0.98+ |
first point | QUANTITY | 0.98+ |
Rancher | ORGANIZATION | 0.98+ |
K10 | COMMERCIAL_ITEM | 0.98+ |
this week | DATE | 0.98+ |
S3 | TITLE | 0.98+ |
one container | QUANTITY | 0.98+ |
several years ago | DATE | 0.97+ |
Kubernetes | TITLE | 0.97+ |
CIS | ORGANIZATION | 0.97+ |
KMS | TITLE | 0.96+ |
Dell Tech | ORGANIZATION | 0.96+ |
Zeus | ORGANIZATION | 0.96+ |
K10 V5 | COMMERCIAL_ITEM | 0.95+ |
OpenShift | TITLE | 0.95+ |
VMware | TITLE | 0.95+ |
first | QUANTITY | 0.95+ |
this afternoon | DATE | 0.95+ |
V12 | COMMERCIAL_ITEM | 0.94+ |
iland | ORGANIZATION | 0.94+ |
GitHub | ORGANIZATION | 0.94+ |
One | QUANTITY | 0.94+ |
TKGS | ORGANIZATION | 0.93+ |
S3 | COMMERCIAL_ITEM | 0.92+ |
Red Hat Summit | EVENT | 0.92+ |
day two | QUANTITY | 0.92+ |
Tanzu | ORGANIZATION | 0.92+ |
Ajay Mungara, Intel | Red Hat Summit 2022
>>Mhm. Welcome back to Boston. This is theCUBE's coverage of Red Hat Summit 2022, the first Red Hat Summit we've done face to face in at least two years. 2019 was our last one. We're kind of rounding the far turn, you know, coming up for the home stretch. My name is Dave Vellante, here with Paul Gillin. AJ Mungara is here. He is a senior director in the IoT group for developer solutions and engineering at Intel. AJ, thanks for coming on theCUBE. >>Thank you so much. >>We heard your colleague this morning in the keynote talking about the Dev Cloud. I feel like I need a Dev Cloud. What's it all about? >>So, um, we've been, uh, working with developers and the ecosystem for a long time, trying to build edge solutions. A lot of times people think about IoT solutions as, like, just compute at the edge. But what it really is, is you've got to have some component of the cloud, there is a network, and there is edge, and edge is complicated because of the variety of devices that you need. And when you're building a solution, you've got to figure out, like, where am I going to push the compute? How much of the compute am I going to run in the cloud? How much of the compute am I going to push at the network, and how much do I need to run at the edge? A lot of times what happens for developers is they don't have one environment where all of the three come together. And so what we said is, um, today the way it works is, you have all these edge devices that customers buy, they install, they set it up, and they try to do all of that. And then they have a cloud environment where they do their development. And then they figure out how all of this comes together. And all of these things, it's only when they are integrating it at the customer, at the solution space, is when they try to do it. So what we did is we took all of these edge devices, put it in the cloud, and gave one environment for cloud to the edge, to get to your complete solution. >>Essentially simulates. >>No, it's not simulating. >>It spans. So the cloud spans, the centralised cloud out to the edge. >>You know, what we did is we took all of these edge devices that will theoretically get deployed at the edge, like, we took all these variety of devices and put it in a cloud environment. So these are non rack mountable devices that you can buy in the market today, that you just have, like, we have about 500 devices in the cloud, that you have from Atom to Core to Xeons to FPGAs to accelerator cards to graphics. All of these devices are available to you. So in one environment you have, like, you can connect to any of the cloud hyperscalers, you could connect to any of these network devices. You can define your network topology. You could bring in any of your sources that is sitting in the git repository, or Docker containers that may be sitting somewhere in a cloud environment, or it could be sitting on Docker Hub. You can pull all of these things together, and we give you one place where you can build it, where you can test it. You can performance benchmark it, so you can know, when you're actually going to the field to deploy it, what type of sizing you need. >>So let me make sure I understand. If I want to test, uh, an actual edge device using 100 gig Ethernet versus MPLS versus 5G, you can do all that without virtualizing. >>So all the edge devices are there today, and the network part of it we are building with Red Hat together, where we are putting everything on this environment. 
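To picture the build-test-benchmark loop being described, here is one very small sketch of running a containerized workload and recording wall time and coarse host utilization with the Docker SDK and psutil. The image and command are placeholders, and this is not how the Dev Cloud itself is driven; it only illustrates the kind of measurement loop in question.

```python
# Sketch of a build-run-measure loop: run a containerized workload once and record
# wall time plus coarse host utilization. Image name and command are placeholders.
import time

import docker
import psutil

client = docker.from_env()

start = time.time()
logs = client.containers.run(
    image="myorg/people-counter:latest",        # hypothetical workload image
    command="python benchmark.py --frames 1000",
    remove=True,
)
elapsed = time.time() - start

print(logs.decode()[:200])
print(f"wall time: {elapsed:.1f}s")
print(f"cpu: {psutil.cpu_percent(interval=1.0):.0f}%  mem: {psutil.virtual_memory().percent:.0f}%")
```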
So the network part of it is not quite yet solved, but that's what we want to solve. But the goal here is, you can, let's say you have five cameras, or you have 50 cameras with different types of resolutions, you want to do some AI inference type of workloads at the edge. What type of compute do you need, what type of memory do you need, how many devices do you need, and where do you want to push the data? Because security is very important at the edge. So you've got to really figure out, like, I've got to secure the data in flight, I want to secure the data at rest, and how do you do the governance of it? How do you kind of do service governance, so that all the services, the different containers that are running on the edge device, they're behaving well? You don't have one container hogging up all the memory or hogging up all the compute. Or, at certain points in the day, you might have priority for certain containers. So all of these models, where do you run it? So we have an environment where you could run all of that. >>Okay, so take that example of AI inferencing at the edge. So I've got an edge device and I've developed an application, and I'm going to say, okay, I want you to do the AI inferencing in real time. You've got some kind of streaming data coming in, and I want you to persist, uh, every hour on the hour, I want to save that time stamp. Or if some event, if a deer runs across the headlights, I want you to persist that data, send that back to the cloud. And you can develop that, test it, benchmark- >>it, right. And then you can say, okay, look, in this environment I have, like, five cameras, like, at different angles, and you want to kind of try it out. And what we have is a product, which is Intel, um, OpenVINO, which is like an open source product, which does all of the optimizations you need for edge inference. So you develop the logic to recognise the deer in your example. I developed the training model somewhere in the cloud. Okay, so I have, like, I developed it with all of the things, I have annotated the different video streams, and I know that I'm recognising a deer now. Okay, so now you need to figure out, like, when the deer is coming, and you want to immediately take an action. You don't want to send all of your video streams to the cloud. It's too expensive, bandwidth costs a lot. So you want to compute that inference at the edge, okay? In order to do that inference at the edge, you need some environment, you should be able to do it. And to build that solution, what type of edge device do you really need? What type of compute do you need? How many cameras are you computing? What different things? You're not only recognising a deer, you're probably recognising some other objects, you could do all of that. In fact, one of the things that happened was, I took my nephew to San Diego Zoo and he was very disappointed that he couldn't see the chimpanzees, uh, that were there, right, the gorillas and other things. So he was very sad. So I said, all right, there should be a better way. I saw, like, there was a stream of the camera feed that was there. So what we did is we did an edge inference, and we did some logic to say, at this time of the day the gorillas get fed, so the likelihood of you actually seeing the gorilla is very high. So you just go at that point, so that you see it- >>You capture it. >>That's what you do, and you want to develop that entire solution. 
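For readers who want to see what the edge-inference piece of that example can look like, here is a minimal OpenVINO runtime sketch that pulls frames from a camera stream and only reacts to high-confidence detections instead of shipping all video to the cloud. The model path, input shape, SSD-style output layout, and threshold are assumptions; a real model's preprocessing will differ.

```python
# Minimal edge-inference sketch with the OpenVINO runtime and OpenCV.
# Model path, input size, output layout, and threshold are illustrative assumptions.
import cv2
from openvino.runtime import Core

core = Core()
model = core.read_model("models/detector.xml")            # hypothetical IR model
compiled = core.compile_model(model, device_name="CPU")   # could be "GPU", "MYRIAD", etc.
output = compiled.output(0)

cap = cv2.VideoCapture("rtsp://camera-01/stream")         # or a local video file
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Shape/normalization depends on the actual model; NCHW 300x300 is assumed here.
    blob = cv2.resize(frame, (300, 300)).transpose(2, 0, 1)[None].astype("float32")
    detections = compiled([blob])[output]
    # Act locally on interesting events rather than streaming every frame upstream.
    for det in detections.reshape(-1, 7):                 # assumed SSD layout: [id, label, conf, x1, y1, x2, y2]
        if det[2] > 0.6:
            print("high-confidence detection:", float(det[2]))
cap.release()
```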
It's based on weather, based on other factors; you need to bring all of these services together and build a solution, and we offer an environment that allows you to do it. >>Will you customise the edge configuration for the developer? If they want 50 cameras, that's not... You don't have 50 cameras available, right? >>It's all cameras. What we do is we have a streaming capability that we support, so you can upload all your videos. And you can say, I want to now simulate 50 streams, I want to simulate 30 streams, or I want to do this, right? Or just, like, two or three videos that you want to just pull in, and you want to be able to do the inference simultaneously, running different algorithms at the edge. All of that is supported. And the bigger challenge at the edge is, developing the solution is fine, and now when you go to actual deployment and post-deployment monitoring, maintenance, making sure that you're, like, managing it, it's very complicated. What we have seen is over 50%, 51% to be precise, of developers have developed some kind of a cloud native application recently, right? So we believe that if you bring that type of a cloud native development model to the edge, then your scaling problem, your maintenance problem, your, like, how do you actually deploy it, all of these challenges can be better managed, um, if you run all of that as an orchestration layer on Kubernetes, and we run everything on top of OpenShift, so you have a deployment ready solution already there. Everything is containerised. You have it as Helm charts, Docker Compose. You have it all there, you have tested it in this environment, and now you go take that to the deployment. And if it is there on any standard Kubernetes environment, or in an OpenShift, you can just straight away deploy your application. >>What does that edge architecture look like? What's Intel's and Red Hat's philosophy around, you know, what's programmable and what's different? I know you can run SAP in a data centre, you guys got that covered. What does the edge look like? What's that architecture of silicon, middleware? Describe that for us. >>So at the edge, you think about it, right, it can run traditional, uh, in an industrial PC, you have a lot of Windows environments, you have a lot of Linux. They're now in an edge environment, quite a few of these devices. I'm not talking about far edge, where there are tiny microcontrollers and those devices; I'm talking about those devices that connect to these far edge devices, collect the data, do some analytics, do some compute, that type of thing. You have far edge devices. Could be a camera. Could be a temperature sensor. Could be, like, a weighing scale. Could be anything. It could be at that far edge, and then all of that data, instead of pushing all the data to the cloud in order for you to do the analysis, you're going to have some type of an edge set of devices where it is collecting all this data, making some decisions close to the data, you're doing some analysis there, all of that stuff, right? So you need some analysis tools, you need certain other things. And let's say that you want to run, like, uh, CoreOS or RHEL or any of these operating systems at the edge, then you have an ability for you to manage all of that using a control node. The control node can also sit at the edge. In some cases, like in a smart factory, you have a little data centre in a smart factory, or even in a retail- >>Store. >>Behind a closet. 
You have, like, a bunch of devices that are sitting there, correct. And those devices all can be managed and clustered in an environment. So now the question is, how do you deploy applications to that edge? How do you collect all the data that is coming through the camera, other sensors, and process it close to where the data is being generated, making immediate decisions? So the architecture would look like, you have some cloud which does some management of these edge devices, management of this application, some type of control. You have some network, because you need to connect to that. Then you have the whole plethora of edge, starting from a hybrid environment where you have an entire, like, a mini data centre sitting at the edge, or it could be one or two of these devices that are just collecting data from these sensors and processing it. That is the heart of the challenge. The architecture varies for different verticals, like from smart cities to retail to healthcare to industrial. They have all these different variations. They need to worry about these, uh, different environments they are going to operate under, uh, they have different regulations that they have to look into, different security protocols that they need to follow. So your solution, maybe it is just recognising people and identifying if they are wearing a helmet in a coal mine, right, whether they are wearing the safety gear equipment or not, that solution versus, you are, like, driving in traffic on a bike, and you, for safety reasons, want to identify whether the person is wearing a helmet or not. Very different use cases, very different environments, different ways in which you are operating. But that is what the developer needs to have in mind. Similar algorithms are used, by the way, but how you deploy it varies quite a bit. >>But the Dev Cloud, make sure I understand it. You talked about, like, a retail store, a great example. But that's a general purpose infrastructure that's now customised through software for that retail environment. Same thing with Telco. Same thing with the smart factory, you said, not the far edge, right, but that's coming in the future. Or is that, well, that- >>It extends to far edge, putting everything in one cloud environment. We did it, right? In fact, I put some cameras on some, like, iPads and laptops, and we could stream different videos, did all of that. A data centre is a boring environment, right? What are you going to see? A bunch of racks and servers. So putting far edge devices there didn't make sense. So what we did is, you could just have an easy ability for you to stream or connect or upload this far edge data that gets generated at the far edge. Like, say, time series data, like, you can take some of the time series data, some of the sensor data, or mostly camera data, videos. So you upload those videos, and that is as good as you're streaming those videos, right? And that means you are generating that data. And then you're developing your solution with the assumption that the camera is observing whatever is going on. And then you do your edge inference and you optimise it. You make sure that you size it, and then you have a complete solution. >>Are you supporting all manner of microprocessors at the edge, including non-Intel? >>Um, today it is all Intel, but the plan, because we are really promoting the whole open ecosystem and things like that, in the future, yes, we are really talking about it, so we want to be able to do that in the future. 
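One common way that "far edge sensor feeding an edge node that decides close to the data" hop is wired up is a lightweight broker such as MQTT. The sketch below, using the paho-mqtt client, is only an illustration of that pattern; the broker address, topic layout, and threshold are invented.

```python
# Sketch of an edge node collecting far-edge sensor readings over MQTT and deciding
# locally which events are worth persisting or forwarding. Broker, topics, and
# threshold are illustrative.
import json

import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    reading = json.loads(msg.payload)
    # Decide close to the data: only keep or forward interesting events.
    if reading.get("temperature_c", 0) > 85:
        print("anomaly from", msg.topic, reading)   # in practice: persist locally or send upstream

client = mqtt.Client()
client.on_message = on_message
client.connect("edge-gateway.local", 1883)
client.subscribe("factory/line1/+/telemetry")
client.loop_forever()
```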
But today it's been, like, a lot of... we were trying to address the customers that we are serving today. We needed an environment where they could do all of this, for example, in what circumstances would you use an i5 versus an i9, versus putting an algorithm on integrated graphics, versus running it on a CPU, or running it on a Neural Compute Stick. It's hard, right? You need to buy all those devices, you need to experiment your solutions on all of that. It's hard. So having everything available in one environment, you could compare and contrast to see what type of configuration makes best sense. >>But it's not just x86? >>x86, your portfolio- >>A portfolio of FPGAs, of graphics, of, like, we have all of what Intel supports today, and in future we would want to open it up. >>So how do developers get access to this cloud? >>It is all free. You just have to go sign up and register and, uh, you get access to it. It is devcloud.intel.com. You go there, and the container playground is all available for free for developers to get access to it. And you can bring in container workloads there, or even bare metal workloads. Um, and, uh, yes, all of it is available for you. >>You need to reserve the endpoint devices? >>Correct. That is where it is an interesting technology. >>To govern this. >>Correct. So what we did was we built a kind of a queuing system. Okay? So, a scheduler. So you develop your application in a control node, and you only need the edge device when you're scheduling that workload. Okay, so we have this scheduling system, like, we use Kafka and other technologies to do the scheduling, in the container workload environment, which are all the optimised operators that are available in an OpenShift, um, environment. So we got those operators, we installed it. So what happens is, you take your workload and you run it, let's say, on an i7 device. When you're running that workload on an i7 device, that device is dedicated to you. Okay? And we've instrumented each of these devices with telemetry, so we could see, at the point your workload is running on that particular device, what is the memory looking like, what is the power looking like, how hard is the device running, what is the compute looking like? So we capture all those metrics. Then what you do is you take it and run it on an i9, or run it on a graphics card, or you can run it on an FPGA. Then you compare and contrast. And you say, huh, okay, for this particular workload, this device makes best sense. In some cases, I'll tell you, right, uh, developers have come back and told me, I don't need a bigger processor, I need bigger memory. >>Yeah, sure, >>right. And in some cases they've said, look, I want to prioritise accuracy over performance, because if you're in a healthcare setting, accuracy is more important. In some cases, they have optimised it for the size of the device, because it needs to fit in the right environment, in the right place. So every use case, where you optimise is up to the solution, up to the developer, and we give you an ability for you to do that. >>What kind of folks are you seeing? You got hardware developers, you got software developers, all right, people coming in. And- >>We have a lot of system integrators. We have enterprises that are coming in. We are seeing a lot of, uh, software solution developers, independent software developers. We also have a lot of students coming in; it's a free environment for them to kind of play with, instead of them having to buy all of these devices. We're seeing those people. 
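The scheduling and telemetry flow described here, queue a workload for a particular device class, run it there, and report back utilization, could be sketched along these lines with kafka-python and psutil. Topic names, job fields, and the telemetry payload are assumptions for illustration, not how the Dev Cloud is actually built.

```python
# Toy version of the queue-then-measure flow: submit a job for a device class over Kafka;
# an agent on that device runs it and reports simple telemetry. All names are illustrative.
import json
import time

import psutil
from kafka import KafkaConsumer, KafkaProducer

BOOTSTRAP = "queue.devcloud.local:9092"   # hypothetical broker address

# --- submission side ---
producer = KafkaProducer(
    bootstrap_servers=BOOTSTRAP,
    value_serializer=lambda v: json.dumps(v).encode(),
)
producer.send("jobs.i7", {"image": "myorg/people-counter:latest", "args": "--frames 1000"})
producer.flush()

# --- agent running on the reserved device ---
consumer = KafkaConsumer(
    "jobs.i7",
    bootstrap_servers=BOOTSTRAP,
    value_deserializer=lambda v: json.loads(v.decode()),
)
for job in consumer:
    start = time.time()
    # launching the container described by job.value is omitted in this sketch
    telemetry = {
        "device": "i7",
        "wall_s": round(time.time() - start, 2),
        "cpu_pct": psutil.cpu_percent(interval=1.0),
        "mem_pct": psutil.virtual_memory().percent,
    }
    producer.send("telemetry", telemetry)
    producer.flush()
```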
Um, I mean, we are pulling through a lot of developers in this environment currently, and, uh, we're getting, of course, feedback from the developers. We are just getting started here. We are continuing to improve our capabilities. We are adding, like, virtualization capabilities. We are working very closely with Red Hat to kind of showcase all the goodness that's coming out of Red Hat, OpenShift and other innovations, right? We heard, uh, like, you know, in one of the OpenShift sessions, they're talking about MicroShift, they're talking about HyperShift, they're talking about a lot of these innovations, operators, everything that is coming together. But where do developers play with all of this? If you spend half your time trying to configure it, install it, and buy the hardware, trying to figure it out, you lose patience, you lose time. And it's complicated, right? How do you set it up? Especially when you involve the cloud. It has the network. It has got the edge. You need all of that set up right. So what we have done is we've set up everything for you. You just come in. And by the way, not only just that. What we realized is, when you go talk to customers, they don't want to listen to all our optimizations, processors and all that. They want to say, I am here to solve my retail problem. I want to count the people coming into my store, right? I want to see if there are any spills, recognize them, and go clean them up before a customer complains about it. Or I have a brain tumor segmentation problem where I want to identify if the tumor is malignant or not, right? And I want telehealth solutions. So they're really talking about these use cases. So what we did is we built many of these use cases by talking to customers. We open sourced them and made them available on DevCloud for developers to use as a starting point, so that they have this retail starting point, or they have this healthcare starting point, all these use cases, so that they have all the code. We have showed them how to containerize it. The biggest problem is developers still don't know, at the edge, how to bring a legacy application and make it cloud native. So they just wrap it all into one Docker container and they say, okay, now I'm containerized. No, you've got a lot more to do. So we tell them how to do it, right? So we train these developers, we give them an opportunity to experiment with all these use cases so that they get closer and closer to what the customer solutions need to be. >> Yeah, we saw that a lot with the early cloud, where they wrapped their legacy apps in a container, shoved it into the cloud. It was really hosting legacy apps is all it was. It didn't take advantage of the cloud. Now people have come around. It sounds like a great developer free resource. Take advantage of that. Where do they go? >> So it's devcloud.intel.com. >> devcloud.intel.com. Check it out. It's a great freebie. Ajay, thanks very much. >> Thank you very much. I really appreciate your time. >> All right, keep it right there. This is Dave Vellante for Paul Gillin. We're right back, covering theCUBE at Red Hat Summit 2022.
SUMMARY :
We're kind of rounding the far turn, you know, coming up for the home stretch. devices that you need. So the cloud spans the cloud, the centralised You can pull all of these things together, and we give you one place where you can build it where gig Ethernet versus an Mpls versus the five G, you can do all that So all of these mortals, where do you run it? and I've developed an application, and I'm going to say Okay, I want you to do the AI influencing So you develop the like to recognise the deer in your example. and we offer an environment that allows you to do it. you customise the the edge configuration for the for the developer So that we believe that if you bring that type of a cloud native I know you can run a S, a p a data So at the edge, you think about it, right? So now the question is, how do you deploy applications to that edge? Same thing with the smart factory, you said, So what we did is you could just have an easy ability for you to stream or connect You need to buy all those devices you need to experiment your solutions on all of that. portfolio of F. P. G s of graphics of like we have all what intel And you can bring in container workloads there, or even bare metal workloads. That is where it is. So what happens is you take your work, So every use case where you optimise is up to the You got hardware developers, you get software developers are What we have time, you lose time. container, shove it into the cloud. Check it out. Thank you very much. Covering the cube at Red Hat Summit 2022.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Valentin | PERSON | 0.99+ |
Ajay Mungara | PERSON | 0.99+ |
Paul Gillon | PERSON | 0.99+ |
Telco | ORGANIZATION | 0.99+ |
two | QUANTITY | 0.99+ |
France | LOCATION | 0.99+ |
one | QUANTITY | 0.99+ |
50 cameras | QUANTITY | 0.99+ |
five cameras | QUANTITY | 0.99+ |
50 streams | QUANTITY | 0.99+ |
30 streams | QUANTITY | 0.99+ |
Dave Volonte | PERSON | 0.99+ |
100 gig | QUANTITY | 0.99+ |
Boston | LOCATION | 0.99+ |
Paul Dillon | PERSON | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
today | DATE | 0.99+ |
three videos | QUANTITY | 0.99+ |
Red Hat Summit 2022 | EVENT | 0.99+ |
about 500 devices | QUANTITY | 0.98+ |
Red Hat Summit | EVENT | 0.98+ |
ipads | COMMERCIAL_ITEM | 0.98+ |
Iot | ORGANIZATION | 0.98+ |
Kafka | TITLE | 0.98+ |
each | QUANTITY | 0.97+ |
Windows | TITLE | 0.97+ |
three | QUANTITY | 0.97+ |
AJ | PERSON | 0.97+ |
first | QUANTITY | 0.96+ |
red hat | TITLE | 0.96+ |
Death Cloud | TITLE | 0.95+ |
one doctor | QUANTITY | 0.94+ |
over 50% 51% | QUANTITY | 0.93+ |
Farage | ORGANIZATION | 0.92+ |
intel dot com | ORGANIZATION | 0.9+ |
intel | ORGANIZATION | 0.9+ |
this morning | DATE | 0.89+ |
one cloud | QUANTITY | 0.88+ |
San Diego Zoo | LOCATION | 0.87+ |
99 | QUANTITY | 0.86+ |
one container | QUANTITY | 0.86+ |
one environment | QUANTITY | 0.86+ |
2019 | DATE | 0.85+ |
half | QUANTITY | 0.85+ |
one place | QUANTITY | 0.84+ |
least two years | QUANTITY | 0.83+ |
Dev Cloud | TITLE | 0.81+ |
monger | PERSON | 0.77+ |
time | QUANTITY | 0.76+ |
I five | OTHER | 0.76+ |
P. G | PERSON | 0.75+ |
red hat | TITLE | 0.74+ |
two of | QUANTITY | 0.73+ |
Brest | ORGANIZATION | 0.63+ |
nine | TITLE | 0.61+ |
86 | OTHER | 0.58+ |
devices | QUANTITY | 0.56+ |
things | QUANTITY | 0.51+ |
five G | OTHER | 0.49+ |
86 | QUANTITY | 0.48+ |
Cloud | TITLE | 0.46+ |
seven | COMMERCIAL_ITEM | 0.4+ |
dot | ORGANIZATION | 0.34+ |
cloud | TITLE | 0.32+ |
Loris Degioanni, Sysdig | CUBE Conversation
(upbeat music) >> Hello, and welcome to this Cube Conversation kicking off 2022, I'm John Furrier, your host of theCUBE. We're with Loris Degioanni, Chief Technology Officer and founder of Sysdig. A company that's in the pioneering cloud native and cloud native security, open source, big part of the CNCF, CUBECon coverage. Of course, we know them as of that environment as well as DockerCon which we've covered many times. Sysdig is a very successful company. Loris, welcome to theCUBE Conversation. >> Thank you and thanks for having me. >> Well, we know a lot about you, but a lot of folks are learning about you guys with your success. Congratulations on the funding and the validation of your product, which is not a surprise. We've been saying on theCUBE open source has been powering innovation for some time and getting stronger, faster. The predictions in the Linux Foundation about this open source contributions continue to be blown away by their projections and more and more is coming. A new generation is upon us. Cloud Native, Edge, Kubernetes. All of these things are powering a modern application environment which is changing business. And under the covers, you guys are a big part of it. So take us through who Sysdig is, what you guys do for the folks out there and let's get into it. Obviously open source is a big part of it. Take us through who is Sysdig and what do you guys do. >> Yeah, Sysdig helps you run your software in the cloud in a way that is secure and confidently. We have a security solution that covers containers, cloud and Kubernetes. And we cover you in the life cycle of modern application. So the Sysdig security platform helps you secure application in a way that ranges from like shift left in CSD and finding vulnerabilities in your CSD pipeline to run time security that is very important in the cloud in particular with orchestrated infrastructures like the ones that are run by Kubernetes. And then of course, everything that has to do with the forensics, threat-hunting and so on. And the world is changing, security is changing, and Sysdig is one of the startups, one of the companies that is at the forefront of true modern cloud native security. >> So I got to ask you. Were you sitting in your backyard one day thinking, hey, I'm going to start a company? How did this all come together? I mean, the originator story, because we saw open source, we saw even more before CNCF was formed, you saw what cloud was doing. Again, we saw OpenStack and all these other things happening around technology. What was the driver behind the founding of Sysdig, and then how did that progress? Because again, there's an open source component here I want to get into. >> Yeah, and it's interesting that you say backyard because actually Sysdig was actually started in my backyard. Just outside of here. So the backyard metaphor is very, very fitting here. And in a general way, let's say I come from a background in open source for a very long time. Sysdig is my second company. My first company was called Case Technologies. It was the company behind an open source network analyzer called Wireshark, which is widely used by millions and millions of people around the world to do network troubleshooting and network analysis. And when we were doing network packets, we were using like the network devices to collect information. The data that is being transferred on the network has some very nice properties, it's rich. It's very deep. 
When you can see and decode what's happening on the network, you can understand what applications are doing, what the users are doing. I used to say, packets never lie, right? Because you could connect to the router and collect this data and they have a very good picture without any two instrument libraries to link, to install stuff and so on. And all of a sudden, we're moving to the cloud and the router that was like the vintage point for this beautiful way of doing security and visibility disappears. And you're renting instances that are floating in the Amazon cloud. And when the world changed that way from one point of view, I was sure that what we're doing before was useful and was powerful for the users. But I was also sure, okay, the world is going to change. The retrofitted solutions are not going to work. We can take our product, but then we have the innovator dilemma. We have a product that we cannot completely radically change. So I decided let's start from scratch. Let's start Sysdig. Let's try to understand actually what this cloud is going, where containers are going. There's this new Kubernetes thing that everybody's talking about. What does it mean to offer deep, rich, but at the same time lightweight and easy to deploy security and visibility for this kind of new way of writing software and that's how Sysdig was born. >> So if I remember correctly back in that timeframe, that couple you said you found a millions people using that application. If I remember correctly, that was software network monitoring. Is that true? Is that open source at that time? Was that an open project or was that? >> Yeah, like Wireshark is a network analyzer and the software that we're doing was heavily open source oriented and was mostly software and there were also potentially appliances because this was data center more kind of stuff. >> That was before cloud even came here. So again, defined data center software and defined clouds happening. So again, good segue into kind of where security, you mentioned footprints, you can track people with packets. So to your point, is this the tie into security, tell us how this fits in with open source and security with the software piece? >> Yeah, what Sysdig did essentially, the idea was let's learn from our prior life. I always say that every new wave of technology is built on the shoulders of the previous one. And you'd never reinvent anything. You just apply it and evolve it. And the same thing we did with Sysdig. So we learned what was working with our previous approaches that were based on observing the applications behavior by looking essentially at network traffic, but we adapted it to modern infrastructures. And open source was our mantra before with Wireshark and became our mantra with Sysdig. Sysdig, the company name comes from the open source tool that we released was the first thing that we released in our company. And then few years later with Falco, which now is the premier open source project that was created by Sysdig and is now part of the CNCF, it's an incubating project. And it's essentially the runtime security tool for containers, Kubernetes, and cloud. >> Take us through that Falco, because I think this is an important distinction on your success trajectory because CNCF has a nice playbook where companies can contribute to the CNCF at the same time, that creates an open environment for all, and then have a business model tied to it. 
This is kind of a new, not new, but this is a successful way to be open source and have a commercial opportunity. >> Yeah, and very much a substantial portion of our commercial product is, let's say, an extension of Falco. But let's say our approach was like, let's first produce something that is truly useful for the community and fits in the proper way with the ecosystem, with the rest of the ecosystem. Nowadays, in every field, security as well, you don't build a single solution anymore. You build something that needs to fit very well in the stack. Kubernetes, Prometheus, service meshes and this kind of stuff, these all fit together. So Falco, which is the runtime security component, needs to fit as well. So initially our focus was like, okay, we need to fill the gap of runtime security for containers, for Kubernetes, and also for cloud. But we need to do that in a way that is community first, and that really helps, but also engages and takes advantage of the users, of the broader community. At that point, going to the CNCF and telling the CNCF, hey, look, we developed this, are you interested in partnering with us and being essentially the organization behind this project, was very natural. And that's what we did in 2016, sorry, 2018; 2016 is when Falco started, 2018 is when we did that. And at that point, you know, it's a great partnership because the CNCF is really a great home for all of these projects and really makes it possible for the users to trust a project in a way that they know that even if the commercial backer, even if the original creators, even if the team rotates and changes and evolves, the end users can still use this project, trust this project and know that it's community driven. And it's been a great journey for us. >> How would you describe what Falco is and what are the key use cases? >> Yeah, Falco is, I compare it to the security camera for your containers, your house and your cloud infrastructure. So the same way that the security camera allows you to observe maybe what's happening in your home, even if you have a lock, it is still useful to have a security camera, right? To understand when something breaks in, what they're doing, when they do it, and get an alarm when something bad happens. Similarly, in software infrastructures, you can still have your lock, your firewall and so on, but then you use a security camera like Falco that is able to observe every single container, every single process, every single machine, every single network connection and so on. Keep an eye on it, and then it has sort of a policy-based system that includes a bunch of policies that come essentially pre-packaged, that allow the users to detect when something dangerous or suspicious happens in the infrastructure. For example, I don't know, somebody is spawning a shell in their Redis container, or somebody is logging into AWS without multi-factor authentication. Falco keeps a constant eye and lets you know, it gives you an alert when something like that happens. >> You know what I love about what you guys do, and it kind of highlights what we've been saying on theCUBE for many, many years, is that the networking concepts of the older generations have been moving up the stack with cloud, because you've got rule engines, policy automation, all these things are now part of connected systems. So if you have the cloud, which is essentially distributed computing, you have more networks, more connections.
And so the networking paradigms of packets can be moved over to software, well, software maintenance, if you will, or anything, any middleware, whatever you want to call it. I mean, this is kind of a new paradigm. So, what's your reaction to that? I want to get your take on this because this is kind of really happening. >> Yeah, and you are absolutely right. And what us as a Falco community or as Sysdig as a company is exactly that. We're taking the concepts that were maybe at the base of the previous generation of the data center in terms of policies, in terms of one clause and we're sort of elevating them to what modern cloud is. To give you an example, I don't know if you remember, but a Falco was inspired by a tool called Snort and the company also was Sourcefire. Snort used to listen on the network, constantly observe the network traffic and the deploy policies to tell you, okay, somebody uploaded a file from China and this file contains a malware. Now we do this, but we're able to see inside containers. We have cloud context. We understand the regions. We understand Kubernetes namespace and all these kinds of stuff. So we're able to put so much more context and be so much closer to the user, but the concepts are the same. We're just, as I was saying, sitting on the shoulders of people before us that invented this and we're modernizing them. >> Well, this is what refactoring is all about. This is the benefit of the cloud. I think, this is why a lot of the cloud native success is happening because companies are realizing that they can actually not just re platform in the cloud, but actually refactor their business, completely different. Using other paradigms and not necessarily rip and replace or just cut and paste. They can take concepts and codify them in their workloads, not necessarily general purpose. So again, key cloud concept and only going to get stronger with the edge developing. So again, more and more complexity, connected complexity. >> Yeah, complexity that more and more you manage through automation, right? Which is another key concept in the cloud. So we are able as a market, as a community to have and manage more and more complex infrastructures because we have tools that are able to automate, to take care of stuff for us, to potentially remediate, which is another big theme in modern security for us and so on. And of course, again, companies like Sysdig, try to really read these in the plight, in a proper way that can be the most possible useful. >> And hackers love complexity, right? And love chaos. And so unless you tame that with really good software, this is the key challenge. >> You need to manage chaos and you need good software to help you manage chaos. >> All right, final question for you. How is Sysdig and the Falco community working with AWS? >> Yeah, in a number of ways. One of the beauties, as I was telling before of essentially being built on an open source project like Falco is that you can really work together with cloud providers like AWS with mutual advantage. For example, AWS and team members at Amazon have done many contributions to Falco and the Sysdig system and integrations and so on. We partnered as Falco community and Sysdig with AWS to offer proper support for Falco versus the products on Fargate, which is, managed containers are the future, are very powerful. Everybody wants to go there, but then you need to make sure that you are covered, you have security from the point of view of severability and so on. 
Sysdig and AWS work together on doing a ptrace-based implementation. This is a technical thing, but essentially it means that a tool like Falco can give you visibility, can be the security camera, for Fargate as well. And in a general way, Amazon is a great partner for us on a daily basis, as a community and as a company. >> Loris, you've got a great company there. And again, it was great to see you guys grow from the beginning, and the wave is here. As they say in California, you guys are riding the right wave. And I think it's just the beginning. I think you're going to see more and more security be programmable, built in, automated, under the covers, invisible, but working. And I think the same is going to be true for data and other things. So a lot more to do. And again, it's distributed computing. We've seen this movie before, but not in this environment. So new tools are coming and you guys are a big part of it. Thank you so much for coming on theCUBE and sharing what you guys are doing and the technology behind Sysdig. Thanks for coming on. >> Thank you very much and thank you for the great conversation. >> Okay, this is theCUBE, I'm John Furrier, your host, for Cube conversations with Sysdig's Loris Degioanni, CTO of Sysdig. Thanks for watching. (gentle music)
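As an aside, the "security camera" idea described in this conversation can be sketched in a few lines of ordinary code. The toy watcher below is purely illustrative: it polls /proc for new shell processes whose cgroups look container-related and prints an alert. It is emphatically not how Falco works (Falco gets its events from a kernel driver or eBPF probe and evaluates a YAML rule language), and the shell list and cgroup heuristic are assumptions made only for the demo.

```python
# A toy illustration of runtime detection: watch for new shell processes that
# appear to be running inside containers and print an alert. Linux only.

import os
import time

SHELLS = {"sh", "bash", "dash", "zsh", "ash"}

def in_container(pid: int) -> bool:
    # Heuristic: containerized processes usually show docker/containerd/kubepods
    # entries in their cgroup file. Good enough for a demo, not for production.
    try:
        with open(f"/proc/{pid}/cgroup") as f:
            data = f.read()
        return any(k in data for k in ("docker", "containerd", "kubepods"))
    except OSError:
        return False

def list_procs() -> dict[int, str]:
    procs = {}
    for entry in os.listdir("/proc"):
        if entry.isdigit():
            try:
                with open(f"/proc/{entry}/comm") as f:
                    procs[int(entry)] = f.read().strip()
            except OSError:
                pass  # process exited between listing and reading
    return procs

def watch(poll_seconds: float = 1.0) -> None:
    seen = set(list_procs())
    while True:
        procs = list_procs()
        for pid, name in procs.items():
            if pid not in seen and name in SHELLS and in_container(pid):
                print(f"ALERT: shell '{name}' (pid {pid}) spawned inside a container")
        seen = set(procs)
        time.sleep(poll_seconds)

if __name__ == "__main__":
    watch()
```

The practical difference is that a polling loop like this can miss short-lived processes and is easy to evade, which is exactly why a tool like Falco instruments the kernel instead of sampling from user space.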
SUMMARY :
and founder of Sysdig. and the validation of your and Sysdig is one of the startups, I mean, the originator story, and millions of people around the world that couple you said you and the software that So to your point, is this the and is now part of the CNCF, and then have a business model tied to it. CNCF and telling the CNCF, that allow the users to detect that the networking concepts and the deploy policies to tell you, okay, of the cloud native success that can be the most possible useful. And so unless you tame that and you need good software How is Sysdig and the Falco and the Sysdig system and and sharing what you guys are doing and thank you for the great conversation. Okay, this is theCUBE
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Loris Degioanni | PERSON | 0.99+ |
Loris Degioanni | PERSON | 0.99+ |
Falco | ORGANIZATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
California | LOCATION | 0.99+ |
2018 | DATE | 0.99+ |
2016 | DATE | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
millions | QUANTITY | 0.99+ |
John Furrier | PERSON | 0.99+ |
CNCF | ORGANIZATION | 0.99+ |
John Furrier | PERSON | 0.99+ |
Loris | PERSON | 0.99+ |
Sysdig | ORGANIZATION | 0.99+ |
China | LOCATION | 0.99+ |
second company | QUANTITY | 0.99+ |
Linux Foundation | ORGANIZATION | 0.99+ |
Case Technologies | ORGANIZATION | 0.99+ |
first company | QUANTITY | 0.99+ |
2022 | DATE | 0.99+ |
few years later | DATE | 0.99+ |
DockerCon | EVENT | 0.99+ |
one clause | QUANTITY | 0.98+ |
one | QUANTITY | 0.98+ |
Wireshark | TITLE | 0.97+ |
first thing | QUANTITY | 0.97+ |
One | QUANTITY | 0.96+ |
Sysdig | PERSON | 0.96+ |
millions people | QUANTITY | 0.96+ |
millions of people | QUANTITY | 0.95+ |
first | QUANTITY | 0.94+ |
one point | QUANTITY | 0.94+ |
CUBECon | EVENT | 0.94+ |
single solution | QUANTITY | 0.93+ |
Snort | TITLE | 0.91+ |
Cube Conversation | EVENT | 0.87+ |
every single machine | QUANTITY | 0.87+ |
Kubernetes | TITLE | 0.85+ |
every single process | QUANTITY | 0.85+ |
CTO | PERSON | 0.84+ |
every single container | QUANTITY | 0.82+ |
two instrument libraries | QUANTITY | 0.8+ |
Cube | ORGANIZATION | 0.8+ |
Fargate | TITLE | 0.78+ |
CNCF | EVENT | 0.77+ |
lco | ORGANIZATION | 0.76+ |
Mai Lan Tomsen Bukovec | AWS Storage Day 2021
(pensive music) >> Thank you, Jenna, it's great to see you guys and thank you for watching theCUBE's continuous coverage of AWS Storage Day. We're here at The Spheres, it's amazing venue. My name is Dave Vellante. I'm here with Mai-Lan Tomsen Bukovec who's Vice President of Block and Object Storage. Mai-Lan, always a pleasure to see you. Thanks for coming on. >> Nice to see you, Dave. >> It's pretty crazy, you know, this is kind of a hybrid event. We were in Barcelona a while ago, big hybrid event. And now it's, you know, it's hard to tell. It's almost like day-to-day what's happening with COVID and some things are permanent. I think a lot of things are becoming permanent. What are you seeing out there in terms of when you talk to customers, how are they thinking about their business, building resiliency and agility into their business in the context of COVID and beyond? >> Well, Dave, I think what we've learned today is that this is a new normal. These fluctuations that companies are having and supply and demand, in all industries all over the world. That's the new normal. And that has what, is what has driven so much more adoption of cloud in the last 12 to 18 months. And we're going to continue to see that rapid migration to the cloud because companies now know that in the course of days and months, you're, the whole world of your expectations of where your business is going and where, what your customers are going to do, that can change. And that can change not just for a year, but maybe longer than that. That's the new normal. And I think companies are realizing it and our AWS customers are seeing how important it is to accelerate moving everything to the cloud, to continue to adapt to this new normal. >> So storage historically has been, I'm going to drop a box off at the loading dock and, you know, have a nice day. And then maybe the services team is involved in, in a more intimate way, but you're involved every day. So I'm curious as to what that permanence, that new normal, some people call it the new abnormal, but it's the new normal now, what does that mean for storage? >> Dave, in the course of us sitting here over the next few minutes, we're going to have dozens of deployments go out all across our AWS storage services. That means our customers that are using our file services, our transfer services, block and object services, they're all getting improvements as we sit here and talk. That is such a fundamentally different model than the one that you talked about, which is the appliance gets dropped off at the loading dock. It takes a couple months for it to get scheduled for setup and then you have to do data migration to get the data on the new appliance. Meanwhile, we're sitting here and customers storage is just improving, under the hood and in major announcements, like what we're doing today. >> So take us through the sort of, let's go back, 'cause I remember vividly when, when S3 was announced that launched this cloud era and people would, you know, they would do a lot of experimentation of, we were storing, you know, maybe gigabytes, maybe even some terabytes back then. And, and that's evolved. What are you seeing in terms of how people are using data? What are the patterns that you're seeing today? How is that different than maybe 10 years ago? >> I think what's really unique about AWS is that we are the only provider that has been operating at scale for 15 years. 
And what that means is that we have customers of all sizes, terabytes, petabytes, exabytes, that are running their storage on AWS and running their applications using that storage. And so we have this really unique position of being able to observe and work with customers to develop what they need for storage. And it really breaks down to three main patterns. The first one is what I call the crown jewels, the crown jewels in the cloud. And that pattern is adopted by customers who are looking at the core mission of their business and they're saying to themselves, I actually can't scale this core mission on on-premises. And they're choosing to go to the cloud on the most important thing that their business does because they must, they have to. And so, a great example of that is FINRA, the regulatory body of the US stock exchanges, where, you know, a number of years ago, they took a look at all the data silos that were popping up across their data centers. They were looking at the rate of stock transactions going up and they're saying, we just can't keep up. Not if we want to follow the mission of being the watchdog for consumers, for transactions, for stock transactions. And so they moved that crown jewel of their application to AWS. And what's really interesting Dave, is, as you know, 'cause you've talked to many different companies, it's not technology that stops people from moving to the cloud as quick as they want to, it's culture, it's people, it's processes, it's how businesses work. And when you move the crown jewels into the cloud, you are accelerating that cultural change and that's certainly what FINRA saw. Second thing we see, is where a company will pick a few cloud pilots. We'll take a couple of applications, maybe one or a several across the organization and they'll move that as sort of a reference implementation to the cloud. And then the goal is to try to get the people who did that to generalize all the learning across the company. That is actually a really slow way to change culture. Because, as many of us know, in large organizations, you know, you have, you have some resistance to other organizations changing culture. And so that cloud pilot, while it seems like it would work, it seems logical, it's actually counter-productive to a lot of companies that want to move quickly to the cloud. And the third example is what I think of as new applications or cloud first, net new. And that pattern is where a company or a startup says all new technology initiatives are on the cloud. And we see that for companies like McDonald's, which has transformed their drive up experience by dynamically looking at location orders and providing recommendations. And we see it for the Digital Athlete, which is what the NFL has put together to dynamically take data sources and build these models that help them programmatically simulate risks to player health and put in place some ways to predict and prevent that. But those are the three patterns that we see so many customers falling into depending on what their business wants. >> I like that term, Digital Athlete, my business partner, John Furrier, coined the term tech athlete, you know, years ago on theCUBE. That third pattern seems to me, because you're right, you almost have to shock the system. If you just put your toe in the water, it's going to take too long. But it seems like that third pattern really actually de-risks it in a lot of cases, it's so it's said, people, who's going to argue, oh, the new stuff should be in the cloud. 
And so, that seems to me to be a very sensible way to approach that, that blocker, if you will, what are your thoughts on that? >> I think you're right, Dave. I think what it does is it allows a company to be able to see the ideas and the technology and the cultural change of cloud in different parts of the organization. And so rather than having a, one group that's supposed to generalize it across an organization, you get it decentralized and adopted by different groups and the culture change just goes faster. >> So you, you bring up decentralization and there's a, there's an emerging trend referred to as a data mesh. It was, it was coined, the term coined by Zhamak Dehghani, a very thought-provoking individual. And the concept is basically the, you know, data is decentralized, and yet we have this tendency to sort of shove it all into, you know, one box or one container, or you could say one cloud, well, the cloud is expanding, it's the cloud is, is decentralizing in many ways. So how do you see data mesh fitting in to those patterns? >> We have customers today that are taking the data mesh architectures and implementing them with AWS services. And Dave, I want to go back to the start of Amazon, when Amazon first began, we grew because the Amazon technologies were built in microservices. Fundamentally, a data mesh is about separation or abstraction of what individual components do. And so if I look at data mesh, really, you're talking about two things, you're talking about separating the data storage and the characteristics of data from the data services that interact and operate on that storage. And with data mesh, it's all about making sure that the businesses, the decentralized business model can work with that data. Now our AWS customers are putting their storage in a centralized place because it's easier to track, it's easier to view compliance and it's easier to predict growth and control costs. But, we started with building blocks and we deliberately built our storage services separate from our data services. So we have data services like Lake Formation and Glue. We have a number of these data services that our customers are using to build that customized data mesh on top of that centralized storage. So really, it's about at the end of the day, speed, it's about innovation. It's about making sure that you can decentralize and separate your data services from your storage so businesses can go faster. >> But that centralized storage is logically centralized. It might not be physically centralized, I mean, we put storage all over the world, >> Mai-Lan: That's correct. >> right? But, but we, to the developer, it looks like it's in one place. >> Mai-Lan: That's right. >> Right? And so, so that's not antithetical to the concept of a data mesh. In fact, it fits in perfectly to the point you were making. I wonder if we could talk a little bit about AWS's storage strategy and it started of course, with, with S3, and that was the focus for years and now of course EBS as well. But now we're seeing, we heard from Wayne this morning, the portfolio is expanding. The innovation is, is accelerating that flywheel that we always talk about. How would you characterize and how do you think about AWS's storage strategy per se? >> We are a dynamically and constantly evolving our AWS storage services based on what the application and the customer want. That is fundamentally what we do every day. We talked a little bit about those deployments that are happening right now, Dave. 
That is something, that idea of constant dynamic evolution just can't be replicated by on-premises where you buy a box and it sits in your data center for three or more years. And what's unique about us among the cloud services, is again that perspective of the 15 years where we are building applications in ways that are unique because we have more customers and we have more customers doing more things. So, you know, I've said this before. It's all about speed of innovation Dave, time and change wait for no one. And if you're a business and you're trying to transform your business and base it on a set of technologies that change rapidly, you have to use AWS services. Let's, I mean, if you look at some of the launches that we talk about today, and you think about S3's multi-region access points, that's a fundamental change for customers that want to store copies of their data in any number of different regions and get a 60% performance improvement by leveraging the technology that we've built up over, over time, leveraging the, the ability for us to route, to intelligently route a request across our network. That, and FSx for NetApp ONTAP, nobody else has these capabilities today. And it's because we are at the forefront of talking to different customers and that dynamic evolution of storage, that's the core of our strategy. >> So Andy Jassy used to say, oftentimes, AWS is misunderstood and you, you comfortable with that. So help me square this circle 'cause you talked about things you couldn't do on on-prem, and yet you mentioned the relationship with NetApp. You think, look at things like Outposts and Local Zones. So you're actually moving the cloud out to the edge, including on-prem data centers. So, so how do you think about hybrid in that context? >> For us, Dave, it always comes back to what the customer's asking for. And we were talking to customers and they were talking about their edge and what they wanted to do with it. We said, how are we going to help? And so if I just take S3 for Outposts, as an example, or EBS and Outposts, you know, we have customers like Morningstar and Morningstar wants Outposts because they are using it as a step in their journey to being on the cloud. If you take a customer like First Abu Dhabi Bank, they're using Outposts because they need data residency for their compliance requirements. And then we have other customers that are using Outposts to help, like Dish, Dish Networks, as an example, to place the storage as close as account to the applications for low latency. All of those are customer driven requirements for their architecture. For us, Dave, we think in the fullness of time, every customer and all applications are going to be on the cloud, because it makes sense and those businesses need that speed of innovation. But when we build things like our announcement today of FSx for NetApp ONTAP, we build them because customers asked us to help them with their journey to the cloud, just like we built S3 and EBS for Outposts for the same reason. >> Well, when you say over time, you're, you believe that all workloads will be on the cloud, but the cloud is, it's like the universe. I mean, it's expanding. So what's not cloud in the future? When you say on the cloud, you mean wherever you meet customers with that cloud, that includes Outposts, just the programming, it's the programmability of that model, is that correct? That's it, >> That's right. that's what you're talking about? 
>> In fact, our S3 and EBS Outposts customers, the way that they look at how they use Outposts, it's either as part of developing applications where they'll eventually go the cloud or taking applications that are in the cloud today in AWS regions and running them locally. And so, as you say, this definition of the cloud, you know, it, it's going to evolve over time. But the one thing that we know for sure, is that AWS storage and AWS in general is going to be there one or two steps ahead of where customers are, and deliver on what they need. >> I want to talk about block storage for a moment, if I can, you know, you guys are making some moves in that space. We heard some announcements earlier today. Some of the hardest stuff to move, whether it's cultural or maybe it's just hardened tops, maybe it's, you know, governance edicts, or those really hardcore mission critical apps and workloads, whether it's SAP stuff, Oracle, Microsoft, et cetera. You're clearly seeing that as an opportunity for your customers and in storage in some respects was a blocker previously because of whatever, latency, et cetera, then there's still some, some considerations there. How do you see those workloads eventually moving to the cloud? >> Well, they can move now. With io2 Block Express, we have the performance that those high-end applications need and it's available today. We have customers using them and they're very excited about that technology. And, you know, again, it goes back to what I just said, Dave, we had customers saying, I would like to move my highest performing applications to the cloud and this is what I need from the, from the, the storage underneath them. And that's why we built io2 Block Express and that's how we'll continue to evolve io2 Block Express. It is the first SAN technology in the cloud, but it's built on those core principles that we talked about a few minutes ago, which is dynamically evolving and capabilities that we can add on the fly and customers just get the benefit of it without the cost of migration. >> I want to ask you about, about just the storage, how you think about storage in general, because typically it's been a bucket, you know, it's a container, but it seems, I always say the next 10 years aren't going to be like the last, it seems like, you're really in the data business and you're bringing in machine intelligence, you're bringing in other database technology, this rich set of other services to apply to the data. That's now, there's a lot of data in the cloud and so we can now, whether it's build data products, build data services. So how do you think about the business in that sense? It's no longer just a place to store stuff. It's actually a place to accelerate innovation and build and monetize for your customers. How do you think about that? >> Our customers use the word foundational. Every time they talk about storage, they say for us, it's foundational, and Dave, that's because every business is a data business. Every business is making decisions now on this changing landscape in a world where the new normal means you cannot predict what's going to happen in six months, in a year. And the way that they're making those smart decisions is through data. And so they're taking the data that they have in our storage services and they're using SageMaker to build models. 
They're, they're using all kinds of different applications like Lake Formation and Glue to build some of the services that you're talking about around authorization and data discovery, to sit on top of the data. And they're able to leverage the data in a way that they have never been able to do before, because they have to. That's what the business world demands today, and that's what we need in the new normal. We need the flexibility and the dynamic foundational storage that we provide in AWS. >> And you think about the great data companies, those were the, you know, trillions in the market cap, their data companies, they put data at their core, but that doesn't mean they shove all the data into a centralized location. It means they have the identity access capabilities, the governance capabilities to, to enable data to be used wherever it needs to be used and, and build that future. That, exciting times we're entering here, Mai-Lan. >> We're just set the start, Dave, we're just at the start. >> Really, what ending do you think we have? So, how do you think about Amazon? It was, it's not a baby anymore. It's not even an adolescent, right? You guys are obviously major player, early adulthood, day one, day zero? (chuckles) >> Dave, we don't age ourself. I think if I look at where we're going for AWS, we are just at the start. So many companies are moving to the cloud, but we're really just at the start. And what's really exciting for us who work on AWS storage, is that when we build these storage services and these data services, we are seeing customers do things that they never thought they could do before. And it's just the beginning. >> I think the potential is unlimited. You mentioned Dish before, I mean, I see what they're doing in the cloud for Telco. I mean, Telco Transformation, that's an industry, every industry, there's a transformation scenario, a disruption scenario. Healthcare has been so reluctant for years and that's happening so quickly, I mean, COVID's certainly accelerating that. Obviously financial services have been super tech savvy, but they're looking at the Fintech saying, okay, how do we play? I mean, there isn't manufacturing with EV. >> Mai-Lan: Government. >> Government, totally. >> It's everywhere, oil and gas. >> There isn't a single industry that's not a digital industry. >> That's right. >> And there's implications for everyone. And it's not just bits and atoms anymore, the old Negroponte, although Nicholas, I think was prescient because he's, he saw this coming, it really is fundamental. Data is fundamental to every business. >> And I think you want, for all of those in different industries, you want to pick the provider where innovation and invention is in our DNA. And that is true, not just for storage, but AWS, and that is driving a lot of the changes you have today, but really what's coming in the future. >> You're right. It's the common editorial factors. It's not just the, the storage of the data. It's the ability to apply other technologies that map into your business process, that map into your organizational skill sets that drive innovation in whatever industry you're in. It's great Mai-Lan, awesome to see you. Thanks so much for coming on theCUBE. >> Great seeing you Dave, take care. >> All right, you too. And keep it right there for more action. We're going to now toss it back to Jenna, Canal and Darko in the studio. Guys, over to you. (pensive music)
SUMMARY :
it's great to see you guys And now it's, you know, it's hard to tell. in the last 12 to 18 months. the loading dock and, you know, than the one that you talked about, and people would, you know, and they're saying to themselves, coined the term tech athlete, you know, and the cultural change of cloud And the concept is and it's easier to predict But that centralized storage it looks like it's in one place. to the point you were making. is again that perspective of the 15 years the cloud out to the edge, in the fullness of time, it's the programmability of that's what you're talking about? definition of the cloud, you know, Some of the hardest stuff to move, and customers just get the benefit of it lot of data in the cloud and the dynamic foundational and build that future. We're just set the start, Dave, So, how do you think about Amazon? And it's just the beginning. doing in the cloud for Telco. It's everywhere, that's not a digital industry. Data is fundamental to every business. the changes you have today, It's the ability to Great seeing you Dave, Jenna, Canal and Darko in the studio.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave | PERSON | 0.99+ |
Jenna | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Telco | ORGANIZATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
three | QUANTITY | 0.99+ |
FINRA | ORGANIZATION | 0.99+ |
Andy Jassy | PERSON | 0.99+ |
one | QUANTITY | 0.99+ |
John Furrier | PERSON | 0.99+ |
Barcelona | LOCATION | 0.99+ |
Nicholas | PERSON | 0.99+ |
60% | QUANTITY | 0.99+ |
Mai-Lan | PERSON | 0.99+ |
Zhamak Dehghani | PERSON | 0.99+ |
15 years | QUANTITY | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
NFL | ORGANIZATION | 0.99+ |
Morningstar | ORGANIZATION | 0.99+ |
McDonald's | ORGANIZATION | 0.99+ |
Wayne | PERSON | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
third example | QUANTITY | 0.99+ |
First Abu Dhabi Bank | ORGANIZATION | 0.99+ |
three patterns | QUANTITY | 0.99+ |
two things | QUANTITY | 0.99+ |
Lake Formation | ORGANIZATION | 0.99+ |
third pattern | QUANTITY | 0.99+ |
two steps | QUANTITY | 0.99+ |
10 years ago | DATE | 0.99+ |
six months | QUANTITY | 0.98+ |
Glue | ORGANIZATION | 0.98+ |
one box | QUANTITY | 0.98+ |
Mai-Lan Tomsen Bukovec | PERSON | 0.98+ |
one container | QUANTITY | 0.98+ |
first one | QUANTITY | 0.98+ |
Darko | PERSON | 0.97+ |
today | DATE | 0.97+ |
first | QUANTITY | 0.97+ |
EBS | ORGANIZATION | 0.97+ |
Second thing | QUANTITY | 0.96+ |
NetApp | TITLE | 0.96+ |
S3 | TITLE | 0.95+ |
Telco Transformation | ORGANIZATION | 0.95+ |
Block | ORGANIZATION | 0.94+ |
Fintech | ORGANIZATION | 0.94+ |
years ago | DATE | 0.93+ |
a year | QUANTITY | 0.92+ |
LIVE Panel: FutureOps: End-to-end GitOps
>>and hello, we're back. I've got my panel and we are doing things real time here. So sorry for the delay a few minutes late. So the way let's talk about things, the reason we're here and we're going around the room and introduce everybody. Got three special guests here. I got my evil or my john and the normal And we're going to talk about get ops I called it future office just because I want to think about what's the next thing for that at the end, we're gonna talk about what our ideas for what's next for getups, right? Um, because we're all starting to just get into get ups now. But of course a lot of us are always thinking about what's next? What's better? How can we make this thing better? So we're going to take your questions. That's the reason we're here, is to take your questions and answer them. Or at least the best we can for the next hour. And all right, so let's go around the room and introduce yourself. My name is Brett. I am streaming from Brett from that. From Brett. From Virginia Beach in Virginia beach, Virginia, United States. Um, and I talk about things on the internet, I sell courses on you, to me that talk about Docker and kubernetes Ive or introduce yourself. >>How's it going? Everyone, I'm a software engineer at axel Springer, currently based in Berlin and I happen to be Brett Brett's teaching assistant. >>All right, that's right. We're in, we're in our courses together almost every day. Mm john >>hey everyone, my name is john Harris, I used to work at Dhaka um, I now work at VM ware is a star field engineer. Um, so yeah, >>and normal >>awesome by the way, you are streaming from Brett Brett, >>I answered from breath to breath. >>Um I'm normal method. I'm a distinguished engineer with booz allen and I'm also a doctor captain and it's good to see either in person and it's good to see you again john it's been a little while. >>It has the pre covid times, right? You're up here in Seattle. >>Yeah. It feels, it feels like an eternity ago. >>Yeah, john shirt looks red and reminds me of the Austin T shirt. So I was like, yeah, so we all, we all have like this old limited edition doctor on E. >>T. That's a, that's a classic. >>Yeah, I scored that one last year. Sometimes with these old conference church, you have to like go into people's closets. I'm not saying I did that. Um, but you know, you have to go steal stuff, you to find ways to get the swag >>post post covid. If you ever come to my place, I'm going to have to lock the closets. That >>that's right, That's right. >>So the second I think it was the second floor of the doctor HQ in SAn Francisco was where they kept all the T shirts, just boxes and boxes and boxes floor to ceiling. So every time I went to HQ you just you just as many as you can fit in your luggage. I think I have about 10 of these. You >>bring an extra piece of luggage just for your your shirt shirt grab. Um All right, so I'm going to start scanning questions uh so that you don't have to you can you help you all are welcome to do that. And I'm going to start us off with the topic. Um So let's just define the parameters. Like we can talk about anything devops and here we can go down and plenty of rabbit holes. But the kind of, the goal here is to talk about get ups and get ups if you haven't heard about it is essentially uh using versioning systems like get like we've all been getting used to as developers to track your infrastructure changes, not just your code changes and then automate that with a bunch of tooling so that the robots take over. 
And essentially you have get as a central source of truth and then get log as a central source of history and then there's a bunch of magic little bits in the middle and then supposedly everything is wonderful. It's all automatic. The reality is is what it's often quite messy, quite tricky to get everything working. And uh the edges of this are not perfect. Um so it is a relatively new thing. It's probably three, maybe four years old as an official thing from. We've uh so we're gonna get into it and I'll let's go around the room and the same word we did before and um not to push on that, put you on the spot or anything. But what is, what is one of the things you either like or either hate about getups um that you've enjoyed either using it or you know, whatever for me. I really, I really love that I can point people to a repo that basically is hopefully if they look at the log a tracking, simplistic tracking of what might have changed in that part of the world or the environment. I remember many years past where, you know, I've had executive or some mid level manager wants to see what the changes were or someone outside my team went to see what we just changed. It was okay, they need access to this system into that dashboard and that spreadsheet and then this thing and it was always so complicated and now in a world where if we're using get up orbit bucket or whatever where you can just say, hey go look at that repo if there was three commits today, probably three changes happened. That's I love that particular part about it. Of course it's always more complicated than that. But um Ive or I know you've been getting into this stuff recently. So um any thoughts? Yeah, I think >>my favorite part about get ops is >>reproducibility. Um >>you know the ability to just test something and get it up and running >>and then just tear it down. >>Uh not >>being worried that how did I configure it the first time? I think that's my favorite part about >>it. I'm changing your background as we do this. >>I was going to say, did you just do it get ups pushed to like change his >>background, just a dialogue that different for that green screen equals false? Uh Change the background. Yeah, I mean, um and I mean I think last year was really my first year of actually using it on anything significant, like a real project. Um so I'm still, I still feel like I'm very new to john you anything. >>Yeah, it's weird getups is that thing which kind of crystallizes maybe better than anything else, the grizzled veteran life cycle of emotions with the technology because I think it's easy to get super excited about something new. And when I first looked into get up, so I think this is even before it was probably called getups, we were looking at like how to use guest source of truth, like everything sounds great, right? You're like, wait, get everyone knows, get gets the source of truth, There's a load of robust tooling. This just makes a sense. If everything dies, we can just apply the get again, that would be great. Um and then you go through like the trough of despair, right? We're like, oh no, none of this works. The application is super stateless if this doesn't work and what do we do with secrets and how do we do this? 
Like how do we get people access in the right place and then you realize everything is terrible again and then everything it equalizes and you're kind of, I think, you know, it sounds great on paper and they were absolutely fantastic things about it, but I think just having that measured approach to it, like it's, you know, I think when you put it best in the beginning where you do a and then there's a magic and then you get C. Right, like it's the magic, which is >>the magic is the mystery, >>right? >>Magic can be good and bad and in text so >>very much so yeah, so um concurrence with with john and ever uh in terms of what I like about it is the potential to apply it to moving security to left and getting closer to a more stable infrastructures code with respect to the whole entire environment. Um And uh and that reconciliation loop, it reminds me of what, what is old is new again? Right? Well, quote unquote old um in terms of like chef and puppet and that the reconciliation loop applied in a in a more uh in a cleaner interface and and into the infrastructure that we're kind of used to already, once you start really digging into kubernetes what I don't like and just this is in concurrence with the other Panelist is it's relatively new. It has um, so it has a learning curve and it's still being, you know, it's a very active um environment and community and that means that things are changing and constantly and there's like new ways and new patterns as people are exploring how to use it. And I think that trough of despair is typically figuring out incrementally what it actually is doing for you and what it's not going to solve for you, right, john, so like that's that trough of despair for a bit and then you realize, okay, this is where it fits potentially in my architecture and like anything, you have to make that trade off and you have to make that decision and accept the trade offs for that. But I think it has a lot of promise for, for compliance and security and all that good stuff. >>Yeah. It's like it's like the potentials, there's still a lot more potential than there is uh reality right now. I think it's like I feel like we're very early days and the idea of especially when you start getting into tooling that doesn't appreciate getups like you're using to get up to and use something else and that tool has no awareness of the concept so it doesn't flow well with all of the things you're trying to do and get um uh things that aren't state based and all that. So this is going to lead me to our first question from Camden asking dumb questions by the way. No dumb questions here. Um How is get apps? Not just another name for C. D. Anybody want to take that as an answer as a question. How is get up is not just another name for C. D. I have things but we can talk about it. I >>feel like we need victor foster kids. Yeah, sure you would have opinions. Yeah, >>I think it's a very yeah. One person replied said it's a very specific it's an opinionated version of cd. That's a great that's a great answer like that. Yeah. >>It's like an implement. Its it's an implementation of deployment if you want it if you want to use it for that. All right. I realize now it's kind of hard in terms of a physical panel and a virtual panel to figure out who on the panel is gonna, you know, ready to jump in to answer a question. But I'll take it. So um I'll um I'll do my best inner victor and say, you know, it's it's an implementation of C. D. And it's it's a choice right? 
One can still just do docker build and docker push and docker pull, and that's fine, or use other technologies to deploy containers and pods and change your Kubernetes infrastructure. But GitOps is a different implementation, a different method of doing that same thing at the end of the day. >>I like it. And I think that goes back to your point about it being kind of early days still. To me, what I like about GitOps in that respect is that it's nice to see Kubernetes become a platform where people are experimenting with different ways of doing things, right? That encourages lots of different patterns, and overall that's going to be a good thing for the community, because not everything needs to settle on only one way of doing things — a lot of different ways of doing things helps people fit the tooling to their needs, or fit Kubernetes to their needs, etcetera. >>I agree with that. Since we're getting a load of good questions, I want to add to that real quick, something from the Weaveworks people themselves, because I've had some of them on the show. One of the things I see as distinguishing: continuous deployment tools, I sort of think of as the previous generation, and they can be anything. We would consider Jenkins CD, right? If it connected to a server and did a docker pull and a docker compose up, or did a kubectl apply from inside an SSH tunnel or something like that, that was considered CD. Well, GitOps is much more rigid, I think: you have a specific repo that's all about your deployments, and, depending on what tool you're using, your commit to a specific repo — or to a specific branch of that repo, depending on how you set it up — is what kicks off a workflow. And then, secondly, there's an understanding of state. A lot of these tools now have reconciliation, where they look at the cluster, and if things are changing they will actually go back to Git, and the robots will take over and commit: hey, this thing has changed, and maybe you, human, didn't change it — something else might have changed it. So I think that's how GitOps approaches it: we need to consider more than just a couple of commands run in a script; there needs to be more than that for a GitOps repo to work. Anyway, that's the takeaway I took from a previous conversation with some people. >>I don't want that last piece to get lost, because it's really important, right? For me, CD — like CI/CD — is more a philosophical idea, a set of principles: getting an idea or a code change to environments and promoting it. It's very pipeline-driven and very imperative-driven, right? Our existing CD tools, and a lot of the ways people think about CD, are triggered by an event, maybe a code push, and then these other things happen in sequence until they either fail or pass, and then we're done. GitOps is very much sitting on the reconciliation side; it's changing to a pull-based model of reconciliation. It's very declarative: it's just looking at the state and automatically pulling changes when they happen, rather than this imperative, trigger-driven model. That's not to say there aren't CD tools that do pull-based delivery, or that GitOps is doing anything creatively revolutionary here, but I think those are the main ideas being introduced into existing CD tools and pipelines: the pull-based model and the reconciliation model, which have a lot in common with Kubernetes and how those controllers work. I think that's the key idea. Yeah.
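To make that pull-based, declarative model concrete, here is a minimal sketch of what desired state living in Git can look like. It assumes Argo CD; the repository URL, paths, and names are invented placeholders rather than anything the panel actually runs:

```yaml
# Hypothetical Argo CD Application: the desired state lives in Git,
# and the in-cluster controller continuously pulls and reconciles it.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-service            # placeholder application name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/deploy-configs.git  # placeholder repo
    targetRevision: main      # the branch that represents this environment
    path: apps/my-service     # only this directory is watched
  destination:
    server: https://kubernetes.default.svc
    namespace: my-service
  syncPolicy:
    automated:
      prune: true             # delete resources removed from Git
      selfHeal: true          # revert manual kubectl changes (the reconciliation loop)
```

Nothing in this file runs a pipeline; the controller inside the cluster notices drift between Git and the cluster and pulls things back in line, which is the reconciliation loop the panel keeps coming back to.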
>>This is a pretty specific one. Tory asks: does anyone have opinions about GitOps in a monorepo? This is getting into religion a little bit — how many repos are too many repos? Any thoughts on that, anyone, before I rant? >>Go for it, go for it. >>Yeah, here's how I'm using it right now in a monorepo: I'm using GitHub, so you have the workflow, and inside the workflow YAML file I track changes to the workflow itself, as well as to a folder, which is basically some sort of service in the monorepo. If any of those things changes, it triggers the actual pipeline to run. That's the simplest thing I could figure out to get it set up, using GitHub's workflow paths feature, and it's worked for me so far. >>Yeah, a lot of these things, like the monorepo discussion, are very tool-specific. Each tool has various levels of support for branching, for different repos, and for looking at the diffs in subdirectories to see if there are changes in that specific directory. Sorry, John, you were going to say something. >>I was just going to say, I've never really done it, but I imagine the same kinds of downsides of monorepo versus multiple repos would exist here. You've got the blast-radius issues; you've got: how big is the monorepo? Does the tool have to pull or cache that every time it needs to determine a diff? What is the support for only looking at directories? I think we could get way down into a deeper conversation — maybe we'll save it for later — about how we structure our Git repos for GitOps. Do we have a super granular repo per environment, per app, per cluster, per whatever? Or do we have directories per environment, or branches per environment? How is everything organized? It's going to be one of those there's-never-one-size-fits-all things, so I'll give the classic consultant "it depends" answer, right? >>Yeah, for sure. It's very similar to the code struggle, because it depends. >>Right. >>Yeah, it's similar to the problem of teams trying to figure out how many repos to use for their code. Should they do microservices, semi-microservices, macroservices? Because too many repos means you're doing a bunch of repo management, a bunch of changes on your local system, constantly git pulling all these different things; but if you have one big repo, then it's a huge monolithic thing you have to deal with, plus the path-based issues for tools that only need to look at a specific directory. And I keep going back to this: it's a culture thing. What does your team prefer? What's painful for everyone, and what's the loudest pain you need to deal with? Is it repo management? Or is it that everyone's in one place and it's really hard to keep too many cooks out of the kitchen, which is a monorepo problem, you know?
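The monorepo trigger described above — only run the pipeline when one service's folder or the workflow file itself changes — looks roughly like this. It's a sketch assuming GitHub Actions; the service path, image name, and registry are made-up placeholders, not the panelist's actual setup, and registry authentication is omitted for brevity:

```yaml
# Hypothetical workflow: runs only when this service's folder
# (or the workflow file itself) changes in the monorepo.
name: build-billing-service
on:
  push:
    branches: [main]
    paths:
      - "services/billing/**"                      # the one service in the monorepo
      - ".github/workflows/build-billing.yml"      # changes to the workflow itself
jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Build image
        run: docker build -t registry.example.com/billing:${{ github.sha }} services/billing
      - name: Push image                           # assumes a prior docker login step
        run: docker push registry.example.com/billing:${{ github.sha }}
```

Changes anywhere else in the monorepo leave this pipeline untouched, which is the "simplest thing that works" the panelist is describing.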
How do we handle security? This is a great one from Tory again — another great question, back to back, and that's the first time we've done that. Security as it pertains to GitOps: anyone who can commit can change the infrastructure, yes? >>Yes. So the tooling you have for your Git repo, and the authentication, authorization, and permissions you apply to it using a Git server like GitHub or GitLab or whatever your flavor of the day is — that is how security is handled with respect to changes in your GitOps configuration repository. That's completely specific to your implementation: how you're handling the Git repositories that the GitOps tooling looks at to reconcile changes. Then, with respect to the permissions of the — for lack of a better term — robot itself: GitOps tooling like Flux or Argo CD will create a user or a service account, or other kinds of authentication measures, to limit the permissions of the service account the tooling needs in order to read the repos, send commits, and so on. So that is well within the realm of what you already have for your Git repo.
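On the cluster side of that "robot," the scoping is ordinary Kubernetes RBAC. Here is a rough, hedged sketch of what a namespaced, least-privilege identity for a GitOps controller could look like — the names are placeholders, and the real objects that Flux or Argo CD install for themselves differ per tool and per version:

```yaml
# Hypothetical service account for a GitOps controller, limited to one team's namespace.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: gitops-deployer
  namespace: team-a
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: gitops-deployer
  namespace: team-a
rules:
  - apiGroups: ["", "apps"]
    resources: ["deployments", "services", "configmaps"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: gitops-deployer
  namespace: team-a
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: gitops-deployer
subjects:
  - kind: ServiceAccount
    name: gitops-deployer
    namespace: team-a
```

Who can merge to the repo stays a Git-server concern; what the controller can touch once the merge lands is a cluster RBAC concern.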
>>Yeah. A related question is from AG: what they like about GitOps, done nicely, is that a newbie can get stuff done easily; what they dislike is that when you have too many Git repos it just becomes too complicated. And I agree. I was joking with a team the other week that a developer used to make one commit and pass it on to a QA team, which would eventually merge it into master — the commits went to feature branches or whatever. But now they make a commit and a PR for their code, then they go make a PR in the Helm chart to update the version, and then they go make a PR in the GitOps repo for Argo. So they're doing probably four or five PRs just to get their code into production. We were talking about the negative of that, but the reality was: it's just four or five PRs — it wasn't five different systems with five different methodologies and tooling. So yeah, it's kind of a pain in the Git sense, but you're also dealing with one kind of thing. It's a repetitive action, but I don't have to go to five different systems with five different ways of doing it — one in the web, one in a client, one on a command line I can't remember. So it's got pros and cons. >>I think when you get to the scale where those kinds of issues are a problem, you're probably at the scale where you can afford to invest some time into automating that, right? When I've seen this at larger customers or larger organizations, they're at the stage where apps are coming up all the time. There's a 10x or 100x ratio of developers to the operations folks who may be creating Git repos and setting up permissions, so that stuff gets automated — maybe through ticket-based systems, or whatever. Developers say "I need a new app" and it templates things out. Or, more often, using the same model of reconciliation and operators — and the horrific abuse of CRDs we're seeing in the Kubernetes community right now — developers can create a CRD which just says, hey, I'm creating a new app called app A, and a controller will pick up that app A definition. It will create a Git repo programmatically, it will look up the developers and add the permissions they need to get to that repo, and it will automatically create and template the namespaces it needs in the clusters and environments it needs, depending on some metadata it might read. So those are definite problems, and they're definitely a teething, growing-pain thing, but once you get to that scale you need to step back and say, look, we just need to invest time into the operational aspect of this and automate the pain away. >>Yeah. And that ultimately ends in custom tooling, which is hard to avoid at scale. I mean, there are almost two conversations here, right? There's what I call the solo admin, solo DevOps — I bought the domain solodevops.com — because whenever I'm talking at DockerCon in the real world, I ask people to raise their hands (I don't know how we can raise hands here), and I ask how many of you are the sole person responsible for deploying the app your team makes, and about a quarter of the room raises their hand. I call that solo DevOps. That person can't build all the custom tooling in the world, so they really need Docker-like solutions where it's opinionated, the workflow is built in, and they don't have to wrangle things together with a bunch of glue — in other words, bash. And so this brings us to a question from Lee: how do you combine GitOps with CI/CD, especially the continuous bit? How do you avoid having a human — this is the complaint the team I was working with had — how do you avoid a human editing and git committing for every single deploy? They settled on customized templates and a script for routine updates. So as a seed for this conversation, instead of that specific question, because it's a little open-ended, tell me whether you agree with this: I look at the image artifact — because the Docker image, or container image in general, is an artifact, and I view it that way — and that thing going into the registry with the right tag is, to me, one of the great demarcation points of "we're done with CI and we're now into the deployment phase." It doesn't necessarily mean the tooling has a clean cut there, but that artifact is shipped in a specific way, or promoted, as we sometimes say. What do you think? Does anyone have opinions on that? I don't even know if that's the right opinion to have.
>>So I think what you're getting at is that GitOps models can trigger the reconciliation loop off of different events. One way is if it notices an image change in the registry; the other is if there's a commit event on a specific repo and branch. It's up to the person implementing their GitOps model which event to trigger the reconciliation loop off of — you can do both, or one or the other. It also depends on the templating engine you're using on top of Kubernetes, such as Helm or the other ones out there, or, if you're not doing that, straight YAML. So it kind of just depends, but those are typically the two options, or a combination of them. You can also trigger it manually, right? You can go to the command line and force a scan, a fresh reconciliation loop. So — I hate to say it — it depends on what you're trying to do and what makes sense in your pipeline. If you're set up to work off of image tags, then you probably want to use GitOps in a way that uses image tags and the pattern you've established there. If you're not really doing that and you're more in a model where different branches map to different environments, then trigger off the correct branch. And that's where permissions also come into play: if you don't want someone to touch production, and the GitOps for your production cluster is based off of, say, a main branch, then whoever can push a change to that main branch has the authority to push that change to production. So that's your authentication and permissions system — and the same goes for the registry itself. >>Yeah. Sorry, anyone else have thoughts on that? I was about to go to the next topic. >>I was going to say, I think certain tools dictate the approach. Like, if you're using Argo CD — correct me if I'm wrong — I think the only way to use it right now is through manifest changes: it looks at a specific directory, and if anything changes, it does its thing and synchronizes the cluster with whatever's in Git. >>Yeah, Flux has both. So it kind of depends. I think you can make Argo do that too, but this is back to what we were saying at the beginning: these things are changing, right? That might be what it is right now in terms of triggering the reconciliation loops in GitOps tooling, but there might be other events in the future that trigger it. And it's not completely standalone, because you still need your tooling to do any kind of testing or whatever else is in your specific pipeline. So oftentimes you're bolting GitOps into some other part of a broader CI/CD solution. That makes sense. Yeah.
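For the image-tag-driven variant just described — and for Lee's question about avoiding a human commit on every deploy — one hedged sketch is Flux v2's image automation controllers, which watch the registry and write the new tag back to Git themselves. The image, repo, and path names below are placeholders, and the exact fields can differ between Flux versions, so treat this as an illustration of the pattern rather than the panel's configuration:

```yaml
# Hypothetical Flux image automation: registry changes become Git commits,
# and the normal GitOps reconciliation then rolls them out.
apiVersion: image.toolkit.fluxcd.io/v1beta1
kind: ImageRepository
metadata:
  name: billing
  namespace: flux-system
spec:
  image: registry.example.com/billing   # placeholder image to scan
  interval: 1m
---
apiVersion: image.toolkit.fluxcd.io/v1beta1
kind: ImagePolicy
metadata:
  name: billing
  namespace: flux-system
spec:
  imageRepositoryRef:
    name: billing
  policy:
    semver:
      range: ">=1.0.0"                   # pick the newest tag in this range
---
apiVersion: image.toolkit.fluxcd.io/v1beta1
kind: ImageUpdateAutomation
metadata:
  name: billing
  namespace: flux-system
spec:
  interval: 5m
  sourceRef:
    kind: GitRepository
    name: flux-system
  git:
    commit:
      author:
        name: fluxcdbot
        email: fluxcdbot@example.com
      messageTemplate: "chore: bump billing image [ci skip]"
    push:
      branch: main
  update:
    path: ./apps/billing                 # manifests carry image-policy marker comments
    strategy: Setters
```

The "robot commit" the panel mentioned earlier is exactly this: the controller, not a human, edits the manifest and pushes it, so Git still holds the record of what changed.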
>>We've got a lot of questions about secrets — people keep asking about secrets. >>My tongue-in-cheek answer to the secrets question is: what are the best practices for Kubernetes secrets? That's the same thing for secrets with GitOps. GitOps, last time I checked and last time I was running this stuff, has nothing to do with secrets in that sense — it's just there to get your stuff running on Kubernetes. So there's probably a really good session on secrets at DockerCon. >>I would agree with you. Yeah — with GitOps tools, every project of mine handles secrets differently. And talking to someone recently — I'm very bullish on GitHub Actions, I love GitHub Actions; it's not great for deployments yet, but we do have this new thing, GitHub Environments I think it's called, which at least lets me store secrets per environment, a concept it didn't have before. Because if any of you are running Kubernetes out there, when you start running Kubernetes you typically end up with more than one — you're going to end up with a lot of clusters at some point, at least more than two. So you do have to store secrets somewhere, and there's a discussion happening in chat right now about Sealed Secrets. If you haven't heard of it, go look it up and be versed on what Sealed Secrets is, because it's a fantastic concept for how to store secrets in public. I love it because I'm a big PKI nerd, but it's not the only way and it doesn't fit all models. I have clients that use AWS Secrets Manager because they're in AWS, and they just use the Kubernetes External Secrets approach. But again, that doesn't really affect GitOps: GitOps is just applying whatever Helm charts or YAML or images you're deploying. GitOps is more about the approach — when the changes happen, and whether it's a push or pull model, like we were talking about. >>I would say there are a bunch of prerequisites to GitOps, secrets being one of them, because the risk of putting a secret into your Git repo, if you haven't figured out your Kubernetes secrets architecture before diving into GitOps, is high — and removing secrets from Git repos could be its own industry, right? >>It's a thing. >>How do I hide this? How do I obscure this commit that's already on a dozen machines? >>So there are some prerequisites for when you're ready to adopt GitOps — I think that's the right way to answer it — secrets being one of them. >>I think secrets were the thing that, two or three years ago, gave me the aha moment with GitOps. The premier thing everyone used to say about GitOps, about why it was great, was that it's the single source of truth: there's no state anywhere else, you just look at Git. And then with secrets you realize, along with a bunch of other things down the line, that that is not true and will never be true. So as soon as you can lose the dogmatism about "everything is going to be in Git," it's fantastic — as long as you've understood that everything is not going to be in Git. There are things which will absolutely never be in Git, and some tools just don't deal with that; they need to own their own state, and especially in Kubernetes, some controllers own their own state. Sealed Secrets, and other projects like SOPS — I think there are two or three others — are a great way of dealing with secrets if you want to keep them in Git.
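For anyone following along in chat, roughly what Sealed Secrets looks like: you encrypt the secret against the cluster's public key (typically with the kubeseal CLI) and commit only the sealed form; the controller inside the cluster decrypts it back into a regular Secret. This is a hedged sketch — the names are placeholders and the ciphertext below is obviously fake:

```yaml
# Hypothetical SealedSecret: safe to commit, because only the controller
# holding the private key inside the cluster can decrypt it.
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: db-credentials
  namespace: team-a
spec:
  encryptedData:
    username: AgBy3i4OJSWK...   # placeholder ciphertext produced by kubeseal
    password: AgBQhNX1k9mP...   # placeholder ciphertext produced by kubeseal
  template:
    metadata:
      name: db-credentials
      namespace: team-a
```

The plaintext never lands in the repo, which is what makes the "single source of truth" story survive contact with secrets at all.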
But, you know, projects like Vault are more what I would call production-grade secret strategies, right? And if you're in AWS or another cloud, you're more likely to be using their secrets service. Your secrets policy is maybe not even dictated by you — in large organizations it might be dictated by the CISO or by security. So I think if you're trying to adopt GitOps, or you're thinking about it, get the dogmatism of Git as the single point of truth out of your mind and think about GitOps more as a philosophy and a set of best-practice principles; then you will be in much better stead. >>Right. Yeah. People are asking more questions in chat, like: is infrastructure as code plus CD — or CI, rather — essentially GitOps? These are all great questions and part of the debate. I'm actually going to throw something up on screen and put it in chat, because this, to me, is the source, right? Weaveworks — they coined the term. If we talk about the history for a minute, and tell me if I'm getting this right: a lot of us were trying to automate all these different parts of the puzzle, but some things were infrastructure as code and some things weren't. Some things were sort of settings-as-code — like going into Jenkins and typing in secrets and settings — and that wasn't really in Git. What Weave was trying to go for was a way to eventually have almost a two-way state understanding, where Git might change your infrastructure, but your infrastructure might also change and needs to be reflected back into Git, if Git is trying to be the single source of truth. And like you're saying, the reality is you're never going to have one repo that holds all of your infrastructure — you'd have to have all your Terraform and everything else you're spinning up in there. But anyway, I'm going to put this link in chat. One of the things this guide talks about is what GitOps is not, so it's great to read through the different requirements. And like I was saying a while ago: having CI, having infrastructure as code, and trying a little bit of continuous deployment is probably a prerequisite for GitOps. It's hard to jump into it when you don't already have infrastructure as code, because a machine doing stuff on your behalf means you have to have things documented somewhere in a Git repo. But let me put this in the chat. I'd like to know if the other panelists agree, but I think GitOps is a moderate-level thing — it's not a beginner-level Kubernetes thing, it's moderate, a little more advanced. You can start off using it, but you definitely have to have some prereqs in place, or some understanding of a pattern in place. So what do the other folks think about that opinion? >>I think if you're trying to use GitOps before you know what problem you have, you're probably going to be in trouble, right? It's having a solution to a problem you probably don't have yet. I mean, if you're on your own and you're just typing kubectl apply, you're one person, right? GitOps doesn't seem like a big jump — like, why would I do that? Instead of typing a git commit, I'm typing kubectl apply.
But I think one of the rules from Weave is that none of your developers and none of your admins can have kubectl access to the cluster — because if you do have access and you can just apply something, then that's just infrastructure as code, that's just continuous deployment; that's not really GitOps. GitOps implies that the only way things get into the cluster is through the GitOps automation you're using — Flux, Argo, or, we haven't talked about it, what's the other one Viktor Farcic talks about? By the way, people are asking about Viktor, because Viktor would love to talk about this stuff, but he's in my next live session — so come back in an hour and a half or whatever, and Viktor will be talking sysadmin-less stuff with me. >>You've got to ask him nothing but GitOps questions in the next one. >>Confuse him, confuse him. But anyway, it's hard to understand without having tried it; I think conceptually it's a little challenging. >>One thing with GitOps, especially based off the Weaveworks blog post you just put up there: it's an opinionated way of doing something. It's an opinionated way of delivering changes to an environment, to your Kubernetes environment. We're often not used to seeing things that are this opinionated in the ecosystem, but GitOps is an opinionated thing — it's one way of doing it. There are ways to change it, and there are options, like what we were discussing in terms of the triggering events, but the way it's structured is opinionated both from a tooling perspective — using Git, etcetera — and from a DevOps cultural perspective, right? Like you were saying about not letting anyone use kubectl to change the cluster directly. That's a philosophical opinion GitOps forces you to adopt, otherwise it kind of breaks the model, and I just want everyone to understand that it is very opinionated in that sense. >>Pulumi is another thing — infrastructure as code. Someone's mentioning Pulumi in chat. Quick self-plug: I have my own live show, I'm on YouTube every week — these are my friends and I do the same kind of thing — and I had Pulumi on in the last couple of weeks, and we talked about their infrastructure-as-code solution where you're actually writing code. It's an interesting take: the developer team sort of owning the infrastructure through code rather than YAML as a data language. I don't really have an opinion on it yet because I haven't used it in production or anything in the real world, and I'm not sure how far they're trying to go towards the GitOps stuff. I'll also do a plug for Solomon Hykes, who had a session at the beginning of the day — it's already happened, so you can go back and watch it. It's called "Rethinking application delivery with CUE and BuildKit" — go look it up. That's the co-founder of Docker and former CTO, Solomon Hykes, at the beginning of the day. He has a tool called Dagger. I'm not sure why the title of the talk is about delivering with CUE and BuildKit, because the tool he shows off in there for an hour is called Dagger, and it's an interesting idea on how to apply a lot of this opinionated, automated stuff to deployment. It's GitOps-based, and you use the CUE language.
It's a graph language. I watched most of it, and it was a really interesting take. I'm excited to see if that takes off, because it's another way to get a little more advanced with your Git-driven deployments without sticking everything in YAML, which is kind of where we are today with Helm charts and whatnot. All right — more questions about secrets. I don't think we're going to get a lot more out of secrets, basically: put secrets in your cluster to start with, in Kubernetes, encrypted somehow, and then, as it gets harder, find another solution — when you have five clusters you don't want to do it five times, and that's when you go for Vault or AWS Secrets Manager and all that. >>Right. I'm going to put them on a Post-it note and cram it into the cluster. Just kidding. >>Yes, there are recordings of this; these will all be on YouTube later. Someone in chat is saying detect-secrets or GitGuardian are absolute requirements — I think that's in reference to your secrets comment earlier. Someone is also asking about Kubernetes dropping support for Docker. This isn't the place for that, but basically it's a non-event: Mirantis has made that same plugin available in a different repo, so if you want to keep using Docker with Kubernetes, you can — it's no big deal. Most of us aren't using Docker as the runtime in our Kubernetes clusters anyway; we're using containerd or whatever our provider gives us. And thank you so much for all these comments — great people helping each other in chat. I feel like we're just here to make sure the chat is available so people can help each other. >>I want to pick up on something from when you mentioned Pulumi. We're talking about GitOps, and I think the origin of it was deploying applications to clusters, right — picking up deployment manifests. But with Pulumi, and obviously Terraform and things that have been around a long time, folks are starting to apply this more broadly. I found one earlier called Kubestack, a Terraform GitOps framework. And with the advent of things like Cluster API in the Kubernetes space, where you can declaratively build the infrastructure for your clusters and build the cluster itself, we're not just talking about deploying applications: Cluster API will talk to AWS, spin up VPCs, spin up machines — the same kinds of things Terraform and those other tools do. So applying GitOps principles to the infrastructure spin-up — the proper infrastructure-as-code stuff, constantly applying Terraform plans, constantly applying Cluster API resources, spinning up stuff in those clouds — that's a super interesting extension of this area. I'd be curious what the folks think about that. >>Yeah, that's why I picked this topic as one of my three — I got to pick the topics, and these were the three things I thought were the most bleeding-edge and exciting, where we basically haven't figured it all out yet as an industry, so I think we're going to see more ideas on it. What's the one with the popsicle as the icon that Viktor talks about all the time?
It's another GitOps-like tool, but it's GitOps where you use this Kubernetes... we'll have to look it up. >>You're talking about Crossplane. >>So my wife is over here with the sound effects, and the first sound effect of the day she chooses to use is that one. >>All right, let's find another question, Bret. >>I'm searching — there are so many of them. All right, one really quick one: is GitOps only for Kubernetes? I think the two main tools we're talking about, Argo CD and Flux, are mostly geared toward Kubernetes deployments, but it seems like they're organized so that there's a clean abstraction between the agent doing the deployment and the tooling it can interact with. So I would imagine — and this might be true already — that GitOps could be applied to other types of deployments at some point. But right now it mostly treats Kubernetes as the first-class citizen, or the tooling on top of Kubernetes, say something like Helm, as a first-class citizen. >>Back to you, Bret — the thing I was looking for is Crossplane. That's another tool. Viktor has been sharing a lot about Crossplane on YouTube, and it basically runs inside Kubernetes, but it handles your other infrastructure besides your app. It lets you GitOps your AWS stuff by using the Kubernetes state engine as the way to manage that. I haven't used it yet, but he does some really great demos on YouTube. So people are liking this idea of GitOps, and they're trying to figure out: how do we manage state? Because the problem with Terraform — well, there are many problems, there are always a lot of problems — is that in the GitOps world it's not quite the right fit yet. It might be, but it's still largely expected that a person types the command, and it keeps state locally, or in S3, and all that. And the other thing I'm realizing now, going back to the Solomon Hykes demo: he was showing it deploying something to S3 buckets and to other services beyond Kubernetes, and saying it's all a GitOps approach. So I think we're just at the very beginning, because it all started with Kubernetes, and now there's a Swarm one — you can look up Swarm GitOps; swarm-sync I think it's called, swarm-sync on GitHub — which lets you do Swarm-based GitOps-like things. And now we're seeing these other tools come out saying, we're going to do the GitOps concepts, but not for Kubernetes specifically. Infrastructure as code started in certain corners of the world, and now we all just assume you'll have an infrastructure-as-code way of doing whatever it is; I think GitOps will follow the same path, where pretty soon we'll have GitOps for all the cloud stuff, and it won't just be Flux or Argo. And then the interesting question is: will Flux and Argo support all those things, or will they stay focused on Kubernetes apps? >>There's also — and I think this is what you're alluding to —
a trend of using Kubernetes and CRDs to provision and control things that are outside of Kubernetes — like the cloud service providers' services — as if they were first-class entities within Kubernetes, so you can use the Kubernetes-focused tooling, through the Kubernetes interface, for things that are not Kubernetes. >>Yeah, exactly. >>Yeah, I was just going to say, that sounds like Crossplane. >>Yeah. I mean, for the last couple of years it's been Flux and Argo going back and forth. They're like frenemies, you know, and they've been iterating on these ideas of how to manage this complicated thing that is many Kubernetes clusters. Because Argo — I don't know if Flux v2 can do this — but Argo can manage multiple clusters now from one cluster, so you can manage other clusters, technically external things, from a single entity. Originally Flux couldn't do that, but I'm going to say that v2 can — I don't actually know.
Like there's a chicken and egg issue there where if you're going to use, if you're going to use our cross plane for your other infrastructure, but it's gotta, but it has to run on kubernetes who creates that first kubernetes in order for you to put that on there. And victor talks about one of his videos, the same problem with flux and Argo where like Argo, you can't deploy Argo itself with getups. There has to be that initial, I did a thing with, I'm a human and I typed in some commands on a server and things happened but they don't really have an easy deployment method for getting our go up and running using simply nothing but a get push to an existing system. There's something like that. So it's a it's an interesting problem of day one infrastructure which is again only day one, I think data is way more interesting and hard, but um how can we spend these things up if they're all depending on each other and who is the first one to get started? >>I mean it's true of everything though, I mean at the end of that you need some kind of big bang kind of function too, you know, I started running start everything I >>think without going over that, sorry, without going off on a tangent. I was, I was gonna say there's a, if folks have heard of kind which is kubernetes and Docker, which is a mini kubernetes cluster, you can run in a Docker container or each container will run as a as a node. Um you know, that's been a really good way to spin up things like clusters. KPI because they boot strap a local kind, install the manifests, it will go and spin up a fully sized cluster, it will transfer its resources over there and then it will die itself. Right? So that, that's kind of bootstrapping itself. And I think a couple of folks in the community, Jason to Tiberius, I think he works for Quinyx metal um has, has experimented with like an even more minimal just Api server, so we're really just leveraging the kubernetes ideas of like a reconciliation loop and a controller. We just need something to bootstrap with those C R D s and get something going and then go away again. So I think that's gonna be a pattern that comes up kind of more and more >>Yeah, for sure. Um, and uh, the next, next quick answer to the question, Angel asked what your thoughts on getups being a niche to get or versus others vcs tools? Well, if I knew anyone who is using anything other than get, I would say no, you know, get ops is a horrible name. It should just be CVS office, but that doesn't or vcs ops or whatever like that, but that doesn't roll off the tongue. So someone had to come up with the get ups phrase. Um but absolutely, it's all about version control solutions used for infrastructure, not code. Um might get doctor asks a great question, we're not gonna have time for it, but maybe people can reply and chat with what they think but about infrastructure and code, the lines being blurred and that do develop, how much of infrastructure does developer do developers need to know? Essentially, they're having to know all the things. Um so unfortunately we've had way more questions like every panel here today with all the great community, we've got way more questions we can handle in this time. So we're gonna have to wrap it up and say goodbye. Go to the next live panel. I believe the next one is um on developer, developer specific setups that's gonna be peter running that panel. Something about development in containers and I'm sure it's gonna be great. Just like this one. 
So let's go around the room: where can people find you on the internet? I'm @BretFisher on Twitter — that's where you can usually find me most days. You are? >>Yeah, I'm on Twitter too — I'll put it in the chat; it's kind of confusing to say out loud. >>Okay, yeah, that's right, you can't just say it. You can also look below the video — our faces are there, and if you click on them it shows our Twitter and LinkedIn and so on. John? >>johnharris85, pretty much everywhere: GitHub, Twitter, Slack, etc. >>Yeah. >>And Nirmal — @normalfaults — or just, you know, living on YouTube Live with Bret. >>Yeah, we're all on Twitter, so go check us out there, and thank you so much for joining. Thank you all for being here — I really appreciate you taking time out of your busy schedules to join me for a little chit-chat. Yes, all the cheers, yes. >>And I think this GitOps loop has been declaratively reconciled. >>Yeah, there we go. And with that, ladies and gentlemen, we bid you adieu. We will see you in the next round, coming up next with Peter. >>Bye.
LIVE Panel: Container First Development: Now and In the Future
>>Hello, and welcome. Very excited to see everybody here. DockerCon is going fantastic — everybody's engaging in the chat, and it's awesome to see. My name is Peter McKee, I'm the head of developer relations here at Docker, and today we're going to be talking about container first development, now and in the future. But before we do that, a couple of little housekeeping items. First of all, yes, we are live. So if you're in our session, you can go ahead and chat and ask us questions — we'd love to get all your questions and answer them. If you come to the main page on the website and you do not see the chat, go ahead and click on the blue button and that'll, uh, deep-dive you into our session, and you can interact with the chat there. Okay, without further ado, let's jump right into it. Katie, how are you? Welcome. Do you mind telling everybody who you are and a little bit about yourself? >>Absolutely. Hello everyone. My name is Katie, and currently I am the ecosystem advocate at the Cloud Native Computing Foundation, or CNCF. My responsibility is to lead and represent the end-user community — all the practitioners within the cloud native space that are vendor-neutral, meaning they use cloud native technologies to build their services but don't sell them, which is quite an important characteristic. My responsibility is to close the gap between these practitioners and the project maintainers, to make sure there is a feedback loop there. I have many roles within the community: I am on the advisory board for Keptn, which is a sandbox project; I'm working with OpenUK to make sure that open standards are used fairly across data, hardware, and software; and I have been working to distribute a cloud native fundamentals course to make cloud native education accessible to everyone. So, looking forward to this panel and chatting with everyone. >>Awesome. Welcome, glad to have you here. Johannes, how are you? Can you tell everybody a little bit about yourself and who you are? >>Yeah, sure. So hi everybody, my name is Johannes, I'm one of the co-founders at Gitpod, which, in case you don't know, is an open-source, container-based development platform — which is probably also the reason why you, Peter, reached out and invited me here. So, pleasure to be here, looking forward to the discussion. It is already a bit later here in Munich, and actually my girlfriend had a remote cocktail class with her colleagues tonight, and it took me some stamina to say no to all the Moscow mules being prepared just over there in my living room. >>Oh wow — you're way better than me. Well, welcome, thanks for joining us. Jerome, how are you? Good to see you. Can you tell everybody who you are and a little bit about yourself? >>Hi. Sure. Yeah, so I used to work at Docker, and some folks would say I'm a container hipster, because I was running containers in production before it was hype. I worked at Docker before it was even called Docker, and since 2018 I've been a freelancer doing training and consulting around Docker, containers, Kubernetes, all these things. So I used to help folks do stuff with Docker when I was there, and now I still help them with containers, more generally speaking. So, how do we say — same team, different company, or something like that? >>Yeah, perfect. Good to see you, I'm glad you're on. Jacob, how are you?
Good to see you, thanks for joining us. >>Good, yeah, thanks for having me. >>Tell everybody a little bit about yourself, who you are. >>Yeah. So I'm the creator of a tool called Mutagen, which is an open source development tool for doing high-performance file synchronization and network forwarding to enable remote development. I come from a physics background where I was always doing remote development of some kind, whether that was on big central clusters or just some local machine that was a bit more powerful. After I graduated, I built this tool called Mutagen for doing remote development, and then, to my surprise, people just started using it with Docker containers, and that's kind of grown into its primary use case now. So I've gotten really involved with the Docker community, talked with a lot of great people, and now I'm one of the Docker Captains, so I get to talk with even more people and join these events. But I'm mostly focused on remote development, because I like having all my tools available on my local machine, but I also like being able to pull in a bit more powerful hardware, or maybe software that I can't run locally. And so that's my interest in Docker containers. >>Awesome. We're going to come back to that for sure. But thank you again — I really appreciate you all joining me. So, I've been thinking about container first development for a while, and, you know, what does that actually mean? Maybe we can define it in our own little way. So I'll just throw it out to the panel: when you think about container first development, what comes to mind? What are you thinking about? Don't be shy. Go ahead, Jerome — you're never at a loss for words. >>To me — if I go back to kind of the first training engagements we did back at Docker, helping folks write Dockerfiles and start developing in containers, often we were replacing a setup with a bunch of Vagrant boxes and other VMs and combinations of local things. Very often they liked it a lot, and very soon they wanted to really develop in containers — like, run this microservice, this piece of code, whatever, run that in containers — because that meant they didn't have to maintain that thing on their own machine. That's five years ago; that's what it meant to me back then. However, today, if you say "developing in containers," I'm thinking of course about things like Gitpod and, uh, that other one whose name escapes me — this thing with VS Code that runs in a container, where you have this VS Code thing running in your browser. Well, obviously not in your browser, but in a container that you control from your browser — and many other things like that. I think that's where we want to go today. And that's really interesting from all kinds of perspectives: like pairing when we're not next to each other but actually thousands of miles away, or having this little environment that I can put aside and come back to later, without it using resources on my machine.
Um, I don't know, having this dev service running somewhere in the cloud without needing anything on my side — the possibilities are really endless. >>Yeah. Yeah. Perfect. Yeah. You know, a little while ago I was torn, right? Do I spin up containers? Do I develop inside of my containers? Right. There are file syncing issues, you know, that we've been working on at Docker for a while, and Jacob is very, very familiar with those, right? Sometimes it becomes hard. And I love developing in the cloud, but I also have this screaming, you know, fast machine sitting on my desktop that I think I should take advantage of. So I guess another question is, you know, should we be developing inside of containers? Is that a smart thing to do? I'd love to hear you guys' thoughts around that. >>You know, I think it's one of those things where, for me, container-first development is really about considering containers as sort of a first-class citizen in terms of your development toolkit, right? I mean, there's not always that silver bullet that's like the one thing you should use for everything. You shouldn't use containers if they're not fitting in or adding value to your workflow, but I think there are a lot of scenarios, like super early on in the development process — like as soon as you get the server kind of running and working and you're able to access it running on your local system — that's when the value comes in to add containers to what you're doing or to your project. Right? I mean, for me, they're more of an orchestration tool, right? So if I don't have to have six different browser tabs open with, you know, an API server running in one tab and a web server running in another tab and a database running in another tab, I can just kind of encapsulate those and use them as an automation thing. So I think, you know, even if you have a super powerful computer, I think there's still value in using containers as an orchestration mechanism. Yeah. Yeah. >>For sure. I think one of my original aha moments with Docker was, oh, I can spin up different versions of a database locally and not have to install it and not have to configure it and everything — you know, it just ran inside of a container. And that was it. Although it might seem simple to some people, that's very, very powerful. Right? So I think being able to spin things up in containers very quickly is one of the super benefits. But yeah, I think developing in containers is hard right now, right? You know, how do you do that? Does anybody have any thoughts around how you go about that? Should you use a container as just a development environment — so, you know, creating an image and then running it just with your dev tools in it — or do you just, maybe, put an editor and everything inside of it, and it's just this process that's almost like a VM? Yeah. So I'll just kick it back to the panel. I'd love to hear your thoughts on how you set up and configure containers to develop in. Any thoughts around that? >>Maybe one step back again, to answer your question of what container-first development means: I think it doesn't mean, by default, that it has to be in the cloud, right?
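A minimal sketch of what Jacob is describing — an API, a web front end, and a throwaway database encapsulated as containers rather than juggled in separate terminal tabs — might be a Compose file along these lines; the service names and images are hypothetical, not anything discussed on the panel.

    # docker-compose.yml -- illustrative only
    version: "3.8"
    services:
      api:
        build: ./api            # hypothetical API service
        ports:
          - "8000:8000"
        depends_on:
          - db
      web:
        build: ./web            # hypothetical front end
        ports:
          - "3000:3000"
        depends_on:
          - api
      db:
        image: postgres:13      # throwaway database, nothing installed on the host
        environment:
          POSTGRES_PASSWORD: dev-only
    # "docker compose up" starts all three; "docker compose down" throws them away.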
As you said, um, there are obvious benefits when it comes to the developer experience of containers, such as, I dunno, consistency, we have standardized tools dependencies for the dev side of things, but it also makes their dev environment more similar to all the pipeline that is somehow happening to the right, right. So CIC D all the way to production, it is security, right? Which also somehow comes with standardization. Um, but vulnerability scanning tools like sneak are doing a great job there. And, um, for us, it gets pod. One of the key reasons why we created get pod was literally creating this peace of mind for deaths. So from a developer's point of view, you do not need to take care anymore about all the hassle around setups and things that you will need to install. >>And locally, based on some outdated, REIT me on three operating systems in your company, everybody has something different and leading to these verbs in my machine situations, um, that really slow professional software developers down. Right. Um, back to your point, I mean, with good pod, we obviously have to package everything together in one container because otherwise, exactly the situation happens that you need to have five browser tabs open. So we try and leverage that. And I think a dev environment is not just the editor, right? So a dev environment includes your source code. It includes like a powerful shell. It includes file systems. It includes essentially all the tools you need in order to be productive databases and so on. And, um, yeah, we believe that should be encapsulated, um, um, in a container. >>Yeah. Awesome. Katie, you talked to a lot of end users, right. And you're talking to a lot of developers. What, what's your thoughts around container first development, right? Or, or what's the community out there screaming or screaming. It might be too to, uh, har you know, to, to two grand of the word. Right. But yeah, I love it. I love to hear what your, your thoughts. >>Absolutely. So I think when you're talking about continuing driven development, uh, the first thing that crosses my mind is the awareness of the infrastructure or the platform you're going to run your application on top of, because usually when you develop your application, you'd like to replicate as much as possible the production or even the staging environment to make sure that when you deploy your application, you have us little inconsistencies as possible, but at the same time, you minimize the risk for something to go wrong as well. So when it talking about the, the community, um, again, when you deploy applications and containers and Kubernetes, you have to use, you have awareness about, and probably apply some of the best practices, like introducing liveliness and readiness probes, to make sure that your application can restart in, in case it actually goes down or there's like a you're starving going CPU or something like that. >>So, uh, I think when it comes to deployment and development of an application, the main thing is to actually improve the end developer experience. I think there has been a lot of focus in the community to develop the tool, to actually give you the right tool to run application and production, but that doesn't necessarily, um, go back to how the end developer is actually enabling that application to run into that production system. So I think there has been, uh, this focus for the community identified now, and it's more, more, um, or trying to build momentum on enhancing the developer experience. 
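One literal reading of Peter's earlier question — using a container as the development environment itself, with the toolchain baked into an image and the source bind-mounted in — could look like the sketch below; the base image and tools are assumptions chosen only for illustration.

    # Dockerfile.dev -- a disposable development environment, purely illustrative
    FROM node:16-bullseye              # assumed toolchain; swap for your stack
    RUN apt-get update \
        && apt-get install -y git vim postgresql-client \
        && rm -rf /var/lib/apt/lists/*
    WORKDIR /workspace
    # Build it once, then mount the source at run time so host edits show up live:
    #   docker build -f Dockerfile.dev -t myapp-dev .
    #   docker run -it --rm -v "$PWD:/workspace" myapp-dev bash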
And we've seen this going through many, uh, where we think production of many tools did what has been one of them, which actually we can have this portable, um, development environment if you choose so, and you can actually replicate them across different teams in different machines, which is actually quite handy. >>But at the same time, we had tools such as local composts has been a great tool to run locally. We have tool such as carefully, which is absolutely great to automatically dynamically upload any changes to how within your code. So I think all of these kinds of tools, they getting more matured. And again, this is going back to again, we need to enhance our developer experience coming back to what is the right way to do so. Um, I think it really depends on the environment you have in production, because there's going to define some of the structures with the tool and you're going to have internally, but at the same time, um, I'd like to say that, uh, it really depends on, on what trucks are developing. Uh, so it's, it's, I would like to personally, I would like to see a bit more diversification in this area because we might have this competitive solutions that is going to push us to towards a new edge. So this is like, what definitely developer experience. If we're talking about development, that's what we need to enhance. And that's what I see the momentum building at the moment. >>Yeah. Yeah. Awesome. Jerome, I saw you shaking your head there in agreement, or maybe not, but what's your thoughts? >>I was, uh, I was just reacting until 82. Uh, it depends thinking that when I, when I do training, that's probably the answer that I gave the most, uh, each time somebody asks, oh, should we do diesel? And I was also looking at some of the questions in the chat about, Hey, the, should we like have a negatory in the, in the container or something like that. And folks can have pretty strong opinions one way or the other, but as a ways, it kind of depends what we do. It also depends of the team that we're working with. Um, you, you could have teams, you know, with like small teams with folks with lots of experience and they all come with their own Feb tools and editorials and plugins. So you know that like you're gonna have PRI iMacs out of my cold dead hands or something like that. >>So of course, if you give them something else, they're going to be extremely unhappy or sad. On the other hand, you can have team with folks who, um, will be less opinionated on that. And even, I don't know, let's say suddenly you start working on some project with maybe a new programming language, or maybe you're targeting some embedded system or whatever, like something really new and different. And you come up with all the tools, even the ADE, the extensions, et cetera, folks will often be extremely happy in that case that you're kind of giving them a Dettol and an ADE, even if that's not what they usually would, uh, would use, um, because it will come with all of the, the, the nice stage, you know, the compression, the, um, the, the, the bigger, the, whatever, all these things. And I think there is also something interesting to do here with development in containers. >>Like, Hey, you're going to start working on this extremely complex target based on whatever. And this is a container that has everything to get started. Okay. Maybe it's not your favorites editor, but it has all the customization and the conserver and whatever. Um, so you can start working right away. 
And then maybe later you, we want to, you know, do that from the container in a way, and have your own Emacs, atom, sublime, vs code, et cetera, et cetera. Um, but I think it's great for containers here, as well as they reserve or particularly the opportunity. And I think like the, that, that's one thing where I see stuff like get blood being potentially super interesting. Um, it's hard for me to gauge because I confess I was never a huge ID kind of person had some time that gives me this weird feeling, like when I help someone to book some, some code and you know, that like with their super nice IDE and everything is set up, but they feel kind of lost. >>And then at some point I'm like, okay, let's, let's get VI and grep and let's navigate this code base. And that makes me feel a little bit, you know, as this kind of old code for movies where you have the old, like colorful guy who knows going food, but at the end ends up still being obsolete because, um, it's only a going for movies that whole good for masters and the winning right. In real life, we don't have conformance there's anymore mentioned. So, um, but part of me is like, yeah, I like having my old style of editor, but when, when the modern editorial modern ID comes with everything set up and configured, that's just awesome. That's I, um, it's one thing that I'm not very good at sitting up all these little things, but when somebody does it and I can use it, it's, it's just amazing. >>Yeah. Yeah. I agree. I'm I feel the same way too. Right. I like, I like the way I've I have my environment. I like the tools that I use. I like the way they're set up. And, but it's a big issue, right? If you're switching machines, like you said, if you're helping someone else out there, they're not there, your key bindings aren't there, you can't, you can't navigate their system. Right? Yeah. So I think, you know, talking about, uh, dev environments that, that Docker's coming out with, and we're, you know, there's a lot, there, there's a, it's super complex, all these things we're talking about. And I think we're taking the approach of let's do something, uh, well, first, right. And then we can add on to that. Right. Because I think, you know, setting up full, full developed environments is hard, right. Especially in the, the, um, cloud native world nowadays with microservices, do you run them on a repo? >>Do you not have a monitor repo? Maybe that would be interesting to talk about. I think, um, you know, I always start out with the mono repos, right. And you have all your services in there and maybe you're using one Docker file. And then, because that works fine. Cause everything is JavaScript and node. And then you throw a little Python in there and then you throw a little go and now you start breaking things out and then things get too complex there, you know, and you start pulling everything out into different, get repos and now, right. Not everything just fits into these little buckets. Right. So how do you guys think maybe moving forward, how do we attack that night? How do we attack these? Does separate programming languages and environments and kind of bring them all together. You know, we, we, I hesitate, we solve that with compose around about running, right about executing, uh, running your, your containers. But, uh, developing with containers is different than running containers. Right. It's a, it's a different way to think about it. So anyway, sorry, I'm rattling on a little bit, but yeah. 
Be interesting to look at a more complex, uh, setup right. Of, uh, of, you know, even just 10 microservices that are in different get repos and different languages. Right. Just some thoughts. And, um, I'm not sure we all have this flushed out yet, but I'd love to hear your, your, you guys' thoughts around that. >>Jacob, you, you, you, you look like you're getting ready to jump there. >>I didn't wanna interrupt, but, uh, I mean, I think for me the issue isn't even really like the language boundary or, or, um, you know, a sub repo boundary. I think it's really about, you know, the infrastructure, right? Because you have, you're moving to an era where you have these cloud services, which, you know, some of them like S3, you can, you can mock up locally, uh, or run something locally in a container. But at some point you're going to have like, you know, cloud specific hardware, right? Like you got TPS or something that maybe are forming some critical function in your, in your application. And you just can't really replicate that locally, but you still want to be able to develop against that in some capacity. So, you know, my, my feeling about where it's going to go is you'll end up having parts of your application running locally, but then you also have, uh, you know, containers or some other, uh, element that's sort of cohabitating with, uh, you know, either staging or, or testing or production services that you're, uh, that you're working with. >>So you can actually, um, you know, test against a really or realistic simulation or the actual, uh, surface that you're running against in production. Because I think it's just going to become untenable to keep emulating all of that stuff locally, or to have to like duplicate these, you know, and, you know, I guess you can argue about whether or not it's a good thing that, that everything's moving to these kind of more closed off cloud services, but, you know, the reality of situation is that's where it's going to go. And there's certain hardware that you're going to want in the cloud, especially if you're doing, you know, machine learning oriented stuff that there's just no way you're going to be able to run locally. Right. I mean, if you're, even if you're in a dev team where you have, um, maybe like a central machine where you've got like 10 or 20 GPU's in it, that's not something that you're going to be able to, to, to replicate locally. And so that's how I kind of see that, um, you know, containers easing that boundary between different application components is actually maybe more about co-location, um, or having different parts of your application run in different locations, on different hardware, you know, maybe someone on your laptop, maybe it's someone, you know, AWS or Azure or somewhere. Yeah. It'd be interesting >>To start seeing those boundaries blur right. Working local and working in the cloud. Um, and you might even, you might not even know where something is exactly is running right until you need to, you know, that's when you really care, but yeah. Uh, Johanas, what's your thoughts around that? I mean, I think we've, we've talked previously of, of, um, you know, hybrid kind of environments. Uh, but yeah. What, what's your thoughts around that? >>Um, so essentially, yeah, I think, I mean, we believe that the lines between cloud and local will also potentially blur, and it's actually not really about that distinction. 
It's just packaging your dev environment in a way and provisioning your dev environment in a way that you are what we call always ready to coat. So that literally, um, you, you have that for the, you described as, um, peace of mind that you can just start to be creative and start to be productive. And if that is a container potentially running locally and containers are at the moment. I think, you know, the vehicle that we use, um, two weeks ago, or one week ago actually stack blitz announced the web containers. So potentially some things, well, it's run in the browser at some point, but currently, you know, Docker, um, is the standard that enables you to do that. And what we think will happen is that these cloud-based or local, um, dev environments will be what we call a femoral. So it will be similar to CIS, um, that we are using right now. And it doesn't literally matter, um, where they are running at the end. It's just, um, to reduce friction as much as possible and decrease and yeah, yeah. Essentially, um, avoid or the hustle that is currently involved in setting up and also managing dev environments, um, going forward, which really slows down specifically larger teams. >>Yeah. Yeah. Um, I'm going to shift gears a little bit here. We have a question from the audience in chat, uh, and it's, I think it's a little bit two parts, but so far as I can see container first, uh, development, have the challenges of where to get safe images. Um, and I was going to answer it, but let me keep it, let me keep going, where to get safe images and instrumentation, um, and knowing where exactly the problem is happening, how do we provide instrument instrumentation to see exactly where a problem might be happening and why? So I think the gist of it is kind of, of everything is in a container and I'm sitting outside, you know, the general thought around containers is isolation, right. Um, so how do I get views into that? Um, whether debugging or, or, or just general problems going on. I think that's maybe a broader question around the, how you, you know, you have your local hosts and then you're running everything containers, and what's the interplay there. W what's your thoughts there? >>I tend to think that containers are underused interactively. I mean, I think in production, you have this mindset that there's sort of this isolated environment, but it's very, actually simple to drop into a shell inside of a container and use it like you would, you know, your terminal. Um, so if you want to install software that way, you know, through, through an image rather than through like Homebrew or something, uh, you can kind of treat containers in that way and you can get a very, um, you know, direct access to the, to the space in which those are running in. So I think, I think that's maybe the step one is just like getting rid of that mindset, that, that these are all, um, you know, these completely encapsulated environments that you can't interact with because it's actually quite easy to just Docker exec into a container and then use it interactively >>Yeah. A hundred percent. And maybe I'll pass, I'm going to pass this question. You drone, but maybe demystify containers a little bit when I talked about this on the last, uh, panel, um, because we have a question in the, in the chat around, what's the, you know, why, why containers now I have VMs, right? And I think there's a misunderstanding in the industry, uh, about what, what containers are, we think they're fair, packaged stuff. 
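Before the next question, Jacob's point about containers being underused interactively is worth pinning down with a command or two; the container and image names here are placeholders.

    # A running container is not a sealed box -- you can step into it like a shell:
    docker exec -it api /bin/bash
    # Or use an image as a throwaway toolbox instead of installing software locally:
    docker run -it --rm -v "$PWD:/data" -w /data python:3.9 bash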
And I think Jacob was hitting on that of what's underneath the hood. So maybe drown, sorry, for a long way to set up a question of what, what, what makes up a container, what is a container >>Is a container? Well, I, I think, um, the sharpest and most accurate and most articulate definition, I was from Alice gold first, and I will probably misquote her, but she said something like containers are a bunch of capsulated processes, maybe running on a cookie on welfare system. I'm not sure about the exact definition, but I'm going to try and, uh, reconstitute that like containers are just processes that run on a Unix machine. And we just happen to put a bunch of, um, red tape or whatever around them so that they are kind of contained. Um, but then the beauty of it is that we can contend them as much, or as little as we want. We can go kind of only in and put some actual VM or something like firecracker around that to give some pretty strong angulation, uh, all we can also kind of decontam theorize some aspects, you know, you can have a container that's actually using the, um, the, um, the network namespace of the host. >>So that gives it an entire, you know, wire speed access to the, to the network of the host. Um, and so to me, that's what really interesting, of course there is all the thing about, oh, containers are lightweight and I can pack more of them and they start fast and the images can be small, yada yada, yada. But to me, um, with my background in infrastructure and building resilient, things like that, but I find really exciting is the ability to, you know, put the slider wherever I need it. Um, the, the, the ability to have these very light containers, all very heavily, very secure, very anything, and even the ability to have containers in containers. Uh, even if that sounds a little bit, a little bit gimmicky at first, like, oh, you know, like you, you did the Mimi, like, oh, I heard you like container. >>So I put Docker when you're on Docker. So you can run container for you, run containers. Um, but that's actually extremely convenient because, um, as soon as you stop building, especially something infrastructure related. So you challenge is how do you test that? Like, when we were doing.cloud, we're like, okay, uh, how do we provision? Um, you know, we've been, if you're Amazon, how do you provision the staging for us installed? How do you provision the whole region, Jen, which is actually staging? It kind of makes things complicated. And the fact that we have that we can have containers within containers. Uh, that's actually pretty powerful. Um, we're also moving to things where we have secure containers in containers now. So that's super interesting, like stuff like a SIS box, for instance. Um, when I saw that, that was really excited because, uh, one of the horrible things I did back in the days as Docker was privileged containers, precisely because we wanted to have Docker in Docker. >>And that was kind of opening Pandora's box. That's the right, uh, with the four, because privileged containers can do literally anything. They can completely wreck up the machine. Um, and so, but at the same time, they give you the ability to run VPNs and run Docker in Docker and all these cool things. You can run VM in containers, and then you can list things. So, um, but so when I saw that you could actually have kind of secure containers within containers, like, okay, there is something really powerful and interesting there. 
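Jerome's "slider" maps onto ordinary Docker flags, and the Docker-in-Docker pattern is one notch on it; the sketch below is illustrative, the image names are placeholders, and the Sysbox invocation is shown only as an assumed example of a less-privileged alternative.

    # Default run: the container gets its own network, PID and mount namespaces.
    docker run -d --name api my-org/api:dev
    # Dial isolation down: share the host's network namespace (wire-speed, no NAT).
    docker run -d --name api-hostnet --network host my-org/api:dev
    # Docker-in-Docker the classic way: privileged, powerful, and risky for the host.
    docker run --privileged -d --name dind docker:dind
    # Runtimes such as Sysbox aim to give similar nesting without full privileges.
    docker run --runtime=sysbox-runc -d --name inner docker:dind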
And I think for folks, well, precisely when you want to do development in containers, especially when you move that to the cloud, that kind of stuff becomes a really important and interesting because it's one thing to have my little dev thing on my local machine. It's another thing when I want to move that to a swarm or Kubernetes cluster, and then suddenly even like very quickly, I hit the wall, which is, oh, I need to have containers in my containers. Um, and then having a runtime, like that gets really intense. >>Interesting. Yeah, yeah, yeah. And I, and jumping back a bit, um, yeah, uh, like you said, drum at the, at the base of it, it containers just a, a process with, with some, uh, Abra, pardon me, operating constructs wrapped around it and see groups, namespaces those types of things. But I think it's very important to, for our discussion right. Of, uh, developers really understanding that, that this is just the process, just like a normal process when I spin up my local bash in my term. Uh, and I'm just interacting with that. And a lot of the things we talk about are more for production runtimes for securing containers for isolating them locally. I don't, I don't know. I'll throw the question out to the panel. Is that really relevant to us locally? Right. Do we want to pull out all of those restrictions? What are the benefits of containers for development, right. And maybe that's a soft question, but I'd still love to hear your thoughts. Maybe I'll kick it over to you, Katie, would you, would you kick us off a little bit with that? >>I'll try. Um, so I think when, again, I was actually thinking of the previous answers because maybe, maybe I could do a transition here. So, interesting, interesting about containers, a piece of trivia, um, the secrets and namespaces have been within the Linux kernel since 2008, I think, which just like more than 10 years ago, hover containers become popular in the last years. So I think it's, it's the technology, but it's about the organization adopting this technology. So I think why it got more popular now is because it became the business differentiator organizations started to think, how can I deliver value to my customers as quickly as possible? So I think that there should be this kind of two lane, um, kind of progress is the technology, but it's at the same time organization and cultural now are actually essential for us to develop, uh, our applications locally. >>Again, I think when it's a single application, if you have just one component, maybe it's easier for you to kind of run it locally, have a very simple testing environment. Sufficient is a container necessary, probably not. However, I think it's more important when you're thinking to the bigger picture. When we have an architecture that has myriads of microservices at the basis, when it's something that you have to expose, for example, an API, or you have to consume an API, these are kind of things where you might need to think about a lightweight set up within the containers, only local environment to make sure that you have at least a similar, um, environment or a configuration to make sure that you test some of the expected behavior. Um, I think the, the real kind of test you start from the, the dev cluster will like the dev environment. >>And then like for, for you to go to staging and production, you will get more clear into what exactly that, um, um, configuration should be in the end. 
However, at the same time, again, it's, it's more about, um, kind of understanding why you continue to see this, the thing, like, I don't say that you definitely need containers at all times, but there are situations when you have like, again, multiple services and you need to replicate them. It's just the place to, to, to work with these kind of, um, setups. So, um, yeah, really depends on what you're trying to develop here. Nothing very specific, unfortunately, but get your product and your requirements are going to define what you're going to work with. >>Yeah, no, I think that's a great answer, right. I think one of the best answers in, in software engineering and engineering in general as well, it depends. Right. It's things are very specific when we start getting down to the details, but yeah, generally speaking, you know, um, I think containers are good for development, but yeah, it depends, right. It really depends. Is it helping you then? Great. If it's hindering you then, okay. Maybe think what's, what's the hindrance, right. And are containers the right solution. I agree. 110% and, >>And everything. I would like absurd this too as well. When we, again, we're talking about the development team and now we have this culture where we have the platform and infrastructure team, and then you have your engineering team separately, especially when the regulations are going to be segregated. So, um, it's quite important to understand that there might be a, uh, a level of up-skilling required. So pushing for someone to use containers, because this is the right way for you to develop your application might be not, uh, might not be the most efficient way to actually develop a product because you need to spend some time to make sure that the, the engineering team has the skills to do so. So I think it's, it's, again, going back to my answers here is like, truly be aware of how you're trying to develop how you actually collaborate and having that awareness of your platform can be quite helpful in developing your, uh, your publication, the more importantly, having less, um, maybe blockers pushing it to a production system. >>Yeah, yeah. A hundred percent. Yeah. The, uh, the cultural issue is, is, um, within the organization, right. Is a very interesting thing. And it, and I would submit that it's very hard from top down, right. Pushing down tools and processes down to the dev team, man, we'll just, we'll just rebel. It usually comes from the bottom up. Right. What's working for us, we're going to do right. And whether we do it in the shadows and don't let it know, or, or we've conformed, right. Yeah. A hundred percent. Um, interesting. I would like to think a little bit in the future, right? Like, let's say, I don't know, two, three years from now, if, if y'all could wave a and I'm from Texas. So I say y'all, uh, if you all could wave a magic wand, what, what, what would that bring about right. What, what would, what would be the best scenario? And, and we just don't have to say containers. Right. But, you know, what's the best development environment and I'm going to kick it over to you, Jacob. Cause I think you hinted at some of that with some hybrid type of stuff, but, uh, yeah. Implies, they need to keep you awake. You're, you're, you're, uh, almost on the other side of the world for me, but yeah, please. 
>>Um, I think, you know, it's, it's interesting because you have this technology that you've been, that's been brought from production, so it's not, um, necessarily like the right or the normal basis for development. So I think there's going to be some sort of realignment or renormalization in terms of, uh, you know, what the, what the basis and the abstractions that we're using on a daily basis are right. Like images and containers as they exist now are really designed for, um, for production use cases. And, and in terms of like, even even the ergonomics of opening a shell inside a container, I think is something that's, um, you know, not as polished or not as smooth as it could be because they've come from production. And so I think it's important, like not to, not to have people look at, look at the technology as it exists now and say like, okay, this is slightly rough around the edges, or it wasn't designed for this use case and think, oh, there's, you know, there's never any way I could use this for, for my development of workflows. >>I think it's, you know, it's something Docker's exploring now with, uh, with the, uh, dev containers, you know, it's, it's a new, and it's an experimental paradigm and it may not be what the final picture looks like. As, you know, you were saying, there's going to be kind of a baseline and you'll add features to that or iterate on that. Um, but I think that's, what's interesting about it, right? Cause it's, there's not a lot of things as developers that you get to play with that, um, that are sort of the new technology. Like if you're talking about things you're building to ship, you want to kind of use tried and true components that, you know, are gonna, that are going to be reliable. But I think containers are that interesting point where it's like, this is an established technology, but it's also being used in a way now that's completely different than what it was designed for. And, and, you know, as hackers, I think that's kind of an interesting opportunity to play with it, but I think, I think that's, what's going to happen is you're just going to see kind of those production, um, designed, uh, knobs kind of sanded down or redesigned for, for development. So that's kind of where I see it going. >>Yeah. Yeah. And I think that's what I was trying to hint out earlier is like, um, yeah, just because all these things are there, does it actually mean we need them locally? Right. Do they make sense? I, I agree. A hundred percent, uh, anybody else drawn? What are your thoughts around that? And then, and then, uh, I'll probably just ask all of you. I'd love to hear each of your thoughts of the future. >>I had a thought was maybe unrelated, but I was kind of wondering if we would see something on the side of like energy efficiency in some way. Um, and maybe it's just because I've been thinking a lot about like climate change and things like that recently, and trying to reduce like the, uh, the energy use energy use and things like that. Perhaps it's also because I recently got a new laptop, which on paper is super awesome, but in practice, as soon as you try to have like two slack tabs and a zoom call, you know, it's super fast, both for 30 seconds. And after 30 seconds, it blows its thermal budget and it's like slows down to a crawl. And I started to think, Hmm, maybe, you know, like before we, we, we were thinking about, okay, I don't have that much CPU available. So you have to be kind of mindful about that. 
>>And now I wonder how are we going to get in something similar to that, but where you try to save CPU cycles, not just because you don't have that many CPU cycles, but more because you know, that you can't go super fast for super long when you are on one of these like small laptops or tablets or phones, like you have this demo budget to take into account. And, um, I wonder if, and how like, is there something where goaltenders can do some things here? I guess it can be really interesting if they can do some the equivalent of like Docker top and Docker stats. And if I could see, like how much what's are these containers using, I can already do that with power top on Linux, for instance, like process by process. So I'm thinking I could see what's the power usage of, of some containers. Um, and I wonder if down the line, is this going to be something useful or is this just silly because we can just masquerade CPU usage for, for Watson and forget about it. >>Yeah. Yeah. It was super, super interesting, uh, perspective for sure. I'm going to shut up because I want to, I want to give, make sure I give Johannes and Katie time. W w what are your thoughts of the future around, let's just say, you know, container development in general, right? You want, you want to start absolutely. Oh, honest, Nate. Johns wants more time. I say, I'll try not to. Beneficiate >>Expensive here, but, um, so one of the things that we've we've touched upon earlier in the panel was multicloud strategy. And I was reading one of the data reports from it was about the concept of Kubernetes from gamer Townsville. But what is working for you to see there is that more and more organizations are thinking about multicloud strategy, which means that you need to develop an application or need an infrastructure or a component, which will allow you to run this application bead on a public cloud bead, like locally in a data center and so forth. And here, when it comes to this kind of, uh, maybe problems we come across open standards, this is where we require something, which will allow us to execute our application or to run our platform in different environments. So when you're thinking about the application or development of the application, one of the things that, um, came out in 2019 at was the Oakland. >>Um, I wish it was Kybella, which is a, um, um, an open application model based application, which allows you to describe the way you would like your service to be executed in different environments. It doesn't need to be well developed specifically for communities. However, the open application model is specialized. So specialized tries to cover multiple platforms. You will be able to execute your application anywhere you want it to. So I think that that's actually quite important because it completely obstructs what is happening underneath it, completely obstructs notions, such as containers, uh, or processes is just, I want this application and I want to have this kind of behavior is so example of, to scale in this conditions or to, um, to be exposed for these, uh, end points and so forth. And everything that I would like to mention here is that maybe this transcends again, the, uh, the logistics of the application development, but it definitely will impact the way we run our applications. >>So one of the biggest, well, one of the new trends that is kind of gaining momentum now has been around Plaza. And this is again, something which is trying to present what we have the on containers. 
Again, it's focusing on the, it's kind of a cyclical, um, uh, action movement that we have here. When we moved from the VMs to containers, it was smaller footprint. We want like better execution, one, this agnosticism of the platforms. We have the same thing happening here with Watson, but again, it consents a new, um, uh, kind of, well, it teaches in you, uh, in new climax here, where again, we shrink the footprint of the cluster. We have a better isolation of all the services. We have a better trend, like portability of how services and so forth. So there is a great potential out there. And again, like why I'm saying this is some of these technologies are gonna define the way we're gonna do our development of the application on our local environment. >>That's why it's important to kind of maybe have an eye there and maybe see if some of those principles of some of those technologies we can bring internally as well. And just this, like a, a final thought here, um, security has been mentioned as well. Um, I think it's something which has been, uh, at the forefront, especially when it comes to containers, uh, especially when it comes to enterprise organizations and those who are regulated, which I feel come very comfortable to run their application within a VM where you have the full isolation, you can do what we have complete control of what's happening inside that compute. So, um, again, security has been at the forefront at the moment. So I know it has mentioned in the panel before. I'd like to mention that we have the security white paper, which has been published. We have the software supply chain, white paper as well, which twice to figure out or define some of these good practices as well, again, which you can already apply from your development environment and then propagate them to production. So I'm just going to leave, uh, all of these. That's all. >>That's awesome. And yeah, well, while is very, very interesting. I saw the other day that, um, and I forget who it was, maybe, maybe all can remember, um, you know, running, running the node, um, engine inside of, you know, in Walzem inside of a browser. Right. And, uh, at first glance I said, well, we already have a JavaScript execution engine. Right. And it's kind of like Docker and Docker. So you have, uh, you know, you have the browser, then, then you have blossom and then you have a node, you know, a JavaScript runtime. And, and I didn't understand was while I was, um, you know, actually executing is JavaScript and it's not, but yeah, it's super interesting, super powerful. I always felt that the browser was, uh, Java's what write once run anywhere kind of solution, right. That never came about, they were thinking of set top, uh, TV boxes and stuff like that, which is interesting. >>I don't know, you'll some of the history of Java, but yeah. Wasm is, is very, I'm not sure how to correctly pronounce it, but yeah, it's extremely interesting because of the isolation in that boxing. Right. And running powerful languages that were used to inside of a more isolated environment. Right. And it's almost, um, yeah, it's kind of, I think I've mentioned it before that the containers inside of containers, right. Um, yeah. So Johannes, hopefully I gave you enough time. I delayed, I delayed as much as I can. My friend, you better, you better just kidding. I'm just kidding, please, please. 
>>It was by the way, stack let's and they worked together with Google and with Russell, um, developing the web containers, it's called there's, it's quite interesting. The research they're doing there. Yeah. Yeah. I mean, what we believe and I, I also believe is that, um, yeah, probably somebody is doing to death environments, what Docker did to servers and at least that good part. We hope that somebody will be us. Um, so what we mean by that is that, um, we think today we are still somehow emotionally attached to our dev environments. Right. We give them names, we massage them over time, which can also have its benefits, but it's, they're still pets in some way. Right. And, um, we believe that, um, environments in the future, um, will be treated similar like servers today as automated resources that you can just spin up and close down whenever you need them. >>Right. And, um, this trend essentially that you also see in serverless, if you look at what kind of Netlify is doing a bit with preview environments, what were sellers doing? Um, there, um, we believe will also arrive at, um, at Steph environments. It probably won't be there tomorrow. So it will take some time because if there's also, you know, emotion involved into, in that, in that transition, but ultimately really believe that, um, provisioning dev environments also in the cloud allows you to leverage the power of the cloud and to essentially build all that stuff that you need in order to work in advance. Right? So that's literally either command or a button. So either, I don't know, a command that spins up your local views code and SSH into, into a container, or you do it in a browser, um, will be the way that professional development teams will develop in the future. Probably let's see in our direction of document, we say it's 2000 to 23. Let's see if that holds true. >>Okay. Can we, can, we let's know. Okay. Let's just say let's have a friendly bet. I don't know that's going to be closed now, but, um, yeah, I agree. I, you know, it's my thought around is it, it's hard, right? Th these are hard. And what problems do you tackle first, right? Do you tackle the day, one of, uh, you know, of development, right. I joined a team, Hey, here's your machine? And you have Docker installed and there you go, pull, pull down your environment. Right. Is that necessarily just an image? You know, what, what exactly is that sure. Containers are involved. Right. But that's, I mean, you, you've probably all gone through it. You joined a team, new project, even open-source project, right there. There's a huge hurdle just to get everything configured, to get everything installed, to get it up and running, um, you know, set aside all understanding the code base. >>Cause that's a different issue. Right. But just getting everything running locally and to your point earlier, Jacob of around, uh, recreating, local production cues and environments and, you know, GPS or anything like that, right. Is extremely hard. You can't do a lot of that locally. Right. So I think that's one of the things I'd love to see tackled. And I think that's where we're tackling in dev environments, uh, with Docker, but then now how do you become productive? Right. And where do we go from there? And, uh, and I would love to see this kind of hybrid and you guys have been all been talking about it where I can, yes. I have it configured everything locally on my nice, you know, apple notebook. Right. And then, you know, I go with the family and we go on vacation. 
I don't want to drag this 16 inch, you know, Mac laptop with me. >>And I want to take my nice iPad with the magic keyboard and all the bang stuff. Right. And I just want to fire up and I pick up where I left off. Right. And I keep coding and environment feels, you know, as much as it can that I'm still working at backup my desktop. I think those, those are very interesting to me. And I think reproducing, uh, the production running runtime environments as close as possible, uh, when I develop my, I think that's extremely powerful, extremely powerful. I think that's one of the hardest things, right. It's it's, uh, you know, we used to say, we, you debug in production. Right. We would launch, right. We would do, uh, as much performance testing as possible. But until you flip that switch on a big, on a big site, that's where you really understand what is going to break. >>Right. Well, awesome. I think we're just about at time. I really, really appreciate everybody joining me. Um, it's been a pleasure talking to all of you. We have to do this again. If I, uh, hopefully, you know, I I'm in here in America and we seem to be doing okay with COVID, but I know around the world, others are not. So my heart goes out to them, but I would love to be able to get out of here and come see all of you and meet you in person, maybe break some bread together. But, um, again, it was a pleasure talking to you all, and I really appreciate you taking the time. Have a good evening. Cool. >>Thanks for having us. Thanks for joining us. Yes.
Accelerating Your Data driven Journey The HPE Ezmeral Strategic Road Ahead | HPE Ezmeral Day 2021
>>Yeah. Okay. Now we're going to dig deeper into HP es moral and try to better understand how it's going to impact customers. And with me to do that are Robert Christensen is the vice president strategy in the office of the C, T. O. And Kumar Srikanth is the chief technology officer and head of software both, of course, with Hewlett Packard Enterprise. Gentlemen, welcome to the program. Thanks for coming on. >>Good seeing you. Thanks for having us. >>Always. Great. Great to see you guys. So, Esmeralda, kind of a interesting name. Catchy name. But tomorrow, what exactly is H P E s bureau? >>Yeah. It's indeed a catchy name. Our branding team done a fantastic job. I believe it's actually a derivation from Esmeralda. The Spanish for Emerald Berlin. Supposed to have some very mystical powers. Um, and they derived as moral from there, and we all actually, initially that we heard it was interesting. Um, so as well was our effort to take all the software, the platform tools that HB has and provide these modern operating platform to the customers and put it under one brand. It has a modern container platform. It has a persistent stories distribute the date of February. It has been foresight, as many of our customers similar, So it's the think of it as a container platform offering for modernization of the civilization of the customers. >>Yeah, it's an interesting to talk about platform, so it's not a lot of times people think product, but you're positioning it as a platform, so it has a broader implications. >>That's very true. So as the customers are thinking of this civilization, modernization containers and microservices, as you know there has become, has become the stable whole. So it's actually a container orchestration platform. It offers open source proven. It is as well as the persistence always bolted to >>so by the way, s moral, I think emerald in Spain, I think in the culture it also has immunity powers as well. So immunity >>from >>lock in and all those other terrible diseases. Maybe it helps us with covid to rob Robert. When you talk to customers, what problems do you probe for that that is immoral. Can can do a good job solving. >>Yeah, they That's a really great question because a lot of times they don't even know what it is that they're trying to solve for, other than just a very narrow use case. But the idea here is to give them a platform by which they can bridge both the public and private environment for what to do an application development specifically in the data side. So when they're looking to bring Container Ization, which originally got started on the public cloud and has moved its way, I should say, become popular in the public cloud and has moved its way on premises. Now Esmeralda really opens the door to three fundamental things. But how do I maintain an open architecture like you're referring to some low or oh, no lock in of my applications And there were two. How do I gain a data fabric or data consistency of accessing the data so I don't have to rewrite those applications when I do move them around and then, lastly, where everybody is heading down, the real value is in the AI ML initiatives that companies are are really bringing that value of their data and locking the data at where the data is being generated and stored. And so the is moral platform is those multiple pieces that I was talking about stacked together to deliver those solutions for the client. >>So come on, what's the How does it work? What's the sort of I p or the secret sauce behind it all? 
What makes HPE different? >>Continuing on the theme of the Ezmeral platform for optimizing data-intensive workloads, I would say there are three unique characteristics of this platform. Number one, it actually provides you the ability to run both stateful and stateless workloads under the same platform. Number two, as we were thinking about it, Kubernetes is open source, and Ezmeral provides you pure open source Kubernetes as well as the orchestration behind it, so you can provide this hybrid model that Robert was talking about. And then we actually built the workflows into it. For example, we announced Ezmeral ML Ops, where customers can do the workflow management around their data science work. So the magic, if you want to see the secret sauce, is all the effort that has gone into some of the IP and the acquisitions that HPE has made over the years: BlueData, MapR, and Nimble InfoSight. All these pieces are coming together and providing a modern digitalization platform for the customers. >>So these pieces, they all have a little bit of machine intelligence in them. People used to think of AI as a sort of separate thing, same with containers, right? But now it's getting embedded into the stack. What is the role of machine intelligence or machine learning in Ezmeral? >>I would take a step back and say, you know this very well, there's the amount of data that customers are generating, and 95% or 98% of data is machine generated. It has a serious amount of gravity, and it is sitting at the edge, and we are the only one with that edge-to-cloud data fabric built out. So number one, we are bringing compute, or a cloud, to the data rather than taking the data to the cloud; it's a cloud-like experience that we provide the customer. Data is not of much value to us if we don't harness it. I said this in one of the blogs: we have gone from the era of collecting the data to finding insights in the data. People have used all sorts of analogies, that data is the new oil, the new air. And now your applications have to be modernized, and nobody wants to write an application in a non-microservices fashion, because you want to build for modernization. So if you bring these three things together, data with gravity and lots of it, modern applications, and AI, those three things I think we bring together for the customers. >>So, Robert, let's stay on customers for a minute. I want to understand the business impact, the business case. I mean, why should the cloud developers have all the fun? You mentioned that you're bridging the cloud and on-prem. Talk about what you hear when you talk to customers, what they are seeing as the business impact. What are the real drivers for them? >>That's a great question, because at the end of the day, a recent survey showed that cost and performance is still the number one requirement for their workloads. Second is agility, the speed at which they want to move. And so those two are top of mind every time.
But the thing we find with Ezmeral, which is so impactful, is that nobody brings together the silicon, the hardware, and the platform, all stacked together and combined, like Ezmeral does with the platforms we have. And specifically, when we start getting 90, 92, 93% utilization out of AI and ML workloads on very expensive hardware, it really is a competitive advantage over a public cloud offering that does not offer those kinds of services, and the cost models are so significantly different. We do that by collapsing the stack. We take out as much intellectual property, as many software pieces, as necessary, so we are closest to the silicon and closest to the applications brought onto the hardware itself, meaning that we can interleave the applications and get to true multi-tenancy on a particular platform, which allows you to deliver a cost-optimized solution. So when you talk about the money side, absolutely, there's just nothing else out there. And then on the second side, which is agility: one of the things we know today is that applications need to be built in pipelines, right? This has been established for quite some time, and it's now really making its way on premises. And what Kumar was talking about was, how do we modernize? How do we do that? Well, there are going to be some things you want to break into microservices and containers, and some things you don't. Now, the ones that do that are going to get that speed and motion out of the gate, and they can put that on premises, which is relatively new these days to the on-premises world. So we think both will be the advantage. >>Okay, I want to unpack that a little bit. On the cost side, really, 90-plus percent utilization? I mean, come on. We know what it was like pre-virtualization, and even with virtualization you never really got that high. People would talk about it, but are you really able to sustain that in real-world workloads? >>Yeah, I think when you make your exchangeable currency into small pieces, you can insert them into many areas. We have one customer that was running 18 containers on a single server, and each of those containers is what in the early days would have been a whole machine's workload, now modernized into microservices. So if you actually build these microservices, and you have the anti-affinity rules and the resource allocations set correctly, you can bin-pack these things extremely well. We have seen this. Again, it's not a guarantee; it all depends on your application, and as engineers we always want to understand how this can be supported. But it is a very modern utilization of the platform with the data, and once you know where the data is, it becomes very easy to match those. >>Now, the other piece of the value proposition that I heard, Robert, is that it's basically an integrated stack, so I don't have to cobble together a bunch of open source components; it's there. There are legal implications, and there are obviously performance implications. I would imagine that resonates particularly with the enterprise buyer, because they don't have the time to do all this integration. >>That's a very good point. So there is an interesting tension: enterprises want open source, so there is no lock-in.
But they also need help to implement and deploy and manage it, because they don't have the expertise. And we all know that Kubernetes has actually brought that API and PaaS-layer standardization. So what we have done is give you the open source, so you write to the Kubernetes APIs you're happy with, but at the same time the orchestration, persistent storage, the data fabric, and the AI algorithms are all bolted into it. And on top of that, it's available both as licensed software you run on-prem, and the same software runs on GreenLake, so you can pay as you go, and we run it for them in a colo or in their own data center. >>Oh, good, that was one of my later questions. So I can get this as a service, pay by the drink, essentially. I don't have to install a bunch of stuff on-prem and pay... >>A perpetual license. We announced the container platform as a service and ML Ops as a service at the last Discover, and now it's gone to production. So both are available: you can run ML Ops on-prem on top of the Ezmeral Container Platform, or you can run it inside of GreenLake. >>Robert, are there any specific use case patterns that you see emerging amongst customers? >>Yeah, absolutely, there are a couple of them. We have a really nice relationship with the Splunk operator that's out there today. Splunk containerized their operator, and that operator is the number one operator for Splunk on the IT operations and notifications side as well as on the security operations side. We found that it runs highly effectively on top of Ezmeral, on top of the platforms that Kumar just talked about. But I want to give a little bit of background on that same operator platform. What the Ezmeral platform has done is make it highly available, active-active, at five nines of availability for that same Splunk operator, on premises, on open source Kubernetes, which is, as far as I'm concerned, very, very high-end computer science work; you understand how difficult that is. That's number one. Number two, you'll see Spark, Spark workloads as a whole. Nobody handles Spark workloads like we do. We put a container around them and we put them inside the pipeline, moving people through that basic ML and AI pipeline of getting a model through its system, through its training, and then actually deployed through our ML Ops pipeline. That's a key fundamental for delivering value in the data space as well. And then, lastly, and this is really important, when you think about the data fabric that we offer, the data fabric itself doesn't necessarily have to be bolted to the container platform. The data fabric can be deployed underneath a number of our competitors' platforms that don't handle data well, and we know they don't handle it very well at all. We get lots and lots of calls from people saying, hey, can you take your Ezmeral Data Fabric and solve my large-scale, highly challenging data problems? We say yes. And then when you're ready for a real, enterprise-ready container platform, we'd be happy to oblige. >>So you're saying, if I'm inferring correctly, that one of the values is you're simplifying that whole data pipeline and the whole data science, science project, no pun intended, I guess. >>Absolutely. >>So where does the customer start? I mean, what are the engagements like?
Um, what's the starting point? >>HPE has probably been one of the most trusted enterprise suppliers for many, many years, and we have a phenomenal workforce on both fronts; Pointnext is one of the world's leading support organizations. There are many places to start. The obvious one is that all these services are available on GreenLake, as we just talked about, so they can start on a pay-as-you-go basis. We have many customers, some of them grandfathered from the early days of BlueData and MapR, that are already running, and they can build on that as they move into their next-generation modernization. You can start with something as simple as the Ezmeral Container Platform with persistent storage, compare it to your existing operation, and start working for as little as $10. And finally, with a big enterprise company like HPE, with Pointnext services, it's very easy for customers to get that support on the day-two operations. >>Thank you for watching, everybody. This is Dave Vellante for theCUBE. Keep it right there for more great content from Ezmeral Day.
SUMMARY :
Christensen is the vice president strategy in the office of the C, T. O. And Kumar Srikanth is the chief technology Thanks for having us. Great to see you guys. It has been foresight, as many of our customers similar, So it's the think of Yeah, it's an interesting to talk about platform, so it's not a lot of times people think product, So as the customers are thinking of this civilization, so by the way, s moral, I think emerald in Spain, I think in the culture it also has immunity When you talk to customers, what problems do you probe for that that is immoral. And so the is moral platform is those multiple pieces that I was talking about stacked together So the magic is if you want to see the secrets of is all the efforts What is the role of machine intelligence They're taking the data to the cloud like if you go, it's a cloud like experience that I mean, you know, I want to understand the business impact, But the thing we find in as moral, which is so impactful, So the cost is clearly really 90 plus percent of the platform with the data and once you know where the data is, The other piece of the value proposition that I heard Robert is it's basically an integrated stack, on the Green Lake so you can actually pay as you go and you don't we by the drink. You can run it on friends on the top of Admiral Container platform or you can run inside of the the container platform to container at the actual data. data pipeline and the whole data science science project? It's being is probably one of the most trusted enterprise supplier for many, Thank you for watching everybody's day volonte for the Cube.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Robert | PERSON | 0.99+ |
Spain | LOCATION | 0.99+ |
95% | QUANTITY | 0.99+ |
18 containers | QUANTITY | 0.99+ |
$10 | QUANTITY | 0.99+ |
February | DATE | 0.99+ |
Edinburgh | LOCATION | 0.99+ |
Splunk | ORGANIZATION | 0.99+ |
two | QUANTITY | 0.99+ |
Robert Christensen | PERSON | 0.99+ |
Katie | PERSON | 0.99+ |
98% | QUANTITY | 0.99+ |
Hewlett Packard Enterprise | ORGANIZATION | 0.99+ |
Kumar | PERSON | 0.99+ |
Kumar Srikanth | PERSON | 0.99+ |
HP | ORGANIZATION | 0.99+ |
each | QUANTITY | 0.99+ |
Second | QUANTITY | 0.99+ |
H P E | ORGANIZATION | 0.99+ |
both | QUANTITY | 0.99+ |
T. O. | PERSON | 0.99+ |
PowerPoint | TITLE | 0.99+ |
one customer | QUANTITY | 0.99+ |
tomorrow | DATE | 0.98+ |
90 plus percent | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
one | QUANTITY | 0.98+ |
Emerald Berlin | PERSON | 0.98+ |
second side | QUANTITY | 0.98+ |
HPE | ORGANIZATION | 0.97+ |
three things | QUANTITY | 0.96+ |
Esmeralda | PERSON | 0.96+ |
Esmeralda MLS | ORGANIZATION | 0.96+ |
Prem | ORGANIZATION | 0.95+ |
single server | QUANTITY | 0.94+ |
HP E. | ORGANIZATION | 0.94+ |
three unique characteristics | QUANTITY | 0.93+ |
Spanish | OTHER | 0.93+ |
number two | QUANTITY | 0.93+ |
one brand | QUANTITY | 0.91+ |
three fundamental things | QUANTITY | 0.89+ |
Merrill | ORGANIZATION | 0.85+ |
Number one | QUANTITY | 0.83+ |
couple | QUANTITY | 0.83+ |
Green Lake | ORGANIZATION | 0.83+ |
90 92 93% | QUANTITY | 0.8+ |
Number two | QUANTITY | 0.8+ |
Ezmeral | LOCATION | 0.8+ |
Bay | LOCATION | 0.79+ |
HB | ORGANIZATION | 0.77+ |
Gravity | ORGANIZATION | 0.77+ |
Admiral | ORGANIZATION | 0.74+ |
Ization | TITLE | 0.73+ |
rob | PERSON | 0.7+ |
95 nines | QUANTITY | 0.68+ |
Indians | PERSON | 0.68+ |
Discover | ORGANIZATION | 0.67+ |
Green | ORGANIZATION | 0.64+ |
number one | QUANTITY | 0.61+ |
Cube | COMMERCIAL_ITEM | 0.57+ |
Ezmeral Day | EVENT | 0.55+ |
2021 | DATE | 0.55+ |
Esmeralda | ORGANIZATION | 0.54+ |
Container | ORGANIZATION | 0.5+ |
Admiral | TITLE | 0.49+ |
A Day in the Life of a Data Scientist
>>Hello, everyone. Welcome to the Day in the Life of a Data Scientist talk. My name is Terry Chang. I'm a data scientist for the Ezmeral Container Platform team. And with me in the chat room, moderating the chat, I have Matt Maccaux as well as Doug Tackett, and we're going to dive straight into what we can do with the Ezmeral Container Platform and how we can support the role of a data scientist. >>A quick agenda. I'm going to do some introductions and set the context of what we're going to talk about, and then we're actually going to dive straight into the Ezmeral Container Platform. We're going to walk through what a data scientist will do, pretty much a day in the life of a data scientist, and then we'll have some question and answer. So big data has been the talk within the last few years, the last decade or so, and with big data there are a lot of ways to derive meaning. A lot of businesses are trying to optimize every decision in their applications by utilizing data. Previously there was a lot of focus on data analytics, but recently we've seen a lot of data being used for machine learning: taking any data they can and sending it off to the data scientists to start doing some modeling and prediction. >>So that's where we're seeing modern businesses rooted in analytics, and data science in itself is a team sport. We're seeing that we need more than data scientists to do all this modeling. We need data engineers to take the data, massage the data, and do some data manipulation in order to get it right for the data scientists. We have data analysts who are monitoring the models, and we have the data scientists themselves who are building and iterating through multiple different models until they find one that is satisfactory to the business needs. Once they're done, they can send it off to the software engineers who will actually build it into their application, whether it's a mobile app or a web app. And then we have the operations team assigning the resources and monitoring it as well. >>So we're really seeing data science as a team sport, and it does require a lot of different expertise. Here's the basic machine learning pipeline that we see in the industry now. At the top we have the training environment, and this is an entire loop. We'll have some registration, we'll have some inferencing, and at the center of all of this is the data prep, as well as your repositories, for your data and for any of your GitHub repositories, things of that sort. So the machine learning industry follows this very basic pattern, and at a high level I'll glance through this very quickly, but this is what the machine learning pipeline will look like on the Ezmeral Container Platform. At the top left we'll have our project repository, which is our persistent storage. >>We'll have some training clusters, we'll have a notebook, we'll have an inference deployment engine and a REST API, which is all sitting on top of a Kubernetes cluster. And the benefit of the container platform is that this is all abstracted away from the data scientist. So I will actually go straight into that. So just to preface, before we go into the Ezmeral Container Platform: what we're going to look at is a machine learning example problem, trying to predict how long a specific taxi ride will take. With a Jupyter notebook, the data scientist can take all of this data, do their data manipulation, train a model on a specific set of features, such as the location and duration of past taxi rides, and then use that model to figure out what kind of prediction we can get for a future taxi ride.
>>So that's the example that we will talk through today. I'm going to hop out of my slides and jump into my web browser. Let me zoom in on this. Here I have a Jupyter environment, and this is all running on the container platform. All I need is this link and I can access my environment. So as a data scientist, I can grab this link from my IT admin or my system administrator and quickly start iterating and coding. On the left-hand side of the Jupyter environment we have a file directory structure. This is already synced up to my Git repository, which I will show in a little bit on the container platform, so I can quickly pull any files that are in my GitHub repository. I can even push with a button here, and I can open up this Python notebook. >>With all the unique features of the Jupyter environment, I can start coding. Each of these cells can run Python code, and in particular, the Ezmeral Container Platform team has built our own in-house line magic commands. These are unique commands that we can use to interact with the underlying infrastructure of the container platform. The first line magic command that I want to mention is called %attachments. When I run this command, I'll get the available training clusters that I can send training jobs to. This specific notebook has pretty much been created for me to iterate and develop a model very quickly. I don't have to use all the resources, and I don't have to allocate a full set of GPU boxes to my little Jupyter environment. With the training cluster, I can attach these individual data science notebooks to those training clusters, and the data scientists can utilize those resources as a shared environment. >>So essentially, the large eight-GPU box can be shared; it doesn't have to be allocated to a single data scientist. Moving on, we have another magic command, a cell magic called %%python training. This is how we're going to utilize that training cluster. I will prepare the cell with the double percent sign and the name of the training cluster, and this tells the notebook to send the entire cell to be trained on the resources of that training cluster. So the data scientist can quickly iterate on a model, then format that model and all that code into a large cell and send it off to the training cluster. Because the training cluster is actually located somewhere else, it has no context of what has been done locally in this notebook, so we're going to have to copy everything into one large cell. So as you see here, I'm going to be importing some libraries and start defining some helper functions. I'm going to read in my dataset, and following the typical data science modeling life cycle, we're going to have to take in the data.
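To make that concrete, here is a rough sketch of what those notebook cells might look like. The magic command names come from the talk, but the exact syntax, the cluster name, and the output shown are assumptions, not the platform's documented interface.

```python
# Cell 1: list the training clusters this notebook is allowed to submit work to.
# (Line magic named in the talk; the output below is hypothetical.)
%attachments
# ['gpu-training-cluster']

# Cell 2: run the entire cell remotely on the named training cluster instead of
# in the local notebook kernel. The talk describes a "percent percent python
# training" cell magic followed by the cluster name; the spelling here is a guess.
%%python_training gpu-training-cluster
print("This code executes on the shared GPU training cluster, not locally.")
```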
We're going to have to do some data pre-processing. So maybe the data scientists will do this. Maybe the data engineer will do this, but they have access to that data. So I'm here. I'm actually getting there to be reading in the data from the project repository. And I'll talk about this a little bit later with all of the clusters within the container platform, we have access to some project repository that has been set up using the underlying data fabric. So with this, I have, uh, some data preprocessing, I'm going to cleanse some of my data that I noticed that maybe something is missing or, uh, some data doesn't look funky. >>Maybe the data types aren't correct. This will all happen here in these cells. So once that is done, I can print out that the data is done cleaning. I can start training my model. So here we have to split our data, set into a test, train, uh, data split so that we have some data for actually training the model and some data to test the model. So I can split my data there. I could create my XG boost object to start doing my training and XG boost is kind of like a decision tree machine learning algorithm, and I'm going to fit my data into this, uh, XG boost algorithm. And then I'm going to do some prediction. And then in addition, I'm actually going to be tracking some of the metrics and printing them out. So these are common metrics that we, that data scientists want to see when they do their training of the algorithm. >>Just to see if some of the accuracy is being improved, if the loss is being improved or the mean absolute error. So things like that. So these are all things, data scientists want to see. And at the end of this training job, I'm going to be saving the model. So I'm going to be saving it back into the project repository in which we will have access to. And at the end, I will print out the end time so I can execute that cell. And I've already executed that cell. So you'll see all of these print statements happening here. So importing the libraries, the training was run reading and data, et cetera. All of this has been printed out from that training job. Um, and in order to access that, uh, kind of glance through that, we would get an output with a unique history URL. >>So when we send the training job to that training cluster, we'll the training cluster will send back a unique URL in which we'll use the last line magic command that I want to talk about called percent logs. So percent logs will actually, uh, parse out that response from the training cluster. And actually we can track in real time what is happening in that training job so quickly, we can see that the data scientist has a sandbox environment available to them. They have access to their get repository. They have access to a project repository in which they can read in some of their data and save the model. So very quick interactive environment for the data scientists to do all of their work. And it's all provisioned on the ASML container platform. And it's also abstracted away. So here, um, I want to mention that again, this URL is being surfaced through the container platform. >>The data scientist doesn't have to interact with that at all, but let's take, it's take a step back. Uh, this is the day to day in the life of the data scientists. Now, if we go backwards into the container platform and we're going to walk through how it was all set up for them. So here is my login page to the container platform. 
I'm going to log in as my user, and this brings me to the view of the ML Ops tenant within the container platform. This is where everything has been set up for me; the data scientist doesn't have to see this if they don't need to, but what I'll walk through now are the topics I mentioned previously. First is the project repository. The project repository comes with each tenant that is created on the platform. >>This is nothing more than a shared, collaborative workspace in which any data scientist allocated to this tenant can work. They have a POSIX client that lets them visually see all of their data and all of their code, and this is actually taking a piece of the underlying data fabric and using it for your project repository. You can see here I have some code, I can see my scoring script, and I can see the models that have been created within this tenant. So it's a powerful tool in which you can store your code, store any of your data, and have the ability to read and write from any of your Jupyter environments or any of your created clusters within this tenant. A very handy addition that lets you quickly interact with your data. >>The next thing I want to show is the source control. Here is where you plug in all of the information for your source control, and if I edit this, you will see the information that I've passed in to configure it. On the back end, the container platform takes these credentials and connects the Jupyter notebooks you create within this tenant to that Git repository. This is the information that I've passed in; if GitHub is not of interest, we also have support for Bitbucket here as well. Next, I want to show you that we have these notebook environments. The notebook environment was created here, and you can see that I have a notebook called Terry-notebook, all running on the Kubernetes environment within the container platform. Either the data scientist can come here and create their notebook, or their project admin can create it for them. >>All you have to do is come to these notebook endpoints, and the container platform will map the notebook to a specific port so you can just give this link to the data scientist. That link brings them to their own Jupyter environment, and they can start doing all of their modeling just as I showed in the previous Jupyter environment. Next I want to show the training cluster. This is the training cluster that was created, which I can attach my notebook to in order to utilize those training resources. And the last thing I want to show is the deployment cluster. Once a model has been saved, we have a model registry in which we can register the model into the platform, and then the last step is to create a deployment cluster. Here on my screen I have a deployment cluster called taxi-deployment. >>All these serving endpoints have been configured for me, and most importantly, this model endpoint. The deployment cluster actually wraps the trained model with a Flask wrapper and adds a REST endpoint to it, so I can quickly operationalize my model by taking this endpoint and creating a curl command or a POST request. Here I have my trusty Postman tool in which I can format a POST request.
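To make the Flask-wrapper idea concrete, here is a rough sketch of the kind of scoring service a deployment cluster puts around a saved model, followed by the Python equivalent of the Postman call against it. This is not the platform's actual generated code; the route, port, model path, and field names are assumptions that line up with the earlier training sketch.

```python
from flask import Flask, jsonify, request
import pandas as pd
import xgboost as xgb

app = Flask(__name__)

# Load the model that the training step saved into the project repository
# (hypothetical path, matching the training sketch above).
model = xgb.XGBRegressor()
model.load_model("/project_repo/models/taxi_duration.xgb")

@app.route("/model/predict", methods=["POST"])
def predict():
    # Expect a JSON body containing the same feature columns used at training time.
    features = pd.DataFrame([request.get_json()])
    duration = float(model.predict(features)[0])
    return jsonify({"predicted_duration_seconds": duration})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```

And the client side, mirroring the Postman request described in the demo (the endpoint URL is a placeholder; the real value comes from the deployment cluster's serving-endpoints screen):

```python
import requests

ENDPOINT = "https://gateway.example.com:10001/model/predict"  # placeholder

ride = {
    "pickup_hour": 17,
    "pickup_longitude": -73.99,
    "pickup_latitude": 40.73,
    "dropoff_longitude": -73.87,
    "dropoff_latitude": 40.77,
    "passenger_count": 2,
}

resp = requests.post(ENDPOINT, json=ride, timeout=30)
resp.raise_for_status()
print(resp.json())  # e.g. {"predicted_duration_seconds": 2600.0}
```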
So I've taken that endpoint from the container platform and formatted my body right here. These are the features that I want to send to the model: I want to know how long a taxi ride at this location, at this time of day, would take. I can go ahead and send that request, and then quickly I get an output: the ride duration will be about 2,600 seconds. So we've walked through how a data scientist can quickly interact with their notebook and train their model, and then, coming into the platform, we saw the project repository and the source control, we can register the model within the platform, and then quickly we can operationalize that model with our deployment cluster and have it up and running and available for inference. So that wraps up the demo. I'm going to pass it back to Doug and Matt and see if they want to come off mute and whether there are any questions. Matt, Doug, are you there? >>Hey Terry, sorry, just had some trouble getting off mute there. That was an excellent presentation. I think there are generally some questions that come up when I talk to customers around how integrated into the Kubernetes ecosystem this capability is, and where Ezmeral stops and open source technologies like Kubeflow, as an example, begin. >>Yeah, sure, Matt. So this is kind of one layer up. We have our ML Ops tenant, and this is all running on a piece of a Kubernetes cluster. If I log back out and go into the site admin view, this is where you would see all the Kubernetes clusters being created, and it's all abstracted away from the data scientists. They don't have to know Kubernetes; they just interact with the platform if they want to. But here in the site admin view I have this Kubernetes dashboard, and on the left-hand side I have all my Kubernetes sections. If I just add some compute hosts, whether they're VMs or cloud compute hosts like EC2 hosts, we can have these resources abstracted away from us to then create a Kubernetes cluster. Moving on down, I have created this Kubernetes cluster utilizing those resources. >>If I go ahead and edit this cluster, you'll see that I have these hosts, and with a click-and-drop method I can move different hosts around to configure my Kubernetes cluster. Once my Kubernetes cluster is configured, I can then create a Kubernetes tenant, in this case a namespace. Once I have this namespace available, I can go into that tenant, and as my user I don't actually see that it is running on Kubernetes. In addition, with our ML Ops tenants you have the ability to bootstrap Kubeflow. Kubeflow is an open source machine learning framework that runs on Kubernetes, and we have the ability to link that up as well. So, coming back to my ML Ops tenant, I can log in to what I showed, which is the Ezmeral Container Platform version of ML Ops, but you see here we've also integrated Kubeflow. So a nod to HPE's contribution to utilizing open source; it's all configured within our platform. >>Yeah, actually, Terry, can you hear me? It's Doug. So there were a couple of other questions that came in about Kubeflow. I wonder whether you could just comment on why we've chosen Kubeflow.
Because I know there was a question about MLflow instead, and what the differences are between MLflow and Kubeflow. >>Yeah, so obviously one of the people watching saw the Kubeflow dashboard there, I guess, and couldn't help but get excited about it. But there was another question about MLflow versus Kubeflow and what the difference is between them. >>Yeah. So Kubeflow is an open source framework that Google developed. It's a very powerful framework that comes with a lot of other unique tools on Kubernetes. With Kubeflow, you have the ability to launch other notebooks, and you have the ability to utilize different Kubernetes operators like TensorFlow and PyTorch. You can utilize some of the frameworks within Kubeflow to do training, like Kubeflow Pipelines, which visually allow you to see your training jobs within Kubeflow. It also has a plethora of different serving mechanisms, such as Seldon, for deploying your machine learning models; you have KFServing, you have TF Serving. So Kubeflow is a very powerful tool for data scientists who want a full end-to-end open source stack and know how to use Kubernetes. It's just another way to do your machine learning model development. MLflow is actually a different piece of the machine learning pipeline: MLflow mainly focuses on model experimentation and comparing different models during training, and it can be used together with Kubeflow. >>They're complementary, Terry, I think is what you're saying. Sorry, I know we are dramatically running out of time now, so that was a really fantastic demo. Thank you very much indeed. >>Exactly. Thank you. So yeah, I think that wraps it up. One last thing I want to mention: there is a slide that I want to show in case you have any other questions. You can visit hpe.com/ezmeral and hpe.com/containerplatform if you have any questions. And that wraps it up, so thank you guys.
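For readers unfamiliar with the Kubeflow Pipelines mentioned in the Q&A, the sketch below shows roughly what a pipeline looks like in the kfp SDK (v1-style API). The component body, names, and paths are invented for illustration and are not part of the Ezmeral demo itself.

```python
from kfp import dsl
from kfp.components import create_component_from_func

def train_taxi_model(data_path: str) -> str:
    # Placeholder training step; a real component would train and persist a model.
    print(f"training on {data_path}")
    return "model-v1"

# Wrap the function as a containerized pipeline component.
train_op = create_component_from_func(train_taxi_model, base_image="python:3.9")

@dsl.pipeline(name="taxi-duration-training",
              description="Toy pipeline with a single containerized training step.")
def taxi_pipeline(data_path: str = "/data/taxi_rides.csv"):
    train_op(data_path)

# Compiled and submitted with the kfp SDK against a Kubeflow Pipelines endpoint, e.g.:
#   import kfp
#   kfp.Client(host="<kubeflow-pipelines-url>").create_run_from_pipeline_func(
#       taxi_pipeline, arguments={})
```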
SUMMARY :
I'm a data scientist for the ASML container platform team. So I'm going to do some introductions and kind of set the context of what we're going to talk about. the models, and we even have the data scientists themselves who are building and iterating So at the top left, we'll have our, our project depository, which is our, And the benefit of the container platform is that this is all abstracted away from the data scientist. So that's the example that we will talk through today. So the first line magic command that I want to mention is this command called percent attachments. So the data scientists can quickly iterate through a model. So maybe the data scientists will do this. So once that is done, I can print out that the data is done cleaning. So I'm going to be saving it back into the project repository in which we will So here, um, I want to mention that again, this URL is being So here is my login page to the container So this is a more, nothing more than a shared collaborative workspace environment in So on the backend, the container platform will take these credentials and connect So once that model has been saved, we have a model registry in which we can register So I've taken that end point from the container platform. So that wraps up the demo. And the open source, uh, technologies like, um, cube flow as an example, So moving on down, I have created this Kubernetes cluster So once I have this namespace available, So there were a couple of other questions actually So the, just to reiterate, there are some questions about QP flow and I'm just, But there was another question about whether, you know, ML flow versus cube flow and So with Q flow, you really have the ability to launch So that was really fantastic demo. So thank you guys.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Doug | PERSON | 0.99+ |
Doug Tackett | PERSON | 0.99+ |
Terry Chang | PERSON | 0.99+ |
Terry | PERSON | 0.99+ |
Tara | PERSON | 0.99+ |
Matt | PERSON | 0.99+ |
Python | TITLE | 0.99+ |
ORGANIZATION | 0.99+ | |
Matt MCO | PERSON | 0.99+ |
Jupiter | LOCATION | 0.99+ |
Kubernetes | TITLE | 0.99+ |
first line | QUANTITY | 0.98+ |
each | QUANTITY | 0.98+ |
GitHub | ORGANIZATION | 0.98+ |
today | DATE | 0.98+ |
first | QUANTITY | 0.98+ |
about 2,600 seconds | QUANTITY | 0.97+ |
Q4 | TITLE | 0.97+ |
A Day in the Life of a Data Scientist | TITLE | 0.97+ |
hp.com/asml | OTHER | 0.97+ |
last decade | DATE | 0.97+ |
one layer | QUANTITY | 0.95+ |
hp.com/container | OTHER | 0.92+ |
single data | QUANTITY | 0.91+ |
Emma | PERSON | 0.91+ |
one large cell | QUANTITY | 0.91+ |
each tenant | QUANTITY | 0.88+ |
one | QUANTITY | 0.84+ |
one last thing | QUANTITY | 0.81+ |
Q flow | TITLE | 0.8+ |
Emma | TITLE | 0.8+ |
ESMO | ORGANIZATION | 0.76+ |
last few years | DATE | 0.74+ |
one of | QUANTITY | 0.73+ |
day | QUANTITY | 0.72+ |
eight GPU | QUANTITY | 0.7+ |
Seldin | TITLE | 0.69+ |
Q4 | DATE | 0.67+ |
percent percent | OTHER | 0.65+ |
Ezreal | ORGANIZATION | 0.65+ |
some questions | QUANTITY | 0.65+ |
ASML | TITLE | 0.65+ |
ASML | ORGANIZATION | 0.61+ |
people | QUANTITY | 0.49+ |
ETQ | TITLE | 0.46+ |
Teri | ORGANIZATION | 0.4+ |
Emma | ORGANIZATION | 0.35+ |
Boost Your Solutions with the HPE Ezmeral Ecosystem Program | HPE Ezmeral Day 2021
>> Hello. My name is Ron Kafka, and I'm the senior director for Partner Scale Initiatives for HPE Ezmeral. Thanks for joining us today at Analytics Unleashed. By now, you've heard a lot about the Ezmeral portfolio and how it can help you accomplish objectives around big data analytics and containerization. I want to shift gears a bit and discuss our Ezmeral Technology Partner Program. I've got two great guest speakers here with me today, and together we're going to discuss how jointly we are solving data analytic challenges for our customers. Before I introduce them, I want to take a minute to provide a little more insight into our ecosystem program. We created this program based on the realization, from customer feedback, that even the most mature organizations are struggling with their data-driven transformation efforts. It turns out this is largely due to the pace of innovation with application vendors, or ISVs, supporting data science and advanced analytic workloads. Their advancements are simply outpacing organizations' ability to move workloads into production rapidly. Bottom line, organizations want a unified experience across environments where their entire application portfolio in essence provides a comprehensive application stack, and not piece parts. So, let's talk about how our ecosystem program helps solve for this. For starters, we are leveraging HPE's long track record of forging technology partnerships, and we created a best-in-class ISV partner program specific to the Ezmeral portfolio. We are doing this by developing an open-concept marketplace where customers and partners can explore, learn, engage and collaborate with our strategic technology partners. This enables our customers to adopt and deploy validated applications from industry-leading software vendors on HPE Ezmeral with a high degree of confidence. Also, it provides a very deep bench of leading ISVs for other groups inside of HPE to leverage for their solutioning efforts. Speaking of industry-leading ISVs, it's about time I introduce you to two of those industry leaders right now. Let me welcome Daniel Hladky from Dataiku, and Omri Geller from Run:AI. So I'd like to introduce Daniel Hladky. Daniel is with Dataiku. He's a great partner for HPE. Daniel, welcome. >> Thank you for having me here. >> That's great. Hey, would you mind just talking a bit about how your partnership journey has been with HPE? >> Yes, with pleasure. So the journey started about five years ago, and in 2018 we signed a worldwide reseller agreement with HPE. And in 2020, we actually started to work jointly on the integration between the Dataiku Data Science Studio, called DSS, and the Ezmeral Container Platform, and it was a great success, built on behalf of some clear customer projects. >> It's been a long partnership journey with you for sure with HPE, and we welcome your partnership extremely well. Just a brief question about the Container Platform and really what that's meant for Dataiku. >> Yes, Ron, thanks. So basically I'd like to quote Florian Douetteau, the CEO of Dataiku, who said that the combination of Dataiku with the HPE Ezmeral Container Platform will help customers successfully scale and put machine learning projects into production, and this is going to deliver real impact for their business. So the combination of the two of us is a great success. >> That's great.
Can you talk about what Dataiku is doing and how the HPE Ezmeral Container Platform fits into the solution offering a bit more? >> Great. So basically Dataiku DSS is our product, an end-to-end data science platform, and it brings value to customers' projects on their path to enterprise AI. In simple terms, that could be as simple as building data pipelines, but it could also be very complex, with machine and deep learning models at scale. The fast track to value is through collaboration, orchestration of the underlying technologies, and getting the models into production. All of that is part of the Data Science Studio, and Ezmeral fits perfectly into the part where we design and then put those projects at scale and into production. >> That's perfect. Can you be a bit more specific about how you see HPE and Dataiku really tightening up a customer outcome and value proposition? >> Yes. What we see is the challenge in the market that probably about 80% of the use cases really never make it to production. This is of course a big challenge, and we need to change that, and I think the combination of the two of us is addressing exactly this need. As part of the MLOps approach, Dataiku and the Ezmeral Container Platform provide a frictionless approach, which means that without scripting and coding, customers can put all those projects into the production environment, don't have to worry anymore, and can be more business oriented. >> That's great. So you mentioned you're seeing customers be a lot more mature with their AI workloads and deployment. What do you suggest for the other customers out there that are just starting this journey, or just thinking about how to get started? >> Yeah, that's a very good question, Ron. What we see there is the challenge that people need to go along a path of maturity. This starts with simple data pipelines, et cetera, and then they move up the ladder and build large, complex projects. And here I see a very interesting offer coming now from HPE called D3S, the data science startup pack. That's something I discussed together with HPE back in early 2020. Basically, it covers three stages, which are explore, experiment and evolve, and it builds MVPs quickly for the customers. By doing so, you address the business objectives, lay out the proper architecture, and also set up the proper organization around it. So this is a great combination by HPE and Dataiku through D3S. >> And it's a perfect example of what I mentioned earlier about leveraging the ecosystem program that we built to do deeper solutioning efforts inside of HPE, in this case with our AI business unit. So, congratulations on that, and thanks for joining us today. I'm going to shift gears. I'm going to bring in Omri Geller from Run:AI. Omri, welcome. It's great to have you. You guys are killing it out there in the market today, and I just thought we could spend a few minutes talking about what is so unique and differentiated about your offerings. >> Thank you, Ron. It's a pleasure to be here. Run:AI creates a virtualization and orchestration layer for AI infrastructure. We help organizations gain visibility and control over their GPU resources and help them deliver AI solutions to market faster. And we do that by managing granular scheduling, prioritization, and allocation of compute power, together with the HPE Ezmeral Container Platform. >> That's great.
And your partnership with HPE is a bit newer than Daniel's, right? Maybe about the last year or so we've been working together a lot more closely. Can you just talk about the HPE partnership, what it's meant for you, and how you see it impacting your business? >> Sure. First of all, Run:AI is excited to partner with the HPE Ezmeral Container Platform and help customers manage GPUs for their AI workloads. We chose HPE since HPE has years of experience partnering on AI use cases and outcomes with vendors who have a strong footprint in these markets. HPE works with many partners that are complementary for our use case, such as Nvidia, and the HPE Ezmeral Container Platform together with Run:AI and Nvidia delivers world-class solutions for AI-accelerated workloads. And as you can understand, for AI, speed is critical. Companies want to get important AI initiatives into production as soon as they can, and the HPE Ezmeral Container Platform, running our GPU orchestration solution, enables that through dynamic provisioning of GPUs, so that resources can be easily shared, efficiently orchestrated and optimally used. >> That's great. And you talked a lot about the efficiency of the solution. What about from a customer perspective? What is the real benefit that our customers are going to be able to gain from an HPE and Run:AI offering? >> So first, it is important to understand how data scientists and AI researchers actually build solutions. They do it by running experiments, and if a data scientist is able to run more experiments per given time, they will get to the solution faster. With the HPE Ezmeral Container Platform and Run:AI, users such as data scientists can do exactly that, seamlessly and efficiently consuming large amounts of GPU resources, running more experiments per given time, and therefore accelerating their research. Together, we actually saw a customer that is running almost 7,000 jobs in parallel over GPUs, with efficient utilization of those GPUs. And by running more experiments, those customers can be much more effective and efficient when it comes to bringing solutions to market. >> Couldn't agree more, and I think we're starting to see a lot of joint success together as we go out and tell the story. Hey, I want to thank you both one last time for being here with me today. It was very enlightening for our team to have you as part of the program, and I'm excited to extend this customer value proposition out to the rest of our communities. With that, I'd like to close today's session. I appreciate everyone's time, and keep an eye out on our ISV marketplace for Ezmeral. We're continuing to expand and add new capabilities and new partners to our marketplace. We're excited to do a lot of great things and help you all be successful. Thanks for joining. >> Thank you, Ron. >> What a great panel discussion, and these partners really do have a good understanding of the possibilities of working on the platform; I hope and expect we'll see this ecosystem continue to grow. That concludes the main program, which means you can now pick one of three live demos to attend and chat live with experts. Those three include a day in the life of an IT admin, a day in the life of a data scientist, and even a day in the life of the HPE Ezmeral Data Fabric, where you can see the many ways the data fabric is used in your life today. Wish you could attend all three? No worries, the recordings will be available on demand for you and your teams.
Moreover, the show doesn't stop here. HPE has a growing and thriving tech community; you should check it out. It's really a solid starting point for learning more, talking to smart people about great ideas, and seeing how Ezmeral can be part of your own data journey. Again, thanks very much to all of you for joining. Until next time, keep unleashing the power of your data.
SUMMARY :
and how it can help you Hey, would you mind just talking a bit and integrated that with the and really what that's meant for Dataiku. So, basically I'd like the quote here Florian Douetteau, and how HPE Ezmeral Container Platform and the models in production. about how you see HPE and and the Ezmeral Container Platform or just thinking about how to get started? and builds quickly MVPs for the customers. and differentiated from your offerings. and control over their GPO resources and how do you see it and HPE Container Platform together with Run:AI efficiency of the solution. So first, it is important to understand for our team to have you and even a day in the life of
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Daniel | PERSON | 0.99+ |
Ron Kafka | PERSON | 0.99+ |
Ron | PERSON | 0.99+ |
Omri Geller | PERSON | 0.99+ |
Florian Douetteau | PERSON | 0.99+ |
HPE | ORGANIZATION | 0.99+ |
Daniel Hladky | PERSON | 0.99+ |
Dataiku | ORGANIZATION | 0.99+ |
two | QUANTITY | 0.99+ |
2020 | DATE | 0.99+ |
Nvidia | ORGANIZATION | 0.99+ |
2018 | DATE | 0.99+ |
DSS | ORGANIZATION | 0.99+ |
one | QUANTITY | 0.99+ |
last year | DATE | 0.99+ |
today | DATE | 0.99+ |
three | QUANTITY | 0.99+ |
early 2020 | DATE | 0.99+ |
first | QUANTITY | 0.98+ |
Data Science Studio | ORGANIZATION | 0.98+ |
Ezmeral | PERSON | 0.98+ |
Ezmeral | ORGANIZATION | 0.98+ |
Dataiku Data Science Studio | ORGANIZATION | 0.97+ |
three live demos | QUANTITY | 0.97+ |
both | QUANTITY | 0.97+ |
about 80% | QUANTITY | 0.96+ |
HPEs | ORGANIZATION | 0.95+ |
three stages | QUANTITY | 0.94+ |
two great guest speakers | QUANTITY | 0.93+ |
Omri | PERSON | 0.91+ |
Analytics Unleashed | ORGANIZATION | 0.91+ |
D3S | TITLE | 0.87+ |
almost 7,000 jobs | QUANTITY | 0.87+ |
HPE Container Platform | TITLE | 0.86+ |
HPE Ezmeral Container Platform | TITLE | 0.83+ |
HBE Ezmeral | ORGANIZATION | 0.83+ |
Run | ORGANIZATION | 0.82+ |
Ezmeral Container Platform | TITLE | 0.81+ |
about five years ago | DATE | 0.8+ |
Platform | TITLE | 0.71+ |
Ezmeral | TITLE | 0.7+ |
Run:AI | ORGANIZATION | 0.7+ |
Ezmeral Data | ORGANIZATION | 0.69+ |
2021 | DATE | 0.68+ |
Ezmeral Ecosystem Program | TITLE | 0.68+ |
ICS | ORGANIZATION | 0.67+ |
Run | TITLE | 0.66+ |
Partner Scale Initiatives | ORGANIZATION | 0.66+ |
Boost Your Solutions with the HPE Ezmeral Ecosystem Program | HPE Ezmeral Day 2021
>> Hello. My name is Ron Kafka, and I'm the senior director for Partner Scale Initiatives for HBE Ezmeral. Thanks for joining us today at Analytics Unleashed. By now, you've heard a lot about the Ezmeral portfolio and how it can help you accomplish objectives around big data analytics and containerization. I want to shift gears a bit and then discuss our Ezmeral Technology Partner Program. I've got two great guest speakers here with me today. And together, We're going to discuss how jointly we are solving data analytic challenges for our customers. Before I introduce them, I want to take a minute to talk to provide a little bit more insight into our ecosystem program. We've created a program with a realization based on customer feedback that even the most mature organizations are struggling with their data-driven transformation efforts. It turns out this is largely due to the pace of innovation with application vendors or ICS supporting data science and advanced analytic workloads. Their advancements are simply outpacing organization's ability to move workloads into production rapidly. Bottom line, organizations want a unified experience across environments where their entire application portfolio in essence provide a comprehensive application stack and not piece parts. So, let's talk about how our ecosystem program helps solve for this. For starters, we were leveraging HPEs long track record of forging technology partnerships and it created a best in class ISB partner program specific for the Ezmeral portfolio. We were doing this by developing an open concept marketplace where customers and partners can explore, learn, engage and collaborate with our strategic technology partners. This enables our customers to adopt, deploy validated applications from industry leading software vendors on HPE Ezmeral with a high degree of confidence. Also, it provides a very deep bench of leading ISVs for other groups inside of HPE to leverage for their solutioning efforts. Speaking of industry leading ISV, it's about time and introduce you to two of those industry leaders right now. Let me welcome Daniel Hladky from Dataiku, and Omri Geller from Run:AI. So I'd like to introduce Daniel Hladky. Daniel is with Dataiku. He's a great partner for HPE. Daniel, welcome. >> Thank you for having me here. >> That's great. Hey, would you mind just talking a bit about how your partnership journey has been with HPE? >> Yes, pleasure. So the journey started about five years ago and in 2018 we signed a worldwide reseller agreement with HPE. And in 2020, we actually started to work jointly on the integration between the Dataiku Data Science Studio called DSS and integrated that with the Ezmeral Container platform, and was a great success. And it was on behalf of some clear customer projects. >> It's been a long partnership journey with you for sure with HPE. And we welcome your partnership extremely well. Just a brief question about the Container Platform and really what that's meant for Dataiku. >> Yes, Ron. Thanks. So, basically I like the quote here Florian Douetteau, which is the CEO of Dataiku, who said that the combination of Dataiku with the HPE Ezmeral Container Platform will help the customers to successfully scale and put machine learning projects into production. And this basically is going to deliver real impact for their business. So, the combination of the two of us is a great success. >> That's great. 
Can you talk about what Dataiku is doing and how HPE Ezmeral Container Platform fits in a solution offering a bit more? >> Great. So basically Dataiku DSS is our product which is a end to end data science platform, and basically brings value to the project of customers on their past enterprise AI. In simple ways, we can say it could be as simple as building data pipelines, but it could be also very complex by having machine and deep learning models at scale. So the fast track to value is by having collaboration, orchestration online technologies and the models in production. So, all of that is part of the Data Science Studio and Ezmeral fits perfectly into the part where we design and then basically put at scale those project and put it into product. >> That's perfect. Can you be a bit more specific about how you see HPE and Dataiku really tightening up a customer outcome and value proposition? >> Yes. So what we see is also the challenge of the market that probably about 80% of the use cases really never make it to production. And this is of course a big challenge and we need to change that. And I think the combination of the two of us is actually addressing exactly this need. What we can say is part of the MLOps approach, Dataiku and the Ezmeral Container Platform will provide a frictionless approach, which means without scripting and coding, customers can put all those projects into the productive environment and don't have to worry any more and be more business oriented. >> That's great. So you mentioned you're seeing customers be a lot more mature with their AI workloads and deployment. What do you suggest for the other customers out there that are just starting this journey or just thinking about how to get started? >> Yeah. That's a very good question, Ron. So what we see there is actually the challenge that people need to go on a pass of maturity. And this starts with a simple data pipelines, et cetera, and then basically move up the ladder and basically build large complex project. And here I see a very interesting offer coming now from HPE which is called D3S, which is the data science startup pack. That's something I discussed together with HPE back in early 2020. And basically, it solves the three stages, which is explore, experiment and evolve and builds quickly MVPs for the customers. By doing so, basically you addressed business objectives, lay out in the proper architecture and also setting up the proper organization around it. So, this is a great combination by HPE and Dataiku through the D3S. >> And it's a perfect example of what I mentioned earlier about leveraging the ecosystem program that we built to do deeper solutioning efforts inside of HPE in this case with our AI business unit. So, congratulations on that and thanks for joining us today. I'm going to shift gears. I'm going to bring in Omri Geller from Run:AI. Omri, welcome. It's great to have you. You guys are killing it out there in the market today. And I just thought we could spend a few minutes talking about what is so unique and differentiated from your offerings. >> Thank you, Ron. It's a pleasure to be here. Run:AI creates a virtualization and orchestration layer for AI infrastructure. We help organizations to gain visibility and control over their GPO resources and help them deliver AI solutions to market faster. And we do that by managing granular scheduling, prioritization, allocation of compute power, together with the HPE Ezmeral Container Platform. >> That's great. 
And your partnership with HPE is a bit newer than Daniel's, right? Maybe about the last year or so we've been working together a lot more closely. Can you just talk about the HPE partnership, what it's meant for you and how you see it impacting your business? >> Sure. First of all, Run:AI is excited to partner with the HPE Ezmeral Container Platform and help customers manage GPUs for their AI workloads. We chose HPE since HPE has years of experience partnering on AI use cases and outcomes with vendors who have a strong footprint in this market. HPE works with many partners that are complementary for our use case, such as Nvidia, and the HPE Ezmeral Container Platform together with Run:AI and Nvidia delivers a solution for AI-accelerated workloads. And as you can understand, for AI, speed is critical. Companies want to get their important AI initiatives into production as soon as they can. And the HPE Ezmeral Container Platform, together with Run:AI's GPU orchestration solution, enables that through dynamic provisioning of GPUs so that resources can be easily shared, efficiently orchestrated and optimally used. >> That's great. And you talked a lot about the efficiency of the solution. What about from a customer perspective? What is the real benefit that our customers are going to be able to gain from an HPE and Run:AI offering? >> So first, it is important to understand how data scientists and AI researchers actually build solutions. They do it by running experiments. And if a data scientist is able to run more experiments per given time, they will get to the solution faster. With the HPE Ezmeral Container Platform and Run:AI, users such as data scientists can actually do that, and seamlessly and efficiently consume large amounts of GPU resources, run more experiments per given time and therefore accelerate their research. Together, we actually saw a customer that is running almost 7,000 jobs in parallel over GPUs with efficient utilization of those GPUs. And by running more experiments, those customers can be much more effective and efficient when it comes to bringing solutions to market. >> Couldn't agree more. And I think we're starting to see a lot of joint success together as we go out and tell the story. Hey, I want to thank you both one last time for being here with me today. It was very enlightening for our team to have you as part of the program. And I'm excited to extend this customer value proposition out to the rest of our communities. With that, I'd like to close today's session. I appreciate everyone's time. And keep an eye out on our ISV marketplace for Ezmeral. We're continuing to expand and add new capabilities and new partners to our marketplace. We're excited to do a lot of great things and help you guys all be successful. Thanks for joining. >> Thank you, Ron. (bright upbeat music)
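To make the GPU orchestration discussion above concrete, the sketch below submits a single GPU-backed training job to a Kubernetes cluster using the standard Python client. This is generic Kubernetes code, not Run:AI's or Ezmeral's scheduler API; the image, namespace and job names are hypothetical.

```python
# Minimal sketch: submitting one GPU-backed training job to a Kubernetes cluster.
# Assumes a reachable cluster with an NVIDIA device plugin; all names are hypothetical.
from kubernetes import client, config

config.load_kube_config()  # reads the local kubeconfig

container = client.V1Container(
    name="trainer",
    image="registry.example.com/ml/trainer:latest",   # hypothetical image
    command=["python", "train.py", "--epochs", "10"],
    resources=client.V1ResourceRequirements(
        limits={"nvidia.com/gpu": "1"}                 # request one GPU
    ),
)

job = client.V1Job(
    api_version="batch/v1",
    kind="Job",
    metadata=client.V1ObjectMeta(name="experiment-001"),
    spec=client.V1JobSpec(
        backoff_limit=0,
        template=client.V1PodTemplateSpec(
            spec=client.V1PodSpec(containers=[container], restart_policy="Never")
        ),
    ),
)

client.BatchV1Api().create_namespaced_job(namespace="data-science", body=job)
print("submitted experiment-001")
```

An orchestration layer of the kind described in the conversation sits above this primitive, queueing many such jobs, prioritizing them and sharing the GPU pool between teams.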
Io-Tahoe Episode 5: Enterprise Digital Resilience on Hybrid and Multicloud
>>From around the globe, it's theCUBE presenting Enterprise Digital Resilience on Hybrid and Multicloud, brought to you by Io-Tahoe. Hello, everyone, and welcome to our continuing series covering data automation brought to you by Io-Tahoe. Today we're gonna look at how to ensure enterprise resilience for hybrid and multicloud. Let's welcome in Ajay Vohora, who is the CEO of Io-Tahoe. Ajay, always good to see you again. Thanks for coming on. >>Great to be back. David, pleasure. >>And he's joined by Fozzie Coons, who is a global principal architect for the financial services vertical at Red Hat. He's got deep experience in that sector. Welcome, Fozzie. Good to see you. >>Thank you very much. Happy to be here. >>Fozzie, let's start with you. Look, there are a lot of views on cloud and what it is. I wonder if you could explain to us how you think about what a hybrid cloud is and how it works. >>Sure, yes. So the hybrid cloud is an IT architecture that incorporates some degree of workload portability, orchestration and management across multiple clouds. Those clouds could be private cloud or public cloud or even your own data centers. And how does it all work? It's all about secure interconnectivity and on-demand allocation of resources across clouds, and separate clouds can become hybrid when they're seamlessly interconnected. And it is that interconnectivity that allows the workloads to be moved and management to be unified. How well you have these interconnections has a direct impact on how well your hybrid cloud will work. >>Okay, so Fozzie, staying with you for a minute. In the early days of cloud, the term private cloud was thrown around a lot, but it often just meant virtualization of an on-prem system and a network connection to the public cloud. Let's bring it forward. What, in your view, does a modern hybrid cloud architecture look like? >>Sure. So for modern hybrid clouds, we see that teams and organizations need to focus on the portability of applications across clouds. That's very important, right? And when organizations build applications, they need to build and deploy these applications as small collections of independent, loosely coupled services, and then have those things run on the same operating system, which means, in other words, running it on Linux everywhere and building cloud native applications and being able to manage and orchestrate these applications with platforms like Kubernetes or Red Hat OpenShift, for example. >>Okay, so that's definitely different from building a monolithic application that's fossilized and doesn't move. So what are the challenges for customers, you know, to get to that modern cloud as you've just described it? Is it skill sets? Is it the ability to leverage things like containers? What's your view there? >>So, I mean, from what we've seen around the industry, especially around financial services, where I spend most of my time, the first thing that we see is management, right? Because you have all these clouds and all these applications, you have a massive array of connections, of interconnections. You also have a massive array of integrations, portability and resource allocations as well, and then orchestrating all those different moving pieces. Things like storage, networks and things like those are really difficult to manage, right? That's one. So management is the first challenge. 
The second one is workload placement. Where do you place these cloud native applications? Do you keep them on-site, on-prem? And what do you put in the cloud? That is the other challenge, a major one. The third one is security. Security now becomes the key challenge and concern for most customers, and we can talk about how to address that. >>Yeah, we're definitely gonna dig into that. Let's bring Ajay into the conversation. Ajay, you know, you and I have talked about this in the past. One of the big problems that virtually every company faces is data fragmentation. Talk a little bit about how Io-Tahoe unifies data across both traditional and legacy systems, and how it connects to these modern IT environments. >>Yeah, sure, Dave. I mean, Fozzie just nailed it. It used to be about the volume of data and the different types of data. But as applications become more connected and interconnected, the location of that data really matters, how we serve that data up to those apps. So working with Red Hat in our partnership, being able to inject our data discovery machine learning into these multiple different locations, would it be in AWS, on IBM Cloud, on GCP or on-prem, and being able to automate that discovery and pull together that single view of where all my data is, then allows the CIO to manage costs. They can do things like, one, keep the data where it is, on premise or in my Oracle Cloud or in my IBM Cloud, and connect the application that needs to feed off that data. And the way in which you do that is machine learning that learns over time as it recognizes different types of data, applies policies to classify that data, and brings it all together with automation. >>Right, and that's one of the big themes, and we've talked about this on earlier episodes: it's really simplification, really abstracting a lot of that heavy lifting away so we can focus on things, Ajay, as you just mentioned. Now Fozzie, one of the big challenges that of course we all talk about is governance across these disparate data sets. I'm curious as to your thoughts. How does Red Hat really think about helping customers adhere to corporate edicts and compliance regulations, which, of course, are particularly acute within financial services? >>Oh, yeah, yes. So for banks and the payment providers, like you've just mentioned, their insurers and many other financial services firms, you know, they have to adhere to standards such as PCI DSS. In Europe, you've got GDPR, which requires tracking, reporting and documentation for them to remain in compliance. And the way we recommend our customers to address these challenges is by having an automation strategy, right? And that type of strategy can help you to improve the security and compliance of the organization and reduce the risk to the business, right? And we help organizations build security and compliance from the start with our consulting services and residencies. We also offer courses that help customers to understand how to address some of these challenges. And we also help organizations build security into their applications with our open source 
middleware offerings, and even using a platform like OpenShift, because it allows you to run legacy applications and also containerized applications in a unified platform, right? And also, that provides you with the automation and the tooling that you need to continuously monitor, manage and automate the systems for security and compliance purposes. >>Hey, Ajay, anything, any color you could add to this conversation? >>Yeah, I'm pleased Fozzie brought up OpenShift. I mean, we're using OpenShift to be able to take that security, that application of controls, down to the data level. It's all about context. So, understanding what data is there, being able to assess it to say who should have access to it, which application permissions should be applied to it. That's a great combination of Red Hat and Io-Tahoe. >>But what about multicloud? Doesn't that complicate the situation even further? Maybe you could talk about some of the best practices to apply automation across not only hybrid cloud, but multicloud as well. >>Yeah, sure. So the right automation solution, you know, can be the difference between cultivating an automated enterprise or automation chaos. And some of the recommendations we give our clients is to look for an automation platform that can offer, the first thing, complete support. So that means have an automation solution that promotes IT availability and reliability with your platform, so that you can provide enterprise-grade support, including security and testing, integration and clear roadmaps. The second thing is vendor interoperability, in that you are going to be integrating multiple clouds, so you're going to need a solution that can connect to multiple clouds simply, right? And with that comes the challenge of maintainability. So you're going to need to look into an automation solution that is easy to learn or has an easy learning curve. And then the fourth idea that we tell our customers is scalability. In the hybrid cloud space, scale is a big, big deal here, and you need to deploy an automation solution that can span across the whole enterprise in a consistent manner, right? And then also, that allows you finally to integrate the multiple data centers that you have. >>So Ajay, I mean, this is a complicated situation, for if a customer has to make sure things work on AWS or Azure or Google, they're gonna spend all their time doing that, huh? What can you add to really simplify that multicloud and hybrid cloud equation? >>Yeah, I can give a few customer examples here. One being a manufacturer that we've worked with to drive that simplification, and the real bonus for them has been a reduction in cost. We worked with them late last year to bring their cost base down by $10 million in 2021 so they could hit that reduced budget. And what we brought to that was the ability to deploy, using OpenShift, templates into their different environments, whether that is on-premise or, as you mentioned, AWS. They had GCP as well for their marketing team, and across those different platforms, being able to use a template, use pre-built scripts to get up and running and catalog and discover that data within minutes. It takes away the legacy of having teams of people having to jump on workshop calls, and I know we're all on a lot of Teams 
and Zoom calls in these current times; there just simply aren't enough hours in the day to manually perform all of this. So yeah, working with Red Hat, applying machine learning into those templates, those little recipes, means we can put that automation to work regardless of which location the data is in, and that allows us to pull that unified view together. Right? >>Thank you, Fozzie. I wanna come back to you. So in the early days of cloud, you were in the Big Apple, you know financial services really well. Cloud was like an evil word within financial services, and obviously that's changed. It's evolved. We talked about the pandemic, has even accelerated that. And when you really dug into it, when you talked to customers about their experiences with security in the cloud, it was not that it wasn't good. It was great, whatever. But it was different. And there's always this issue of skills, lack of skills, and multiple tools suck up teams, they're really overburdened. But the cloud requires new thinking. You've got the shared responsibility model, you've obviously got specific corporate requirements and compliance. So this is even more complicated when you introduce multiple clouds. So what are the differences that you can share from your experience, running on sort of either on-prem or on a mono cloud, versus across clouds? What do you suggest there? >>Yeah, you know, because of these complexities that you have explained here, misconfigurations and inadequate change control are the top security threats. So human error is what we want to avoid, because, you know, as your clouds grow with complexity and you put humans in the mix, then the rate of errors is going to increase, and that is going to expose you to security threats. So this is where automation comes in, because automation will streamline and increase the consistency of your infrastructure management, also application development and even security operations, to improve your protection, compliance and change control. So you want to consistently configure resources according to pre-approved policies, and you want to proactively maintain them in a repeatable fashion over the whole life cycle. And then you also want to rapidly identify systems that require patches and reconfiguration, and automate that process of patching and reconfiguring so that you don't have humans doing this type of thing, right? And you want to be able to easily apply patches and change system settings according to pre-defined, like I explained before, pre-approved policies, and you also want ease of auditing and troubleshooting, right? And from a Red Hat perspective, we provide tools that enable you to do this. We have, for example, a tool called Ansible that enables you to automate data center operations and security and also deployment of applications, and also OpenShift itself, you know, automates most of these things and abstracts the human beings from putting their fingers on and potentially introducing errors. Right, now, looking into the new world of multiple clouds and so forth, the differences that we're seeing here between running a single cloud or on-prem are three main areas, which are control, security and compliance. Right, control here means, if you're on-premise or you have one cloud, you know, in most cases you have control over your data and your applications, especially if you're on-prem. 
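As a simplified illustration of the patching automation described above, and not Red Hat's or any customer's actual playbooks, the following sketch drives an Ansible patch run from Python. It assumes the ansible-playbook CLI is installed, the hosts in a hypothetical inventory.ini are reachable over SSH, and the managed systems use the yum package manager.

```python
# Minimal sketch: kick off a repeatable OS patching run across an inventory with Ansible.
# Assumptions: ansible-playbook is on PATH; hosts in inventory.ini are reachable over SSH.
import subprocess
import tempfile

PLAYBOOK = """
- hosts: all
  become: true
  tasks:
    - name: Apply all available package updates
      ansible.builtin.yum:
        name: "*"
        state: latest
"""

with tempfile.NamedTemporaryFile("w", suffix=".yml", delete=False) as handle:
    handle.write(PLAYBOOK)
    playbook_path = handle.name

# The same command can be wired into a scheduler or pipeline so the run is
# consistent and auditable rather than performed by hand.
result = subprocess.run(
    ["ansible-playbook", "-i", "inventory.ini", playbook_path],
    capture_output=True,
    text=True,
)
print(result.stdout)
if result.returncode != 0:
    raise SystemExit(result.stderr)
```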
However, if you're in the public cloud, there is a difference there. The ownership, it is still yours, but your resources are running on somebody else's, the public cloud's, you know, AWS and so forth, infrastructure. So people that are going to do this, especially banks and governments, need to be aware of the regulatory constraints of running those applications in the public cloud. And we also help customers rationalize some of these choices. And also on security, you will see that if you're running on premises or in a single cloud, you have more control, especially if you're on-prem. You can control the sensitive information that you have. However, in the cloud, that's a different situation, especially for personal information of employees and things like that. You need to be really careful with that. And also again, we help you rationalize some of those choices. And then the last one is compliance, as well. You see that if you're running on-prem or a single cloud, regulations come into play again, right? And if you're running on-prem, you have control over that. You can document everything, you have access to everything that you need. But if you're gonna go to the public cloud, again, you need to think about that. We have automation, and we have standards that can help you address some of these challenges for security and compliance. >>So those are really strong insights, Fozzie. I mean, first of all, Ansible has a lot of market momentum, Red Hat's done a really good job with that acquisition, and your point about repeatability is critical because you can't scale otherwise. And then that idea you're putting forth about control, security and compliance, it's so true. I call it the shared responsibility model, and there was a lot of misunderstanding in the early days of cloud. I mean, yeah, maybe AWS is gonna physically secure, you know, S3, but not what's in the bucket, and we saw so many misconfigurations early on. And so it's key to have partners that really understand this stuff and can share the experiences of other clients. So this all sounds great. Ajay, you're sharp, you know, financial background. What about the economics? You know, our survey data shows that security is at the top of the spending priority list, but budgets are stretched thin, especially when you think about the work-from-home pivot and all the holes that they had to fill there, whether it was laptops, you know, new security models, etcetera. So how do organizations pay for this? What does the business case look like, in terms of maybe reducing infrastructure costs so I can pay it forward, or is there a risk reduction angle? What can you share there? >>Yeah. I mean, the perspective I'd like to give here is, being multicloud shouldn't mean multiple copies of an application or data. When I think back about 20 years, a lot of the work in financial services I was looking at was managing copies of data that were feeding different pipelines, different applications. Now, a lot of the work that we're doing at Io-Tahoe is reducing the number of copies of that data, so that if I've got a product lifecycle management set of data, if I'm a manufacturer, I'm just gonna keep that in one location. But across my different clouds, I'm gonna have best-of-breed applications, developed in-house, third parties, in collaboration with my supply chain, connecting securely to that single version of the truth. 
What I'm not going to do is to copy that data. So ah, lot of what we're seeing now is that interconnectivity using applications built on kubernetes. Um, that decoupled from the data source that allows us to reduce those copies of data within that you're gaining from the security capability and resilience because you're not leaving yourself open to those multiple copies of data on with that. Couldn't come. Cost, cost of storage on duh cost of compute. So what we're seeing is using multi cloud to leverage the best of what each cloud platform has to offer That goes all the way to Snowflake and Hiroko on Cloud manage databases, too. >>Well, and the people cost to a swell when you think about yes, the copy creep. But then you know when something goes wrong, a human has to come in and figured out um, you brought up snowflake, get this vision of the data cloud, which is, you know, data data. I think this we're gonna be rethinking a j, uh, data architectures in the coming decade where data stays where it belongs. It's distributed, and you're providing access. Like you said, you're separating the data from the applications applications as we talked about with Fozzie. Much more portable. So it Z really the last 10 years will be different than the next 10 years. A. >>J Definitely. I think the people cast election is used. Gone are the days where you needed thio have a dozen people governing managing black policies to data. Ah, lot of that repetitive work. Those tests can be in power automated. We've seen examples in insurance were reduced teams of 15 people working in the the back office China apply security controls compliance down to just a couple of people who are looking at the exceptions that don't fit. And that's really important because maybe two years ago the emphasis was on regulatory compliance of data with policies such as GDP are in CCP a last year, very much the economic effect of reduce headcounts on on enterprises of running lean looking to reduce that cost. This year, we can see that already some of the more proactive cos they're looking at initiatives such as net zero emissions how they use data toe under understand how cape how they can become more have a better social impact. Um, and using data to drive that, and that's across all of their operations and supply chain. So those regulatory compliance issues that may have been external we see similar patterns emerging for internal initiatives that benefiting the environment, social impact and and, of course, course, >>great perspectives. Yeah, Jeff Hammer, Bucker once famously said, The best minds of my generation are trying to get people to click on ads and a J. Those examples that you just gave of, you know, social good and moving. Uh, things forward are really critical. And I think that's where Data is gonna have the biggest societal impact. Okay, guys, great conversation. Thanks so much for coming on the program. Really appreciate your time. Keep it right there from, or insight and conversation around, creating a resilient digital business model. You're watching the >>Cube digital resilience, automated compliance, privacy and security for your multi cloud. Congratulations. You're on the journey. You have successfully transformed your organization by moving to a cloud based platform to ensure business continuity in these challenging times. 
But as you scale your digital activities, there is an inevitable influx of users that outpaces traditional methods of cybersecurity, exposing your data toe underlying threats on making your company susceptible toe ever greater risk to become digitally resilient. Have you applied controls your data continuously throughout the data Lifecycle? What are you doing to keep your customer on supply data private and secure? I owe Tahoe's automated, sensitive data. Discovery is pre programmed with over 300 existing policies that meet government mandated risk and compliance standards. Thes automate the process of applying policies and controls to your data. Our algorithm driven recommendation engine alerts you to risk exposure at the data level and suggests the appropriate next steps to remain compliant on ensure sensitive data is secure. Unsure about where your organization stands In terms of digital resilience, Sign up for a minimal cost commitment. Free data Health check. Let us run our sensitive data discovery on key unmapped data silos and sources to give you a clear understanding of what's in your environment. Book time within Iot. Tahoe Engineer Now >>Okay, let's now get into the next segment where we'll explore data automation. But from the angle of digital resilience within and as a service consumption model, we're now joined by Yusuf Khan, who heads data services for Iot, Tahoe and Shirish County up in. Who's the vice president and head of U. S. Sales at happiest Minds? Gents, welcome to the program. Great to have you in the Cube. >>Thank you, David. >>Trust you guys talk about happiest minds. This notion of born digital, foreign agile. I like that. But talk about your mission at the company. >>Sure. >>A former in 2011 Happiest Mind is a born digital born a child company. The reason is that we are focused on customers. Our customer centric approach on delivering digitals and seamless solutions have helped us be in the race. Along with the Tier one providers, Our mission, happiest people, happiest customers is focused to enable customer happiness through people happiness. We have Bean ranked among the top 25 i t services company in the great places to work serving hour glass to ratings off 41 against the rating off. Five is among the job in the Indian nineties services company that >>shows the >>mission on the culture. What we have built on the values right sharing, mindful, integrity, learning and social on social responsibilities are the core values off our company on. That's where the entire culture of the company has been built. >>That's great. That sounds like a happy place to be. Now you said you had up data services for Iot Tahoe. We've talked in the past. Of course you're out of London. What >>do you what? Your >>day to day focus with customers and partners. What you focused >>on? Well, David, my team work daily with customers and partners to help them better understand their data, improve their data quality, their data governance on help them make that data more accessible in a self service kind of way. To the stakeholders within those businesses on dis is all a key part of digital resilience that will will come on to talk about but later. You're >>right, e mean, that self service theme is something that we're gonna we're gonna really accelerate this decade, Yussef and so. But I wonder before we get into that, maybe you could talk about the nature of the partnership with happiest minds, you know? Why do you guys choose toe work closely together? >>Very good question. 
Um, we see Io-Tahoe and Happiest Minds as a great mutual fit. As Shirish has said, Happiest Minds are a very agile organization, and I think that's one of the key things that attracts their customers, and Io-Tahoe is all about automation. We're using machine learning algorithms to make data discovery, data cataloging and understanding data much easier, and we're enabling customers and partners to do it much more quickly. So when you combine our emphasis on automation with the emphasis on agility that Happiest Minds have, that's a really nice combination, it works very well together, very powerful. I think the other things that are key are that both businesses, as Shirish has said, are really innovative, digital-native type companies, very focused on newer technologies, the cloud, etcetera. And then finally, I think they're both challenger brands, and Happiest Minds have a really positive, fresh, ethical approach to people and customers that really resonates with us at Io-Tahoe too. >>Great, thank you for that. So Shirish, let's get into the whole notion of digital resilience. I wanna sort of set it up with what I see, and maybe you can comment. Prior to the pandemic, a lot of customers kind of equated disaster recovery with their business continuance or business resilience strategy, and that's changed almost overnight. How have you seen your clients respond to that, what I sometimes call the forced march to become a digital business? And maybe you could talk about some of the challenges that they faced along the way. >>Absolutely. So, especially during these pandemic times, as you say, Dave, customers have been having tough times managing their business. So Happiest Minds, being a digitally resilient company, we were able to react much faster than others in the industry, apart from the other services companies. So one of the key things is organizations trying to adopt digital technologies. There has been a lot of data which has had to be managed by these customers, and there have been a lot of threats and risks which have had to be managed by the CIOs and CISOs. So Happiest Minds' digital resilience technology, where we bring in data compliance as a service, was able to manage the resilience much ahead of other competitors in the market. We were able to bring in our business continuity processes from day one, where we were able to deliver our services without any interruption to the services that we delivered to our customers. So that is where digital resilience, with business continuity processes enabled, was very helpful for us to enable our customers to continue their business without any interruptions during the pandemic. >>So, I mean, some of the challenges that customers tell me about: they obviously had to figure out how to get laptops to remote workers, that whole remote work-from-home pivot, and figure out how to secure the endpoints. And, you know, looking back, those were kind of table stakes. But it sounds like you've got a digital business, which means a data business, putting data at the core, I like to say. So I wonder if you could talk a little bit more about maybe the philosophy you have toward digital resilience and the specific approach you take with clients? >>Absolutely. You see, in any organization, data becomes the key, and for that, the first step is to identify the critical data. Right. So this is a six-step process that we follow at Happiest Minds. 
First of all, we take stock of the current state. Though the customers think that they have clear visibility of their data, we do an assessment from an external point of view to see how critical their data is. Then we help the customers to strategize, right. The most important thing is to identify the most critical assets, data being the most critical asset for any organization, so identification of the data is key for the customers. Then we help in building a viable operating model to ensure these identified critical assets are secured and monitored daily, so that they are consumed well as well as protected from external threats. Then, as a fourth step, we try to bring in awareness to the people; we train them at all levels in the organization. That is the P for people, to understand the importance of the digital assets. And then as a fifth step, we work on a backup plan, in terms of bringing in a very comprehensive and holistic testing approach across people, process as well as technology, to see how the organization can withstand a crisis. And finally, we do continuous governance of this data, which is key, right. It is not just a one-step process. We set up the environment, we do the initial analysis and set up the strategy, and we continuously govern this data to ensure that it is not only managed well and secure, but also meets the compliance requirements of the organization, right. That is where we help organizations to secure their data and meet the regulations, as per the privacy laws. So this is a constant process. It's not a one-time effort. It is a constant process, because every organization goes through their digital journey, and they have to face all of these as part of the evolving environment on that digital journey. And that's where they should be kept ready in terms of recovering, rebounding and moving forward if things go wrong. >>So let's stick on that for a minute, and then I wanna bring Yusuf into the conversation. So you mentioned compliance and governance. When you're a digital business, you're, as you say, a data business, so that brings up issues: data sovereignty, governance, compliance, things like the right to be forgotten, data privacy, so many things. These were often kind of afterthoughts for businesses, bolted on, if you will. I know a lot of executives are very much concerned that these are built in, and it's not a one-shot deal. So do you have solutions around compliance and governance? Can you deliver that as a service? Maybe you could talk about some of the specifics there. >>So, we have offered multiple services to our customers on digital resilience, and one of the key services is data compliance as a service. Here we help organizations to map the key data against the data compliance requirements. Some of the features include the continuous discovery of data, right, because organizations keep adding data as they become more digital, and helping in understanding the actual data in terms of the residency of data. It could be heterogeneous data sources: it could be on databases, or it could even be on data lakes, or it could be across the cloud environments. So identifying the data across the various heterogeneous environments is a very key feature of our solution. 
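To give a concrete, if deliberately naive, picture of the continuous data discovery step described above, the sketch below scans a tabular extract for a couple of common PII patterns and tags the columns it finds. It is an illustrative toy, not Io-Tahoe's or Happiest Minds' engine; the file name, patterns and tags are assumptions.

```python
# Minimal sketch: naive sensitive-data discovery over a CSV extract.
# Real platforms add ML-based classification, many more sources and policy mapping.
import csv
import re
from collections import defaultdict

PATTERNS = {
    "email": re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"),
    "us_ssn": re.compile(r"^\d{3}-\d{2}-\d{4}$"),      # hypothetical format
    "card_number": re.compile(r"^\d{13,16}$"),
}

def discover(path: str) -> dict:
    """Return {column_name: set(tags)} for columns whose values match a pattern."""
    findings = defaultdict(set)
    with open(path, newline="") as handle:
        for row in csv.DictReader(handle):
            for column, value in row.items():
                for tag, pattern in PATTERNS.items():
                    if value and pattern.match(value.strip()):
                        findings[column].add(tag)
    return dict(findings)

if __name__ == "__main__":
    # customers.csv is a hypothetical extract from one of many heterogeneous sources.
    for column, tags in discover("customers.csv").items():
        print(f"{column}: flagged as {sorted(tags)}")
```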
Once we identify and classify this sensitive data, the data privacy regulations and the prevailing laws have to be mapped based on the business rules. So we define those rules and help map that data, so that organizations know how critical their digital assets are. Then we work on continuous monitoring of the data for anomalies, because that's one of the key features of the solution, which needs to be implemented on a day-to-day operational basis. So we're helping monitor those anomalies of data for data quality management on an ongoing basis. And finally, we also bring in automated data governance, where we can manage the sensitive data policies and their data relationships in terms of mapping, manage the business rules, and drive recommendations to suggest appropriate actions for the customers to take on those specific data sets. >>Great, thank you. Yusuf, thanks for being patient. I want to bring Io-Tahoe into the discussion and understand where your customers and Happiest Minds can leverage your data automation capability that you and I have talked about in the past. It would be great if you had an example as well, but maybe you could pick it up from there. >>Sure. I mean, at a high level, as Shirish clearly articulated, Io-Tahoe delivers business agility. That's by accelerating the time to operationalize data, automating, putting in place controls, and actually helping put in place digital resilience. I mean, if we step back a little bit in time, traditional resilience in relation to data often meant manually making multiple copies of the same data. So you'd have a DBA, they would copy the data to various different places, and then business users would access it in those functional silos. And of course, what happened was you ended up with lots of different copies of the same data around the enterprise: very inefficient, and of course, ultimately, it increases your risk profile, your risk of a data breach. It's very hard to know where everything is. And I liked that expression you used, David, the idea of the forced march to digital. So with enterprises that are going on this forced march, what they're finding is they don't have a single version of the truth, and almost nobody has an accurate view of where their critical data is. Then you have containers, and with containers, that enables a big leap forward, so you can break applications down into microservices. Updates are available via APIs. So you don't have the same need to build and to manage multiple copies of the data. So you have an opportunity to just have a single version of the truth. Then your challenge is, how do you deal with these large legacy data estates that Shirish has been referring to, where you have to consolidate, and that's really where Io-Tahoe comes in. We massively accelerate that process of putting a single version of the truth into place. So by automatically discovering the data, discovering what's duplicate, what's redundant, that means you can consolidate it down to a single trusted version much more quickly. We've seen many customers that have tried to do this manually, and it's literally taken years using manual methods to cover even a small percentage of their IT estates. With Io-Tahoe, you can do it really very quickly, and you can have tangible results within weeks and months. And then, you can apply controls to the data based on context. So who's the user? What's the content? What's the use case? 
Things like data quality validations or access permissions on. Then, once you've done there. Your applications and your enterprise are much more secure, much more resilient. As a result, you've got to do these things whilst retaining agility, though. So coming full circle. This is where the partnership with happiest minds really comes in as well. You've got to be agile. You've gotta have controls. Um, on you've got a drug toward the business outcomes. Uh, and it's doing those three things together that really deliver for the customer. >>Thank you. Use f. I mean you and I. In previous episodes, we've looked in detail at the business case. You were just talking about the manual labor involved. We know that you can't scale, but also there's that compression of time. Thio get to the next step in terms of ultimately getting to the outcome. And we talked to a number of customers in the Cube, and the conclusion is, it's really consistent that if you could accelerate the time to value, that's the key driver reducing complexity, automating and getting to insights faster. That's where you see telephone numbers in terms of business impact. So my question is, where should customers start? I mean, how can they take advantage of some of these opportunities that we've discussed today. >>Well, we've tried to make that easy for customers. So with our Tahoe and happiest minds, you can very quickly do what we call a data health check. Um, this is a is a 2 to 3 week process, uh, to really quickly start to understand on deliver value from your data. Um, so, iota, who deploys into the customer environment? Data doesn't go anywhere. Um, we would look at a few data sources on a sample of data. Onda. We can very rapidly demonstrate how they discovery those catalog e on understanding Jupiter data and redundant data can be done. Um, using machine learning, um, on how those problems can be solved. Um, And so what we tend to find is that we can very quickly, as I say in the matter of a few weeks, show a customer how they could get toe, um, or Brazilian outcome on then how they can scale that up, take it into production on, then really understand their data state? Better on build. Um, Brasiliense into the enterprise. >>Excellent. There you have it. We'll leave it right there. Guys, great conversation. Thanks so much for coming on the program. Best of luck to you and the partnership Be well, >>Thank you, David Suresh. Thank you. Thank >>you for watching everybody, This is Dave Volonte for the Cuban are ongoing Siris on data automation without >>Tahoe, digital resilience, automated compliance, privacy and security for your multi cloud. Congratulations. You're on the journey. You have successfully transformed your organization by moving to a cloud based platform to ensure business continuity in these challenging times. But as you scale your digital activities, there is an inevitable influx of users that outpaces traditional methods of cybersecurity, exposing your data toe underlying threats on making your company susceptible toe ever greater risk to become digitally resilient. Have you applied controls your data continuously throughout the data lifecycle? What are you doing to keep your customer on supply data private and secure? I owe Tahoe's automated sensitive data. Discovery is pre programmed with over 300 existing policies that meet government mandated risk and compliance standards. Thes automate the process of applying policies and controls to your data. 
Our algorithm driven recommendation engine alerts you to risk exposure at the data level and suggests the appropriate next steps to remain compliant on ensure sensitive data is secure. Unsure about where your organization stands in terms of digital resilience. Sign up for our minimal cost commitment. Free data health check. Let us run our sensitive data discovery on key unmapped data silos and sources to give you a clear understanding of what's in your environment. Book time within Iot. Tahoe Engineer. Now. >>Okay, now we're >>gonna go into the demo. We want to get a better understanding of how you can leverage open shift. And I owe Tahoe to facilitate faster application deployment. Let me pass the mic to Sabetta. Take it away. >>Uh, thanks, Dave. Happy to be here again, Guys, uh, they've mentioned names to be the Davis. I'm the enterprise account executive here. Toyota ho eso Today we just wanted to give you guys a general overview of how we're using open shift. Yeah. Hey, I'm Noah Iota host data operations engineer, working with open ship. And I've been learning the Internets of open shift for, like, the past few months, and I'm here to share. What a plan. Okay, so So before we begin, I'm sure everybody wants to know. Noel, what are the benefits of using open shift. Well, there's five that I can think of a faster time, the operation simplicity, automation control and digital resilience. Okay, so that that's really interesting, because there's an exact same benefits that we had a Tahoe delivered to our customers. But let's start with faster time the operation by running iota. Who on open shift? Is it faster than, let's say, using kubernetes and other platforms >>are >>objective iota. Who is to be accessible across multiple cloud platforms, right? And so by hosting our application and containers were able to achieve this. So to answer your question, it's faster to create and use your application images using container tools like kubernetes with open shift as compared to, like kubernetes with docker cry over container D. Okay, so we got a bit technical there. Can you explain that in a bit more detail? Yeah, there's a bit of vocabulary involved, uh, so basically, containers are used in developing things like databases, Web servers or applications such as I have top. What's great about containers is that they split the workload so developers can select the libraries without breaking anything. And since Hammond's can update the host without interrupting the programmers. Uh, now, open shift works hand in hand with kubernetes to provide a way to build those containers for applications. Okay, got It s basically containers make life easier for developers and system happens. How does open shift differ from other platforms? Well, this kind of leads into the second benefit I want to talk about, which is simplicity. Basically, there's a lot of steps involved with when using kubernetes with docker. But open shift simplifies this with their source to image process that takes the source code and turns it into a container image. But that's not all. Open shift has a lot of automation and features that simplify working with containers, an important one being its Web console. Here. I've set up a light version of open ship called Code Ready Containers, and I was able to set up her application right from the Web console. And I was able to set up this entire thing in Windows, Mac and Lennox. So its environment agnostic in that sense. Okay, so I think I've seen the top left that this is a developers view. 
What would a systems admin view look like? It's a good question. So here's the administrator view and this kind of ties into the benefit of control. Um, this view gives insights into each one of the applications and containers that are running, and you could make changes without affecting deployment. Andi can also, within this view, set up each layer of security, and there's multiple that you can prop up. But I haven't fully messed around with it because with my luck, I'd probably locked myself out. So that seems pretty secure. Is there a single point security such as you use a log in? Or are there multiple layers of security? Yeah, there are multiple layers of security. There's your user login security groups and general role based access controls. Um, but there's also a ton of layers of security surrounding like the containers themselves. But for the sake of time, I won't get too far into it. Okay, eso you mentioned simplicity In time. The operation is being two of the benefits. You also briefly mention automation. And as you know, automation is the backbone of our platform here, Toyota Ho. So that's certainly grabbed my attention. Can you go a bit more in depth in terms of automation? Open shift provides extensive automation that speeds up that time the operation. Right. So the latest versions of open should come with a built in cryo container engine, which basically means that you get to skip that container engine insulation step and you don't have to, like, log into each individual container host and configure networking, configure registry servers, storage, etcetera. So I'd say, uh, it automates the more boring kind of tedious process is Okay, so I see the iota ho template there. What does it allow me to do? Um, in terms of automation in application development. So we've created an open shift template which contains our application. This allows developers thio instantly, like set up our product within that template. So, Noah Last question. Speaking of vocabulary, you mentioned earlier digital resilience of the term we're hearing, especially in the banking and finance world. Um, it seems from what you described, industries like banking and finance would be more resilient using open shift, Correct. Yeah, In terms of digital resilience, open shift will give you better control over the consumption of resource is each container is using. In addition, the benefit of containers is that, like I mentioned earlier since Hammond's can troubleshoot servers about bringing down the application and if the application does go down is easy to bring it back up using templates and, like the other automation features that open ship provides. Okay, so thanks so much. Know us? So any final thoughts you want to share? Yeah. I just want to give a quick recap with, like, the five benefits that you gained by using open shift. Uh, the five are timeto operation automation, control, security and simplicity. You could deploy applications faster. You could simplify the workload you could automate. A lot of the otherwise tedious processes can maintain full control over your workflow. And you could assert digital resilience within your environment. Guys, >>Thanks for that. Appreciate the demo. Um, I wonder you guys have been talking about the combination of a Iot Tahoe and red hat. Can you tie that in subito Digital resilience >>Specifically? 
Yeah, sure, Dave. So when we speak to the benefits of security controls in terms of digital resilience, at Io-Tahoe we automate detection and apply controls at the data level, so this provides for more enhanced security. >>Okay. But if you were trying to do all these things manually, I mean, what does that do? How much time can I compress? What's the time to value? >>So with our latest versions of Io-Tahoe, we're taking advantage of the faster deployment time associated with containerization and Kubernetes. This kind of speeds up the time it takes for customers to start using our software, as they're able to quickly spin up Io-Tahoe in their own on-premise environment or otherwise in their own cloud environment, including AWS, Azure or Google GCP and IBM Cloud. Our quick-start templates allow the flexibility to deploy into multicloud environments, all just using a few clicks. Okay, so now to just quickly add: what we've done at Io-Tahoe here is we've really moved our customers away from the whole idea of needing a team of engineers to apply controls to data, as compared to other manually driven workflows. So with templates, automation, pre-built policies and data controls, one person can be fully operational within a few hours and achieve results straight out of the box on any cloud. >>Yeah, we've been talking about this theme of abstracting the complexity. That's really what we're seeing as a major trend in this coming decade. Okay, great. Thanks, Sabita. Noah, how could people get more information, or if they have any follow-up questions, where should they go? >>Yeah, sure, Dave. I mean, if you guys are interested in learning more, you know, reach out to us at info at io-tahoe dot com to speak with one of our sales engineers. We love to hear from you, so book a meeting as soon as you can. >>All right. Thanks, guys. Keep it right there for more Cube content.
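The episode keeps coming back to templates as the way one person can stand up the same application consistently across environments. As a generic illustration of that idea, not the actual Io-Tahoe quick-start template, the sketch below processes an OpenShift template with per-environment parameters and applies the result, assuming the oc CLI is installed and logged in, and that the template and parameter names exist as written.

```python
# Minimal sketch: render one OpenShift template with environment-specific parameters
# and apply it, so the same recipe works on-prem or in any cloud-hosted cluster.
# Assumptions: `oc` is installed and authenticated; template/parameter names are hypothetical.
import subprocess

ENVIRONMENTS = {
    "on-prem": {"REPLICAS": "2", "STORAGE_CLASS": "local-block"},
    "aws": {"REPLICAS": "3", "STORAGE_CLASS": "gp2"},
}

def deploy(env: str, template_path: str = "app-template.yaml") -> None:
    params = ENVIRONMENTS[env]
    cmd = ["oc", "process", "-f", template_path]
    for key, value in params.items():
        cmd += ["-p", f"{key}={value}"]
    rendered = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
    # Feed the rendered manifests straight into `oc apply`.
    subprocess.run(["oc", "apply", "-f", "-"], input=rendered, check=True, text=True)
    print(f"deployed to {env}")

if __name__ == "__main__":
    deploy("aws")
```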
Noah Fields and Sabita Davis | Io-Tahoe Enterprise Digital Resilience on Hybrid & Multicloud
>> Narrator: From around the globe, it's theCUBE presenting enterprise digital resilience on hybrid and multicloud brought to you by Io-Tahoe. >> Okay, now we're going to go into the demo and we want to get a better understanding of how you can leverage OpenShift and Io-Tahoe to facilitate faster application deployment. Let me pass the mic to Sabita, take it away. >> Thanks, Dave. Happy to be here again. >> Guys as Dave mentioned my name's Sabita Davis. I'm the Enterprise Account Executive here at Io-Tahoe. So today we just wanted to give you guys a general overview of how we're using OpenShift. >> Yeah, hey, I'm Noah, Io-Tahoe's Data Operations Engineer working with OpenShift and I've been learning the ins and outs of OpenShift for like the past few months. And I'm here to share what I've learned. >> Okay so before we begin I'm sure everybody wants to know Noah. What are the benefits of using OpenShift? >> Well, there's five that I can think of, faster time to operations, simplicity, automation, control and digital resilience. >> Okay, so that's really interesting because those are the exact same benefits that we at Io-Tahoe deliver to our customers. But let's start with faster time to operation, by running Io-Tahoe on OpenShift is it faster than let's say using Kubernetes and other platforms? >> Well, our objective at Io-Tahoe is to be accessible across multiple cloud platforms, right? And so by hosting our application in containers we're able to achieve this. So to answer your question it's faster to create end user application images using container tools like Kubernetes with OpenShift as compared to like Kubernetes with Docker, Kryo >> or Containerd. >> Okay, so we got a bit technical there. Can you explain that in a bit more detail? >> Yeah, there's a bit of vocabulary involved. So basically containers are used in developing things like databases, web servers or applications such as Io-Tahoe. What's great about containers is that they split the workload. So developers can select the libraries without breaking anything. And CIS admins can update the host without interrupting the programmers. Now OpenShift works hand-in-hand with Kubernetes to provide a way to build those containers for applications. >> Okay, got it. So basically containers make life easier for developers and system admins. So how does OpenShift differ from other platforms? >> Well, this kind of leads into the second benefit I want to talk about which is simplicity. Basically there's a lot of steps involved with when using Kubernetes with Docker but OpenShift simplifies this with their source to image process that takes the source code and turns it into a container image but that's not all. OpenShift has a lot of automation and features that simplify working with containers an important one being its web console. So here I've set up a light version of OpenShift called CodeReady Containers. And I was able to set up for application right from the web console. And I was able to set up this entire thing in Windows, Mac and Linux. So it's environment agnostic in that sense. >> Okay, so I think I see in the top left that this is a developer's view. What would a systems admin view look like? >> That's a good question. So here's the administrator view and this kind of ties into the benefit of control. This view gives insights into each one of the applications and containers that are running and you can make changes without affecting deployment. 
And you can also within this view set up each layer of security and there's multiple that you can prop up but I haven't fully messed around with it because since with my luck, I'd probably lock myself out. >> Okay, so that seems pretty secure. Is there a single point security such as you user login or are there multiple layers of security? >> Yeah, there are multiple layers of security. There's your user login, security groups and general role based access controls but there's also a ton of layers of security surrounding like the containers themselves. But for the sake of time, I won't get too far into it. >> Okay, so you mentioned simplicity and time to operation as being two of the benefits. You also briefly mentioned automation and as you know automation is the backbone of our platform here at Io-Tahoe. So that certainly grabbed my attention. Can you go a bit more in depth in terms of automation? >> OpenShift provides extensive automation that speeds up that time to operation, right? So the latest versions of OpenShift come with a built-in cryo container engine which basically means that you get to skip that container engine installation step. And you don't have to like log into each individual container hosts and configure networking, configure registry servers, storage, et cetera. So I'd say it automates the more boring kind of tedious processes. >> Okay, so I see the Io-Tahoe template there. What does it allow me to do? >> In terms of automation in application development. So we've created an OpenShift template which contains our application. This allows developers to instantly like set up a product within that template or within that, yeah. >> Okay, so Noah, last question. Speaking of vocabulary, you mentioned earlier digital resilience is a term we're hearing especially in the banking and finance world. It seems from what you described industries like banking and finance would be more resilient using OpenShift, correct? >> Yeah, in terms of digital resilience, OpenShift will give you better control over the consumption of resources each container is using. In addition, the benefit of containers is that like I mentioned earlier sysadmins can troubleshoot the servers without bringing down the application. And if the application does go down it's easy to bring it back up using the templates and like the other automation features that OpenShift provides. >> Okay, so thanks so much Noah. So any final thoughts you want to share? >> Yeah, I just want to give a quick recap of like the five benefits that you gain by using OpenShift. The five are time to operation, automation, control, security and simplicity. You can deploy applications faster, you can simplify the workload, you can automate a lot of the otherwise tedious processes, and maintain full control over your workflow and you can assert digital resilience within your environment. >> So guys, thanks for that appreciate the demo. I wonder you guys have been talking about the combination of Io-Tahoe and Red Hat. Can you tie that in Sabita to digital resilience specifically? >> Yeah, sure Dave. So when we speak to the benefits of security controls in terms of digital resilience at Io-Tahoe we automated detection and apply controls at the data level. So this would provide for more enhanced security. >> Okay, but so if you were to try to do all these things manually I mean, what does that do? How much time can I compress? What's the time to value? 
>> So with our latest versions of Io-Tahoe we're taking advantage of faster deployment time associated with containerization and Kubernetes. So this kind of speeds up the time it takes for customers start using our softwares. They'd be able to quickly spin up Io-Tahoe in their own on-premise environment or otherwise in their own cloud environment like including AWS, Azure, Oracle GCP and IBM cloud. Our quick start templates allow flexibility to deploy into multicloud environments all just using like a few clicks. >> Okay, so now I'll just quickly add, so what we've done Io-Tahoe here is we've really moved our customers away from the whole idea of needing a team of engineers to apply controls to data as compared to other manually driven workflows. So with templates, automation, pre-built policies and data controls one person can be fully operational within a few hours and achieve results straight out of the box on any cloud. >> Yeah, we've been talking about this theme of abstracting the complexity that's really what we're seeing is a major trend in this coming decade. Okay, great. Thanks Sabita, Noah. How can people get more information or if they have any follow up questions, where should they go? >> Yeah, sure Dave I mean if you guys are interested in learning more reach out to us @infoatiotahoe.com to speak with one of our sales engineers. I mean, we'd love to hear from you. So book a meeting as soon as you can. >> All right, thanks guys. Keep it right there for more cube content with Io-Tahoe. (gentle music)
SUMMARY :
Dave passes the mic to Sabita Davis and Noah Fields of Io-Tahoe for a demo of how running Io-Tahoe on OpenShift speeds application deployment. Noah walks through five benefits of OpenShift: faster time to operation, simplicity, automation, control and digital resilience, covering how containers split the workload, the source-to-image build process, the developer and administrator web console views, multiple layers of security, and the built-in CRI-O container engine that removes tedious setup steps. An Io-Tahoe OpenShift template lets developers stand up the product instantly, and quick start templates support deployment across AWS, Azure, Oracle, GCP and IBM Cloud. Sabita ties automated detection and controls applied at the data level to digital resilience, noting that with templates, automation and pre-built policies one person can be fully operational within a few hours instead of needing a team of engineers. Viewers who want to learn more are invited to book a meeting with Io-Tahoe's sales engineers.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave | PERSON | 0.99+ |
Sabita | PERSON | 0.99+ |
Noah | PERSON | 0.99+ |
Sabita Davis | PERSON | 0.99+ |
Io-Tahoe | ORGANIZATION | 0.99+ |
five | QUANTITY | 0.99+ |
five benefits | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
OpenShift | TITLE | 0.99+ |
each layer | QUANTITY | 0.99+ |
Noah Fields | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
each container | QUANTITY | 0.99+ |
Kubernetes | TITLE | 0.99+ |
Kryo | TITLE | 0.98+ |
Red Hat | ORGANIZATION | 0.98+ |
single point | QUANTITY | 0.98+ |
today | DATE | 0.97+ |
Oracle | ORGANIZATION | 0.97+ |
second benefit | QUANTITY | 0.97+ |
each one | QUANTITY | 0.97+ |
one | QUANTITY | 0.96+ |
Windows | TITLE | 0.95+ |
Docker | TITLE | 0.94+ |
CodeReady | TITLE | 0.93+ |
Io-Tahoe | TITLE | 0.92+ |
Linux | TITLE | 0.91+ |
this coming decade | DATE | 0.91+ |
each individual | QUANTITY | 0.9+ |
theCUBE | ORGANIZATION | 0.89+ |
one person | QUANTITY | 0.82+ |
Azure | ORGANIZATION | 0.79+ |
OpenShift | ORGANIZATION | 0.79+ |
Containerd | TITLE | 0.73+ |
@infoatiotahoe.com | OTHER | 0.71+ |
layers | QUANTITY | 0.7+ |
past | DATE | 0.6+ |
Mac | TITLE | 0.52+ |
Sabita | ORGANIZATION | 0.52+ |
Tahoe | TITLE | 0.51+ |
GCP | TITLE | 0.5+ |
Io | TITLE | 0.5+ |
Andrew Hillier, Densify | AWS re:Invent 2020
>> Announcer: From around the globe, it's theCUBE, with digital coverage of AWS re:Invent 2020, sponsored by Intel, AWS and our community partners. >> Hey is Keith Townsend a CTO Advisor on the Twitter and we have yet another CUBE alum for this, AWS re:Invent 2020 virtual coverage. AWS re:Invent 2020 unlike any other, I think it's safe to say unlike any other virtual event, AWS, nearly 60, 70,000 people in person, every conference, there's hundreds of thousands of people tuning in to watch the coverage, and we're talking to builders. No exception to that is our friends at Densify, co founder and CTO of Densify Andrew Hillier, welcome back to the show. >> Thanks, Keith, it's great to be with you again. >> So we're recording this right before it gets cold in Toronto. I hope you're enjoying some of this, breaking the cold weather? >> Yeah, no, we're getting the same whether you are right now it's fantastic. We're ready for the worst, I think in the shorter days, but we'll get through it. >> So for those of you that haven't watched any of the past episodes of theCUBE in which Andrew has appeared. Andrew can you recap, Densify, what do you guys do? >> Well, we're analytics where you can think of us as very advanced cost analytics for cloud and containers. And when I say advanced, what I mean is, there's a number of different aspects of cost, there's understanding your bill, there's how to purchase. And we do those, but we also focus heavily on the resources that you're buying, and try to change that behavior. So it's basically, boils down to a business value of saving a ton of money, but by actually changing what you're using in the cloud, as well as providing visibility. So it's, again, a form of cost optimization, but combined with resource optimization. >> So cost of resource optimization, we understand this stuff on-premises, we understand network, compute, storage, heating, cooling, etc. All of that is abstracted from us in the public cloud, what are the drivers for cost in the public cloud? >> Well, I think you directly or indirectly pay for all of those things. The funny thing about it is that it happens in a very different way. And I think everybody's aware, of course, on-demand, and be able to get resources when you need them. But the flip side of on-demand, the not so good size, is it causes what we call micro-purchasing. So when you're buying stuff, if you go and turn on a, like an Amazon Cloud instance, you're paying for that instance, you're paying Rogers and storage as well. And, implicitly for some networking, a few dollars at a time. And that really kind of creates a new situation and scale because all of a sudden now what was a control purchase on-prem, becomes a bunch of possibly junior people buying things in a very granular way, that adds up to a huge amount of money. So the very thing that makes cloud powerful, the on-demand aspects, the elasticity, also causes a very different form of purchasing behavior, which I think is one of the causes of the cost problem. >> So we're about 10, 12 years into this cloud movement, where public cloud has really become mainstream inside of traditional enterprises. What are some of the common themes you've seen when it comes to good cloud management, the cost management, hygiene across organizations? >> Yeah, and hygiene is a great word for that. I think it's evolved, you're right it's been around this is nothing new. I mean, we've probably been going to cloud expos for over a decade now. 
But it's kind of coming waves as far as the business problem, I think the initial problem was more around, I don't understand this bill. 'Cause to your point, all those things that you purchase on-prem, you're still purchasing in some way, and a bunch of other services. And it all shows up in this really complicated bill. And so you're trying to figure out, well, who in my organization owes what. And so that was a very early driver years ago, we saw a lot of focus on slicing and dicing the bill, as we like to call it. And then that led to well, now I know where my costs are going, can I purchase a little more intelligently. And so that was the next step. And that was an interesting step because what the problem is, the people that care about cost can't always change what's being used, but they can buy discounts and coupons, and RIs and Savings Plans. So we saw that there was a, then start to be focused on, I'm going to come up with ways of buying it, where I can get a bit of a discount. And it's like having a phone bill where I can't stop people making long distance calls, but I can get on a better phone plan. And that, kind of the second wave, and what we're seeing is the next big wave now is that, okay, I've done that, now I actually should just change what I'm actually using because, there's a lot of inefficiency in there. I've got a handle on those other problems, I need to actually, hopefully make people not buy giant instances all the time, for example. >> So let's talk about that feedback loop, understand what's driving the cost, the people that's consuming that, those services and need to understand those costs. How does Densify breach that gap? >> Well, again, we have aspects of our product that lineup with basically all three of those business problems I mentioned. So there's a there's a cloud cost intelligence module that basically lets you look at the bill any different ways by different tags. Look for anomalies, we find that very important, you say, well, this something unusual happened in my bill. So there's aspect that just focuses on kind of accountability of what's happening in the cost world. And then now, one of the strengths of our product is that when we do our analytics, we look at a whole lot of things at once. So we look at, the instances and their utilization, and what the catalog is, and the RIs and Savings Plans, and everything all together. So if you want to purchase more intelligently, that can be very complicated. So we see a lot of customers that say, well, I do want to buy savings plans, but man, it's difficult to figure out exactly what to do. So we like to think of ourselves as kind of a, it's almost like a, an analytics engine that's got an equation with a lot of terms in. It's got a lot of detail of what we're taking into account when we tell you what you should be doing. And that helps you by more intelligently, it also helps you consume more intelligently, 'cause they're all interrelated. I don't want to change an instance I'm using if there's no RI on it, that would take you backwards. I don't want to buy RIs for instances that I shouldn't be using, that takes you backwards. So it's all interconnected. And we feel that looking at everything at once is the path to getting the right answer. And having the right answer is the path to having people actually make a change. 
>> So when I interviewed you a few years ago, we talked about very high level containers, and how containers is changing the way that we can consume Cloud Services, containers introduced this concept of oversubscription, and the public cloud. We couldn't really oversubscribe and for large instance, back then. But we can now with containers, how are containers in general complicating cloud costing? >> So it's interesting because they do allow overcommit but not in the same way that a virtual environment does. So in a virtual environment, if I say I need two CPUs for job X, I need two CPUs for job Y, I can put them both on a machine that has two CPUs, and there will be over committed. So over committed in a virtual environment, it is a very well established operation. It lets you get past people asking for too much effectively. Containers don't quite do that in the same way, when they refer to overcommit, they refer to the fact that you can ask for one CPU, but you can use up to four, and that difference is if you overcommit. But the fact that I'm asking for one CPU is actually a pretty big problem. So let me give an example. If I look into my laptop here, and I've got Outlook and Word and all these things on it, and I had to tell you how many millicores I had to give each one, or with Zoom, let's see I'm running Zoom. Now, well, I want Zoom to work well, I want to give it $4,000 millicores, I want to give it four CPUs, because it uses that when it needs it. But my PowerPoint, I also want to give 4000 or $2,000 millicores. So I add all these things up of what I need based on the actual more granular requirements. And it might add up to four laptops. But containers don't overcommit the same way, if I asked for those requests by using containers, I actually will use for laptops. So it's those request values that are the trick, if I say I need a CPU, I get a CPU, it's not the same as a virtual CPU would be in a virtual environment. So we see that as the cause of a lot of the problem and that people quite rationally say I need these resources for these containers. But because containers are much more granular, I'm asking for a lot of individual resource, that when you add them up, it's a ton of resources. So almost every container running, we see that they're very low utilization, because everybody, rightfully so asked for individual resources for each container, but they are the wrong resources, or in aggregate, it's not creating the behavior you wanted. So we find containers are a bit, people think they're going to magically cause problems to go away. But in fact, what happens is, when you start running a lot of them, you end up just with a ton of cost. And people are just starting to get to that point now. >> Yeah, I can see how that could easily be the case inside of a virtual environment. I can easily save my VM needs four CPUs, four VCPUs. And I can do that across 100 applications. And that really doesn't cost me a lot in the private data center, tools like VMware, DRS, and all of that kind of fix that for me on the back-end is magical. In the public cloud, if I ask for four CPUs, I get four CPUs, and I'm going to pay for four CPUs, even if I don't utilize it, there's no auto-balancing. So how does Densify help actually solve that problem? >> Well, so they, there's multiple aspects for that problem, ones of the thing was that people don't necessarily ask for the right thing in the first place, that's one of the biggest ones. 
So, I give the example of, I need to give Zoom 4,000 millicores, that's probably not true at all, if I analyze what it's doing, maybe for a second it uses that, but for the most of the time, it's not using nearly those resources. So the first step is to analyze the container behavior patterns, and say, well, those numbers should be different. And so for example, the one thing we do with that is, we say, well if a developer is using terraform templates to stand up containers, we can say, instead of putting the number 1000, in that, a thousand millercores, or 400 millicores in your template, just put a variable and that references our analytics, just let the analytics figure what that number should be. And so it's a very elegant solution to say, the machine learning will actually figure out what resources that container needs, 'cause humans are not very good at it, especially when there's 10s of thousands of containers. So that's kind of the, one of the big things is to optimize the container of requests. And then once you've done that the nodes that you're running on can be optimized, because now they start to look different. Maybe you don't have, you don't need as much memory or as much CPU. So it's all again, it's all interrelated, but it's a methodical step that's based on analytics. And, people, they're too busy to figure this out, that they can't figure it out for thousands of things. Again, if I asked you don't get your laptop, on your laptop, how many miillicores do you need to get PowerPoint? You don't know. But in containers, you have to know. So we're saying let the machine figure out. >> Yes kind of like when you're asked how many miillicores do you need to give Zoom answer's yes. >> Yeah exactly. >> (laughs) So at the end of the day, you need some way to quantify that. So you guys are doing the two things. One, you're quantifying, you're measuring how much this application typically take. And then when I go to provision it, we're using a tool like terraform. Though then instead of me answering the question, the answer is go ask Densify, and Densify will tell you, and then I'll optimize my environment. So I get both ends of that equation, if I'm kind of summarizing it correctly. >> Absolutely. And that last part is extremely important because, in a legacy environment, like in a virtual environment, I can call an API and change the size of VM, and it will stay that way. And so that's a viable automation strategy for those types of environments. In the cloud, or when you're using terraform, or in containers, they will go right back to what's in the terraform template, that's one of the powerful things about terraform is that it always matches what's in the code. So I can't go and change the cloud, it'll just go back to whatever is in the terraform template next time, it's provision. So we have to go upstream, you have to actually do it at the source, when you're provisioning applications, the actual resource specifications should be coming through at that point, you can't, you don't want to change them after the fact, you can update the terraform and redeploy with a new value, that that's the way to do automation in a container environment, it doesn't, you can't do it, like you did in a VMware environment, because it won't stick, it just gets undone the next time the DevOps pipeline triggers. So it's both a, it's a big opportunity for a kind of a whole new generation of automation, doing it, we call it CICDCO. 
It's, Continuous Integration, Continuous Delivery, Continuous Optimization. It's just part of the, of the fabric of the way you deploy Ops, and it's a much more elegant way to do it. >> So you hit two trigger words, or a few trigger terms, one, DevOps, two, I'm saying DevOps, CICD, and Continuous Operations. What is the typical profile of a Densify customer? >> Well, usually, they're a mix of a bunch of different technologies. So I don't want to make it sound like you have to be a DevOps shop to benefit from this, most of our customers have some DevOps teams, they also have a lot of legacy workloads, they have virtual environments, they have cloud environments. So don't necessarily have 100%, of all of these things. But usually, it's a mix of things where, there might be some newer born in the cloud as being deployed, and this whole CICDCO concept really makes sense for them, they might just have another few thousand cloud instances that they stood up, not as a part of a DevOps pipeline, but just to run apps or maybe even migrated from on-prem. So it's a pretty big mix, we see almost every company has a mix, unless you just started a company yesterday, you're going to have a mix of some EC2 services that are kind of standalone and static, maybe some skill groups running, or containers running skill groups. And there's a generally a mix of these things. So the things I'm describing do not require DevOps, the notion of optimizing the cloud instances, by changing the marching orders when they're provisioned not after the fact, that that applies to any anybody using the cloud. And our customers tend to be a mix, some again very new, new school processes and born in the cloud. And some more legacy applications that are running that look a little more like on-prem environment would, where they're not turning on and off dynamically, they're just running transactional workloads. >> So let's talk about the kind of industries, because you you hit on a key point, we kind of associate a certain type of company with born in the cloud, et cetera. What type of organizations or industries are we seeing Densify deployed in. >> So we don't really have a specific market vertical that we focus on, we have a wide variety. So we find we have a lot of customers in financial services, banks, insurance companies. And I think that's because those are very large, complicated environments, where analytics really pay dividends, if you have a lot of business services, that are doing different things, and different criticality levels. The things I'm describing are very important. But we also have logistics companies, software companies. So again, complexity plays a part, I think elasticity plays a part in the organization that wants to be able to make use of the cloud in a smart way where they're more elastic, and obviously drive costs down. So again, we have customers across all different types of industries, manufacturing, pharmaceutical. So it's a broad range, we have partners as well that use our like IBM, that use our product, and their customers. So there's no one type of company that we focus on, certainly. But we do see, again, environments that are complicated or mission critical, or that they really want to run in a more of elastic way, those tend to be very good customers for us. >> Well, CUBE alum Andrew Hillier, thank you for joining us on theCUBE coverage of AWS re:Invent 2020 virtual. Say goodbye to a couple hundred thousand of your closest friends. >> Okay, and thanks for having me. 
>> That concludes our interview with Densify. We really appreciate the folks that Densify, having us again to have this conversation around workload analytics and management. To find out more of, well or find out just more great CUBE coverage, visit us on the web SiliconANGLE TV. Talk to you next episode of theCUBE. (upbeat music)
SUMMARY :
Keith Townsend talks with Andrew Hillier, co-founder and CTO of Densify, about cost and resource optimization in the public cloud. Hillier explains how on-demand micro-purchasing changes buying behavior, and how customers have evolved from trying to understand the bill, to purchasing RIs and Savings Plans more intelligently, to actually changing what they consume. In containerized environments the request values themselves are the problem: granular per-container asks add up to large, mostly idle reservations, so Densify analyzes container behavior and feeds the right values back into provisioning, for example as variables in terraform templates, an approach Hillier calls CICDCO, Continuous Integration, Continuous Delivery, Continuous Optimization. Densify's customers span financial services, logistics, software, manufacturing and pharmaceutical companies, with complexity and the desire for elasticity as the common thread.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Keith | PERSON | 0.99+ |
$4,000 | QUANTITY | 0.99+ |
Keith Townsend | PERSON | 0.99+ |
Andrew Hillier | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Andrew | PERSON | 0.99+ |
$2,000 | QUANTITY | 0.99+ |
Densify | ORGANIZATION | 0.99+ |
Toronto | LOCATION | 0.99+ |
4000 | QUANTITY | 0.99+ |
100% | QUANTITY | 0.99+ |
PowerPoint | TITLE | 0.99+ |
100 applications | QUANTITY | 0.99+ |
Outlook | TITLE | 0.99+ |
Word | TITLE | 0.99+ |
One | QUANTITY | 0.99+ |
two things | QUANTITY | 0.99+ |
first step | QUANTITY | 0.99+ |
yesterday | DATE | 0.99+ |
each container | QUANTITY | 0.98+ |
both | QUANTITY | 0.98+ |
two CPUs | QUANTITY | 0.98+ |
one | QUANTITY | 0.98+ |
400 millicores | QUANTITY | 0.98+ |
each one | QUANTITY | 0.98+ |
three | QUANTITY | 0.97+ |
4,000 millicores | QUANTITY | 0.97+ |
hundreds of thousands of people | QUANTITY | 0.97+ |
two | QUANTITY | 0.97+ |
two trigger words | QUANTITY | 0.97+ |
first place | QUANTITY | 0.96+ |
Amazon | ORGANIZATION | 0.96+ |
big | EVENT | 0.96+ |
nearly 60, 70,000 people | QUANTITY | 0.95+ |
about 10, 12 years | QUANTITY | 0.93+ |
Intel | ORGANIZATION | 0.93+ |
EC2 | TITLE | 0.93+ |
CTO | PERSON | 0.92+ |
a thousand millercores | QUANTITY | 0.92+ |
terraform | TITLE | 0.92+ |
Rogers | ORGANIZATION | 0.9+ |
10s of thousands | QUANTITY | 0.88+ |
few years ago | DATE | 0.87+ |
one CPU | QUANTITY | 0.87+ |
second wave | EVENT | 0.85+ |
1000 | QUANTITY | 0.85+ |
four | QUANTITY | 0.84+ |
ORGANIZATION | 0.84+ | |
AWS re:Invent 2020 | EVENT | 0.84+ |
thousands of things | QUANTITY | 0.8+ |
CICDCO | ORGANIZATION | 0.8+ |
years | DATE | 0.79+ |
four CPUs | QUANTITY | 0.77+ |
theCUBE | ORGANIZATION | 0.77+ |
re:Invent 2020 | EVENT | 0.74+ |
over a decade | QUANTITY | 0.72+ |
hundred thousand | QUANTITY | 0.72+ |
a ton of money | QUANTITY | 0.71+ |
2020 | TITLE | 0.71+ |
both ends | QUANTITY | 0.7+ |
second | QUANTITY | 0.7+ |
VMware | TITLE | 0.68+ |
every conference | QUANTITY | 0.68+ |
wave | EVENT | 0.66+ |
TV | ORGANIZATION | 0.64+ |
Miguel Perez Colino & Rich Sharples, Red Hat | KubeCon + CloudNativeCon NA 2020
>>From around the globe. It's the cube with coverage of coop con and cloud native con North America, 2020 virtual brought to you by red hat, the cloud native computing foundation and ecosystem partners. >>Hey, welcome back, everybody Jeffrey here with the cube coming to you from our Palo Alto studios today with our ongoing coverage of coupon cloud native con North America, 2020. It's not really North America, it's virtual like everything else, but you know that the European show earlier in the summer, and this is the, this is the late fall show. So we're excited to welcome in our very next two guests. Uh, first joining us from Madrid. Spain is Miguel Perez, Kaleena. He is a principal product manager from red hat, Miguel. Great to see you. >>Good to see you happy to be in the cube. >>Yes. Great. Well welcome. And joining us from North Carolina is rich Sharples. He is a senior director, product management of red hat. Rich. Great to see you. >>Yeah, likewise, thanks for inviting me again. >>So we're talking about Java today and before we kind of jump into it, you know, in preparing for this rich, I saw an interview that you did, I think earlier about halfway through the year, uh, celebrating the 25th anniversary of Java and talking about the 25th anniversary Java. And before we kind of get into the future, I think it's worthwhile to take a look back at, you know, kind of where Java came from and how it's lasted for 25 years of such an important enterprise, you know, kind of application framework, because we always hear jokes about people looking for COBOL programmers or, you know, all these old language programmers, because they have some old system that's that needs a little assist. What's special about Java. Why are we 25 years into it? And you guys are still excited about Java yesterday, today and in the future. >>Yeah. And I should add that, um, in terms of languages, uh, twenty-five is actually still pretty young. Java's, uh, kind of middle aged, I guess. Um, you know, things like CC plus bus rrr you're 45, 50 years old Python, I think is about the same as Java in terms of years. So, you know, the languages do tend to move at a, um, at a, they do tend to stick around, uh, uh, a bit, well what's made Java really, really important for enterprises building business critical applications is it started off with a very large ecosystem of big vendors supporting it. Um, it was open in a sense from the very start and it's remained open as in open source and an open community as well. So that's really, really helped, um, you know, keep the language innovating and moving along and attracting new developers. And, um, it's, it's still a fairly modern language in terms of some of the new features it's advancing with the industry taking on new kinds of workloads and new kinds of per program paradigms as well. So, you know, it's, it's evolved very well and has a huge base out somewhere between 11 and 13 million developers still use it as a primary development language in professional settings. Yeah. >>What struck me about what you said though in that interview was kind of the evolution and how Java has been able to continue to adapt based on kind of what the new frameworks are. 
So whether it was early days in a machine, like you talked about being in a set top box, or, you know, kind of really lightweight kind of almost IOT applications then to be calming, you know, this really a great application to deliver enterprise applications via a web browser and that, you know, and it continues to morph and change and adapt over time. I thought that was pretty interesting given the vast change in the way applications are delivered today versus what they were 25 years ago. >>Yeah, absolutely. It's, you know, the very early days were around embedded devices, uh, intelligent toasters and, you know, whatever. Um, and, and then where it really, really took off was, but the building supporting big backend systems, big transactional workloads, whether you're a bank or an airline you're running both the scale, but also running really, really complex transactional systems that were business critical. And that's that's for the last, you know, 15 years has been, um, where it's, it's really shown building backend, um, systems. Now, as we kind of move forward, you know, the idea of, uh, um, like server side, uh, server side application versus a front end is kind of changed. You know, now we're talking microservices, we're talking about running in containers. So really the focus of where we run Java and the kinds of applications we're building with Java as this has radically changed. And as such the language has to change as well, which is, you know, one, I'm pretty excited to talk about caucus today. >>So let's, let's jump into it and talk about corcus cause the other big trend, you know, along with, with, with obviously, uh, uh, browsers being great enterprise applications, delivery vehicles is this thing called containers, right? And, and specifically more recently Kubernetes is the one that's grabbing all the attention and grabbing all the, all the momentum. Um, so I wonder Miguel, if you could talk about, you know, kind of as, as the popularity of containerized applications and containerized to everything right, containerized storage, or you even talked about containerizing networking, troll, how that's impacted, uh, what you guys are doing and the impact of Java, uh, and making it work with kind of a containerized Kubernetes world. >>Well, what we found is that the paradigm of development has teeth. So we have this top up, uh, uh, paradigm that the people are following to be able to do the best with containers, to the best with Kubernetes on the, this has worked quite fine in Greenfield on for, for many cases has been a way to develop applications faster, to be able to obtain variably salts. And the thing is that for many, uh, users, for many companies that we work with, uh, they also want to bring some of their stuff that the applications that are currently are running into this world. And, uh, I mean, we, we walk especially a lot in helping these customers be able to adopt those obligations, but we try to do it, uh, as we say, the N pixie dust, you know, we really dig into the code, we'll review the code with modernize. The application will help their customer with that application. We provide the tools are open for anyone to be able to review it and to be able to take it. So we are moving away from Greenfield into brownfield and not a way we are evolving together to say we more precise, you know, all these Greenfield applications keep coming, but also the current applications want to be more organized. >>Right. Right. So it's pretty interesting. 
Cause that's always the big conversation. There's, it's, it's all fine. And good if you're just building something new, uh, to use the latest tools. But as you mentioned, there's a whole lot of conversation about application modernization and this is really an opportunity to apply some of these techniques to do that. So quirky. So I wonder if you just give, let's just jump into it. What is it at the highest level? Uh, what's it all about? What should people know? >>Yeah. So, so Corker says I'm reading an attempt by red hat to ensure Java is a first-class citizen in containerized environments, but building reactive applications, uh, cloud native applications, uh, functions, Java is an incredible piece of engineering. It does some incredible things. It sudden can self optimize. As it's running in line code, it can do some really amazing things the longer it runs, but in a containerized environment, you're likely not going to be running huge amounts of code. You'd likely be running microservices and your, your services are likely to have a kind of limited life cycle as we you're able to deploy more frequently or in a function environment where, you know, you've been bought once and then you're done, um, you know, during all those long, um, kind of, um, those optimizations over time, don't really, um, make a lot of sense. So what we can do is remove a lot of the, um, the weights of Java, a lot of the complexity of Java, and we can optimize for an environment where your code is maybe just running for a few microseconds as in the case of the function or something running in native, cause you scale up and scale down. >>So we move a lot of the op side. We move a lot of the, um, the, the efforts within the application, uh, to compile time, we pre compile all of your, of your config and initialization, so that doesn't have to happen in your, um, your, your, your runtime or your production environment. Um, and then we can optimize the code week. We can, we can remove that code. We can remove, you know, whole, uh, trees and class libraries and really slimmed down the memory footprint and radically, um, slim, the Maddie memory footprint, um, increase the startup time as well. So, you know, you have less downtime in your applications. Um, and we've recently done a S a study with ADC that shows some pretty stunning results compared to, you know, some existing frameworks. And, you know, we get, um, you know, sort of like, you know, overall cost savings of, you know, 60, 64%. >>Um, we can get eight times better density. You're running more in a, in a, in a cluster and, um, you know, reduction in memory up to 90% as well. So it's, these are significant changes now. That's all good, you know, saving, saving 60, 60% on your operational costs is significant. But what we find is that most organizations, they come for the performance and the optimizations, but what actually stay for is the speed of development. So I think, I think caucus real silver bullets is, um, the developer productivity, you know, for organizations, the cost of development is still one of the major costs. I mean, the operational costs, the hosting costs a significant, but development costs, time to market will always be top of mind for organizations that are trying to move faster than the competition. And I think that's really where, um, um, caucus special and coupled in, uh, in, uh, OpenShift or Coobernetti's environment really, really does shine. Yeah, >>It's pretty interesting. 
So people can go to corcus.io and see a lot of the statistics that you just referenced in terms of memory usage and speed and, and whole bunch of stuff. But what struck me when I went to the site was that was this big, uh, uh, two words that jumped out developer joy. And it's funny that you talked on that just now about really, um, the benefits that come to the developer directly to make them happier. I mean, really calling out their joy. So they're more productive and ultimately that's what you said. That's where the great value is in terms of speed of deployment, happy developers, and productive developers. You know, Miguel, you get your, you get down into the weeds of this stuff. Again, the presentations on your LinkedIn, everyone needs to go look and you talk a lot about at migration and you lot talk a lot about app modernization. So without going through all 120 some odd slides that I think you have, which is good, phenomenal information, what are some of the top things that people need to think about and consider both for app modernization as well as at migration? >>Um, that's, that's, that's an interesting question. Uh, the thing is that, um, the tolling is important on the current code is, and the thing is that normally when, when we started migration project, we tried to find architects in the applications to be able to find patterns. You know, you find parents is much easier because, uh, once you solve one part on the same part on can be solved in a very similar way. So this is one of the parts of that. We focus a lot, but before getting to that point, it's very important how you stop, you know, so the assessment phase is, is very important to be able to review well, what is the status of the applications, the context of the applications. And with that, I mean, things like, for example, the requirements that they have, there's the maintenance that they take in their resiliency and so on. >>So you have to prepare very well, the project by starting with a good assessment, you have to check which applications makes more, make more sense to start with and see which, how to group them together by similarities. And then you can start with the project that saying, okay, let's go for these set of applications that make more sense that are more likely to be containerized because of the way we are developing them because of the dependencies that they have because of the resiliency that is already embedded into them and so on. So that, that the methodology is important. And we normally, for example, when we, when we help partners do a application migration, one of the things that we stress is that this is the methodology that we follow and in the website for my vision, totally for application, you can find also, um, methodology, uh, part that, uh, could help, uh, people understand, okay, these, these are the stages that we normally follow to be successful with migrating applications. >>Yeah. Let go. You don't, we're not friends. We don't hang out a lot, but if we did, you would know I never ever recommend PowerPoint for anything. So, so the fact that I'm calling out your PowerPoint actually means something. Cause I think it's the worst application ever built, but you got some tremendous, tremendous information in there and people do need to go in and look, and again, it's all from your LinkedIn work, but I wanted to shift gears a little bit, right? We're at CubeCon cloud native con. Um, obviously it's virtual is 2020. That's the way the world today. 
But I'm just curious to get your guys' take on what this event means for you. Obviously it's a really active open source community, and Red Hat has a long open source history. What does KubeCon CloudNativeCon mean for you guys? What do you hope to get out of it? What should people hope to learn from Red Hat? >>Yeah, it's in our DNA; we're very, very collaborative. We love to learn from our customers, from users of the technologies, and from the communities that we support. Speaking as, you know, we're both product guys, there's nothing better than getting with people who actually use the products in anger, in real life, whether they're products or upstream technologies: learning what they're doing, understanding where some of the gaps are. We just couldn't do our jobs without engaging with developers and users at these kinds of conferences. A lot of the love and interest we've seen with Quarkus is in the community. I've been part of many, many successful open source projects at Red Hat, and it's great when your customers, like Vodafone Greece or Carrefour in Spain, are openly, publicly talking about how good your technology is and what they're using it for. So there's just no alternative to sitting down, whether virtually or physically, with users of your technology. >>How about you, Miguel? What are you hoping to get out of the show this year? >>We are working a lot on Kubernetes at Red Hat, as part of the community, of course. And there is so much new stuff coming around Kubernetes; it's mostly about all the capabilities that we are gaining, especially, for example, serverless. Serverless is an important topic with Quarkus because, as you make the application start so much faster and react so much faster, you could have none of them running, just waiting for an event to happen, which saves a lot of resources and makes it super efficient. So this is one of the topics, for example, that we wanted to cover in this edition: how we are implementing serverless with Kubernetes and OpenShift, and many other things, like pipelines. We just did quite a review on video of what is coming up, and I recommend people take a look at it to get everything that's new, because there's a lot. >>Yeah. You guys are technical people, you've been doing this for a long time. Why is Kubernetes so special? You know, there have been containers in the past, and we've seen other kinds of branded open source projects that got a lot of momentum, but Kubernetes just seems to be blowing everybody out of its path. Why? What should people know about Kubernetes who aren't necessarily developers? >>Yeah, there's really nothing interesting about a single container or a single microservice, right? That's not the kind of environment that real organizations live in. They live in organizations where they're going to have hundreds of services, not just containers, and you need a technology to orchestrate and manage that complex environment. And Kubernetes has just quickly become the de facto standard.
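To picture the scale-to-zero pattern Miguel describes, here is a hedged sketch using Quarkus' Funqy function model. The event type and function name are invented for illustration, and the surrounding deployment (Knative or OpenShift Serverless creating an instance on demand) is an assumption rather than something specified in the conversation:

```java
// Hypothetical event-driven function; with fast startup, zero instances
// can run between events and one is created on demand when a request
// or message arrives.
import io.quarkus.funqy.Funq;

public class OrderFunction {

    // Invented payload type, just to give the function something to consume.
    public static class OrderEvent {
        public String orderId;
        public double amount;
    }

    @Funq
    public String processOrder(OrderEvent event) {
        // The instance may live only long enough to handle this one event
        // before the platform scales it back down to zero.
        return "processed order " + event.orderId + " (" + event.amount + ")";
    }
}
```

Because the heavy framework initialization already happened at build time, cold-starting an instance like this on demand is cheap enough that keeping nothing running between events becomes practical.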
Folks at Red Hat jumped on it very, very early. I mean, one of the advantages we have at Red Hat is that we're embedded with developers and open source communities, and that gives us a pretty good crystal ball. So we're often quick to jump on the emerging technologies coming out of open source, and that's exactly what happened with Kubernetes. It was clear it was going to be sophisticated enough for our most sophisticated customers running at scale, but also great for development environments as well. So it was really a good fit for where we were headed, and it very, very quickly became the de facto standard. And you've just got to go with the de facto standard, right? >>Right. Well, another thing that you mentioned, Rich, in that other interview I was watching: the conversation came up in terms of managing open source projects. At some point, you know, they kind of start, and then, I think with this one, if I go to Quarkus and look at the bottom of the page, it's sponsored by Red Hat. But you talked about, at some point, do you move it over to a foundation? What are the things that drive that process, that decision? I would imagine that part of it has to do with popularity and scale. Is that something potentially down the road? How do you think about that? You said you've been in lots of open source projects; when does it move from a single point of origin to more of a foundational support model? >>Yeah. In fact, a foundation isn't always necessary. If you have a very open project with clear rules for collaboration, and you encourage others to collaborate and move the project forward, then a foundation isn't always needed. I've been part of the Node.js world, where the community rebelled to keep Node.js moving forward. We had to go from what we call a benevolent dictator for life, somebody who's well-intentioned but owns the technology, to a foundation, which is much more inclusive, with greater collaboration, and you can move even quicker. So I think what's required is open governance for open source projects, and where that doesn't happen, maybe a foundation is the right way forward. Right now with Quarkus, the non-Red Hat developers seem pretty happy with the way they can get engaged and contribute, but if we get to a point where the community is demanding a foundation, we'll absolutely consider it; if that's what's best for the project, we'll do it. >>So we're coming to the end of our time, and I want to give you each the last word, really with two questions. One, again, a kind of summary of KubeCon CloudNativeCon: what should people be looking for, where can they find you, and I don't know if you guys are sponsoring any sessions, but I'm sure there's a lot of great content if you want to highlight one or two things. And then most importantly, as we turn the calendar and come to the end of 2020, thankfully, as you look ahead to 2021, what are some of your priorities? And Miguel, let's start with you.
>>So, I mean, we have been working very hard this year on the Migration Toolkit for Applications, to help every user that is using Java bring it to containers, whether that is Java EE or Quarkus, and we're putting a lot of effort into Quarkus. We are now bringing in new rules, and by December we expect to have the new version of the Migration Toolkit for Applications that is going to include all the tools to help developers bring their Java code to Quarkus. This is the main goal for us right now, and we are moving forward into next year to include more capabilities in that project. Everything is up on the site: you can go to the Konveyor project and take a look at the capabilities for the assessment phase, so that any partner, any of our consultants working on a migration, or anyone who would like to try it themselves and do these migrations to the cloud native world will feel comfortable with this tool. That is the main goal for my team. >>All right. And how about you, Rich? >>Yeah, I think we're going to see a kind of solidification of this web of microservices; if you hate that term, I'm sorry, but call it the next generation of microservices. As Miguel mentioned, it's going to be based around native compilation and serverless functions. I think that's really the ideal architecture for building microservices on Kubernetes, and Quarkus plays really, really well there. I think there's a backlog of projects within organizations, and hopefully next year everything really does start to crank up. And I think a lot of the migration that Miguel has talked about is going to rise in terms of importance. So app modernization, taking those existing applications, maybe taking aspects of those and doing some kind of decomposition into microservices using Quarkus and native compilation, I think we'll see a lot of that. So I think we'll see a real drive around both the greenfield applications, this next generation of microservices, as well as pulling those existing applications forward into these new environments. So it's going to be excellent. >>Awesome. Well, thank you both for taking a few minutes with us and sharing the story of Quarkus, and have a great show. Great to see you, and a really good conversation. All right, he's Miguel, he's Rich, I'm Jeff. You're watching theCUBE's ongoing coverage of KubeCon CloudNativeCon 2020 North America, virtual. Thanks for watching. We'll see you next time.
Eric Herzog, IBM & Sam Werner, IBM | CUBE Conversation, October 2020
(upbeat music) >> Announcer: From theCUBE Studios in Palo Alto and Boston, connecting with thought leaders all around the world. This is a CUBE conversation. >> Hey, welcome back everybody. Jeff Frick here with theCUBE, coming to you from our Palo Alto studios today for a CUBE conversation. We've got a couple of CUBE alumni veterans who've been on a lot of times. They've got some exciting announcements to tell us about today, so we're excited to jump into it. So let's go. First we're joined by Eric Herzog. He's the CMO and VP of worldwide storage channels for IBM Storage, and he's spent a lot of time on theCUBE. Eric, great to see you. >> Great, thanks very much for having us today. >> Jeff: Absolutely. And joining him, I think all the way from North Carolina, Sam Werner, VP of offering management and business line executive for storage at IBM. Sam, great to see you as well. >> Great to be here, thank you. >> Absolutely. So let's jump into it. So Sam, you're in North Carolina, I think that's where the Red Hat people are. You guys have Red Hat, a lot of conversations about containers, containers are going nuts. We know containers are going nuts and it was Docker and then Kubernetes. And really a lot of traction. Wonder if you can reflect on what you see from your point of view and how that impacts what you guys are working on. >> Yeah, you know, it's interesting. Everybody hears about containers constantly. Obviously it's a hot part of digital transformation. What's interesting about it though is most of those initiatives are being driven out of business lines. I spend a lot of time with the people who do infrastructure management, particularly the storage teams, the teams that have to support all of that data in the data center. And they're struggling, to be honest with you. These initiatives are coming at them from application developers, and they're being asked to figure out how to deliver the same level of SLAs, the same level of performance, governance, security, recovery times, availability. And it's a scramble for them, to be quite honest. They're trying to figure out how to automate their storage. They're trying to figure out how to leverage the investments they've made as they go through a digital transformation, and keep in mind, a lot of these initiatives are accelerating right now because of this global pandemic we're living through. I don't know that the strategy's necessarily changed, but there's been an acceleration. So all of a sudden these storage people are trying to get up to speed or being thrown right into the mix. So we're working directly with them. You'll see, in some of our announcements, we're helping them, you know, get on that journey and provide the infrastructure their teams need. >> And a lot of this is driven by multicloud and hybrid cloud, which we're seeing, you know, a really aggressive move to. Before, it was kind of this rush to public cloud, and then everybody figured out, "Well maybe public cloud isn't necessarily right for everything." And it's kind of this horses for courses, if you will, with multicloud and hybrid cloud, another kind of complexity thrown into the storage mix that you guys have to deal with. >> Yeah, and that's another big challenge. Now in the early days of cloud, people were lifting and shifting applications trying to get lower capex. And they were also starting to deploy DevOps in the public cloud in order to improve agility.
And what they found is there were a lot of challenges with that, where they thought lifting and shifting an application will lower their capital costs the TCO actually went up significantly. Where they started building new applications in the cloud. They found they were becoming trapped there and they couldn't get the connectivity they needed back into their core applications. So now we're at this point where they're trying to really, transform the rest of it and they're using containers, to modernize the rest of the infrastructure and complete the digital transformation. They want to get into a hybrid cloud environment. What we found is, enterprises get two and a half X more value out of the IT when they use a hybrid multicloud infrastructure model versus an all public cloud model. So what they're trying to figure out is how to piece those different components together. So you need a software-driven storage infrastructure that gives you the flexibility, to deploy in a common way and automate in a common way, both in a public cloud but on premises and give you that flexibility. And that's what we're working on at IBM and with our colleagues at Red Hat. >> So Eric, you've been in the business a long time and you know, it's amazing as it just continues to evolve, continues to evolve this kind of unsexy thing under the covers called storage, which is so foundational. And now as data has become, you know, maybe a liability 'cause I have to buy a bunch of storage. Now it is the core asset of the company. And in fact a lot of valuations on a lot of companies is based on its value, that's data and what they can do. So clearly you've got a couple of aces in the hole you always do. So tell us what you guys are up to at IBM to take advantage of the opportunity. >> Well, what we're doing is we are launching, a number of solutions for various workloads and applications built with a strong container element. For example, a number of solutions about modern data protection cyber resiliency. In fact, we announced last year almost a year ago actually it's only a year ago last week, Sam and I were on stage, and one of our developers did a demo of us protecting data in a container environment. So now we're extending that beyond what we showed a year ago. We have other solutions that involve what we do with AI big data and analytic applications, that are in a container environment. What if I told you, instead of having to replicate and duplicate and have another set of storage right with the OpenShift Container configuration, that you could connect to an existing external exabyte class data lake. So that not only could your container apps get to it, but the existing apps, whether they'll be bare-metal or virtualized, all of them could get to the same data lake. Wow, that's a concept saving time, saving money. One pool of storage that'll work for all those environments. And now that containers are being deployed in production, that's something we're announcing as well. So we've got a lot of announcements today across the board. Most of which are container and some of which are not, for example, LTO-9, the latest high performance and high capacity tape. We're announcing some solutions around there. But the bulk of what we're announcing today, is really on what IBM is doing to continue to be the leader in container storage support. >> And it's great, 'cause you talked about a couple of very specific applications that we hear about all the time. 
One obviously on the big data and analytics side, you know, as that continues to grow, kind of chasing that ultimate goal of getting the right information to the right people at the right time so they can make the right decision. And the other piece you talked about was business continuity and data replication, and bringing people back. And one of the hot topics we've talked to a lot of people about now is kind of this shift in the security threat around ransomware, and the fact that these guys are a little bit more sophisticated and will actually go after your backup before they let you know that they're into your primary storage. So these are two really important market areas where we see continued activity from all the people that we talk to every day. You must be seeing the same thing. >> Absolutely we are indeed. You know, containers are the wave. I'm a native Californian and I'm coming to you from Silicon Valley, and you don't fight the wave, you ride it. So at IBM we're doing that. We've been the leader in container storage. As you know, way back when, we invented the hard drive, which is the foundation of almost this entire storage industry, and we were responsible for that. So we're making sure that as containers are the coming wave, we are riding that in and doing the right things for our customers and for the channel partners that support those customers, whether they be existing customers or, obviously, with this move to containers, the people who are going to be searching for a new vendor. And that's something that's going to go right into our wheelhouse because of the things we're doing. And some of our capabilities, for example, with our FlashSystems, with our Spectrum Virtualize: we're actually going to be able to support CSI snapshots not only for IBM Storage, but our Spectrum Virtualize product supports over 500 different arrays, most of which aren't ours. So if you've got that old EMC VNX2, or that HPE 3PAR or a Nimble, or all kinds of other storage, if you need CSI snapshot support, you can get it from IBM with our Spectrum Virtualize software that runs on our FlashSystems, which of course cuts capex and opex in a heterogeneous environment, but gives them that advanced container support that they don't get because they're on an older product from, you know, another vendor. We're making sure that we can pull our storage and even our competitors' storage into the world of containers and do it in the right way for the end user. >> That's great. Sam, I want to go back to you and talk about the relationship with Red Hat. I think it was about a year ago, I don't have my notes in front of me, when IBM purchased Red Hat. Clearly you guys have been working very closely together. What does that mean for you? You've been in the business for a long time. You've been at IBM for a long time. To have a partner, you know, kind of embed with you, with Red Hat, and bring some of their capabilities into your portfolio. >> It's been an incredible experience, and I always say my friends at Red Hat because we spend so much time together. We're looking at now leveraging a community that's really on the front edge of this movement to containers. They bring that, along with their experience around storage and containers, along with the years and years of enterprise class storage delivery that we have in the IBM Storage portfolio. And we're bringing those pieces together. And this is a case of truly one plus one equals three.
And you know, an example you'll see in this announcement is the integration of our data protection portfolio with their container native storage. We allow you to in any environment, take a snapshot of that data. You know, this move towards modern data protection is all about a movement to doing data protection in a different way which is about leveraging snapshots, taking instant copies of data that are application aware, allowing you to reuse and mount that data for different purposes, be able to protect yourself from ransomware. Our data protection portfolio has industry leading ransomware protection and detection in it. So we'll actually detect it before it becomes a problem. We're taking that, industry leading data protection software and we are integrating it into Red Hat, Container Native Storage, giving you the ability to solve one of the biggest challenges in this digital transformation which is backing up your data. Now that you're moving towards, stateful containers and persistent storage. So that's one area we're collaborating. We're working on ensuring that our storage arrays, that Eric was talking about, that they integrate tightly with OpenShift and that they also work again with, OpenShift Container Storage, the Cloud Native Storage portfolio from, Red Hat. So we're bringing these pieces together. And on top of that, we're doing some really, interesting things with licensing. We allow you to consume the Red Hat Storage portfolio along with the IBM software-defined Storage portfolio under a single license. And you can deploy the different pieces you need, under one single license. So you get this ultimate investment protection and ability to deploy anywhere. So we're, I think we're adding a lot of value for our customers and helping them on this journey. >> Yeah Eric, I wonder if you could share your perspective on multicloud management. I know that's a big piece of what you guys are behind and it's a big piece of kind of the real world as we've kind of gotten through the hype and now we're into production, and it is a multicloud world and it is, you got to manage this stuff it's all over the place. I wonder if you could speak to kind of how that challenge you know, factors into your design decisions and how you guys are about, you know, kind of the future. >> Well we've done this in a couple of ways in things that are coming out in this launch. First of all, IBM has produced with a container-centric model, what they call the Multicloud Manager. It's the IBM Cloud Pak for multicloud management. That product is designed to manage multiple clouds not just the IBM Cloud, but Amazon, Azure, et cetera. What we've done is taken our Spectrum Protect Plus and we've integrated it into the multicloud manager. So what that means, to save time, to save money and make it easier to use, when the customer is in the multicloud manager, they can actually select Spectrum Protect Plus, launch it and then start to protect data. So that's one thing we've done in this launch. The other thing we've done is integrate the capability of IBM Spectrum Virtualize, running in a FlashSystem to also take the capability of supporting OCP, the OpenShift Container Platform in a Clustered environment. So what we can do there, is on-premise, if there really was an earthquake in Silicon Valley right now, that OpenShift is sitting on a server. The servers just got crushed by the roof when it caved in. So you want to make sure you've got disaster recovery. 
So what we can do is take that OpenShift Container Platform Cluster, and we can support it with our Spectrum Virtualize software running on our FlashSystem, just like we can do heterogeneous storage that's not ours; in this case, we're doing it with Red Hat. And then what we can do is provide disaster recovery and business continuity to different cloud vendors, not just to IBM Cloud but to several cloud vendors. We can give them the capability of replicating and protecting that cluster to a cloud configuration. So if there really was an earthquake, they could then go to the cloud, they could recover that Red Hat cluster to a different data center and run it on-prem. So we're not only doing the integration with the multicloud manager, which is multicloud-centric, allowing ease of use with our Spectrum Protect Plus, but in case of a really tough situation, a fire in a data center, earthquake, hurricane, whatever, the Red Hat OpenShift Cluster can be replicated out to a cloud with our Spectrum Virtualize software. So in both cases these are multicloud examples, because in the first one, of course, the multicloud manager is designed for and does support multiple clouds, and in the second example we support multiple clouds with our Spectrum Virtualize for Public Cloud software, so you can take that OpenShift cluster, replicate it, and not just deal with one cloud vendor but with several. So we're showing that multicloud management is important and then leveraging that in this launch with a very strong element of container centricity. >> Right >> Yeah, I just want to add, you know, and I'm glad you brought that up Eric, this whole multicloud capability with Spectrum Virtualize. And I could see the same for our Spectrum Scale family, which is our storage infrastructure for AI and big data. We actually, in this announcement, have containerized the client, making it very simple to deploy in a Kubernetes cluster. But one of the really special things about Spectrum Scale is its active file management. This allows you to build out a file system not only on-premises for your Kubernetes cluster, but you can actually extend that to a public cloud and it automatically will extend the file system. If you were to go into a public cloud marketplace, and it's available in more than one, for example AWS Marketplace, you can go in there, click deploy, and it will deploy your Spectrum Scale cluster. You've now extended your file system from on-prem into the cloud. If you need to access any of that data, you can access it and it will automatically cache it locally, and we'll manage all the file access for you. >> Yeah, it's an interesting kind of paradox between, you know, the complexity of what's going on in the back end, but really trying to deliver simplicity on the front end. Again, this ultimate goal of getting the right data to the right person at the right time. You just had a blog post recently, Eric, where you talked about how every piece of data isn't equal. And I think it's really highlighted in this conversation we just had about recovery, and how you prioritize and how you think about your data, because the relative value of any particular piece might be highly variable, which should drive the way that you treat it in your system. So I wonder if you can speak a little bit, you know, to helping people think about data in the right way.
As you know, they both have all their operational data which they've always had, but now they've got all this unstructured data that's coming in like crazy and all data isn't created equal, as you said. And if there is an earthquake or there is a ransomware attack, you need to be smart about what you have available to bring back quickly. And maybe what's not quite so important. >> Well, I think the key thing, let me go to, you know a modern data protection term. These are two very technical terms was, one is the recovery time. How long does it take you to get that data back? And the second one is the recovery point, at what point in time, are you recovering the data from? And the reason those are critical, is when you look at your datasets, whether you replicate, you snap, you do a backup. The key thing you've got to figure out is what is my recovery time? How long is it going to take me? What's my recovery point. Obviously in certain industries you want to recover as rapidly as possible. And you also want to have the absolute most recent data. So then once you know what it takes you to do that, okay from an RPO and an RTO perspective, recovery point objective, recovery time objective. Once you know that, then you need to look at your datasets and look at what does it take to run the company if there really was a fire and your data center was destroyed. So you take a look at those datasets, you see what are the ones that I need to recover first, to keep the company up and rolling. So let's take an example, the sales database or the support database. I would say those are pretty critical to almost any company, whether you'd be a high-tech company, whether you'd be a furniture company, whether you'd be a delivery company. However, there also is probably a database of assets. For example, IBM is a big company. We have buildings all over, well, guess what? We don't lease a chair or a table or a whiteboard. We buy them. Those are physical assets that the company has to pay, you know, do write downs on and all this other stuff, they need to track it. If we close a building, we need to move the desk to another building. Like even if we leasing a building now, the furniture is ours, right? So does an asset database need to be recovered instantaneously? Probably not. So we should focus on another thing. So let's say on a bank. Banks are both online and brick and mortar. I happened to be a Wells Fargo person. So guess what? There's Wells Fargo banks, two of them in the city I'm in, okay? So, the assets of the money, in this case now, I don't think the brick and mortar of the building of Wells Fargo or their desks in there but now you're talking financial assets or their high velocity trading apps. Those things need to be recovered almost instantaneously. And that's what you need to do when you're looking at datasets, is figure out what's critical to the business to keep it up and rolling, what's the next most critical. And you do it in basically the way you would tear anything. What's the most important thing, what's the next most important thing. It doesn't matter how you approach your job, how you used to approach school, what are the classes I have to get an A and what classes can I not get an A and depending on what your major was, all that sort of stuff, you're setting priorities, right? And the dataset, since data is the most critical asset of any company, whether it's a Global Fortune 500 or whether it's Herzog Cigar Store, all of those assets, that data is the most valuable. 
So you've got to make sure you recover what you need as rapidly as you need it. But you can't recover all of it; there's just no way to do that. So that's why you really rank the importance of the data. It's the same way with malware and ransomware: if you have a malware or ransomware attack, there's certain data you need to recover as soon as you can. In fact, there was an example, Jeff, here in Silicon Valley as well. You've probably read about the University of California, San Francisco. They ended up having to pay over a million dollars of ransom because some of the data related to COVID research was held; UCSF is the health care center for the University of California in Northern California. They were working on COVID and guess what? The stuff was held for ransom. They had no choice but to pay, and they really did pay; this was around the end of June of this year. So, okay, you don't really want to do that. >> Jeff: Right >> So you need to look at everything from malware and ransomware to the importance of the data. And that's how you figure this stuff out, whether it be in a container environment, a traditional environment, or a virtualized environment. And that's why data protection is so important. And with this launch, not only are we doing the data protection we've been doing for years, but we're now taking it to the heart of the new wave, which is the wave of containers. >> Yeah, let me add just quickly on that, Eric. So think about those different cases you talked about. For your mission critical data, you're going to want snapshots of that data that can be recovered near instantaneously. And then, for some of your data, you might decide you want to store it out in cloud. And with Spectrum Protect, we just announced our ability to now store data out in Google Cloud, in addition to AWS, Azure, IBM Cloud, and the various on-prem object stores we already supported. So we already provided that capability. And then in this announcement we're talking about LTO-9. You've also got to be smart about which data you need to keep for long periods of time according to regulation, or which is just important to archive. You're not going to beat the economics nor the safety of storing data out on tape. But like Eric said, if all of your data is out on tape and you have an event, you're not going to be able to restore it quickly enough, at least the mission critical things. And so those are the things that need to be in snapshots. And that's one of the main things we're announcing here for Kubernetes environments: the ability to quickly take snapshot, application-aware backups of your mission critical data in your Kubernetes environments, so it can be recovered very quickly. >> That's good. So I'll give you the last word, then we're going to sign off. We are out of time, but I do want to get this in: it's 2020, and if I didn't ask the COVID question, I would be in big trouble. So, you know, you've all seen the memes and the jokes about COVID really being an accelerant to digital transformation, not necessarily a change, but certainly a huge accelerant.
I mean, you guys have a, I'm sure a product roadmap that's baked pretty far and advanced, but I wonder if you can speak to, you know, from your perspective, as COVID has accelerated digital transformation you guys are so foundational to executing that, you know, kind of what is it done in terms of what you're seeing with your customers, you know, kind of the demand and how you're seeing this kind of validation as to an accelerant to move to these better types of architectures? Let's start with you Sam. >> Yeah, you know I, and I think i said this, but I mean the strategy really hasn't changed for the enterprises, but of course it is accelerating it. And I see storage teams more quickly getting into trouble, trying to solve some of these challenges. So we're working closely with them. They're looking for more automation. They have less people in the data center on-premises. They're looking to do more automation simplify the management of the environment. We're doing a lot around Ansible to help them with that. We're accelerating our roadmaps around that sort of integration and automation. They're looking for better visibility into their environments. So we've made a lot of investments around our storage insights SaaS platform, that allows them to get complete visibility into their data center and not just in their data center. We also give them visibility to the stores they're deploying in the cloud. So we're making it easier for them to monitor and manage and automate their storage infrastructure. And then of course, if you look at everything we're doing in this announcement, it's about enabling our software and our storage infrastructure to integrate directly into these new Kubernetes, initiatives. That way as this digital transformation accelerates and application developers are demanding more and more Kubernetes capabilities. They're able to deliver the same SLAs and the same level of security and the same level of governance, that their customers expect from them, but in this new world. So that's what we're doing. If you look at our announcement, you'll see that across, across the sets of capabilities that we're delivering here. >> Eric, we'll give you the last word, and then we're going to go to Eric Cigar Shop, as soon as this is over. (laughs) >> So it's clearly all about storage made simple, in a Kubernetes environment, in a container environment, whether it's block storage, file storage, whether it be object storage and IBM's goal is to offer ever increasing sophisticated services for the enterprise at the same time, make it easier and easier to use and to consume. If you go back to the old days, the storage admins manage X amount of gigabytes, maybe terabytes. Now the same admin is managing 10 petabytes of data. So the data explosion is real across all environments, container environments, even old bare-metal. And of course the not quite so new anymore virtualized environments. The admins need to manage that more and more easily and automated point and click. Use AI based automated tiering. For example, we have with our Easy Tier technology, that automatically moves data when it's hot to the fastest tier. And when it's not as hot, it's cool, it pushes down to a slower tier, but it's all automated. You point and you click. Let's take our migration capabilities. We built it into our software. I buy a new array, I need to migrate the data. You point, you click, and we automatic transparent migration in the background on the fly without taking the servers or the storage down. 
And we always favor the application workload. So if the application workload is heavy at certain times of day, we slow the migration. At night, for the sake of argument, if it's a company that is not truly heavily 24 by seven and things slow down at night, we accelerate the migration. It's all about automation. We've done it with Ansible; here in this launch, we've done it with additional integration with other platforms. So our Spectrum Scale, for example, can use the OpenShift management framework to configure and to grow our Spectrum Scale or Elastic Storage System clusters. We've done it, in this case, with our Spectrum Protect Plus, as you saw, with integration into the multicloud manager. So for us, it's storage made simple: incredible new features all the time, but at the same time we do that, we make sure that it's easier and easier to use. And in some cases, like with Ansible, it's not even the real storage people; but God forbid that DevOps guy messes with the storage and loses that data, wow. So if you're using something like Ansible and that Ansible framework, we make sure that essentially the DevOps guy, the test guy, the analytics guy basically doesn't lose the data and screw up the storage. And that's a big, big issue. So it's all about storage made simple, in the right way, with incredible enterprise features that we make easier and easier to use. We're trying to make everything essentially like your iPhone, that easy to use. That's the goal. And with a lot fewer storage admins in the world, and incredible storage growth every single year, you'd better make it easy for the same person to manage all that storage, 'cause it's not shrinking. Someone who's sitting at 50 petabytes today is at 150 petabytes the next year, and five years from now they'll be sitting on an exabyte of production data, and they're not going to hire tons of admins. It's going to be the same two or four people that were doing the work; now they've got to manage an exabyte. Which is why storage made simple is such a strong effort for us, with integration with the Kubernetes frameworks, what we've done with OpenShift, heck, even what we used to do in the old days with vCenter Ops from VMware, VASA, VAAI, all those old VMware tools: we made sure there was tight integration, easy to use, easy to manage, but with sophisticated features to go with that. Simplicity is really about how you manage storage. It's not about making your storage dumb. People want smarter and smarter storage. You make it smarter, but you make it easy to use at the same time. >> Right. >> Well, great summary. And I don't think I could do a better job, so I think we'll just leave it right there. So congratulations to both of you and the teams for these announcements; a whole lot of hard work and sweat went in over the last little while, and continued success. And thanks for the check-in, always great to see you. >> Thank you. We love being on theCUBE as always. >> All right, thanks again. All right, he's Eric, he's Sam, I'm Jeff, you're watching theCUBE. We'll see you next time, thanks for watching. (upbeat music)