Chris Jones, Platform9 | Finding your "Just Right" path to Cloud Native
(upbeat music) >> Hi everyone. Welcome back to this Cube conversation here in Palo Alto, California. I'm John Furrier, host of "theCUBE." Got a great conversation around Cloud Native, Cloud Native Journey, how enterprises are looking at Cloud Native and putting it all together. And it comes down to operations, developer productivity, and security. It's the hottest topic in technology. We got Chris Jones here in the studio, director of Product Management for Platform9. Chris, thanks for coming in. >> Hey, thanks. >> So when we always chat about, when we're at KubeCon. KubeConEU is coming up and in a few, in a few months, the number one conversation is developer productivity. And the developers are driving all the standards. It's interesting to see how they just throw everything out there and whatever gets adopted ends up becoming the standard, not the old school way of kind of getting stuff done. So that's cool. Security Kubernetes and Containers are all kind of now that next level. So you're starting to see the early adopters moving to the mainstream. Enterprises, a variety of different approaches. You guys are at the center of this. We've had a couple conversations with your CEO and your tech team over there. What are you seeing? You're building the products. What's the core product focus right now for Platform9? What are you guys aiming for? >> The core is that blend of enabling your infrastructure and PlatformOps or DevOps teams to be able to go fast and run in a stable environment, but at the same time enable developers. We don't want people going back to what I've been calling Shadow IT 2.0. It's, hey, I've been told to do something. I kicked off this Container initiative. I need to run my software somewhere. I'm just going to go figure it out. We want to keep those people productive. At the same time we want to enable velocity for our operations teams, be it PlatformOps or DevOps. >> Take us through in your mind and how you see the industry rolling out this Cloud Native journey. Where do you see customers out there? Because DevOps have been around, DevSecOps is rocking, you're seeing AI, hot trend now. Developers are still in charge. Is there a change to the infrastructure of how developers get their coding done and the infrastructure, setting up the DevOps is key, but when you add the Cloud Native journey for an enterprise, what changes? What is the, what is the, I guess what is the Cloud Native journey for an enterprise these days? >> The Cloud Native journey or the change? When- >> Let's start with the, let's start with what they want to do. What's the goal and then how does that happen? >> I think the goal is that promise land. Increased resiliency, better scalability, and overall reduced costs. I've gone from physical to virtual that gave me a higher level of density, packing of resources. I'm moving to Containers. I'm removing that OS layer again. I'm getting a better density again, but all of a sudden I'm running Kubernetes. What does that, what does that fundamentally do to my operations? Does it magically give me scalability and resiliency? Or do I need to change what I'm running and how it's running so it fits that infrastructure? And that's the reality, is you can't just take a Container and drop it into Kubernetes and say, hey, I'm now Cloud Native. I've got reduced cost, or I've got better resiliency. There's things that your engineering teams need to do to make sure that application is a Cloud Native. 
And then there's what I think is one of the largest shifts of virtual machines to containers. When I was in the world of application performance monitoring, we would see customers saying, well, my engineering team has this Java app, and they said it needs a VM with 12 gig of RAM and eight cores, and that's what we gave it. But it's running slow. I'm working with the application team and you can see it's running slow. And they're like, well, it's got all of its resources. One of those nice features of virtualization is over provisioning. So the infrastructure team would say, well, we gave it all the RAM it needed. And what's wrong with that being over provisioned? It's like, well, Java expects that RAM to be there. Now all of a sudden, when you move to the world of containers, what you've got is not a set resource allocation like it used to be in a VM, right? When you set it for a container, your application teams really need to be paying attention to your resource limits and constraints within the world of Kubernetes. So instead of just being able to say, hey, I'm throwing it over the fence and now it's just going to run on a VM, and that VM's got everything it needs, it's now really running on much more of a shared infrastructure where limits and constraints are going to impact the neighbors. They are going to impact who's making that decision around resourcing. Because the Kubernetes concept of over provisioning and the virtualization concept of over provisioning are not the same. So when I look at this problem, it's like, well, what changed? Well, I'd do my scale tests as an application developer and tester, and I'd see what resources it needs. I'd ask for that in the VM, that sets the high watermark, job's done. Well, in Kubernetes, it's no longer a VM, it's a Kubernetes manifest. And well, who owns that? Who's writing it? Who's setting those limits? To me, that should be the application team. But then when it goes into the operations world, they're like, well, that's now us. Can we change those? So it's that amalgamation of the two that is saying, I'm a developer, I didn't used to have to pay attention, but now I need to pay attention. And an infrastructure person saying, I used to just give 'em what they wanted, but now I really need to know what they want, because it's going to potentially have a catastrophic impact on what I'm running. >> So what's the impact for the developer? Because infrastructure as code is what everybody wants. The developer just wants to get the code going and they got to pay attention to all these things, or don't they? Is that where you guys come in? How do you guys see the problem? Actually scope the problem that you guys solve? 'Cause I think you're getting at the core issue here, which is, I've got Kubernetes, I've got containers, I've got developer productivity that I want to focus on. What's the problem that you guys solve? >> Platform operations teams that are adopting Cloud Native in their environment, they've got that steep learning curve of Kubernetes plus this fundamental change of how an app runs. What we're doing is taking away the burden of needing to operate and run Kubernetes and giving them the choice of the flexibility of infrastructure and location. Be that an air-gapped environment, let's say a telco provider that needs to run a containerized network function and containerized workloads for 5G.
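A rough sketch of the requests-and-limits contract Chris describes above. The workload name, image and numbers below are hypothetical; only the manifest fields themselves (spec.containers[].resources) are standard Kubernetes API, and the values would come from the kind of scale testing he mentions.

```python
import json

# Hypothetical pod spec illustrating the discussion above: in Kubernetes the
# memory and CPU an app gets are written into the manifest, not into a VM
# sizing request, so someone has to own these numbers.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "billing-java-app"},  # made-up workload name
    "spec": {
        "containers": [
            {
                "name": "app",
                "image": "registry.example.com/billing:1.0",  # placeholder image
                "resources": {
                    # requests are what the scheduler reserves on a node
                    "requests": {"memory": "4Gi", "cpu": "2"},
                    # limits are a hard ceiling; a container that exceeds its
                    # memory limit is OOM-killed, unlike a VM where the
                    # hypervisor quietly over-provisions RAM underneath it
                    "limits": {"memory": "8Gi", "cpu": "4"},
                },
            }
        ]
    },
}

# Kubernetes accepts JSON as well as YAML, so this can be written to a file
# and applied with kubectl.
print(json.dumps(pod, indent=2))
```

The shift being pointed at is that these two stanzas are shared state: the application team knows what the app needs, the platform team knows what the cluster can absorb, and both can edit the same file, so ownership has to be settled explicitly. That ownership question is the backdrop for the air-gapped telco case Chris turns to next.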
That's one thing that we can deploy and achieve in a completely inaccessible environment all the way through to Platform9 running traditionally as SaaS, as we were born, that's remotely managing and controlling your Kubernetes environments on-premise AWS. That hybrid cloud experience that could be also Bare Metal, but it's our platform running your environments with our support there, 24 by seven, that's proactively reaching out. So it's removing a lot of that burden and the complications that come along with operating the environment and standing it up, which means all of a sudden your DevOps and platform operations teams can go and work with your engineers and application developers and say, hey, let's get, let's focus on the stuff that, that we need to be focused on, which is running our business and providing a service to our customers. Not figuring out how to upgrade a Kubernetes cluster, add new nodes, and configure all of the low level. >> I mean there are, that's operations that just needs to work. And sounds like as they get into the Cloud Native kind of ops, there's a lot of stuff that kind of goes wrong. Or you go, oops, what do we buy into? Because the CIOs, let's go, let's go Cloud Native. We want to, we got to get set up for the future. We're going to be Cloud Native, not just lift and shift and we're going to actually build it out right. Okay, that sounds good. And when we have to actually get done. >> Chris: Yeah. >> You got to spin things up and stand up the infrastructure. What specifically use case do you guys see that emerges for Platform9 when people call you up and you go talk to customers and prospects? What's the one thing or use case or cases that you guys see that you guys solve the best? >> So I think one of the, one of the, I guess new use cases that are coming up now, everyone's talking about economic pressures. I think the, the tap blows open, just get it done. CIO is saying let's modernize, let's use the cloud. Now all of a sudden they're recognizing, well wait, we're spending a lot of money now. We've opened that tap all the way, what do we do? So now they're looking at ways to control that spend. So we're seeing that as a big emerging trend. What we're also sort of seeing is people looking at their data centers and saying, well, I've got this huge legacy environment that's running a hypervisor. It's running VMs. Can we still actually do what we need to do? Can we modernize? Can we start this Cloud Native journey without leaving our data centers, our co-locations? Or if I do want to reduce costs, is that that thing that says maybe I'm repatriating or doing a reverse migration? Do I have to go back to my data center or are there other alternatives? And we're seeing that trend a lot. And our roadmap and what we have in the product today was specifically built to handle those, those occurrences. So we brought in KubeVirt in terms of virtualization. We have a long legacy doing OpenStack and private clouds. And we've worked with a lot of those users and customers that we have and asked the questions, what's important? And today, when we look at the world of Cloud Native, you can run virtualization within Kubernetes. So you can, instead of running two separate platforms, you can have one. So all of a sudden, if you're looking to modernize, you can start on that new infrastructure stack that can run anywhere, Kubernetes, and you can start bringing VMs over there as you are containerizing at the same time. 
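As a sketch of that "VMs and containers on one platform" idea, this is roughly what a KubeVirt virtual machine looks like when it is declared as a Kubernetes resource. The name and image are invented, and the exact schema should be checked against the KubeVirt version in use; only the general shape of the kubevirt.io VirtualMachine resource is shown.

```python
import json

# Hypothetical KubeVirt VirtualMachine: the VM is just another Kubernetes
# object, scheduled and managed by the same control plane as the containers
# around it.
vm = {
    "apiVersion": "kubevirt.io/v1",
    "kind": "VirtualMachine",
    "metadata": {"name": "legacy-billing-vm"},  # made-up name
    "spec": {
        "running": True,
        "template": {
            "spec": {
                "domain": {
                    "devices": {
                        "disks": [{"name": "rootdisk", "disk": {"bus": "virtio"}}]
                    },
                    "resources": {"requests": {"memory": "2Gi"}},
                },
                "volumes": [
                    {
                        "name": "rootdisk",
                        # containerDisk boots the VM from an OS image that is
                        # distributed like any other container image
                        "containerDisk": {"image": "quay.io/containerdisks/fedora:latest"},
                    }
                ],
            }
        },
    },
}

print(json.dumps(vm, indent=2))  # apply on a cluster that has KubeVirt installed
```

The practical effect is the point being made here: the VM estate and the container estate share one scheduler, one set of manifests and one operational model while workloads are containerized over time.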
So now you can keep your application operations in one environment. And this also helps if you're trying to reduce costs. If you really are saying, we put that Dev environment in AWS, we've got a huge amount of velocity out of it now, can we do that elsewhere? Is there a co-location we can go to? Is there a provider that we can go to where we can run that infrastructure or run the Kubernetes, but not have to run the infrastructure? >> It's going to be interesting too, when you see the Edge come online, you start, we've got Mobile World Congress coming up, KubeCon events we're going to be at, the conversation is not just about public cloud. And you guys obviously solve a lot of do-it-yourself implementation hassles that emerge when people try to kind of stand up their own environment. And we hear from developers consistency between code, managing new updates, making sure everything is all solid so they can go fast. That's the goal. And that, and then people can get standardized on that. But as you get public cloud and do it yourself, kind of brings up like, okay, there's some gaps there as the architecture changes to be more distributed computing, Edge, on-premises cloud, it's cloud operations. So that's cool for DevOps and Cloud Native. How do you guys differentiate from say, some the public cloud opportunities and the folks who are doing it themselves? How do you guys fit in that world and what's the pitch or what's the story? >> The fit that we look at is that third alternative. Let's get your team focused on what's high value to your business and let us deliver that public cloud experience on your infrastructure or in the public cloud, which gives you that ability to still be flexible if you want to make choices to run consistently for your developers in two different locations. So as I touched on earlier, instead of saying go figure out Kubernetes, how do you upgrade a hundred worker nodes in place upgrade. We've solved that problem. That's what we do every single day of the week. Don't go and try to figure out how to upgrade a cluster and then upgrade all of the, what I call Kubernetes friends, your core DNSs, your metrics server, your Kubernetes dashboard. These are all things that we package, we test, we version. So when you click upgrade, we've already handled that entire process. So it's saying don't have your team focused on that lower level piece of work. Get them focused on what is important, which is your business services. >> Yeah, the infrastructure and getting that stood up. I mean, I think the thing that's interesting, if you look at the market right now, you mentioned cost savings and recovery, obviously kind of a recession. I mean, people are tightening their belts for sure. I don't think the digital transformation and Cloud Native spend is going to plummet. It's going to probably be on hold and be squeezed a little bit. But to your point, people are refactoring looking at how to get the best out of what they got. It's not just open the tap of spend the cash like it used to be. Yeah, a couple months, even a couple years ago. So okay, I get that. But then you look at the what's coming, AI. You're seeing all the new data infrastructure that's coming. The containers, Kubernetes stuff, got to get stood up pretty quickly and it's got to be reliable. So to your point, the teams need to get done with this and move on to the next thing. >> Chris: Yeah, yeah, yeah. >> 'Cause there's more coming. 
I mean, there's a lot coming for the apps that are being built in Data Native, AI-Native, Cloud Native. So it seems that this Kubernetes thing needs to get solved. Is that kind of what you guys are focused on right now? >> So, I mean, to use a customer example, we have a customer that's in AI/ML and they run their platform at customer sites, and that's hardware bound. You can't run AI machine learning on anything anywhere. Well, with Platform9 they can. So we're enabling them to deliver services into their customers, running their AI/ML platform in their customers' data centers anywhere in the world on hardware that is purpose-built for running that workload. They're not Kubernetes experts. That's what we are. We're bringing them that ability to focus on what's important and just delivering their business services whilst they're leaning on our team and our 24 by seven proactive management and always-on assurance to keep that up and running for them. So when something goes bump in the night at 2:00am, our guys get woken up. They're the ones that are reaching out to the customer saying, your environments have a problem, we're taking these actions to fix it. Obviously sometimes, especially if it is running on Bare Metal, there's things you can't do remotely. So you might need someone to go and do that. But even when that happens, you're not by yourself. You're not sitting there like I did when I worked for a bank in one of my first jobs, three o'clock in the morning saying, wow, our end of day processing is stuck. Who else am I waking up? Right? >> Exactly, yeah. Got to get that cash going. But this is a great use case. I want to get to the customer. What do some of the successful customers say to you, for the folks watching that aren't yet a customer of Platform9? What are some of the accolades and comments or anecdotes that you guys hear from customers that you have? >> It just works, which I think is probably one of the best ones you can get. Customers coming back and being able to show to their business that they've delivered growth, like business growth and productivity growth, and keeping their organization size the same. So we started on our containerization journey, we went to Kubernetes, we've deployed all these new workloads, and our operations team is still six people. We're doing way more with less, even through that growth, and I think that's also talking to the strength that we're bringing, 'cause we're augmenting that team. They're spending less time on the really low level stuff and automating a lot of the growth activity that's involved. So when it comes to being able to grow their business, they can just focus on that, not- >> Well you guys do the heavy lifting, keep on top of the Kubernetes, make sure that all the versions are all done. Everything's stable and consistent so they can go on and do the build out and provide their services. That seems to be what you guys are best at. >> Correct, correct. >> And so what's on the roadmap? You have the product, direct product management, you get the keys to the kingdom. What is the focus? What's your focus right now? Obviously Kubernetes is growing up, Containers. We've been hearing a lot at the last KubeCon about the security of containers getting better. You've seen verification, a lot more standards around some things. What are you focused on right now from a product standpoint over there? >> Edge is a really big focus for us. And I think in Edge you can look at it in two ways. The mantra that I drive is Edge must be remote.
If you can't do something remotely at the Edge, you are using a human being; that's not Edge. Our Edge management capabilities, and we've been in the market for over two years, are a hundred percent remote. You want to stand up a store, you just ship the server in there, it gets racked, the rest of it's remote. Imagine a store manager in, I don't know, KFC, just plugging in the server, putting in the ethernet cable, pressing the power button. The rest of all that provisioning for that Cloud Native stack, Kubernetes, KubeVirt for virtualization, is done remotely. So we're continuing to focus on that. The next piece that is related to that is allowing people to run Platform9 SaaS in their data centers. So we do air gap today, and we've had a really strong focus on telecommunications and the containerized network functions that come along with that. So this next piece is saying, we're bringing what we run as SaaS into your data center, so then you can run it. 'Cause there are many people out there that are saying, we want these capabilities and we want everything that the Platform9 control plane brings and simplifies, but unfortunately, regulatory compliance reasons mean that we can't leverage SaaS. So they might be using a cloud, but they're saying that's still our infrastructure, we've still closed that network down, or they're still on-prem. So those are two big priorities for us this year. And that on-premises experience is paramount, even to the point that we will be delivering a way that when you run on-premises, you can still say, wait a second, well I can send outbound alerts to Platform9, so their support team can still be proactively helping me as much as they could, even though I'm running Platform9's control plane. So it's sort of giving that blend of two experiences. They're big, big priorities. And the third pillar is all around virtualization. It's saying if you have economic pressures, then I think it's important to look at what you're spending today and realistically say, can that be reduced? And I think hypervisors and virtualization are something that should be looked at, because if you can actually reduce that spend, you can bring in some modernization at the same time. Let's take some of those nodes that exist that are two years into their five year hardware life cycle. Let's turn that into a Cloud Native environment, which is enabling your modernization in place. It's giving your engineers and application developers the new toys, the new experiences, and then you can start running some of those virtualized workloads with KubeVirt there. So you're reducing cost and you're modernizing at the same time with your existing infrastructure. >> You know Chris, the topic of this content series that we're doing with you guys is finding the right path, trusting the right path to Cloud Native. What does that mean? I mean, if you had to kind of summarize that phrase, trusting the right path to Cloud Native, what does that mean? Does it mean architecture, is it deployment? Is it operations? What's the underlying main theme of that quote? How would you talk to a customer if someone said, "Hey, what does that right path mean?" >> I think the right path means focusing on what you should be focusing on.
I know I've said it a hundred times, but if your entire operations team is trying to figure out the nuts and bolts of Kubernetes and getting three months into a journey and discovering, ah, I need Metrics Server to make something function. I want to use Horizontal Pod Autoscaler or Vertical Pod Autoscaler and I need this other thing, now I need to manage that. That's not the right path. That's literally learning what other people have been learning for the last five, seven years that have been focused on Kubernetes solely. So the why- >> There's been a lot of grind. People have been grinding it out. I mean, that's what you're talking about here. They've been standing up the, when Kubernetes started, it was all the promise. >> Chris: Yep. >> And essentially manually kind of getting in in the weeds and configuring it. Now it's matured up. They want stability. >> Chris: Yeah. >> Not everyone can get down and dirty with Kubernetes. It's not something that people want to generally do unless you're totally into it, right? Like I mean, I mean ops teams, I mean, yeah. You know what I mean? It's not like it's heavy lifting. Yeah, it's important. Just got to get it going. >> Yeah, I mean if you're deploying with Platform9, your Ops teams can tinker to their hearts content. We're completely compliant upstream Kubernetes. You can go and change an API server flag, let's go and mess with the scheduler, because we want to. You can still do that, but don't, don't have your team investing in all this time to figure it out. It's been figured out. >> John: Got it. >> Get them focused on enabling velocity for your business. >> So it's not build, but run. >> Chris: Correct? >> Or run Kubernetes, not necessarily figure out how to kind of get it all, consume it out. >> You know we've talked to a lot of customers out there that are saying, "I want to be able to deliver a service to my users." Our response is, "Cool, let us run it. You consume it, therefore deliver it." And we're solving that in one hit versus figuring out how to first run it, then operate it, then turn that into a consumable service. >> So the alternative Platform9 is what? They got to do it themselves or use the Cloud or what's the, what's the alternative for the customer for not using Platform9? Hiring more people to kind of work on it? What's the? >> People, building that kind of PaaS experience? Something that I've been very passionate about for the past year is looking at that world of sort of GitOps and what that means. And if you go out there and you sort of start asking the question what's happening? Just generally with Kubernetes as well and GitOps in that scope, then you'll hear some people saying, well, I'm making it PaaS, because Kubernetes is too complicated for my developers and we need to give them something. There's some great material out there from the likes of Intuit and Adobe where for two big contributors to Argo and the Argo projects, they almost have, well they do have, different experiences. One is saying, we went down the PaaS route and it failed. The other one is saying, well we've built a really stable PaaS and it's working. What are they trying to do? They're trying to deliver an outcome to make it easy to use and consume Kubernetes. So you could go out there and say, hey, I'm going to build a Kubernetes cluster. Sounds like Argo CD is a great way to expose that to my developers so they can use Kubernetes without having to use Kubernetes and start automating things. 
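For the Argo CD pattern being described, the unit developers end up touching is an Application resource that points a cluster at a Git repo. A minimal, hypothetical sketch follows; the repo URL, path and namespaces are invented, while the argoproj.io Application fields shown are the standard ones.

```python
import json

# Hypothetical Argo CD Application: "watch this path in this repo and keep
# the cluster in sync with it," which is how teams expose Kubernetes to
# developers without handing them kubectl.
app = {
    "apiVersion": "argoproj.io/v1alpha1",
    "kind": "Application",
    "metadata": {"name": "payments-service", "namespace": "argocd"},
    "spec": {
        "project": "default",
        "source": {
            "repoURL": "https://git.example.com/payments/deploy.git",  # made up
            "path": "overlays/production",
            "targetRevision": "main",
        },
        "destination": {
            "server": "https://kubernetes.default.svc",
            "namespace": "payments",
        },
        # automated sync is what turns a merged pull request into a deployment
        "syncPolicy": {"automated": {"prune": True, "selfHeal": True}},
    },
}

print(json.dumps(app, indent=2))
```

This is the do-it-yourself way to lay that experience down, which is exactly the trade-off the next exchange gets into.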
That is an approach, but you're going to be going completely open source and you're going to have to bring in all the individual components, or you could just lay it down and consume it as a service and not have to- >> And you mentioned Intuit. They were the ones who kind of brought that into the open. >> They did. Intuit is the primary contributor to the Argo set of products. >> How has that been received in the market? I mean, they had the event at the Computer History Museum last fall. What's the momentum there? What's the big takeaway from that project? >> Growth. To me, growth. I mean, go and track the stars on that one. It's just growth. It's unlocking machine learning. Argo Workflows can do more than just make things happen. Argo CD, I think the approach they're taking is, hey, let's make this simple to use, which I think can be lost. And I think credit where credit's due, they're really pushing to bring in a lot of capabilities to make it easier to work with applications and microservices on Kubernetes. It's not just that, hey, here's a GitOps tool that can take something from a Git repo and deploy it and maybe prioritize it and help you scale your operations from that perspective. It's taking a step back and saying, well, how did we get to production in the first place? And what can be done down there to help as well? I think it's growth, expansion of features. They had a huge release just come out, I think it was 2.6, that brought in things that, as a product manager, I don't often look at, like really deep technical things, and say, wow, that's powerful. But they've got some great features in that release that really do solve real problems. >> And as the product person, who's the target buyer for you? Who's the customer? Who's making that decision? And you got decision maker, influencer, and recommender. Take us through the customer persona for you guys. >> So that Platform Ops, DevOps space, right, the people that need to be delivering Containers as a service out to their organization. But then it's also important to say, well, who else are our primary users? And that's developers, engineers, right? They shouldn't have to say, oh well, I have access to a Kubernetes cluster. Do I have to use kubectl or do I need to go find some other tool? No, they can just log in to Platform9. It's integrated with your enterprise ID. >> They're the end customer at the end of the day, they're the user. >> Yeah, yeah. They can log in. And they can see the clusters you've given them access to as a Platform Ops administrator. >> So job well done for you guys. And in your mind, the developers are moving fast, coding and happy. >> Chris: Yeah, yeah. >> And from a customer standpoint, you reduce the maintenance cost, because you keep the Ops smoother, so you got efficiency and maintenance costs kind of reduced, or is that kind of the benefits? >> Yeah, yep, yeah. And at two o'clock in the morning when things go inevitably wrong, they're not there by themselves, and we're proactively working with them. >> And that's the uptime issue. >> That is the uptime issue. And Cloud doesn't solve that, right? Everyone has experienced that Clouds can go down, entire regions can go offline. That's happened to all Cloud providers. And what do you do then? Kubernetes isn't your recovery plan. It's part of it, right, but it's that piece. >> You know Chris, to wrap up this interview, I will say that "theCUBE" is 12 years old now. We've been to OpenStack early days.
We had you guys on when we were covering OpenStack and now Cloud has just been booming. You got AI around the corner, AI Ops, now you got all this new data infrastructure, it's just amazing Cloud growth, Cloud Native, Security Native, Cloud Native, Data Native, AI Native. It's going to be all, this is the new app environment, but there's also existing infrastructure. So going back to OpenStack, rolling our own cloud, building your own cloud, building infrastructure cloud, in a cloud way, is what the pioneers have done. I mean this is what we're at. Now we're at this scale next level, abstracted away and make it operational. It seems to be the key focus. We look at CNCF at KubeCon and what they're doing with the cloud SecurityCon, it's all about operations. >> Chris: Yep, right. >> Ops and you know, that's going to sound counterintuitive 'cause it's a developer open source environment, but you're starting to see that Ops focus in a good way. >> Chris: Yeah, yeah, yeah. >> Infrastructure as code way. >> Chris: Yep. >> What's your reaction to that? How would you summarize where we are in the industry relative to, am I getting, am I getting it right there? Is that the right view? What am I missing? What's the current state of the next level, NextGen infrastructure? >> It's a good question. When I think back to sort of late 2019, I sort of had this aha moment as I saw what really truly is delivering infrastructure as code happening at Platform9. There's an open source project Ironic, which is now also available within Kubernetes that is Metal Kubed that automates Bare Metal as code, which means you can go from an empty server, lay down your operating system, lay down Kubernetes, and you've just done everything delivered to your customer as code with a Cloud Native platform. That to me was sort of the biggest realization that I had as I was moving into this industry was, wait, it's there. This can be done. And the evolution of tooling and operations is getting to the point where that can be achieved and it's focused on by a number of different open source projects. Not just Ironic and and Metal Kubed, but that's a huge win. That is truly getting your infrastructure. >> John: That's an inflection point, really. >> Yeah. >> If you think about it, 'cause that's one of the problems. We had with the Bare Metal piece was the automation and also making it Cloud Ops, cloud operations. >> Right, yeah. I mean, one of the things that I think Ironic did really well was saying let's just treat that piece of Bare Metal like a Cloud VM or an instance. If you got a problem with it, just give the person using it or whatever's using it, a new one and reimage it. Just tell it to reimage itself and it'll just (snaps fingers) go. You can do self-service with it. In Platform9, if you log in to our SaaS Ironic, you can go and say, I want that physical server to myself, because I've got a giant workload, or let's turn it into a Kubernetes cluster. That whole thing is automated. To me that's infrastructure as code. I think one of the other important things that's happening at the same time is we're seeing GitOps, we're seeing things like Terraform. I think it's important for organizations to look at what they have and ask, am I using tools that are fit for tomorrow or am I using tools that are yesterday's tools to solve tomorrow's problems? And when especially it comes to modernizing infrastructure as code, I think that's a big piece to look at. >> Do you see Terraform as old or new? >> I see Terraform as old. 
It's a fantastic tool, capable of many great things, and it can work with basically every single provider out there on the planet. It is able to do things. Is it best fit to run in a GitOps methodology? I don't think it is quite at that point. In fact, if you went and looked at Flux, Flux has ways that make Terraform GitOps compliant, which is absolutely fantastic. It's using two tools, the best of breeds, which is solving that tomorrow problem with tomorrow's solutions. >> So it's old solutions versus new. I like this old way, new way. I mean, Terraform is not that old, it's been around for about eight years or so, whatever. But HashiCorp is doing a great job with that. I mean, so okay, with Terraform, what's the new address? Is it more complex environments? Because Terraform made sense when you had basic DevOps, but now it sounds like there's a whole other level of complexity. >> I got to say. >> New tools. >> That kind of amalgamation of the application into infrastructure. Now my app team is paying way more attention to that manifest file, which is what GitOps is trying to solve. Let's templatize things. Let's version control our manifest, be it Helm, Kustomize, or just a straight-up Kubernetes manifest file, plain and boring. Let's get that version controlled. Let's make sure that we know what is there, why it was changed. Let's get some auditability and things like that. And then let's get that deployment all automated. So that's predicated on the cluster existing. Well, why can't we do the same thing with the cluster? The inception problem. So even if you're in public cloud, the question is like, well, what's calling that API to make that thing happen? Where is that file living? How well can I manage that in a large team? Oh my God, something just changed. Who changed it? Where is that file? And I think that's one of the big pieces to be solved. >> Yeah, and you talk about Edge too and on-premises. I think one of the things I'm observing, and certainly when DevOps was rocking and rolling and infrastructure as code was like the real push, it was pretty much the public cloud, right? >> Chris: Yep. >> And you did Cloud Native and you had stuff on-premises. Yeah, you did some lifting and shifting in the cloud, but the cool stuff was going in the public cloud and you ran DevOps. Okay, now you got on-premises cloud operations and Edge. Is that the new DevOps? I mean, 'cause what you're kind of getting at with the old-new Terraform example is an interesting point, because you're pointing out potentially that that was good DevOps back in the day, or it still is. >> Chris: It is, I was going to say. >> But depending on how you define what DevOps is. So if you say, I got the new DevOps with public, on-premises and Edge, that's just not all public cloud, that's essentially distributed Cloud Native. >> Correct. Is that the new DevOps in your mind or is that? How would you, or is that oversimplifying it? >> Or is it that term where everyone's saying Platform Ops, right? Has it shifted? >> Well, you bring up a good point about Terraform. I mean, Terraform is well proven. People love it. It's got great use cases, and now there seems to be new things happening. We call things like super cloud emerging, which is multicloud and abstraction layers. So you're starting to see stuff being abstracted away for the benefits of moving to the next level, so teams don't get stuck doing the same old thing. They can move on.
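On the "Flux makes Terraform GitOps compliant" point, the mechanism is a controller that reconciles Terraform runs from a Git source the same way Flux reconciles manifests. A very rough sketch follows; the group/version and field names reflect the Flux Terraform controller as I understand it and should be treated as assumptions to verify against the release you run, and the repo and module path are invented.

```python
import json

# Hypothetical Flux tf-controller object: the Terraform configuration lives
# in Git, and the controller plans and applies it on an interval, so the
# "who ran this, from which file" questions above are answered by repo history.
terraform = {
    "apiVersion": "infra.contrib.fluxcd.io/v1alpha2",  # assumption: check your controller version
    "kind": "Terraform",
    "metadata": {"name": "network-baseline", "namespace": "flux-system"},
    "spec": {
        "interval": "10m",
        "approvePlan": "auto",          # apply automatically once the plan succeeds
        "path": "./terraform/network",  # made-up module path inside the repo
        "sourceRef": {"kind": "GitRepository", "name": "infra-repo"},
    },
}

print(json.dumps(terraform, indent=2))
```

Even with that wiring, it is still a stack someone has to assemble and operate themselves, which is the abstraction point the conversation picks up next.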
Like what you guys are doing with Platform9 is providing a service so that teams don't have to do it. >> Correct, yeah. >> That makes a lot of sense, So you just, now it's running and then they move on to the next thing. >> Chris: Yeah, right. >> So what is that next thing? >> I think Edge is a big part of that next thing. The propensity for someone to put up with a delay, I think it's gone. For some reason, we've all become fairly short-tempered, Short fused. You know, I click the button, it should happen now, type people. And for better or worse, hopefully it gets better and we all become a bit more patient. But how do I get more effective and efficient at delivering that to that really demanding- >> I think you bring up a great point. I mean, it's not just people are getting short-tempered. I think it's more of applications are being deployed faster, security is more exposed if they don't see things quicker. You got data now infrastructure scaling up massively. So, there's a double-edged swords to scale. >> Chris: Yeah, yeah. I mean, maintenance, downtime, uptime, security. So yeah, I think there's a tension around, and one hand enthusiasm around pushing a lot of code and new apps. But is the confidence truly there? It's interesting one little, (snaps finger) supply chain software, look at Container Security for instance. >> Yeah, yeah. It's big. I mean it was codified. >> Do you agree that people, that's kind of an issue right now. >> Yeah, and it was, I mean even the supply chain has been codified by the US federal government saying there's things we need to improve. We don't want to see software being a point of vulnerability, and software includes that whole process of getting it to a running point. >> It's funny you mentioned remote and one of the thing things that you're passionate about, certainly Edge has to be remote. You don't want to roll a truck or labor at the Edge. But I was doing a conversation with, at Rebars last year about space. It's hard to do brake fix on space. It's hard to do a, to roll a someone to configure satellite, right? Right? >> Chris: Yeah. >> So Kubernetes is in space. We're seeing a lot of Cloud Native stuff in apps, in space, so just an example. This highlights the fact that it's got to be automated. Is there a machine learning AI angle with all this ChatGPT talk going on? You see all the AI going the next level. Some pretty cool stuff and it's only, I know it's the beginning, but I've heard people using some of the new machine learning, large language models, large foundational models in areas I've never heard of. Machine learning and data centers, machine learning and configuration management, a lot of different ways. How do you see as the product person, you incorporating the AI piece into the products for Platform9? >> I think that's a lot about looking at the telemetry and the information that we get back and to use one of those like old idle terms, that continuous improvement loop to feed it back in. And I think that's really where machine learning to start with comes into effect. As we run across all these customers, our system that helps at two o'clock in the morning has that telemetry, it's got that data. We can see what's changing and what's happening. So it's writing the right algorithms, creating the right machine learning to- >> So training will work for you guys. You have enough data and the telemetry to do get that training data. 
>> Yeah, obviously there's a lot of investment required to get there, but that is something that ultimately could be achieved with what we see in operating people's environments. >> Great. Chris, great to have you here in the studio. A good wide-ranging conversation on Kubernetes and Platform9. I guess my final question would be, how do you look at the next five years out there? Because you got to run the product management, you got to have that 20 mile steer, you got to look at the customers, you got to look at what's going on in the engineering, and you got to kind of have that arc. This is the right path kind of view. What's the five year arc look like for you guys? How do you see this playing out? 'Cause KubeCon is coming up, and are you seeing Kubernetes kind of break away with security? They didn't call it KubeCon Security, they called it CloudNativeSecurityCon, and the inaugural event they just had in Seattle seemed to go well. So security is kind of breaking out, and you got Kubernetes. It's getting bigger. Certainly not going away, but what's your five year arc of how Platform9 and Kubernetes and Ops evolve? >> To stay on that theme, it's focusing on what is most important to our users and getting them to a point where they can just consume it, so they're not having to operate it. So it's finding those big items and bringing that into our platform. It's something that's consumable, that's just taken care of, that's tested with each release. So it's simplifying operations more and more. We've always said freedom in cloud computing. Well, we started on OpenStack and made that simple. Stable, easy, you just have it, it works. We're doing that with Kubernetes. We're expanding out that user base, right, we're saying bring your developers in, they can download their kubeconfig. They can see those Containers that are running there. They can access the events, the log files. They can log in and build a VM using KubeVirt. They're self-servicing. So it's alleviating pressures off of the Ops team, removing the help desk systems that people still seem to rely on. So it's like, what comes into that field that is the next biggest issue? Is it things like CI/CD? Is it simplifying GitOps? Is it bringing in security capabilities to talk to that? Or is that a piece that is best of breed? Is there a reason that it's been spun out to its own conference? Is this something that deserves a focus, that should be a specialized capability, instead of tooling and vendors that we work with, that we partner with, that could be brought in as a service? I think it's looking at those trends and making sure that what we bring in has the biggest impact on our users. >> That's awesome. Thanks for coming in. I'll give you the last word. Put a plug in for Platform9 for the people who are watching. What should they know about Platform9 that they might not know about it? When should they call you guys and when should they engage? Take a minute to give the plug. >> The plug. I think it's, if your operations team is focused on building Kubernetes, stop. That shouldn't be the cloud. That shouldn't be in the Edge, that shouldn't be at the data center. They should be consuming it. If your engineering teams are all trying different ways and doing different things to use and consume Cloud Native services and Kubernetes, they shouldn't be. You want consistency. That's how you get economies of scale.
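The self-service flow Chris describes a little earlier, where a developer just downloads a kubeconfig and looks at their own workloads and events, is the standard Kubernetes client path. A small sketch using the official Python client; the context and namespace names here are hypothetical.

```python
from kubernetes import client, config

# Load the kubeconfig the developer downloaded (defaults to ~/.kube/config).
config.load_kube_config(context="dev-cluster")  # made-up context name

core = client.CoreV1Api()

# The containers that are running there...
for pod in core.list_namespaced_pod(namespace="team-payments").items:
    print(pod.metadata.name, pod.status.phase)

# ...and the recent events, the kind of thing a developer checks before
# opening a help desk ticket.
for event in core.list_namespaced_event(namespace="team-payments").items:
    print(event.last_timestamp, event.reason, event.message)
```

None of that works for a whole organization unless access is wired into central identity, which is where the closing point picks up.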
Provide them with a simple platform that's integrated with all of your enterprise identity, where they can just start consuming instead of having to solve these problems themselves. It's those two personas, right? That's where the problems manifest. What are my operations teams doing, and are they delivering to my company, or are they building infrastructure again? And are my engineers sprinting or crawling? 'Cause if they're not sprinting, you should be asking the question, do I have the right Cloud Native tooling in my environment, and how can I get them back? >> I think it's developer productivity, uptime, security are the telltale signs. You get that done. That's the goal of what you guys are doing, your mission. >> Chris: Yep. >> Great to have you on, Chris. Thanks for coming on. Appreciate it. >> Chris: Thanks very much. >> Okay, this is "theCUBE" here, finding the right path to Cloud Native. I'm John Furrier, host of "theCUBE." Thanks for watching. (upbeat music)
Brian Gilmore, Influx Data | Evolving InfluxDB into the Smart Data Platform
>> This past May, theCUBE, in collaboration with Influx Data, shared with you the latest innovations in time series databases. We talked at length about why a purpose-built time series database, for many use cases, was a superior alternative to general purpose databases trying to do the same thing. Now, you may remember that time series data is any data that's stamped in time, and if it's stamped, it can be analyzed historically. And when we introduced the concept to the community, we talked about how, in theory, those time slices could be taken, you know, every hour, every minute, every second, down to the millisecond, and how the world was moving toward realtime or near realtime data analysis to support physical infrastructure like sensors and other devices and IoT equipment. Time series databases have had to evolve to efficiently support realtime data in emerging IoT and other use cases. And to do that, new architectural innovations have to be brought to bear. As is often the case, open source software is the linchpin to those innovations. Hello and welcome to Evolving InfluxDB into the Smart Data Platform, made possible by Influx Data and produced by theCUBE. My name is Dave Vellante and I'll be your host today. Now, in this program, we're going to dig pretty deep into what's happening with time series data generally, and specifically how InfluxDB is evolving to support new workloads and demands and data, and specifically around data analytics use cases in real time. Now, first we're gonna hear from Brian Gilmore, who is the director of IoT and emerging technologies at Influx Data. And we're gonna talk about the continued evolution of InfluxDB and the new capabilities enabled by open source generally and specific tools. And in this program, you're gonna hear a lot about things like Rust, the implementation of Apache Arrow, the use of Parquet, and tooling such as DataFusion, which power a new engine for InfluxDB. Now, these innovations, they evolve the idea of time series analysis by dramatically increasing the granularity of time series data by compressing the historical time slices, if you will, from, for example, minutes down to milliseconds. And at the same time, enabling real time analytics with an architecture that can process data much faster and much more efficiently. Now, after Brian, we're gonna hear from Anais Dotis-Georgiou, who is a developer advocate at Influx Data. And we're gonna get into the why of these open source capabilities and how they contribute to the evolution of the InfluxDB platform. And then we're gonna close the program with Tim Yokum, he's the director of engineering at Influx Data, and he's gonna explain how the InfluxDB community actually evolved the data engine in mid-flight and which decisions went into the innovations that are coming to the market. Thank you for being here. We hope you enjoy the program. Let's get started. Okay, we're kicking things off with Brian Gilmore. He's the director of IoT and emerging technology at Influx Data. Brian, welcome to the program. Thanks for coming on. >> Thanks Dave. Great to be here. I appreciate the time. >> Hey, explain why InfluxDB, you know, needs a new engine. Was there something wrong with the current engine? What's going on there? >> No, no, not at all. I mean, I think it's, for us, it's been about staying ahead of the market.
I think, you know, if we think about what our customers are coming to us sort of with now, you know, related to requests like sql, you know, query support, things like that, we have to figure out a way to, to execute those for them in a way that will scale long term. And then we also, we wanna make sure we're innovating, we're sort of staying ahead of the market as well and sort of anticipating those future needs. So, you know, this is really a, a transparent change for our customers. I mean, I think we'll be adding new capabilities over time that sort of leverage this new engine, but you know, initially the customers who are using us are gonna see just great improvements in performance, you know, especially those that are working at the top end of the, of the workload scale, you know, the massive data volumes and things like that. >>Yeah, and we're gonna get into that today and the architecture and the like, but what was the catalyst for the enhancements? I mean, when and how did this all come about? >>Well, I mean, like three years ago we were primarily on premises, right? I mean, I think we had our open source, we had an enterprise product, you know, and, and sort of shifting that technology, especially the open source code base to a service basis where we were hosting it through, you know, multiple cloud providers. That was, that was, that was a long journey I guess, you know, phase one was, you know, we wanted to host enterprise for our customers, so we sort of created a service that we just managed and ran our enterprise product for them. You know, phase two of this cloud effort was to, to optimize for like multi-tenant, multi-cloud, be able to, to host it in a truly like sass manner where we could use, you know, some type of customer activity or consumption as the, the pricing vector, you know, And, and that was sort of the birth of the, of the real first influx DB cloud, you know, which has been really successful. >>We've seen, I think, like 60,000 people sign up and we've got tons and tons of, of both enterprises as well as like new companies, developers, and of course a lot of home hobbyists and enthusiasts who are using out on a, on a daily basis, you know, and having that sort of big pool of, of very diverse and very customers to chat with as they're using the product, as they're giving us feedback, et cetera, has has, you know, pointed us in a really good direction in terms of making sure we're continuously improving that and then also making these big leaps as we're doing with this, with this new engine. >>Right. So you've called it a transparent change for customers, so I'm presuming it's non-disruptive, but I really wanna understand how much of a pivot this is and what, what does it take to make that shift from, you know, time series, you know, specialist to real time analytics and being able to support both? >>Yeah, I mean, it's much more of an evolution, I think, than like a shift or a pivot. You know, time series data is always gonna be fundamental and sort of the basis of the solutions that we offer our customers, and then also the ones that they're building on the sort of raw APIs of our platform themselves. You know, the time series market is one that we've worked diligently to lead. 
I mean, I think when it comes to like metrics, especially like sensor data and app and infrastructure metrics, if we're being honest though, I think our user base is well aware that the way we were architected was much more towards those sort of backwards looking, historical type analytics, which are key for troubleshooting and making sure you don't, you know, run into the same problem twice. But, you know, we had to ask ourselves, like, what can we do to better handle those queries from a performance and a, you know, a time to response on the queries, and can we get that to the point where the result sets are coming back so quickly from the time of query that we can limit that window down to minutes and then seconds? And now with this new engine, we're really starting to talk about a query window that could be returning results in, you know, milliseconds of time since it hit the, the ingest queue. And that's, that's really getting to the point where as your data is available, you can use it and you can query it, you can visualize it, and you can do all those sort of magical things with it, you know? And I think getting all of that to a place where we're saying like, yes to the customer on, you know, all of the real time queries, the multiple language query support, but, you know, it was hard, but we're now at a spot where we can start introducing that to, you know, a limited number of customers, strategic customers and strategic availability zones to start. But you know, everybody over time. >> So you're basically going from what happened to, you can still do that obviously, but to what's happening now in the moment? >> Yeah, yeah. I mean, if you think about time, it's always sort of past, right? I mean, like in the moment right now, whether you're talking about like a millisecond ago or a minute ago, you know, that's pretty much right now, I think for most people, especially in these use cases where you have other sort of components of latency induced by the underlying data collection, the architecture, the infrastructure, the, you know, the devices, and you know, the sort of highly distributed nature of all of this. So yeah, I mean, getting a customer or a user to be able to use the data as soon as it is available is what we're after here. >> I always thought of real time as before you lose the customer, but now in this context, maybe it's before the machine blows up. >> Yeah, I mean it is, operationally or operational real time is different, you know, and that's one of the things that really triggered us to know that we were heading in the right direction, is just how many sort of operational customers we have. You know, everything from like aerospace and defense. We've got companies monitoring satellites, we've got tons of industrial users, users using us as a process historian on the plant floor, you know, and if we can satisfy their sort of demands for like real time historical perspective, that's awesome. I think what we're gonna do here is we're gonna start to like edge into the real time that they're used to in terms of, you know, the millisecond response times that they expect of their control systems. Certainly not their, their historians and databases. >> Is this available, these innovations, to InfluxDB cloud customers only? Who can access this capability? >> Yeah. I mean, commercially and today, yes.
You know, I think we want to emphasize that's a, for now our goal is to get our latest and greatest and our best to everybody over time. Of course. You know, one of the things we had to do here was like we double down on sort of our, our commitment to open source and availability. So like anybody today can take a look at the, the libraries in on our GitHub and, you know, can ex inspect it and even can try to, you know, implement or execute some of it themselves in their own infrastructure. You know, we are, we're committed to bringing our sort of latest and greatest to our cloud customers first for a couple of reasons. Number one, you know, there are big workloads and they have high expectations of us. I think number two, it also gives us the opportunity to monitor a little bit more closely how it's working, how they're using it, like how the system itself is performing. >>And so just, you know, being careful, maybe a little cautious in terms of, of, of how big we go with this right away. Just sort of both limits, you know, the risk of, of, you know, any issues that can come with new software rollouts. We haven't seen anything so far, but also it does give us the opportunity to have like meaningful conversations with a small group of users who are using the products, but once we get through that and they give us two thumbs up on it, it'll be like, open the gates and let everybody in. It's gonna be exciting time for the whole ecosystem. >>Yeah, that makes a lot of sense. And you can do some experimentation and, you know, using the cloud resources. Let's dig into some of the architectural and technical innovations that are gonna help deliver on this vision. What, what should we know there? >>Well, I mean, I think foundationally we built the, the new core on Rust. You know, this is a new very sort of popular systems language, you know, it's extremely efficient, but it's also built for speed and memory safety, which goes back to that us being able to like deliver it in a way that is, you know, something we can inspect very closely, but then also rely on the fact that it's going to behave well. And if it does find error conditions, I mean, we, we've loved working with Go and, you know, a lot of our libraries will continue to, to be sort of implemented in Go, but you know, when it came to this particular new engine, you know, that power performance and stability rust was critical. On top of that, like, we've also integrated Apache Arrow and Apache Parque for persistence. I think for anybody who's really familiar with the nuts and bolts of our backend and our TSI and our, our time series merged Trees, this is a big break from that, you know, arrow on the sort of in MI side and then Par K in the on disk side. >>It, it allows us to, to present, you know, a unified set of APIs for those really fast real time inquiries that we talked about, as well as for very large, you know, historical sort of bulk data archives in that PARQUE format, which is also cool because there's an entire ecosystem sort of popping up around Parque in terms of the machine learning community, you know, and getting that all to work, we had to glue it together with aero flight. That's sort of what we're using as our, our RPC component. You know, it handles the orchestration and the, the transportation of the Coer data. 
Now we're moving to like a true Coer database model for this, this version of the engine, you know, and it removes a lot of overhead for us in terms of having to manage all that serialization, the deserialization, and, you know, to that again, like blurring that line between real time and historical data. It's, you know, it's, it's highly optimized for both streaming micro batch and then batches, but true streaming as well. >>Yeah. Again, I mean, it's funny you mentioned Rust. It is, it's been around for a long time, but it's popularity is, is, you know, really starting to hit that steep part of the S-curve. And, and we're gonna dig into to more of that, but give us any, is there anything else that we should know about Bryan? Give us the last word? >>Well, I mean, I think first I'd like everybody sort of watching just to like, take a look at what we're offering in terms of early access in beta programs. I mean, if, if, if you wanna participate or if you wanna work sort of in terms of early access with the, with the new engine, please reach out to the team. I'm sure you know, there's a lot of communications going out and, you know, it'll be highly featured on our, our website, you know, but reach out to the team, believe it or not, like we have a lot more going on than just the new engine. And so there are also other programs, things we're, we're offering to customers in terms of the user interface, data collection and things like that. And, you know, if you're a customer of ours and you have a sales team, a commercial team that you work with, you can reach out to them and see what you can get access to because we can flip a lot of stuff on, especially in cloud through feature flags. >>But if there's something new that you wanna try out, we'd just love to hear from you. And then, you know, our goal would be that as we give you access to all of these new cool features that, you know, you would give us continuous feedback on these products and services, not only like what you need today, but then what you'll need tomorrow to, to sort of build the next versions of your business. Because, you know, the whole database, the ecosystem as it expands out into to, you know, this vertically oriented stack of cloud services and enterprise databases and edge databases, you know, it's gonna be what we all make it together, not just, you know, those of us who were employed by Influx db. And then finally, I would just say please, like watch in ice in Tim's sessions, Like these are two of our best and brightest. They're totally brilliant, completely pragmatic, and they are most of all customer obsessed, which is amazing. And there's no better takes, like honestly on the, the sort of technical details of this, then there's, especially when it comes to like the value that these investments will, will bring to our customers and our communities. So encourage you to, to, you know, pay more attention to them than you did to me, for sure. >>Brian Gilmore, great stuff. Really appreciate your time. Thank you. >>Yeah, thanks Dave. It was awesome. Look forward to it. >>Yeah, me too. Looking forward to see how the, the community actually applies these new innovations and goes, goes beyond just the historical into the real time, really hot area. As Brian said in a moment, I'll be right back with Anna East Dos Georgio to dig into the critical aspects of key open source components of the Influx DB engine, including Rust, Arrow, Parque, data fusion. Keep it right there. You don't want to miss this.
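To ground the split Brian describes — Arrow as the in-memory columnar format, Parquet as the durable on-disk format — here is a minimal, hypothetical sketch in Python using the pyarrow library. It is not InfluxDB engine code (the new engine itself is written in Rust); the measurement names, values, and file name are made up purely to show the pattern of holding columnar batches in Arrow, persisting them as Parquet, and reading back only the columns a query needs.

```python
# Minimal sketch of the Arrow (in-memory) + Parquet (on-disk) pattern.
# Hypothetical data and file names; not InfluxDB internals.
import pyarrow as pa
import pyarrow.compute as pc
import pyarrow.parquet as pq

# A small columnar batch of time series points held in Arrow memory.
batch = pa.table({
    "time": pa.array([1_000, 2_000, 3_000, 4_000], type=pa.int64()),
    "sensor": pa.array(["room", "room", "stove", "stove"]),
    "temp_c": pa.array([21.0, 21.0, 180.5, 182.0]),
})

# Persist the batch as a Parquet file: column-oriented and compressed on disk.
pq.write_table(batch, "points.parquet", compression="zstd")

# Read back only the columns a query needs; Parquet lets us skip the rest.
subset = pq.read_table("points.parquet", columns=["sensor", "temp_c"])

# A cheap columnar aggregation over the in-memory Arrow data.
print(pc.max(subset.column("temp_c")))
```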
>> Time series data is everywhere. The number of sensors, systems, and applications generating time series data increases every day. All these data sources producing so much data can cause analysis paralysis. InfluxDB is an entire platform designed with everything you need to quickly build applications that generate value from time series data. InfluxDB Cloud is a serverless solution, which means you don't need to buy or manage your own servers. There's no need to worry about provisioning, because you only pay for what you use. InfluxDB Cloud is fully managed, so you get the newest features and enhancements as they're added to the platform's code base. It also means you can spend time building solutions and delivering value to your users instead of wasting time and effort managing something else. InfluxDB Cloud offers a range of security features to protect your data. Multiple layers of redundancy ensure you don't lose any data, and access controls ensure that only the people who should see your data can see it.
>> And encryption protects your data at rest and in transit between any of our regions or cloud providers. InfluxDB uses a single API across the entire platform suite, so you can build on open source, deploy to the cloud, and then easily query data in the cloud, at the edge, or on prem using the same scripts. And InfluxDB is schemaless, automatically adjusting to changes in the shape of your data without requiring changes in your application logic. InfluxDB Cloud is production ready from day one. All it needs is your data and your imagination. Get started today at influxdata.com/cloud.
>> Okay, we're back. I'm Dave Vellante with theCUBE, and you're watching Evolving InfluxDB into the Smart Data Platform, made possible by InfluxData. Anais Dos Georgio is here; she's a developer advocate for InfluxData, and we're going to dig into the rationale and value contribution behind several open source technologies that InfluxDB is leveraging to increase the granularity of time series analysis and bring the world of data into real-time analytics. Anais, welcome to the program. Thanks for coming on.
>> Hi, thank you so much. It's a pleasure to be here.
>> You're very welcome. Okay, so IOx is being touted as this next-gen open source core for InfluxDB. My understanding is that it leverages in-memory processing for speed, it's a columnar store, so it gives you compression efficiency and faster query speeds, and you store files in object storage, so you get a very cost-effective approach. Are these the salient points on the platform? I know there are probably dozens of other features, but what are the high-level value points that people should understand?
>> Sure, that's a great question. Some of the main requirements that IOx is trying to achieve — and some of the most impressive ones to me — are these: first, it aims to have no limits on cardinality, and to allow you to write any kind of event data that you want, whether that's a tag or a field. It also wants to deliver best-in-class performance on analytics queries, in addition to our already well-served metrics queries. We also want operator control over memory usage, so you should be able to define how much memory is used for buffering, caching, and query processing. Another really important part is the ability to do bulk data export and import, which is super useful.
There's also broader ecosystem compatibility: where possible, we aim to use and embrace emerging standards in the data analytics ecosystem and to have compatibility with things like SQL, Python, and maybe even Pandas in the future.
>> Okay, so a lot there. Now, we talked to Brian about how you're using Rust, which is not a new programming language, and of course we had some drama around Rust during the pandemic with the Mozilla layoffs, but the formation of the Rust Foundation really addressed any of those concerns. You've got big guns like Amazon, Google, and Microsoft throwing their collective weight behind it, and adoption is really starting to get steep on the S-curve. So lots of platforms, lots of adoption with Rust — but why Rust as an alternative to, say, C++?
>> Sure, that's a great question. Rust was chosen because of its exceptional performance and reliability. While Rust is syntactically similar to C++, has similar performance, and also compiles to native code like C++, unlike C++ it has much better memory safety. Memory safety is protection against the bugs or security vulnerabilities that lead to excessive memory usage or memory leaks, and Rust achieves this memory safety through its innovative type system. Additionally, it doesn't allow dangling pointers, and dangling pointers are the main class of errors that lead to exploitable security vulnerabilities in languages like C++. So Rust helps meet that requirement of having no limits on cardinality — for example, because we're also using the Rust implementation of Apache Arrow and this control over memory — and Rust's packaging system, crates.io, offers everything you need out of the box: features like async and await to fix race conditions, protection against buffer overflows, and thread-safe async caching structures as well. Essentially, it gives you all the fine-grained control you need to use memory and all your resources as well as possible, so that you can handle those really high-cardinality use cases.
>> Yeah, and the more I learn about the new engine and the platform, IOx, and so on — you see things like, in the old days, and even today, a lot of garbage collection in these systems, and there's an inverse impact on performance. So it looks like the community is really modernizing the platform. But I want to talk about Apache Arrow for a moment. It's designed to address the constraints that are associated with analyzing large data sets. We know that, but please explain: what is Arrow, and what does it bring to InfluxDB?
>> Sure. Arrow is a framework for defining in-memory columnar data, and so much of the efficiency and performance of IOx comes from taking advantage of columnar data structures. I will, if you don't mind, take a moment to illustrate why columnar data structures are so valuable. Let's pretend we're gathering field data about the temperature in our room and maybe also the temperature of our stove. In our table we have those two temperature values, as well as maybe a measurement value, a timestamp value, and some other tag values that describe what room and what house, et cetera, we're getting this data from.
So you can picture this table where we have two rows with the two temperature values, one for our room and one for the stove. Usually our room temperature is regulated, so those values don't change very often.
>> When you have column-oriented storage, essentially you take each column and group its values together. If that's the case, and you're just taking temperature values from the room, and a lot of those temperature values are the same, then you can imagine how equal values that neighbor each other in the storage format provide a perfect opportunity for cheap compression. That cheap compression enables high-cardinality use cases. It also enables faster scan rates: if you want to find the min and max temperature in the room across a thousand different points, you only have to read those thousand points from that one column to answer the question, and you have them immediately available to you. But let's contrast this with a row-oriented storage solution instead, so we can better understand the benefits of column-oriented storage.
>> With row-oriented storage, you'd first have to look at every field, like the temperature in the room and the temperature of the stove. You'd have to go across every tag value that describes where the room is located or what model the stove is, and every timestamp, and then pluck out the one temperature value you want at the one timestamp you want — and do that for every single row. So you're scanning across a ton more data, and that's why row-oriented storage doesn't provide the same efficiency as columnar. And Apache Arrow is an in-memory columnar data framework, so that's where a lot of the advantages come from.
>> Okay. So you've basically described a traditional database, a row approach, but I've seen a lot of traditional databases say, okay, now we can handle columnar format. Versus what you're talking about, which is really kind of native. Is the format not as effective when it's largely a bolt-on? Can you elucidate on that front?
>> Yeah, it's not as effective, because you have more expensive compression and because you can't scan across the values as quickly. Those are pretty much the main reasons why row-oriented storage isn't as efficient as column-oriented storage.
>> Got it. So let's talk about Arrow DataFusion. What is DataFusion? I know it's written in Rust, but what does it bring to the table here?
>> Sure. It's an extensible query execution framework, and it uses Arrow as its in-memory format. The way it helps InfluxDB IOx is that it's great if you can write an unlimited amount of cardinality into InfluxDB, but if you don't have a query engine that can successfully query that data, then I don't know how much value it is for you. So DataFusion helps enable the query processing and transformation of that data. It also has a Pandas API, so you can take advantage of Pandas data frames as well, and all of the machine learning tools associated with Pandas.
>> Okay. You're also leveraging Parquet in the platform. We heard a lot about Parquet in the middle of the last decade as a storage format to improve on Hadoop column stores. What are you doing with Parquet, and why is it important?
>> Sure. Parquet is the column-oriented durable file format.
It's important because it enables bulk import and bulk export, and it has compatibility with Python and Pandas, so it supports a broader ecosystem. Parquet files also take very little disk space, and they're faster to scan because, again, they're column oriented. In particular, I think Parquet files are something like 16 times cheaper than CSV files, just as a point of reference. So that's essentially a lot of the benefit of Parquet.
>> Got it. Very popular. So, Anais, what exactly is InfluxData focusing on as a committer to these projects? What is your focus? What's the value that you're bringing to the community?
>> Sure. InfluxData has contributed a lot of different things to the Apache ecosystem — for example, an implementation of Apache Arrow in Go, which will support querying with Flux. There have also been quite a few contributions to DataFusion for things like memory optimization and support for additional SQL features, such as timestamp arithmetic, EXISTS clauses, and memory control. So yeah, Influx has contributed a lot to the Apache ecosystem and continues to do so. And I think the idea here is that if you can improve these upstream projects — the long-term strategy being that the more you contribute and build those up — then the more you perpetuate that cycle of improvement, and the more we invest in our own project as well. It's that kind of symbiotic relationship and appreciation of the open source community.
>> Yeah, got it. You've got that virtuous cycle going — what people call the flywheel. Give us your last thoughts and summarize what the big takeaways are from your perspective.
>> I think the big takeaway is that InfluxData is doing a lot of really exciting things with InfluxDB IOx. If you're interested in learning more about the technologies Influx is leveraging to produce IOx, the challenges associated with it, and all of the hard questions — if you just want to learn more — then I encourage you to go to the monthly tech talks and community office hours; they're held every second Wednesday of the month at 8:30 AM Pacific time. There are also community forums and a community Slack channel — look for the InfluxDB IOx channel specifically — to learn how to join those office hours and monthly tech talks, as well as to ask any questions you have about IOx, what to expect, and what you'd like to learn more about. As a developer advocate, I want to answer your questions, so if there's a particular technology or part of the stack that you want to dive deeper into, and you want more explanation about how InfluxDB leverages it to build IOx, I will be really excited to produce content on that topic for you.
>> Yeah, that's awesome. You have a really rich community: collaborate with your peers, solve problems, and you're super responsive, so we really appreciate that. All right, thank you so much, Anais, for explaining all this open source stuff to the audience and why it's important to the future of data.
>> Thank you. I really appreciate it.
>> All right, you're very welcome. Okay, stay right there, and in a moment I'll be back with Tim Yokum. He's the director of engineering for InfluxData, and we're going to talk about how you update a SaaS engine while the plane is flying at 30,000 feet. You don't want to miss this.
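As a rough illustration of the column-versus-row trade-off Anais walks through, the hypothetical sketch below builds a small table like her room/stove example, writes it out as both CSV and Parquet, and then scans a single column. The data, file names, and the exact size numbers it prints are made up or machine-dependent; it assumes the pandas, numpy, and pyarrow packages are installed, and it is not InfluxDB's storage code — the point is only the shape of the comparison.

```python
# Hypothetical comparison of row-style CSV vs. columnar Parquet
# for a repetitive "room temperature" column.
import os
import numpy as np
import pandas as pd

n = 1_000_000
df = pd.DataFrame({
    "time": np.arange(n, dtype="int64"),
    "room": np.repeat(["kitchen", "office"], n // 2),
    "room_temp_c": np.full(n, 21.0),              # regulated: barely changes
    "stove_temp_c": np.random.uniform(20, 200, n),
})

df.to_csv("points.csv", index=False)       # row-oriented text
df.to_parquet("points.parquet")            # column-oriented, compressed (needs pyarrow)

print("csv bytes:    ", os.path.getsize("points.csv"))
print("parquet bytes:", os.path.getsize("points.parquet"))

# A column scan only has to touch the one column the query needs.
cols = pd.read_parquet("points.parquet", columns=["room_temp_c"])
print("min/max:", cols["room_temp_c"].min(), cols["room_temp_c"].max())
```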
>> I'm really glad that we went with InfluxDB Cloud for our hosting, because it has saved us a ton of time. It's helped us move faster, it's saved us money, and InfluxDB has good support. My name's Alex Nauda. I'm CTO at Nobl9. Nobl9 is a platform to measure and manage service level objectives, which is a great way of measuring the reliability of your systems. You can essentially think of an SLO — the product we're providing to our customers — as a bunch of time series, so we need a way to store that data and the corresponding time series related to it. The main reason we settled on InfluxDB as we were shopping around is that InfluxDB has a very flexible query language, and as a general-purpose time series database it basically had the set of features we were looking for.
>> As our platform has grown, we've found InfluxDB Cloud to be a really scalable solution. We can quickly iterate on new features and functionality because InfluxDB Cloud is entirely managed; it has probably saved us at least a full additional person on our team. We also have the option of running InfluxDB Enterprise, which gives us the ability to host off the cloud or in a private cloud if that's preferred by a customer. InfluxData has been really flexible in adapting to the hosting requirements that we have. They listened to the challenges we were facing and they helped us solve them. As we've continued to grow, I'm really happy we have InfluxData by our side.
>> Okay, we're back with Tim Yokum, who is the director of engineering at InfluxData. Tim, welcome. Good to see you.
>> Good to see you. Thanks for having me.
>> You're really welcome. Listen, we've been covering open source software on theCUBE for more than a decade, and we've watched the innovation from the big data ecosystem: the cloud being built out on open source, mobile, social platforms, key databases, and of course InfluxDB. InfluxData has been a big consumer of and contributor to open source software. So my question to you is, where have you seen the biggest bang for the buck from open source software?
>> So yeah, Influx really thrives at the intersection of commercial services and open source software. OSS keeps us on the cutting edge. We benefit from OSS in delivering our own service, from our core storage engine technologies to web services and templating engines. Our team stays lean and focused because we build on proven tools — we really build on the shoulders of giants — and, like you mentioned, even better, we contribute a lot back to the projects that we use, as well as to our own product, InfluxDB.
>> You know, but I've got to ask you, Tim, because one of the challenges that we've seen — in particular in the heyday of Hadoop — is that the innovations come so fast and furious, and as a software company you've got to place bets, you've got to commit people, and sometimes those bets can be risky and not pay off. How have you managed this challenge?
>> Oh, it moves fast. Yeah, but that's a benefit, because the community moves so quickly that today's hot technology can be tomorrow's dinosaur. What we tend to do is fail fast and fail often; we try a lot of things. You look at Kubernetes, for example: that ecosystem is driven by thousands of intelligent developers, engineers, and builders adding value every day. So we have to really keep up with that.
And as the stack changes, we try different technologies and different methods, and at the end of the day we come up with a better platform as a result of the constant change in the environment. It is a challenge for us, but it's something we just do every day.
>> So we have a survey partner down in New York City called Enterprise Technology Research, ETR, and they do these quarterly surveys of about 1,500 CIOs and IT practitioners, so they really have a good pulse on what's happening with spending. The data shows that containers generally — and specifically Kubernetes — is one of the areas that has been off the charts and seen the most significant adoption and velocity, along with cloud. Kubernetes is still up and to the right consistently, even with the macro headwinds and all the stuff we're sick of talking about. So what are you doing with Kubernetes in the platform?
>> Yeah, it's really central to our ability to run the product. When we first started out, we were just on AWS, and the way we were running was a little bit like "containers junior." Now we're running Kubernetes everywhere: AWS, Azure, Google Cloud. It allows us to have a consistent experience across three different cloud providers, and we can manage that in code, so our developers can focus on delivering services rather than trying to learn the intricacies of Amazon, Azure, and Google and figure out how to deliver services on those three clouds with all of their differences.
>> Just to follow up on that — I presume that means there's a PaaS layer there that allows you to have a consistent experience across clouds and out to the edge, wherever. Is that correct?
>> Yeah, we've basically built more or less platform engineering — this is the new hot phrase. Kubernetes has made a lot of things easy for us, because we've built a platform that our developers can lean on, and they only have to learn one way of deploying and managing their application. That gets all of the underlying infrastructure out of the way and lets them focus on delivering Influx Cloud.
>> Yeah, and I know I'm taking a bit of a tangent, but for that layer — I'll call it a PaaS layer, if I can use that term — are there specific attributes tied to InfluxDB, or is it generally off-the-shelf PaaS? Is there any purpose-built capability there that is value-add, or is it pretty much generic?
>> We really look at things through a build-versus-buy lens. Some things we want to leverage cloud provider services for — for instance, Postgres databases for metadata; perhaps we'll get that off our plate and let someone else run it. We're going to deploy a platform that our engineers can deliver on, that has consistency, that is all generated from code, and that we, as an SRE group and an ops team, can manage with very few people, and we can stamp out clusters across multiple regions in no time.
>> So sometimes you build, sometimes you buy. How do you make those decisions, and what does that mean for the platform and for customers?
>> Yeah, what we're doing is what everybody else does: we're looking for trade-offs that make sense. We really want to protect our customers' data.
So we look for services that support our own software with the most uptime, reliability, and durability we can get. Some things are just going to be easier to have a cloud provider take care of on our behalf. We make that transparent for our own team — and of course customers don't even see it — but we don't want to reinvent the wheel. Like I mentioned with SQL data stores for metadata: let's build on top of what these three large cloud providers have already perfected, so we can focus on our platform engineering and have our developers focus on the InfluxData software, the Influx Cloud software.
>> So take it to the customer level. What does it mean for them? What's the value they're going to get out of all these innovations we've been talking about today, and what can they expect in the future?
>> First of all, people who use the OSS product are really going to be at home on our cloud platform. You can run it on your desktop machine, on a single server, what have you, but then you want to scale up. We have some 270 terabytes of data across over 4 billion series keys that people have stored, so there's a proven ability to scale. In terms of the open source software and how we've developed the platform, you're getting a highly available, high-cardinality time series platform. We manage it, and, as I mentioned earlier, we can keep up with the state of the art. We keep reinventing, we keep deploying things in real time. We deploy to our platform every day, repeatedly, all the time, and it's that continuous deployment that allows us to keep testing things in flight and rolling out changes: new features, better ways of doing deployments, safer ways of doing deployments.
>> All of that happens behind the scenes. And as we mentioned earlier, Kubernetes allows us to get that done; we couldn't do it without that platform as a base layer to put our software on. So we iterate quickly. When you're on the Influx Cloud platform, you really are able to take advantage of new features immediately. We roll things out every day, and as those things go into production, you have the ability to use them. In the end, we want you to focus on getting actual insights from your data instead of running infrastructure — let us do that for you.
>> And that makes sense. But are the innovations we're talking about in the evolution of InfluxDB a natural evolution for existing customers? I'm sure the answer is both, but is it also opening up new territory for customers? Can you add some color to that?
>> Yeah, it really is a little bit of both — any engineer will say "well, it depends." Cloud native technologies are really the hot thing, and IoT, especially industrial IoT: people want to just shove tons of data out there, be able to do queries immediately, and not manage infrastructure. What we've started to see are people who use the cloud service as their data store backbone and then use edge computing with our OSS product to ingest data from, say, multiple production lines, downsample that data, and send the rest of it off to Influx Cloud, where the heavy processing takes place.
So really, us being in all the different clouds and iterating on that, and being in all sorts of different regions, allows people to get out of the business of trying to manage that big data themselves and have us take care of it. And of course, as we change the platform, end users benefit from that immediately.
>> And so, obviously, you're taking away a lot of the heavy lifting for the infrastructure. Would you say the same thing about security, especially as you go out to IoT and the edge? How should we be thinking about the value you bring from a security perspective?
>> Yeah, we take security super seriously. It's built into our DNA. We do a lot of work to ensure that our platform is secure and that the data we store is kept private. It's of course always a concern — you see in the news all the time companies being compromised. That's something you can have an entire team working on, which we do, to make sure that the data you have, whether it's in transit or at rest, is always kept secure and is only viewable by you. You look at things like software bills of materials: if you're running this yourself, you have to go vet all sorts of different pieces of software, and we do that as we adopt new tools. That's just part of our job — making sure the platform we're running has fully vetted software — and with open source especially, that's a lot of work. So it's definitely new territory. Supply chain attacks are happening at a higher clip than they used to, but that is really just part of a day in the life for folks like us who are building platforms.
>> Yeah, and that's key. Especially when you start getting into IoT and the operational technologies, the engineers running that infrastructure, historically, as you know, Tim, would air-gap everything. That's how they kept it safe. But that's not feasible anymore — everything's connected now, right? So you've got to have a partner that, again, takes away that heavy lifting from R&D so you can focus on other activities. Give us the last word and the key takeaways from your perspective.
>> Well, from my perspective, I see it as a two-lane approach with Influx, with any time series data. There's a lot of stuff you're going to run on-prem — what you mentioned, air-gapping; sure, there's plenty of need for that. But at the end of the day, people who don't want to run big data centers, people who want to trust their data to a company that has a full platform set up for them that they can build on, will send that data over to the cloud. The cloud is not going away. I think a more hybrid approach is where the future lives, and that's what we're prepared for.
>> Tim, really appreciate you coming to the program. Great stuff. Good to see you.
>> Thanks very much. Appreciate it.
>> Okay, in a moment I'll be back to wrap up today's session. You're watching theCUBE.
>> Are you looking for some help getting started with InfluxDB, Telegraf, or Flux? Check out InfluxDB University, where you can find our entire catalog of free training that will help you make the most of your time series data. Get started for free at influxdbu.com. We'll see you in class.
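The edge-to-cloud pattern Tim describes — ingest raw readings near the production line, downsample locally, and forward only the reduced series to InfluxDB Cloud for heavier processing — might be sketched roughly as below in Python. This is a hypothetical example: it assumes the influxdb-client package, and the URL, token, org, bucket, and measurement names are placeholders rather than a prescribed InfluxData reference architecture.

```python
# Hypothetical edge job: downsample raw sensor readings, then forward
# the reduced series to an InfluxDB Cloud bucket. Placeholders throughout.
import pandas as pd
from influxdb_client import InfluxDBClient
from influxdb_client.client.write_api import SYNCHRONOUS

# Pretend this frame was collected from a local production line.
raw = pd.DataFrame(
    {"temp_c": [180.2, 180.9, 181.4, 179.8, 180.1, 180.6]},
    index=pd.date_range("2022-11-01", periods=6, freq="1s"),
)

# Downsample at the edge: one-second readings become one-minute means.
downsampled = raw.resample("1min").mean()

# Forward only the downsampled points to the cloud.
with InfluxDBClient(url="https://example-cloud-url", token="REDACTED",
                    org="example-org") as client:
    write_api = client.write_api(write_options=SYNCHRONOUS)
    write_api.write(
        bucket="factory_downsampled",
        record=downsampled,
        data_frame_measurement_name="line_temp",
    )
```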
Okay, so we heard today from three experts on time series and data how the InfluxDB platform is evolving to support new ways of analyzing large data sets very efficiently and effectively in real time. We learned that key open source components — Apache Arrow, the Rust programming language, DataFusion, and Parquet — are being leveraged to support real-time data analytics at scale. We also learned about the contributions and importance of open source software, and how the InfluxDB community is evolving the platform with minimal disruption to support new workloads, new use cases, and the future of real-time data analytics. Now remember, these sessions are all available on demand; you can go to thecube.net to find them. Don't forget to check out siliconangle.com for all the news related to things enterprise and emerging tech, and you should also check out influxdata.com. There you can learn about the company's products, find developer resources like free courses, and join the developer community to work with your peers to learn and solve problems. There are plenty of other resources around use cases and customer stories on the website as well. This is Dave Vellante. Thank you for watching Evolving InfluxDB into the Smart Data Platform, made possible by InfluxData and brought to you by theCUBE, your leader in enterprise and emerging tech coverage.
SUMMARY :
we talked about how in theory, those time slices could be taken, you know, As is often the case, open source software is the linchpin to those innovations. We hope you enjoy the program. I appreciate the time. Hey, explain why Influx db, you know, needs a new engine. now, you know, related to requests like sql, you know, query support, things like that, of the real first influx DB cloud, you know, which has been really successful. as they're giving us feedback, et cetera, has has, you know, pointed us in a really good direction shift from, you know, time series, you know, specialist to real time analytics better handle those queries from a performance and a, and a, you know, a time to response on the queries, you know, all of the, the real time queries, the, the multiple language query support, the, the devices and you know, the sort of highly distributed nature of all of this. I always thought, you know, real, I always thought of real time as before you lose the customer, you know, and that's one of the things that really triggered us to know that we were, we were heading in the right direction, a look at the, the libraries in on our GitHub and, you know, can ex inspect it and even can try And so just, you know, being careful, maybe a little cautious in terms And you can do some experimentation and, you know, using the cloud resources. You know, this is a new very sort of popular systems language, you know, really fast real time inquiries that we talked about, as well as for very large, you know, but it's popularity is, is you know, really starting to hit that steep part of the S-curve. going out and you know, it'll be highly featured on our, our website, you know, the whole database, the ecosystem as it expands out into to, you know, this vertically oriented Really appreciate your time. Look forward to it. goes, goes beyond just the historical into the real time really hot area. There's no need to worry about provisioning because you only pay for what you use. InfluxDB uses a single API across the entire platform suite so you can build on Influx DB is leveraging to increase the granularity of time series analysis analysis and bring the Hi, thank you so much. it's gonna give you faster query speeds, you store files and object storage, it aims to have no limits on cardinality and also allow you to write any kind of event data that It's really, the adoption is really starting to get steep on all the control, all the fine grain control, you need to take you know, the community is modernizing the platform, but I wanna talk about Apache And so you can answer that question and you have those immediately available to you. out that one temperature value that you want at that one time stamp and do that for every talking about is really, you know, kind of native i, is it not as effective? Yeah, it's, it's not as effective because you have more expensive compression and So let's talk about Arrow Data Fusion. It also has a PANDAS API so that you could take advantage of PANDAS What are you doing with and Pandas, so it supports a broader ecosystem. What's the value that you're bringing to the community? And I think kind of the idea here is that if you can improve kind of summarize, you know, where what, what the big takeaways are from your perspective. the hard work questions and you All right, thank you so much Anise for explaining I really appreciate it. 
Data and we're gonna talk about how you update a SAS engine while I'm really glad that we went with InfluxDB Cloud for our hosting They listened to the challenges we were facing and they helped Good to see you. Good to see you. So my question to you is, So yeah, you know, influx really, we thrive at the intersection of commercial services and open, You know, you look at Kubernetes for example, But, but really Kubernetes is just, you know, Azure, and Google and figure out how to deliver services on those three clouds with all of their differences. to the edge, you know, wherever is that, is that correct? This is the new hot phrase, you know, it, it's, Kubernetes has made a lot of things easy for us Is that, are there specific attributes to Influx db as an SRE group, as an ops team, that we can manage with very few people So how, so sometimes you build, sometimes you buy it. And of course for customers you don't even see that, but we don't want to try to reinvent the wheel, and really as, as I mentioned earlier, we can keep up with the state of the art. the end we want you to focus on getting actual insights from your data instead of running infrastructure, So cloud native technologies are, are really the hot thing. You see in the news all the time, companies being compromised, you know, technologies, the engineers running the, that infrastructure, you know, historically, as you know, take away that heavy lifting to r and d so you can focus on some of the other activities. with influx, with Anytime series data, you know, you've got a lot of stuff that you're gonna run on-prem, Tim, really appreciate you coming to the program. Thanks very much. Okay, in a moment I'll be back to wrap up. brought to you by the Cube, your leader in enterprise and emerging tech coverage.
Evolving InfluxDB into the Smart Data Platform Full Episode
>>This past May, The Cube, in collaboration with Influx Data, shared with you the latest innovations in time series databases. We talked at length about why a purpose-built time series database, for many use cases, was a superior alternative to general purpose databases trying to do the same thing. Now, you may, you may remember that time series data is any data that's stamped in time, and if it's stamped, it can be analyzed historically. And when we introduced the concept to the community, we talked about how in theory, those time slices could be taken, you know, every hour, every minute, every second, you know, down to the millisecond, and how the world was moving toward realtime or near realtime data analysis to support physical infrastructure like sensors and other devices and IoT equipment. So time series databases have had to evolve to efficiently support realtime data in emerging use cases in IoT and other areas. >>And to do that, new architectural innovations have to be brought to bear. As is often the case, open source software is the linchpin to those innovations. Hello and welcome to Evolving InfluxDB into the Smart Data Platform, made possible by Influx Data and produced by the Cube. My name is Dave Valante and I'll be your host today. Now in this program we're going to dig pretty deep into what's happening with time series data generally, and specifically how InfluxDB is evolving to support new workloads and demands and data, and specifically around data analytics use cases in real time. Now, first we're gonna hear from Brian Gilmore, who is the director of IoT and emerging technologies at Influx Data. And we're gonna talk about the continued evolution of InfluxDB and the new capabilities enabled by open source generally and specific tools. And in this program you're gonna hear a lot about things like Rust, the implementation of Apache Arrow, the use of Parquet, and tooling such as DataFusion, which is powering a new engine for InfluxDB. >>Now, these innovations, they evolve the idea of time series analysis by dramatically increasing the granularity of time series data by compressing the historical time slices, if you will, from, for example, minutes down to milliseconds. And at the same time, enabling real time analytics with an architecture that can process data much faster and much more efficiently. Now, after Brian, we're gonna hear from Anais Dotis-Georgiou, who is a developer advocate at Influx Data. And we're gonna get into the why of these open source capabilities and how they contribute to the evolution of the InfluxDB platform. And then we're gonna close the program with Tim Yokum, he's the director of engineering at Influx Data, and he's gonna explain how the InfluxDB community actually evolved the data engine in mid-flight and which decisions went into the innovations that are coming to the market. Thank you for being here. We hope you enjoy the program. Let's get started. Okay, we're kicking things off with Brian Gilmore. He's the director of IoT and emerging technology at Influx Data. Brian, welcome to the program. Thanks for coming on. >>Thanks Dave. Great to be here. I appreciate the time. >>Hey, explain why InfluxDB, you know, needs a new engine. Was there something wrong with the current engine? What's going on there? >>No, no, not at all. I mean, I think it's, for us, it's been about staying ahead of the market. 
I think, you know, if we think about what our customers are coming to us sort of with now, you know, related to requests like sql, you know, query support, things like that, we have to figure out a way to, to execute those for them in a way that will scale long term. And then we also, we wanna make sure we're innovating, we're sort of staying ahead of the market as well and sort of anticipating those future needs. So, you know, this is really a, a transparent change for our customers. I mean, I think we'll be adding new capabilities over time that sort of leverage this new engine, but you know, initially the customers who are using us are gonna see just great improvements in performance, you know, especially those that are working at the top end of the, of the workload scale, you know, the massive data volumes and things like that. >>Yeah, and we're gonna get into that today and the architecture and the like, but what was the catalyst for the enhancements? I mean, when and how did this all come about? >>Well, I mean, like three years ago we were primarily on premises, right? I mean, I think we had our open source, we had an enterprise product, you know, and, and sort of shifting that technology, especially the open source code base to a service basis where we were hosting it through, you know, multiple cloud providers. That was, that was, that was a long journey I guess, you know, phase one was, you know, we wanted to host enterprise for our customers, so we sort of created a service that we just managed and ran our enterprise product for them. You know, phase two of this cloud effort was to, to optimize for like multi-tenant, multi-cloud, be able to, to host it in a truly like sass manner where we could use, you know, some type of customer activity or consumption as the, the pricing vector, you know, And, and that was sort of the birth of the, of the real first influx DB cloud, you know, which has been really successful. >>We've seen, I think like 60,000 people sign up and we've got tons and tons of, of both enterprises as well as like new companies, developers, and of course a lot of home hobbyists and enthusiasts who are using out on a, on a daily basis, you know, and having that sort of big pool of, of very diverse and very customers to chat with as they're using the product, as they're giving us feedback, et cetera, has has, you know, pointed us in a really good direction in terms of making sure we're continuously improving that and then also making these big leaps as we're doing with this, with this new engine. >>Right. So you've called it a transparent change for customers, so I'm presuming it's non-disruptive, but I really wanna understand how much of a pivot this is and what, what does it take to make that shift from, you know, time series, you know, specialist to real time analytics and being able to support both? >>Yeah, I mean, it's much more of an evolution, I think, than like a shift or a pivot. You know, time series data is always gonna be fundamental and sort of the basis of the solutions that we offer our customers, and then also the ones that they're building on the sort of raw APIs of our platform themselves. You know, the time series market is one that we've worked diligently to lead. 
I mean, I think when it comes to like metrics, especially like sensor data and app and infrastructure metrics, if we're being honest though, I think our, our user base is well aware that the way we were architected was much more towards those sort of like backwards looking historical type analytics, which are key for troubleshooting and making sure you don't, you know, run into the same problem twice. But, you know, we had to ask ourselves like, what can we do to like better handle those queries from a performance and a, and a, you know, a time to response on the queries, and can we get that to the point where the results sets are coming back so quickly from the time of query that we can like limit that window down to minutes and then seconds. >>And now with this new engine, we're really starting to talk about a query window that could be like returning results in, in, you know, milliseconds of time since it hit the, the, the ingest queue. And that's, that's really getting to the point where as your data is available, you can use it and you can query it, you can visualize it, and you can do all those sort of magical things with it, you know? And I think getting all of that to a place where we're saying like, yes to the customer on, you know, all of the, the real time queries, the, the multiple language query support, but, you know, it was hard, but we're now at a spot where we can start introducing that to, you know, a a limited number of customers, strategic customers and strategic availability zones to start. But you know, everybody over time. >>So you're basically going from what happened to in, you can still do that obviously, but to what's happening now in the moment? >>Yeah, yeah. I mean if you think about time, it's always sort of past, right? I mean, like in the moment right now, whether you're talking about like a millisecond ago or a minute ago, you know, that's, that's pretty much right now, I think for most people, especially in these use cases where you have other sort of components of latency induced by the, by the underlying data collection, the architecture, the infrastructure, the, you know, the, the devices and you know, the sort of highly distributed nature of all of this. So yeah, I mean, getting, getting a customer or a user to be able to use the data as soon as it is available is what we're after here. >>I always thought, you know, real, I always thought of real time as before you lose the customer, but now in this context, maybe it's before the machine blows up. >>Yeah, it's, it's, I mean it is operationally or operational real time is different, you know, and that's one of the things that really triggered us to know that we were, we were heading in the right direction, is just how many sort of operational customers we have. You know, everything from like aerospace and defense. We've got companies monitoring satellites, we've got tons of industrial users, users using us as a processes storing on the plant floor, you know, and, and if we can satisfy their sort of demands for like real time historical perspective, that's awesome. I think what we're gonna do here is we're gonna start to like edge into the real time that they're used to in terms of, you know, the millisecond response times that they expect of their control systems, certainly not their, their historians and databases. >>I, is this available, these innovations to influx DB cloud customers only who can access this capability? >>Yeah. I mean commercially and today, yes. 
You know, I think we want to emphasize that's a, for now our goal is to get our latest and greatest and our best to everybody over time. Of course. You know, one of the things we had to do here was like we double down on sort of our, our commitment to open source and availability. So like anybody today can take a look at the, the libraries on our GitHub and, you know, can inspect it and even can try to, you know, implement or execute some of it themselves in their own infrastructure. You know, we are, we're committed to bringing our sort of latest and greatest to our cloud customers first for a couple of reasons. Number one, you know, there are big workloads and they have high expectations of us. I think number two, it also gives us the opportunity to monitor a little bit more closely how it's working, how they're using it, like how the system itself is performing. >>And so just, you know, being careful, maybe a little cautious in terms of, of, of how big we go with this right away, just sort of both limits, you know, the risk of, of, you know, any issues that can come with new software rollouts. We haven't seen anything so far, but also it does give us the opportunity to have like meaningful conversations with a small group of users who are using the products, but once we get through that and they give us two thumbs up on it, it'll be like, open the gates and let everybody in. It's gonna be an exciting time for the whole ecosystem. >>Yeah, that makes a lot of sense. And you can do some experimentation and, you know, use the cloud resources. Let's dig into some of the architectural and technical innovations that are gonna help deliver on this vision. What, what should we know there? >>Well, I mean, I think foundationally we built the, the new core on Rust. You know, this is a new, very sort of popular systems language, you know, it's extremely efficient, but it's also built for speed and memory safety, which goes back to that us being able to like deliver it in a way that is, you know, something we can inspect very closely, but then also rely on the fact that it's going to behave well. And if it does find error conditions, I mean we, we've loved working with Go and, you know, a lot of our libraries will continue to, to be sort of implemented in Go, but you know, when it came to this particular new engine, you know, for that power, performance and stability, Rust was critical. On top of that, like, we've also integrated Apache Arrow and Apache Parquet for persistence. I think for anybody who's really familiar with the nuts and bolts of our backend and our TSI and our, our time-structured merge trees, this is a big break from that, you know, Arrow on the sort of in-memory side and then Parquet on the on-disk side. >>It, it allows us to, to present, you know, a unified set of APIs for those really fast real time queries that we talked about, as well as for very large, you know, historical sort of bulk data archives in that Parquet format, which is also cool because there's an entire ecosystem sort of popping up around Parquet in terms of the machine learning community, you know, and getting that all to work, we had to glue it together with Arrow Flight. That's sort of what we're using as our, our RPC component. You know, it handles the orchestration and the, the transportation of the columnar data. 
Now we're moving to like a true columnar database model for this, this version of the engine, you know, and it removes a lot of overhead for us in terms of having to manage all that serialization, the deserialization, and, you know, to that again, like blurring that line between real time and historical data. It's, you know, it's, it's highly optimized for both streaming micro batch and then batches, but true streaming as well. >>Yeah. Again, I mean, it's funny you mentioned Rust. It is, it's been around for a long time, but its popularity is, is you know, really starting to hit that steep part of the S-curve. And, and we're gonna dig into more of that, but give us, is there anything else that we should know about, Brian? Give us the last word. >>Well, I mean, I think first I'd like everybody sort of watching just to like take a look at what we're offering in terms of early access and beta programs. I mean, if, if, if you wanna participate or if you wanna work sort of in terms of early access with the, with the new engine, please reach out to the team. I'm sure you know, there's a lot of communications going out and you know, it'll be highly featured on our, our website, you know, but reach out to the team, believe it or not, like we have a lot more going on than just the new engine. And so there are also other programs, things we're, we're offering to customers in terms of the user interface, data collection and things like that. And, you know, if you're a customer of ours and you have a sales team, a commercial team that you work with, you can reach out to them and see what you can get access to because we can flip a lot of stuff on, especially in cloud through feature flags. >>But if there's something new that you wanna try out, we'd just love to hear from you. And then, you know, our goal would be that as we give you access to all of these new cool features that, you know, you would give us continuous feedback on these products and services, not only like what you need today, but then what you'll need tomorrow to, to sort of build the next versions of your business. Because you know, the whole database, the ecosystem as it expands out into, you know, this vertically oriented stack of cloud services and enterprise databases and edge databases, you know, it's gonna be what we all make it together, not just, you know, those of us who are employed by InfluxDB. And then finally I would just say please, like watch Anais' and Tim's sessions, like these are two of our best and brightest. They're totally brilliant, completely pragmatic, and they are most of all customer obsessed, which is amazing. And there's no better takes, like honestly, on the, the sort of technical details of this than theirs, especially when it comes to like the value that these investments will, will bring to our customers and our communities. So I encourage you to, to, you know, pay more attention to them than you did to me, for sure. >>Brian Gilmore, great stuff. Really appreciate your time. Thank you. >>Yeah, thanks Dave. It was awesome. Look forward to it. >>Yeah, me too. Looking forward to seeing how the, the community actually applies these new innovations and goes, goes beyond just the historical into the real time, really hot area. As Brian said, in a moment, I'll be right back with Anais Dotis-Georgiou to dig into the critical aspects of key open source components of the InfluxDB engine, including Rust, Arrow, Parquet, and DataFusion. Keep it right there. You don't wanna miss this. >>Time series data is everywhere. 
The number of sensors, systems and applications generating time series data increases every day. All these data sources producing so much data can cause analysis paralysis. InfluxDB is an entire platform designed with everything you need to quickly build applications that generate value from time series data. InfluxDB Cloud is a serverless solution, which means you don't need to buy or manage your own servers. There's no need to worry about provisioning because you only pay for what you use. InfluxDB Cloud is fully managed so you get the newest features and enhancements as they're added to the platform's code base. It also means you can spend time building solutions and delivering value to your users instead of wasting time and effort managing something else. InfluxDB Cloud offers a range of security features to protect your data. Multiple layers of redundancy ensure you don't lose any data, and access controls ensure that only the people who should see your data can see it. >>And encryption protects your data at rest and in transit between any of our regions or cloud providers. InfluxDB uses a single API across the entire platform suite so you can build on open source, deploy to the cloud, and then easily query data in the cloud, at the edge, or on prem using the same scripts. And InfluxDB is schemaless, automatically adjusting to changes in the shape of your data without requiring changes in your application logic. InfluxDB Cloud is production ready from day one. All it needs is your data and your imagination. Get started today at influxdata.com/cloud. >>Okay, we're back. I'm Dave Valante with the Cube and you're watching Evolving InfluxDB into the Smart Data Platform, made possible by Influx Data. Anais Dotis-Georgiou is here, she's a developer advocate for Influx Data, and we're gonna dig into the rationale and value contribution behind several open source technologies that InfluxDB is leveraging to increase the granularity of time series analysis and bring the world of data into real-time analytics. Anais, welcome to the program. Thanks for coming on. >>Hi, thank you so much. It's a pleasure to be here. >>Oh, you're very welcome. Okay, so IOx is being touted as this next gen open source core for InfluxDB. And my understanding is that it leverages in memory, of course, for speed. It's a columnar store, so it gives you compression efficiency, it's gonna give you faster query speeds, you store files in object storage, so you got a very cost effective approach. Are these the salient points on the platform? I know there are probably dozens of other features, but what are the high level value points that people should understand? >>Sure, that's a great question. So some of the main requirements that IOx is trying to achieve, and some of the most impressive ones to me, the first one is that it aims to have no limits on cardinality and also allow you to write any kind of event data that you want, whether that's a tag or a field. It also wants to deliver best in class performance on analytics queries, in addition to our already well-served metrics queries. We also wanna have operator control over memory usage, so you should be able to define how much memory is used for buffering, caching, and query processing. Some other really important parts are the ability to have bulk data export and import, super useful. 
Also broader ecosystem compatibility; where possible we aim to use and embrace emerging standards in the data analytics ecosystem and have compatibility with things like SQL, Python, and maybe even Pandas in the future. >>Okay, so a lot there. Now we talked to Brian about how you're using Rust, which is not a new programming language, and of course we had some drama around Rust during the pandemic with the Mozilla layoffs, but the formation of the Rust Foundation really addressed any of those concerns. You got big guns like Amazon and Google and Microsoft throwing their collective weights behind it. It's really, the adoption is really starting to get steep on the S-curve. So lots of platforms, lots of adoption with Rust, but why Rust as an alternative to, say, C++ for example? >>Sure, that's a great question. So Rust was chosen because of its exceptional performance and reliability. So while Rust is syntactically similar to C++ and it has similar performance, it also compiles to native code like C++. But unlike C++, it also has much better memory safety. So memory safety is protection against bugs or security vulnerabilities that lead to excessive memory usage or memory leaks. And Rust achieves this memory safety due to its, like, innovative type system. Additionally, it doesn't allow for dangling pointers, and dangling pointers are the main classes of errors that lead to exploitable security vulnerabilities in languages like C++. So Rust, like, helps meet that requirement of having no limits on cardinality, for example, because we're also using the Rust implementation of Apache Arrow and this control over memory. And also Rust's packaging system, called crates.io, offers everything that you need out of the box to have features like async and await to fix race conditions, protection against buffer overflows, and thread-safe async caching structures as well. So essentially it just, like, has all the control, all the fine grain control you need to take advantage of memory and all your resources as well as possible so that you can handle those really, really high cardinality use cases. >>Yeah, and the more I learn about the, the new engine and, and the platform, IOx, et cetera, you know, you, you see things like, you know, in the old days and even today you do a lot of garbage collection in these, in these systems and there's an inverse, you know, impact relative to performance. So it looks like you really, you know, the community is modernizing the platform, but I wanna talk about Apache Arrow for a moment. It, it's designed to address the constraints that are associated with analyzing large data sets. We, we know that, but please explain why, what, what is Arrow and, and what does it bring to InfluxDB? >>Sure, yeah. So Arrow is a, a framework for defining in-memory columnar data. And so much of the efficiency and performance of IOx comes from taking advantage of columnar data structures. And I will, if you don't mind, take a moment to kind of illustrate why columnar data structures are so valuable. Let's pretend that we are gathering field data about the temperature in our room and also maybe the temperature of our stove. And in our table we have those two temperature values as well as maybe a measurement value, timestamp value, maybe some other tag values that describe what room and what house, et cetera, we're getting this data from. 
And so you can picture this table where we have like two rows with the two temperature values for both our room and the stove. Well, usually our room temperature is regulated so those values don't change very often. >>So when you have column-oriented storage, essentially you take each column and group it together. And so if that's the case and you're just taking temperature values from the room, and a lot of those temperature values are the same, then you'll, you might be able to imagine how equal values will then neighbor each other in the storage format, and this provides a really perfect opportunity for cheap compression. And then this cheap compression enables high cardinality use cases. It also enables faster scan rates. So if you wanna find like the min and max value of the temperature in the room across a thousand different points, you only have to get those a thousand different points in order to answer that question, and you have those immediately available to you. But let's contrast this with a row-oriented storage solution instead so that we can understand better the benefits of column-oriented storage. >>So if you had row-oriented storage, you'd first have to look at every field, like the temperature in, in the room and the temperature of the stove. You'd have to go across every tag value that maybe describes where the room is located or what model the stove is. And for every timestamp, you'd then have to pluck out that one temperature value that you want at that one time stamp and do that for every single row. So you're scanning across a ton more data, and that's why row-oriented doesn't provide the same efficiency as columnar, and Apache Arrow is an in-memory columnar data framework. So that's where a lot of the advantages come from. >>Okay. So you basically described like a traditional database, a row approach, but I've seen like a lot of traditional databases say, okay, now we've got, we can handle columnar format, versus what you're talking about is really, you know, kind of native. Is it not as effective? Is the, is the format not as effective because it's largely a, a bolt-on? Can you, can you like elucidate on that front? >>Yeah, it's, it's not as effective because you have more expensive compression and because you can't scan across the values as quickly. And so those are, that's pretty much the main reasons why, why row-oriented storage isn't as efficient as column-oriented storage. Yeah. >>Got it. So let's talk about Arrow DataFusion. What is DataFusion? I know it's written in Rust, but what does it bring to the table here? >>Sure. So it's an extensible query execution framework and it uses Arrow as its in-memory format. So the way that it helps in InfluxDB IOx is that, okay, it's great if you can write an unlimited amount of cardinality into InfluxDB IOx, but if you don't have a query engine that can successfully query that data, then I don't know how much value it is for you. So DataFusion helps enable the, the query processing and transformation of that data. It also has a Pandas API so that you could take advantage of Pandas DataFrames as well and all of the machine learning tools associated with Pandas. >>Okay. You're also leveraging Parquet in the platform, 'cause we heard a lot about Parquet in the middle of the last decade as a storage format to improve on Hadoop column stores. What are you doing with Parquet and why is it important? >>Sure. So Parquet is the column-oriented durable file format. 
So it's important because it'll enable bulk import, bulk export, it has compatibility with Python and Pandas, so it supports a broader ecosystem. Parquet files also take very little disk space and they're faster to scan because, again, they're column oriented. In particular, I think Parquet files are like 16 times cheaper than CSV files, just as kind of a point of reference. And so that's essentially a lot of the, the benefits of Parquet. >>Got it. Very popular. So Anais, what exactly is Influx Data focusing on as a committer to these projects? What is your focus? What's the value that you're bringing to the community? >>Sure. So Influx Data has contributed a lot of different, different things to the Apache ecosystem. For example, they contributed an implementation of Apache Arrow in Go, and that will support querying with Flux. Also, there have been quite a few contributions to DataFusion for things like memory optimization and support of additional SQL features, like support for timestamp arithmetic and support for EXISTS clauses and support for memory control. So yeah, Influx has contributed a lot to the Apache ecosystem and continues to do so. And I think kind of the idea here is that if you can improve these upstream projects, then the long term strategy here is that the more you contribute and build those up, then the more you will perpetuate that cycle of improvement and the more we will invest in our own project as well. So it's just that kind of symbiotic relationship and appreciation of the open source community. >>Yeah. Got it. You got that virtuous cycle going, what people call the flywheel. Give us your last thoughts and kind of summarize, you know, what, what the big takeaways are from your perspective. >>So I think the big takeaway is that Influx Data is doing a lot of really exciting things with InfluxDB IOx, and I really encourage, if you are interested in learning more about the technologies that Influx is leveraging to produce IOx, the challenges associated with it and all of the hard work, and you just wanna learn more, then I would encourage you to go to the monthly tech talks and community office hours, and they are on every second Wednesday of the month at 8:30 AM Pacific time. There's also a community forum and a community Slack channel; look for the influxdb_iox channel specifically to learn more about how to join those office hours and those monthly tech talks, as well as ask any questions you have about IOx, what to expect, and what you'd like to learn more about. I, as a developer advocate, I wanna answer your questions. So if there's a particular technology or stack that you wanna dive deeper into and want more explanation about how InfluxDB leverages it to build IOx, I will be really excited to produce content on that topic for you. >>Yeah, that's awesome. You guys have a really rich community, collaborate with your peers, solve problems, and, and you guys are super responsive, so really appreciate that. All right, thank you so much, Anais, for explaining all this open source stuff to the audience and why it's important to the future of data. >>Thank you. I really appreciate it. >>All right, you're very welcome. Okay, stay right there and in a moment I'll be back with Tim Yokum, he's the director of engineering for Influx Data, and we're gonna talk about how you update a SaaS engine while the plane is flying at 30,000 feet. You don't wanna miss this. 
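To make the columnar storage, Parquet, and DataFusion discussion above a bit more concrete, here is a minimal sketch in Python using the open source pyarrow library. This is not InfluxDB IOx code, and the schema, values, and file name are made up for the example; the point is only that a column-oriented layout lets you read and aggregate a single column without scanning every field of every row.

```python
# Illustrative sketch of column-oriented access with pyarrow (not IOx internals).
import pyarrow as pa
import pyarrow.compute as pc
import pyarrow.parquet as pq

# An Arrow table: each column (time, location, temperature) is stored
# contiguously, so repeated values end up next to each other in memory.
readings = pa.table({
    "time": pa.array([1, 2, 3, 4, 5, 6]),
    "location": pa.array(["room", "stove", "room", "stove", "room", "stove"]),
    "temperature": pa.array([21.0, 180.0, 21.0, 182.5, 21.0, 181.0]),
})

# Persist to Parquet, the durable column-oriented file format discussed above.
pq.write_table(readings, "readings.parquet")

# Read back only the temperature column and compute min/max in one pass;
# a row-oriented layout would force a scan across every field of every row.
temps = pq.read_table("readings.parquet", columns=["temperature"])["temperature"]
print(pc.min_max(temps))
```

The same column-at-a-time access pattern is what the DataFusion query engine and the Pandas integrations mentioned above build on.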
>>I'm really glad that we went with InfluxDB Cloud for our hosting because it has saved us a ton of time. It's helped us move faster, it's saved us money. And also InfluxDB has good support. My name's Alex Nada. I am CTO at Noble nine. Noble Nine is a platform to measure and manage service level objectives, which is a great way of measuring the reliability of your systems. You can essentially think of an slo, the product we're providing to our customers as a bunch of time series. So we need a way to store that data and the corresponding time series that are related to those. The main reason that we settled on InfluxDB as we were shopping around is that InfluxDB has a very flexible query language and as a general purpose time series database, it basically had the set of features we were looking for. >>As our platform has grown, we found InfluxDB Cloud to be a really scalable solution. We can quickly iterate on new features and functionality because Influx Cloud is entirely managed, it probably saved us at least a full additional person on our team. We also have the option of running InfluxDB Enterprise, which gives us the ability to even host off the cloud or in a private cloud if that's preferred by a customer. Influx data has been really flexible in adapting to the hosting requirements that we have. They listened to the challenges we were facing and they helped us solve it. As we've continued to grow, I'm really happy we have influx data by our side. >>Okay, we're back with Tim Yokum, who is the director of engineering at Influx Data. Tim, welcome. Good to see you. >>Good to see you. Thanks for having me. >>You're really welcome. Listen, we've been covering open source software in the cube for more than a decade, and we've kind of watched the innovation from the big data ecosystem. The cloud has been being built out on open source, mobile, social platforms, key databases, and of course influx DB and influx data has been a big consumer and contributor of open source software. So my question to you is, where have you seen the biggest bang for the buck from open source software? >>So yeah, you know, influx really, we thrive at the intersection of commercial services and open, so open source software. So OSS keeps us on the cutting edge. We benefit from OSS in delivering our own service from our core storage engine technologies to web services temping engines. Our, our team stays lean and focused because we build on proven tools. We really build on the shoulders of giants and like you've mentioned, even better, we contribute a lot back to the projects that we use as well as our own product influx db. >>You know, but I gotta ask you, Tim, because one of the challenge that that we've seen in particular, you saw this in the heyday of Hadoop, the, the innovations come so fast and furious and as a software company you gotta place bets, you gotta, you know, commit people and sometimes those bets can be risky and not pay off well, how have you managed this challenge? >>Oh, it moves fast. Yeah, that, that's a benefit though because it, the community moves so quickly that today's hot technology can be tomorrow's dinosaur. And what we, what we tend to do is, is we fail fast and fail often. We try a lot of things. You know, you look at Kubernetes for example, that ecosystem is driven by thousands of intelligent developers, engineers, builders, they're adding value every day. So we have to really keep up with that. 
And as the stack changes, we, we try different technologies, we try different methods, and at the end of the day, we come up with a better platform as a result of just the constant change in the environment. It is a challenge for us, but it's, it's something that we just do every day. >>So we have a survey partner down in New York City called Enterprise Technology Research etr, and they do these quarterly surveys of about 1500 CIOs, IT practitioners, and they really have a good pulse on what's happening with spending. And the data shows that containers generally, but specifically Kubernetes is one of the areas that has kind of, it's been off the charts and seen the most significant adoption and velocity particularly, you know, along with cloud. But, but really Kubernetes is just, you know, still up until the right consistently even with, you know, the macro headwinds and all, all of the stuff that we're sick of talking about. But, so what are you doing with Kubernetes in the platform? >>Yeah, it, it's really central to our ability to run the product. When we first started out, we were just on AWS and, and the way we were running was, was a little bit like containers junior. Now we're running Kubernetes everywhere at aws, Azure, Google Cloud. It allows us to have a consistent experience across three different cloud providers and we can manage that in code so our developers can focus on delivering services, not trying to learn the intricacies of Amazon, Azure, and Google and figure out how to deliver services on those three clouds with all of their differences. >>Just to follow up on that, is it, no. So I presume it's sounds like there's a PAs layer there to allow you guys to have a consistent experience across clouds and out to the edge, you know, wherever is that, is that correct? >>Yeah, so we've basically built more or less platform engineering, This is the new hot phrase, you know, it, it's, Kubernetes has made a lot of things easy for us because we've built a platform that our developers can lean on and they only have to learn one way of deploying their application, managing their application. And so that, that just gets all of the underlying infrastructure out of the way and, and lets them focus on delivering influx cloud. >>Yeah, and I know I'm taking a little bit of a tangent, but is that, that, I'll call it a PAs layer if I can use that term. Is that, are there specific attributes to Influx db or is it kind of just generally off the shelf paths? You know, are there, is, is there any purpose built capability there that, that is, is value add or is it pretty much generic? >>So we really build, we, we look at things through, with a build versus buy through a, a build versus by lens. Some things we want to leverage cloud provider services, for instance, Postgres databases for metadata, perhaps we'll get that off of our plate, let someone else run that. We're going to deploy a platform that our engineers can, can deliver on that has consistency that is, is all generated from code that we can as a, as an SRE group, as an ops team, that we can manage with very few people really, and we can stamp out clusters across multiple regions and in no time. >>So how, so sometimes you build, sometimes you buy it. How do you make those decisions and and what does that mean for the, for the platform and for customers? >>Yeah, so what we're doing is, it's like everybody else will do, we're we're looking for trade offs that make sense. You know, we really want to protect our customers data. 
So we look for services that support our own software with the most uptime, reliability, and durability we can get. Some things are just going to be easier to have a cloud provider take care of on our behalf. We make that transparent for our own team. And of course for customers you don't even see that, but we don't want to try to reinvent the wheel, like I had mentioned with SQL data stores for metadata, perhaps let's build on top of what of these three large cloud providers have already perfected. And we can then focus on our platform engineering and we can have our developers then focus on the influx data, software, influx, cloud software. >>So take it to the customer level, what does it mean for them? What's the value that they're gonna get out of all these innovations that we've been been talking about today and what can they expect in the future? >>So first of all, people who use the OSS product are really gonna be at home on our cloud platform. You can run it on your desktop machine, on a single server, what have you, but then you want to scale up. We have some 270 terabytes of data across, over 4 billion series keys that people have stored. So there's a proven ability to scale now in terms of the open source, open source software and how we've developed the platform. You're getting highly available high cardinality time series platform. We manage it and, and really as, as I mentioned earlier, we can keep up with the state of the art. We keep reinventing, we keep deploying things in real time. We deploy to our platform every day repeatedly all the time. And it's that continuous deployment that allows us to continue testing things in flight, rolling things out that change new features, better ways of doing deployments, safer ways of doing deployments. >>All of that happens behind the scenes. And like we had mentioned earlier, Kubernetes, I mean that, that allows us to get that done. We couldn't do it without having that platform as a, as a base layer for us to then put our software on. So we, we iterate quickly. When you're on the, the Influx cloud platform, you really are able to, to take advantage of new features immediately. We roll things out every day and as those things go into production, you have, you have the ability to, to use them. And so in the end we want you to focus on getting actual insights from your data instead of running infrastructure, you know, let, let us do that for you. So, >>And that makes sense, but so is the, is the, are the innovations that we're talking about in the evolution of Influx db, do, do you see that as sort of a natural evolution for existing customers? I, is it, I'm sure the answer is both, but is it opening up new territory for customers? Can you add some color to that? >>Yeah, it really is it, it's a little bit of both. Any engineer will say, well, it depends. So cloud native technologies are, are really the hot thing. Iot, industrial iot especially, people want to just shove tons of data out there and be able to do queries immediately and they don't wanna manage infrastructure. What we've started to see are people that use the cloud service as their, their data store backbone and then they use edge computing with R OSS product to ingest data from say, multiple production lines and downsample that data, send the rest of that data off influx cloud where the heavy processing takes place. 
So really us being in all the different clouds and iterating on that and being in all sorts of different regions allows for people to really get out of the, the business of trying to manage that big data, and have us take care of that. And of course as we change the platform, end users benefit from that immediately. >>And so obviously taking away a lot of the heavy lifting for the infrastructure, would you say the same thing about security, especially as you go out to IoT and the edge? How should we be thinking about the value that you bring from a security perspective? >>Yeah, we take, we take security super seriously. It, it's built into our DNA. We do a lot of work to ensure that our platform is secure, that the data we store is, is kept private. It's of course always a concern. You see in the news all the time, companies being compromised, you know, that's something that you can have an entire team working on, which we do, to make sure that the data that you have, whether it's in transit, whether it's at rest, is always kept secure, is only viewable by you. You know, you look at things like software bill of materials, if you're running this yourself, you have to go vet all sorts of different pieces of software. And we do that, you know, as we use new tools. That's something that, that's just part of our jobs to make sure that the platform that we're running has, has fully vetted software and, and with open source especially, that's a lot of work. And so it's, it's definitely new territory. Supply chain attacks are, are definitely happening at a higher clip than they used to, but that is, that is really just part of a day in the, the life for folks like us that are, are building platforms. >>Yeah, and that's key. I mean especially when you start getting into the, the, you know, we talk about IoT and the operations technologies, the engineers running the, that infrastructure, you know, historically, as you know, Tim, they, they would air gap everything. That's how they kept it safe. But that's not feasible anymore. Everything's connected now, right? And so you've gotta have a partner that, again, takes away that heavy lifting so you can focus your R&D on some of the other activities. Right. Give us the, the last word and the, the key takeaways from your perspective. >>Well, you know, from my perspective I see it as, as a, a two-lane approach with, with Influx, with any time series data, you know, you've got a lot of stuff that you're gonna run on-prem, what you had mentioned, air gapping. Sure there's plenty of need for that, but at the end of the day, people that don't want to run big data centers, people that want to trust their data to, to a company that's, that's got a full platform set up for them that they can build on, send that data over to the cloud, the cloud is not going away. I think a more hybrid approach is, is where the future lives and that's what we're prepared for. >>Tim, really appreciate you coming to the program. Great stuff. Good to see you. >>Thanks very much. Appreciate it. >>Okay, in a moment I'll be back to wrap up today's session. You're watching The Cube. >>Are you looking for some help getting started with InfluxDB, Telegraf, or Flux? Check out InfluxDB University, where you can find our entire catalog of free training that will help you make the most of your time series data. Get started for free at influxdbu.com. We'll see you in class. 
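As one way to picture the edge-to-cloud pattern Tim describes, downsampling locally and sending the reduced data on to InfluxDB Cloud, here is a rough Python sketch using the public influxdb-client library and Pandas. The URL, token, org, bucket, and the readings themselves are placeholders, not details from the program.

```python
# Rough sketch of "downsample at the edge, ship the rest to the cloud".
# Connection details and data below are placeholders, not real values.
import pandas as pd
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

# Pretend these are high-frequency readings collected on a plant-floor gateway.
raw = pd.DataFrame({
    "time": pd.date_range("2022-11-01", periods=600, freq="s", tz="UTC"),
    "temperature": 180.0,  # constant here only to keep the example tiny
}).set_index("time")

# Downsample locally to one-minute means before sending anything upstream.
downsampled = raw["temperature"].resample("1min").mean()

# Write the reduced series to an InfluxDB Cloud bucket (placeholder credentials).
with InfluxDBClient(url="https://example-region.cloud.example.com",
                    token="MY_TOKEN", org="my-org") as client:
    write_api = client.write_api(write_options=SYNCHRONOUS)
    points = [
        Point("machine_temperature")
        .tag("line", "line-1")
        .field("temperature_mean", float(value))
        .time(ts)
        for ts, value in downsampled.items()
    ]
    write_api.write(bucket="factory", record=points)
```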
>>Okay, so we heard today from three experts on time series data about how the InfluxDB platform is evolving to support new ways of analyzing large data sets very efficiently and effectively in real time. And we learned that key open source components like Apache Arrow, the Rust programming language, DataFusion, and Parquet are being leveraged to support realtime data analytics at scale. We also learned about the contributions and importance of open source software and how the InfluxDB community is evolving the platform with minimal disruption to support new workloads, new use cases, and the future of realtime data analytics. Now remember, these sessions, they're all available on demand. You can go to thecube.net to find those. Don't forget to check out siliconangle.com for all the news related to things enterprise and emerging tech. And you should also check out influxdata.com. There you can learn about the company's products. You'll find developer resources like free courses. You could join the developer community and work with your peers to learn and solve problems. And there are plenty of other resources around use cases and customer stories on the website. This is Dave Valante. Thank you for watching Evolving InfluxDB into the Smart Data Platform, made possible by Influx Data and brought to you by the Cube, your leader in enterprise and emerging tech coverage.
Moving The World With InfluxDB
(upbeat music) >> Okay, we're now going to go into the customer panel. And we'd like to welcome Angelo Fausti, who's software engineer at the Vera C Rubin Observatory, and Caleb Maclachlan, who's senior spacecraft operations software engineer at Loft Orbital. Guys, thanks for joining us. You don't want to miss folks, this interview. Caleb, let's start with you. You work for an extremely cool company. You're launching satellites into space. Cause doing that is highly complex and not a cheap endeavor. Tell us about Loft Orbital and what you guys do to attack that problem? >> Yeah, absolutely. And thanks for having me here, by the way. So Loft Orbital is a company that's a series B startup now. And our mission basically is to provide rapid access to space for all kinds of customers. Historically, if you want to fly something in space, do something in space, it's extremely expensive. You need to book a launch, build a bus, hire a team to operate it, have big software teams, and then eventually worry about a lot of very specialized engineering. And what we're trying to do is, change that from a super specialized problem that has an extremely high barrier of access to a infrastructure problem. So that it's almost as simple as deploying a VM in AWS or GCP, as getting your programs, your mission deployed on orbit, with access to different sensors, cameras, radios, stuff like that. So that's kind of our mission. And just to give a really brief example of the kind of customer that we can serve. There's a really cool company called Totum labs, who is working on building an IoT constellation, for Internet of Things. Basically being able to get telemetry from all over the world. They're the first company to demonstrate indoor IoT, which means you have this little modem inside a container. A container that you track from anywhere on the world as it's going across the ocean. So it's really little. And they've been able to stay small startup that's focused on their product, which is that super crazy, complicated, cool radio, while we handle the whole space segment for them, which just, before Loft was really impossible. So that's our mission is, providing space infrastructure as a service. We are kind of groundbreaking in this area, and we're serving a huge variety of customers with all kinds of different missions, and obviously, generating a ton of data in space that we've got to handle. >> Yeah, so amazing, Caleb, what you guys do. I know you were lured to the skies very early in your career, but how did you kind of land in this business? >> Yeah, so I guess just a little bit about me. For some people, they don't necessarily know what they want to do, early in their life. For me, I was five years old and I knew, I want to be in the space industry. So I started in the Air Force, but have stayed in the space industry my whole career and been a part of, this is the fifth space startup that I've been a part of, actually. So I've kind of started out in satellites, did spend some time in working in the launch industry on rockets. Now I'm here back in satellites. And honestly, this is the most exciting of the different space startups that I've been a part of. So, always been passionate about space and basically writing software for operating in space for basically extending how we write software into orbit. >> Super interesting. Okay, Angelo. Let's talk about the Rubin Observatory Vera C. 
Rubin, famous woman scientists, Galaxy guru, Now you guys, the observatory are up, way up high, you're going to get a good look at the southern sky. I know COVID slowed you guys down a bit. But no doubt you continue to code away on the software. I know you're getting close. You got to be super excited. Give us the update on the observatory and your role. >> All right. So yeah, Rubin is state of the art observatory that is in construction on a remote mountain in Chile. And with Rubin we'll conduct the large survey of space and time. We are going to observe the sky with eight meter optical telescope and take 1000 pictures every night with 3.2 gigapixel camera. And we're going to do that for 10 years, which is the duration of the survey. The goal is to produce an unprecedented data set. Which is going to be about .5 exabytes of image data. And from these images will detect and measure the properties of billions of astronomical objects. We are also building a science platform that's hosted on Google Cloud, so that the scientists and the public can explore this data to make discoveries. >> Yeah, amazing project. Now, you aren't a Doctor of Philosophy. So you probably spent some time thinking about what's out there. And then you went on to earn a PhD in astronomy and astrophysics. So this is something that you've been working on for the better part of your career, isn't it? >> Yeah, that's right. About 15 years. I studied physics in college, then I got a PhD in astronomy. And I worked for about five years in another project, the Dark Energy survey before joining Rubin in 2015. >> Yeah, impressive. So it seems like both your organizations are looking at space from two different angles. One thing you guys both have in common, of course, is software. And you both use InfluxDB as part of your data infrastructure. How did you discover InfluxDB, get into it? How do you use the platform? Maybe Caleb, you can start. >> Yeah, absolutely. So the first company that I extensively used InfluxDB in was a launch startup called Astra. And we were in the process of designing our first generation rocket there and testing the engines, pumps. Everything that goes into a rocket. And when I joined the company, our data story was not very mature. We were collecting a bunch of data in LabVIEW. And engineers were taking that over to MATLAB to process it. And at first, that's the way that a lot of engineers and scientists are used to working. And at first that was, like, people weren't entirely sure that, that needed to change. But it's something, the nice thing about InfluxDB is that, it's so easy to deploy. So our software engineering team was able to get it deployed and up and running very quickly and then quickly also backport all of the data that we've collected thus far into Influx. And what was amazing to see and it's kind of the super cool moment with Influx is, when we hooked that up to Grafana, Grafana, is the visualization platform we use with influx, because it works really well with it. There was like this aha moment of our engineers who are used to this post process kind of method for dealing with their data, where they could just almost instantly, easily discover data that they hadn't been able to see before. And take the manual processes that they would run after a test and just throw those all in Influx and have live data as tests were coming. And I saw them implementing crazy rocket equation type stuff in Influx and it just was totally game changing for how we tested. 
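For readers who want a concrete picture of the write path Caleb is describing, here is a minimal sketch using InfluxDB's official Python client (the `influxdb-client` package). The bucket, measurement, tag, and field names are invented for illustration and are not taken from Astra's or Loft Orbital's actual systems; once points like these land in a bucket, a Grafana or Chronograf panel can chart them live, which is the feedback loop described above.

```python
from datetime import datetime, timezone

from influxdb_client import InfluxDBClient, Point, WritePrecision
from influxdb_client.client.write_api import SYNCHRONOUS

# Connection details are placeholders; use your own URL, token, org, and bucket.
client = InfluxDBClient(url="http://localhost:8086", token="my-token", org="my-org")
write_api = client.write_api(write_options=SYNCHRONOUS)

# One sample from a hypothetical engine test stand, tagged so a dashboard can
# filter it per test run and per sensor.
point = (
    Point("engine_test")                      # measurement name (assumed)
    .tag("test_id", "hotfire-042")            # which test run this sample belongs to
    .tag("sensor", "chamber_pressure")        # which sensor produced it
    .field("value_psi", 512.3)                # the reading itself
    .time(datetime.now(timezone.utc), WritePrecision.NS)
)

write_api.write(bucket="telemetry", record=point)
client.close()
```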
And things that previously would be like, run a test, then wait an hour for the engineers to crunch the data, and then we run another test with some changed parameters or a changed startup sequence or something like that, became, by the time the test is over, the engineers know what the next step is, because they have this instant, game changing access to data. So since that experience, basically everywhere I've gone, every company since then, I've been promoting InfluxDB and using it and spinning it up and quickly showing people how simple and easy it is. >> Yeah, thank you. So Angelo, I was explaining in my open that you could add a column in a traditional RDBMS and do time series. But with the volume of data that you're talking about in the example that Caleb just gave, you have to have a purpose built time series database. Where did you first learn about InfluxDB? >> Yeah, correct. So I worked with the data management team, and my first project was to record metrics that measure the performance of our software, the software that we use to process the data. So I started implementing that in our relational database. But then I realized that, in fact, I was dealing with time series data, and I should really use a solution built for that. And then I started looking at time series databases and I found InfluxDB. That was back in 2018. Then I got involved in another project, to record telemetry data from the telescope itself. It's very challenging because you have so many subsystems and sensors producing data. And with that data, the goal is to look at the telescope hardware in real time, so we can make decisions and make sure that everything's doing the right thing. And another use for InfluxDB that I'm also interested in is the visits database. If you think about the observations, we are moving the telescope all the time, pointing to specific directions in the sky and taking pictures every 30 seconds. So that itself is a time series. And every point in the time series, we call that a visit. So we want to record the metadata about those visits in InfluxDB. That time series is going to be 10 years long, with about 1000 points every night. It's actually not too much data compared to the other problems. It's really just a different time scale. So yeah, we have plans on continuing to use InfluxDB and finding new applications in the project. >> Yeah, and the speed with which you can actually get high quality images. Angelo, my understanding is, you use InfluxDB, as you said, for monitoring the telescope hardware and the software, and, as you say, some of the scientific data as well. The telescope at the Rubin Observatory is, no pun intended, I guess, the star of the show. And I believe I read that it's going to be the first of the next gen telescopes to come online. It's got this massive field of view, like three orders of magnitude times the Hubble's widest camera view, which is amazing. That's like 40 moons in an image, and amazingly fast as well. What else can you tell us about the telescope? >> Yeah, so it's really a challenging project from the point of view of engineering. This telescope has to move really fast. And it also has to carry the primary mirror, which is an eight meter piece of glass, it's very heavy. And it has to carry a camera, which is about the size of a small car. And this whole structure weighs about 300 tons. For that to work, the telescope needs to be very compact and stiff.
And one thing that's amazing about its design is that the telescope, this 300 ton structure, sits on a tiny film of oil, which has the thickness of a human hair, and that brings an almost zero friction interface. In fact, a few people can move this enormous structure with only their hands. As you said, another aspect that makes this telescope unique is the optical design. It's a wide field telescope. So each image has, in diameter, the size of about seven full moons. And with that we can map the entire sky in only three days. And of course, during operations, everything's controlled by software, and it's automatic. There's a very complex piece of software called the scheduler, which is responsible for moving the telescope and the camera, which will record the 15 terabytes of data every night. >> And Angelo, all this data lands in InfluxDB, correct? And what are you doing with all that data? >> Yeah, actually not. So we're using InfluxDB to record engineering data and metadata about the observations, like telemetry, events and the commands from the telescope. That's a much smaller data set compared to the images. But it is still challenging because you have some high frequency data that the system needs to keep up with, and we need to store this data and have it around for the lifetime of the project. >> Hm. >> So at the mountain, we keep the data for 30 days. So the observers, they use an InfluxDB instance running there to analyze the data. But we also replicate the data to another instance running at the US data facility, where we have more computational resources, so more people can look at the data without interfering with the observations. Yeah, I have to say that InfluxDB has been really instrumental for us, especially at this phase of the project where we are testing and integrating the different pieces of hardware. And it's not just the database, right, it's the whole platform. So I like to give this example: when we are doing this kind of task, it's hard to know in advance which dashboards and visualizations you're going to need, right. So what you really need is a data exploration tool. And with tools like Chronograf, for example, having the ability to query and create dashboards on the fly was really a game changer for us. So astronomers typically are not software engineers, but they are the ones that know better than anyone what needs to be monitored. And so they use Chronograf and they can create the dashboards and the visualizations that they need. >> Got it. Thank you. Okay, Caleb, let's bring you back in. Tell us more about these dishwasher size satellites that are kind of using a multi tenant model. I think it's genius. But tell us about the satellites themselves. >> Yeah, absolutely. So we have in space some satellites already that, as you said, are like dishwasher, mini fridge kind of size. And we're working on a bunch more that are a variety of sizes, from shoe box to, I guess, a few times larger than what we have today. And we do shoot to have, effectively, something like a multi tenant model, where we will buy a bus off the shelf. The bus is what you can kind of think of as the core piece of the satellite, almost like a motherboard or something. It's providing the power, it has the solar panels, it has some radios attached to it, and it handles the attitude control, basically steers the spacecraft in orbit.
And then we build, also in house, what we call our payload hub, which has all the customer payloads attached, and our own kind of edge processing capabilities built into it. And so we integrate that, we launch it, and those things, because they're in low Earth orbit, they're orbiting the Earth every 90 minutes. That's seven kilometers per second, which is several times faster than a speeding bullet. So one of the unique challenges of operating spacecraft in low Earth orbit is that generally you can't talk to them all the time. So we're managing these things through very brief windows of time, where we get to talk to them through our ground sites, either in Antarctica or in the North Pole region. So we'll see them for 10 minutes, and then we won't see them for the next 90 minutes as they zip around the Earth collecting data. So one of the challenges that exists for a company like ours is that you have to be able to make real time decisions operationally in those short windows, and that can sometimes be critical to the health and safety of the spacecraft. It could be possible that we put ourselves into a low power state in the previous orbit, or something potentially dangerous to the satellite can occur. And so as an operator, you need to very quickly process that data coming in. And not just the live data, but also the massive amounts of data that were collected in what we call the back orbit, which is the time that we couldn't see the spacecraft. >> We got it. So talk more about how you use InfluxDB to make sense of this data from all that tech that you're launching into space. >> Yeah, so previously, when I joined the company, we started off storing all of that, as Angelo did, in a regular relational database. And we found that it was so slow, and the size of our data would balloon over the course of a couple of days, to the point where we weren't able to even store all of the data that we were getting. So we migrated to InfluxDB to store our time series telemetry from the spacecraft. So things like power levels, voltages, current counts, whatever metadata we need to monitor about the spacecraft, we now store that in InfluxDB. And now we can actually easily store the entire volume of data for the mission life so far, without having to worry about the size bloating to an unmanageable amount. And we can also seamlessly query large chunks of data. Like if I need to see, for example, as an operator, how my battery state of charge is evolving over the course of the year, I can have a plot in Influx that loads a year's worth of data in a fraction of a second, because I can intelligently group the data by a suitable time interval. So it's been extremely powerful for us to access the data. And as time has gone on, we've gradually migrated more and more of our operating data into Influx. So not only do we store the basic telemetry about the bus and our payload hub, but we're also storing data for our customers, that our customers are generating on board. One example of a customer that's doing something pretty cool: they have a computer on our satellite which they can reprogram themselves, to do some AI enabled edge compute type capability in space. And so they're sending us some metrics about the status of their workloads, in addition to the basics, like the temperature of their payload, their computer, or whatever else.
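The long-range query Caleb mentions, a year of battery state of charge grouped into coarser intervals so it plots in a fraction of a second, looks roughly like the following in Flux, InfluxDB 2.x's query language, submitted here through the same Python client. The bucket, measurement, and field names are assumptions for illustration, not Loft Orbital's real schema.

```python
from influxdb_client import InfluxDBClient

client = InfluxDBClient(url="http://localhost:8086", token="my-token", org="my-org")
query_api = client.query_api()

# Downsample a year of raw samples into one mean value per day, so the chart
# stays fast no matter how much raw data sits underneath.
flux = '''
from(bucket: "telemetry")
  |> range(start: -1y)
  |> filter(fn: (r) => r._measurement == "eps" and r._field == "battery_soc")
  |> aggregateWindow(every: 1d, fn: mean, createEmpty: false)
'''

# query_data_frame needs the pandas extra of influxdb-client installed.
df = query_api.query_data_frame(flux)
print(df[["_time", "_value"]].head())
```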
And we're delivering that data to them through Influx, in a Grafana dashboard that they can plot, where they can see not only has this pipeline succeeded or failed, but also where was the spacecraft when this occurred? What was the voltage being supplied to their payload? Whatever they need to see, it's all right there for them, because we're aggregating all that data in InfluxDB. >> That's awesome. You're measuring everything. Let's talk a little bit about, we throw this term around a lot, data driven. A lot of companies say, oh yes, we're data driven. But you guys really are. I mean, you've got data at the core. Caleb, what does that mean to you? >> Yeah, so I think the clearest example of when I saw this be totally game changing is what I mentioned before: at Astra, our engineers' feedback loop went from a lot of kind of slow researching, digging into the data, to an almost instantaneous one, seeing the data and making decisions based on it immediately, rather than having to wait for some processing. And that's something that I've also seen echoed in my current role. But to give another practical example, as I said, we have a huge amount of data that comes down every orbit, and we need to be able to ingest all that data almost instantaneously and provide it to the operator in near real time. About a second's worth of latency is all that's acceptable for us to react to, to see what is coming down from the spacecraft. And building that pipeline is challenging from a software engineering standpoint. Our primary language is Python, which isn't necessarily that fast. So what we've done, in the goal of being data driven, is publish metrics on how individual pieces of our data processing pipeline are performing into Influx as well. And we do that in production as well as in dev. So we have kind of a production monitoring flow. And what that has done is allow us to make intelligent decisions on our software development roadmap, where it makes the most sense for us to focus our development efforts in terms of improving our software efficiency, just because we have that visibility into where the real problems are. Sometimes we've found ourselves, before we started doing this, kind of chasing rabbits that weren't necessarily the real root cause of issues that we were seeing. But now that we're being a bit more data driven there, we are being much more effective in where we're spending our resources and our time, which is especially critical to us as we scale from supporting a couple of satellites to supporting many, many satellites at once. >> So you reduce those dead ends. Maybe Angelo, you could talk about what data driven means to you and your team? >> Yeah, I would say that having real time visibility into the telemetry data and metrics is crucial for us. We need to make sure that the images that we collect with the telescope have good quality and that they are within the specifications to meet our science goals. And so if they are not, we want to know that as soon as possible, and then start fixing problems. >> Yeah, so I mean, you think about these big science use cases, Angelo. They are extremely high precision, you have to have a lot of granularity, very tight tolerances. How does that play into your time series data strategy? >> Yeah, so one of the subsystems that produces the high volume and high rates is the structure that supports the telescope's primary mirror.
So on that structure, we have hundreds of actuators that compensate the shape of the mirror for deformations. That's part of our active optics system. So that's really real time, and we have to record these high data rates, and we have requirements to handle data that are a few hundred hertz. So we can easily configure our database with milliseconds precision, that's for telemetry data. But for events, sometimes we have events that are very close to each other, and then we need to configure the database with higher precision. >> Um hm. >> For example, microseconds. >> Yeah, so Caleb, what are your event intervals like? >> So I would say that, as of today on the spacecraft, the level of timing that we deal with probably tops out at about 20 hertz, 20 measurements per second, on things like our gyroscopes. But I think the core point here, of the ability to have high precision data, is extremely important for these kinds of scientific applications. And I'll give you an example from when I worked on the rockets at Astra. There, our baseline data rate that we would ingest during a test is 500 hertz, so 500 samples per second. And in some cases, we would actually need to ingest much higher rate data, even up to like 1.5 kilohertz. So extremely, extremely high precision data there, where timing really matters a lot. And one of the really powerful things about Influx is the fact that it can handle this; that's one of the reasons we chose it. Because there are times when we're looking at the results of a firing where you're zooming in. I've talked earlier about how, on my current job, we often zoom out to look at a year's worth of data. Here you're zooming in, to where your screen is occupied by a tiny fraction of a second. And you need to see, same thing as Angelo just said, not just the actual telemetry, which is coming in at a high rate, but the events that are coming out of our controllers. So that can be something like, hey, I opened this valve at exactly this time. And we want to have that at micro or even nanosecond precision, so that we know, okay, we saw a spike in chamber pressure at this exact moment, was that before or after this valve opened? That kind of visibility is critical in these kinds of scientific applications, and absolutely game changing to be able to see that in near real time, and with a really easy way for engineers to be able to visualize this data themselves, without having to wait for us software engineers to go build it for them. >> Can the scientists do self serve? Or do you have to design and build all the analytics and queries for scientists? >> I think that's absolutely, from my perspective, one of the best things about Influx, and what I've seen be game changing, is that generally, I'd say anyone can learn to use Influx. And honestly, most of our users might not even know they're using Influx, because the interface that we expose to them is Grafana, which is a generic, open source graphing library that is very similar to Influx's own Chronograf. >> Sure. >> And what it does is, it provides a very intuitive UI for building your query. So you choose a measurement, and it shows a drop down of available measurements, and then you choose the particular field you want to look at, and again, that's a drop down. So it's really easy for our users to discover. And there's kind of point and click options for doing math, aggregations. You can even do predictions, all within Grafana.
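Angelo's point about timestamp precision, milliseconds for routine telemetry but micro- or nanoseconds for events that land close together, corresponds to choosing a write precision per point in the client. A small sketch with the Python client follows; the measurement, tag, and field names are again invented for illustration.

```python
from influxdb_client import InfluxDBClient, Point, WritePrecision
from influxdb_client.client.write_api import SYNCHRONOUS

client = InfluxDBClient(url="http://localhost:8086", token="my-token", org="my-org")
write_api = client.write_api(write_options=SYNCHRONOUS)

# Routine telemetry: millisecond timestamps are plenty for a few-hundred-hertz stream.
telemetry = (
    Point("mirror_actuators")
    .tag("actuator", "a017")
    .field("force_n", 132.7)
    .time(1_700_000_000_123, WritePrecision.MS)           # epoch milliseconds
)

# A controller event: nanosecond timestamps keep two nearby events distinguishable.
event = (
    Point("controller_events")
    .tag("valve", "lox_main")
    .field("state", "open")
    .time(1_700_000_000_123_456_789, WritePrecision.NS)   # epoch nanoseconds
)

write_api.write(bucket="telemetry", record=[telemetry, event])
```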
The Grafana user interface, which is really just a wrapper around the API's and functionality that Influx provides. So yes, absolutely, that's been the most powerful thing about it, is that it gets us out of the way, us software engineers, who may not know quite as much as the scientists and engineers that are closer to the interesting math. And they build these crazy dashboards that I'm just like, wow, I had no idea you could do that. I had no idea that, that is something that you would want to see. And absolutely, that's the most empowering piece. >> Yeah, putting data in the hands of those who have the context, the domain experts is key. Angelo is it the same situation for you? Is it self serve? >> Yeah, correct. As I mentioned before, we have the astronomers making their own dashboards, because they know exactly what they need to visualize. And I have an example just from last week. We had an engineer at the observatory that was building a dashboard to monitor the cooling system of the entire building. And he was familiar with InfluxQL, which was the primarily query language in version one of InfluxDB. And he had, that was really a challenge because he had all the data spread at multiple InfluxDB measurements. And he was like doing one query for each measurement and was not able to produce what he needed. And then, but that's the perfect use case for Flux, which is the new data scripting language that Influx data developed and introduced as the main language in version two. And so with Flux, he was able to combine data from multiple measurements and summarize this data in a nice table. So yeah, having more flexible and powerful language, also allows you to make better a visualization. >> So Angelo, where would you be without time series database, that technology generally, may be specifically InfluxDB, as one of the leading platforms. Would you be able to do this? >> Yeah, it's hard to imagine, doing what we are doing without InfluxDB. And I don't know, perhaps it would be just a matter of time to rediscover InfluxDB. >> Yeah. How about you Caleb? >> Yeah, I mean, it's all about using the right tool for the job. I think for us, when I joined the company, we weren't using InfluxDB and we were dealing with serious issues of the database growing to a an incredible size, extremely quickly. And being unable to, like even querying short periods of data, was taking on the order of seconds, which is just not possible for operations. So time series database is, if you're dealing with large volumes of time series data, Time series database is the right tool for the job and Influx is a great one for it. So, yeah, it's absolutely required to use for this kind of data, there is not really any other option. >> Guys, this has been really informative. It's pretty exciting to see, how the edge is mountain tops, lower Earth orbits. Space is the ultimate edge. Isn't it. I wonder if you could two questions to wrap here. What comes next for you guys? And is there something that you're really excited about? That you're working on. Caleb, may be you could go first and than Angelo you could bring us home. >> Yeah absolutely, So basically, what's next for Loft Orbital is more, more satellites a greater push towards infrastructure and really making, our mission is to make space simple for our customers and for everyone. And we're scaling the company like crazy now, making that happen. It's extremely exciting and extremely exciting time to be in this company and to be in this industry as a whole. 
Because there are so many interesting applications out there, so many cool ways of leveraging space that people are taking advantage of, and with companies like SpaceX now rapidly lowering the cost of launch, it's just a really exciting place to be in. And we're launching more satellites. We're scaling up for some constellations, and our ground system has to be improved to match. So there are a lot of improvements that we are working on to really scale up our control systems, to be best in class and capable of handling such large workloads. So, yeah, what's next for us is just really 10X-ing what we are doing. And that's extremely exciting. >> And anything else you are excited about? Maybe something personal? Maybe, you know, a tidbit you want to share. Are you guys hiring? >> We're absolutely hiring. We've got positions all over the company. So we need software engineers, we need people who do more aerospace specific stuff. So absolutely, I'd encourage anyone to check out the Loft Orbital website if this is at all interesting. Personal wise, I don't have any interesting personal things that are data related, but my current hobby is sea kayaking, so I'm working on becoming a sea kayaking instructor. So if anyone likes to go sea kayaking out in the San Francisco Bay area, hopefully I'll see you out there. >> Love it. All right, Angelo, bring us home. >> Yeah. So what's next for us is, we're getting this telescope working and collecting data, and when that's happened, it's going to be just a deluge of data coming out of this camera, and handling all that data is going to be really challenging. I wonder if I might even be here for that, but I'm looking forward to it. Like, for next year we have an important milestone, which is our commissioning camera, a simplified version of the full camera, is going to be on sky, and so most of the system has to be working by then. >> Any cool hobbies that you are working on, or any side project? >> Yeah, actually, during the pandemic I started gardening. And I live here in Tucson, Arizona. It gets really challenging during the summer because of the lack of water, right. And so we have an automatic irrigation system at the farm, and I'm trying to develop a small system to monitor the irrigation and make sure that our plants have enough water to survive. >> Nice. All right guys, with that we're going to end it. Thank you so much. Really fascinating, and thanks to InfluxDB for making this possible. Really groundbreaking stuff, enabling value at the edge, in the cloud, and of course beyond, in space. Really transformational work that you guys are doing. So congratulations, and I really appreciate the broader community. I can't wait to see what comes next from this entire ecosystem. Now, in a moment, I'll be back to wrap up. This is Dave Vallante, and you are watching theCUBE, the leader in high tech enterprise coverage. (upbeat music)
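Angelo's earlier anecdote about the building's cooling system, with data spread across multiple measurements that InfluxQL could only query one at a time, is the kind of case Flux handles with union and pivot. The sketch below is illustrative only; the bucket and measurement names stand in for whatever the real cooling-system schema uses.

```python
from influxdb_client import InfluxDBClient

client = InfluxDBClient(url="http://localhost:8086", token="my-token", org="my-org")

# Pull two separate measurements, merge them, and pivot into one table with a
# column per measurement/field pair: the kind of summary table described above.
flux = '''
chillers = from(bucket: "facilities")
  |> range(start: -1h)
  |> filter(fn: (r) => r._measurement == "chiller")

pumps = from(bucket: "facilities")
  |> range(start: -1h)
  |> filter(fn: (r) => r._measurement == "glycol_pump")

union(tables: [chillers, pumps])
  |> pivot(rowKey: ["_time"], columnKey: ["_measurement", "_field"], valueColumn: "_value")
'''

for table in client.query_api().query(flux):
    for record in table.records:
        print(record.values)
```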
SUMMARY :
and what you guys do of the kind of customer that we can serve. Caleb, what you guys do. So I started in the Air Force, code away on the software. so that the scientists and the public for the better part of the Dark Energy survey And you both use InfluxDB and it's kind of the super in the example that Caleb just gave, the goal is to look at the of the next gen telescopes to come online. the telescope needs to be that the system needs to keep up And it's not just the database, right. Okay, Caleb, let's bring you back in. the bus is, what you can kind of think of So talk more about how you use InfluxDB And that has, you know, does that mean to you? digging into the data to like an instant, means to you and your team? the images that we collect, I mean, you think about these that produce the high volume For example, micro seconds. that's one of the reasons we chose it. that's absolutely one of the that are closer to the interesting math. Angelo is it the same situation for you? And he had, that was really a challenge as one of the leading platforms. Yeah, it's hard to imagine, How about you Caleb? of the database growing Space is the ultimate edge. and to be in this industry as a whole. And anything else So if anyone likes to go sea kayaking All right, Angelo, bring us home. and so most of the system because of the lack of water, right. in the cloud and of course
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Angela | PERSON | 0.99+ |
2015 | DATE | 0.99+ |
Dave Vallante | PERSON | 0.99+ |
Angelo Fausti | PERSON | 0.99+ |
1000 pictures | QUANTITY | 0.99+ |
Loft Orbital | ORGANIZATION | 0.99+ |
Caleb Maclachlan | PERSON | 0.99+ |
40 moons | QUANTITY | 0.99+ |
500 hertz | QUANTITY | 0.99+ |
30 days | QUANTITY | 0.99+ |
Chile | LOCATION | 0.99+ |
SpaceX | ORGANIZATION | 0.99+ |
Caleb | PERSON | 0.99+ |
2018 | DATE | 0.99+ |
Antarctica | LOCATION | 0.99+ |
10 years | QUANTITY | 0.99+ |
15 terabytes | QUANTITY | 0.99+ |
San Francisco Bay | LOCATION | 0.99+ |
Earth | LOCATION | 0.99+ |
North Pole | LOCATION | 0.99+ |
Angelo | PERSON | 0.99+ |
Python | TITLE | 0.99+ |
Vera C. Rubin | PERSON | 0.99+ |
Influx | TITLE | 0.99+ |
10 minutes | QUANTITY | 0.99+ |
3.2 gigapixel | QUANTITY | 0.99+ |
InfluxDB | TITLE | 0.99+ |
300 tons | QUANTITY | 0.99+ |
two questions | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
Rubin Observatory | LOCATION | 0.99+ |
last week | DATE | 0.99+ |
each image | QUANTITY | 0.99+ |
1.5 kilohertz | QUANTITY | 0.99+ |
first project | QUANTITY | 0.99+ |
eight meter | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
next year | DATE | 0.99+ |
Vera C Rubin Observatory | ORGANIZATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
US | LOCATION | 0.99+ |
one thing | QUANTITY | 0.98+ |
an hour | QUANTITY | 0.98+ |
first | QUANTITY | 0.98+ |
first generation | QUANTITY | 0.98+ |
one | QUANTITY | 0.98+ |
three orders | QUANTITY | 0.98+ |
one example | QUANTITY | 0.97+ |
Two Sun, Arizona | LOCATION | 0.97+ |
InfluxQL | TITLE | 0.97+ |
hundreds of actuators | QUANTITY | 0.97+ |
each measurement | QUANTITY | 0.97+ |
about 300 pounds | QUANTITY | 0.97+ |
Alexis Richardson, Weaveworks | CUBE Conversation
(bright upbeat music) >> Hey everyone, welcome to theCUBE's AWS startup showcase. This is season two of the startup showcase, episode one. I'm your host, Lisa Martin. Pleased to be welcoming back one of our alumni, Alexis Richardson, the founder >> Hey. >> and CEO of Weaveworks. Alexis, welcome back to the program. >> Thank you so much, Lisa, I'm really happy to be here. Good to see you again. >> Likewise. So it's been a while since we've had Weaveworks on the program. Give the audience an overview of Weaveworks. You were founded in 2014, pioneering getopts, automating Kubernetes across all industries, but help us understand, unpack that a bit. >> Well, so my previous role was at Pivotal, where I was head of application platform and I was responsible for Spring and Vfabric, and some pieces of Cloud Foundry. And you may remember back in those days, everybody wanted to build like a Heroku, but for the enterprise. And so they were asking, how can we build more cloud services? And my team was involved in building out cloud services, but we were running into trouble with the technology that we had. And then when containers appeared, we thought this is the technology for us to roll out cloud services. So with some of my team, we decided to start a new company, Weaveworks, really intending to focus on developers. Because these new containers were pretty cool, but they were really complex operational centric tools, and enterprise developers need simplicity. That's what we'd learned from things like Spring. They want simplicity, productivity, velocity, all of that stuff, they don't want operational complexity. So Weaveworks' mission is to make applications easy for developers with containers. >> Talk to me about how you've accomplished that over the last seven years, and some of the things that you're doing to facilitate a DevOps practice within organizations across any industry? >> Yeah, well, our story is pretty interesting because of course in 2014, all of this was incredibly new. You couldn't even take two containers and put them together into a single application. So forget about enterprise. What we did was we built a network, which gave the company its name, Weave. But then we spent several years building out more and more pieces of the stack. We decided that we should go to market commercially because we're an open source company with a commercial SaaS. And we thought we would be like new Relic, that there'll be lots of customers in the cloud. And, therefore, they would need monitoring and management. And Weave started writing a SaaS based on Kubernetes, which was what we chose as our platform, back in the day, very, very, very early. We were one of the very first companies to start running Kubernetes in production other than Google. And so what we learned was customers didn't want to have management and monitoring for applications in the cloud, based on Kubernetes. Because they were all still struggling to get Docker working, to get basic Kubernetes clusters set up. And they kept saying to us "this is great, we love your tool, but we really need simpler things right now." So what we had done was we'd learned how to operate Kubernetes. And we discovered that we were doing it in this specific way, a way that meant that we could be reliable, we could set things up remotely, we could move things between zones. And so we called this approach getopts. So we've named the practice of getopts, which is really DevOps for Kubernetes. 
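To make the practice Alexis has just named a bit more concrete, here is a deliberately simplified sketch of the idea: desired state lives in Git, and an agent continuously pulls and re-applies it so the cluster converges on what the repository declares. This is a conceptual illustration only, not Weaveworks' Flux implementation; it assumes `git` and `kubectl` are on the PATH, and the repository path is a placeholder.

```python
import subprocess
import time

REPO_DIR = "/srv/platform-config"   # placeholder: a local clone of the config repository
SYNC_INTERVAL_SECONDS = 60

def pull_desired_state(repo_dir: str) -> str:
    """Fast-forward the local clone and return the commit about to be applied."""
    subprocess.run(["git", "-C", repo_dir, "pull", "--ff-only"], check=True)
    head = subprocess.run(
        ["git", "-C", repo_dir, "rev-parse", "HEAD"],
        check=True, capture_output=True, text=True,
    )
    return head.stdout.strip()

def reconcile(repo_dir: str) -> None:
    """Apply the declared manifests; Kubernetes converges live state toward them."""
    subprocess.run(["kubectl", "apply", "-R", "-f", repo_dir], check=True)

if __name__ == "__main__":
    while True:   # the "continuously reconciling" part of the practice
        commit = pull_desired_state(REPO_DIR)
        reconcile(REPO_DIR)
        print(f"cluster reconciled to commit {commit}")
        time.sleep(SYNC_INTERVAL_SECONDS)
```

Rolling back then amounts to reverting a commit in the repository; the next pass of the loop applies the older state, which is the recovery behaviour described in the interview.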
We decided that it was exciting after we had an outage and made a very quick recovery. Told people about it and they said, "well, we can't even Kubernetes started, let alone recover it from a crash." So we started evangelizing getopts and saying to people that we knew how to set up and run Kubernetes as operators for developers of apps, based on this experience. And people said, "well, why don't you help us do that?" So we pivoted the company away from a SaaS business, doing management, and straight back into enterprise software, providing a solution for people to run Kubernetes stacks, deploy applications, detect drifts, and operate them at scale. And we've never looked back. And since then we've built, very successfully, a big business out of telco customers, banks, car companies, really global two thousands. Starting from that open source base, continuing to respect that, but always keeping in mind helping developers build applications at scale. >> So in terms of that pivot that you've made, it sounds like you made that in conjunction with developers across industries to really understand what the right direction is here. What's the approach, what's their appetite? Talk to me about a customer example or two that really you think articulate the value and the right decision that that pivot was and how you're helping customers to really further their DevOps practice. >> Well, one of our first customers was actually Fidelity in this new world. Fidelity has a very advanced technology organization, a very forward thinking CTO, who I seem to recall is, or CEO, who I think is female. Really is into technology as a source of, you know, velocity and business strength. And we were brought to Fidelity by our partner, Amazon. And they said, "look, Fidelity have been using your open source tools, they want to run on Kubernetes, the early EKS service on AWS, but they need help, because what they want is a shared application platform that people can use across Fidelity to deploy and manage apps." So the idea Fidelity had was they're going to split their IT into a platform team, that was going to provide this platform, and a bunch of app teams that were going to write business apps like risk management, other financial processing. Paths, basically. And we came in to help Fidelity. And what we did was help Fidelity rollout, using getopts, a Amazon wide application platform. We also helped them to build, this was very early days for us post pivot, we really helped them to build an add on layer. So you could take any Kubernetes cluster and add other components to it, and then you'd have your platform right there. And the whole stack would be managed by getopts, which nobody had done before. Nobody who'd come up with a way of managing the whole stack, so you could start and stop stacks wherever you wanted, at will, correctly. I mean, if you talk to people about what's hard in IT, they'll tell you shutting down Kubernetes is hard, 'cause I know I'm never going to know how to start it again. So being able to start and stop things, move them around is really crucial. What Fidelity also wanted, which made I think the whole thing even more exciting, was to duplicate this environment on Azure and actually also on-premise later on. So where Fidelity are today is the whole Fidelity platform runs on Microsoft and on Amazon and on-premise, using three different implementations of Kubernetes. But using this platform technology and getopts that we helped Fidelity rollout. 
And if you want to know a bit about the story, type FIDEKS, F I D E K S into Google and you'll find a video of me three or four years ago on stage at Cube Con talking with a Fidelity chief architect about this story. It's pretty exciting and these are early days for these new Kubernetes platforms. >> Early days, but so transformative. And I can't imagine the events of the last few years without having this capability and this technology to facilitate such pivots and transformation where we would all be. I want to kind of dig into some use cases, 'cause one of the things that you just mentioned with the Fidelity example got me thinking use case of hybrid, multi-cloud, but also continuous app development. Talk to me about some of the key use cases that you work with customers on. >> Well you just named two. So hybrid and multi-cloud is absolutely critical, and also sovereign, which is when you're actually offline and you only update your cloud periodically. That's one of the major use cases for us. And what customers want there is they want consistency. They want a single operating model, across all of these different locations, so that all of their teams can get trained on one set of technologies and then move from place to place. They're not looking for magic, where apps move with the sun or any of that stuff. They just want to know they can base everything on a single, homogeneous skillset and have scale across their teams. Maybe tens of thousands of developers, all who know how to do the same thing. That's a really important use case. You also mentioned continuous delivery. That's probably the second really critical use case for us. People say, "I've got Kubernetes set up now, and I have Jenkins." At JP Morgan once told me they had 40,000 Jenkins servers, or something like that, you know, Jenkins at scale. And they're like, "okay, how do I push changes from Jenkins into the cloud?" So getopts provides a bridge between the world of CI and the runtime of Kubernetes. So one group of our customers is help me to put that middle piece of CD that gets you CI, CD, to Kubernetes, that's a classic. And then what they're looking for is an increase in velocity. And what we typically see is people go from deploying once every six months to deploying once a week, to deploying once a day, to deploying several times a day. And then they split things up into teams and suddenly, wow, that vision of microservices has come and everybody's excited 'cause IT velocity has gone up by two X. Another really >> So, >> Sorry, carry on. >> Go ahead, I was just going to say in terms of IT velocity it sounds like that's a major business outcome that you're enabling for, whether it's teleco, financial services, or whatnot. That velocity is, as you just described, is rapidly accelerating. >> Yeah, if you go to our website, you'll find a bunch of these use cases. And one that I really like is NatWest mettle, which is another financial example. They're not all financial by the way. But there's some metrics in there. We're getting people up to two X productivity, which at scale is huge, really makes a difference. Also, meantime to recovery. If you know the metric space, you'll know these are all DORA metrics. And DORA, which was acquired by Google a couple of years ago, is a really fantastic analyst in the space that came up with a bunch of ways of thinking about how to measure your performance as a business and IT organization. Recovery time and things like this that you really need to focus on if you're in this world. 
>> Well, from an IT velocity perspective, if I translate that to business outcomes, especially given the dynamics in the market over the last two years, this is transformative and probably helped a lot of organizations to pivot multiple times during the last couple of years. To get to that survival mode and into that thriving mode, enabling organizations to meet customer demand that was changing faster, et cetera. That's a really big imperative that this technology can deliver to the business. >> Yeah, I mean, that's been huge for us. So when the pandemic first began, obviously, we had some road bumps and there were some challenges, but what we found out very quickly was that people were moving into digital much faster. And we've been mostly enabling them, not just in finance, as I said, but also, car companies, utilities, et cetera. The other one, of course, is modern operations. So, everyone's excited about the potential for automation. If I have thousands and thousands of developers and thousands of applications, do I need thousands of operations staff? And the answer is, with Kubernetes in this new era, you can reduce your operational loads. So that actually very few people are needed to keep systems up, to do basic monitoring, to do redeployments and so on, which are all boring infrastructure tasks that no developer wants to do. If we can automate all of that, we can modernize the whole IT space. And that's what I think the promise of Kubernetes that we're also seeing as well. So applications speed first and then operational competence second. >> So you guys had a launch, here we are in early calendar year 2022, you guys had a launch just about six or eight weeks ago in November of 2021, where you were launching announcing the GA of Weave getopts enterprise, which is a licensed product building on the free open source Weave getopts core. Talk to me about that and what the significance of that is. >> Well, this is an enterprise solution that helps customers build these critical use cases, like shared service platform or secure DevOps or multi-cloud, using getopts, which gives them higher security, lower costs of management, and better operations, and higher velocity. And all of it is taking all the best practices that we've learned starting from those days of running our own Kubernetes stack and then through those early customers like Fidelity into the modern era where we have an at-scale platform for these people. And the crucial properties are it provides you with a platform, it provides you with trusted delivery, and it provides you with what we call release orchestration, which is when you deploy things at scale into production, using tools like canaries and other modern practices. So, all of it is enabling what we call the cloud native enterprise, application delivery, modern operations. >> So what's the upgrade path for customers that are using the free open-source tier to the enterprise package, what does that look like? >> The good news is it's an add on. So, I have been in the industry a while and I strongly believe it's really important that if you have an open source product, you shouldn't ask people to delete it or uninstall it to install your enterprise product, unless you really, really, really have to. And I'm not trying to be picky here. Maybe there are cases where it's important, but actually in our case, it's very simple. 
If you're already using one of our upstream tools, like Flux, for example, then going from Flux to Weave getopts enterprise is an add-on installation. So you don't have to change or take out what you're doing. You might be using Flux without knowing it. You may not be aware of this, but it's also insight as your AKS and ARC, it's inside the Amazon EKS anywhere bundle. It's available on Alibaba, VMware have used it in cartographer and Tanzu application platform. And even Red Hat use it too in some cases. So you may be using it already, from one of the big vendors who are partners of ours, as a precursor to buying Weave getopts enterprise. So, you know, don't be scared. Get in touch is what I would say to people. >> Get in touch. And of course, folks can go to weave.works to learn more about that. And, also we want to watch the Weave.works space, 'cause you have some news coming out relatively soon that sounds pretty exciting, Alexis. >> Well, I mentioned trusted delivery. And I think one of the things with that is no CIO wants to go faster, unless they also have the safety wheels on, let's face it. And the big question we get asked is "I love this getopts stuff, but how can I bring my team with me? How can I introduce change?" I have all of these approvals mechanisms in place, can I move into the world of getopts? And the answer is yes, yes you can because we now support policy engines as baked into our enterprise product. Now, if you don't know what policy is, it's really a way of applying rules to what you're seeing in IT. And you can detect whether something passes or fails conditions, which means that we can detect if something bad is about to happen in a deployment and stop it from happening, this is really critical. It also goes hand in hand with things like supply chain and security, which I'm sure we read about in the news far too much. >> Yeah, pretty much daily supply chain and security >> Pretty much daily. >> is one of those things that we're all in every generation concerned about. Well, Alexis, it's been a pleasure having you back on the program, talking to us about what's new at Weaveworks, the direction that you're going, how you're helping organizations across industries really advance their DevOps practice. And we will check weave.works in the next couple of weeks for more on that news that you started to break a little bit with us today. We appreciate your time, Alexis. >> Thank you very much, indeed, take care. >> Likewise. For Alexis Richardson, I'm Lisa Martin. Keep it right here on theCUBE, your leader in hybrid tech event coverage. (bright music) (music fades)
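The policy idea Alexis describes, rules that pass or fail a change before it reaches the cluster, can be pictured as a small gate in front of the deployment step. The sketch below is illustrative only, not the actual policy engine in Weaveworks' enterprise product; the two rules and the manifest shape are assumptions chosen for the example.

```python
def policy_failures(manifest: dict) -> list[str]:
    """Return rule violations for a Deployment-like manifest; an empty list means it may proceed."""
    failures = []
    containers = (
        manifest.get("spec", {})
        .get("template", {})
        .get("spec", {})
        .get("containers", [])
    )
    for c in containers:
        name = c.get("name", "<unnamed>")
        if c.get("image", "").endswith(":latest"):
            failures.append(f"{name}: ':latest' image tags are not allowed")
        if "resources" not in c:
            failures.append(f"{name}: resource requests/limits must be set")
    return failures


candidate = {
    "kind": "Deployment",
    "spec": {"template": {"spec": {"containers": [
        {"name": "api", "image": "registry.example.com/api:latest"},
    ]}}},
}

problems = policy_failures(candidate)
if problems:
    # In a pipeline, this is the point where the deployment would be stopped.
    for p in problems:
        print("BLOCKED:", p)
else:
    print("policy checks passed")
```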
SUMMARY :
the founder and CEO of Weaveworks. Good to see you again. Weaveworks on the program. And you may remember back in those days, and saying to people that we knew and the right decision that that pivot was and getopts that we And I can't imagine the and then move from place to place. That velocity is, as you just described, And one that I really and into that thriving mode, And the answer is, with Talk to me about that and what And the crucial properties are So, I have been in the industry a while And of course, folks can go to And the answer is yes, yes you can for more on that news that you started your leader in hybrid tech event coverage.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Lisa Martin | PERSON | 0.99+ |
2014 | DATE | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Lisa | PERSON | 0.99+ |
Alexis Richardson | PERSON | 0.99+ |
thousands | QUANTITY | 0.99+ |
Fidelity | ORGANIZATION | 0.99+ |
Alexis | PERSON | 0.99+ |
November of 2021 | DATE | 0.99+ |
JP Morgan | ORGANIZATION | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
Weaveworks | ORGANIZATION | 0.99+ |
two | QUANTITY | 0.99+ |
second | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
NatWest | ORGANIZATION | 0.99+ |
once a day | QUANTITY | 0.99+ |
40,000 | QUANTITY | 0.99+ |
three | DATE | 0.98+ |
early calendar year 2022 | DATE | 0.98+ |
today | DATE | 0.98+ |
once a week | QUANTITY | 0.98+ |
one set | QUANTITY | 0.98+ |
Alibaba | ORGANIZATION | 0.98+ |
two thousands | QUANTITY | 0.98+ |
Pivotal | ORGANIZATION | 0.98+ |
Weave | ORGANIZATION | 0.98+ |
AWS | ORGANIZATION | 0.98+ |
two containers | QUANTITY | 0.97+ |
Jenkins | TITLE | 0.97+ |
Weaveworks' | ORGANIZATION | 0.97+ |
Flux | TITLE | 0.97+ |
Kubernetes | TITLE | 0.96+ |
single application | QUANTITY | 0.96+ |
weave.works | ORGANIZATION | 0.96+ |
four years ago | DATE | 0.96+ |
Azure | TITLE | 0.95+ |
eight weeks ago | DATE | 0.95+ |
DORA | ORGANIZATION | 0.94+ |
first customers | QUANTITY | 0.93+ |
Relic | ORGANIZATION | 0.92+ |
single | QUANTITY | 0.91+ |
about six | DATE | 0.9+ |
first companies | QUANTITY | 0.9+ |
theCUBE | ORGANIZATION | 0.89+ |
telco | ORGANIZATION | 0.89+ |
Heroku | ORGANIZATION | 0.89+ |
Kingdon Barrett, Weaveworks | KubeCon + CloudNativeCon NA 2021
>> Good morning, welcome to theCUBE's coverage of KubeCon and CloudNativeCon '21, live from Los Angeles. Lisa Martin here with Dave Nicholson. Dave, it's great to be in person with other humans at this conference, finally. I can't believe... >> You're arm's length away. It's unreal. >> I know, and they checked vax cards, so everybody here is nice and safe. We're excited to welcome Kingdon Barrett to the program, Flux maintainer and open source support engineer at Weaveworks. Kingdon, welcome to the program. >> Oh, thank you for having me on today. >> So let's talk about Flux. This is a CNCF incubating project, and I saw it categorized as "adopt." Talk to us about Flux and its evolution. >> So Flux just got into its second version a while ago. We're an incubating project, and we're going towards graduation at this point. Flux has seen a great deal of adoption from cloud infrastructure vendors in particular, like Microsoft and Amazon and VMware, all building products on Flux, the latest version of Flux. And we've heard from companies like Alibaba and State Farm. We had a conference at a co-hosted event earlier on Tuesday called GitOpsCon, where we presented all about GitOps, which is the guiding set of principles that underlies Flux. And there are new adopters every day, including the Department of Defense, who has a hundred thousand developers. It's a very successful project at this point. >> Who are the key users of Flux? >> Excuse me, the key users of Flux are probably application developers and infrastructure engineers, and platform support folks. So a pretty broad spectrum of people. >> And you've got some news at the event. >> Yeah, we have an ecosystem event that's coming up on October 20th. It's a free virtual event. Folks can join us to hear from these companies. We have people from high level, CTOs and GMs, from companies like Microsoft, Amazon, VMware, Weaveworks, D2iQ, that are going to be speaking about their products, which you can buy from your cloud vendor and which are based on Flux. So that's a milestone for us. That's a major milestone. These are large vendors, major cloud vendors, that have decided that they trust Flux with their customers' workloads. And it's the way that they want to push GitOps. >> Great validation. Yeah. So give us an example, just digging in a little bit on Flux and GitOps. What are some of the things that Flux either enforces or enables or validates? How would you describe the Flux and GitOps relationship? >> So the first of the GitOps principles is declarative infrastructure, and that's something that people who are using Kubernetes are already very familiar with. Flux has based itself on, or I guess spawned, maybe, is a better way to say it, this whole GitOps working group that's just defined the principles. There's four of them in the formal definition that's just been promoted to a 1.0, and the GitOps working group published this at opengitops.dev, where you can read all four. It's a great site. If you're not really familiar with GitOps, you can read all four there. But the second one I would have mentioned is version storage. It's called GitOps, and Git is the version store.
So it's good for disaster recovery. >> And if you have an issue with a new release, and if you're pushing changes frequently, more than likely you will have issues from time to time, you can roll back with GitOps, because everything is versioned. And you can do those releases rapidly, because the deployment is automatic and it's continuously reconciling. So those are the four principles of GitOps. And they're not exactly prescriptive. You don't have to adopt them all at once. You can pick and choose where you want to get started. But that's what is underneath Flux. >> How do you help customers pick and choose? What are some of the key criteria that you would advise them on? >> We would advise them to try to follow all of those principles, because that's what you get out of the box with Flux: a solution that does those things. But if there is one of those things that gets in the way, there's also the concept of a closed loop, which is sometimes debated as to whether it should be part of the GitOps principles or not. That just means that when you use GitOps, the only changes that go to your infrastructure are coming through GitOps, so you don't have someone coming in and using the back door. It all goes through Git. When you want to make a change to your cluster or your application, you push it to Git, the automation takes over from there, and it makes developers' and platform engineers' jobs a lot easier. And it makes it easier for them to collaborate with each other. >> Of course, productivity. You mentioned AWS, Microsoft, VMware, all working with you to deliver GitOps to enterprise customers. Talk to me about some of the benefits in it for these big guys. I mean, that's great validation, but what's in it for AWS and VMware and Microsoft, for example, business outcome wise? >> Well, one of the things that we've been promoting since June is that there's an API underneath Flux that's called the GitOps Toolkit. This is for if you're building a platform for platforms, like these cloud vendors are. We announced that Flux's APIs are officially stable. So that means that it's safe for them to build on top of, and they can go ahead and build things and not worry that we're going to pull the rug out from under them. So that's one of the major vendor benefits. And we've also added a recent improvement called server-side apply that will improve performance. We reduced the number of API calls, but also, for users, it makes things a lot easier, because they don't have to explicitly write health checks on everything. It's possible for them to say, we'd like to see that everything is healthy, and it's a one-line addition. That's it. >> So, you know, there's been a lot of discussion from a lot of different angles on the subject of security in this space. How does this dovetail with that? A lot of discussion specifically about software supply chain security. Now, this is more in the operations space. How do those come together? Do you have any thoughts on security? >> Well, Flux is built for security first. There are a lot of products out there that will shell out to other tools, and that's a potential vulnerability, and Flux does not do that.
We've recently undergone a security audit, and we're waiting for the results and the report, but this is part of our progress towards CNCF graduated status. We've liked what we've seen in the preliminary results. We prepared for the security audit knowing that it was coming, and Flux is designed for security first. You're able to verify that the commits you're applying to your cluster are signed and actually come from a valid author who is permitted to make changes to the cluster, and GitOps itself is this model of operations by pull request. So you have an opportunity to make sure that your changes are appropriately reviewed before they get applied. >> Got it. So you had a session at KubeCon this week. Talk to me a little bit about that. What were the top three takeaways, and maybe even share with us some of the feedback that you got from the audience? >> So the session was about Jenkins and GitOps, or Jenkins and Flux. The main idea is that when you use Flux, Flux is a tool for delivery. You've maybe heard of CI/CD; CI and CD are separate in Flux. We consider these two separate jobs that should not cross over. So the talk is about Jenkins and Flux. Jenkins is a very popular CI solution, and the message is: you don't have to abandon your CI. If you've made a large infrastructure investment in a CI solution, you don't have to abandon your Jenkins or your GitHub Actions or whatever other CI solution you're using to build and test images. You can take it with you and adopt GitOps. >> So there's compatibility there, and usability and familiarity for the audience, the users. What was some of the feedback that they provided to you? Were they surprised by that? Happy about that? >> Well, the talk was a little bit fast paced; we'll put it in the advanced CI/CD track. I covered a lot of ground in that talk, and I hope to go back and cover things in smaller steps. I tried to show as many of the features of Flux as I could, and so one piece of feedback I got was that it was actually a little bit difficult to follow. I'm a new presenter; this is my first year at Weaveworks, and I've never presented at KubeCon before. I'm really glad I got the opportunity to be here. This is a great opportunity to collaborate with other open source teams, and that's the takeaway for me. >> So you've got to give a shout out to Weaveworks. Absolutely. Any organization that realizes the benefit of having its folks participating in the community, realizing that it helps the community, it helps you, it helps them, that's what we love about all of this. >> Yeah. We're really excited to grow adoption for Kubernetes and GitOps together. >> So I've asked a few people this over the last couple of days: where do you think we are in the peak Kubernetes curve? Are we still just at the very beginning stages of this as a movement?
>> Certainly, for people who are here at KubeCon, I think we see that a lot of companies are very successful with Kubernetes. But I come from a university IT background, and I haven't seen a lot of adoption in large, more conservative enterprises, at least in my personal experience. And I think there is a lot for those places to gain through adopting Kubernetes and GitOps together. I think GitOps will provide them with the opportunity to experience Kubernetes in the best way possible. >> We've seen such acceleration in the last 18, 19 months of digital transformation for companies to pivot during COVID, to survive and then to thrive. Do you see that influencing the adoption of Kubernetes, and maybe different industries getting more comfortable with leveraging it as a platform? >> Sure. A lot of companies see it as a cost center. And so if you can make it easier or possible to do operations with fewer people in the loop, that makes it a cost benefit for a lot of people. But you also need to keep people in the loop. You need to keep the people that you have included, and be transparent about what infrastructure choices and changes you're making. That's one of the things GitOps really helps with. >> Transparency is key. One more question for you. Can you share a little bit, before we wrap here, about the project roadmap and some of the things that are coming down the pike? >> So I mentioned graduation; that's the immediate goal we're working towards most directly. We have grown our number of integrations pretty significantly. We have an entry in Red Hat OpenShift's OperatorHub, where you can go and click to install Flux, and that's great. And we look forward to making Flux more compatible with more of the tools that you find under the CNCF umbrella. That's what our roadmap is for. >> Increasing that compatibility. And one more time, mention the event, October 20th I believe you said. Let folks know where they can go and find it on the web. >> If you're interested, it's gitopsdays.com. It's the GitOps one-stop shop, and it's vendors like AWS and Microsoft and VMware and D2iQ, and Weaveworks; we've all built Flux-based solutions that are available for sale right now. So if you're trying to use GitOps and you have one of these vendors as your cloud vendor, it seems like a natural fit to try the solution that's out of the box. But if you need convincing, go to gitopsdays.com, where you can find out more about the event, and we hope to see you there. >> Gitopsdays.com. Kingdon, thank you for joining Dave and me on the program, talking to us about Flux. Congratulations on its evolution. We look forward to hearing more great things as the years unfold. >> Thank you so much for having me on. >> Our pleasure. For Dave Nicholson, I'm Lisa Martin. You're watching theCUBE, live from Los Angeles at KubeCon + CloudNativeCon 2021. Stick around; Dave and I will be right back with our next guest.
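Looking back at the principles covered in this conversation, the "everything is versioned" point is easy to picture with a small, hypothetical sketch: rolling back a bad release is just a revert commit in the configuration repository, which the GitOps agent then reconciles. This is not how Flux itself works internally; the repository path and branch name are invented, and it assumes `git` is on the PATH and an agent is watching the repo.

```python
import subprocess

CONFIG_REPO = "/srv/gitops/cluster-config"  # hypothetical local clone of the config repo

def git(*args: str) -> None:
    """Run a git command inside the config repo and fail loudly on error."""
    subprocess.run(["git", "-C", CONFIG_REPO, *args], check=True)

def rollback_last_release() -> None:
    # Undo the most recent change as a *new* commit, so history stays intact.
    git("revert", "--no-edit", "HEAD")
    # Pushing the revert is the whole "deployment": the agent watching the
    # repo sees the new commit and reconciles the cluster back to the old state.
    git("push", "origin", "main")

if __name__ == "__main__":
    rollback_last_release()
```

In other words, a bad release is undone the same way it was made: with a commit that the automation then applies.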
Constance Caramanolis, Splunk & Stephen Augustus, Cisco | KubeCon + CloudNativeCon NA 2021
(cheery synth music) >> Hello, this is theCUBE. I'm John Furrier, your host. We're here for a KubeCon + CloudNativeCon preview for the North America show in Los Angeles, an in-person and virtual event. Two of the co-chairs are with me again this year: Constance Caramanolis, principal engineer at Splunk, and of course Stephen Augustus, head of Open Source at Cisco. Great to see you guys. Hey, thanks for coming on, virtually, for the preview. >> Great to be had! >> Constance: Thank you for having us. >> Stephen: Great to see you again, John. (laughing) >> Constance: Yeah. >> So I love... well, KubeCon is my favorite event every year. This is where the DevOps people are reading the tea leaves, connecting the dots, but also meeting up and doing what communities do best, which is set the agenda for the next generation. That's happening in person. Last year it was virtual; we had the European virtual KubeCon CloudNativeCon. This year, a mix. Give us a taste of the updates you want to share. Let's get into it. >> Sure. So seeing this event in particular, one, we've got this hopeful return to some semblance of normalcy. Over the last year and change, we've been kind of itching to see each other in person. And I say on a lot of interviews that one of my favorite parts of any conference is the hallway track, right? We've made strides to replicate it, but I don't think there's anything close to being in person, right? And getting to bounce ideas off of your co-conspirators, (laughs) co-conspirators or compatriots. So I'm really excited for that. I love the mandates that we've put in place to make sure that people are a little bit more safe. And overall, I think one of the things that gets me most excited is the set of day zero events, right? The increase in the day zero events, we've got... Constance, what's the count at now? I'm looking over it, and it's massive, right? You know, SupplyChainSecurityCon, the Cloud Native for Eclipse Foundation, it's beyond... >> Too many to count right off the bat when I'm looking at it. >> Too many, too many to count! >> And this is also a reduced number, because some, not people, but projects decide to do virtual days or an event outside of the normal KubeCon cycle because of... >> Yeah, well, let's get- >> that thing that should not be named. >> Let's get into some of the data. >> I want to jump into the trends. But just for the folks watching, this is a hybrid event, and- >> Yeah. >> There's going to be this day zero, which is the pre-programming, which by the way I think has evolved into a format that's just tremendous. You've got the pregame, pre-event action. Very dynamic, very ad hoc, ephemeral in the people getting together and making things happen. Then you've got the structured event.
It's in-person and virtual, so it's going to be a hybrid event, which should be dynamic because you have an in-person dynamic where it's a scarce resource of the face-to-face, working and trying to create synchronicity with the asynchronous environment on virtuals. So it should be an action packed and a must-watch event. So I'm personally excited, we'll be there in person. But I got to ask you guys, the co-chairs, how are you guys handling this? How are the papers coming, what's the call for talks? How are you structuring things? Can you just give a quick overview of what's, what's happening on the talks? >> Uh, talks, uh, I feel like it went really well this round. >> Um, really like, wide variety. I know it's pretty vague, but there's a wide variety of topics, uh, things that are getting I think, I feel like more popularity, like security is getting more popular. Uh, business value, one thing that I'm really passionate about, is getting a lot more traction. Uh, student track 101 is also, as always, I guess, as ever since it's been, since inception has been popular, um, it's definitely getting to the point where we're actually, well not to the point, but maybe it's just being more highlighted that a lot of the, like, like, some of the like great content from the day zeros are also showing up in KubeCon and then like, vice versa and they're kind of everywhere. Uh, Yeah, the talks I think was really- >> John: The sessions, the sessions are always driving it. Stephen I'm like from a, from a, from a maturisation standpoint, you have the, the, the people developing and then you got the f... the things are getting hardened. Can you talk about the trends around, what's kind of hardening out from a project basis on these sessions and what's forming relative to the trend line this year. >> Yeah. So, you know, so to Constance's point, I think that we're, we're starting to see some diversity in, or continued diversity and kind of the personas that are coming into the conference, right? So whether you're talking about that continuing 101 track or, the student track, which, you know, a lot of people have, have kind of jumped in and seeing that as an opportunity to, to, to not only start becoming part of the community, but also to immediately contribute to content. And then you've got that For me? It's, it's security, all day, right? I think, you know, I think that, you know, there's not a week, there's not a week that passes that I don't have a chat with someone around what's happening in security lately. And I think you'll see that highlighted in in all of the keynotes that we have planned there are, there's not one, not two, but three uh, keynotes around software supply chain security, and some of the different things that you have to consider as we're kind of walking into the space of you know, protecting, protecting your, your build pipeline, protecting your production artifacts, so that's something that really, you know, that goes to that, you know, that goes to my work on that, you know, in Kubernetes for SIG release, release engineering, that's, you know, something that we, we know that there are countless downstream consumers, right? So, some, you know, some that we may not have even had contact with yet from the upstream perspective, right? So it's, it's paramount for us to make sure that, you know, everything that we're pushing out to the community and to the wider world is safe to consume. So, so security is definitely top of mind for me. 
I would also say lots of things around continuing to talk about GitOps and observability. And what's fun about each of these topics, each of these areas, is that they're all interconnected, right? So more and more you're seeing, oh, well, the Tekton folks are talking to the Flux folks, and they're talking to the folks working on Sigstore and Rekor and all of these fun tools, about how to integrate into those respective areas. So it's really a time of collaboration, underscored by protecting the community and the end users. >> John: Yeah. We're seeing a lot of the security discussions. I mean, how far can you shift left before it just becomes standard, right? We're seeing that being built in. I've got to ask you guys also, on the trend of DevOps, there have been a lot of conversations around Cloud Native and observability, but data, the role of data, and the different approaches to how people are leveraging machine learning and AI. Did that come up a lot in the discussions and the analysis? Because everyone's slapping machine learning on things these days, and there's a little bit of that going on, but it seems that data, machine learning, horizontal scale, classic DevOps things, are happening. What's your reaction to some of that? Is there anything happening there? >> I feel like this year wasn't that big of a machine learning year in terms of submissions. >> Yes. >> I'm certain you agree with that. Security took a lot, and, thinking about it holistically now, security had such amazing submissions that it probably took a little bit of the spotlight off the machine learning ones when we were looking at them. >> John: So security... >> Also I'm biased, so I think- >> John: So security dominated more than everyone else did. >> Yeah. I think for this year, security is dominating. I think we even talked about this in the last chat we had. From the AI side, there have been discussions around bias in AI models and how we work through that. I'm not sure we have any content for that this time around, but as we start to talk about how we collect data, whether we're collecting the right types of data, and how we serve it, especially as those relate to collecting data at the edge, right? Like, how do we even deploy applications at the edge? We have a lot of potential solutions for that. But then you combine that with, well, how do we scrape information from the things we're deploying at the edge? You'll see some of those things in the program. >> Constance and Stephen, talk about the community vibe right now, because that's the biggest part of this conference, seeing how the people come together, but the vibe also sets the tone.
What's the current vibe in the community that you're seeing, and what do we expect this year at KubeCon + CloudNativeCon? >> I'm going to say I imagine the community's tired. It's been a long two years; it feels like 10 years, it feels like forever. And a lot of the in-person aspect that used to provide social validation is lacking. But that being said, there's still been amazing collaboration, for example from the Observability and OpenTelemetry side. I am seeing so many projects within TAG Observability collaborate and make that a focus. So even though we are tired, we're still doing good work. And we're still making a point of trying to keep the community tight, even though it's much harder on Zoom, where you try to do the awkward Zoom handshake and it just doesn't do the same thing. But to Stephen's keynote, I can't remember how long ago it was, about resiliency: we are pretty resilient. And I think we're all learning to work at a slower pace, because maybe we were working too fast beforehand. I think that's a really good takeaway from all of this. So, for as safe as it can be, with some variation, it's probably just going to be a big party, because we're finally going to get to see each other after a long time. >> John: Yeah. >> I hope we get to do that in a safe way. >> Stephen, you bring it in. Steve, you've always got the energy, certainly on camera, but in person as well. >> (laughs) >> This in-person dynamic this year is huge. What do you think is going to happen? Give us your take. >> Yeah, so I would echo Constance in saying that we're all tired, we're all very tired at this point. But the conference tagline for North America is "Resilience Realized," right? I think that throughout this year, the contributors and maintainers of all of these CNCF projects have made incredible strides to empower the communities to be together, to be family, to work better together, in spite of location boundaries, in spite of health concerns. We've really made the effort to show up for each other. So I think what we'll see in the conference, and one of my favorite tracks personally is the community track, is lots of content around community building, around, I think, more of the meta of maintaining communities, right? So the code of conduct committee, as well as the steering committee for Kubernetes, got together last conference to talk about the values and principles of the community, right? And I think that needs to continue to be highlighted, along with some of the conversations we've had around how you maintain groups, especially as the size of the group grows, right? Once you escape that kind of Dunbar's number area, it gets harder and harder to have the same bandwidth conversations that you would in a smaller group, right?
So making sure that we're continuing to have valuable conversations, but also being inclusive while we're doing that, is something that will continue to be highlighted over the next year and change, really. >> Well, I'm really impressed by what you guys do. And I know we're all tired and we want to get back, and hats off to pulling it together and creating a great program, because your group and your community is a social construct. We're all social animals. And this whole COVID virtual, now hybrid, world is really going to play out, and we're going to see how it evolves, and evolution is part of social communities. I think the progress has been made, with the team and you guys putting together this great event. So my hat's off to you guys, thanks for doing that. Appreciate it, great stuff. >> Thank you, thank you. >> Now, final question: what do you expect? Given this is a social organization, things evolve, we're social organisms. We're going to be face to face. We're going to have virtual. We're going to have great talks; security obviously is prime time, mainstream enterprise adoption of Kubernetes and Cloud Native. This is crunch time, so what do you guys expect for this event? Share your thoughts. >> I think there's going to be lots of fun, more social conversations, less structured. If you haven't had the opportunity to hang out on CNCF Slack while one of these events is happening, we've spun up something like a hallway track. So people are hanging out, giving their takes in between talks; there was also a kind of after-conference hangout for the hallway track that we did. We definitely want to continue some of that. And between the last few conferences we've launched Cloud Native TV, with lots of great producers and content over there. So you'll see us start to break the wall between that virtual content we've created across the last few months and seeing it turn physical, right? So how do we manage that, and how do we make that seamless for people who may be participating virtually as opposed to physically? There's an aspect where you're almost running two conferences simultaneously. >> It's a total experiment in the real world, but it's all important, super important. Constance, your thoughts on the event, what people are expecting to see, and surprises that might emerge. What are your thoughts? >> Well, actually, while you were saying that I had an idea that I think can make it more connected, so I just wrote it down. I have some silly ideas when it comes to conference stuff, which is why Stephen's laughing, although you can't see it. >> (both laughing) >> I'm trying to go in with no expectations, mostly because I'm so excited. I don't want to be disappointed, and I don't want to miss out. I actually think that probably a lot of the discussions are just going to be like, hi, it's so nice to actually meet you, and talking about random things.
Maybe not as much technology discussion as there would be at a normal... I don't want to say normal, right? Because we are in a new normal, compared with what KubeCon was several years ago. I do think this hybrid part will probably be a little painful, since we don't know what to expect. I think there are going to be so many things that we're going to look back at, facepalm, and be like, oh, we should have thought about these things. So for anyone who's attending virtually, apologies in advance, and please give us feedback. There are so many things I know we're going to have to improve; we just don't know them yet. So please be patient with us, and know that we wish you could be there in person with us too. >> I don't know. >> Well, that's the thing. >> I'm just going to go in there with an open mind. >> Well, that's the thing: it's all new, virtual. We're learning together. I think people put too much pressure on it, expecting some magic to happen, but it's all evolving. And I think the magic is the event, and I think it's going to work out great. And by the way, there's no downside; you learn. >> Exactly! >> So, yeah. One of the things is, I have this spiel that I give to the release team, the Kubernetes release team, every time we start a new cycle, right? You've got a set of returning contributors, and you've got a set of net new contributors, right? And moving into the release team, you're kind of thrown right into the fire of Kubernetes. So I come in and essentially say: be curious, question everything. It's very much a human experience, right? And I think, to Constance's point, we're all here to learn and grow and make this a better experience for everyone. So bring yourself, like bring yourself to the conference, right? In terms of offering feedback, we have feedback forms for every one of the talks you attend, and you can feel free to reach out to Constance and myself and Jasmine if you have feedback you want to give personally; there are ways to get in touch with us. There are ways to make the event better. And every time, we incorporate a lot of this feedback into the next conference. So every time you provide some piece of information for us, that gives us an opportunity to make it better, right? This conference is built by the community, right? It's not a body just making decisions off the cuff; we are taking your ideas and trying to turn them into a program. So it's the maintainers, it's the end users, it's the students, it's people who have never used Kubernetes in their lives, or never used Cloud Native technology in their lives. It's folks who are coming from the classic corporate IT kind of background and just trying to understand how to be effective in this new world for them. It takes all kinds, and we don't get it done without your feedback.
So please, as you're coming to the conference, whether it's in person or virtually, bring yourselves, be curious, ask questions, and provide that feedback. And then, yes, we need to be human, but we also need to recognize some of the requirements we have going into this conference. So a reminder that all of the events are under a code of conduct; please make sure to familiarize yourself with the code of conduct. And I think that, coming back into a physical space, for a lot of people some of the social skills can erode over time. So please, not just bring yourself, bring your best self. And be sure to review all of the policies around health and safety as we go into this. >> Constance, Stephen, that's great stuff. Love talking with you guys. Constance, you want to add something? Go ahead. >> I want to add one thing: also be gentle with yourself, and be really kind to yourself and others, because this is going to be really overwhelming. I haven't been around more than 10 people at once in almost two years. So just remember to be kind as well, and always be curious and question everything. >> Yeah. That's great stuff, great reminder. This is what it's all about: face-to-face, presence, being together, but also having the openness and the community around you. A lot of mentoring; you guys have a great community for people coming in that are new, and there are great mentors, people are open and cool, great community. Thanks for coming on for this special preview of KubeCon + CloudNativeCon, thank you so much. >> Thanks for having us. >> Thank you. >> Okay, this is theCUBE's coverage of KubeCon + CloudNativeCon, and we've been at every KubeCon. It's seen fantastic growth, and it's going to the next level again in person; a lot of security, real-time adoption, should be great, virtual and in person. I'm John Furrier, thanks for watching. (cheery synth music)
LIVE Panel: FutureOps: End-to-end GitOps
>> And hello, we're back. I've got my panel, and we are doing things real time here, so sorry for the delay, a few minutes late. So, the reason we're here: we're going to go around the room and introduce everybody. Got three special guests here. I've got Ivor, and John, and Nirmal, and we're going to talk about GitOps. I called it FutureOps just because I want to think about what's the next thing. At the end, we're going to talk about our ideas for what's next for GitOps, right? Because we're all just starting to get into GitOps now, but of course a lot of us are always thinking: what's next? What's better? How can we make this thing better? So we're going to take your questions. That's the reason we're here, to take your questions and answer them, or at least the best we can, for the next hour. All right, so let's go around the room and introduce yourself. My name is Bret. I am streaming from Virginia Beach, Virginia, United States, and I talk about things on the internet and sell courses on Udemy that talk about Docker and Kubernetes. Ivor, introduce yourself. >> How's it going, everyone? I'm a software engineer at Axel Springer, currently based in Berlin, and I happen to be Bret's teaching assistant. >> All right, that's right, we're in our courses together almost every day. John? >> Hey everyone, my name is John Harris. I used to work at Docker; I now work at VMware as a staff field engineer. So yeah. >> And Nirmal. >> Awesome, by the way. You are streaming from, Bret? >> I answered that, from the Beach. >> I'm Nirmal Mehta. I'm a distinguished engineer with Booz Allen, and I'm also a Docker Captain, and it's good to see everyone, and it's good to see you again, John, it's been a little while. >> It has, the pre-COVID times, right? You were up here in Seattle. >> Yeah, it feels like an eternity ago. >> Yeah, John's shirt looks red and reminds me of the Austin T-shirt. We all have this old limited edition DockerCon EU tee. >> That's a classic. >> Yeah, I scored that one last year. Sometimes with these old conference shirts you have to, like, go into people's closets. I'm not saying I did that, but you know, you have to go steal stuff, you have to find ways to get the swag. >> Post-COVID, if you ever come to my place, I'm going to have to lock the closets. >> That's right, that's right. >> I think it was the second floor of the Docker HQ in San Francisco where they kept all the T-shirts, just boxes and boxes, floor to ceiling. So every time I went to HQ, you'd just grab as many as you can fit in your luggage. I think I have about 10 of these. >> You bring an extra piece of luggage just for your shirt grab. All right, so I'm going to start scanning questions so that you don't have to, though you all are welcome to do that too, and I'm going to start us off with the topic. So let's just define the parameters. We can talk about anything DevOps here, and we can go down plenty of rabbit holes, but the goal here is to talk about GitOps. And GitOps, if you haven't heard of it, is essentially using versioning systems like Git, which we've all been getting used to as developers, to track your infrastructure changes, not just your code changes, and then automating that with a bunch of tooling so that the robots take over.
And essentially you have Git as a central source of truth, and the Git log as a central source of history, and then there's a bunch of magic little bits in the middle, and then supposedly everything is wonderful and all automatic. The reality is it's often quite messy, quite tricky to get everything working, and the edges of this are not perfect. So it is a relatively new thing; it's probably three, maybe four years old as an official thing from Weaveworks. So we're going to get into it. Let's go around the room the same way we did before, and not to put you on the spot or anything, but what is one of the things you either like or hate about GitOps, that you've enjoyed about using it, or whatever? For me, I really love that I can point people to a repo that, hopefully, if they look at the log, gives a simplistic tracking of what might have changed in that part of the world or the environment. I remember years past when an executive or some mid-level manager wanted to see what the changes were, or someone outside my team wanted to see what we just changed, and it was: okay, they need access to this system, and that dashboard, and that spreadsheet, and then this thing, and it was always so complicated. And now, in a world where we're using GitHub or Bitbucket or whatever, you can just say, hey, go look at that repo; if there were three commits today, probably three changes happened. I love that particular part about it. Of course it's always more complicated than that. But, Ivor, I know you've been getting into this stuff recently, any thoughts? >> Yeah, I think my favorite part about GitOps is reproducibility. You know, the ability to just test something, get it up and running, and then just tear it down, and not be worried about how I configured it the first time. I think that's my favorite part about it. >> I'm changing your background as we do this. >> I was going to say, did you just do a GitOps push to change his background? Just a diff: green screen equals false, change the background. >> Yeah. And I think last year was really my first year of actually using it on anything significant, like a real project, so I still feel like I'm very new to it. John, anything? >> Yeah, it's weird, GitOps is the thing which kind of crystallizes, maybe better than anything else, the grizzled veteran life cycle of emotions with a technology, because I think it's easy to get super excited about something new. When I first looked into GitOps, and I think this was even before it was called GitOps, we were looking at how to use Git as the source of truth, and everything sounds great, right? You're like, wait, everyone knows Git, Git is the source of truth, there's a load of robust tooling, this just makes sense. If everything dies, we can just apply the Git repo again. That would be great. And then you go through the trough of despair, right? We're like, oh no, none of this works, the application isn't stateless so this doesn't work, and what do we do with secrets, and how do we do this?
Like how do we get people access in the right place and then you realize everything is terrible again and then everything it equalizes and you're kind of, I think, you know, it sounds great on paper and they were absolutely fantastic things about it, but I think just having that measured approach to it, like it's, you know, I think when you put it best in the beginning where you do a and then there's a magic and then you get C. Right, like it's the magic, which is >>the magic is the mystery, >>right? >>Magic can be good and bad and in text so >>very much so yeah, so um concurrence with with john and ever uh in terms of what I like about it is the potential to apply it to moving security to left and getting closer to a more stable infrastructures code with respect to the whole entire environment. Um And uh and that reconciliation loop, it reminds me of what, what is old is new again? Right? Well, quote unquote old um in terms of like chef and puppet and that the reconciliation loop applied in a in a more uh in a cleaner interface and and into the infrastructure that we're kind of used to already, once you start really digging into kubernetes what I don't like and just this is in concurrence with the other Panelist is it's relatively new. It has um, so it has a learning curve and it's still being, you know, it's a very active um environment and community and that means that things are changing and constantly and there's like new ways and new patterns as people are exploring how to use it. And I think that trough of despair is typically figuring out incrementally what it actually is doing for you and what it's not going to solve for you, right, john, so like that's that trough of despair for a bit and then you realize, okay, this is where it fits potentially in my architecture and like anything, you have to make that trade off and you have to make that decision and accept the trade offs for that. But I think it has a lot of promise for, for compliance and security and all that good stuff. >>Yeah. It's like it's like the potentials, there's still a lot more potential than there is uh reality right now. I think it's like I feel like we're very early days and the idea of especially when you start getting into tooling that doesn't appreciate getups like you're using to get up to and use something else and that tool has no awareness of the concept so it doesn't flow well with all of the things you're trying to do and get um uh things that aren't state based and all that. So this is going to lead me to our first question from Camden asking dumb questions by the way. No dumb questions here. Um How is get apps? Not just another name for C. D. Anybody want to take that as an answer as a question. How is get up is not just another name for C. D. I have things but we can talk about it. I >>feel like we need victor foster kids. Yeah, sure you would have opinions. Yeah, >>I think it's a very yeah. One person replied said it's a very specific it's an opinionated version of cd. That's a great that's a great answer like that. Yeah. >>It's like an implement. Its it's an implementation of deployment if you want it if you want to use it for that. All right. I realize now it's kind of hard in terms of a physical panel and a virtual panel to figure out who on the panel is gonna, you know, ready to jump in to answer a question. But I'll take it. So um I'll um I'll do my best inner victor and say, you know, it's it's an implementation of C. D. And it's it's a choice right? 
It's one can just still do docker build and darker pushes and doctor pulls and that's fine. Or use other technologies to deploy containers and pods and change your, your kubernetes infrastructure. But get apps is a different implementation, a different method of doing that same thing at the end of the day. Yeah, >>I like it. I like >>it and I think that goes back to your point about, you know, it's kind of early days still, I think to me what I like about getups in that respect is it's nice to see kubernetes become a platform where people are experimenting with different ways of doing things, right? And so I think that encourages like lots of different patterns and overall that's going to be a good thing for the community because then more, you know, and not everything needs to settle in terms of only one way of doing things, but a lot of different ways of doing things helps people fit, you know, the tooling to their needs, or helps fit kubernetes to their needs, etcetera. Yeah, >>um I agree with that, the, so I'm gonna, since we're getting a load of good questions, so um one of the, one of the, one of the, I want to add to that real quick that one of the uh from the, we've people themselves, because I've had some on the show and one of things that I look at it is distinguishing is with continuous deployment tools, I sort of think that it's almost like previous generation and uh continuous deployment tools can be anything like we would consider Jenkins cd, right, if you if you had an association to a server and do a doctor pull and you know, dr up or dr composed up rather, or if it did a cube control apply uh from you know inside an ssh tunnel or something like that was considered considered C. D. Well get ops is much more rigid I think in terms of um you you need to apply, you have a specific repo that's all about your deployments and because of what tool you're using and that one your commit to a specific repo or in a specific branch that repo depends on how you're setting it up. That is what kicks off a workflow. And then secondly there's an understanding of state. So a lot of these tools now I have uh reconciliation where they they look at the cluster and if things are changing they will actually go back and to get and the robots will take over and will commit that. Hey this thing has changed um and you maybe you human didn't change it, something else might have changed it. So I think that's where getups is approaching it, is that ah we we need to we need to consider more than just a couple of commands that be runnin in a script. Like there needs to be more than that for a getups repo to happen anyway, that's just kind of the the take back to take away I took from a previous conversation with some people um >>we've I don't think that lost, its the last piece is really important, right? I think like for me, C d like Ci cd, they're more philosophical ideas, write a set of principles, right? Like getting an idea or a code change to environments promoting it. It's very kind of pipeline driven um and it's very imperative driven, right? Like our existing CD tools are a lot of the ways that people think about Cd, it would be triggered by an event, maybe a code push and then these other things are happening in sequence until they either fail or pass, right? And then we're done. Getups is very much sitting on the, you know, the reconciliation side, it's changing to a pull based model of reconciliation, right? 
Like it's very declarative, it's just looking at the state and it's automatically pulling changes when they happen, rather than this imperative trigger driven model. That's not to say that there aren't city tools which we're doing pull based or you can do pull based or get ups is doing anything creatively revolutionary here, but I think that's one of the main things that the ideas that are being introduced into those, like existing C kind of tools and pipelines, um certainly the pull based model and the reconciliation model, which, you know, has a lot in common with kubernetes and how those kind of controllers work, but I think that's the key idea. Yeah. >>Um This is a pretty specific one Tory asks, does anyone have opinions about get ops in a mono repo this is like this is getting into religion a little bit. How many repos are too many repose? How um any thoughts on that? Anyone before I rant, >>go >>for it, go for it? >>Yeah. How I'm using it right now in a monitor repo uh So I'm using GIT hub. Right, so you have what? The workflow and then inside a workflow? Yeah, mo file, I'll >>track the >>actual changes to the workflow itself, as well as a folder, which is basically some sort of service in Amman Arepa, so if any of those things changes, it'll trigger the actual pipeline to run. So that's like the simplest thing that I could figure out how to, you know, get it set up using um get hubs, uh workflow path future. Yeah. And it's worked for me for writing, you know? That's Yeah. >>Yeah, the a lot of these things too, like the mono repo discussion will, it's very tool specific. Each tool has various levels of support for branch branching and different repos and subdirectories are are looking at the defense and to see if there's changes in that specific directory. Yeah. Sorry, um john you're going to say something, >>I was just going to say, I've never really done it, but I imagine the same kind of downsides of mono repo to multiple report would exist there. I mean, you've got the blast radius issues, you've got, you know, how big is the mono repo? Do we have to pull does the tool have to pull that or cashier every time it needs to determine def so what is the support for being able to just look at directories versus you know, I think we can get way down into a deeper conversation. Maybe we'll save it for later on in the conversation about what we're doing. Get up, how do we structure our get reposed? We have super granular repo per environment, Perper out reaper, per cluster repo per whatever or do we have directories per environment or branches per environment? How how is everything organized? I think it's you know, it's going to be one of those, there's never one size fits all. I'll give the class of consultant like it depends answer. Right? >>Yeah, for sure. It's very similar to the code struggle because it depends. >>Right? >>Uh Yeah, it's similar to the to the code problem of teams trying to figure out how many repose for their code. Should they micro service, should they? Semi micro service, macro service. Like I mean, you know because too many repose means you're doing a bunch of repo management, a bunch of changes on your local system, you're constantly get pulling all these different things and uh but if you have one big repo then it's it's a it's a huge monolithic thing that you usually have to deal with. 
Path based issues of tools that only need to look at a specific directory and um yeah, it's a it's a culture, I feel like yeah, like I keep going back to this, it's a culture thing. Does your what is your team prefer? What do you like? What um what's painful for everyone and who's what's the loudest pain that you need to deal with? Is it is it repo management? That's the pain um or is it uh you know, is that that everyone's in one place and it's really hard to keep too many cooks out of the kitchen, which is a mono repo problem, you know? Um How do we handle security? So this is a great one from Tory again. Another great question back to back. And that's the first time we've done that um security as it pertains to get up to anyone who can commit can change the infrastructure. Yes. >>Yes. So the tooling that you have for your GIT repo and the authentication, authorization and permissions that you apply to the GIT repo using a get server like GIT hub or get lab or whatever your flavor of the day is is going to be how security is handled with respect to changes in your get ups configuration repository. So um that is completely specific to your implementation of that or ones implementation of of how they're handling that. Get repositories that the get ups tooling is looking at. To reconcile changes with respect to the permissions of the for lack of better term robot itself. Right? They get up tooling like flux or Argosy. D Um one kid would would create a user or a service account or uh other kind of authentication measures to limit the permissions for that service account that the Gaddafi's tooling needs to be able to read the repose and and send commits etcetera. So that is well within the realm of what you have already for your for your get your get um repo. Yeah. >>Yeah. A related question is from a g what they like about get apps if done nicely for a newbie it's you can get stuff done easily if you what they dislike about it is when you have too many get repose it becomes just too complicated and I agree. Um was making a joke with a team the other week that you know the developer used to just make one commit and they would pass pass it on to a QA team that would then eventually emerging in the master. But they made the commits to these feature branches or whatever. But now they make a commit, they make a pR there for their code then they go make a PR in the helm chart to update the thing to do that and then they go make a PR in the get ups repeal for Argo. And so we talked about that they're probably like four or five P. R. Is just to get their code in the production. But we were talking about the negative of that but the reality was It's just five or 4 or five prs like it wasn't five different systems that had five different methodologies and tooling and that. So I looked at it I was like well yeah that's kind of a pain in the get sense but you're also dealing with one type. It's a repetitive action but it's it's the one thing I don't have to go to five different systems with five different ways of doing it. And once in the web and one's on the client wants a command line that I don't remember. Um Yeah so it's got pros and cons I think when you >>I think when you get to the scale where those kind of issues are a problem then you're probably at the scale where you can afford to invest some time into automation into that. Right? Like what I've when I've seen this in larger customers or larger organizations if there ever at that stage where okay apps are coming up all the time. 
You know, there's a 10x, 100x ratio of developers to operations folks who may be creating Git repos and setting up permissions, so then that stuff gets automated, right? Like, you know, maybe ticket-based systems or whatever: developers say "I need a new app" and it templates things. Or, more often, using the same model of reconciliation and operators, and the horrific abuse of CRDs that we're seeing in the Kubernetes community right now, developers can create a custom resource which just says, hey, I'm creating a new app, it's called app A, and then a controller will pick up that app A definition. It will go create a Git repo programmatically, it will add the right teams, it will look up the developers and the permissions that need to be able to get to that repo, and it will create and template automatically some namespace in the clusters and environments that it needs, depending on some metadata it might read (a sketch of that kind of custom resource follows below). So I think those are definite problems, and they're definitely a teething, growing-pain thing. But once you get to that scale, you kind of need to step back and say, look, we just need to invest time into the operational aspect of this and automate this pain away, I think. Yeah. >> Yeah. And that ultimately ends in custom tooling, which is hard to avoid at scale. I mean, there are almost two conversations here, right? There is what I call the solo admin, solo DevOps. I bought the domain solodevops.com because, you know, whenever I'm talking at DockerCon in the real world, I ask people to raise hands, and I don't know how we can raise hands here, but I would ask people to raise hands to see how many of you are the sole person responsible for deploying the app that your team makes, and like a quarter of the room would raise their hand. So I call that solo DevOps. That person can't make all the custom tooling in the world, so they really need Docker-like solutions where it's opinionated, the workflow is sort of built in, and they don't have to wrangle things together with a bunch of glue, in other words, bash. And so this kind of comes to a conversation starting with this question from Lee. He's asking, how do you combine GitOps with CI/CD, especially the continuous bit? How do you avoid having a human, and this is sort of the complaint the team I was working with has, how do you avoid a human editing and git committing for every single deploy? They've settled on customized templates and a script for routine updates. So as a seed for this conversation, instead of that specific question, because it's a little open-ended, tell me whether you agree with this. I kind of look at the image artifact, because the Docker image, or container image in general, is an artifact, I view it that way, and that thing going into the registry with the right tag (the tag rather than the label), that to me is one of the great demarcation points of "we're kind of done with CI and we're now into the deployment phase." It doesn't necessarily mean the tooling has a clean cut there, but that artifact being shipped in a specific way, or promoted, as we sometimes say. What do you think? Does anyone have opinions on that? I don't even know if that's the right opinion to have.
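The app-onboarding automation described above is usually built as a small custom resource plus a controller. A minimal sketch of what such a resource might look like follows; the API group, kind, and fields are entirely hypothetical, not a real project's API.

```yaml
# A developer commits this; a (hypothetical) controller reacts by creating the
# Git repo, wiring up team permissions, and templating namespaces per environment.
apiVersion: platform.example.com/v1alpha1
kind: App
metadata:
  name: app-a
spec:
  team: payments             # looked up to grant repo access
  environments: [dev, prod]  # namespaces/clusters to template
  repoTemplate: default      # which starter repo layout to generate
```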
>> So, um, I think what you're getting at is that GitOps models can trigger off of different events to kick off the reconciliation loop. One way to do that is if it notices an image change in the registry; the other is if there's a commit event on a specific repo and branch. And it's up to you, or up to the person implementing their GitOps model, which event to trigger that reconciliation loop off of. You can do both, or one or the other. It also depends on the templating engine that you're using on top of Kubernetes, such as Helm, or the other ones that are out there, or if you're not even doing that, then straight YAML. So it kind of just depends, but those are typically the two options one has, and a combination of those, to trigger that event. You can also trigger it manually, right? You can go into the command line and force a scan, or a new reconciliation loop, to occur. So, I don't want to say this, but it depends on what you're trying to do and what makes sense in your pipeline. If you're set up so that you're doing it based off of image tags, then you probably want to use GitOps in a way that uses the image tags and the pattern you've established there. If you're not really doing that, and you're more around different branches mapped to different environments, then trigger off of the correct branch. And that's where the permissions also come into play: if you don't want someone to touch production, and you've got GitOps for your production cluster based off of a main branch, then whoever can push a change to that main branch has the authority to push that change to production, right? So that's your authentication and permissions system, and the same goes for the registry itself. Right. So, >> Yeah. Yeah. Sorry, anyone else have any thoughts on that? I was about to go to the next topic. >> I was going to say, I think certain tools dictate the approach. Like, if you're using Argo CD, I think, and correct me if I'm wrong, the only way to use it right now is through manifest changes: it looks at a specific directory, and if anything changes, then it will do its thing and synchronize the cluster with whatever's in Git (sketched below). >> Yeah, Flux has both. Yeah, and Flux has both. So it kind of depends. I think you can make Argo do that too, but this is back to what we were saying in the beginning: these things are changing, right? So that might be what it is right now in terms of triggering the reconciliation loops in GitOps tooling, but there might be other events in the future that trigger it. And it's not completely standalone, because you still need your tooling to do any kind of testing or whatever else you have in your specific pipeline. So oftentimes you're bolting GitOps into some other part of a broader CI/CD solution. That makes sense. Yeah. >> We've got a lot of questions about secrets; people are asking about secrets. >> So my tongue-in-cheek answer to the secrets question was: what's the best practice for Kubernetes secrets? That's the same thing for secrets with GitOps. GitOps, last time I checked and last time I was running this stuff, has nothing to do with secrets in that sense.
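A minimal sketch of the branch-and-directory style of trigger just described, using Argo CD's Application resource. The repo URL, path, and namespaces are hypothetical; image-tag-driven updates would come from separate image-automation tooling rather than this object.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: orders-prod
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/platform-config
    targetRevision: main       # the branch mapped to this environment
    path: envs/prod/orders     # the directory watched for manifest changes
  destination:
    server: https://kubernetes.default.svc
    namespace: orders
  syncPolicy:
    automated:
      prune: true      # remove resources deleted from Git
      selfHeal: true   # revert manual drift back to the Git state
```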
It's just there to get your stuff running on Kubernetes. So there's probably a really good session on secrets at DockerCon. >> I would agree with you, I agree with you. Yeah, I mean, GitOps tools, I mean, every project of mine handles secrets differently. And I think, talking to someone recently, I'm very bullish on GitHub Actions, I love GitHub Actions. It's not great for deployments yet, but we do have this new thing, GitHub Environments, I think it's called, so it allows me at least to store secrets per environment, which it didn't have the concept of before. Which, you know, if any of you are running Kubernetes out there, you typically end up, when you start running Kubernetes, with more than one Kubernetes. You're going to end up with a lot of clusters at some point, at least multiple, more than two. And so you do have to store secrets somewhere. There's a discussion happening in chat right now where people are talking about Sealed Secrets, which, if you haven't heard of that, go look it up and be versed on what Sealed Secrets is, because it's a fantastic concept for how to store secrets in the public. I love it because I'm a big PKI nerd, but it's not the only way and it doesn't fit all models. So I have clients that use AWS Secrets Manager because they're in AWS, and then they just have to use the Kubernetes external secrets approach. But again, you know, that doesn't really affect GitOps. GitOps is just applying whatever Helm charts or YAML or images you're deploying; GitOps was more about the approach of when the changes happen and whether it's a push or pull model, like we're talking about. And, you know, >> I would say there's a bunch of prerequisites to GitOps, secrets being one of them, because the risk of putting a secret into your Git repo, if you haven't figured out your Kubernetes secrets architecture and you start diving into GitOps, is high. And removing secrets from Git repos could be its own industry, right? It's >> a thing, >> how do >> I hide this? How do I obscure this commit that's already now on a dozen machines? >> So there are some prerequisites in terms of when you're ready to adopt GitOps, I think, is the right way of saying the answer to that, secrets being one of them. >> I think secrets was the thing that, two or three years ago, made me kind of see the aha moment when it came to GitOps. The premier thing that everyone used to say about GitOps, about why it was great, was that it's the single source of truth: there's no state anywhere else, you just need to look at Git. And then with secrets you maybe realize, along with a bunch of other things down the line, that that is not true and will never be true. So as soon as you can lose the dogmatism, everything is going to be in Git is fantastic as long as you've understood that everything is not going to be in Git. There are things which will absolutely never be in Git; some tools just don't deal with that, they need to own their own state, especially in Kubernetes, where some controllers own their own state. You know, because Sealed Secrets, and other projects like SOPS, and I think there are two or three others, are a great way of dealing with secrets if you want to keep them in Git.
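For anyone who has not seen Sealed Secrets, here is a minimal sketch of the idea mentioned above: the secret is encrypted against the cluster controller's public key, so the encrypted form can safely live in the Git repo. The names and ciphertext are placeholders.

```yaml
# Generated with `kubeseal` from an ordinary Secret; only the controller in the
# target cluster holds the private key needed to turn it back into a Secret.
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: orders-db-credentials
  namespace: orders
spec:
  encryptedData:
    password: AgBy3i4OJSWK...   # placeholder ciphertext
  template:
    metadata:
      name: orders-db-credentials
      namespace: orders
```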
But, you know, projects like Vault are more what I would call production-grade secret strategies, right? And if you're in AWS or another cloud, you're more likely to be using their secrets service. Your secret policy is maybe not dictated by you; in large organizations it might be dictated by a CISO or security. Like, I think if you're trying to adopt GitOps, or you're thinking about it, get the dogmatism of Git as a single point of truth out of your mind and think about GitOps more as a philosophy and a set of best-practice principles; then you will be in much better stead. >> Right? Yeah. >> People are asking more questions in chat, like: infrastructure as code plus CD is essentially GitOps, or CI rather. These are all great questions and part of the debate. I'm actually just going to throw this up on screen, and I'm going to put it in chat, but this is, to me, the source, right? We worked with Weaveworks when they coined the term. If we talk about the history for a minute, and tell me if I'm getting this right, a lot of us were trying to automate all these different parts of the puzzle, but some things might have been infrastructure as code, some things weren't, and some things were sort of settings as code, like you're going into Jenkins and typing in secrets and settings, or typing a certain thing in the settings of Jenkins, and that wasn't really in Git. And so what Weave was trying to go for was a way to have, almost eventually, a two-way state understanding, where Git might change your infrastructure, but your infrastructure might also change and needs to be reflected in Git, if Git is trying to be the single source of truth. And like you're saying, the reality is that you're never going to have one repo that has all of your infrastructure in it; you would have to have all your Terraform, anything else you're spinning up, right? But anyway, I'm going to put this link in chat. One of the things this guide talks about is what GitOps is not, so it's kind of great to read through the different requirements. And like I was saying a while ago: having CI, having infrastructure as code, and then trying a little bit of continuous deployment is probably a prerequisite for GitOps. It's hard to just jump into it when you don't already have infrastructure as code, because a machine doing stuff on your behalf means that you have to have things documented somewhere in a Git repo. But let me put this in the >> chitty chat. I would like to know if the other panelists agree, but I think GitOps is, okay, I would say a moderate level. It's not a beginner-level Kubernetes thing; it's a moderate, a little more advanced level. One can start off using it, but you definitely have to have some prereqs in place, or some understanding of a pattern in place. So what do the other folks think about that opinion? >> I think if you're trying to use GitOps before you know what problem you have, you're probably going to be in trouble, right? It's like having a solution to a problem you don't have yet. Right? I mean, if you're just one developer and you're just typing kubectl apply, you're one person, right, GitOps doesn't seem like a big jump. Like, why would I do that? Instead of typing a git commit, I'm typing kubectl apply.
But I think one of the rules from Weave is that none of your developers and none of your admins can have kubectl access to the cluster, because if you do have access and you can just apply something, then that's just infrastructure as code, that's just continuous deployment, that's not really GitOps. GitOps implies that the only way things get into the cluster is through the GitOps automation that you're using, you know, Flux, Argo, and we haven't talked about, what's the other one that Viktor Farcic talks about? By the way, people are asking about Victor, because Victor would love to talk about this stuff, but he's in my next live session, so come back in an hour and a half or whatever and Victor is going to be talking about sysadmin-less with me. >> You've got to ask him nothing but GitOps questions in the next one. >> Confuse him, confuse him. But anyway, it's hard to understand without having tried it; I think conceptually it's a little challenging. >> One thing with GitOps, especially based off the Weaveworks blog post that you just put up there: it's an opinionated way of doing something. You know, it's an opinionated way of delivering changes to an environment, to your Kubernetes environment. So it's opinionated. We're often not used to seeing things that are very opinionated in this sense in the ecosystem, but GitOps is an opinionated thing. It's one way of doing it. There are ways to change it, and there are options, like what we were talking about in terms of the events that trigger, but the way it's structured is an opinionated way, both from a tooling perspective, like using Git and so on, but also from a DevOps cultural perspective, right? Like you were talking about not having anyone use kubectl and change the cluster directly. That's a philosophical opinion that GitOps forces you to adopt; otherwise it kind of breaks the model. And I just want everyone to understand that it is very opinionated in that sense. Yeah. >> Pulumi is another thing, infrastructure as code; someone's mentioning Pulumi in chat. I just had, actually, on my live show, self-plug, go there, I'm on YouTube every week, I do the same thing, these are my friends, we had Pulumi on two weeks ago, or last week, it was in the last couple of weeks, and we talked about their infrastructure-as-code solution, where you're actually writing code. It's an interesting take on the developer team sort of owning the infrastructure through code rather than YAML as a data language. I don't really have an opinion on it yet, because I haven't used it in production or anything in the real world, and I'm not sure how much they're trying to go toward the GitOps stuff. I will do a plug for Solomon Hykes, who had a talk at the beginning of the day; it's already happened, so you can go back and watch it. It's called "Rethinking application delivery with CUE and BuildKit." So go look this up. This is the co-founder of Docker and former CTO, Solomon Hykes, at the beginning of the day. He has a tool called Dagger. I'm not sure why the title of the talk is about delivering with CUE and BuildKit, but the tool he shows off in there for an hour is called Dagger, and it's an interesting idea on how to apply a lot of this opinionated, automated stuff to deployment. It's GitOps-based, and you use the CUE language.
It's a graph language. I watched most of it and it was a really interesting take. I'm excited to see if that takes off and if they try that, because it's another way you can get a little more advanced with your Git-based deployments without having to just stick everything in YAML, which is kind of where we are today with Helm charts and whatnot. All right, more questions about secrets. I think we're not going to do a whole lot more about secrets, basically: put secrets in your cluster to start with, in Kubernetes' encrypted, you know, thing. And then, as it gets harder, you have to find another solution. When you have five clusters, you don't want to have to do it five times; that's when you have to go for Vault and AWS Secrets and all >> that. Right? I'm going to Post-it note it. Yeah, cram it into the cluster. Just kidding. >> Yes, there are recordings of this; yes, they will be available later, because these are all going to be on YouTube later. Yeah, a question in chat says detect-secrets or GitGuardian are absolute requirements; I think that's in reference to your secrets comment earlier. Camel is asking about Kubernetes dropping support for Docker. This is not the place to ask that, but basically it's a non-event: Mirantis has actually just made that same plugin available in a different repo. So if you want to keep using Docker and Kubernetes, you can; it's no big deal. Most of us aren't using Docker in our Kubernetes anyway; we're using containerd or whatever is provided to us by our provider. Yeah, thank you so much for all these comments; this is great, people helping each other in chat. I feel like we're just here to make sure the chat is available so people can help each other. >> I feel like I want to pick up on something, when you mentioned Pulumi. We're talking about GitOps, but I think the origination of that, I guess, was deploying applications to clusters, right, picking up deployment manifests. But with Pulumi, and obviously Terraform and things that have been around a long time, folks are starting to apply this; I think I found one earlier which was Kubestack, the Terraform GitOps framework. But also, with the advent of things like Cluster API in the Kubernetes space, where you can declaratively build the infrastructure for your clusters and build the cluster, right, we're not just talking about deploying applications. Cluster API will talk to AWS, spin up VPCs, spin up machines; it will do the same kinds of things that Terraform and those other tools do. I think applying GitOps principles to the infrastructure spin-up, the proper infrastructure-as-code stuff, constantly applying Terraform plans and whatever, constantly applying Cluster API resources, spinning up stuff in those clouds, is a super interesting extension of this area. I'd be curious to hear what the folks think about that. >> Yeah, that's why I picked this topic as one of my three. I got to pick the topics, and these are the three things that are the most bleeding edge and exciting; we haven't basically figured all this out yet, we as an industry, so I think we're going to see more ideas on it. What's the one with the popsicle as the icon that Victor talks about all the time?
It's another GitOps-like tool, but it's GitOps for, you use this Kubernetes thing, and we'll have to look it up. >> You're talking about Crossplane. >> So >> my >> wife is over here with the sound effects, and the first sound effect of the day that she chooses to use is that one. >> All right, can we pick another? Let's find another question, Bret. >> I'm searching, >> so many of them. All right, so I think one really quick one is: is GitOps only for Kubernetes? I think the main two tools that we're talking about are Argo CD and Flux, and they're mostly geared toward Kubernetes deployments, but it seems like they're organized in a way that there's a clean abstraction with respect to the agent that's doing the deployment and the tooling it can interact with. So I would imagine that in the future, and this might be true already, GitOps could be applied to other types of deployments at some point. But right now it's mostly focused on, and treats, Kubernetes as a first-class citizen, or the tooling on top of Kubernetes, say something like Helm, as a first-class citizen. Yeah, back to you, Bret. >> To me, the thing I was looking for is Crossplane. So that's another tool. Victor has been sharing a lot about Crossplane on YouTube, and it basically runs inside Kubernetes, but it handles your other infrastructure besides your app. It allows you to, like, GitOps your AWS stuff by using the Kubernetes state engine as a way to manage that. I have not used it yet, but he does some really great demos on YouTube. So people are liking this idea of GitOps, and they're trying to figure out: how do we manage state? Because the problem with Terraform is, well, there are many problems, there are always a lot of problems, but in the GitOps world it's not quite the right fit yet. It might be, but it's still largely expected for people to, you know, type the command, and it keeps state locally, or in S3, clouds and all that. And the other thing is, I'm now realizing that when I saw the demo from Solomon, going back to the Solomon Hykes thing, he was showing it deploying something to S3 buckets, deploying to Netlify, and deploying to Google, other things beyond Kubernetes, and saying that it's all a GitOps approach. So I think we're just at the very beginning of seeing this, because it all started with Kubernetes, and now there's a Swarm one; you can look up Swarm GitOps, and there's one, I can't remember the name of it, Swarm Sync I think it's called, Swarm Sync on GitHub, which allows you to do Swarm-based GitOps-like things. And now we're seeing these other tools coming out saying, we're going to try to do the GitOps concepts, but not for Kubernetes specifically. And I think, you know, infrastructure as code started with certain areas of the world, and now we all just assume that you're going to have an infrastructure-as-code way of doing whatever that is. I think GitOps is going to have that same approach, where pretty soon we'll have GitOps for all the cloud stuff, and it won't just be Flux or Argo. And then the weird thing is, will Flux and Argo support all those things, or will they just be focused on Kubernetes apps, you know, Kubernetes stuff? >> There's also, I think this is what you're alluding to.
There is a trend of using Kubernetes and CRDs to provision and control things that are outside of Kubernetes, like the cloud service providers' services, as if they were first-class entities within Kubernetes, so that you can use the Kubernetes-focused tooling for things that are not Kubernetes, through the Kubernetes interface. Yeah, >> yeah, >> yeah, I was just going to say that sounds like Crossplane. >> Yeah, yeah. I mean, I think that's, you know, for the last couple of years it's been Flux and Argo going back and forth. They're like frenemies, and they've been going back and forth iterating on these ideas of how to manage this complicated thing that is many Kubernetes clusters. Because, like, Argo, I don't know if Flux v2 can do this, but Argo can manage multiple clusters now from one cluster, so you can manage other clusters, technically external things, from a single entity. Originally Flux couldn't do that, but I'm going to say that v2 can; I don't actually >> know. Um, I think all that is going to consolidate in the future. All right, in terms of the common feature set, Iver and John, what do you think? >> I mean, I think it's already begun, right? Didn't they collaborate on a common engine? I don't know whether it's finished yet, but I think they were working toward a common GitOps engine, and then they're just going to layer features on top. But I think that's interesting, because where it runs and what it interacts with, if we're talking about a pull-based model, is decentralized to a certain extent, right? We need Git and we need the agent which is pulling. If we're saying there's something else which is orchestrating something, then we start to fuzzy the model, right? Like, is this state living somewhere else? I think that's just interesting as well. I thought Flux was completely decentralized, but I know you install Argo somewhere, like Argo has a server as well, though it's been a while since I've looked in depth at them. But, you know, does that muddy the agent-only pull model? >> I'm reading a, >> Yeah, I would say there's a process of natural selection going on as the CNCF landscape evolves and grows bigger, and a lot of divide and conquer right now. But I think as certain things get more prominent >> and popular, I think >> they start to trend and inspire other things, and then it starts to aggregate and get back into a unified kind of core. Like, for instance, Crossplane: I feel like it shouldn't even really exist. It's like a Kubernetes add-on, but it should be built in, it should be built into Kubernetes. Like, why doesn't this exist already >> for, like, controlling a cloud? >> Yeah, just having this interface with the cloud provider and being able to, yeah, >> exactly. Yeah, and that kind of happens, because when you start talking about storage providers and networking providers, there are very specific implementations of operators, or just individual controllers, that do operate and control other resources in the cloud, but certainly not universally, right? Not every feature of AWS is available to Kubernetes out of the box. And, you know, one of the challenges of Crossplane is that you've got to have Kubernetes before you can deploy Kubernetes.
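To illustrate the CRDs-for-cloud-services trend just described, here is a rough sketch of a Crossplane-style managed resource, an S3 bucket declared as a Kubernetes object. The exact API group and fields vary by provider and version, so treat this as an assumption rather than a copy-paste example.

```yaml
# Hypothetical example: the bucket lives in AWS, but its desired state is a
# Kubernetes resource reconciled by the Crossplane AWS provider.
apiVersion: s3.aws.crossplane.io/v1beta1
kind: Bucket
metadata:
  name: example-artifacts
spec:
  forProvider:
    locationConstraint: us-east-1
  providerConfigRef:
    name: default   # cloud credentials configured separately
```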
Like, there's a chicken-and-egg issue there, where if you're going to use Crossplane for your other infrastructure, it has to run on Kubernetes, so who creates that first Kubernetes in order for you to put that on there? And Victor talks, in one of his videos, about the same problem with Flux and Argo: you can't deploy Argo itself with GitOps. There has to be that initial "I'm a human and I typed in some commands on a server and things happened." They don't really have an easy deployment method for getting Argo up and running using nothing but a git push to an existing system, or something like that. So it's an interesting problem of day-one infrastructure, which is, again, only day one; I think day two is way more interesting and hard. But how can we spin these things up if they're all depending on each other, and who is the first one to get started? >> I mean, it's true of everything, though. At the end of it you need some kind of big-bang function to start running, to start everything. I >> think, without going off on a tangent, I was going to say: if folks have heard of kind, which is Kubernetes in Docker, a mini Kubernetes cluster you can run in a Docker container, where each container runs as a node, that's been a really good way to spin up things like Cluster API clusters, because they bootstrap a local kind cluster, install the manifests, it goes and spins up a full-sized cluster, it transfers its resources over there, and then it deletes itself, right? So that's kind bootstrapping itself. And I think a couple of folks in the community, Jason DeTiberus, I think he works for Equinix Metal, have experimented with an even more minimal, just an API server, so we're really just leveraging the Kubernetes ideas of a reconciliation loop and a controller: we just need something to bootstrap with those CRDs and get something going, and then go away again. So I think that's going to be a pattern that comes up more and more (there's a sketch of that bootstrap pattern below). >> Yeah, for sure. And a quick answer to the next question: Angel asked, what are your thoughts on GitOps being tied to Git versus other VCS tools? Well, if I knew anyone who was using anything other than Git, I would say, you know, GitOps is a horrible name. It should just be VCS ops or whatever, but that doesn't roll off the tongue, so someone had to come up with the GitOps phrase. But absolutely, it's all about version control solutions used for infrastructure, not code. Someone in chat asks a great question we're not going to have time for, but maybe people can reply in chat with what they think, about infrastructure and code, the lines being blurred, and how much infrastructure developers need to know. Essentially, they're having to know all the things. So, unfortunately, like every panel here today with this great community, we've got way more questions than we can handle in this time. So we're going to have to wrap it up and say goodbye. Go to the next live panel; I believe the next one is on developer-specific setups, with Peter running that panel, something about development in containers, and I'm sure it's going to be great, just like this one.
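A minimal sketch of the kind-based bootstrap pattern described above: a throwaway local cluster that exists only to run Cluster API long enough to create the real one and hand resources over. The provider and file names are assumptions.

```yaml
# bootstrap.yaml -- hypothetical kind config for the temporary bootstrap cluster.
# Typical flow: `kind create cluster --config bootstrap.yaml`, then
# `clusterctl init --infrastructure aws`, apply the Cluster API manifests for the
# long-lived cluster, and finally `clusterctl move` to pivot the management
# resources onto it before deleting the kind cluster.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
```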
So let's go around the room: where can people find you on the internet? I'm @BretFisher on Twitter; that's where you can usually find me most days. You are? >> Yeah, I'm on Twitter too; I'll put it in the chat. It's kind of confusing because of the "TSR seven." >> Okay. Yeah, that's right, you can't just say it. You can also look below the video; our faces are there, and if you click on them, it shows our Twitter and LinkedIn and stuff. John? >> John Harris 85, pretty much everywhere: GitHub, Twitter, Slack, etc. >> Yeah, >> and Nirmal, @normalfaults, or just, you know, living on YouTube Live with Bret. >> Yeah, we're all on the Twitter, so go check us out there, and thank you so much for joining. Thank you so much to you all for being here; I really appreciate you taking time out of your busy schedules to join me for a little chit-chat. Yes, all the cheers, yes. >> And I think this GitOps loop has been declaratively reconciled. >> Yeah, there we go. And with that, ladies and gentlemen, we bid you adieu; we will see you in the next round, coming up next with Peter. >> Bye.
Amr Abdelhalem, Fidelity Investments | KubeCon + CloudNativeCon NA 2019
>> Announcer: Live from San Diego, California, it's theCUBE! Covering KubeCon and CloudNativeCon. Brought to you by Red Hat, the Cloud Native Computing Foundation, and its ecosystem partners. >> Welcome back. I'm Stu Miniman, my cohost is John Troyer, and this is theCUBE's fourth year of coverage of KubeCon, CloudNativeCon 2019. We're here in San Diego and happy to welcome to the program a first-time guest, Amr Abdelhalem, who is the head of Cloud Platforms at Fidelity Investments. Of course, Fidelity, we love talking to an end user. Big financial company. Your boss was up on the main stage in front of 8000 people, just in that room, and there's over 12,000 here in person. Fidelity itself, you know, founded in 1946, first computers in 1965. In the last year, you've now got over 500 applications running in the public cloud, and Fidelity also joined the CNCF. So let's start there, Amr, if we could. How does Fidelity look at Kubernetes and the CNCF? How does that fit into your company's mission? >> Absolutely, and thank you so much for inviting me here. Innovation at Fidelity is a big part of the process. We're very focused at this time on cloud computing and machine learning and AI technology. We had the first financial robo-advisor in 2015, I believe. We had the first augmented reality financial advisor, which was actually released this year as a prototype. So as part of that innovation, we see CNCF and cloud computing and Cloud Native as key to the strategy for our innovation. >> All right, maybe if you could, give us a little bit of the breadth and depth of your team, what they cover, cloud platforms. What does that mean inside of Fidelity? >> Sure. Fidelity has over 10,000 people in IT, hundreds and hundreds of development teams, thousands of applications. It's globally distributed. It has all kinds of workloads that you can imagine, and it's in a highly regulated environment as well. And that's where we are seeing that we are all looking for this autonomy between teams, and agility, and improved time to market and customer experience. The key for that is Cloud Native. We see Kubernetes and CNCF and Cloud Native technology as a key player for us as we go to a multicloud, hybrid cloud model. >> Can you talk a little bit more about that portfolio of technologies? You know, there's a lot of talk about public cloud versus on-prem, as if one thing, one knife, is going to be the only thing you need in your kitchen. >> Amr: Right. >> So you have a portfolio of platforms, you have a portfolio of destinations and a portfolio of applications. Can you talk a little bit, both about what you're using, and maybe how you're organized to access and address all those needs? >> Absolutely. So I think 2019, I would say, is the year of multicloud and hybrid cloud modeling, right? Actually, I would say that 2020 is going to be more about distributed cloud, where you can distribute your workload across multicloud providers. We're not there yet. I don't think anyone is there yet. But at least we should start somewhere. We already have multiple cloud providers and are distributing the workload itself between them. I mean, it's a journey to move thousands of applications, thousands of workloads, and data as well, from on-premises data centers to a public cloud. You need to move through this journey of hybrid cloud models, and be able to move apps over gradually. >> All right. Amr, I want to dig into what you talked about there, multicloud. >> Sure.
>> So when you talk about multiple clouds, yes, everybody has that. Walk us through a little bit, you know, where you have workloads and how many public clouds you're using, but I want to set you up with a premise. You know, we've really said, for multicloud to really be a reality-- >> Amr: Right. >> The value that you extract should be greater than the sum of its parts. And most of us lived through the multi-vendor years, and that wasn't necessarily happiness and joy when I had to span those environments. So how do we make sure that multicloud doesn't become the least common denominator, or a detriment to what I need to do with my data, my applications, the value that the company has? >> And that's why we are here. That's actually why we're at KubeCon, for that reason. That's where we see this abstraction layer that guarantees you the portability to move your application from one cloud provider to another: the capability to deploy the same workload into multiple clouds, the ability to have the workload itself managed with different characteristics alongside the services that you will find in AWS versus Azure versus Google Cloud and the others. That's where we need that flexibility, and Kubernetes and Cloud Native give us the ability to have the same deployable structure for your application, the ability to have the same ecosystem around that construct, around that artifact. The ability to move all of that, as-is, from one cloud provider to another cloud provider is a big, big key. And that you can only find with Cloud Native. >> All right, Amr, can you share which cloud or clouds you're working on today, and what is your roadmap? Do you have a timeline for when that vision becomes reality? >> At this moment, we're with the major cloud providers; you guys can name them, all the colors. >> Stu: You're using all of them, okay. >> All the colors. >> And how are you using Kubernetes today? Where are you in that journey? >> So Kubernetes, I would say the majority is still running on premises. We are very intensively moving to public cloud on the Kubernetes side. At this moment, actually, we're building an offering inside my team, which is the cloud platform team. That offering will guarantee that portability between all the cloud providers. So for a development team porting to our platform, it will be kind of seamless for them where it's going to land, whether it's going to be landing in AWS or Azure or on premises. >> Okay, joining the CNCF as a member, bring us inside. I understand the journey. Are there any specific goals you have? How do you measure the investment, and what are you hoping, both as a company and as part of the community, to get out of it? >> So we have a big goal right now to open source a project, our little project about multicloud, and our focus is mainly on the highly regulated part. We're very focused on compliance and security, and in that way, I think, we can contribute back to the open source community.
So that implies that you've changed process, and you've changed, maybe, to be able to build cloud native apps, and that was actually separate, in some cases, from being in the public cloud. Is that the case? Can you talk a little bit about how you've approached it, from the perspective of people who are listening or watching who are IT admins and wondering how a company, a major organization like yours, gets there? >> Right, and this is the main challenge. The challenge is not on the technology side itself, or the tools; those are mostly there in the ecosystem at this moment. The challenge is mainly building the culture inside teams. So we're building many, like, starting points, or COEs, across all of our business units and all of our teams. And again, to build a culture across 10,000-plus developers, that's major. >> And it's funny, because sometimes people go, well, COE is a dirty word, right, don't do a COE, but you said multiple COEs distributed across. >> So it's like a chain reaction: our COEs, the first one, will communicate with a few COEs, each one of them will work with other COEs, and that's how that chain will go and expand quite quickly. >> All right. >> And this is happening at this moment. >> So, Amr, I have a few friends for whom this is the first time they've come, and they go into the keynote, or they look at the schedule, and they're a bit overwhelmed. >> Amr: Right. >> They say, it's not just Kubernetes, there are dozens and dozens of projects. The ecosystem is sprawling. If you could, give us a little walkthrough of the projects you're using, and any key partners that you're allowed to talk about that are useful in helping you achieve your mission. >> So we're very focused at this moment, actually, on the Kubernetes project itself. We've started exploring some of the open source projects on the CI/CD side; in addition to that, we are starting to use a few frameworks like Flux, which is one of the frameworks for GitOps in general, building this culture of GitOps deployment and moving toward more GitOps-style deployment. That's one of the areas we are very invested in. We're exploring service mesh at this time, and I hope, maybe next year, we can talk about service mesh more. >> Yeah, is there something that's holding you back on service mesh, because there are a few options out there at various maturity levels, and it matters who's driving them. What will some of your criteria be? >> I would say it's mainly that I'm waiting a little bit more. I feel like it's 2014 for me: if we were having this discussion in 2014, instead of sitting here, you would be discussing Mesos versus Kubernetes versus Swarm. So I think things are still moving at this time on service mesh as well. >> Any partners that you can speak to from a technology standpoint that are helping you, that you're allowed to talk about? >> Amr: Well, I mean, first of all, CNCF. >> Yeah. >> I greatly appreciate all their help in that. Most of the public cloud providers are helping us in these areas as well, yeah. >> I'll be interested in catching you after the show and seeing how you thought it went. I mean, this was, in some ways, a science project a few years ago, and now it's this robust thing. Did you bring, I'm curious, mostly engineers, mostly managers, a mix of the two? >> Amr: Mostly engineers, yeah, mostly engineers. >> Hands on? >> All hands on. I mean, this is like another change in culture right now, where most of our engineers are in innovation; they are full stack engineers.
We're using a VDI process at this moment to move forward. All our roadmaps are published internally; it's an evolving process, going with continuous deployment and continuous feature enhancement for the teams. So it's fantastic, honestly, yeah. >> Okay, Amr, what does your team hope to achieve this week? Anything that is on your roadmap, or on the public open source roadmap, that you're waiting on? We talked a little bit about service mesh. >> We're definitely exploring OPA at this moment. I think there's big potential there, so that's one of them, yeah. And I think going through the show floor and trying to see what options we have as well; that's an area where we're going to be very interested. >> OPA, the Open Policy Agent. I mean, you talked about compliance before. >> Yeah. >> A few years ago, with folks in the financial industry, you would have some arguments, some discussions, sometimes heated discussions, about security in the cloud and so on, in a highly regulated industry. Yet, maybe ironically, or surprisingly for some, the whole industry is very advanced in many areas. That's well known if you're in it. Do you still have to have discussions about compliance and security in the cloud? Maybe, I guess, more when you talk about data locality and international borders? >> Right, and that's why we already have our own policy management tool, which we built ourselves. And that's where I see the potential: moving from building it yourself to using an open source project, reusing it, and contributing back to that open source community, with something like OPA, for example. So that's the next generation, where I can see it helping us as well. >> Amr, any advice you'd give your peers out there if they're new to the community? Things you've learned along the journey so far? >> I would say start small, don't boil the ocean. Start with small COEs, small pilot programs. Look for success, look for goals. Technology is great, but don't just chase the technology, because it's a moving target, it will never end. Set business goals, targets for your project, and that's how you can achieve success. >> Well, Amr, really appreciate you sharing Fidelity's update. >> Thank you. >> Wish you and your team the best of luck here at the show and beyond, and we definitely hope to catch up soon. >> Thank you, I appreciate it. >> All right, for John Troyer, I'm Stu Miniman. Be sure to check out theCUBE.net for all of the coverage of this, as well as all the cloud, Cloud Native, and other shows that we have. Thank you for watching theCUBE. (upbeat electronic music)
Gary Cifatte, Candy.com | Boomi World 2019
>> Live from Washington, D.C., 
>> it's theCUBE, 
covering Boomi World 19. Brought to you by Boomi. 
>> Hey, welcome back to theCUBE. We've got candy. That's right, I am Lisa Martin in Washington, D.C., at Boomi World 19 with John Furrier, and John and I are excited to be talking next with the chief technology officer of Candy.com. Gary, welcome to theCUBE. >> Thank you for having me, great to be here. >> So tell our audience about Candy.com, any candy that you want, dot com. >> Cool stuff. >> It is cool stuff. It is the endless aisle, just like going to the supermarket, and the aisle never runs out. It's absolutely perfect. That's actually how we started: knowing that there was so much candy out there that people wanted, and the aisles just weren't long enough to put it all in, no matter where you checked out. We started off being the online candy store, which was a foot in the door, but it was a very small opening at that time. >> One of the things you said when I met you today, while eating candy that you guys brought, thank you very much for that, was very appropriate: that candy is recession-proof. >> It is. You know, good times, bad times, people are going to have birthday parties, people get married, holidays, they're going to come. You know, you've had a really great day? It's a candy bar. You've had a really bad day? It's the candy bar. It's an impulse buy, but it's an impulse buy with your favorite. I mean, it's something to comfort more than anything else, actually. >> And on the technology side, talk about how you guys are organized. What are some of the challenges, and how does Boomi fit in? Take us through the journey. >> Sure. When we started out, we thought, how hard could it be? We'll do data entry, we'll get the orders, they'll come across, we'll have some people key them into the system, we'll start fulfilling, and then everything else will take care of itself. And within about a few minutes, we realized that that was probably not going to work. It was not scalable, because first of all, data entry is error-prone; if you have someone actually trying to do it by hand, it's not going to work for us. So we realized that there was a mechanism out there with EDI, and we went to a third-party provider to help us with the EDI. And that's how we started with the first couple of integrations, and it was good. It got us off the ground and got us further into that door. >> So you started with, um, how many different trading partners? Take us back through the last 10 years of Candy.com and how that trading partner network has grown. >> Oh, like any journey, it starts with the first step. We had one that was interested, one that wanted to work with us, and we started to do the work with them and figure out how to handle it. But they had multiple divisions, so, you know, while there was only one partner, it was 32 actual integrations that had to be done. And being a traditional brick-and-mortar business, it's very competitive, so once the word got out that they were working with us, there were a couple of others. So we had six pretty big ones lined up early on that we needed to have integrated and up and running very quickly. >> And from a digital perspective, what were some of the initial systems and applications that you implemented to start being able to manage and track those trading partner interactions, to ensure that you're able to deliver the candy demand that you need to fill? >> It was, sadly, a lot of CSV.
A lot of email, a lot of phone calls back and forth. There were a lot of hours, and it was one of those ones where we would really just bring in temps and try to keep up with it. We did not really have a repeatable process or a good technical footprint of what we needed to do. We didn't know what we didn't know when we started, and we very rapidly became aware of what we needed to do.
>> So you started with an ERP, NetSuite; you brought NetSuite in two years ago. Tell us about that, and what you thought it was gonna solve. All of your problems?
>> Well, that's why it's a great package, because it brought us both order management and it brought us ERP. There were so many modules and so much technology behind it, and they have a warehouse module. It was like, we could grow forever with this, it will never be bounded, this is gonna be fantastic. But what we forgot is that it was only as good as the data in there, and if we're using manual data entry, it's not going to meet our needs. We needed to come up with a better, more efficient way to get the data in. And this was still back in the day when we were trying to fulfill something within a week, much less where we're at today.
>> Okay, so where does Boomi fit into play?
>> We realized, unfortunately, that even when you have an integration up and running, and as good as the integration is, some of your trading partners will have changes. They're going to give you a different reference number, they're gonna give you a different requirement, they're gonna make something that was optional now mandatory. So when we had problems, it wasn't just us; everyone that was doing an integration with that trading partner had them. So if I had outsourced it and there were 100 people that had that map, we were one of 100. Sometimes we were number one, and sometimes we were as far away from number one as possible, and you understand that, and you appreciate it, because there's only a finite number of hours to get things done. So we understood that to be really profitable and get to the level of service we needed, we had to control the data. And that's when we decided that we needed to bring the EDI in-house.
>> So when you were looking for the right integration partner, what was it about Boomi, from a technology perspective and a business perspective, that really differentiated it?
>> First and foremost, the number one requirement: it had to talk to NetSuite. We had to have a native NetSuite integration. If it did not talk to NetSuite, it wasn't gonna make it onto our plate, because we weren't gonna spend the time to reinvent the wheel when obviously the wheel was out there. We had actually done that once before, and it was successful but painful. And there are people out there who build and maintain a connection, silver partners like Boomi and the platinum partners, that can actually keep up with the release before it comes out, so you're being proactive instead of reactive. From a business need, it was: we can't drop data, we need to be efficient, we need to be timely, we need visibility. And looking at Boomi, it met all those needs. We had a connection into NetSuite, we had a reporting tool, we had error messages coming back. We had everything that we needed to manage our own world and take control of it. Or so we thought.
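One concrete way to picture the problem Gary describes, a trading partner suddenly making an optional field mandatory, is a per-partner requirements check that runs before a document flows downstream. This is a minimal, hypothetical sketch; the partner IDs, field names, and spec layout are illustrative and not Candy.com's or Boomi's actual configuration.

```python
from dataclasses import dataclass, field

@dataclass
class PartnerSpec:
    """Per-trading-partner EDI requirements, kept as data so a change is a one-line edit."""
    partner_id: str
    required_fields: set = field(default_factory=set)

# Hypothetical partners; RETAILER_A has just made vendor_ref mandatory.
SPECS = {
    "RETAILER_A": PartnerSpec("RETAILER_A", {"po_number", "ship_to", "vendor_ref"}),
    "RETAILER_B": PartnerSpec("RETAILER_B", {"po_number", "ship_to"}),
}

def validate_order(partner_id: str, order: dict) -> list[str]:
    """Return a list of problems; an empty list means the document can flow on."""
    spec = SPECS.get(partner_id)
    if spec is None:
        return [f"no spec on file for partner {partner_id}"]
    return [f"missing required field: {f}" for f in spec.required_fields if not order.get(f)]

problems = validate_order("RETAILER_A", {"po_number": "12345", "ship_to": "DC-07"})
if problems:
    print(problems)  # route to an error queue for review instead of silently dropping the order
```

Owning the map in-house means a rule like this lives with the team that feels the pain, rather than being one ticket among a hundred at an outsourced provider.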
>> Okay, so you get this implemented. What sort of opportunities does that start opening up? You talked about control there, "or so we thought." What have you been able to unlock, where control is concerned, in the last few years?
>> What we didn't realize with what we were doing is that we were just basically turning everything on and trying to run it as efficiently and as fast as possible, and that was really the wrong approach. We needed to add some governance to it, add some logic to it, you know, not compete with other jobs. There's a finite number of avenues into the back-end system, and you need to utilize it well. But there were also tools we found inside the system that handled things like error trapping and retry logic and timeouts and stuff like that. And as we worked with the subject matter experts at Boomi, as we worked with the people at NetSuite and our account managers who would show us things and help us along, we learned a lot more about it. When we went live back in February of 2016 we were very excited. We did 1,000 orders into our system in one day, and we thought, how phenomenal is this? I mean, 1,000 orders, how many more orders could you actually look for? And we very soon realized that there were a lot more orders willing to come into our system if we could handle them.
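The governance Gary mentions, error trapping, retry logic, timeouts, and not competing for the finite connections into the back end, is a common integration pattern. A minimal sketch of that idea follows; the `post_order` stub, connection cap, retry counts, and backoff values are hypothetical placeholders, not the actual Boomi or NetSuite configuration.

```python
import random
import time
from concurrent.futures import ThreadPoolExecutor

MAX_CONNECTIONS = 4    # only a few avenues into the back end; don't starve other jobs
MAX_RETRIES = 5
TIMEOUT_SECONDS = 30

def post_order(order: dict, timeout: int):
    """Placeholder for the real call into the ERP; swap in the actual integration client."""
    raise NotImplementedError

def push_with_retry(order: dict):
    """Error trapping, a timeout, and exponential backoff around a single order."""
    for attempt in range(1, MAX_RETRIES + 1):
        try:
            return post_order(order, timeout=TIMEOUT_SECONDS)
        except Exception as exc:  # trap the failure; never silently drop the order
            if attempt == MAX_RETRIES:
                raise RuntimeError(f"order {order.get('id')} failed after {attempt} tries") from exc
            time.sleep(min(60, 2 ** attempt) + random.random())  # backoff with jitter

def push_batch(orders: list[dict]):
    # A bounded pool keeps the integration from hogging every back-end connection.
    with ThreadPoolExecutor(max_workers=MAX_CONNECTIONS) as pool:
        return list(pool.map(push_with_retry, orders))
```

The point is less the specific numbers than the shape: failures are trapped and retried up to a ceiling, and throughput is throttled so the order flow coexists with everything else hitting the ERP.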
>> So when you first started with Boomi, you went from some number to 1,000 orders a day. What was that original number that you guys were able to handle when it was more of a manual process?
>> It depended on how many temps we could hire. Sometimes it was 100 orders we got in. It was 100% dependent on people. It also depended on someone remembering to send the spreadsheet.
>> That sounds painful.
>> Painful, and not really easy to plan for.
>> But you discovered pretty quickly, you went from, I won't say zero, to 1,000, but somewhere in between, and realized the capabilities of this system were gonna allow you to get to 20,000 orders per day. Where was the demand coming from? Was it coming from trading partners? Was it coming from their customers? Was it coming from your internal teams seeing, hey guys, I think there's a lot more power here than we originally thought?
>> Well, success begets success, because we were able to get an order in in a timely fashion and ship it out. All of a sudden we realized we were shipping orders within 48 to 72 hours. It wasn't taking 10 days anymore, so we had repeat customers, which obviously makes your numbers go up. And then, as you know, your experience is good and you share it, because social media is the way of the world. All of a sudden, if you tell two friends and they tell two friends, we start getting more volume. Then what starts happening is someone realizes they're losing market share off their brick and mortar or their website, and who is fulfilling those orders? If they're doing so well and we're losing business, they start knocking on the door saying, we'd like to work with you as well. And the other thing, too, is just timing. In the United States it's pretty warm between April and October, and the bulk of perishable and heat-sensitive product will ship through one of our warehouses, because we have the thermal controls and the programs in place to give a good experience and make sure the product arrives the way it's supposed to.
>> Yeah, you were mentioning that when you were on stage this morning with Mandy Dhaliwal, Boomi's CMO, and Jason Maynard from NetSuite. Obviously, if I order some chocolate, I want it to get there in the exact state in which I saw it online, right? But you've gotta have a lot of access and visibility and systems to be able to help you facilitate that temperature control, depending on the type of product.
>> Absolutely. So we're very proud of the fact that we're temperature controlled, we're humidity controlled, we're SQF certified. We've done everything the right way to make sure that what we do is gonna be the best experience and that your food is safe, because that's paramount. The last thing we ever want to do is ship a product that's gonna make your child sick. You don't want anyone to get sick, and the worst feeling as a parent is when your kid doesn't feel well. So we understand that, and we have a phenomenal staff. Our QA team will go through, and we have ways to test the product to get the melting point; we know different products melt at different temperatures, and we determine what those temperatures are. We build those thresholds, and we do calls out to get the weather. If I'm shipping it from my location to you, what's the temperature at mine? It doesn't matter if it's cold at your place if it's 90 where I'm shipping it from. So we look at what it is now, where it's going, what it's gonna be the next few days, how big the box is, and how much product is in there that isn't heat sensitive. And we have a pretty complex algorithm that we put in place that has really enabled us to handle the summer months and deliver a good product, because a lot of people like s'mores, but they don't want the pre-melted chocolate showing up at their house.
>> Agreed. That takes the fun out of the bonfire part, right?
>> Exactly.
>> So let's talk about the people transformation, because you were saying you were 100% dependent on manual work, somebody sending the spreadsheet, somebody inputting data, to process X number of orders per day, and you went from almost zero to 1,000 overnight with Boomi, then saw this capacity for 20,000. How has your team, and how have other business units within Candy.com, like finance, benefited from what I presume are massive workforce productivity gains that you're giving everybody?
>> Absolutely. It was a great problem to have, because as we got bigger and started getting more and more orders, we got more and more invoices, and we got more and more checks in, which we always think is a good thing, but those checks need to be reconciled. They have to be reconciled against the transactions inside NetSuite. It's no exaggeration that we would have pages printed out, with a ruler going down and highlighting line by line on the invoice to make sure nothing was omitted. An individual would spend an eight-hour day, three days a week, just going through the remittances, one invoice at a time, and we would get two or three a week from them. So it was painful and, again, also error prone. And these people are very creative, very smart, and they offer so much more to the business; it was a waste of their time and a waste of their intellect. So Boomi, we found out, is not just EDI. It's phenomenal at EDI, but it has all these other tools, and one of the tools let us take the remittance file from the financial institution, reconcile it against the invoices in the system, and create a CSV import, which we have a script that runs, that created a cash payment in our system and actually closed out the invoices as paid, so we didn't have to take care of it. It was done. Finance would basically get the file and email it to us, we would send it back, and they'd run an import. So instead of 250 hours a week, it was five minutes a file.
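That reconciliation step, matching each line of the bank's remittance file against open invoices in the ERP and emitting an import file that applies the cash and closes the invoice, is easy to picture in a few lines. This sketch assumes the remittance lines and open invoices are already loaded into simple Python structures; the column names, tolerance, and output format are illustrative, not the actual NetSuite import layout.

```python
import csv

def reconcile(remittance_lines, open_invoices, out_path="cash_payments.csv"):
    """
    remittance_lines: [{"invoice_no": "INV-1001", "amount_paid": 125.00}, ...]
    open_invoices:    {"INV-1001": 125.00, ...}   # invoice number -> open amount
    Writes a CSV of payments to import and returns the lines that need a human look.
    """
    exceptions = []
    with open(out_path, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["invoice_no", "amount_paid", "action"])
        for line in remittance_lines:
            inv, paid = line["invoice_no"], line["amount_paid"]
            open_amt = open_invoices.get(inv)
            if open_amt is None:
                exceptions.append((inv, "no matching open invoice"))
            elif abs(open_amt - paid) > 0.01:
                exceptions.append((inv, f"paid {paid:.2f} vs open {open_amt:.2f}"))
            else:
                writer.writerow([inv, f"{paid:.2f}", "apply_and_close"])
    return exceptions  # only the exceptions go back to a person, not the whole remittance
```

The ruler-and-highlighter work collapses to reviewing the exception list; everything that matches cleanly flows straight into the import.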
>> That's a dramatic saving, hundreds of hours a month, but also faster time to revenue recognition.
>> That's a big one, you know, because when you're trying to get people discounts or give them breaks, or if your terms are out there, it's nice to get it in there and keep your systems clean, because you also have to answer to the end of the month. You want to close the books, and manual processes are one of the few things that you can't just throw more horsepower at.
>> I'm glad you brought that up from a resource reallocation perspective. These folks in particular areas of the business have value that they weren't able to really unlock and deliver before. Now, with the technology in place, they're able to focus on more strategic areas of the business, or more strategic projects. I also imagine, we said faster time to revenue recognition, but a big boost to Candy.com's sales since you've implemented the technology?
>> Direct. I mean, the sales numbers have just grown. As much as we do our forecasting and think about where it's going to go, we drastically underestimated this year. The summer was very, very good to us. Our first year on Boomi we ran for 11 months, and we did a little over 600,000 orders for that first year. In comparison, in June, July, and August this year we did over a million orders. That's a lot of chocolate.
>> A lot of candy.
>> Most certainly.
>> And a busier time period, I mean, Halloween is in a few weeks, Christmas is coming. How does that compare in terms of, like, the flux?
>> We have a peak. Halloween is obviously the time, of course. November 1st our orders are zero, because everyone walks into the office with a pillowcase of candy from their kids, so it literally goes from a million miles an hour to nothing, and it's kind of eerie. But throughout the summer we stay very, very busy, because a lot of the marketplaces don't have the facility. And listen, they're great, it's one-stop shopping, they have everything, but everything is in a warehouse, and that entire warehouse is not properly controlled to handle food products. So they decided it was advantageous to ship it that way during the summer, and it's loosely known as a summer ship program, but it's really more of a heat-sensitive program, because we'll add the thermal packaging to protect the product even in February. I mean, there are some spots in Florida and Texas that are pretty warm, where you want to protect the item. So it's a heat-sensitive program that we're very proud of, and we keep advancing it and we keep growing. And, you know, I'm very fortunate, I have a great team. I've got to call out guys like Jim and Scott, because it would be wrong not to. These guys have been with me from the start; they put the EDI in place, they put the scripting in place. Those guys are just rock stars, and I look good because of their effort. I'm very, very proud of the team we've assembled that does this to make sure that end-customer satisfaction is always met.
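The heat-sensitive shipping check Gary outlines, comparing the lowest melting point in the box against the forecast at origin and destination over the transit window and then deciding whether to add thermal packaging, can be sketched in a few lines. The product thresholds, safety margin, and `needs_thermal_packaging` helper below are hypothetical illustrations, not Candy.com's actual algorithm, and they assume the forecasts have already been fetched from a weather service.

```python
# Illustrative melting thresholds in degrees Fahrenheit; the real table is per product.
MELT_POINT_F = {"chocolate_bar": 78, "gummy_bears": 95, "hard_candy": 160}
SAFETY_MARGIN_F = 5

def needs_thermal_packaging(items, origin_forecast_f, dest_forecast_f, transit_days):
    """
    items: product keys in the shipment.
    *_forecast_f: daily high temperatures for the next several days at each end.
    Returns True if any item could see temperatures near its melting point in transit.
    """
    lowest_melt = min(MELT_POINT_F.get(item, 999) for item in items)
    window = transit_days + 1
    worst_case = max(origin_forecast_f[:window] + dest_forecast_f[:window])
    return worst_case >= lowest_melt - SAFETY_MARGIN_F

# A s'mores kit shipped over two days: warm at origin even though the destination is cool.
print(needs_thermal_packaging(["chocolate_bar", "hard_candy"], [90, 88, 86], [70, 72, 71], 2))  # True
```

It also shows why a cool destination is not enough on its own: the hottest point anywhere in the window drives the packaging decision, which is the "it's 90 where I'm shipping it from" point above.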
>> Awesome story. So I imagine, you know, when we hear that four out of five dentists recommend this kind of thing, is the fifth dentist recommending Candy.com? Is that where that guy's been?
>> Yeah, he's got four kids going through college and everything, so he figures Candy.com is the way to make the money to make sure those tuitions get paid.
>> All right. Well, Gary, it's been a pleasure to have you on theCUBE. Thank you for sharing what you're doing with Boomi at Candy.com. We appreciate it, and thanks for all the candy.
>> Oh, our pleasure. Thank you very much for having me. It's been a great couple of days, and I'm glad to be part of it.
>> All right. For John Furrier, I'm Lisa Martin. You're watching theCUBE from Boomi World 19. Thanks for watching.