Erin A. Boyd, Red Hat | KubeCon + CloudNativeCon NA 2019


 

>> Announcer: Live from San Diego, California, it's theCUBE, covering KubeCon + CloudNativeCon. Brought to you by Red Hat, the Cloud Native Computing Foundation and its ecosystem partners.

>> Welcome to the third day of wall-to-wall coverage here at KubeCon + CloudNativeCon 2019 in San Diego. I am your host for the three days of coverage, Stu Miniman. Joining me this morning is Justin Warren. And happy to welcome back to the program, Erin Boyd, who's a senior principal software engineer at Red Hat. Erin, thanks so much for joining us.

>> Thanks for having me.

>> All right, so we had a chance to catch up in Barcelona on theCUBE there. Storage is definitely one of the faster-moving areas of this ecosystem over the last two years. Why don't we start with, really, the event? As I said, we're in day three, but on day zero there were a whole lot of things going on. Some of your peers at Red Hat have talked about OpenShift Commons, but storage, to my understanding, had a couple of things going on. Why don't you share a little bit of that with our audience?

>> Sure, so we had a SIG face-to-face for Kubernetes; it was probably one of the best attended. We had to cap the number of attendees, so about 60 different people came to talk about the future of storage in Kubernetes and what we need to be doing to meet our customers' needs. In conjunction with that, there was a parallel session called CNS Days, which is Container Native Storage Days. That event is very customer focused, so I really enjoyed bouncing between the two of them, going from the hypothetical, programming, architecture view straight to what customers in the enterprise are looking at and doing, and what their real needs are.

>> So from that SIG, can you share a little bit of where we are and where some of the requests are? We know there's never one way to fix storage; there have been some debates, and there are a couple of different ways to do it. Traditional storage, you've got block, file, and object. With cloud storage, there are more options today than there were back when I would configure a server or buy a storage array in my own data center. So where are we, what are those asks, and what's on the roadmap there?

>> Right, so I think for the past five years, we've been really focused on being mindful of what APIs are common across all the vendors. We want to ensure that we're not excluding any vendors from being part of this ecosystem. And so, with that, we've created the basis of things like persistent volumes, persistent volume claims, storage classes to automate that, and storage quotas to have management and control over it. Now we're looking to the next evolution: as the model matures and people are actually running stateful applications on Kubernetes, we need to be addressing their needs. So things like snapshotting, eventually volume cloning, which has just gone in, and migration. All these types of things that exist within the data plane are going to be the next evolution of what we look at in the SIG.

>> Yeah, so one criticism that's been mentioned about Kubernetes a few times is that, one, it's a bit complicated, but also that it didn't really deal that well with stateful sets. Stateful data management has always been a little bit lacking. That seems to have pretty much been sorted out now. As you mentioned, there's a lot more work being done on storage operators.
But you're talking about some of these data management features that operators from other paradigms are used to having there. When you're thinking about moving workloads to Kubernetes, or putting new workloads on Kubernetes, and you're unsure, "Well, will I be able to operate this in the same way that I did things before?", how do you think people should be thinking about those kinds of data services in Kubernetes?

>> So I think it's great that you mentioned operators, because that was one of the key things when Rook came into the landscape: to be able to lower the complexity of taking something that requires physical storage and compute, geography, node selection, all those things. It helped people who were used to just the cloud model, where I create a PVC, it's a request for storage, Amazon magically fulfills it, and I don't know what's backing it. Being able to take these more complex storage systems and deploy them within the ecosystem also does a good job supporting our brownfield customers, because not every customer that's coming to Kubernetes is greenfield. So it's important that we understand that some customers want to keep their data on-prem, maybe burst to the cloud to leverage those services, but then keep their data close to home. Operators help facilitate that.

>> Yeah, Erin, I hesitate a little bit to ask this, but I'm wondering if you can do a little compare and contrast for us with what the industry did back in the OpenStack days. When I looked at storage, every traditional storage company certified their environment for OpenStack. From a storage standpoint, it feels like a different story to me when I hear about the ecosystem of operators here. So I know you know this space; maybe you can give us a little bit of what we learned in the past. What's similar, what's different?

>> Right, well, I think one of the benefits is we have a lot of the same key players. As you may know, OpenShift has pivoted from Gluster to Ceph, Ceph being the major storage backend for OpenStack. So we're able to take some of that technical debt, learn our lessons from things we could improve, and apply those things within Kubernetes. I just think it's a little slower migration, because in OpenStack, like you said, we had certification and there were different drivers. And we're trying to learn from, maybe, I wouldn't even call those mistakes, but: how can we better automate this? What can we do from an operational perspective to make it easier?

>> Well, I think because one of the... It felt like we were kind of taking some older models and saying, I'm testing it, I'm adding it. The ecosystem for operators here is different. Many of these are very much software-driven solutions. It's built for container architectures, so it's understandable that it might take a little bit longer, because it's a different paradigm.

>> Right, well, and I think the certification kind of... It wasn't an inhibitor, but it certainly took a lot of time. And I think our take was... We used to have all the storage providers be in-tree providers within Kubernetes. With CSI, we have since started to redo the plugins and the sidecars and move that out of core. So then the certification falls outside of that instead of being more tightly wound into the platform, and I think it will allow us to have a lot more flexibility. Instead of waiting on each release, vendors can create operators, certify them themselves, have them in their own CSI driver, and move at the pace that they need to move.
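For readers who want to see the request-for-storage model Boyd describes, here is a minimal sketch using the official Kubernetes Python client. The storage class name, the provisioner, and the parameters are hypothetical placeholders; in a real cluster they would name an actual CSI driver and its vendor-specific attributes, which is exactly the opaque section of the storage class she goes on to discuss.

```python
# A minimal sketch of the "request for storage" model described above, using the
# official Kubernetes Python client (pip install kubernetes). The storage class
# name, provisioner, and parameters below are hypothetical placeholders.
from kubernetes import client, config

config.load_kube_config()  # assumes a kubeconfig pointing at a reachable cluster

# A StorageClass defines a service level of storage and names the provisioner
# (today, typically a CSI driver) that will fulfill claims made against it.
storage_class = client.V1StorageClass(
    metadata=client.V1ObjectMeta(name="example-fast"),   # placeholder class name
    provisioner="example.csi.vendor.io",                  # hypothetical CSI driver
    parameters={"tier": "ssd", "replication": "3"},       # opaque, vendor-specific
    reclaim_policy="Delete",
    volume_binding_mode="WaitForFirstConsumer",
)
client.StorageV1Api().create_storage_class(body=storage_class)

# A PersistentVolumeClaim is the application's side of the contract: it asks for
# capacity and an access mode against a class, without knowing what backs it.
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="example-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="example-fast",
        resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
    ),
)
client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```

Once the claim binds, the provisioner creates a PersistentVolume behind it; the workload only ever references the claim, which is what lets the same manifest move between on-prem and cloud backends.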
>> So how do you balance that need for Kubernetes to be a common operating platform that people can build on with each vendor's desire to provide their own unique capabilities, the things they think they do particularly well? That's why they charge the money that they do, because they think theirs is the best storage ever. How do you balance that tension between the need for a standard, interoperable platform and still allowing the flexibility for people to have their own kind of innovation in there?

>> So when we created the storage class, for instance, to be able to create a service level over storage and to specify the provisioner that we're going to use, we made that section of the specification completely opaque. What that allowed us to do is that when vendors wrote their provisioners, and now their CSI drivers, it let them feed in different attributes of the storage they want to leverage that don't necessarily have to be in core Kubernetes. So it provided a huge amount of flexibility. The other side of that, though, is the feedback we get from real users: "I need backup and recovery, and I need DR, and I need that across the platform." So I really think, as we look to scale this out, we have to be looking at the commonalities between all storage and bringing those APIs into Kubernetes.

>> One of the things I've really liked to see in this ecosystem over the last year or so, and really highlighted at this show, is that we're talking a lot more about workloads and applications, what works today and where we're growing. Can you speak a little bit, from your world, to where we are, what's working great, what customers are deploying, and a little bit of the roadmap of where we still need to go?

>> Sure, I think workloads are key. We have to focus on the actual end-to-end delivery of that, and so we have to figure out a way to make the data more agile and create interfaces to really enable that, because it's very unlikely that an enterprise company is going to rely on one cloud, or stay with one cloud, or want their data in one cloud. They're going to want the flexibility to leverage that. So as we enable those workloads, some are very complex. We started with, "Hey, I just want to containerize my application and get it running. Now I want to have some sort of state, which is persistent storage, and now I want to be able to scale that out across n number of clusters." That's where the workloads become really important, and long term, where we need policy to automate that. My pod goes down, I restart it, and it needs to know that, maybe because of the data that workload is producing, it can only stay in this geographical region.

>> Yeah, we talk about multicloud. You mentioned data protection; data protection is something I need to do across the board. Security is something I need to do across the board. My automation needs to take all that into account. How's Red Hat helping customers get their arms around that challenge?

>> Yeah, so I think Red Hat really does take a holistic view in making sure that we provide a very consistent, secure platform. I think that's one of the things you see when you come on to OpenShift, for instance, or OKD: you're seeing security tightened a little bit more, to ensure that you're running in the best possible way that you can, to protect your data.
And then with the use of Rook Ceph, for instance, Ceph provides that universal backplane, where if you're going to have encryption or anything like that, you know it's going to be the same across the board.

>> It sounds like there's an opportunity here for people new to Kubernetes who have been doing things in a previous way. There's a little bit of reticence from this community to understand enterprise; they're like, "Well, actually, you're kind of doing it wrong. It's slow and inflexible." There are actually a lot of lessons that we've learned in the enterprise, particularly around these workloads: having security, having backup and DR. In the keynote this morning, there was a lot of discussion about the security that is in Kubernetes, and the parts where it's kind of lacking. I think there's a lot that both of these communities can learn from each other, so I'm seeing a lot of moves of late to be a little bit more welcoming to people who are coming to Kubernetes from other ecosystems, to be able to bring the ideas that they have. We've already learned these lessons before; we can take some of that knowledge and bring it into Kubernetes to help us do it better. Do you see Red Hat bringing a lot of that experience in its work? Red Hat's been around for quite some time now, so you've done a lot of this already. Are you bringing all of that knowledge into Kubernetes and sharing it with the ecosystem?

>> Absolutely, and just like Stu pointed out, OpenStack was a big part of our evolution, and security within RHEL, and I think we absolutely should take those lessons learned and look to how we protect our customers' data, and make sure that the platform, Kubernetes itself and OpenShift as we evolve it, can provide that, and ways that we can certify that.

>> Erin, you're meeting with a lot of customers. You were talking about the day-zero thing. What's top of mind for your customers? We talk about how Kubernetes has crossed the chasm, but to get to the vast majority, there's still lots of work to do. We need to, as an industry, make things simpler. What's working well, and what are some of the challenges from the customers that you've talked to?

>> So I think, if you walk across the hall and you see how many vendors are there, it's trying to get a handle on what I should even be doing. As the co-lead of the CNCF Storage SIG, I think that's one of the initiatives we take very seriously. So in addition to a storage whitepaper, we've been working on use cases that define: when should I use a data store? When should I use object? Why would I want to use file? And then really taking these real-world examples, creating use cases and actual implementations, so someone can say, "Oh, that's similar to my workload," and here are some tools to accelerate understanding how to get that set up. And also creating those guardrails from an architectural standpoint: you don't want to go down this path, that's not right for your workload. So we're hoping to at least provide an education around containerized storage that'll help customers.

>> Yeah, I'm just curious. I think back ten years ago, when I was working for a large storage company, we were having some of these same conversations. So is it very different now in the containerized, multicloud world? Or are some of the basic decision-tree discussions around block, file, and object, and the application, the same as we might have been having a decade ago?

>> I think we're starting to just touch on those, and I'm glad that you brought up object.
That was one of the things I talked about in Barcelona, and we actually talked about it at the face-to-face. To me, it's kind of the missing piece of storage today in Kubernetes, and I think we're finally starting to see that more customers are asking for it and realizing it's an important workload to be able to support at its core. So I think, yes, we're having the same conversations again, but certainly in a different context.

>> Yeah, back in the day it was, "The future is object, but we don't know how we'd get there." If you look behind the scenes in most public clouds, object's running a lot of what's there. All right, Erin, I want to give you the final word. KubeCon 2019, from that storage perspective: what should people watching take away?

>> That we're only beginning with storage. We still have a lot of work to do, but I think it's a wonderful, vibrant community, and I think there'll be a lot of changes in the coming years.

>> All right. Well, definitely a vibrant ecosystem. Erin, thank you so much for all the updates. We'll be back with more coverage here. For Justin Warren, I'm Stu Miniman. Thank you for watching theCUBE. (techno music)
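Object storage, the piece Boyd calls missing from core Kubernetes, is typically consumed over an S3-compatible API rather than through a PersistentVolumeClaim; Ceph, for instance, exposes one via its RADOS Gateway. Below is a minimal sketch with boto3 against such an endpoint; the endpoint URL, credentials, and bucket name are placeholders, not values from the interview.

```python
# A minimal sketch of consuming S3-compatible object storage (for example, an
# endpoint exposed by a Ceph RADOS Gateway) from application code with boto3.
# The endpoint URL, credentials, and bucket name below are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.internal:8080",  # hypothetical gateway endpoint
    aws_access_key_id="EXAMPLE_ACCESS_KEY",           # placeholder credentials
    aws_secret_access_key="EXAMPLE_SECRET_KEY",
)

# Create a bucket, write an object, and read it back.
s3.create_bucket(Bucket="example-workload-data")
s3.put_object(
    Bucket="example-workload-data",
    Key="results/run-001.json",
    Body=b'{"status": "ok"}',
)
obj = s3.get_object(Bucket="example-workload-data", Key="results/run-001.json")
print(obj["Body"].read())
```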

Published: Nov 21, 2019
