Steve Gordon, Red Hat | KubeCon + CloudNativeCon Europe 2021 - Virtual


 

>> Announcer: From around the globe, it's theCUBE with coverage of KubeCon and CloudNativeCon Europe 2021-Virtual, brought to you by Red Hat, the Cloud Native Computing Foundation and Ecosystem Partners. >> Hey, welcome back everyone to theCUBE's coverage of KubeCon and CloudNativeCon 2021-Virtual. I'm John Furrier, your host here on theCUBE. We've got Steve Gordon, Director of Product Management, Cloud Platforms at Red Hat. Steve, welcome to theCUBE, good to see you, thanks for coming on. >> Hey John, thanks for having me on, it's great to be back. >> So soon we'll be in real life, I think North America show, this is for the Europe Virtual, I think the North American one might be in person. It's not yet official. We'll hear, but we'll find out, but looking good so far. But thanks for all your collaboration. You guys have been a big part of the CNCF we've been covering on theCUBE, as you know, since the beginning. But, I wanted to get into the Edge conversation that's been going on. And first I want to just get this out there. You guys are sponsoring Edge Day here at KubeCon. I want you to bring that together for us, because this is a big part of what Red Hat's talking about and frankly customers. The Edge is the most explosive growth area. It's got the most complexity, it's crazy. It's got data, it's got everything at the Edge. Everything's happening. How important is Kubernetes to Edge Computing? >> Yeah, it's certainly interesting to be here talking about it now, and having kind of a dedicated Kubernetes Edge Day. I was thinking back earlier, I think it was one of the last in-person KubeCon events I think, if not the last, the San Diego event where there was already kind of a cresting of interest in Edge and kind of topics on the agenda around Edge. And it's just great to see that momentum has continued up to where we are today. And really more and more people not only talking about using Kubernetes for Edge, but actually getting in there and doing it. And I think, when we look at why people are doing that, they're really leaning into some of the things that they saw as strengths of Kubernetes in general, that they're now able to apply to edge computing use cases in terms of what they can actually do in terms of having a common interface to this very powerful platform that you can take to a growing multitude of footprints, be they your public cloud providers, where a lot of people may have started their Kubernetes journey or their own data center, to these edge locations where they're increasingly trying to do processing closer to where they're collecting data, basically. >> You know, when you think about Edge and all the evolution with Cloud Native, what's interesting is Kubernetes is enabling a lot of value. I'd like to get your thoughts. What are you hearing from customers around use cases? I mean, you are doing product management, you've got to document all the features, the wishlist. You have the keys to the kingdom on what's going on over at Red Hat. You know, we're seeing just the amazing connectivity between businesses with hybrid cloud. It's a game changer. Haven't seen this kind of change at this level since the late '80s, early '90s in terms of inflection point impact. This is huge. What are you hearing? >> I think it's really interesting that you use the word connectivity there because one of the first edge computing use cases that I've really been closely involved with and working a lot on, which then grows into the others, is around telecommunications and 5G networking. 
And the reason we're working with service providers on that adoption of Kubernetes as they build 5G basically as a cloud native platform from the ground up, is they're really leveraging what they've seen with Kubernetes elsewhere and taking that to deliver this connectivity, which is going to be crucial for other use cases. If you think about people, whether they're trying to do automotive edge use cases, where they're increasingly putting more sensors on the car to make smarter decisions, but also things around the infotainment system using more and more data there as well, or if you think about factory edge, all of these use cases build on connectivity as one of the core fundamental things they need. So that's why we've been really zoomed in there with the service providers and our partners, trying to deliver 5G networking capabilities as fast as we can, and the throughput and latency benefits that come with that. >> If you don't mind me asking, I got to just go one step deeper if you don't mind. You mentioned some of these use cases, the connectivity. You know, IoT was the big buzzword, okay IoT. It's at the Edge, it's Operational Technology, or it's a dumb endpoint or a node on the network that has connectivity. It's got power. It's a purpose-built device. It's operating, it's getting surveillance data, whatever the hell it's doing, right. It's got Edge. Now you're bringing in more intelligence, which is an IT kind of thing, state, databases, caching. Is the database too slow? Is it too fast? So again, it brings up more complexity. Can you just talk about how you view that? Because this is what I'm hearing, what do you think? >> Yeah, I agree. I think there's a real spectrum when we talk about edge computing, both in terms of the footprints and the locations, and the various constraints that each of those imply. And sometimes those constraints can be, as you're talking about, a specially designed board which has a very specific chip on it, has very specific memory and storage constraints, or it can be a literal physical constraint in terms of I only have this much space in this location to actually put something, or that space is subject to excess heat or other considerations environmentally. And I think what we're trying to provide, not just with Kubernetes but also with Linux, is a variety of solutions that can help people no matter where they are along that spectrum, from the smallest devices, where maybe Red Hat Enterprise Linux, or RHEL for Edge, is suitable, to those use cases where maybe there's a little more flexibility in terms of, what are the workloads I might want to run on that in the future? Or how do I want to grow that environment potentially in the future as well? If I want to add nodes, then all of a sudden, the capability that Kubernetes brings can be a more flexible building base for them to start with. >> So with all of these use cases and the changing dynamics and the power dynamics between Operational Technology and IT, which we're kind of riffing on, what should developers take away from that when they're considering their development, whether they just want an app, be app developers, programming the infrastructure, or they're tinkering with the underlying, some database work, or if they're under the hood kind of full DevOps? What should developers take into consideration for all these new use cases? >> Yeah, I think one of the key things is that we're trying to minimize the impact to the developer as much as we can.
Now of course, with an edge computing use case where you may be designing your application specifically for that board or device, then that's a more challenging proposition. But there's also the case increasingly where that intelligence already exists in the application somewhere, whether it's in the data center or in the cloud, and they're just trying to move it closer to that endpoint, where the actual data is collected. And that's where I think there's a really powerful story in terms of being able to use Kubernetes and OpenShift as that interface that the application developer interacts with, but can use that same interface whether they're running in the cloud, maybe for development purposes, but also when they take it to production and it's running somewhere else. >> I got to ask you the AI impact, because in every conversation I have, everyone I interview that's an expert or a practitioner usually has a title something along the lines of chief architect of cloud and AI. You're seeing a lot of cloud, SRE, cloud-scale architects meeting and also running the AI piece, especially in industries. So AI as a certain component seems to be resonating from a functional persona standpoint. People who are doing these transformations tend to have cloud and AI responsibility. Is that a fluke or is that just the pattern that's real? >> No, I think that's very real. And I think when you look at AI and machine learning and how it works, it's very data centric in terms of what is the data I'm collecting, sending back to the mothership, maybe in terms of actually training my model. But when I actually go to process something, I want to make that as close as I can to the actual data collection, so that I can minimize what I'm trying to send back. Particularly, people may not be as cognizant of it, but even today, many times we're talking about sites where that connectivity is actually fairly limited in some of these edge use cases. So what you're actually putting over the pipe is something you're still trying to minimize, while trying to advance your business and improve your agility by making these decisions closer to the edge. >> What's the advantage for Red Hat? Talk about the benefits. What are you guys bringing to the table? Obviously, hybrid cloud is the new shift. Everyone's agreed to that. I mean, pretty much the consensus is public clouds, great, been there, done that. It's out there pumping out as a resource, but now enterprises are going to keep stuff on premises, especially when you talk about factories or whatever, on premises, things that they might need, stuff on premise. So it's clear hybrid is happening. Everyone's in agreement. What does Red Hat bring to the table? What's in it for the customer? >> Yeah, I would say hybrid is really evolving at the moment in terms of, I think hybrid has kind of gone through this transition where, first of all, it was maybe moving from my data center to public cloud and I'm managing most of those through that transition, and maybe I'm (indistinct) public clouds. And now we're seeing this transition where it's almost that some of that processing is moving back out again, closer to the use case of the data. And that's what we really see as an extension of our existing hybrid cloud story, which is simply to say that we're trying to provide a consistent experience and interface for any footprint, any location, basically. And that's where OpenShift is a really powerful platform for doing this.
It's got Kubernetes at the heart of it, but it's also worth considering, when we look at Kubernetes, that there's this entire Cloud Native ecosystem around it. And that's an increasingly crucial part of why people are making these decisions as well. It's not just Kubernetes itself, but all of those other projects, both directly in the CNCF ecosystem itself, but also in that broader CNCF landscape of projects, which people can leverage, and even if they don't leverage them today, they know they have options out there for when they need to change in the future if they have a new need for their application. >> Yeah, Steve, I totally agree with you. And I want to just get your thoughts on this because I was kind of riffing with Brian Gracely, who works at Red Hat on your team. And he was saying that, you know, we were talking about KubeCon + CloudNativeCon as the name of the conference, it's a little bit more CloudNativeCon this year than KubeCon, inferring, implying, and saying that, okay, so what about Kubernetes, Kubernetes, Kubernetes? Now it's like, whoa, CloudNative is starting to come to the table, which shows the enablement of Kubernetes. That was our point. The point was, okay, if Kubernetes does its job as creating a lever, some leverage to create value, and that's being rendered in CloudNative, then enterprises, not the hardcore hyperscalers and/or the early adopters, I call it classic enterprise, are coming in. They're contributing to open source as participants, and they're harvesting the value in creating CloudNative. What's your reaction to that? And can you share your perspective on whether there's more CloudNative going on than ever before? >> Yeah, I certainly think, you know, we've always thought from the beginning of OpenShift that it was about more than just Linux and Kubernetes and even the container technologies that came before them, from the point of view of, to really build a fully operational and useful platform, you need more than just those pieces. That's something that's been core to what we've been trying to build from the beginning. But it's also what you see in the community, is people making those decisions as well, as in, what are these pieces I need, whether it's fairly fundamental infrastructure concerns like logging and monitoring, or whether it's things like trying to enable different applications on top using projects like KubeVirt for virtualization, Istio for service mesh, and so on. You know, those are all considerations that people have been making gradually. I think what you're seeing now is there's a growing consensus in some of these areas within that broad CNCF landscape in terms of, okay, what is the right option for each of these things that I need to build the platform? And certainly, we see our role as guiding customers to those solutions, but it's also great to see that consensus emerging in the communities that we care about, like the CNCF. >> Great stuff. Steve, I got to ask you a final question here. As you guys innovate in the open, I know your roadmaps are all out there in the open. And I got to ask you, product managing is about making decisions about what you work on. I know there's a lot of debates. Red Hat has a culture of innovation and engineering, so there's heated arguments, but you guys align at the end of the day. That's kind of the culture. What's top of mind, if someone asks you, "Hey, Steve, bottom line, I'm a Red Hat customer. I'm going full throttle as a hybrid. We're investing.
You guys have the cloud platforms, what's in it for me? What's the bottom line?" What do you say? >> Yeah, I think the big thing for us is, you know, I talked about how this is extending the hybrid cloud to the edge. And we're certainly very conscious that we've done a great job at addressing a number of footprints that are core to the way people have done computing today, and now as we move to the edge, there's a real challenge to go and address more of those footprints. And that's whether it's delivering OpenShift on a single node by itself, but also working with cloud providers on their edge solutions as they move further out from the cloud as well. So I think what's really core to the mission is continuing to enable those footprints, so that we can be true to that mission of delivering a platform that is consistent across any footprint at any location. And certainly that's core to me. I think the other big trend that we're tracking and really continuing to work on, you know, you talked about AI and machine learning, the other space we really see kind of continuing to develop, and certainly relevant in the work with the telecommunications companies I do, but also increasingly in the accelerator space, where there's really a lot of new and very interesting things happening with hardware and silicon, whether it be kind of FPGAs, ASICs, and even the data processing units, lots of things happening in that space that I think are very interesting and going to be key to the next three to five years. >> Yeah, and software needs to run on hardware. Love your tagline there. It sounds like a nice marketing slogan. Any workload, any footprint, any location. (laughs) Hey, DevSecOps, you got to scale it up. So good job. Thank you very much for coming on. Steve Gordon, Director of Product Management, Cloud Platforms, Red Hat, Steve, thanks for coming on. >> Thanks, John, really appreciate it. >> Okay, this is theCUBE coverage of KubeCon and CloudNativeCon 2021 Europe Virtual. I'm John Furrier, your host from theCUBE. Thanks for watching. (serene music)
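Gordon's "any workload, any footprint, any location" framing maps directly onto how a developer actually touches such a platform. The sketch below, written against the Kubernetes Python client, is a minimal illustration of the "same interface" idea he describes: one Deployment object applied unchanged to a cloud development cluster and to an edge cluster, with only the kubeconfig context changing. The context names, image, and registry are hypothetical placeholders, not anything Red Hat or OpenShift ships.

```python
# Minimal sketch: the same Deployment spec pushed to a cloud cluster and an
# edge cluster by switching kubeconfig contexts. "cloud-dev", "edge-site-1",
# and the image reference are placeholder assumptions for illustration.
from kubernetes import client, config


def make_deployment(name: str, image: str, replicas: int) -> client.V1Deployment:
    """Build a plain Deployment object; nothing here is cloud- or edge-specific."""
    container = client.V1Container(name=name, image=image)
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": name}),
        spec=client.V1PodSpec(containers=[container]),
    )
    spec = client.V1DeploymentSpec(
        replicas=replicas,
        selector=client.V1LabelSelector(match_labels={"app": name}),
        template=template,
    )
    return client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name=name),
        spec=spec,
    )


def deploy(context: str, deployment: client.V1Deployment, namespace: str = "default") -> None:
    """Apply the object against whichever cluster the named kubeconfig context points at."""
    api = client.AppsV1Api(api_client=config.new_client_from_config(context=context))
    api.create_namespaced_deployment(namespace=namespace, body=deployment)


if __name__ == "__main__":
    workload = make_deployment(
        name="video-analytics",
        image="registry.example.com/analytics:1.0",
        replicas=2,
    )
    # Development in a public cloud cluster, production at an edge site:
    # the object and the API call are identical, only the context changes.
    for ctx in ["cloud-dev", "edge-site-1"]:
        deploy(ctx, workload)
```

Because OpenShift exposes the standard Kubernetes APIs, the same pattern applies there; platform-specific pieces such as operators or routes layer on top of this interface rather than replacing it.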

Published Date: May 4, 2021


Jasmine James, Twitter and Stephen Augustus, Cisco | KubeCon + CloudNativeCon Europe 2021 - Virtual


 

>> Narrator: From around the globe, it's theCUBE with coverage of KubeCon and CloudNativeCon Europe 2021 Virtual, brought to you by Red Hat, the Cloud Native Computing Foundation and Ecosystem Partners. >> Hello, welcome back to theCUBE's coverage of KubeCon and CloudNativeCon 2021 Virtual, I'm John Furrier, your host of theCUBE. We've got two great guests here, always great to talk to the KubeCon co-chairs, and we have Stephen Augustus, Head of Open Source at Cisco and also the KubeCon co-chair, great to have you back. And Jasmine James, Manager of Engineering Effectiveness at Twitter, the KubeCon co-chair, she's new on the job so we're not going to grill her too hard, but she's excited to share her perspective. Jasmine, Stephen, great to see you. Thanks for coming on theCUBE. >> Thanks for having us. >> Thank you. >> So obviously the co-chairs, you guys see everything up front. Jasmine, you're going to learn that this is a really kind of key, fun position, because you've got multiple hats you got to wear, you got to put a great program together, you got to entertain and surprise and delight the attendees, and also get the right trends, pick everything right, and then keeping that harmonious vibe going at CNCF and KubeCon is hard, so it's a hard job. So I got to ask you out of the gate, what are the top trends that you guys have selected and are pushing forward this year that we're seeing evolve and unfold here at KubeCon? >> For sure, yeah. So I'm excited to see, and I would say that some of the top trends for Cloud Native right now are just changes in the ecosystem, how we think about different use cases for Cloud Native technology. So you'll see lots of talk about new architectures being introduced into Cloud Native technologies, or things like WebAssembly, Wasm, use cases, and really starting to, and again, I think I mention this every time, but like, what are the customer use cases, actually really thinking about how all of these building blocks connect and create a cohesive story. So I think a lot of it is enduring and will always be a part. My favorite thing to see is pretty much always maintainer and user stories, but yeah, architectures, Wasm, and security. Security is a huge focus, and it's nice to see it come to the forefront, as we talked about, having things like the security day, as well as all of the talks around supply chain security. It has been a really, really, really big event (laughs) I'll say. >> Yeah. Well, great shot from last year, we're virtual again, but we're back in, the real world is coming back in the fall, so hopefully in North America we'll be in person. Jasmine, you're new to the job. Tell us a little about you, introduce yourself to the community, and tell more about who you are and why you're so excited to be the co-chair with Stephen. >> Yeah, absolutely. So I'm Jasmine James, I've been in the industry for the past five or six years, previously at Delta Airlines, now at Twitter. As a part of my job at Delta, we did a huge drive on adopting Kubernetes. So a lot of those experiences I was very, very blessed to be a part of, making the adoption and really the cultural shift easy for developers during my time there. I'm really excited to experience Cloud Native from the co-chair perspective, because historically I've been on the consumer side, going to talks, taking all those best practices, stealing everything I could to bring it back into my job, to make everyone's life easier.
So it's really, really great to see all of the fantastic ideas that are being presented, all of the growth and maturity within the Cloud Native world. Similar to Stephen, I'm super excited to hear about the security stuff, especially as it relates to making it easy for developers to shift left on security, versus it being such an afterthought, and making it something that you don't really have to think about. Developer experience is huge for me, which is why I took the job at Twitter six months ago, so I'm really excited to see what I can learn from the other co-chairs and to bring it back to my day-to-day. >> Yeah, Twitter's been very active in open source. Everyone knows that, and it's great to see you land there. One of the interesting trends I see this year, besides security, is GitOps, but the one that I think is relevant to your background, so fresh, is that end user contributions and involvement have been really exploding on the scene. It's always been there. We've covered Envoy with Lyft, but now it's mainstream, enterprises have been kind of going to the open source well and bringing those goodies back to their camps and building out and bringing it back. So you're starting to see that flywheel developing, and you've been on that side, now here. Talk about that dynamic and how real and important that is, and share some perspective of what's really going on around this explosion around more end user contribution, more end user involvement. >> Absolutely. So I really think that a lot of industry players are starting to see the importance of contributing back to open source, because historically we've done a lot of taking, utilizing these different components to drive the business logic and not really making an investment in the product itself. So it's really, really great to see large companies invest in open source, even have whole teams dedicated to open source and how it's consumed internally. So I really think it's going to be a big win for the companies and for the open source community, because I really am a big believer in giving back and making sure that you give back as much as you're taking, and by making it easy for companies to do the right thing, and then even highlighting it as a part of CNCF, it'll be really, really great, just driving a great environment for everyone. So really excited to see that. >> That's really good, that was awesome stuff. Great, great insight. Stephen, I just have you piggyback off that and comment on companies, enterprises, that want to get more involved with the Cloud Native community from their respective experiences. What's the playbook, are there new on-ramps? Are there new things? Is there a best practice? What's your view? I mean, obviously everyone's growing and changing. You look at IT, it has changed. I mean, IT is evolving completely to CloudOps, SRE, GitOps, day-two operations. It's pretty much standard now, but they need to learn and change. What's your take on this? >> Yeah, so I think that, to Jasmine's point, and I'm not sure how much we've discussed my background in the past, but I actually came from the corporate IT background, did desktop support, help desk, all of that stuff, up into operations, DevOps, SRE, production engineering. I was an SRE at a startup who used CoreOS technologies and started using Kubernetes back when Kubernetes was at 1.2, I think. And that was my first journey into Cloud Native. And I became, like, CoreOS's only customer-to-employee convert, right?
So I'm very much big on that end user story and figuring out how to get people involved, because that was my story as well. So I think that some of the work that we do, or a lot of the work that we do, in contributor strategy, the CNCF SIG Contributor Strategy, is all around thinking through how to bring on new contributors to these various Cloud Native projects, right? So we've had chats with containerd and Linkerd and a bunch of other folks across the ecosystem, as well as the kind of maintainer circle sessions that we hold, which are kind of like private, not recorded, so maintainers can kind of get raw and talk about what they're feeling, whether it be around bolstering contributions, or whether it be like managing burnout, right? Or thinking about how you talk through the values and the principles for your projects. So I think that part of that story is building for multiple use cases, right? You take Kubernetes, for example, right? So I'm emeritus chair for SIG PM over in Kubernetes, one of the subproject owners for the enhancements subproject, which involves basically, like, figuring out how we intake new enhancements to the community, but as well as, like, what the end user cases are, all of the use cases for that, right? How do we make it easy to use the technology, and how do we make it more effective for people to have conversations about how they use technology, right? So I think it's kind of a continuing story, and it's delightful to see all of the people getting involved in SIG Contributor Strategy, because it means that they care about all of the folks that are coming into their projects and making it a more welcoming and easier to contribute place. >> Yeah. That's great stuff. And one of the things you mentioned about IT in your background, and the scale change from IT and just the operational changeover, is interesting. I was just talking with a friend and we were talking about GitOps and SREs and how, in colleges, is that an engineering track or is it computer science, and it's kind of a hybrid, right? So you're seeing essentially this new operational model at scale, that's CloudOps. So you've got hybrid, you've got on-premise, you've got Cloud Native, and now soon to be multi-cloud, so new things come into play: architecture, coding, and programmability. All these things are like projects now in CNCF. And that's a lot of vendors and contributors, but as a company, the IT function is changing fast. So that's going to require more training and more involvement, and yet open source is filling the void, if you look at some of the successes out there, it's interesting. Can you comment on the companies that are out there saying, "Hey, I know my IT department is going to be turning into essentially SRE operations or CloudOps at scale." How do they get there? How could they work with KubeCon, and what's the key playbook? How would you answer that? >> Yeah, so I would say, first off, the place to go is the 101 track. We specifically craft that 101 track to make sure that people who are new to Cloud Native get a very cohesive story around what they're trying to get into, right? At any one time. So head to the 101 track, please attend the 101 track, hang out, and definitely check out all of the keynotes. Again, the keynotes, we put a lot of work into making sure these keynotes tell a very nice story about all of the technology, and the amount of work that our presenters put into it as well is phenomenal. It's top notch. It's top notch every time.
So those will always be my suggestions. Actually go to the keynotes and definitely check out the 101 track. >> Awesome. Jasmine, I got to get your take on this now that you're on the KubeCon side and you're co-chairing with Stephen. What's your story to the folks on the end user side out there that were in your old position, where you were at Delta doing some great Kubernetes work, but now it's going beyond Kubernetes. I was just talking with another participant in the KubeCon ecosystem who was saying, "It's not just Kubernetes anymore. There's other systems that we're going to deploy our real-time metrics on and whatnot." So what's the story? What's the update? What do you see on the inside now that you're on board and you're at hyperscale at Twitter? What's your advice? What's your commentary to your old friends in the end user world? >> Yeah. It's not an easy task. I think that, as you had mentioned, starting with the 101 track is, like, super key. Like, that's where you should start. There are so many great stories out there that have been told at previous KubeCons. I was listening to those stories, and the great thing about our community is that it's authentic, right? We're telling, like, all of the ways we tripped up so we can prevent you from doing the same thing and having an easier path, which is really awesome. Another thing I would say is do not underestimate the cultural shift, right? There are so many tools and technologies out there, but there's also a cultural transformation that has to happen. You're shifting from traditional IT roles to something really holistic, like, so many different things are changing about the way infrastructure is interacted with, the way developers are developing. So don't underestimate the cultural shift, and make sure you're bringing everyone to the party, because there are a lot of perspectives from the development side that need to be considered before you make the shift initially. So that way you can make sure you're approaching the problem in the right way. So those would be my recommendations. >> Also, speaking of cultural shifts, Stephen, I know this is a big passion of yours, diversity in the ecosystem. I think with COVID we've seen probably in the past two years a major cultural shift in the personnel involved, the people participating, still a lot more work to get done. Where are we on diversity in the ecosystem? How would you rate the progress and the overall achievements? >> I would say doing better, but never stopping. What has happened in COVID, I think, if you look across companies, if you look across the opportunities that have opened up for people in general, there have been plenty of doors that have shut, right? And doors that have really made the assumption that you need to be physically in person to do good work. And I think that the Cloud Native ecosystem, the work that the LF and CNCF do, and really the way that we interact in projects, has kind of pushed towards this async first, this remote first work culture, right? So you see it in these large corporations that have had to change their travel policies because of COVID, and really, for someone who's coming off being, like, a field engineer and solutions architect, right? The bread and butter is hopping on and off a plane, shaking hands, going to dinner, doing the song and dance, right? With customers. And for that model to functionally shift, right? Having conversations in different ways, right? And yeah, sometimes it's a lot of Zoom calls, right?
Zoom calls, webinars, all of these things, but I think some of what has happened is, you take the release team, for example, the Kubernetes release team. This is our first cycle with Dave Vellante, who's our 1.21 release team lead and is based in India, right? And that's the first time that we've had an APAC region release team lead, and what that forced us to do, we were already working on it, but what that forced us to do is really focus on asynchronous communication. How can we get things done without having to have people in the room? And we were like, "With Dave Vellante in here, it either works or it doesn't, like, we're either going to prove that what we've put in place works for asynchronous communication or it doesn't." And then, given that, a project of this scale can operate just fine, right? Just fine, delivering a release with people all across the globe. It proves that we have a lot of flexibility in the way that we offer opportunities, both on the open source side, as well as on the company side. >> Yeah. And I got to say KubeCon has always been global from day one. I was in Shanghai and I was in Hangzhou, visiting Alibaba. And who do I see in the lobby? The CNCF crew. And I'm like, "What are you guys doing here?" "Oh, we're here talking cloud with Alibaba." So global is huge. You guys have nailed that. So congratulations, and keep that going. Jasmine, your perspective is women in tech. I mean, you're seeing more and more focus and some great doors opening. It's still not enough. We've been covering this for a long time. Still the numbers are down, but we had a great conference recently at Stanford, Women in Data Science, an amazing conference, a lot of power players coming in, women in tech is evolving. What's your take on this? Still a lot more work to be done. You're an inspiration. Share your story. >> Yeah. We have a long way to go. There's no question about it. I do think that there are a lot of great organizations, CNCF being one of them, really doing a great job at sharing networking opportunities, encouraging other women to contribute to open source, and letting that be sort of the gateway into a tech career. My journey was starting as a systems engineer at Delta, working my way into leadership, somehow, I'm not sure how I ended up there, but really sort of shifting, and being able to lift other women up, I've been so fortunate to be able to do that. Women Who Code, being a mentor, things of that nature have been a great opportunity, but I do feel like the open source community has a long way to go to be a more welcoming place for women contributors, things like a code of conduct being very prevalent, making sure that it's not daunting and scary going into GitHub and starting to create a PR, out of fear of what someone might say about your contributions, instead of it being sort of an educational experience. So I think there are a lot of opportunities, and there are a lot of programs and networking opportunities out there, especially with everyone being remote now, that have presented themselves. So I'm very hopeful. And the CNCF, like I said, is doing a great job at highlighting these women contributors that are making changes to CNCF projects and really making it something that is celebrated, which is really great. >> Yeah. You know that I love this, Stephen, and we talked about this last time, and the Clubhouse app has come online since we were last talking, and it's all audio. So there's a lot of ideas and it's all open. So with asynchronous-first you have more access, but still, context matters.
So the language, so there are still more opportunities potentially to offend or to get it right, so this is now becoming a new cultural shift. You brought this up last time we chatted, around the language, language is important. So I think this is something that we're keeping an eye on and trying to keep open dialogue around, "Hey, it matters what you say, asynchronously or in texts." We all know that text moment where someone said, "I didn't really mean that." But it was offensive or- >> It's like you said it. (laughs) >> (murmurs) You're passionate about this here. This is super important, how we work. >> Yeah. So you mentioned Clubhouse, and it's something that I don't like. (laughs) So no offense to anyone who is behind creating new technologies, for sure. But I think that Clubhouse, if you take platforms like that, let's generalize, you take platforms like that and you think about the unintentional exclusion that those platforms involve, right? If you think about folks with disabilities who are not necessarily able to hear a conversation, right? Or you don't provide opportunities to, like, caption your conversations, right? That either intentionally or unintentionally excludes a group of folks, right? So I've seen Cloud Native, I've seen Cloud Native things happen on a Clubhouse, on a Twitter Spaces. I won't personally be involved in them until I know that it's a platform that is not exclusive. So I think that it's great that we're having new opportunities to engage with folks that are not necessarily, you've got people who prefer the Slack and Discord vibe, you've got people who prefer the text over phone calls, so to speak, thing, right? You've got people who prefer phone calls. So maybe, like, maybe Clubhouse, Twitter Spaces, insert new, I guess Discord is doing a thing too- >> They call it stages. Discord has stages, which is- >> Stages. They have stages. Okay. All right. So insert Clubhouse clone here and- >> Kube House. We've got a Kube House, come on in. >> Kube House. Kube House. >> Trivial (murmurs). >> So we've got great ways to engage there for people who prefer that type of engagement and something that is explicitly different from the I'm-on-a-Zoom-call-all-day kind of vibe. Enjoy yourselves, try to make it as engaging as possible, just realize what you may unintentionally be doing by creating a community that not everyone can be a part of. >> Yeah. Technical consequences. I mean, this is key, language matters to how you get involved and how you support it. I mean, the accessibility piece, I never thought about that. If you can't listen, I mean, you can't, there's no content there. >> Yeah. Yeah. And that's a huge part of the Cloud Native community, right? Thinking through accessibility, internationalization, localization, to make sure that our contributions are actually accessible, right? To folks who want to get involved, and not just prioritizing, let's say, the U.S. or our English speaking part of the world. >> Awesome. Jasmine, what's your take? What can we do better in the world to make diversity and inclusion not a conversation, because when it's not a conversation, then it's solved. I mean, ultimately there's a lot more work to do, but you can't be exclusive. You got to be diverse, and more and more output happens. What's your take on this? >> Yeah. I feel like there'll always be work to do in this space, because there are so many groups of people, right, that we have to take into account.
I think that thinking through inclusion in the onset of whatever you're doing is the best way to get ahead of it. There's so many different components of it and you want to make sure that you're making a space for everyone. I also think that making sure that you have a pipeline of a network of people that represent a good subset of the world is going to be very key for shaping any program or any sort of project that anyone does in the future. But I do think it's something that we have to consistently keep at the forefront of our mind always consider. It's great that it's in so many conversations right now. It really makes me happy especially being a mom with an eight year old girl who's into computer science as well. That there'll be better opportunities and hopefully more prevalent opportunities and representation for her by the time she grows up. So really, really great. >> Get her coding early, as I always say. Jasmine great to have you and Stephen as well. Good to see you. Final question. What do you hope people walk away with this year from KubeCon? What's the final kind of objective? Jasmine, we'll start with you. >> Wow. Final objective. I think that I would want people to walk away with a sense of community. I feel like the KubeCon CNCF world is a great place to get knowledge, but also an established sense of community not stopping at just the conference and taking part of the community, giving back, contributing would be a great thing for people to walk away with. >> Awesome. Stephen? >> I'm all about community as well. So I think that one of the fun things that we've been doing, is just engaging in different ways than we have normally across the kind of the KubeCon boundaries, right? So you take CNCF Twitch, you take some of the things that I can't mention yet, but are coming out you should see around and pose KubeCon week, the way that we're engaging with people is changing and it's needed to change because of how the world is right now. So I hope that to reinforce the community point, my favorite part of any conference is the hallway track. And I think I've mentioned this last time and we're trying our best. We're trying our best to create it. We've had lots of great feedback about, whether it be people playing among us on CNCF Twitch or hanging out on Slack silly early hours, just chatting it up. And are kind of like crafted hallway track. So I think that engage, don't be afraid to say hello. I know that it's new and scary sometimes and trust me, we've literally all been here. It's going to be okay, come in, have some fun, we're all pretty friendly. We're all pretty friendly and we know and understand that the only way to make this community survive and thrive is to bring on new contributors, is to get new perspectives and continue building awesome technology. So don't be afraid. >> I love it. You guys have a global diverse and knowledgeable and open community. Congratulations. Jasmine James, Stephen Augustus, co-chairs for KubeCon here on theCUBE breaking it down, I'm John Furrier for your host, thanks for watching. (upbeat music)

Published Date: May 4, 2021


Steve Canepa & Jeffrey Hammond | CUBE Conversation, December 2020


 

(upbeat music) >> From ''theCUBE studios,'' in Palo Alto, in Boston, connecting with thought leaders all around the world. This is ''theCUBE Conversation.'' >> Hi, I'm John Walls. And as we're all aware, technology continues to evolve these days at an incredible pace and it's changing the way industries are doing their business all over the world and that's certainly true in telecommunications, CSPs all around the globe are developing plans on how to leverage the power of 5G technology and their network operations are certainly central to that mission. That is the genesis of ''IBM's Cloud for Telecommunications Service.'' That's a unified open hybrid architecture, that was recently launched and was developed to provide telecoms with the solutions they need to meet their very unique network demands and needs. I want us to talk more about that. I'm joined by Steve Canepa, who is the Global GM and Managing Director of the communication sector at IBM. Steve, good to see you today. >> Yeah, you too, John. >> And Jeffrey Hammond. So, he's the Principal Analyst and Vice President at Forrester. Jeffrey, thank you for your time as well today. Good to see you. >> Thanks a lot. It's great to be here. >> Yeah, Steve, let's just jump right in. First off, I mean, to me, the overarching question is, why telecom, I know that IBM has been very focused on providing these kinds of industries specific services, you've done very well in finance, now you're shifting over to telecom. What was the driver there? >> First, great to be with you today, John, and, you know, if we look at the marketplace, especially in 2020, I think the one thing that's, everyone can agree with, is that the rate and pace of change is just really accelerating and is a very, very dynamic marketplace. And so, if we look at the way both our personal lives are now guided by connectivity, and the use of multiple devices throughout the day, the same with our professional lives. So, connectivity really sits at the heart of how value and solutions are delivered and for businesses, this is becoming a critical issue. So, as we work with the telecommunication providers around the world, we're helping them transform their business to make it much more agile, to make it open and make them deliver new services much more quickly and to engage digitally with their clients to bring that kind of experience that we all expect now, so, that the rate pace of change, and the need for the telecommunications industry to bring new value, is really driving a tremendous opportunity for us to work with them. >> Jeffrey what's happening in the telecom space? That, I mean, these aren't just small trends, right? These are tectonic shifts that are going on in terms of their new capabilities and their needs. I'm sure this digital transformation has been driven in some part by COVID, but there are other forces going on here, I would assume too. What do you see from your analyst seat? >> Yeah, I look at it, you know, from a glass half full and a glass half empty approach. From a half empty approach, the shifts to remote work and remote learning, and from traditional retail channels, brick and mortar channels to digital ones, have really put a strain on the existing networking infrastructure, especially, at the Edge, but they've also demonstrated just how critical it is to get that right. You know, as an example, I'm actually talking to you today over my hotspot on my iPhone. 
So, I think a lot more about the performance of my local cell tower now than I ever did a year ago. and I want it to be as good as it can possibly be and give me as many capabilities as it can. From a glass half full perspective, the opportunities that a modernized network infrastructure gives us are, I think, more readily apparent than ever, you know, most of my wife's doctor's appointments have shifted to remote appointments and every time she calls up to connect, I kind of cringe in the other room and it's like, are they going to get video working? Are they going to get audio working? Are they actually going to have to shift to an old-style phone call to make this happen? Well, things like 5G really are poised to solve those kinds of challenges. They promise, 5G promises, exponential improvements in connectivity speed, capacity, and reductions in latency that are going to allow us to look at some really interesting workloads, IOT workloads, automation workloads, and a lot of Edge use cases. I think 5G sets the stage or Edge compute. Expanding Edge compute scenarios, make it possible to distribute data and services where businesses can best optimize their outcomes, whether it's IOT enabled assets, whether it's connected environments, whether it's personalization, whether it's rich content, AI, or even extended reality workloads. So, you might seem like, that's what a little over the horizon, but it's actually not that far away. And as companies gain the ability to manage and analyze and localize their data, and unlocks real-time insights in a way that they just haven't had before, it can drive expanded engagement and automation in close proximity to the end point devices and customers. And none of that happens without the telco providers and the infrastructure that they own being on board and providing the capabilities for developers like me to take advantage of the infrastructure that they've put in place. So, my perspective on it is, that transformation, that digital transformation, is not going to happen on its own. Someone's got to provision the infrastructure, someone's got to write the code, someone's got to get the services as close to my cell tower or to the Edge as possible and so, that's one of the reasons that when we ask decision makers in the telco space about their priorities from a business perspective, what they tell us is, one of their top three priorities is, we need to improve our ability to innovate and the other two are, we need to grow our revenue and we need to improve our product and services. What's going on from a software perspective in the telco space, is set to make all three of those possible, from my perspective. >> You know, Steve, Jeffrey just unpacked an awful lot there, did a really nice job of that. So, let's talk about first off, that telco relationship IBM's had, or has. You work with data, the 10 largest communication service providers in the world, and I'm sure you're on this journey with them, right? They've been telling you about their challenges and you recognize their needs. This is, you have had maybe some specific examples of that dialogue, that has progressed as your relationship has matured and you provide a different service to them. What are they telling you? What did they tell you say, '' This is where we have got to get better. We've got to get a little sharper, a little leaner.'' And then how did IBM respond to that? >> Yeah, I mean, critical to what Jeffrey just shared is under the covers. 
You know, 5G is going to take five times the cost that 4G took to deploy. So, if you're a telco, you have to get much more efficient. You have to drive a much more effective TCO into the cost of deploying and managing and running that network architecture. When the network becomes a software defined platform, it opens up the opportunity to use open source, open technology, and to drive a tremendous ecosystem of innovation, so that you can then capture that value onto that open software network. And as the Edge emerges, with compute and storage and connectivity moving out to the Edge as Jeffrey described, then there's the opportunity to deliver B2B use cases that take advantage of the latency improvements with 5G, take advantage of the bandwidth capabilities that you have moving video and AI out to the Edge, so you can create insights as a service. These are the underlying transformations that the telcos are making right now to capture this value. And in fact, we have an Institute for Business Value on our website. You can see some of the surveys and analysis we've done, but 84% of the telco clients say, you know, "Improving the automation and the intelligence of this network platform becomes critical." So, from our standpoint, we see a tremendous opportunity to create an open architecture, to allow the telcos to regain control of their architecture so that they can pick the solutions and services that work best for them to create value for their customers, and that then allows them to deploy them incredibly quickly. In fact, just this last week, we announced a milestone with Bharti, a project that we're doing in India with an operator that already has over 300 million subscribers. We've taken their ability to deploy their RAN environment, one of the core domains of the network, where you actually do the access over the cell towers, and we've improved that from weeks down to a few days. In fact, our objective is to get to a few minutes. Applying that kind of automation dramatically improves the kind of service they can deliver. When we talk about relationships we have with Vodafone, AT&T, Verizon, about working with them on their mobile Edge compute platforms, it will allow them to extend their network. In fact, with our cloud announcement that you highlighted at the top, we announced a capability called IBM Cloud Satellite, and what IBM Cloud Satellite does is, it's built with Red Hat, so it's an open architecture, it takes advantage of the millions and millions of upstream developers that are developing every single day to build a foundational OpenShift architecture that allows us to deploy these services so quickly, and we can move that capability right now to the Edge. What that means for a telco is they can deploy those services wherever they want to deploy them, on their private infrastructure or on a public cloud, on a customer's premises, and that gives them the flexibility. The automation allows them to do it smartly and very quickly, and then in partnering with clients, they can create new Edge services, things like, you know, manufacturing 4.0, which you may have heard of, or, as you mentioned, advanced healthcare services. Every single industry is going to take advantage of these changes, and we're really excited about the opportunity to work in combination with the telcos and speed the pace of innovation in the market. >> Jeffrey, I'd like to go back to Bharti there. I was going to get into it a little bit later but Steve brought it up. This major Indian CSP, as you mentioned, 300 million subs, 400 million around the world.
What does that say to you in terms of its commitment and the needs that are being addressed, and how it's going to fundamentally change the way it is doing business as far as setting the pace in the telecom industry? >> Well, I think one of the things that highlights it is, you know, this isn't just a U.S. phenomenon or a European phenomenon. Indeed, in some cases we're seeing countries outside the U.S. in advance, moving faster, Switzerland as an example. We expect 90% of the population in Germany to be covered by 5G by 2025, we expect 90% of the population in South Korea to be covered by 2026, 160 million connections in China as well. So, in some ways, what's happening in the telco world is mirroring what has happened in the public cloud world, which is the world's gone flat. And that's great from a developer perspective, because that means that I don't have to learn specialized technologies or specialized services in order to look at these network infrastructure platforms as part of the addressable surface that I have. That's one of the things that I think has always held the larger developer population back and has kept them from taking advantage of the telco networks, is they've always been a bit of a black box to the vast majority of developers, you know, IP goes in, IP comes out, but that's about all the control I have, unless I want to go and dig deep into those, you know, industry specific specifications. I was cleaning out my office last week because I'm in the process of moving, and I came across my "IMS Explained" handbook from 2006, and I remember going deep into that because, you know, we were told that that's going to make it so that IT infrastructure and telco infrastructure are going to converge, and it did a little bit, but not in a way that all the developers out there could really take advantage of telco infrastructure. And then I remember the next thing was like, well, "Java ME on the front end with mobile clients, that's going to make everything different and we're going to be able to build apps everywhere." What it ended up being was we would write once and test everywhere, across all the different devices that we had to support. And you know, what really drove ubiquity? It was the iPhone, and apps that we could use HTML-like technology or that we could use Java to build, and it exploded. And we got millions of applications on the front end of the network. What I see potentially happening now is the same thing on the backend infrastructure side, because the reality is, for any developer that is trying to build modern applications, that's trying to take advantage of cloud native technologies, things start with containers and specifically OCI compliant containers. That is the basis for how we think about building services and handing them off to operators to run them for us. And with what's going on here, by building on top of OpenShift, you take that, you know, essentially de facto standard of containers as the way that we communicate on the infrastructure side globally, from a software development perspective, and you make that the entry point for developers into the modern telco ecosystem. And so, basically, it means that if I want to push all the way out to the Edge and I want to get as close as I possibly can, as long as I can give you a container to execute that capability, I'm well on the way to making that a reality, that's a game changer in my opinion. >> Yeah, I was on.
>> If I could, just to pick up on that, because I think Jeffrey made a really important point. In a way, the linchpin here is this open architecture, because it empowers the entire ecosystem and it allows the telcos to take advantage of the enormous innovation happening in the marketplace. And that's why the 35 ecosystem partners we announced when we announced the IBM Cloud for Telecommunications are so important: it gives you choice. But the other piece, which he hinted at and I want to underscore, is that in the first wave of cloud only about 20% of applications moved to cloud, and they were mostly customer-facing digital applications. In fact, we moved those kinds of digital applications onto Watson as well; there are over 1.5 billion customers of telcos around the world today who can access Watson through the various chatbot, call center, and agent assist solutions we've deployed. But the 80% of applications that haven't moved yet haven't moved because it's tough to move them. They're mission critical, they need regulatory controls, they have to have world-class security, and they need to provide data sovereignty as you operate in different countries around the world, making sure the data sits in the places you need it to. These are the attributes that open up the opportunity for all these other workloads to move, and those are exactly the capabilities we've built into the IBM Cloud for Telecommunications, so we can enable telcos to move their applications into this environment safely and securely, and do it, as Jeffrey described, on an open architecture that gives them agility and flexibility. And we're seeing it happen in real time. I'll give you another quick example: Vodafone India. Their CTO has said publicly that, in moving to this cloud architecture, he sees it as a universal cloud architecture, so they're going to run not just their internal IT workloads, not just their network services, their voice, data, and multimedia network services workloads, but also their B2B enterprise workloads, as Jeffrey was starting to describe, the workloads that are going to move out to the Edge. And by being able to run on a common platform, he has said publicly that they're seeing an 80% improvement in their CapEx, a 50% improvement in their OpEx, and a 90% improvement in the cost to get products and services deployed. So, the ability to embrace this open architecture, and to have the underlying capabilities and attributes in a cloud platform that responds to the specific needs of telco and enterprise workloads, is, we think, a really powerful combination. >> Steve, the ecosystem, and Jeffrey, you brought it up as well, so I'd like to give you a moment to talk about that a little bit. It's not a small point by any means: you have nearly 40 partners lined up, hardware vendors, software vendors, SaaS providers. It's a pretty impressive lineup, and what kind of a statement is that, from your perspective, that you're making to the marketplace when you bring that kind of breadth and depth, that kind of bench, basically, to the game? >> From our view, it's exciting, and we're only getting started.
I mean, we literally made the announcement just a couple of months ago, and every day that passes we have additional partners who see the power in joining this open architecture approach we've put in place. The reason it delivers such value for all the players is that one of the hallmarks of a platform approach is that every player who joins the platform brings value to all the other players on it. So as we build this ecosystem, leverage the open source community, and build on the power of OpenShift and containers, as Jeffrey was saying, we're creating momentum in the marketplace. And back to the very first point I made: when the market is moving really quickly, you've got to be agile, and to be agile in today's market you have to infuse automation at scale, security at scale, and intelligence at scale. That's exactly what we can help the telcos do, and do in partnership with these enterprise clients. >> One of the values of that is that we're seeing the larger trend in the cloud native space of folks who used to build packaged software essentially taking advantage of these architectural capabilities and containerizing their applications as part of their future strategy. Just two weeks ago, Salesforce basically said, we're re-envisioning Salesforce as a set of containerized workloads that we deliver, and SAP is going in very much the same direction. So as you think about these business workloads, where you get data coming from the infrastructure and you want to go all the way back to the back office and make sure that data gets updated in your supply chain management system, being able to do that with a consistent architecture makes these integration challenges an order of magnitude easier. I actually want to drill in on that data point for a minute, because I think it's also key to understanding what's going on here. During the early days of the public cloud, and even Web 2.0 before that, one of the ideas that drove Web 2.0 was that data is the new Intel Inside, and in some ways that was about centralized data, because we'd had 40 or 50 years to get all the data into the data centers and then put it in the public cloud. But that's not what is happening today. So much of the new data is actually originating at the Edge, and increasingly it needs to stay at the Edge, if for no other reason than to make sure the folks trying to use it aren't running up huge ingestion costs moving it all back to the public cloud providers, analyzing it, and pushing it back out, all within the laws of physics. So one of the big things driving the move toward the Edge, and the interest in 5G, is that it allows us to do more with data where the data originates. As an example, a manufacturer I've been working with ran into exactly that problem: as they stood up more and more connected devices, they saw their data ingestion volumes spiking and running ahead of their ingestion budgets. But they said, well, we can't just discard this data at the Edge, because what happens if it turns out to be valuable for the preventive maintenance use cases we want to run, or for the machine-wear models we want to build?
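What that edge-side reduction pattern can look like, in a toy sketch: summarize raw sensor readings locally and forward only compact summaries to the core, pulling raw data back only when something looks wrong. The field names, machine ID, and the 80.0 threshold are illustrative assumptions, not anything the manufacturer in the story actually uses.

# Toy sketch of edge-side reduction: summarize raw readings locally and ship
# only the summary, instead of streaming every data point to the core.
# All names, fields, and thresholds here are illustrative assumptions.
import json
import statistics
from typing import Iterable

def summarize_window(readings: Iterable[float], machine_id: str, window: str) -> str:
    values = list(readings)
    summary = {
        "machine_id": machine_id,
        "window": window,
        "count": len(values),
        "mean": statistics.fmean(values),
        "max": max(values),
        "min": min(values),
        # Flag the window so the core only requests raw data when it matters.
        "needs_raw_upload": max(values) > 80.0,
    }
    return json.dumps(summary)

if __name__ == "__main__":
    vibration = [12.1, 13.4, 11.9, 84.2, 12.6]  # one anomalous spike
    print(summarize_window(vibration, machine_id="press-07",
                           window="2020-12-16T10:00Z/PT5M"))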
So, we need to find a way to get our models out close to the data, so we don't have to bring it all back to the core. In retail, personalization is something a lot of folks are looking at right now, and even clienteling, and that's another situation where you want to get the data close to where the customer actually lives geographically, and into the hands of the person in the store, but you don't necessarily want to install a lot of complex hardware in the retail outlet, because then somebody has to manage those servers and all those capabilities. So, in the case of the retailer I was working with, what they wanted was to get that capability as close as possible to the store, but no closer. And the idea of essentially a virtual back office that they could stand up whenever they opened a new retail outlet, or even had a franchisee open an outlet, was an extremely powerful concept. That's the kind of thing you can do when you can say, well, it's just a set of containers, and if I have essentially a control plane to deploy it to, then I can do that on top of the telco provider they've signed up with as a strategic services provider. There are lots of other interesting scenarios. Tourism: think about the tourist economies we have around the world and the data that mobile devices throw off, which lets us get anonymized information about who's coming, where they're going, what they're spending, and how long they're staying; there's a huge set of data there you can use to grow revenue. Or transportation: we see municipal governments looking at how they can use anonymized data around commute patterns to inform their planning. That's all data that's coming from the telco infrastructure. >> You know, when we're talking about all these massive advantages of this hybrid cloud approach, build once and deploy anywhere, easy management, efficient management, all of these things, Steve, I think we'd be derelict in our duty if we didn't talk about security a little bit. Ultimately, at the end of the day, you've got to provide, as you pointed out, a world-class secure environment. So, in terms of the hybrid approach, what kind of considerations do you have to make that are special to that, in what's being deployed and what has to be considered? >> You know, that's a great point. One of the benefits to communications providers of moving to an open architecture is that you componentize the framework of that architecture, and you have suppliers supplying applications for the various services we just talked through, so the ability to integrate security becomes a foundational element of the entire architecture. We've stayed very aligned with the industry framework architectures in the way we've worked with the telcos in bringing forth a solution, because we specifically want them to have choice, but that choice has to be married with the kind of security you just talked about. And to Jeffrey's point, when you move those applications and that data out to the Edge, many of the analysts are now saying that by 2025 as much as 75% of the data created in the world will be created at the Edge. So, this is a massive shift.
And when that shift occurs, you have to have the security to make sure you're going to take care of that data the way it should be taken care of, in a way that meets all the regulatory and governance rules. So that becomes really critical. The other piece, though, is just the amount of value that gets created. The reason the data is at the Edge is that now you can act on it at the Edge; you can extract insights. In fact, most of the analysts will say that in the next three years we'll see $675 billion of new value created at the Edge with these kinds of applications. And going back to the manufacturing example, we're already working today with manufacturers that have hundreds of IoT sensors deployed in the factory, and we have an Edge Application Manager that extends right out to the far Edge, if you will, right out onto that factory floor, to help get intelligence from those devices. Now think about adding to that AI and video capabilities: watching the manufacturing line to make sure every product that comes off it is absolutely perfect, watching the employees to make sure they're staying in safety zones, watching the actual equipment itself to make sure it's performing the way it's supposed to, maybe using analytics and AI to predict issues before they even happen, so you can take preventive action. This kind of intelligence makes the business run smarter, faster, and more effectively, and that's where we see tremendous value. So it's not just the fact that data will be created, and that it will be higher-fidelity data, including unstructured data like video, image, and audio, but the ability to then extract insights and value out of it with analytics and AI. And that's why the ecosystem we talked about earlier, our partnership with the telcos, and the ability to bring in ecosystem partners who can add value give us tremendous momentum to build on. >> Well, the market opportunity is certainly great. As you pointed out, there's a lot of additional value yet to be created, significant value, and obviously a lot of money to be spent as well by telcos, by some estimates a hundred billion dollars plus just by the year 2022, in getting these new software-defined platforms up and running. So, congratulations to IBM on this launch, and we wish you continued success, Steve, in that endeavor. Thank you for your time, and Jeffrey, thank you as well for your insights from Forrester. >> Always a pleasure. (upbeat music)

Published Date : Dec 16 2020



Chris Aniszczyk, CNCF and JR Storment, FinOps Foundation | KubeCon + CloudNativeCon NA 2020


 

>>From around the globe, it's theCUBE, with coverage of KubeCon and CloudNativeCon North America 2020 Virtual, brought to you by Red Hat, the Cloud Native Computing Foundation, and ecosystem partners. Welcome back to theCUBE's virtual coverage of KubeCon CloudNativeCon 2020. It's virtual this year, we're not face to face. Normally we're in person, where we have great interviews, everyone jamming in the hallways, having a good time talking tech, identifying the new projects and new vendors. So we're not there, we're remote. I'm John Furrier, your host. We've got two great guests, both Cube alumni: Chris Aniszczyk, Chief Technology Officer of the CNCF. Chris, welcome back, great to see you, thanks for coming on. Appreciate it. >>Awesome. Glad to be here. >>And, of course, another Cube alumni who has been in studio, but we haven't had him at a show: JR Storment, Executive Director of the FinOps Foundation. And that's the purpose of this session: an interesting data point we're going to dig into, how cloud has been enabling more communities, more networks of practitioners who are working together, and it's also a success point, Chris, for the CNCF vision, which has been playing out beautifully. So we're looking forward to digging in. JR, thanks for coming on, great to see you. >>Yeah, great to be here. Thanks, John. >>So, first of all, I want to get the facts out there. I think this is a really important story that people should pay attention to. The FinOps Foundation that JR is running is a really interesting success point because it's not the CNCF, okay, it's a practitioner community that builds on cloud. The experience and community you have is doing specific things that are, I won't say narrow, but specific to certain FinOps concerns. But it's really about the success of cloud. Can you take a minute to explain: what is the FinOps Foundation, and how does it relate to the CNCF? >>Yeah, definitely. So if you think about the shift we've had to companies deploying primarily in cloud, whether it be containers, as the CNCF focuses on, or traditional infrastructure, the thing people typically focus on is the technology, the innovation, and the speed to market in all those areas. But invariably companies hit what we like to call the spend panic moment, where they realize they're spending much more than they expected. More importantly, they don't really have the processes in place, or the people, or the tools, to fully understand where their costs are going, to look at how to optimize them, and to operate that in their organizations. So the FinOps Foundation is really focused on the people and practitioners who are in organizations doing cloud financial management: those who drive accountability for this variable spend model that now exists. We've been partnering very closely with the CNCF, and we're now actually part of the Linux Foundation as of a few months ago. And just to put into context how we interact: whereas the CNCF is very focused on open source, coordinated projects, for example, Spotify just launched their Backstage cloud cost management tooling into the CNCF, Spotify folks on our end are working on the best practices around cloud financial management and the standards to go along with that.
So we're there to help define this cultural transformation, which is a shift to: engineers now have to think about costs in a way they never did before, finance people have to partner with technology teams at the speed of cloud, and executives have to make trade-off decisions and really change the way they operate the business with this variable, pay-as-you-go, engineers-have-all-the-access-to-spend-the-money cloud model. >>Hey, a blank check for engineers, who doesn't like that? You've got to rein that in. It's like shift left for security, and now you've got to deal with the financial piece, FinOps. It's really important, and it's a super point, Chris. In all seriousness, kidding aside, this is exactly the kind of thing you see with open source. You're seeing things like shift left, where you want security baked in. JR has done a fabulous job with his community, now part of the Linux Foundation and scaling up, and there are important things to nail down that are specific to that domain and related to cloud. What are your thoughts on this? Because you're seeing it play out. >>Yeah, I talk to a lot of our end user members and companies that have been adopting cloud native, and I have lots of friends who run cloud infrastructure at companies. And just as JR said, eventually there's been a lot of success with cloud native, people start using a lot of these services, and your bills are a little higher than you expect. You actually have trouble figuring out who's using what, because, let's be honest, a lot of the clouds have built amazing services, but the financial management, cost management, accounting, and chargeback tooling is not really built in well. So I kept noticing this issue where it's like, great, everyone's using all these services, everything is great, but costs are a little bit confusing and hard to manage. And, serendipitously, I ran into JR and his community, because my community had a need: there just weren't good tools, standards, or practices out there, and the FinOps Foundation was working on exactly these things. So we found a way to work together under the same umbrella, under the Linux Foundation. In my personal opinion, you'll see more and more standards and tools created in this space. There are very few specifications or standards for getting cost data out of different clouds and tools, so I predict a lot more work is going to be done, whether it's in the FinOps Foundation itself, in the CNCF, or, more likely, as a collaboration among communities, to truly figure this out, so engineers have an easier understanding of, if I spin up this service or experiment, how much is it actually going to impact the cost of things. For a while, engineers just didn't think about this. When I was at Twitter, we spun up services all the time without really caring about cost, and that's happening at a lot of small companies now, which don't necessarily have as big a budget. So I'm excited about the space. I think you're going to see a huge amount of focus on cloud financial management in the near future. >>Chris, thanks for that great insight. I think you've got a great perspective.
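The chargeback gap Chris describes usually gets closed with a basic showback report: roll the provider's billing export up by a team tag so engineering and finance are looking at the same numbers. A minimal sketch follows; the file name and column names ("tag_team", "unblended_cost") are hypothetical placeholders rather than any real provider's schema.

# Minimal showback sketch: roll a cloud billing export up by team tag.
# The file name and column names are assumptions, not a real provider schema.
import csv
from collections import defaultdict

def spend_by_team(billing_csv: str) -> dict:
    totals = defaultdict(float)
    with open(billing_csv, newline="") as f:
        for row in csv.DictReader(f):
            team = row.get("tag_team") or "untagged"
            totals[team] += float(row["unblended_cost"])
    return dict(totals)

if __name__ == "__main__":
    for team, cost in sorted(spend_by_team("billing_export.csv").items(),
                             key=lambda kv: kv[1], reverse=True):
        print(f"{team:20s} ${cost:,.2f}")

The size of the "untagged" bucket is itself a useful number: it shows how much spend nobody is yet accountable for.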
You know, in some cases it's a fast and loose environment, like Twitter; you mentioned you've got kind of a blank check and the rocket ship's going. But JR, this brings up two kinds of points: the code side of it, the software piece where people are building code, but also the human error. I mean, we were playing with clouds ourselves; we have a big media cloud on Amazon, and we left one of the buckets open on the switches and Elemental, and we were getting charged massive amounts of cash. We were like, wait a minute, we're not even using this thing; we used it once and left it open. It was like water flowing through the pipes and charging us. So there's this human error of throwing the wrong switch. I mean, it was simply one configuration error, and in some cases it's just about planning and thinking about prototypes. >>Yeah. So take what your experience there was and multiply it by a thousand development teams in a big organization who all have access to cloud. And this isn't really about a set of new technologies; it's about a new set of processes and a cultural change, as Chris mentioned. Engineers are now thinking about cost, and this is a whole new efficiency metric for them to manage. Finance teams now see this world where tomorrow the cost could go three x, and the next day it could go down; you've got things spinning up by the second. So there's a whole set of cross-functional work, and the majority of the work our members do is really around: how do we get these cross-functional teams working together, and how do we get each team up-leveled on what they need to understand about cloud? Because not only is it highly variable, it's highly decentralized now, and we're seeing cloud hit these material spend levels, where the big cloud spenders out there are spending high nine figures in some cases, and it's now material for their businesses.
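Scaled to a thousand teams, the "bucket left open" story becomes a hygiene problem that has to be automated. A rough sketch of that kind of guardrail check is below; the inventory file format, the "owner" and "last_used" fields, and the 14-day idle threshold are all assumptions for illustration, not a prescribed standard.

# Sketch of a basic hygiene check in the spirit of the "bucket left open" story:
# flag resources that have no owner tag or that have been idle for a while.
# The inventory format and the 14-day threshold are assumptions for illustration.
import csv
from datetime import datetime, timedelta

IDLE_AFTER = timedelta(days=14)

def flag_resources(inventory_csv: str, now: datetime) -> list:
    flagged = []
    with open(inventory_csv, newline="") as f:
        for row in csv.DictReader(f):
            last_used = datetime.fromisoformat(row["last_used"])
            problems = []
            if not row.get("owner"):
                problems.append("no owner tag")
            if now - last_used > IDLE_AFTER:
                problems.append(f"idle since {row['last_used']}")
            if problems:
                flagged.append((row["resource_id"], ", ".join(problems)))
    return flagged

if __name__ == "__main__":
    for resource_id, reason in flag_resources("inventory.csv", datetime(2020, 11, 19)):
        print(f"{resource_id}: {reason}")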
And so the cloud providers air actually largely incentivized to say, Yeah, we want people to be officially don't understand this And so it's been a great collaboration with those companies. As you said, you know, aws, Google, that you're certainly really focused in this area and ship more features and more data for you. It's >>really about getting smart. I mean, you know, they no, >>you could >>do it. I mean, remember the old browser days you could switch the default search engine through 10 menus. You could certainly find the way if you really wanted to dig in and make policy a simple abstraction layer feature, which is really a no brainer thing. So I think getting smarter is the right message. I want to get into the synergy Chris, between this this trend, because I think this points to, um kind of what actually happened here if you look at it at least from my perspective and correct me if I'm wrong. But you had jr had a community of practitioners who was sharing information. Sounds like open source. They're talking and sharing, you know? Hey, don't throw that switch. Do This is the best practice. Um, that's what open communities do. But now you're getting into software. You have to embed cost management into everything, just like security I mentioned earlier. So this trend, I think if you kind of connect the dots is gonna happen in other areas on this is really the synergy. Um, I getting that right with CNC >>f eso The way I see it is, and I dream of a future where developers, as they develop software, will be able to have some insight almost immediately off how much potential, you know, cost or impact. They'll have, you know, on maybe a new service or spinning up or potentially earlier in the development cycle saying, Hey, maybe you're not doing this in a way that is efficient. Maybe you something else. Just having that feedback loop. Ah lot. You know, closer to Deb time than you know a couple weeks out. Something crazy happens all of a sudden you notice, You know, based on you know, your phase or financial folks reaching out to you saying, Hey, what's going on here? This is a little bit insane. So I think what we'll see is, as you know, practitioners and you know, Jr spinoffs, foundation community, you know, get together share practices. A lot of them, you know, just as we saw on sense. Yeah, kind of build their own tools, models, abstractions. And, you know, they're starting to share these things. And once you start sharing these things, you end up with a you know, a dozen tools. Eventually, you know, sharing, you know, knowledge sharing, code sharing, you know, specifications. Sharing happens Eventually, things kind of, you know, become de facto tools and standards. And I think we'll see that, you know, transition in the thin ops community over the next 12 to 4 months. You know, very soon in my thing. I think that's kind of where I see things going, >>Jr. This really kind of also puts a riel, you know, spotlight and illustrates the whole developer. First cliche. I mean, it's really not a cliche. It's It's happening. Developers first, when you start getting into the calculations of our oi, which is the number one C level question is Hey, what's the are aware of this problem Project or I won't say cover your ass. But I mean, if someone kind of does a project that it breaks the bank or causes a, you know, financial problem, you know, someone gets pulled out to the back would shed. 
So, you know, here you're you're balancing both ends of the spectrum, you know, risk management on one side, and you've got return on investment on the other. Is that coming out from the conversation where you guys just in the early stages, I could almost imagine that this is a beautiful tailwind for you? These thes trends, >>Yeah. I mean, if you think about the work that we're doing in our practice you're doing, it's not about saving money. It's about making money because you actually want empower those engineers to be the innovation engines in the organization to deliver faster to ship faster. At the same time, they now can have, you know, tangible financial roo impacts on the business. So it's a new up leveling skill for them. But then it's also, I think, to Christmas point of, you know, people seeing this stuff more quickly. You know what the model looks like when it's really great is that engineers get near real time visibility into the impact of their change is on the business, and they can start to have conversations with the business or with their finance partners about Okay, you know, if you want me to move fast, I could move fast, But it's gonna cost this if you want me to optimize the cost. I could do that or I can optimize performance. And there's actually, you know, deeper are like conversation the candidate up. >>Now I know a lot of people who watch the Cube always share with me privately and Chris, you got great vision on this. We talked many times about it. We're learning a lot, and the developers are on the front lines and, you know, a lot of them don't have MBAs and, you know they're not in the business, but they can learn quick. If you can code, you can learn business. So, you know, I want you to take a minute Jr and share some, um, educational knowledge to developers were out there who have to sit in these meetings and have to say, Hey, I got to justify this project. Buy versus build. I need to learn all that in business school when I had to see s degree and got my MBA, so I kind of blended it together. But could you share what the community is doing and saying, How does that engineer sit in the meeting and defend or justify, or you some of the best practices what's coming out of the foundation? >>Yeah, I mean, and we're looking at first what a core principles that the whole organization used to line around. And then for each persona, like engineers, what they need to know. So I mean, first and foremost, it's It's about collaboration, you know, with their partners andan starting to get to that world where you're thinking about your use of cloud from a business value driver, right? Like, what is the impact of this? The critical part of that? Those early decentralization where you know, now you've got everybody basically taking ownership for their cloud usage. So for engineers, it's yes, we get that information in front of us quickly. But now we have a new efficiency metric. And engineers don't like inefficiency, right? They want to write fishing code. They wanna have efficient outcomes. Um, at the same time, those engineers need to now, you know, have ah, we call it, call it a common lexicon. Or for Hitchhiker's Guide to the Galaxy, folks. Ah, Babel fish that needs to be developed between these teams. So a lot of the conversations with engineers right now is in the foundation is okay. What What financial terms do I need to understand? To have meaningful conversations about Op X and Capex? 
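One of the OpEx conversations JR describes, whether to make a commitment to a cloud provider (a reserved instance, savings plan, or committed-use discount) rather than stay on demand, can be sketched as a back-of-the-envelope calculation. The hourly rates below are made up for illustration and are not any provider's actual pricing; real decisions also factor in term length, upfront payment, and cost of capital.

# Back-of-the-envelope sketch of a commitment decision: at what utilization
# does a one-year committed rate beat staying on demand? Rates are made up.
def breakeven_utilization(on_demand_hourly: float, committed_hourly: float) -> float:
    # A commitment is paid for every hour whether used or not, so it wins once
    # expected utilization exceeds committed_rate / on_demand_rate.
    return committed_hourly / on_demand_hourly

def yearly_cost(hourly: float, utilization: float = 1.0, committed: bool = False) -> float:
    hours = 24 * 365
    return hourly * hours * (1.0 if committed else utilization)

if __name__ == "__main__":
    on_demand, committed = 0.40, 0.26  # $/hour, illustrative only
    print(f"break-even utilization: {breakeven_utilization(on_demand, committed):.0%}")
    for u in (0.5, 0.65, 0.9):
        od = yearly_cost(on_demand, utilization=u)
        co = yearly_cost(committed, committed=True)
        print(f"utilization {u:.0%}: on-demand ${od:,.0f} vs committed ${co:,.0f}")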
And what I'm going to make a commitment to a cloud provider like a committed use discount, Google or reserved instance or savings Planet AWS. You know, Is it okay for me to make that? What? How does that impact our, you know, cost of capital. And then and then once I make that, how do I ensure that I could work with those teams to get that allocated and accounted? The right area is not just for charge back purposes, but also so that my teams can see my portion of the estate, right? And they were having the flip side of that conversation with all the finance folks of like, You need to understand how the variable cloud, you know, model works. And you need to understand what these things mean and how they impact the business. And then all that's coming together. And to the point of like, how we're working with C and C f you know, into best practices White papers, you know, training Siri's etcetera, sets of KP eyes and capabilities. Onda. All these problems have been around for years, and I wouldn't say they're solved. But the knowledge is out there were pulling it together. The new level that we're trying to talk with the NCF is okay. In the old world of Cloud, you had 1 to 1 use of a resource. You're running a thing on an instance in the new world, you're running in containers and that, you know, cluster may have lots of pods and name spaces, things inside of it that may be doing lots of different workloads, and you can no longer allocate. I've got this easy to instance and this storage to this thing it's now split up and very ephemeral. And it is a whole new layer of virtualization on top of virtual ization that we didn't have to deal with before. >>And you've got multiple cloud. I'll throw that in there, just make another dimension on it. Chris, tie this together cause this is nice energy to scale up what he's built with the community now, part of the Linux Foundation. This fits nicely into your vision, you know, perfectly. >>Yeah, no, 100% like, you know, so little foundation. You know, as you're well, well aware, is just a federation of open source foundations of groups working together to share knowledge. So it definitely fits in kind of the little foundation mission of, you know, building the largest share technology investment for, you know, humankind. So definitely good there with my kind of C and C f c T o hat, you know, on is, you know, I want to make sure that you know, you know my community and and, you know, the community of cloud native has access and, you know, knowledge about modern. You know, cloud financial management practices out there. If you look at some of the new and upcoming projects in ciencia things like, you know, you know, backstage, which came out of Spotify. They're starting to add functionality that, you know, you know, originally backstage kind of started out as this, you know, everyone builds their own service catalog to go catalog, and you know who owns what and, you know and all that goodness and developers used it. And eventually what happened is they started to add cost, you know, metrics to each of these services and so on. So it surfaces things a little bit closer, you know, a depth time. So my whole goal is to, you know, take some of these great, you know, practices and potential tools that were being built by this wonderful spinoffs community and trying to bring it into the project. You know, front inside of CNC F. So having more projects either exposed, you know, useful. 
You know, Finn, ops related metrics or, you know, be able to, you know, uh, you know, tool themselves to quickly be able to get useful metrics that could be used by thin ox practitioners out there. That's my kind of goal. And, you know, I just love seeing two communities, uh, come together to improve, improve the state of the world. >>It's just a great vision, and it's needed so and again. It's not about saving money. Certainly does that if you play it right, but it's about growth and people. You need better instrumentation. You need better data. You've got cloud scale. Why not do something there, right? >>Absolutely. It's just maturity after the day because, you know, a lot of engineers, you know, they just love this whole like, you know, rental model just uses many Resource is they want, you know, without even thinking about just basic, you know, metrics in terms of, you know, how many idle instances do I have out there and so, like, people just don't think about that. They think about getting the work done, getting the job done. And if they anything we do to kind of make them think a little bit earlier about costs and impact efficiency, charge back, you know, I think the better the world isn't Honestly, you know, I do see this to me. It's It's almost like, you know, with my hippie hat on. It's like Stephen Green or for the more efficient we are. You know, the better the world off cloud is coming. Can you grow? But we need to be more efficient and careful about the resource is that we use in sentencing >>and certainly with the pandemic, people are virtually you wanted mental health, too. I mean, if people gonna be pulling their hair out, worrying about dollars and cents at scale, I mean, people are gonna be freaking out and you're in meetings justifying why you did things. I mean, that's a time waster, right? I mean, you know, talking about wasting time. >>I have a lot of friends who, you know, run infrastructure at companies. And there's a lot of you know, some companies have been, you know, blessed during this, you know, crazy time with usage. But there is a kind of laser focused on understanding costs and so on and you not be. Do not believe how difficult it is sometimes even just to get, you know, reporting out of these systems, especially if you're using, you know, multiple clouds and multiple services across them. It's not. It's non trivial. And, you know, Jared could speak to this, But, you know, a lot of this world runs in like terrible spreadsheets, right and in versus kind of, you know, nice automated tools with potential, a p I. So there's a lot of this stuff. It's just done sadly in spreadsheets. >>Yeah, salute the flag toe. One standard to rally around us. We see this all the time Jr and emerging inflection points. No de facto kind of things develop. Kubernetes took that track. That was great. What's your take on what he just said? I mean, this is a critical path item for people from all around. >>Yeah, and it's It's really like becoming this bigger and bigger data problem is well, because if you look at the way the clouds are building, they're building per seconds and and down to the very fine grain detail, you know, or functions and and service. And that's amazing for being able to have accountability. But also you get people with at the end of the month of 300 gigabyte billing files, with hundreds of millions of rows and columns attached. So, you know, that's where we do see you companies come together. 
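A billing export with hundreds of millions of rows has to be processed programmatically rather than opened in a spreadsheet; the usual approach is to stream it in chunks and keep only small aggregates in memory. A rough sketch, assuming pandas and placeholder column names ("service", "cost"), since real cost-and-usage reports differ by provider:

# Sketch of working with a billing export far too big for a spreadsheet:
# stream it in chunks and keep only a small aggregate in memory.
# Column names are placeholders; real cost-and-usage reports differ by provider.
import pandas as pd

def monthly_spend_by_service(path: str, chunk_rows: int = 1_000_000) -> pd.Series:
    totals = None
    for chunk in pd.read_csv(path, usecols=["service", "cost"], chunksize=chunk_rows):
        part = chunk.groupby("service")["cost"].sum()
        totals = part if totals is None else totals.add(part, fill_value=0.0)
    return totals.sort_values(ascending=False)

if __name__ == "__main__":
    print(monthly_spend_by_service("cost_and_usage_2020_11.csv").head(10))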
So yeah, it is a spreadsheet problem, but you can now no longer open your bill in a spreadsheet because it's too big. Eso you know, there's the native tools are doing a lot of work, you know, as you mentioned, you know, AWS and Azure Google shipping a lot. There's there's great, you know, management platforms out there. They're doing work in this area, you know, there's there's people trying to build their own open source the things like Chris was talking about as well. But really, at the end of the day like this, this is This is not a technology. Changes is sort of a cultural shift internally, and it's It's a lot like the like, you know, move from data center to cloud or like waterfall to Dev ops. It's It's a shift in how we're managing, you know, the finances of the money in the business and bringing these groups together. So it it takes time and it takes involvement. I'm also amazed I look like the job titles of the people who are plugged into the Phenoms Foundation and they range from like principal engineers to tech procurement. Thio you know, product leaders to C. T. O. S. And these people are now coming together in the classic to get a seat at the table right toe, Have these conversations and talk about not How do we reduce, you know, cost in the old eighties world. But how do we work together to be more quickly to innovate, to take advantage of these cognitive technologies so that we could be more competitive? Especially now >>it's automation. I mean, all these things are at play. It's about software. I mean, software defined operations is clearly the trend we've been covering. You guys been riding the wave cloud Native actually is so important in all these modern APS, and it applies to almost every aspect of stacks, so makes total sense. Great vision. Um, Chris props to you for that, Jr. Congratulations on a great community, Jerry. I'll give you the final word. Put a plug in for the folks watching on the fin ops Foundation where you're at. What are you looking to do? You adding people, What's your objectives? Take a minute to give the plug? >>Yeah, definitely. We were in open source community, which means we thrive on people contributing inputs. You know, we've got now almost 3000 practitioner members, which is up from 1500 just this this summer on You know, we're looking for those who have either an interesting need to plug into are checked advisory council to help define standards as part of this event, The cognitive gone we're launching Ah, white paper on kubernetes. Uh, and how to do confidential management for it, which was a collaborative effort of a few dozen of our practitioners, as well as our vendor members from VM Ware and Google and APP Thio and a bunch of others who have come together to basically defined how to do this. Well, and, you know, we're looking for folks to plug into that, you know, because at the end of the day, this is about everybody sort of up leveling their skills and knowledge and, you know, the knowledge is out there, nobody's head, and we're focused on how toe drive. Ah, you know, a central collection of that be the central community for it. You enable the people doing this work to get better their jobs and, you know, contribute more of their companies. So I invite you to join us. You know, if your practitioner ITT's Frito, get in there and plug into all the bits and there's great slack interaction channels where people are talking about kubernetes or pinups kubernetes or I need to be asked Google or where we want to go. 
So I hope you consider joining in the community and join the conversation. >>Thanks for doing that, Chris. Good vision. Thanks for being part of the segment. And, as always, C N C F. This is an enablement model. You throw out the soil, but the 1000 flowers bloom. You don't know what's going to come out of it. You know, new standards, new communities, new vendors, new companies, some entrepreneur Mike jump in this thing and say, Hey, I'm gonna build a better tool. >>Love it. >>You never know. Right? So thanks so much for you guys for coming in. Thanks for the insight. Appreciate. >>Thanks so much, John. >>Thank you for having us. >>Okay. I'm John Furry, the host of the Cube covering Coop Con Cloud, Native Con 2020 with virtual This year, we wish we could be there face to face, but it's cute. Virtual. Thanks for watching

Published Date : Nov 19 2020



Sam Werner, IBM and Brent Compton, Red Hat | KubeCon + CloudNativeCon NA 2020


 

>>From around the globe, it's theCUBE, with coverage of KubeCon and CloudNativeCon North America 2020 Virtual, brought to you by Red Hat, the Cloud Native Computing Foundation, and ecosystem partners. Hey, welcome back, everybody. Jeffrey here with theCUBE, coming to you from our Palo Alto studios with our ongoing coverage of KubeCon CloudNativeCon 2020 North America. Of course, it's virtual, like everything else in 2020, but we're excited to be back. It's a terrific show, and we're excited about our next guests, so let's introduce them. We've got Sam Werner, VP of Offering Management and Business Line Executive for Storage at IBM. Sam, great to see you. >>Great to be here. >>And also joining us is Brent Compton. He's Senior Director of Data Services at Red Hat. Great to see you, Brent. >>Thank you. >>So let's jump into it. Cloud native: everything's about cloud native, everything's about containers, everything is about containerization and flexibility. But then there's this thing in the back end called storage. We actually have to keep this stuff, record this stuff, and have data protection and business resiliency for this stuff. So I'd love to jump into it. Where does storage fit within a container world, and how has the growth and adoption of containers made you rethink the way you think about storage, and the way clients think about storage? Sam, let's start with you. >>I mean, it's a great question, and first off, I'm really excited about another KubeCon. We did Europe, and now we're doing North America, so I'm very excited to see all the news and all the people talking about the advancements around Kubernetes. Now, you asked a very good and important question. We're seeing an acceleration of digital transformation, and the people going through this digital transformation are using containers to modernize the rest of their infrastructure. The interesting thing, though, is that those initiatives are being driven out of the application teams, the business lines in an organization, and a lot of them don't understand that there's a lot of complexity to this storage piece. So the storage teams I talk to are all of a sudden getting these initiatives thrown at them, kind of halfway into the strategy, and they're scratching their heads trying to figure out how they can support these applications with persistent storage, because that's not where containers started; they started with microservices. So now they're in a quandary: they have to deliver a certain SLA to their customers, and they're trying to figure out how to do it in this new environment, which in a lot of cases has been designed outside of their scope. They're seeing issues with data protection, and some of the core things they've been dealing with for years they now have to solve all over again. That's what we're working on helping them with: reinventing how storage is deployed so they can deliver the same level of security, availability, and everything else they had in the past in these new environments. >>Right. So Brent, you've been involved in this for a long time. You've worked in hyperconverged, you've worked in big data, and the evolution of big data continues to change as, ultimately, we want to get people the information to make good decisions. But we've gone through a lot of iterations over the years.
So how is it different now with containers? What can you finally do, as an architect, that we couldn't do before? >>Infrastructure as code. That's, I think, one of the fundamental differences for the storage admin of yesteryear versus the storage admin of today. As Sam mentioned, as people develop and deploy applications, those applications need to dynamically provision the infrastructure: dynamically provision what they need from compute, dynamically provision what they need from storage, dynamically provision network paths. That element of infrastructure as code, a dynamically provisioned infrastructure, is very different from yesterday, when applications or teams that needed storage would file a ticket and typically wait. Now they make an API call, and storage is dynamically provisioned and provided to their application. >>But here's what I think is hard to understand for the layman, and maybe it's just me. It's very easy to understand dynamic infrastructure around compute: I'm Pepsi, I'm running an ad for the Super Bowl, I know how many people are going to hit my site, and it's kind of easy to understand dynamic provisioning around networking for the same example. What's less easy to understand is dynamic provisioning for storage. It's one thing to say there's a pool of storage resources that I'm going to dynamically provision for this particular app at this particular moment. But part of what makes it dynamic is not only that it's available when you need it, but that I can make it big and, conversely, make it smaller or have it go away. I get that for servers, and I kind of get that for networking supporting an application in the example I just talked about. But it doesn't go away a lot of the time for storage, right? That's important data that may be feeding another process, and there are all kinds of rules and regulations. So when you talk about dynamic infrastructure for storage, it makes a lot of sense for grabbing some to provision for a new application, but it's hard to understand in terms of true dynamics, scaling down as well as scaling up, or turning it off when I don't particularly need that much capacity, or even that application, right now. How does it work for storage versus, say, servers that I'm just grabbing and then putting back in the pool?
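Brent's "make an API call and storage is provisioned" point, in a minimal sketch: request a persistent volume claim and let whatever provisioner backs the named storage class satisfy it. The storage class name, size, and namespace below are placeholders and not a specific vendor's configuration.

# A small sketch of "storage as an API call": request a persistent volume claim
# and let the cluster's provisioner satisfy it. The storage class name and size
# are placeholders, not a specific vendor's configuration.
from kubernetes import client, config

def request_volume(name: str, size: str = "50Gi",
                   storage_class: str = "fast-block", namespace: str = "default") -> None:
    config.load_kube_config()
    core = client.CoreV1Api()

    pvc = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name=name),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteOnce"],
            storage_class_name=storage_class,
            resources=client.V1ResourceRequirements(requests={"storage": size}),
        ),
    )
    core.create_namespaced_persistent_volume_claim(namespace=namespace, body=pvc)

if __name__ == "__main__":
    request_volume("orders-db-data")

Deleting the claim releases the capacity according to the class's reclaim policy, which is where the "does it really go away" question in the next exchange comes in.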
So with the IBM flash system, family with our spectrum virtualized software were actually able to deploy that storage layer not only on Prem on our award winning a race, but we can also do it in the cloud. So we allow you to take your existing infrastructure investments and integrate that into your communities environment and using things like danceable, fully automated environment. I'll get into data protection before we're done talking. But I do want Brent to talk a bit about how container native storage comes into that next as well. On how you can start building out new environments for, uh, for your applications. >>Yeah, What the two of you are alluding to is effectively kubernetes services layer, which is not storage. It consumes storage from the infrastructure, Assam said. Just because people deploy Kubernetes cluster doesn't mean that they go out and get an entirely new infrastructure for that. If they're deploying their kubernetes cluster on premises, they have servers. If they're deploying their kubernetes cluster on AWS or an azure on G C P. They have infrastructure there. Uh, what the two of you are alluding to is that services layer, which is independent of storage that can dynamically provisioned, provide data protection services. As I mentioned, we have good stuff to talk about their relative to data protection services for kubernetes clusters. But that's it's the abstraction layer or data services layer that sits on top of storage, which is different. So the basics of storage underneath in the infrastructure, you know, remain the same, Jeff. But the how that storage is provisioned and this abstraction layer of services which sits on top of the storage storage might be IBM flash system array storage, maybe E m c sand storage, maybe a W S E B s. That's the storage infrastructure. But this abstraction layer that sits on top this data services layer is what allows for the dynamic interaction of applications with the underlying storage infrastructure. >>And then again, just for people that aren't completely tuned in, Then what's the benefit to the application developer provider distributor with that type of an infrastructure behind And what can they do that they just couldn't do before? >>Well, I mean Look, we're, uh, e I mean, we're trying to solve the same problem over and over again, right? It's always about helping application developers build applications more quickly helps them be more agile. I t is always trying to keep up with the application developer and always struggles to. In fact, that's where the emergency cloud really came from. Just trying to keep up with the developer eso by giving them that automation. It gives them the ability to provision storage in real time, of course, without having open a ticket like friends said. But really, the Holy Grail here is getting to a developed once and deploy anywhere model. That's what they're trying to get to. So having an automated storage layer allows them to do that and ensure that they have access to storage and data, no matter where their application gets it >>right, Right, that pesky little detail. When I have to develop that up, it does have to sit somewhere and and I don't think storage really has gotten enough of of the bright light, really in kind of this app centric, developer centric world, we talk all the time about having compute available and and software defined networking. But you know, having this software defined storage that lives comfortably in this container world is pretty is pretty interesting. 
>> Because not only do they want it to perform, they presume performance, right? Best in class quickly becomes the presumed baseline in a very short period of time, so you've just got to deliver the goods or they get frustrated and unproductive. But I wanted to shift gears a little bit and talk about some of the macro trends. We're here toward the end of 2020, and obviously COVID has had a huge impact on business in a lot of different ways. It's evolved from the light-switch moment in March, when everybody went to work from home, to this extended period that's probably going to go on for a while. I'm curious what you've seen with your customers, not so much at the beginning, because that was a special and short period of time, but as this has extended: what's the impact of increased work from home and the increased attack surface, and what other macro trends beyond containerization are you seeing affect your world? Start with you, Sam.

>> I don't think it's actually changed what people were going to do, or their strategy. What I've seen it do is accelerate things, and maybe change how they get there. A lot of enterprises are running into challenges more quickly than they thought they would, so they're coming to us and asking for help, for example with backing up their data in these container environments. As they move mission-critical applications that might otherwise have moved more slowly, they're realizing they can't get the level of data protection they need. That's why, at the end of October, we announced updates to our modern data protection portfolio. It's now containerized and can be deployed very easily in an automated fashion, but on top of that it integrates down into the API layer, down into the CSI drivers, and allows you to take container-aware snapshots of your applications. So you can do operational recovery if there's some sort of event, you can do DR, and you can even use it for data migration. So the biggest request I'm getting from customers is: how can you help us accelerate, and how can you help us fix the problems we ran into as we tried to accelerate our digital transformation?
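The "container-aware snapshot" Sam mentions is exposed in Kubernetes through the VolumeSnapshot custom resource that CSI drivers implement. Below is a hedged sketch; it assumes the snapshot CRDs and a VolumeSnapshotClass are installed by your storage vendor's driver, and depending on cluster version the API group may be at v1 or v1beta1. The class and claim names are placeholders.

```python
# Sketch: point-in-time, container-aware snapshot of a PVC via the CSI
# snapshot API (a custom resource, hence CustomObjectsApi).
from kubernetes import client, config

def snapshot_pvc(pvc_name: str, namespace: str = "default") -> None:
    config.load_kube_config()

    snapshot = {
        "apiVersion": "snapshot.storage.k8s.io/v1",
        "kind": "VolumeSnapshot",
        "metadata": {"name": f"{pvc_name}-snap"},
        "spec": {
            "volumeSnapshotClassName": "csi-snapclass",  # assumed class name
            "source": {"persistentVolumeClaimName": pvc_name},
        },
    }
    client.CustomObjectsApi().create_namespaced_custom_object(
        group="snapshot.storage.k8s.io",
        version="v1",
        namespace=namespace,
        plural="volumesnapshots",
        body=snapshot,
    )

if __name__ == "__main__":
    snapshot_pvc("app-data")
```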
>> Brent, anything you want to highlight?

>> Ironically, one of my team was speaking with one of the cruise lines just two days ago. We all know what's happened to them, so if we use them as an example, clearly our customers need to do things differently now. So plus one to Sam's statement about acceleration, and I would add another word to that, which is agility. Frankly, they're having to do things in ways they never envisioned ten months ago. The need to cut cycle times and deploy new ways of transacting business has accelerated the pull for these infrastructure-as-code technologies.

>> That's great. The one that jumped into my mind as you were talking: we've had a lot of conversations about security, and baking security in is a theme. But ransomware is a specific type of threat, and the fact is these guys not only want to lock up your data, they want to go find the backup copies and really mess you up, so it sounds like it's even more important to keep those safe. We're hearing all these conversations about air gaps and dynamic air gaps, and whether we can set up infrastructure so those backups and recovery data sets sit in a safe place, so that if we have a ransomware issue, getting back online is realistic. It seems to be increasing every day, and in some cases you can actually break the law by paying the ransom, depending on where these people operate. Ransomware is a very specific kind of security threat that elevates business continuity and resiliency to a whole other level for this one risk factor. Are you seeing some of that as well?

>> It's a great point. It's clearly an industry that was resilient to a pandemic, because we've seen it increase. This is organized crime at this point; it isn't the old days of hackers playing around, and it is accelerating. I'm really glad you brought it up, because it's an area we've been focused on across our whole portfolio. IBM tape offers the most literal air gapping there is, physical air gapping: we can take a cartridge offline. Beyond that, we offer different types of logical air gaps, for example to a cloud. In fact, we just announced Spectrum Protect support for Google Cloud, and we already supported AWS, Azure, and IBM Cloud, so you can do logical air gapping off to those cloud environments. We give you WORM capability so you can put your backups in a vault that can't be changed. And in our high-end enterprise storage we offer something called Safeguarded Copy, where we take data offline in a way that can be recovered almost instantly, something unique to our storage that gives the fastest path to recovery for the most mission-critical applications. One thing we've seen is customers who did a great job creating a copy, but when the event actually happens they find it's going to take too long to recover the data, and they end up paying the ransom anyway. So you really have to think through an end-to-end strategy. We're able to help customers do health checks of their environment and figure out the right strategy; we have offerings to come in and do that for our customers.
>> Let me shift gears a little bit. We were at AnsibleFest earlier this year, and there was a lot of talk about automation. Ansible is obviously part of the Red Hat family, which is part of the IBM family. We're seeing more and more conversations about automation: moving the mundane and error-prone work off of people and letting people do more high-value work. Can you talk a little bit about the role of automation, how it's developing, and how you're seeing it impact your deployments?

>> You want to take that one first?

>> Sure. First, when you think about individual Kubernetes clusters, there's a level of automation that's required there; that's fundamental. Back to infrastructure as code: that's inherently automation. You declare the state you want your application or your cluster to be in, you pass that declaration to Kubernetes, and it makes it so. So there's the Kubernetes-level automation. Then there's what happens for larger enterprises when you have tens or hundreds of Kubernetes clusters. That's an area, and Jeff, you mentioned Ansible, where Red Hat is doing work in the community, and together with IBM, on multi-cluster management: automating the management of multiple clusters. The last thing I'll touch on is that this is particularly important as you go to the edge. It's all well and good when you're talking about safe, raised-floor data center environments, but when tens or hundreds or even thousands of Kubernetes clusters are running in an oil field somewhere, automation becomes not just nice to have but fundamental to the operation.

>> Let me add onto that real quick. It's funny, because in this COVID era you're starting to see that same requirement in the core data center. With fewer bodies in the data center and more people working remotely, the need for automation is accelerating there as well. So what Brent said is true for the core data center now too.
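Brent's point about declaring state and letting Kubernetes "make it so" is the heart of that base level of automation. Here is a small illustrative sketch: the desired state is three replicas of a container, handed to the API server, which then reconciles reality toward it. The image and names are placeholders for the example.

```python
# Minimal declarative example: describe the desired state of a Deployment
# and submit it; the Kubernetes control plane reconciles toward that state.
from kubernetes import client, config

def declare_desired_state(namespace: str = "default") -> None:
    config.load_kube_config()

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="web"),
        spec=client.V1DeploymentSpec(
            replicas=3,  # the declared state; Kubernetes keeps it true
            selector=client.V1LabelSelector(match_labels={"app": "web"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "web"}),
                spec=client.V1PodSpec(
                    containers=[client.V1Container(name="web", image="nginx:1.25")]
                ),
            ),
        ),
    )
    client.AppsV1Api().create_namespaced_deployment(
        namespace=namespace, body=deployment
    )

if __name__ == "__main__":
    declare_desired_state()
```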
>> Right. So I want to give you guys the last word before we close the segment. I'll start with you, Brent, from a big data perspective, because you've been involved in big data for a long time. Looking back over the data warehouse era, then the whole Hadoop era, we just keep getting more sophisticated with big data processes and applications. But at the end of the day it's still about getting the right data to the right person at the right time so they can do something about it. Can you reflect on that journey and where we are now?

>> I think I'll close with accessibility. These days, the key problem the data scientists and data engineers we work with have is accessibility and sharing of data. This was wonderfully manifest in some work we did with the province of Ontario; you can look it up, the #HowsMyFlattening project. The work with them was to get a pool of data scientists across the community in Ontario, Canada, working together to understand how to track COVID cases, so that government could make intelligent responses and policy based on the facts. That need highlights the accessibility that's required today compared with yesteryear, when it was maybe smaller groups of individual data scientists working in silos. Now it's people across an industry, and that demands accessibility as well as agility: they need to be able to spin up an environment that allows them, in this case, to develop and deploy inference models using shared data sets without going through years of design. So: accessibility, and back to the acceleration and agility that Sam talked about. I'll close with those words.

>> That's great, and consistent with the democratization we hear about over and over: getting data out of the hands of just the data scientists and into the hands of the people making frontline business decisions every day. And Sam, for your close, I'd love for you to reflect on the changing requirements for the types of workloads you now support. It's not just taking care of the data center and the relatively straightforward stuff. You've got hybrid, you've got multi-cloud, not to mention all the developments in media between tape, flash, and spinning drives. And with cloud you can apply capacity in big batches or small batches to particular workloads across all these different requirements. Can you share a little about how you're thinking about modernizing storage and moving it forward? What are your priorities, and what are you looking forward to delivering underneath all these applications? Because an application is basically data with some UI and compute on top; you guys sit underneath the whole package.

>> First of all, back to what Brent was saying: data can be the most valuable asset of an enterprise. It can give an incumbent an incredible competitive advantage if you can take advantage of that data using modern analytics and AI. And it can also be the biggest inhibitor to digital transformation if you don't figure out how to build a modern infrastructure to support access to that data and these new application deployment models. You have to think that through, and not just for your big data, which of course is extremely important and growing at an incredible pace with all this unstructured data. You also have to think about your mission-critical applications. We see a lot of people going through transformation and modernization of SAP with the move to S/4HANA. They have to think about how that fits into a multi-cloud environment, and about the lifecycle of their data as they go into these new modern environments.
And yes, tape is still a very vibrant part of that deployment. IBM has always been a leader in software-defined storage, and we have an incredible portfolio of capabilities. We're working on modernizing that software to help you automate your infrastructure while still delivering enterprise-class SLAs, because nobody is going to relax the requirement for near-perfect availability just because you're moving into a Kubernetes environment; you don't get a break on downtime. We're able to give you real enterprise-class support for doing that. One of the things we announced at the end of October is that we've containerized our Spectrum Scale client, so you can now automate the deployment of your cluster file system through Kubernetes, and you'll see more and more of that. We're offering leading, modern, container-native data protection for Kubernetes, and we'll be among the first to integrate with OCP and OpenShift Container Storage for data protection. And our FlashSystem family will continue to be on the leading edge of the curve around Ansible automation and CSI integration, which we already offer. So we'll continue to focus on that and ensure you can take advantage of our world-class storage products in your new modern environment, and, of course, give you portability between on-prem and any cloud you choose to run in.

>> Exciting times. No shortage of job security for you, gentlemen, that's for sure. All right, Brent, Sam, thanks for taking a few minutes; it's great to catch up, and congratulations again on the success.

>> Thank you. Thank you.

>> All right: he's Sam, he's Brent, I'm Jeff. You're watching theCUBE's continuing coverage of KubeCon + CloudNativeCon North America 2020. Thanks for watching. We'll see you next time.

Published Date : Nov 18 2020


Stefanie Chiras & Joe Fernandes, Red Hat | KubeCon + CloudNativeCon NA 2020


 

>> Announcer: From around the globe, it's theCUBE with coverage of KubeCon and CloudNativeCon North America 2020 Virtual, brought to you by Red Hat, the Cloud Native Computing Foundation and Ecosystem Partners.

>> Hello everyone, and welcome back to theCUBE's ongoing coverage of KubeCon North America. Joe Fernandes is here, along with Stefanie Chiras. Joe is the VP and GM for Core Cloud Platforms at Red Hat, and Stefanie is the SVP and GM of the Red Hat Enterprise Linux business unit. Two great friends of theCUBE. Awesome seeing you guys, how are you doing?

>> It's great to be here, Dave.

>> Yeah, thanks for the opportunity.

>> Hey, so we all talked recently, at AnsibleFest, which seems like a while ago, about what's new, with Red Hat really coming at it from an automation perspective. I wonder if we could take a view from OpenShift and what's new from the standpoint of helping customers change their operations and operationalize. Stefanie, maybe you could start, and then Joe, you could bring in some added color.

>> That's great. One of the things we try to do at Red Hat, clearly building off of open source, is this open hybrid cloud strategy we have been focused on for years now. The beauty of it is that open hybrid cloud continues to evolve, bringing in things like speed and stability and scale, and now adding in other footprints like managed services as well as edge, and pulling that all together across the whole Red Hat portfolio: from the platforms, certainly with Linux, and rolling into OpenShift as the platform, and then adding automation, which you certainly need for scale. It continues to evolve as the definition of open hybrid cloud evolves.

>> Great. So thank you, Stefanie. Joe, you guys have hard news here; maybe talk about 4.6?

>> Yeah, so OpenShift is our enterprise Kubernetes platform, and with this announcement we released OpenShift 4.6. We're doing releases every quarter, tracking the upstream Kubernetes release cycle, so this brings Kubernetes 1.19, which itself brings a number of new innovations. Some specific things to call out: we have a new automated installer for OpenShift on bare metal, and that's definitely a trend we're seeing, more customers not only looking at containers but looking at running containers directly on bare metal environments. OpenShift provides an abstraction which combines Kubernetes on top of Linux with RHEL CoreOS, really across all environments, from bare metal to virtualization platforms to the various public clouds and out to the edge. But we're seeing a lot of interest in bare metal, and this basically increases the level of automation to install seamlessly and manage upgrades in those environments. We're also seeing a number of other enhancements. OpenShift Service Mesh, which is our Istio-based solution for managing the interactions between microservices, being able to manage traffic to those services and do tracing, has a new release on OpenShift 4.6. And then there's some work specific to the public cloud, where we started extending into the government clouds. We already supported AWS and Azure; with this release we added support for AWS GovCloud as well,
as well as Microsoft Azure Government. Again, this is really important to our public sector customers who are looking to move to the public cloud leveraging OpenShift as an abstraction, but want us to support it on the specialized government clouds they need to use.

>> So Joe, let's stay there for a minute. On bare metal we're talking performance, because you really want to run fast, right? That's the attractiveness there. And then the point about Istio and OpenShift Service Mesh is that it makes things simpler. Maybe talk a little bit about the business impact and what customers should expect to get out of these two things.

>> So let me take them one at a time. For running on bare metal, performance is certainly a consideration. A lot of folks today are still running containers and Kubernetes on top of some form of virtualization, either a platform like vSphere or OpenStack, or VMs in one of the public clouds. But containers don't depend on a virtualization layer; containers only depend on Linux, and Linux runs great on bare metal. So as we see customers moving more toward performance- and latency-sensitive workloads, they want that bare metal performance, and running OpenShift on bare metal with their containerized applications on that platform certainly gives them that advantage. Others just want to reduce cost: reduce their VM sprawl and the infrastructure and operational cost of managing a virtualization layer beneath their Kubernetes clusters. That's another benefit, so we see a lot of uptake in OpenShift on bare metal. On the service mesh side, this is really about how we see applications evolving. Customers are moving toward distributed architectures, taking formerly monolithic enterprise applications and splitting them out into lots of different services. The challenge then becomes how you manage all those connections, because something that was a single stack is now comprised of tens or hundreds of services. You want to be able to manage traffic to those services, so if a service goes down you can redirect those requests to an alternative or failover service. Also tracing: if you're looking at performance issues, you need to know where in your architecture you're having those degradations, and so forth. Those are some of the challenges that people can overcome, or get help with, by using service mesh, which is powered by Istio.
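The traffic management Joe describes is typically expressed as an Istio-style VirtualService that splits or redirects requests between versions of a service. The sketch below is a hedged illustration, not OpenShift Service Mesh's exact packaging: it assumes the mesh's custom resources are installed, and the host, subset, and weight values are placeholders.

```python
# Sketch: weighted routing between two subsets of a service through an
# Istio VirtualService, created as a custom resource.
from kubernetes import client, config

def split_traffic(namespace: str = "bookinfo") -> None:
    config.load_kube_config()

    virtual_service = {
        "apiVersion": "networking.istio.io/v1beta1",
        "kind": "VirtualService",
        "metadata": {"name": "reviews"},
        "spec": {
            "hosts": ["reviews"],
            "http": [{
                "route": [
                    {"destination": {"host": "reviews", "subset": "v1"}, "weight": 90},
                    {"destination": {"host": "reviews", "subset": "v2"}, "weight": 10},
                ]
            }],
        },
    }
    client.CustomObjectsApi().create_namespaced_custom_object(
        group="networking.istio.io",
        version="v1beta1",
        namespace=namespace,
        plural="virtualservices",
        body=virtual_service,
    )

if __name__ == "__main__":
    split_traffic()
```

Shifting the weights toward zero for a failing subset is the "redirect to an alternative or failover service" case mentioned above.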
>> And, sorry Stefanie, I'll get to you in a minute, but one follow-up on that for Joe. The real differentiation between what you bring and what I can get in a mono cloud, for instance, is that you bring this across clouds and on-prem, and we'll talk about the edge in a minute. Is that right, from a differentiation standpoint?

>> Yeah, that's one of the key differentiations. Red Hat has been talking about hybrid cloud for a long time; we've been articulating our open hybrid cloud strategy, and even if that's not a strategy you think you have, it's ultimately where folks end up. All of our enterprise customers still have applications running in the data center, but they're also all starting to move applications out to the public cloud. As they expand their usage of public cloud, they start adopting multi-cloud strategies, because they don't want to put all their eggs in one basket. And then for certain classes of applications, they need to move those applications closer to the data, and so you start to see edge becoming part of that hybrid cloud picture. What we do is provide consistency across all those environments: run great on Amazon, but also great on Azure, on Google, on bare metal in the data center or out at the edge, and on top of your favorite virtualization platform. That consistency, taking a set of applications and running them the same way across all those environments, is one of the key benefits of going with Red Hat as your provider for open hybrid cloud solutions.

>> All right, thank you. Stefanie, let me come back to you. We talk about RHEL a lot because of the business unit you manage, but we're starting to see Red Hat's edge strategy unfold, and RHEL is really the linchpin. Can you talk about how you're thinking about the edge? I'm particularly interested in how you're handling scale, and why you feel you're in a good position to handle that massive scale and the requirements of the edge, versus "hey, we need a new OS for the edge."

>> Joe did a great job of setting it up: it does come back to our view of this open hybrid cloud story, which has always been about consistency, about the language you speak no matter where you want to run your applications. Between RHEL on my side and Joe with OpenShift, and of course the same Linux runs underneath, since RHEL CoreOS is part of OpenShift, that consistency leads to a lot of flexibility, whether through a broad ecosystem or across footprints. As we've been talking with customers about moving their applications closer to data, further out and away from their data center, some of it is about distributing your data center, getting compute closer to the data or closer to your customers. That drives some different requirements, around how you do updates and over-the-air updates. So we've been working, in typical Red Hat fashion, by looking at what's being done upstream. In the Fedora upstream community there's a lot of work that has been done in what's called the IoT special interest group, which has been investigating the requirements for this edge use case. So we're really pleased that in our most recent release, RHEL 8.3, we've put in some key capabilities driven by these edge use cases. Things like quick image generation, which matters because as you distribute, you want consistency: create a tailored image and deploy it in a consistent way that addresses scale and meets the security requirements you may have. Updates become very important when you start to spread this out, so we put in remote device mirroring, so you can put code into production and schedule it on those remote devices with minimal disruption. And as we all know now with all this virtual stuff, you often run into less-than-ideal bandwidth and intermittent connectivity with all those devices out there, so we put in the ability to use rpm-ostree to deliver efficient over-the-air updates. And then, of course, you have to do intelligent rollbacks, in case something goes wrong, so you can come back to a previous state. It's all about being able to deploy at scale in a distributed way, being ready for that use case, and having predictability and consistency. That's what we build our platforms for: predictability and consistency, which gives you the flexibility to add your innovation on top.
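The staged update and rollback flow Stefanie describes looks roughly like the sketch below, which simply shells out to the rpm-ostree CLI. It assumes an image-based (ostree) host such as a RHEL for Edge system, and the health check is a placeholder you would replace with a real probe.

```python
# Hedged sketch of an edge update cycle on an ostree-based host:
# stage the new image, then roll back if post-update checks fail.
import subprocess

def stage_update() -> None:
    # Downloads and stages the new deployment; it takes effect on the next
    # reboot, keeping the disruption window small for remote devices.
    subprocess.run(["rpm-ostree", "upgrade"], check=True)

def rollback_if_unhealthy(healthy: bool) -> None:
    # "Intelligent rollback": pivot back to the previous deployment that
    # ostree kept on disk if the device is not healthy after the update.
    if not healthy:
        subprocess.run(["rpm-ostree", "rollback", "--reboot"], check=True)

if __name__ == "__main__":
    stage_update()
    rollback_if_unhealthy(healthy=True)  # plug in a real health check here
```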
>> I'm glad you mentioned intelligent rollbacks. I learned a long time ago to always ask what happens when something goes wrong; you learn a lot from the answer to that. We talk a lot about cloud native; it sounds like you're adapting well to become edge native.

>> Yeah. We're finding that whether it's in the verticals, with very specific use cases, or in a more general enterprise edge use case, having consistency brings a ton of flexibility. It was funny: talking with a customer not too long ago, they said agility is the new version of efficiency. Having that sort of language spoken everywhere, from your core data center all the way out to the edge, gives you a lot of flexibility going forward.

>> You just mentioned cloud native, and I think people sometimes underestimate the engineering effort it takes to make all this stuff run in all the different clouds. What kind of engineering do you do with the cloud providers, if any, and of course with the balance of the ecosystem? Maybe you could describe that a little bit.

>> So Red Hat works closely with all the major cloud providers, whether that's Amazon, Azure, Google, or IBM Cloud. We're very keen on providing the best environment to run enterprise applications across all those environments, whether you're running directly on Linux with RHEL or in a containerized environment with OpenShift, which includes RHEL. Our partnership includes work we do upstream. For example, Red Hat helped Google launch the Kubernetes community, and we've been one of the top two contributors driving that project since inception. But it also extends into our hosted services. We run a jointly developed and jointly managed service called Azure Red Hat OpenShift together with Microsoft, where our joint customers can get access to OpenShift in an Azure environment as a native Azure service, meaning it's fully integrated, just like any other Azure service you can tie into as you're building, and it's sold by Microsoft's Azure sales reps. We get the benefit of working with our Microsoft counterparts in developing that service, managing it, and supporting our joint customers. Over the summer we announced a similar partnership with Amazon, and we're already doing pilots on the Amazon Red Hat OpenShift service, which is the same concept applied to the AWS cloud. That will be coming out GA later this year.
But again, whether it's working upstream or partnering on managed services: I know Stefanie's team also does a lot of work with Microsoft, for example on SQL Server on Linux and .NET on Linux. Who ever thought we'd be running those applications on Linux? But that's a few years old now. So again, it's been a great partnership, not just with Microsoft but with all the cloud providers.

>> So I think you just showed a little leg there, Joe: what's coming GA later this year? I want to circle back to that.

>> Yeah, we announced a preview earlier this year of the Amazon Red Hat OpenShift service. It's not generally available yet; we're taking customers for early access and pilots, and then it'll be generally available later this year. Red Hat does already operate our own service, OpenShift Dedicated, which is available on AWS today, but that's a service operated solely by Red Hat. This new service will be jointly operated by Red Hat and Amazon together, a service we deliver together as partners.

>> As a managed service, and okay, so that's in beta now. I presume if it's going GA it's probably running on bare metal?

>> It's running on EC2. OpenShift does offer bare metal cloud, and we do have customers who take the OpenShift software and deploy it there, but right now our managed offering runs on top of EC2 and on top of Azure VMs. Again, this appeals to customers who like what we bring in terms of an enterprise Kubernetes platform but don't want to operate it themselves. It's a fully managed service: you just come build and deploy your apps, and we manage all of the infrastructure and the underlying platform for you.

>> That's going to explode, that's my prediction. Let's take the hard example of security. I'm interested in how you ensure a consistent security experience across all these locations: on-prem, cloud, multiple clouds, the edge. Maybe you could talk about that. And Stefanie, I'm sure you have a perspective on this as well from the standpoint of RHEL. Who wants to start?

>> Maybe I'll start from the bottom and then pass it over to Joe. Security is clearly top of mind for all customers, and it does start at the very bottom, with the base selection of your OS. We continue to drive SELinux capabilities into RHEL to provide that foundational layer, and as we run RHEL CoreOS under OpenShift we bring that SELinux capability over as well. There's a whole lot of ways we tackle this. We've done a lot around our policies for CVE updates in RHEL, making sure we continue to commit to mitigating all critical and important CVEs and providing better transparency into how we assess them. So security is certainly top of mind for us. And then as we move forward, and Joe can talk about the security work in containerization as well,
we work all the way from the base up to things like easy-to-build images, which are tailored so you can make them smaller, with less surface area to secure. Security is one of those things that's a lifestyle: you have to look at it all the way from the base of the operating system, with things like SELinux, to how you build your images, where we've added new capabilities, and then of course in containers, where there's a whole focus in OpenShift around container security.

>> Joe, anything you want to add to that?

>> Yeah, sure. Linux is the foundation for all the public clouds, and it's driving enterprise applications in the data center, and part of keeping those applications secure is keeping them up to date. Through RHEL we provide a secure, up-to-date foundation, as Stefanie mentioned. As you move into OpenShift, you're also able to take advantage of immutability: the application you're deploying is an immutable unit that you build once as a container image and then deploy out to all your various environments. When you have to do an update, you don't go and patch all those environments; you build a new image that includes those updates and then roll those images out, and as you mentioned, you can go back if there are issues. The notion of immutable application deployments has a lot to do with security, and it's enabled by containers, and then obviously you have Kubernetes and all the rest of our capabilities as part of OpenShift managing that for you. We've extended that concept to the entire platform. Stefanie mentioned RHEL CoreOS: OpenShift has always run on RHEL, and what we've done in OpenShift 4 is taken an immutable version of RHEL. It's the same Red Hat Enterprise Linux we've had for years, but there's a new way to package and deploy it, as a RHEL CoreOS image, and that becomes part of the platform. So when customers need to keep their platform up to date, with the latest Kubernetes patches and the latest Linux packages, we deliver that as one platform: updates for OpenShift can include updates for Kubernetes and for Linux itself, as well as all the integrated services. And again, this is how you keep your applications secure: taking care of that hygiene, managing your vulnerabilities, keeping everything patched and up to date, and ultimately ensuring security for your applications and users.
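The immutable pattern Joe outlines, building a new image and rolling it out rather than patching in place, can be sketched as a small rolling update against a Deployment. The deployment and image names below are placeholders; the point is that only the pod template's image reference changes, and the previous ReplicaSet remains available for rollback.

```python
# Sketch: roll a Deployment forward to a newly built immutable image.
from kubernetes import client, config

def roll_forward(new_image: str, name: str = "web",
                 namespace: str = "default") -> None:
    config.load_kube_config()

    patch = {
        "spec": {
            "template": {
                "spec": {
                    "containers": [{"name": name, "image": new_image}]
                }
            }
        }
    }
    # Changing the pod template triggers a rolling update; the prior
    # ReplicaSet is retained, so the rollout can be undone if needed.
    client.AppsV1Api().patch_namespaced_deployment(
        name=name, namespace=namespace, body=patch
    )

if __name__ == "__main__":
    roll_forward("registry.example.com/web:1.0.1")
```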
>> I know I'm going a little bit over, but I have one broad question about trends you see in the business. We talk a lot about cloud native, and the interest in Kubernetes is off the charts; it's an area with a lot of spending momentum, and people are putting resources behind it. Building these sorts of modern applications is considered state of the art, and you see a lot of people trying to bring that modern approach to any cloud, to the edge, and on-prem as well. But people generally associate this notion of cloud native with elite developers, and you're bringing it to the masses. There are 20 million-plus software developers out there, and with all due respect, most of them may not be the elite of the elite. How do you see this evolving in terms of re-skilling people to be able to handle and take advantage of all this cool new stuff that's coming out?

>> Yeah, I can start. Our focus with OpenShift from the beginning has been bringing Kubernetes to the enterprise, so we think of OpenShift as the dominant enterprise Kubernetes platform. Enterprises come in all shapes and sizes and skill sets, as you mentioned, and they have unique requirements for how they run things in their data center and bring them to production, whether that's in the data center or across the public clouds. So part of it is making sure the technology meets those requirements, and part of it is working on the people, process, and culture: helping them understand what it means to take advantage of containerization and cloud native platforms and Kubernetes. Of course, this is nothing new for Red Hat. It's what we did twenty years ago when we first brought Linux to the enterprise with RHEL, and in essence Kubernetes is basically distributed Linux: Kubernetes builds on Linux and brings it out to your cluster, to your distributed systems across the hybrid cloud. So nothing new for Red Hat, but a lot of the same challenges apply in this new cloud native world.

>> Awesome. Stefanie, we'll give you the last word.

>> Just to touch on what Joe talked about, and Joe and I work really closely on this: the ability to run containers is where someone launches down this path, because what can be done deploying applications with container technology is magical. We built the capabilities and the tools to build and deploy containers directly into RHEL, leveraging things like Podman. So everyone who has a RHEL subscription today can start their container journey, start to build and deploy, and then we work to make those skills transferable as you move into OpenShift and Kubernetes and orchestration. We work very closely to make sure the skills-building can be done directly on RHEL and then transfer into OpenShift, because as Joe said, at the end of the day it's just a different way to deploy Linux.

>> You guys are doing some good work. Keep it up, and thanks so much for coming back on. TheCUBE is great to talk to you today.

>> Good to see you, Dave.

>> Yes, thank you.

>> All right, thank you for watching, everybody. TheCUBE's coverage of KubeCon NA continues right after this.

Published Date : Nov 18 2020


Stephen Augustus, VMware and Priyanka Sharma, CNCF | KubeCon + CloudNativeCon NA 2020


 

>> Voiceover: From around the globe, it's theCUBE, with coverage of Kubecon and CloudNativeCon, North America, 2020, virtual brought to you by Red Hat, the Cloud Native Computing Foundation and Ecosystem Partners. >> Welcome back to theCUBE's coverage, virtual coverage of Kubecon and CloudNativeCon 2020. We're not in person this year, normally we're there in person. We have to do remote because of the pandemic, but hey, it opens up more conversations. And this is theCUBE virtual. I'm John Furrier, your host. And you'll see a lot of interviews. We've got some great guests, Talking to the leaders, the developers, the end users, as well as the vendors with the CNCF, we got two great guests, Priyanka Sharma, the General Manager of the CNCF, great to see you and Stephen Augustus OSS Engineer at VMware. He's also the KubeCon co-chair back on the cube. Thanks for coming on folks. I appreciate it. >> Thank you for having us. So, thanks for coming on, actually, remote and virtual. We're doing a lot of interviews, we're getting some perspectives, people are chatting in Slack. It's still got the hallway vibe feel, a lot of talks, a lot of action, keynotes happening, but I think the big story for me, and I would like to talk about, I want to get your perspective is this new working group that's out there. So I know there's some news around it. Could you take a minute to explain kind of what this is all about? >> Sure. I'll give a little bit of context for those who may have missed my keynote which... very bad. (Priyanka laughs) As I announced, I'm so proud to be working with the likes of Stephen Augustus here, and a bunch of other folks from different companies, different open source projects, et cetera, to bring inclusive naming to code. I think it's been a forever issue. Quite frankly. We've had many problematic terms in software out there. The most obvious one being master-slave. That really shouldn't be there. That have no place in an inclusive world, inclusive software, inclusive community with the help of amazing people like Stephen, folks from IBM, Red Hat, and many, many others. We came together because while there's a lot of positive enthusiasm and excitement for people to make the changes that are necessary to make the community welcome for all, there's a lot of different work streams happening. And we really wanted to make sure there is a centralized place for guidelines and discussion for everybody in a very non...pan-organizational kind of way. And so that's the working group that John is talking about. With that said, Stephen, I think you can do the best justice to speak to the overall initiative. >> Yeah, absolutely. So I think that's to Priyanka's point, there are lots of people who are interested in this work and again, lots of work where this is already happening, which is very exciting to say, but as any good engineer, I think that's it's important to not duplicate your work. It's important to recognize the efforts that are happening elsewhere and work towards bringing people together. So part of this is providing, being able to provide a forum for discussion for a variety of companies, for a variety of associations that... and foundations that are involved in inclusive naming efforts. And then to also provide a framework for walking people through how we evaluate language and how we make these kinds of changes. 
As an example, for Kubernetes, we started off the Kubernetes working group naming and the hope for the working group naming was that it was going to evolve into hopefully an effort like this, where we could bring a lot of people on and not just talk about Kubernetes. So since we formed that back in, I want to say, June-ish, we've done some work on about of providing a language evaluation framework, providing templates for recommendations, providing a workflow for moving from just a suggestion into kind of actuating those ideas right and removing that language where it gets tricky and code is thinking about, thinking about, say a Kubernetes API. And in fact that we have API deprecation policies. And that's something that we have to if offensive language is in one of our APIs, we have to work through our deprecation policy to get that done. So lots of moving parts, I'm very excited about the overall effort. >> Yeah, I mean, your mind can explode if you just think about all the complications involved, but I think this is super important. I think the world has voted on this, I think it's pretty obvious and Priyanka, you hit some of the key top-line points, inclusive software. This is kind of the high order bit, but when you get down to it, it's hard as hell to do, because if you want to get ne new namings and/or changing namings accepted by the community and code owners, you're dealing with two things, a polarizing environment around the world today, and two, the hassles involved, which includes duplicate efforts. So you've got kind of a juggling act going on between two forces. So it's a hard problem. So how are you tackling this? Because it's certainly the right thing to do. There's no debate there. How do you make it happen? How do you go in without kind of blowing things up, if you will? And do it in a way that's elegant and clean and accept it. 'Cause that's the... end of the day, it's acceptance and putting it code owners. >> Absolutely. I think so, as you said, we live in a polarizing environment right now. Most of us here though know that this is the right thing to do. Team CloudNative is for everyone. And that is the biggest takeaway I hope people get from our work in this initiative. Open source belongs to everybody and it was built for the problems of today. That's why I've been working on this. Now, when it goes into actual execution, as you said, there are many moving parts, Stephen and the Kubernetes working group, is our shining example and a really good blueprint for many folks to utilize. In addition to that, we have to bring in diverse organizations. It's not just open source projects. It's not just companies. It's also standards organizations. It's also folks who think about language in books, who have literally done PhDs in this subject. And then there are folks who are really struggling through making the changes today and tomorrow and giving them hope and excitement. So that at the end of this journey, not only do you know you've done the right thing, but you'd be recognized for it. And more people will be encouraged by your own experience. So we and the LF have been thinking at it from a holistic perspective, let's bring in the standards bodies, let's bring in the vendors, let's bring in the open source projects, give them guidelines and blueprints that we are lucky that our projects are able to generate, combine it with learnings from other people, because many people are doing great work so that there is one cohesive place where people can go and learn from each other. 
Eventually, what we hope to do is also have like a recognition program so that it's like, hey, this open source project did this. They are now certified X or there's like an awards program. They're still figuring that piece out, but more to come on that space. That's my part. But Stephen can tell you about all the heavy lifting that they've been doing. >> Before we get to Steve, I just want to say congratulations to you. That's great leadership. And I think you're taking a pragmatic approach and you putting the stake in the ground. And that's the number one thing, and I want to take my hat off to you guys and Priyanka, thank you for that leadership. All right, Stephen, let's talk about how this gets done because you guys open sources is what it's all about is about the people, it's about building on the successes of others, standing on the shoulders of others, you guys are used to sitting in rooms now virtually and squabbling over things like, code reviews and you got governing bodies. This is not a new thing in collaboration. So this is also a collaboration test. What are you seeing as the playbook to get this going? Can you share your insights into what the Kubernetes group's doing and how you see this. What are the few first few steps you see happening? So people can either understand it, understand the context and get involved? >> So I think it comes down to a lot of it is scope, right? So as a new contributor, as a current contributor, maybe you are one of those language experts, that is interested in getting involved as a co-chair myself for SIG Release. A lot of the things that we do, we have to consider scope. If we make this change, how is it affecting an end user? And maybe you work in contributor experience. Maybe you work in release, maybe you work in architecture. But you may not have the entire scope that you need to make a change. So I think that first it's amazing to see all of the thought that has gone through making certain changes, like discussing master and slave, discussing how we name control plane members, doing the... having the discussion around a whitelist and blacklist. What's hard about it is, is when people start making those changes. We've already seen several instances of an invigorated contributor, and maybe the new contributor coming in and starting to kind of like search and replace words. And it... I wish it was that simple, it's a discussion that has to be heard, you need buy-in from the code owners, if it's an API that you're touching, it's a conversation that you need to have with the SIG Architecture, as well as say SIG Docs. If it's something that's happening in Release, then it's a easier 'cause you can come and talk to me, but, overall I think it's getting people to the point where they can clearly understand how a change affects the community. So we kind of in this language evaluation framework, we have this idea of like first, second and third order concerns. And as you go through those concerns, there are like diminishing impacts of potential harm that a piece of language might be causing to people. So first order concerns are the ones that we want to eliminate immediately. And the ones that we commonly hear this discussion framed around. So master-slave and whitelist, blacklist. So those are ones that we know that are kind of like on the track to be removed. The next portion of that it's kind of like understanding what it means to provide a recommendation and who actually approves the recommendation. 
Because this group is, well, we have several language aficionados in this group, but we are by no means experts. And we also want to make sure that we do not make decisions entirely for the community. So discussing that workflow for turning a recommendation into an actual solution is something that we would also do with the steering committee, Kubernetes' top governing body. Making sure that the decision is made at the top level and filtered out to all of the places where people may own code or documentation around it is, I think, really the biggest thing. And having a framework that makes it easy to do those evaluations is what we've been craving and now have. >> Well, congratulations. That's awesome. I think it's always easier said than done. I mean, when you have systems and code, there are always consequences in systems architecture, and you know that from doing large-scale OSS. You guys know what that means. And I think the low hanging fruit, obviously master, slave, blacklist, whitelist, that's just got to get done. I mean, to me, if that just doesn't get done, that's just like a stake in the ground that must happen. But I think this idea of it takes a village kind of is at play here. People just buy into it. So there's a little bit of a PR thing going on too, to get buy-in. This is again a classic case of getting people on board, Priyanka, isn't it? It's the obvious, and then there's like, okay, let's just do this. And then what's the framework? What's the process? What's the scope? >> Yeah, absolutely agree. And many people are midway through the journey. That's one of the big challenges. Some people are at different phases of the journey, and that was one of the big reasons we started this working group, because we want to be able to provide a place of conversation for people at different stages, so we get aligned now rather than a year later, when everybody has their own terms as replacements and nothing works. And maybe the downstream projects that are affected, who knows, right? It can go pretty bad. And it's very complex in large-scale open source, or anywhere in large software. And so because team CloudNative belongs to everyone, because open source belongs to everyone, we've got to get people on the same page. For those who are eager to learn more, as I said in my keynote, please do join the two sessions that we have planned. One, which is about inclusive naming in general, is an hour and a half session happening on Thursday, I'm pretty sure. And there we will talk about all the various parties who are involved. Everybody will have a seat at the table, and we'll have documentation and a presentation to share on how we recommend we all move together as an ecosystem. And then the second is a presentation by Celeste in the Kubernetes working group about how Kubernetes specifically has done naming. And I feel like Stephen, you and your peers have done such amazing work that many can benefit from it. >> Well, I think engineers, you've got two things working for you, which is one, it's a mission. There's certainly societal benefit to this, code is for the people. Love that, that's always been the marching orders. But also engineers are efficient. If you have duplicate efforts, I mean, you think about people just doing it on their own; why not do it now, do it together, more efficiently, fixing bugs over stuff you could have solved now.
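As a concrete illustration of the deprecation mechanics Stephen describes, and of the control plane naming he alludes to, consider Kubernetes' own rename of the "master" node role to "control-plane": for several releases, workloads that must run on those nodes tolerate both the old and the new taint while the old name works its way through the deprecation policy. A minimal sketch follows; the DaemonSet name and image are hypothetical.

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent
spec:
  selector:
    matchLabels:
      app: node-agent
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      tolerations:
      # Legacy taint, tolerated only until its deprecation window closes.
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      # Replacement taint introduced by the rename.
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
      containers:
      - name: agent
        image: registry.example.com/node-agent:latest
```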
I mean, this is a huge issue, so I totally believe it. I know we've got to go, but I want to get the news. Priyanka, you guys had some new stuff coming out from the CNCF: new things, a survey, certifications, all kinds of new reports. Give us the quick highlights on the news. >> Yes, absolutely. So much news, so many talking points. Well, and that's a good thing. Why? Because the CloudNative ecosystem is thriving. There are so many people doing so much awesome stuff that I have a lot to share with you. And what does that tell us about our spirit? It tells you about the spirit of resilience. You heard about that briefly in the conversation we just had with Stephen about our working group to align various parties and initiatives together, to bring inclusive naming to code. It's about resilience because we did not get demoralized. We did not say, "oh, it's a pandemic. I can't meet anyone. So this isn't happening." No, we kept going. And that is happening in inclusive naming, that is happening in the CloudNative series we're doing, that's happening in the new members that are joining. As you may have seen, Volcano Engine just joined as a platinum member, and that's super exciting. They come from China. They're part of the larger organization that builds TikTok, which is pretty cool; as a frequent user I can say that. In addition, on a more serious note, security is really key, and as I was talking to someone just minutes ago, security is not something that's a fad. Security is something that, as we keep innovating, as cloud native keeps being the ground zero for all future innovation, keeps evolving. The problems keep getting more complex and we have to keep solving them. So in that spirit, we in CNCF see it as our job, our duty, to enable the ecosystem to be better conversant in the security needs of our code. So to that end, we are launching the CKS program, the Certified Kubernetes Security Specialist certification. It's been in the works for a while, as many of you may know, and today we are able to accept registrations. So that's a really exciting piece of news. I recommend you go ahead and do that; as part of the KubeCon registration folks have a discount to get started, and I think they should do it now because, as I said, the security problems keep getting worse, keep getting more complicated. And this is a great baseline for folks to start with when they are thinking about this. It's also a great boon for any company out there, whether they're end users or vendors, and it's sometimes a blurry line between the two, which is all healthy. Everybody needs developers who are security conversant, I would say, and this certification helps you achieve that. So send all of your people to go take it. So that's sort of the announcements. Then other things I would like to share are... sorry, were you saying something? >> No. Go ahead. >> No, as you know, we talked about the whole thing of team CloudNative is for everyone, open source is for everyone. And I'm really proud that CNCF has offered over 1000 diversity scholarships since 2016 to traditionally underrepresented or marginalized groups. And I think that is so nice, but it's just the very, very beginning. As we grow into 2021, you will see more and more of these initiatives. Every member I talked to was so excited that we put our money where our mouth is, and we support people with scholarships, mentorships, and this is only going to grow. And almost 17% of the CNCF mentors in our program are women.
So for folks who are looking for that inspiration, for folks who want to see someone who looks like them in these places, they have more diverse people to look up to. And so overall, I think our DEI focus is something I'm very proud of and something you may hear about in other news items. And then finally, I would like to say that CloudNative continues to grow. The cloud native wave is strong. Team CloudNative 2.0 is going very well. In the CloudNative annual survey 2020, we found an astonishing number of places where CloudNative technologies are in production. You heard some of the stories I told in my keynote of people using multiple CNCF projects together. These are amazing, and they're end users who have this running in production. So our ecosystem has matured. And today I can tell you that Kubernetes is used in production by about 83% of the places out there, up five points from 78% last year. There's just so much strength in this ecosystem. I mean, now 92% of people are using containers. So at this point we are ubiquitous. And as you've heard from us at various times, our 70 plus project portfolio shows that we are the ground zero of innovation in cloud native. So if you asked me to summarize the news: number one, team CloudNative and open source are for everyone. Number two, we take pride in our diversity, and over 1000 scholarships have been given out since 2016 to recipients from underrepresented groups. Number three, this is the home base for innovation, with 83% of folks using Kubernetes in production and 70 plus projects that deliver a wide variety of support to enterprises as they modernize their software and utilize containers. >> Awesome. That was a great summary. First of all, you're a great host. You should be hosting theCUBE with us. Great keynote, love the virtual events that you guys have been doing, love the innovation. I would just say, from my perspective and being there from the beginning, it's always been inclusive, and the experience of the events and the community has been top-notch. People squabble, people talk, people have conversations, but at the end of the day, it is a great community, and it's fun, memorable, and people are accepting. It's a great job. Stephen, good job as co-chair this year. Well done. Congratulations. >> Thank you very much. >> Okay. Thanks for coming on, I appreciate it. >> Take it easy. >> Okay, this is theCUBE virtual. We wish we were there in person, but we're not, we're remote. This is the virtual CUBE. I'm John Furrier, your host. Thanks for watching. (upbeat music)

Published Date : Nov 18 2020


Vijoy Pandey, Cisco | KubeCon + CloudNativeCon Europe 2020 - Virtual


 

>> From around the globe, it's theCUBE with coverage of KubeCon and CloudNativeCon Europe 2020 Virtual, brought to you by Red Hat, the Cloud Native Computing Foundation, and Ecosystem Partners. >> Hi and welcome back to theCUBE's coverage of KubeCon, CloudNativeCon 2020 in Europe, of course the virtual edition. I'm Stu Miniman and happy to welcome back to the program one of the keynote speakers, he's also a board member of the CNCF, Vijoy Pandey, who is the vice president and chief technology officer for Cloud at Cisco. Vijoy, nice to see you and thanks so much for joining us. >> Thank you Stu, and nice to see you again. It's a strange setting to be in, but as long as we are both healthy, everything is good. >> Yeah, we still get to be together a little bit even though we're apart. We love the engagement and interaction that we normally get through the community, but we just have to do it a little bit differently this year. So we're going to get to your keynote. We've had you on the program to talk about "Network, Please Evolve", been watching that journey. But why don't we start first: you've had a little bit of change in roles and responsibility. I know there's been some restructuring at Cisco since the last time we got together. So give us the update on your role. >> Yeah, so let's start there. I've taken on a new responsibility. It's VP of Engineering and Research for a new group that's been formed at Cisco. It's called Emerging Tech and Incubation. Liz Centoni leads that and she reports into Chuck. The charter for this new team is to incubate the next bets for Cisco. And, if you can imagine, it's natural for Cisco to start with bets which are closer to its core business, but the charter for this group is to move further and further out from Cisco's core business and take this core into newer markets, into newer products, and newer businesses. I am running the engineering and research for that group. And, again, the whole deal behind this is to be a little bit nimble, to be a little startupy in nature, where you bring ideas, you incubate them, you iterate pretty fast and you throw out 80% of those and concentrate on the 20% that make sense to take forward as a venture. >> Interesting. So it reminds me a little bit, but different, I remember John Chambers a number of years back talking about various adjacencies, trying to grow those next multi-billion dollar businesses inside Cisco. In some ways, Vijoy, it reminds me a little bit of your previous company, very well known for driving innovation, giving engineering 20% of their time to work on things. Give us a little bit of insight. What's an example of a bet that you might be looking at in the space? Bring us inside a little bit. >> Well that's actually a good question, and a little bit of that comparison is part of the conversations taking place within Cisco as well, as to how far out from Cisco's core business we want to get when we're incubating these bets. And, yes, my previous employer, I mean Google X, actually goes pretty far out when it comes to incubations. The core business being primarily around ads, now Google Cloud as well, but you have things like Verily and Calico and others which are pretty far out from where Google started. And the way we are looking at these things within Cisco is, it's a new muscle for Cisco, so we want to prove ourselves first.
So the first few bets that we are betting on are pretty close to Cisco's core, but still not fitting into a Cisco BU when it comes to go-to-market alignment or business alignment. So the first bets that we are taking on are around the API being the queen when it comes to the future of infrastructure, so to speak. So it's not just making our infrastructure consumable as infrastructure as code, but also talking about developer relevance, talking about how developers are actually influencing infrastructure deployments. So if you think about the problem statement in that sense, then networking needs to evolve. And I talked a lot about this in the past couple of keynotes, where Cisco's core business has been around connecting and securing physical endpoints, physical I/O endpoints, whatever they happen to be, of whatever type they happen to be. And two of the bets that we are going after are around connecting and securing API endpoints, wherever they happen to be, of whatever type they happen to be. And so API networking, or app networking, is one big bet that we're going after. Our other big bet is around API security, and that has a bunch of other connotations to it, where we think about security moving from runtime security, where traditionally Cisco has played in that space, especially on the infrastructure side, into API security, which sits earlier in the developer pipeline and higher up in the stack. So those are two big bets that we're going after, and as you can see, they're pretty close to Cisco's core business but also very differentiated from where Cisco is today. And once you prove some of these bets out, you can walk further and further away, or a few degrees away, from Cisco's core as it exists today. >> All right, well Vijoy, I mentioned you're also on the board for the CNCF, so maybe let's talk a little bit about open source. How does that play into what you're looking at for emerging technologies and these bets? For so many companies, that's an integral piece, and we've watched, really, the maturation of Cisco's journey, participating in these open source environments. So help us tie in where Cisco is when it comes to open source. >> So, yeah, I think we've been pretty deeply involved in open source in our past. We've been deeply involved in Linux Foundation Networking. We've actually chartered FD.io as a project there, and we still are involved. We've been involved in OpenStack. We are big supporters of OpenStack. We have a couple of products that are built on the OpenStack offering. And as you all know, we've been involved in CNCF right from the get go as a foundational member. We brought NSM in as a project. It's sandbox currently. We're hoping to move it forward. But even beyond that, we are big users of open source. A lot of the SaaS offerings that we have from Cisco, and you would not know this if you're not inside of Cisco, but Webex, for example, is a big, big user of Linkerd, right from the get go, from version 1.0. But we don't talk about it, which is sad. We use Kubernetes pretty deeply in our DNAC platform on the enterprise side. We use Kubernetes very deeply in our security platforms. So we are pretty deep users internally in all our SaaS products. But we want to press the accelerator and accelerate this whole journey towards open source quite a bit moving forward as part of ET&I, Emerging Tech and Incubation, as well.
So you will see more of us in open source forums, not just the CNCF, but very recently we joined the Linux Foundation for Public Health as a premier foundational member. Dan Kohn, our old friend, is actually chartering that initiative, and we are big believers in handling data in ethical and privacy preserving ways. So that's something that enticed us to join Linux Foundation for Public Health, and we will be working very closely with Dan and the foundational companies there to not just bring open source, but also evangelize and use what comes out of that forum. >> All right. Well, Vijoy, I think it's time for us to dig into your keynote. We've spoken with you in previous KubeCons about the "Network, Please Evolve" theme that you've been driving, and a big focus you talked about was SD-WAN. Of course anybody that's been watching the industry has watched the real ascension of SD-WAN. We've called it one of those critical foundational pieces of companies enabling multicloud, so help explain to our audience a little bit: what do you mean when you talk about things like CloudNative SD-WAN, and how does that help people really enable their applications in the modern environment? >> Yeah, so we've been talking about SD-WAN for a while. I mean, it's one of the transformational technologies of our time, where prior to SD-WAN existing, you had to stitch all of these MPLS labels and actual data connectivity across to your enterprise or branch, and SD-WAN came in and changed the game there. But I think SD-WAN as it exists today is application-unaware. And that's one of the big things that I talk about in my keynote. Also, on the other side of the spectrum, we've talked about how NSM, or Network Service Mesh, has actually helped us simplify operational complexity, simplify the ticketing and process hell that any developer needs to go through just to get a multicloud, multicluster app up and running. So the keynote talks about bringing those two things together. We've talked about using NSM in the past, in chapter one and chapter two; this is chapter three, and at some point I would like to stop the chapters. I don't want this to be like an encyclopedia of networking. But we are at chapter three, and we are talking about how you can take the same consumption model that I talked about in chapter two, which is just adding a simple annotation in your CRD, and extend that notion of multicloud, multicluster wires within the components of your application all the way down to the user in an enterprise. And as you saw in the example, Gavin Russom is trying to give a keynote holographically, and he's suffering from SD-WAN being application-unaware. Using this construct of a simple annotation, we can actually make SD-WAN CloudNative. We can make it application-aware, and we can guarantee the SLOs that Gavin is looking for in terms of 3D video, in terms of file access or audio, just to make sure that he's successful and Ross doesn't come in and take his place. >> Well, I expect Gavin will do something to mess things up on his own even if the technology works flawlessly. You know, Vijoy, the modernization journey that customers are on is a neverending story. I understand the chapters need to end on the current volume that you're working on. But we'd love to get your viewpoint. You talk about things like service mesh.
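For readers who want a concrete picture of the annotation-driven consumption model Vijoy describes, a minimal sketch of a workload requesting a named network service through a pod annotation follows. The annotation key, service name, and image are illustrative assumptions in the spirit of Network Service Mesh, not the exact syntax of Cisco's CloudNative SD-WAN work.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: holo-frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: holo-frontend
  template:
    metadata:
      labels:
        app: holo-frontend
      annotations:
        # Hypothetical: ask the platform to wire this pod into a named
        # network service (for example, an SD-WAN segment with video SLOs).
        ns.networkservicemesh.io: "sdwan-video-slo"
    spec:
      containers:
      - name: frontend
        image: registry.example.com/holo-frontend:latest
```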
It's definitely been a hot topic of conversation for the last couple of years. What are you hearing from your customers? What are some of the real challenges, but also opportunities, that they see in today's CloudNative space? >> In general, service meshes are here to stay. In fact, they're here to proliferate to some degree, and we are seeing a lot of that happening. Not only are we seeing different service meshes coming into the picture through various open source mechanisms — you've got Istio there, you've got Linkerd, you've got various proprietary notions around control planes like App Mesh from Amazon, there's Consul, which is an open source project but not part of (mumbles) today — so there's a whole bunch of service meshes in terms of control planes coming in, with Envoy becoming a de facto sidecar data plane, whatever you would like to call it, a de facto standard there, which is good for the community, I would say. But this proliferation of control planes is actually a problem. And I see customers actually deploying a multitude of service meshes in their environment. And that's here to stay. In fact, we are seeing a whole bunch of things that we would have used different tools for, like API gateways in the past, and those functions are actually rolling into service meshes. So I think service meshes are here to stay. I think the diversity of service meshes is here to stay. And so some work has to be done in bringing these things together, and that's something that we are trying to focus in on as well, because that's something that our customers are asking for. >> Yeah, actually you connected for me something I wanted to get your viewpoint on. Dial back 10, 15 years ago and everybody would say, "Ah, you know, I really want to have a single pane of glass to be able to manage everything." Cisco's partnering with all of the major cloud providers. I saw, not that long before this event, Google had their Google Cloud show talking about the partnership that Cisco has with Google. They have Anthos. You look at Azure, which has Arc. VMware has Tanzu. Everybody's talking about this multicluster management type of solution out there. And I just want to get your viewpoint on this, Vijoy: how are we doing on the management plane, and what do you think we need to do as an industry as a whole to make things better for customers? >> Yeah, I think this is where we need to be careful as an industry, as a community, and make things simpler for our customers, because, like I said, the proliferation of all of these control planes begs the question, do we need to build something else to bring all of these things together? And I think the SMI proposal from Microsoft is bang on on that front, where you're trying to unify at least the consumption model around how you consume these service meshes. But it's not just a question of service meshes. As you saw in the SD-WAN discussion, and also going back to the Google conference that we just referenced, it's also how SD-WANs are going to interoperate with the services that exist within these cloud silos to some degree. And how does that happen? And there was a teaser that you saw earlier in the keynote, where we are taking those constructs that we talked about at the Google conference and bringing it all the way to a CloudNative environment in the keynote.
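The SMI effort Vijoy mentions defines a small set of mesh-neutral APIs. As a rough illustration of that unified consumption model, here is a minimal TrafficSplit sketch; the API version shown is one of the early alpha revisions, and the service names are hypothetical.

```yaml
apiVersion: split.smi-spec.io/v1alpha2
kind: TrafficSplit
metadata:
  name: checkout-rollout
spec:
  # Root service that clients address; the mesh implementing SMI does the rest.
  service: checkout
  backends:
  - service: checkout-v1
    weight: 90
  - service: checkout-v2
    weight: 10
```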
But I think the bigger problem here is how do we manage this complexity of disparate stacks, whether it's service meshes, whether it's development stacks, or whether it's SD-WAN deployments. How do we manage that complexity? And single pane of glass is overloaded as a term, because it brings in these notions of big, monolithic panes of glass, and I think that's not the way we should be solving it. We should be solving it using API simplicity and API interoperability. I think that's where we as a community need to go. >> Absolutely. Well, Vijoy, as you said, the API economy should be able to help on these, and the multi-service architecture should allow things to be more flexible and give me the visibility I need without having to build something that's completely monolithic. Vijoy, thanks so much for joining. Looking forward to hearing more about the big bets coming out of Cisco, and congratulations on the new role. >> Thank you Stu. It was a pleasure to be here. >> All right, and stay tuned for much more coverage of theCUBE at KubeCon, CloudNativeCon. I'm Stu Miniman and thanks for watching. (light digital music)

Published Date : Aug 18 2020


Sam Werner, IBM & Brent Compton, Red Hat | KubeCon + CloudNativeCon Europe 2020 – Virtual


 

>> From around the globe, it's theCUBE with coverage of KubeCon and CloudNativeCon Europe 2020 Virtual, brought to you by Red Hat, the Cloud Native Computing Foundation and its Ecosystem Partners. >> And welcome back to theCUBE's coverage of the KubeCon CloudNativeCon Europe 2020 virtual event. I'm Stu Miniman, and happy to welcome back to the program two of our CUBE alumni. We're going to be talking about storage in this Kubernetes and container world. First of all, we have Sam Werner. He is the vice president of storage offering management at IBM, and joining him is Brent Compton, senior director of storage and data architecture at Red Hat. Sam and Brent, thank you for joining us, and we get to really dig into the combined IBM and Red Hat activity in this space. Of course, both companies were very active in this space before the acquisition, and so we're excited to hear about what's going on going forward. Sam, maybe we could start with you as the tee up. Both Red Hat and IBM have had their conferences this year. We've heard quite a bit about the solutions Red Hat has offered, and the open source activity is really a foundational layer for much of what IBM is doing when it comes to storage. What does that mean today? >> First of all, I'm really excited to be virtually at KubeCon this year, and I'm also really excited to be with my colleague Brent from Red Hat. This is, I think, the first time that IBM Storage and Red Hat Storage have been able to get together and really articulate what we're doing to help our customers in the context of Kubernetes, and also with OpenShift, the things we're doing there. So I think you'll find, as we talk today, that there's a lot of work we're doing to bring together the core capabilities of IBM storage that have been helping enterprises with their core applications for years, alongside the incredible open source capabilities being developed by Red Hat, and how we can bring those together to help customers continue moving forward with their initiatives around Kubernetes and rebuilding their applications to be develop once, deploy anywhere, which runs into quite a few challenges for storage. So, Brent, I'm excited to talk about all the great things we're doing, and excited about getting to share it with everybody at KubeCon. >> Yes. So of course, containers, when they first came out, were for stateless environments, and we knew we'd seen this before. Those of us that lived through that wave of virtualization know you kind of have a first generation solution for what applications and what environments can be used. But as we've seen the huge explosion of containers and Kubernetes, there's going to be a maturation of the stack. Storage is a critical component of that. So maybe upfront, if you could bring us up to speed; you're steeped in a long history in this space. What are the challenges that you're hearing from customers, and where are we today in 2020? >> Thanks, Stu. The most basic apps out there, I think, are just traditional database apps: apps that have databases like Postgres, longstanding apps out there that have databases like Db2, so traditional apps that are moving towards a more agile environment. That's where we've seen, in fact, our collaboration with IBM and particularly the Db2 team.
As they've gone to a microservices, container-based architecture, we've seen pull from the marketplace saying, in addition to inventing new cloud native apps, we want our tried, true, and tested apps, such as Db2, such as MQ, to have the benefits of a Red Hat OpenShift agile environment. And that's where the collaboration between our group and Sam's group comes in, providing the storage and data services for those stateful apps. >> Great. Sam, IBM has been working with the storage administrator for a long time. What challenges are they facing when we go to the new architectures? Is it still the same people, or might there be a different part of the organization where you need to start in delivering these solutions? >> It's a really, really good question, and it's interesting because I do spend a lot of time with storage administrators and the people who are operating the IT infrastructure. And what you'll find is that the decision maker isn't the IT operations or storage operations people. These decisions about implementing Kubernetes and moving applications to these new environments are actually being driven by the business lines, which is, I guess, not so different from any other major technology shift. And the storage administrators now are struggling to keep up. So the business lines would like to accelerate development. They want to move to a develop once, deploy anywhere model, and so they start moving down the path of Kubernetes. In order to do that, they start leveraging middleware components that are containerized and easy to deploy. And then they're turning to the IT infrastructure teams and asking them to be able to support it. And when you talk to the storage administrators, they're trying to figure out how to do some of the basic things that are absolutely core to what they do, which is protecting the data in the event of a disaster or some kind of a cyber attack, being able to recover the data, being able to keep the data safe, ensuring governance and privacy of the data. These things are difficult in any environment, but now you're moving to a completely new world, and the storage administrators have a tough challenge ahead of them. And I think that's where IBM and Red Hat can really come together, with all of our experience and our very broad portfolio with incredibly enterprise-hardened storage capabilities, to help them move from their more traditional infrastructure to a Kubernetes environment.
But if we use the term data ops, what does that mean? What does that mean to you in the past? For decades, when a developer or someone deploying production wanted to create new storage or data, resource is typically typically filed a ticket and waited. So in the agile world of open shift in kubernetes, it's everything is self service and on demand or what? What kind of constraints and demands that place on the storage and data infrastructure. So now I'll come back to your questions. Do so yes. At the time, that red hat was, um, very heavily into open stack, Red Hat acquired SEF well acquired think tank and and a majority of the SEF developers who are most active in the community. And now so and that became the de facto software defying storage for open stack. But actually for the last time that we spoke at Coop Con and the Rook project has become very popular there in the CN CF as away effectively to make software defined storage systems like SEF. Simple so effectively. The power of SEF, made simple by rook inside of the open shift operator frame where people want that power that SEF brings. But they want the simplicity of self service on demand. And that's kind of the diffusion. The coming together of traditional software defined storage with agility in a kubernetes world. So rook SEF, open shift container storage. >>Wonderful. And I wonder if we could take that a little bit further. A lot of the discussion these days and I hear it every time I talk to IBM and Red Hat is customers air using hybrid clouds. So obviously that has to have an impact on storage. You know, moving data is not easy. There's a little bit of nuance there. So, you know, how do we go from what you were just talking about into a hybrid environ? >>I guess I'll take that one to start and Brent, please feel free to chime in on it. So, um, first of all, from an IBM perspective, you really have to start at a little bit higher level and at the middleware layer. So IBM is bringing together all of our capabilities everything from analytics and AI. So application, development and, uh, in all of our middleware on and packaging them up in something that we call cloud packs, which are pre built. Catalogs have containerized capabilities that can be easily deployed. Ah, in any open shift environment, which allows customers to build applications that could be deployed both on premises and then within public cloud. So in a hybrid multi cloud environment, of course, when you build that sort of environment, you need a storage and data layer, which allows you to move those applications around freely. And that's where the IBM storage suite for cloud packs was. And we've actually taken the core capabilities of the IBM storage software to find storage portfolio. Um, which give you everything you need for high performance block storage, scale out, um, file storage and object storage. And then we've combined that with the capabilities, uh, that we were just discussing from Red Hat, which including a CS on SEF, which allow you, ah, customer to create a common, agile and automated storage environment both on premises and the cloud giving consistent deployment and the ability to orchestrate the data to where it's needed >>I'll just add on to that. I mean that, as Sam noted and is probably most of you are aware. Hybrid Cloud is at the heart of the IBM acquisition of Red Hat with red hat open shift. 
The stated intent of Red Hat OpenShift is to become the default operating environment for the hybrid cloud, so effectively, bring your own cloud wherever you run. That is at the very heart of the synergy between our companies, made manifest by the very large portfolios of software, many of which have been moved to run in containers and embodied inside of IBM Cloud Paks. So IBM Cloud Paks, backed by Red Hat OpenShift, wherever you're running, on premises and in a public cloud. And now, with this Storage Suite for Cloud Paks that Sam referred to, you also have a deterministic experience. That's one of the things, as we work, for instance, deeply with the IBM Db2 team: one of the things that was critical for them is that they couldn't have their customers, when they run on AWS, have a completely different experience than when they run on premises, say, on VMware, or on premises on bare metal. It was critical to the Db2 team to give their customers deterministic behavior wherever they run. >> Right. So, Sam, I think any of our audience that's followed this space has heard Red Hat's story about OpenShift and how it lives across multiple cloud environments. I'm not sure that everybody is familiar with how much of IBM's storage solutions today are really software driven. And therefore, if I think about IBM, it's like, okay, I buy storage and, yes, it can live in the IBM Cloud. But from what I'm hearing from Brent and you, and from what I know from previous discussions, this is independent and can live in multiple clouds, leveraging this underlying technology, and can leverage the capabilities from those public cloud offers. Is that right, Sam? >> Yeah, that's right. And you know, we have the most comprehensive portfolio of software-defined storage in the industry. Maybe to some it's a well kept secret, but those that use it know the breadth of the portfolio. We have everything from the highest performing scale-out file system to an object store that can scale into the exabytes. We have our block storage as well, which runs within the public clouds and can extend back to your private cloud environment. When we talk to customers about deploying storage for hybrid multicloud in a container environment, we give them a lot of paths to get there. We give them the ability to leverage their existing SAN infrastructure through CSI drivers, the Container Storage Interface. So our whole physical on-prem infrastructure supports CSI today, and all the software that runs on our arrays also supports running on top of the public clouds, giving customers the ability to extend that existing SAN infrastructure into a cloud environment. And now, with Storage Suite for Cloud Paks, as I described earlier, we give you the ability to build a really agile infrastructure, leveraging the capabilities from Red Hat to give you a fully extensible environment and a common way of managing and deploying both on prem and in the cloud. So we give you a journey with our portfolio to get there from your existing infrastructure. You don't have to throw it out; get started with that and build out an environment that goes both on prem and in the cloud. >> Yeah, Brent, I'm glad that you started with databases, because it's not something that I think most people would think about in a Kubernetes environment. Do you have any customer examples you might be able to give? Maybe anonymous, of course.
Just talking about how those mission critical applications can fit into the new modern architecture. >> The big banks. I mean, just full stop, the big banks. But what I'd add to that: that's frequently where they start, because applications based on structured data remain at the heart of a lot of enterprises. But I would say workload category number two is all things machine learning, analytics, AI, and we're seeing an explosion of adoption within OpenShift. And, of course, IBM Cloud Pak for Data is a key market participant in that machine learning and analytics space. So an explosion of the usage of OpenShift for those types of workloads. I was going to touch just briefly on an example, going back to our data pipeline theme and how it started with databases, but it just explodes. For instance, data pipeline automation, where you have data coming into your apps that are Kubernetes based, that are OpenShift based; maybe it ends up inside of Watson Studio, inside of IBM Cloud Pak for Data. But along the way, there are a variety of transformations that need to occur. Let's say that you're a big bank. As the data comes in, you need to be able to run a CRC to attest that, when you modify the data, for instance in a real-time processing pipeline, and pass it on to the next stage, you can guarantee, well, you can attest, that there's been no tampering with the data. So that's an illustration of where it began, very much with the basics of applications running with structured data, with databases. The state of the industry today is tremendous use of these Kubernetes and OpenShift based architectures for machine learning and analytics, made more simple by data pipeline automation, through things like OpenShift Container Storage, through things like OpenShift Serverless, where you have scalable functions and whatnot. So yeah, it began there. But boy, I tell you what, it's exploded since then. >> Yeah, great to hear not only traditional applications but, as you said, so much interest and need for those new analytics use cases; that's absolutely where it's going. Sam, one other piece of the storage story, of course, is not just that we have stateful usage, but talk about data protection, if you could: how do the things I think of traditionally, like backup and restore, fit into the whole discussion we've been having? >> You know, when you talk to customers, it's one of the biggest challenges they have, honestly, in moving to containers: how do I get the same level of data protection that I use today? The environments are in many cases more complex from a data and storage perspective. You want to be able to take application-consistent copies of your data that can be recovered quickly, and in some cases even reused. You can reuse the copies for dev and test, for application migration, or for AI or analytics; there are lots of use cases for the data, but a lot of the tools and APIs are still very new in this space. IBM has made data protection for containers a top priority for our Spectrum Protect suite, and we provide the capabilities to do application-aware snapshots of your storage environment so that a Kubernetes developer can actually build in the resiliency they need.
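A minimal sketch of what such an application-aware copy can look like at the Kubernetes API level, using the CSI snapshot resource (v1beta1 at the time of this conversation, v1 in later releases); the claim and class names are hypothetical, and a real backup product such as Spectrum Protect layers scheduling, application quiescing, and cataloging on top of this primitive.

```yaml
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: db2-data-nightly
spec:
  volumeSnapshotClassName: csi-rbd-snapclass   # assumed snapshot class
  source:
    persistentVolumeClaimName: db2-data        # the PVC to protect
```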
As they build applications, a storage administrator can get a pane of glass and visibility into all of the data, ensure that it's all being protected appropriately, and provide things like SLAs. So I think it's about the fact that the early days of Kubernetes tended to be stateless. Now that people are moving some of the more mission critical workloads, data protection becomes just as critical as anything else you do in the environment, so the tools have to catch up. That's a top priority of ours, and we provide a lot of those capabilities today, and if you watch what we do with our Spectrum Protect suite, we'll continue to provide the capabilities that our customers need to move their mission critical applications to a Kubernetes environment. >> All right. And Brent, one other question, looking forward a little bit. We've been talking for the last couple of years about how serverless can plug into this broader Kubernetes ecosystem. The Knative project is one that IBM and Red Hat have been involved with. So for OpenShift and serverless, I'm sure you're leveraging Knative. What is the update? >> The update is effectively adoption, inside of a lot of cases like the big banks, but also the largest companies in other industries as well. So if you take the words event-driven architecture, many of them are coming to us with that top of mind. The need is to say, I need to ensure that when data first hits my environment, I can't wait; I can't wait for a scheduled batch job to come along and process that data and maybe run an inference. I mean, the classic case is you're ingesting a chest X-ray, and you need to immediately run that against an inference model to determine if the patient has pneumonia or COVID-19, and then kick off another serverless function to anonymize the data and send it back in to retrain your model. So that's the need. And you mentioned serverless, and of course people say, well, I could handle that just with really smart batch jobs, but one of the other parts of serverless that sometimes people forget, and smart companies are aware of, is that serverless is inherently scalable, so zero to N scalability. So as data is coming in, hitting your Kafka bus, hitting your object store, hitting your database — and if you've picked up the community project Debezium, where something hits your relational database and it can automatically trigger an event onto the Kafka bus — your entire architecture becomes event-driven. >> All right. Well, Sam, let me let you have the final word on IBM in this space and what you want people to take away from KubeCon 2020 Europe. >> I'm actually going to talk to, I think, the storage administrators, if that's okay, because if you're not involved right now in the Kubernetes projects that are happening within your enterprise, they are happening, and there will be new challenges. You've got a lot of investments you've made in your existing storage infrastructure. We at IBM and Red Hat can help you take advantage of the value of your existing infrastructure — the capabilities, the resiliency, the security built into it over the years — and we can help you move forward into a hybrid, multicloud environment built on containers. We've got the experience and the capabilities between Red Hat and IBM to help you be successful, because there are still a lot of challenges there.
But our experience can help you implement that with the greatest success. Appreciate it. >> All right, Sam and Brent, thank you so much for joining. It's been excellent to be able to watch the maturation in this space over the last couple of years. >> Thank you. >> All right, we'll be back with lots more coverage from KubeCon CloudNativeCon Europe 2020, the virtual event. I'm Stu Miniman, and thank you for watching theCUBE.
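For readers who want to see the scale-to-zero behavior Brent highlights, a minimal Knative Serving sketch follows; the service name, image, and annotation values are hypothetical, and in a full event-driven pipeline an event source (for example, a Kafka source fed by change-data-capture) would deliver requests to a service like this.

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: xray-inference
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/minScale: "0"    # scale to zero when idle
        autoscaling.knative.dev/maxScale: "50"   # burst out as images arrive
    spec:
      containers:
      - name: model
        image: registry.example.com/xray-inference:latest
        env:
        - name: MODEL_VERSION
          value: "v3"
```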

Published Date : Aug 18 2020


Steve Gordon, Red Hat | KubeCon + CloudNativeCon Europe 2020 – Virtual


 

>> Voice over: From around the globe, it's theCUBE with coverage of KubeCon and CloudNativeCon Europe 2020 virtual, brought to you by Red Hat, the Cloud Native Computing Foundation and Ecosystem Partners. >> Hi, I'm Stu Miniman, and welcome back to theCUBE's coverage of KubeCon CloudNativeCon Europe for 2020. We get to talk to the participants in this great community and ecosystem where they are around the globe. And when you think back to the early days of containers, it was, containers are lightweight, they're small, they're going to obliterate virtualization — that was often the headline that we had. Of course, we know everything in IT tends to be additive. And here we are in 2020, and containers and virtual machines are living side by side, and often we see the back and forth that happens when we talk about virtualization and containers. To talk about that topic specifically, happy to welcome to the program first time guest Steve Gordon. He's the director of product management at Red Hat. Steve, thanks so much for joining us. >> Thanks so much Stu, it's great to be here. >> All right, as I teed up, of course, virtualization was a wave that swept through the data center. It is a major piece not only of what's in the data center, but even if you look at the public clouds, often it was virtualization underneath there. Certain companies like Google, of course, really drove container adoption. And often you hear, when people talk about having built something CloudNative, that underlying piece of being containerized and then using an orchestration layer like Kubernetes is what they talk about. So maybe stop for a sec: Red Hat, of course, is heavily involved in virtualization and containers. How do you see that landscape, and what's the general conversation you have with customers as to how they make the choice and how the lines blur between those worlds? >> Yeah, so at Red Hat, we've been working on certainly the current iteration of virtualization with KVM for around 12 years, and myself a large portion of that. I think one thing that's always been constant is that, while from the outside in, virtualization looks like it's been a fairly stable marketplace, it's always changing, it's always evolving. And what we're seeing right now is, as people are adopting containers and even constructs built on top of containers into their workflows, there is more interest and more desire around how can I combine these things, recognizing that still an enormous percentage of my workloads are out there running in virtual machines today, but I'm building new things around them that need to be able to interact with them and springboard off of that. So I think for the last couple of years, I'm sure you yourself have seen a number of different projects pop up in the open source community around this intersection of containers and virtualization and how these technologies can complement each other. And certainly KubeVirt is one of the projects that we've started in this space, in reaction to both that general interest, but also the real customer problems that people have as they try and meld these two worlds. >> So Steve, at Red Hat Summit earlier this year, there was a lot of talk around container native virtualization. If you could just explain what that means, how that might be different from just virtualization in general, and we'll go from there. >> Sure, so back in, I think, early 2017, late 2016, we started playing around with this idea.
We'd already seen the momentum around Kubernetes and, as a result, the way we architected OpenShift 3 at the time around Kubernetes: it has this strength as an orchestration platform, but also as a shared provider of storage, networking, and other resources. And really thinking about it, when we look at virtualization and containers, some of these problems are very common regardless of what footprint the workload happens to fit into. So leveraging that strength of Kubernetes as an orchestration platform, we started looking at: what would it look like to orchestrate virtual machines on that same platform, right next to our application containers? And the extension of that, the KubeVirt project and what has ultimately become OpenShift virtualization, is based around that core idea of how can I make a traditional virtual machine, a full operating system, interact with and look exactly like a Kubernetes native construct that I can use from the same platform. I can manage it using the same constructs, I can interact with it using the same console, all of these kinds of ideas. And then on top of that, not just bring in workloads as they lie, but enable really powerful workflows for people who are building a new application in containers that still needs some backend components, say a database that's sitting in a VM, or trying to integrate those virtual machines into new constructs, whether it's something like a pipeline or a service mesh. We're hearing a lot of questions around those things these days, where people don't want to just apply those things to brand new workloads, but figure out how they apply those constructs to the broader majority of their fleet of workloads that exist today. >> All right, so I believe back at Red Hat Summit, OpenShift virtualization was in beta. Where's that solution set today? >> Right, so at this year's KubeCon, we're happy to announce that OpenShift virtualization is moving to general availability. So it will be a fully supported part of OpenShift. And what that means is, you, as a subscriber to OpenShift, the platform, get virtualization as just an additional capability of that platform that you can enable as an operator from the OperatorHub, which is a really powerful thing for admins to be able to do, but is also just really powerful in terms of the user experience. Once that operator is enabled on your cluster, a little tab shows up that shows you can now go and create a virtual machine. But you also still get all of the metrics and the shared networking and so on that go with that cluster that underlies it all. And you can again do some really powerful things in terms of combining those constructs for both virtual machines and containers. >> When you talk about that line between virtualization and containers, a big question is, what does this mean for developers? How is it different from what they were using before? How do they engage and interact with their infrastructure today? >> Sure, so I think the way a lot of this current wave of technology got started for people, whether it was with Kubernetes or Docker before that, was that the easiest way they could grab compute capacity was to go to their virtual machine farm, whether that was the virtualization estate at their company, or whether that was taking a credit card to a public cloud, getting a virtual machine and spinning up a container platform on top of that.
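As a sketch of the operator enablement Steve describes (most admins would simply click through OperatorHub in the console), the equivalent declarative form looks roughly like this; the package name, channel, and namespace shown are typical for OpenShift Virtualization but should be checked against the catalog on your cluster, and the usual Namespace and OperatorGroup objects are omitted.

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  channel: stable                      # channel names vary by release
  name: kubevirt-hyperconverged        # OpenShift Virtualization package
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```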
What we're now seeing is, as that's transitioning into people building their workloads almost entirely around these container constructs, in some cases when they're starting from scratch, there is more interest in, how do I leverage that platform directly? How do I, as the application group, have more control over that platform? And in some cases, depending on the use case, like if they have demand for GPUs, for example, or other high-performance devices, there's that question of whether the virtualization layer between my physical host and my container is adding that much value. But then they still want to bring in the traditional workloads they have as well. So I think we've seen this gradual transition where there is a growing interest in re-evaluating, how do we start with container based architectures? To, okay, as we transition towards more production scenarios, and the growth in production scenarios, what tweaks do we make to that architecture? Does it still make sense to run all of that on top of virtual machines? Or does it make more sense to almost flip that equation as my workload mix gradually starts changing? >> Yeah, two thoughts come to mind on that. Number one is, are there specific applications out there, I think about traditional VMs, often that's Windows environments that we have there, is that some of the use case, to bring them over to containers? And then also, once I've gotten it into the container environment, what are the steps to move forward? Because I have to expect that there's going to be some refactoring, some modernization, to take advantage of the innovation and pace of change, not just take it, containerize it and leave it. >> Yeah, so certainly there is an enormous amount of potential out there in terms of Windows workloads, and people are definitely trying to work out how do they leverage those workloads in the context of an OpenShift and Kubernetes based environment. And Windows containers, obviously, is one way to address that. And certainly that is very powerful in and of itself for bringing those workloads to OpenShift and Kubernetes, but it does have some constraints in terms of needing to be on a relatively recent version of Windows Server and so on for those workloads to run in that construct. So where OpenShift Virtualization helps with that is we can actually take an existing virtual machine workload, bring that across, even if it's, say, Windows Server 2012, run it on top of the OpenShift Virtualization platform as a VM, and then if or when you start modernizing more of that application, you can start teasing that out into actual containers. And that's actually something, it was one of our very early demos at Red Hat Summit 2018, I think, how you would go about doing that, and primarily we did that because it is a very powerful thing for customers to see how they can bring all of those applications into this mix. And the other aspect of that I'll mention is one of our financial services customers who we've been working with basically since that demo, they saw it from a hallway at Red Hat Summit and came and said, "Hey, we want to talk to you guys about that." One of their primary workloads is a Windows 10 style environment that they happened to be bringing in as well. And that's more in that construct of treating OpenShift almost as a pool of compute, which you can use for many different workload types, with the Windows 10 being just one aspect of that.
And the other thing I'll say, in terms of the second part of the question, what do I need to do in terms of refactoring? So we are very conscious of the fact that, if this is to provide value, you have to be able to bring in existing virtual machines with as minimal change as possible. So we do have a migration solution set, that we've had for a number of years, for bringing virtual machines to Linux virtualization stacks. We're expanding that to include OpenShift Virtualization as a target, to help you bring in those existing virtual machine images. Where things do change a little bit is in terms of the operational approaches. Obviously, the admin console now is OpenShift for those virtual machines, and that does right now present a change. But we think it is a very powerful opportunity in terms of, as people get more and more production workloads into containers, for example, it's going to become a lot more appealing to have a backup solution, for example, that can cater to both the virtual machine workloads as well as any stateful container workloads you may have, which do exist in increasing numbers. >> Well, I'm glad you brought up a stateful discussion, because as an industry, we've spent a long time making sure that virtual machines have storage and networking that is reliable and performant and the like. What should customers and operators be thinking about when they move to containers? Are there things that are managed differently when this brings them into the OpenShift management plane? So what else should I be thinking about? What do I need to do differently when I've embraced this? >> Yeah, so in terms of the things that a virtual machine expects, the two big ones that come to mind for me are networking and storage. The compute piece is still there obviously, but I think it's a little less complicated to solve, just because the OpenShift and broader Kubernetes community have done such a great job of addressing that piece, and that's really what attracted us to it in the first place. But on the networking side, certainly the expectations of a traditional virtual machine are a little bit different to the networking model of Kubernetes by default. But again, we've seen a lot of growth in container based applications, particularly in the context of CloudNative network functions, that have been pushing the boundaries of Kubernetes networking as well. That's resulted in projects like Multus, which allow us to give a virtual machine the kind of networking interface that it expects, but also give it the option of using the pod networking natively, for some of those more powerful constructs that are native to Kubernetes. So that's one of those areas where you've got a mix of options, depending on how far you want to go from a modernization perspective, versus, do I just want to bring this workload in and run it as it is, and my modernization is more built around it, in terms of the other container based things.
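The Multus project Steve mentions attaches secondary networks to a pod or virtual machine through NetworkAttachmentDefinition objects. The sketch below is an illustration only, not taken from the interview; it assumes the Multus CNI and its CRD are installed on the cluster, that a Linux bridge named br1 exists on the nodes, and the object name and subnet are made up for the example.

```python
# Minimal sketch: define a Multus secondary network that a VM or pod can attach to.
# Assumptions: Multus CNI + NetworkAttachmentDefinition CRD installed, a host
# bridge "br1" exists on the nodes, and a kubeconfig is available.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

nad = {
    "apiVersion": "k8s.cni.cncf.io/v1",
    "kind": "NetworkAttachmentDefinition",
    "metadata": {"name": "vm-bridge"},   # hypothetical name
    "spec": {
        # CNI config: a simple bridge network with host-local IPAM
        "config": '{"cniVersion": "0.3.1", "type": "bridge", "bridge": "br1",'
                  ' "ipam": {"type": "host-local", "subnet": "192.168.100.0/24"}}'
    },
}

api.create_namespaced_custom_object(
    group="k8s.cni.cncf.io", version="v1",
    namespace="default", plural="network-attachment-definitions", body=nad)

# A workload opts in with an annotation and then gets this interface
# in addition to (not instead of) the default pod network:
#   metadata:
#     annotations:
#       k8s.v1.cni.cncf.io/networks: vm-bridge
```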
Then similarly in storage, it's an area where obviously at Red Hat we've been working closely with the OpenShift Container Storage team, but we also work with a number of ecosystem partners on not just how do we certify their storage plugins and make sure they work well both for containers and virtual machines, but also how do we push forward upstream efforts around things like the Container Storage Interface specification, to allow for these more powerful capabilities like snapshots, cloning and so on, which we need for virtual machines, but are also very valuable for container based workloads as well. >> Steve, you've mentioned some of the reasons why customers were moving towards this environment. Now that you're GA, what learnings did you have during beta? Are there any other customer stories you could share that you've learned along this journey? >> Yeah, so I think one of the things I'll say is that there's no feedback like direct product-in-the-hands-of-customers feedback. And it's really been interesting to see the different ways that people have applied it, not necessarily having set out to apply it, but having gotten partway through their journey and realized, hey, I need this capability, you have something that looks pretty handy, and then having success with it. So in particular, in the telecommunications vertical, we've been working closely with a number of providers around the 5G rollouts, and the 5G core in particular, where they've been focused on CloudNative network functions. And really what I mean by that is, the wave of technology and the push they're making around 5G is to take what they started with network function virtualization a step further, and build that next generation network around CloudNative technologies, including Kubernetes and OpenShift. And as they've been doing that, they have been finding that some of the vendors are more or less prepared for that transition. And that's where, while they've been able to leverage the power of containers for those applications that are ready, they're also able to leverage OpenShift Virtualization as a transitional step, as they modernize the pieces that are taking a little bit longer. And that's where we've been able to run some applications, in terms of the load balancer, in terms of a carrier grade database, on top of OpenShift Virtualization, which we probably wouldn't have set out to do this early in terms of our plan, but we were really able to react quickly to that customer demand and help them get that across the line. And I think that's a really powerful example where the end state may not necessarily be to run everything as a virtual machine forever, but they were still able to leverage this technology as a powerful tool in the context of their broader modernization effort. >> All right, well, Steve, thank you so much for giving us the updates. Congratulations on going GA for this solution. Definitely look forward to hearing more from the customers as they come. >> All right, thanks so much Stu. I appreciate it. >> All right, stay tuned for more coverage of KubeCon CloudNativeCon EU 2020, the virtual edition. I'm Stu Miniman. And thank you for watching theCUBE. (upbeat music)
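To make the "VM as a Kubernetes-native construct" idea from the interview concrete, here is a minimal sketch using the Kubernetes Python client. It is an illustration only, not from the interview: it assumes a cluster that already has KubeVirt (or OpenShift Virtualization) installed, a working kubeconfig, and that kubevirt.io/v1 is the served API version; the VM name, namespace and demo CirrOS container disk are placeholders.

```python
# Minimal sketch: create a KubeVirt VirtualMachine as just another custom resource.
# Assumptions: KubeVirt / OpenShift Virtualization installed, kubeconfig available.
from kubernetes import client, config

config.load_kube_config()           # or load_incluster_config() inside a pod
api = client.CustomObjectsApi()

vm = {
    "apiVersion": "kubevirt.io/v1",
    "kind": "VirtualMachine",
    "metadata": {"name": "demo-vm"},            # hypothetical name
    "spec": {
        "running": True,                        # start the VM immediately
        "template": {
            "metadata": {"labels": {"kubevirt.io/vm": "demo-vm"}},
            "spec": {
                "domain": {
                    "devices": {"disks": [{"name": "rootdisk",
                                           "disk": {"bus": "virtio"}}]},
                    "resources": {"requests": {"memory": "1Gi"}},
                },
                "volumes": [{"name": "rootdisk",
                             "containerDisk": {
                                 "image": "quay.io/kubevirt/cirros-container-disk-demo"}}],
            },
        },
    },
}

# Same API, same RBAC, same tooling as the application containers next to it.
api.create_namespaced_custom_object(
    group="kubevirt.io", version="v1",
    namespace="default", plural="virtualmachines", body=vm)
```

Once created, the virtual machine shows up on the same cluster and console as the pods around it, which is the "manage it with the same constructs" point made above.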

Published Date : Aug 18 2020


Joe Fitzgerald, Red Hat | KubeCon + CloudNativeCon Europe 2020 – Virtual


 

>> From around the globe, it's theCUBE with coverage of KubeCon and CloudNativeCon Europe 2020 Virtual, brought to you by Red Hat, the Cloud Native Computing Foundation and Ecosystem Partners. >> Hi, and welcome back. I'm Stu Miniman, and this is theCUBE's coverage of KubeCon + CloudNativeCon 2020, the Europe virtual edition. Of course, Kubernetes won the container wars, and as we went from managing a few containers to managing clusters, to many customers managing multiple clusters, it can get more complicated. So to help understand those challenges and how solutions are being put out to solve them, happy to welcome back one of our Cube alumni. Joe Fitzgerald is the vice president and general manager of the management business unit at Red Hat. Joe, good to see you again. Thanks so much for joining us. >> Thanks for having me back, Stu. >> All right, so at Red Hat Summit, one of the interesting conversations you and I had was talking about Advanced Cluster Management, or ACM, of course. That was some people and some technology that came over to Red Hat from IBM post-acquisition. So it was tech preview. Give us the update, what's the news? And, you know, just level set for the audience what cluster management is. >> Sure. So Advanced Cluster Management, or ACM as we call it, basically is a way to manage multiple clusters across even different environments, right? As people have adopted Kubernetes, and you know, we have several thousand customers running OpenShift, they're starting to push it in some very, very big ways. And so what they run into is scale. They need better ways to manage and maintain those environments, and ACM is a huge way to help manage those environments. It was early availability back at Summit, end of April, and in just a few months now it's generally available. We're super excited about that. >> Well, congratulations on moving that from technical preview to general availability so fast. What can you tell us? How many customers have used this? What have you learned in talking to them about this solution? >> So, first of all, we were really pleasantly surprised by the amount of people that were interested in the tech preview. A tech preview is not a product that's ready to use in production yet, so a lot of times accounts are not interested in it, they want to wait for the production version. We had over 100 customers in our tech preview, not only across geographies all over the world, Asia, America, Europe, but across all different verticals. There's a tremendous amount of interest in it. I think that just shows, you know, how applicable it is to these environments people are trying to manage. So tremendous uptake. We got great feedback from that, and in just a few months we incorporated that feedback into the now generally available product. So great uptake during the tech preview. >> Excellent. Bring us inside a little bit, you know, when would I use this solution? If I just have a single cluster, does it make sense for me? Is it only for multi-cluster? You know, what's the applicability of the offering? >> Yes, so even for single clusters, the things that ACM really does fall into three major areas, right? It allows cluster lifecycle management, and of course that would mean that you have more than one cluster, and as people grow, they do, for a number of reasons.
Also, policy-based management, the ability to enforce config policies and enforce compliance across even your single cluster, to make sure that it stays perfect in terms of settings and configuration and things like that. And the other, application lifecycle management, the ability to deploy applications in a more advanced way, even if you're on a single cluster. It gets even better for multi-cluster, because you can deploy your apps to just the clusters that are tagged a certain way. But there are lots of capabilities, even for applications on a single cluster. So we find even people that are running a single cluster need it, as they deploy more and more clusters. >> That's great. And you mentioned you had feedback from customers. What are the things that, I guess, would be the biggest pain points that this solves for them, that they were struggling with in the past? >> Well, first of all, being able to do sort of federated management of multiple clusters, right, as opposed to having to manage each cluster individually. But also the ability to do policy-based configuration management, to just express the way you want things to stay, have them stay that way, and to adopt more of a GitOps methodology in terms of how they're managing their OpenShift environments. There's lots more feedback, but those were some of the ones that seemed to be fairly common across customers. >> Yeah, and you know, Joe, you've also got automation in the management suite. How do I think about this? How does this fit into the broader management automation that customers were using? >> Well, I think as people deploy these environments, there's always a long conversation about the platform, right? But there's a lot of things that have to go with the platform, and Red Hat's actually very good about that, in terms of providing all the things that you would find necessary to make the platform successful in your environment. Right? So beside the platform, we need storage, development environments, management, automation, the ability to train on it. We have our Open Innovation Labs. There's lots of things that are beyond the platform that people acquire in order to be successful. In the case of management automation, ACM was a huge advancement in terms of how to manage these environments, but we're not done. We're going to continue to add more automation, integration with things like Ansible, more integration with observability and analytics, so far from done. But we want to make sure that OpenShift stays the best managed environment that's out there. I also do want to make a call out to the fact that, you know, this team has been working on this technology for the past couple of years, and so, you know, it's only been at Red Hat for five months. This technology is actually very mature, but it is quite an accomplishment for any company to take a new team and a new technology, and in five months do what Red Hat does to it in terms of making it consumable for the enterprise. So kudos to that team, really. >> Well, and I know a piece of that is, you know, moving that along to be open source. So, you know, where are we with the solution? Now that it's GA, how does that fit into being open source? >> So parts of it are open source already. We're in the process of open sourcing the rest of it. As you've seen over time, Red Hat has a perfect record here of acquiring technologies that were either completely closed source or open core, in some cases where part of it was open and part was closed. That was the case with Ansible a few years ago.
But basically our strategy is everything has to be open source. That takes time, and we're in the process of going through all of the steps necessary to open source parts of ACM. We think people will find lots of interest in the community around the different projects inside of it. >> Yeah. One of the bigger concerns talking to customers in general about Kubernetes, even more in 2020, is, what about security? How does ACM help customers make sure that their environment is secure? >> Yeah, so you know, configuration policies and enforcement. You can actually say with ACM that you want things to be a certain way, and if somebody changes them, it will automatically either warn you about it or the enforcement will set them back. So it's got some very strong security chops in terms of keeping the configurations just the way you want. That gets harder as you get more and more clusters. Imagine trying to keep everything at the same levels, settings, software, all the parts and pieces. So the fact that you have ACM, which can do this across any and all of your clusters, really takes the burden off people trying to maintain secure environments. >> Okay, and so generally available now. Anything you can share about how this solution is priced, how it fits into the broader OpenShift offerings? >> Yes. So it's an add-on for OpenShift, priced very similarly to OpenShift in terms of the, you know, core pricing. One thing I do want to mention about ACM, which maybe doesn't come out just from a description of the product, is the fact that ACM was built from scratch for Kubernetes environments and optimized for OpenShift. We're seeing a lot of competition out there that's taking products that were built for other environments and trying to sort of bend or coerce them into managing Kubernetes environments. We don't think people are going to be successful at that; they haven't been successful to date. So one of the things that we find is sort of a competitive differentiator for ACM in the market is the fact that it was built from scratch, designed for Kubernetes environments. So it is really well designed for the environment it's trying to manage, and we think that's going to keep it a competitive edge. >> Well, always, Joe, when you have a new architecture, you can take advantage of things. Any examples that you have of what a new architecture like this can do that an older architecture might struggle with, or not be able to do? Even though when you look at the product sheet the words sound similar, when you get underneath the covers it's just not a good architectural fit. >> Yeah, so it's very similar to the shift from physical to virtual. You can't have a paradigm shift in the infrastructure and not have a corresponding paradigm shift in the management tools. So the way you monitor these environments, the way you secure them, the way they scale and expand, the way we do resource management, security, all those things are vastly different in this environment compared to, let's say, a virtual or physical environment. This has happened many times in the past: a paradigm shift in the infrastructure or the application environment will drive a commensurate paradigm shift in management. That's what you're seeing here. So that's why we thought it was super important to have management that was built for these environments, by design, so it's not trying to do sort of unnatural things to manage the environment. >> Yeah, I wonder, I'd love to hear just a little bit of your philosophy as to what's needed in this space.
You know, I look back to previous generations, look at virtualization. You know, Microsoft did very well at managing their environment, VMware did the same for their environments. But, you know, we've had generations of times where solutions have tried to be the management of everything, and that could be challenging. So, you know, what's Red Hat and ACM's position, and what do we need in the Kubernetes space, you know, today and for the next couple of years? >> So Kubernetes itself is an automation platform, as you talked about, you know, early on in the segment. So you know, Kubernetes itself provides, you know, a lot of automation around container management. What ACM does is build on top of that, and then capture, you know, data and events and configuration items in the environment, and then allow you to define policies. People want to move away from manual processes, certainly, but they want to be able to get to a more stateful expression of the way things should be. They want to be able to use more of, you know, sort of a GitOps, you know, kind of philosophy, where they say, this is how I want things today, check the version in, keep it at that level, if it changes, put it back, tell me about it. So sort of the era of chasing, you know, management with people is changing. You're seeing a huge premium now on automation. So automation at all levels. And I think this is where ACM's automation on top of OpenShift automation, down the road combined with things like Ansible, will provide the most automated environment you can have for these container platforms. Um, so it's definitely changing. You're seeing observability, AIOps, GitOps types of philosophies coming in; these are very different than management in the past. And you're seeing innovation across the whole management landscape in the Kubernetes environment, because they are so different; the physics of them are different than the previous environments. We think with ACM, Ansible, our Insights product and some of our analytics, we've got the right thing for this environment. >> And can you give us a little bit of a look forward, you know? How often should we expect to see updates on this? Of course, you mentioned getting feedback from the community, from the technical preview to GA. So give us a little bit of a look, you know, what should we be expecting to see from ACM down the road? >> So the ACM team is far from done, right? They're going to continue to rev, you know, just like we rev OpenShift at a very, very fast pace; we're going to be revving ACM at a fast pace also. You'll see a lot of integration between ACM and a lot of the partners we're already working with in the application monitoring space and the analytics space, security, automation. I would expect to see, in the AnsibleFest time frame, which is mid-October, some integration between Ansible and ACM, around things that Ansible does very well combined with what ACM does. And we'll continue to push out on more cluster management, more policy-based management, and certainly advancing the application lifecycles that people are very interested in. They want to move faster, with a higher degree of certainty, in their application deployments, and ACM is right there. >> Just a final question for you, Joe: you know, just in the broader space, looking at management in this KubeCon, CloudNativeCon ecosystem, any final words you want customers to understand about where we are today and where we need to go down the road?
>> So I think, you know, the market and industry has decided Kubernetes is the platform of the future, right? And certainly we were one of the earliest to invest in container management platforms with OpenShift; we were one of the first to invest in Kubernetes. We have thousands of customers running OpenShift across all industries and geographies, so we bet on that a long time ago. Now we're betting on the management and automation of those environments and bringing them to scale. And the other thing I think that Red Hat is unique on is that we think people are going to want to run their Kubernetes environments across all different kinds of environments, whether it's on premise, physical and virtual, multiple public clouds, where we have offerings as well, or at the edge. Right? So this is going to be an environment that's going to be very, very ubiquitous, pervasive, deployed at scale. And so the management and automation of it has become a necessity. And Red Hat is investing in the right areas to make sure that enterprises can consume Kubernetes, particularly OpenShift, in all the environments that they want, at the scale they want. >> All right. Excellent. Well, Joe, I know we'll be catching up with you and your team for AnsibleFest, coming in the fall. Thanks so much for the update. Congratulations to you and the team on the rapid progression of ACM now being GA. >> Thanks, Stu, appreciate it, we'll see you soon. >> All right, stay tuned for more coverage from KubeCon + CloudNativeCon 2020 in Europe, the virtual edition. I'm Stu Miniman, and thanks, as always, for watching theCUBE.
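The policy-based management Joe describes, expressing how things should stay and having ACM either warn about drift or set it back, is driven by Policy custom resources on the ACM hub. The sketch below is an illustration under stated assumptions, not an excerpt from any Red Hat material: it assumes an ACM hub with the open-cluster-management policy framework installed, uses the policy.open-cluster-management.io/v1 field names as best understood here, and uses hypothetical names; a real deployment also needs a PlacementRule/PlacementBinding to choose which clusters the policy targets.

```python
# Minimal sketch: an ACM-style policy that insists a namespace exists on
# managed clusters, and enforces it (sets it back) if it drifts.
# Assumptions: ACM hub with the policy framework installed; names are made up.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

policy = {
    "apiVersion": "policy.open-cluster-management.io/v1",
    "kind": "Policy",
    "metadata": {"name": "require-prod-namespace", "namespace": "policies"},
    "spec": {
        "remediationAction": "enforce",   # "inform" would only warn about drift
        "disabled": False,
        "policy-templates": [{
            "objectDefinition": {
                "apiVersion": "policy.open-cluster-management.io/v1",
                "kind": "ConfigurationPolicy",
                "metadata": {"name": "require-prod-namespace-config"},
                "spec": {
                    "remediationAction": "enforce",
                    "severity": "low",
                    "object-templates": [{
                        "complianceType": "musthave",
                        "objectDefinition": {
                            "apiVersion": "v1",
                            "kind": "Namespace",
                            "metadata": {"name": "prod-apps"},
                        },
                    }],
                },
            },
        }],
    },
}

api.create_namespaced_custom_object(
    group="policy.open-cluster-management.io", version="v1",
    namespace="policies", plural="policies", body=policy)
```

Switching remediationAction to "inform" gives the warn-only behavior Joe contrasts with enforcement in the interview.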

Published Date : Aug 18 2020


John Apostolopoulos & Anand Oswal, Cisco | Cisco Live US 2019


 

>> Narrator: Live, from San Diego, California, it's The Cube, covering Cisco Live, US, 2019. Brought to you by Cisco, and its Ecosystem Partners. >> Welcome back to San Diego, everybody, you're watching The Cube, the leader in live tech coverage. My name is Dave Vellante. I'm here with my co-host Stu Miniman, we're covering day two here of Cisco Live, 2019. Anand Oswal is here, he's the Senior Vice President of Enterprise Networking Engineering at Cisco, and John Apostolopoulos. The Italians and the Greeks, we have a lot in common. He is the VP and CTO of Enterprise Networking at Cisco. Gentlemen, welcome to The Cube. How did I do? >> You did awesome. >> Dave: Not too bad, right? Thank you. (chuckles) All right. Anand, let's start with you. You guys have had a bunch of news lately. You're really kind of re-thinking access to the network. >> Anand: Yeah. >> Can you explain what's behind that, to our audience? >> Yeah. If you think about it, the network is running more and more critical infrastructure. At the same time, it's increasing in scale and complexity. What we expect is that you always need wireless on. The workspace is on the move. You're working here, in your office, in the cafe, in the soccer field, everywhere. You want an uninterrupted, unplugged experience. For that, it's wireless first, it's cloud-driven, and it's data-optimized. So, we had to rethink how we do access. It's not just about your laptops and your phones on the wireless network; in the enterprise it's digital management systems, IoT devices, everything's connected wirelessly. And we need to rethink the access on that part. >> So John, this obviously ties in to, you know, you hear all the buzz about 5G and WIFI 6. Can you explain the connection and, you know, what do we need to know about that? >> Okay, so 5G and WIFI 6 are two new wireless technologies, which are coming about now, and they're really awesome. So, WIFI 6 is the new version of WIFI. It's available today, and it's going to be used predominantly indoors, as we use WIFI indoors, in high-density environments where we need large data rates per square meter, and the new WIFI 6 provides that coverage indoors. 5G is going to be used predominantly outdoors, in the cellular frequencies, replacing conventional 4G or LTE, and it'll provide you the broad coverage as you roam around, outdoors. And what happens though, is we need both. You need great coverage indoors, which WIFI 6 can provide, and you need great coverage outdoors, which 5G will provide. >> So, the 4G explosion kind of coincided with mobile-- >> Anand: Yep. >> Obviously, and that caused a huge social change-- >> Anand: Yep. >> And of course, social media took off. What should we expect with 5G, is it, you know, I know adoption is going to take a while, we'll talk about that, but it feels like it's more, sort of, B-to-B driven, but maybe not. Can you, sort of, give us your thoughts there? >> Well, think about it, if you see, WIFI 6 and 5G are actually built on some similar fundamental technology building blocks. You know, you've all been at a ball game. Or the Warriors game, like a few weeks ago, when they were winning. And, after a great play, you're trying to send that message, a video to your kid or something, and the WIFI is slow, latency. With WIFI 6, you won't have that problem. 'Cause WIFI 6 has four times the latency, sorry, four times the throughput and capacity as existing WIFI. Lower latency. And also, the battery life.
You know, people say that batteries are the most important thing today, like in the Maslow Hierarchy Chart-- >> Dave: Yeah, yeah, yeah. >> Three times the battery life, for WIFI 6 endpoints. So, you're going to see a lot of use cases where you have inter-working with WIFI 6 and 5G. WIFI 6 for indoors, and 5G for outdoors, and there'll be some small overlap, but the whole idea is, how do you ensure that these two disparate access networks are talking to each other? Exchanging security, policy, and sharing visibility. >> Okay, so, well, first of all, you're a Warriors fan, right? >> Anand: Yeah, I am. >> Awesome, we want to see this series keep going. >> Game six, baby! >> That was really exciting. Now of course, I'm a Bruins fan, so we're on the plane the other night, and the JetBlue TV shut down, you know, so I immediately went to the mobile. >> Yeah. >> But it was a terrible experience, I was going crazy. Texting my friends, what's happening? >> Anand: Yeah. >> You're saying that won't happen-- >> Anand: Yeah. >> With 5G and WIFI 6? >> Anand: Yeah. Exactly. >> Oh, awesome. >> So, John, help connect for us, Enterprise Networking. We've been talking about the new re-architectures, you know, there's ACI, there's now intent-based networking, how does this play into the 5G and WIFI 6 discussion that we're having today? >> Okay, so one of the things that really matters to our customers, and to everybody, basically, is that they want the sort of end-to-end capability. They have some devices, they want to talk through applications, they want access to data, they want to talk with other people, or to IoT things. So you need this sort of end-to-end capability, wherever the ends are. So one of the things we've been working on for a number of years now is, first of all, intent-based networking, which we announced two and a half years ago. And then, multi-domain, where we try to connect across the different domains. Okay, across campus, and WAN, and data center, all the way to the cloud, and across the service provider network. And to add security as foundational across all of these. This is something that Dave Goeckeler and Chuck Robbins talked about at their keynote yesterday. And this is a huge area for us, 'cause we're going to make this single-orchestrated capability for our customers, to connect end-to-end, no matter where the end devices are. >> All right, so Anand, I have to believe that it's not the poor, you know, administrator, saying, oh my God, I have all these pieces and I need to manage them. (laughing) Is this where machine learning and AI come in to help me with all these disparate systems? >> Absolutely. Our goal is very simple. Any user, on any device, should have access to any application. Whether it's sitting in a data center, in a cloud, or multiple clouds. Or any network. You want that securely and seamlessly. You also want to make sure that the whole network is orchestrated, automated, and you have the right visibility. Visibility for IT, and visibility for business insights. Talk of AI and ML, what's happening is that as the network is growing in complexity and scale, the number of alerts is growing up the wazoo. So you are not able to figure it out. That's where the power of AI and machine learning comes in. Think about it. The industrial revolution made sure that you don't have the limitations of what humans can do, right? You had machines. And now, we want to make sure that businesses can benefit in the digital revolution.
You're not limited by what I can parse through the logs and scrolls. I want to automate everything. And that's the power of AI and machine learning. >> Are there use cases where you would want some human augmentation, where you don't necessarily want the machine taking over for you, or do you see this as a fully-automated type of scenario? >> Yeah, so what happens is, first of all, visibility is really, really important. The operator of a network wants to have visibility, and they want it end-to-end across all these domains. So the first thing we do is we apply a lot of machine learning, to take that immense amount of data, as Anand mentioned, and to translate it into pieces of information, into insights into what's happening. So then we can share that with the user and they can have visibility in terms of what's happening and how well it's happening, are there anomalies, or is there a security threat, and so forth. And then, we can provide them additional feedback. Hey, this is an anomaly, this could be a problem. This is the root cause of the problem, and we believe these are the solutions for it. What do you want to do? Do you want to actuate one of these solutions? And then they get to choose. >> And if you think about it the other way, our goal is really to take the bits and bytes of data in the network, convert that data into information, that information into insights, and those insights lead to outcomes. Now, you want to also make sure that you can augment the power of AI and machine learning on those insights, so you can drill down to exactly what's happening. So, for example, you want to first baseline your network. What's normal for your environment? And when you have deviations, those are anomalies. Then you narrow down exactly what the problem is. And then you want to automate the remediation of that problem. That's the power of AI and ML. >> When you guys, as engineers, when you think about, you know, applying machine intelligence, there's a lot of innovation going on there. Do you home-grow that? Do you open source it? Do you, you know, borrow? Explain the philosophy there, in terms of from a development standpoint. >> Yeah. From a development point of view it's a combination of all of the aspects. We will not reinvent what already exists, but there's always a lot of secret sauce that you need to apply, because everything flows through the network, right? If everything flows through the network, Cisco has a lot of information. It's not just a data lake. We're a data source as well. So taking this disparate source of information, normalizing it, harmonizing it, creating a language, applying the algorithms of AI and machine learning. For example, we do the model learning and training in the cloud. We do inference in the cloud, and you push the rules down. So it's a combination of all of the aspects we talked about. >> Right, and you use whatever cloud tooling is available. >> Yes. >> But it sounds like from a Cisco engineering standpoint, it's how you apply the machine intelligence, for the benefit of your customers and those outcomes-- >> Anand: Yeah. >> Versus us thinking of Cisco as this new AI company, right? >> Anand: Yeah. >> That's not the latter, it's the former, is that fair? >> So one of the things that's really important is, as you know, Cisco's been making, we've been designing our ASICs for many years, with really, really rich telemetry. And as you know, data is key to doing good machine learning and stuff. So we've been designing the ASICs to do real time, wire speed telemetry.
And also to do various sorts of algorithmic work on the ASIC to figure out, hey, what is the real data you want to send up? And then we've optimized the OS, IOS XE, to be able to perform various algorithms there, and also to host containers where you can do more machine learning at the switch, at the router, even in the future, maybe, at the AP. And then with DNA Center, we've been able to gather all of the data together, in a single data lake, where we can perform machine learning on top. >> That's a very important point John mentioned, because you want layer one to layer seven analytics. And that's why the Catalyst 9120 access point we launched has the Cisco RF ASIC, which provides things like CleanAir for spectrum, so we've also got the analytics from the layer one level, all the way to layer seven. >> Yeah, I really like the line actually, from Chuck Robbins yesterday, he said, the network sees everything and Cisco wants to, you know, give you that visibility. Can you walk us through some of the new pieces, either things that people might not have been aware of, or new announcements this week? >> So, as part of the Cisco AI network analytics, we announced three things. The first thing is automated baselining. What that really means is, what's normal for your environment, right? Because what's normal for your environment might not be the same for my environment. Once I understand what that normal baseline is, then, as I have deviations, I can do anomaly detection. I can correlate and aggregate issues. I can really apply AI and machine learning and narrow down the issues that are most critical for you to look at right now. Once I narrow down the exact issue, I go on to the next thing, and that is what we call machine reasoning. And machine reasoning is all about automating the workflow of all you need to do to debug and fix a problem. You want the network to become smarter and smarter the more you use it. And all of this is done through model learning and training in the cloud, inference in the cloud, and pushing the rules down to the devices on-prem. >> So do you see the day, if you think about the roadmap for machine intelligence, do you see the day where the machine will actually do the remediation of that workflow? >> Absolutely. That's where we need to get to. >> When you talk about the automated baselining, I mean there's obviously a security, you know, use case there. Maybe talk about that a little bit, and are there others? Really, it depends on your objective, right? If my objective is to drive more efficiency-- >> Yeah. >> Lower costs, I presume a baseline is where you start, right? So... >> When I say baseline, what I mean really is like, say if I tell you that on this laptop, to connect to the WIFI network, it took you three seconds. And I ask you, is that good or bad? You'll say, I don't know. (laughs) >> What's the baseline for the environment? >> Dave: Yeah. >> What's normal? And next time, if you take eight seconds, and your baseline is three, something is wrong. But, what is wrong? Is it a laptop issue? Is it a version on there, on your device? Is it an application issue? A network issue? An RF issue? I don't know. That's where AI and machine learning will determine exactly what the problem is. And then you use machine reasoning to fix the problem. >> Sorry, this is probably a stupid question, but, how much data do you actually need, and how much time do you need, to actually do a good job in that type of use case?
Well, what happens is you need the right data, okay? And you're not sure where the right data is. (chuckles) >> So originally what we'd do, a lot of our expertise that Cisco has built over 20 years, is figuring out what the right data is. And also, with a lot of the machine learning we've done, as well as machine reasoning, where we put together templates and so forth, we've basically gathered the right data for the customer, and we refine that over time. So over time, like, this venue here, the way this venue's network, what it is, how it operates and so forth, varies with time, and we need to refine that over time, keep it up to date, and so forth. >> And when we talk about data, we're talking about tons of metadata here, right? I mean, do you ever see the day where there'd be more metadata than data? (laughs) >> Yeah-- >> Rhetorical question. (laughs) >> All right, so-- >> It's true though, it's true. >> Right? (laughing) >> We're here in the DevNet zone, lots of people learning about building infrastructure as code, tell us how the developer angle fits into what we've been discussing here. >> Oh, yes. So what happens is, as part of intent-based networking, a key part's the automation, right? And another key part's the assurance. Well, what DevNet's trying to do right now, by working with engineering, with us, and various partners, other customers, is they're putting together, what are the key use cases that people have, and what is code that can help them get that done? And what they're also doing is they're looking through the code, they're improving it, they're trying to instill best practices and stuff, so it's reasonably good code that people can use and start building off of. So we think this can be very valuable for our customers to help move into this more advanced automation, and so forth. >> So, architecture matters, we sort of touched upon it, but I want you to talk more about multi domain architecture. We heard Chuck Robbins, you know, talk about it. What is it, why is it such a big deal, and how does it give Cisco a competitive advantage?
At the same time, you're reading Facebook, and WhatsApp, and YouTube, and other applications. Cisco's SD-WAN domain will talk to Cisco's ACI domain, exchange SLAs and policies, so now you can prioritize that application that you want, which is business-critical. And place the right part, for the best experience for you. Because you want the best experience for that app, no matter where you are. >> Well, and the security implications too, I mean-- >> Anand: Absolutely. >> You're basically busting down the security silos-- >> Yeah. >> Dave: And sort of the intent here, right? >> Yeah. Absolutely. >> Great. All right, last thoughts on the show, San Diego, last year we were Orlando, we were in Barcelona earlier this year, your thoughts about that. >> I think it's been great so far. If you think about it, in the last two years we've filled out the entire portfolio for the new access network. On the Catalyst 9100 access points, with WIFI 6, the switches, next generation campus core, the wireless LAN controller, eyes for unified policy, DNA center for automation, analytics, DNA spaces for business insights, the whole access network has been reinvented, and it's a great time. >> Nice, strong summary, but John, we'll give you the last word. >> What happens here is also, everything Anand says, and we have 5000 engineers who've been doing this over multiple years, and we have a lot more in the pipe. So you're going to see more in six months from now, more in nine months, and so forth. It's a very exciting time. >> Excellent. Guys, it's clear you, like you say, completing the portfolio, positioning for the next wave of access, so congratulations on all the hard work, I know a lot goes into it >> Thank you. >> Thank you very much for coming on The Cube. >> Thank you so much. >> All right, keep right there, Dave Volante with Stu Miniman, Lisa Martin is also in the house. We'll be back with The Cube, Cisco Live 2019, from San Diego. (fast electronic music)

Published Date : Jun 11 2019


Terry Ramos, Cohesity | Cisco Live US 2019


 

>> Voiceover: Live from San Diego, California. It's the CUBE, covering Cisco Live U.S. 2019, brought to you by Cisco, and its Ecosystem Partners. >> Welcome back to San Diego, day two here of Cisco Live 2019, I'm Dave Vellante with my co-host Stu Miniman, Lisa Martin is also here. You're watching the Cube, the leader in live tech coverage. We're here in the DevNet zone, which is a very happenin' place, and all the action is here, the CCIE folks are getting trained up on how to do Infrastructure as Code. Terry Ramos is here, he's the Vice President of Alliances at Cohesity, hot company, achieving escape velocity. Terry, great to have you on. Good to see you again. >> Great to be here, really enjoy it. >> So Cisco is a big partner of yours, perhaps the biggest, I know you don't like to say that, you love all your partners like you love your kids, but clearly a lot of good action going on with you guys. Talk about the partnership, where it started, how it's evolved. >> Sure, so first off a little bit about Cohesity, I think, would be helpful, right. We're in the data management space, really helping customers with their data management, and how do they deal with the problem of mass data fragmentation. Right, if you think about the traditional data silos that enterprises have, we really take and level that out into one platform, our platform, and that really allows customers to get the most out of their data. If we talk about the partnership with Cisco, it's actually a really good partnership. They have been an investor with us, both series C and D rounds. We recently, about three months ago, announced that we were on the price book, so now a customer has the ability to go buy Cisco UCS, HyperFlex, and Cohesity as a cohesive bundle to solve their problems, right, to really help them grow. And then we are working on some new things, like Cisco Solutions Plus Support, where customers have a single place to call, where they get all their support needs addressed. >> That's huge Stu, I remember when the, remember the Vblock when it first came out. It supported, I forget how many VMs, like thousands and thousands of VMs, and I just had one question, how do you back it up? And they went, and they were staring at their feet. So the fact that now you're bundled in to UCS HyperFlex, and that's part of the SKU, or is it a different SKU? >> Terry: Yeah, they're all different SKUs, but it is bundled together. >> Yeah, so it's all integrated? It's a check box item, right, okay? >> What we did was came up with the CVD, the Cisco Validated Design, so customers can get a validated design that says HyperFlex, UCS, Cohesity, here's how to deploy it, here's the best use cases, and they can actually go buy that, then it's a bundled solution. >> Terry, bring us inside a little bit that go to market, because it's one thing to be partnered with CVDs, they're great, but Cisco, as you know, has hundreds of these, if not more. But you know, when you've got access to that Cisco channel out there, people that are transforming data centers, they talked about converged infrastructure, hyperconverged infrastructure, Cisco UCS, tip of the spear for Cisco in that Data Center world, what does it mean to have that whole channel going to help sell it and get paid on that, not just say, oh yeah, yeah, that works? >> Yeah, I think that there's a few things for the channel for us. One is just Cisco's team themselves, right, they don't have a backup solution, so we are really the next gen backup and that's really helped them out.
When we talk about the channel as well, channel partners are looking for a solution that differentiates them from everybody else. So we are a high touch sales team, but we are a hundred percent channel, so working with the channel, giving them new ways actually to go out and sell the solution. >> So let's talk a little bit about backup, data protection, data insurance, you know, sort of, we're trying to parse between, all right, what's the marketing and what's the reality for customers, so we remember the VMware ascendancy days, it caused people to really have to rethink their backup and their data protection. What's driving it now? Why are so many customers kind of reassessing their backup approach and their overall data protection and data management? >> Yeah, I think the best analogy to the last one is data management, right, everybody has thought of data protection, it's just protecting your data. Backup and recovery. What we've done is really looked at it as it's data, you should be able to use your data however you want to. So, yeah, we do data protection on the platform, but then we do test-dev, we do file shares, we do things like that, and we make it this cohesive data management platform, where customers get various use cases, but then they can look at their entire dataset, and that is really the key anymore. And when you talk about the data protection as it was, it was very siloed. You data protect one set of systems, and data protect the next, and data protect the next. They never talked, you couldn't do management across them. >> Dave: Okay so. >> Yeah, yeah Terry. So I love when you're talking about the silos there, back in Barcelona we heard Cisco talking about HyperFlex anywhere, and some of the concerns some of us have is, is multi-cloud the new multi-vendor, and oh my gosh have I just created a whole bunch of silos that are just outside of my data center, like I used to do inside my data center. How's Cohesity helping to solve that solution for people from your... >> Yeah, I think that's an interesting one. Cloud has really come along, right? Everybody thought we'll see what cloud does, it's really come a long way and people are using multi-cloud, so they are doing cloud on prem. Then they're archiving out to public cloud providers, and they're archiving out to other silos where they, or other data services where they have it, and that's really been the approach lately, is you can't just have your data in one location, you're going to move it out to the Cloud, you're going to store it on UCS and HyperFlex, and Cohesity. And again it's how do you use that data, so that's the key is really that. But it is a cloud world for sure, where you're doing On-prem Cloud and Public Cloud. >> So today a lot of that focus, correct me if I am wrong, is infrastructure as a service? >> Yes >> Whether it's AWS, Google, you know, Azure. Do you, have you started to think about, or are customers and partners asking you to think about, protecting all the data in SaaS, is that something that's sort of on the road map, are you hearing that from customers, or is it still early for that? >> No, I think that's actually a great use case, if you talk about, I'll just pick on one, Office 365, right, if you think about what they really provide, it's availability, right, it's not backup, so, if you need to go back a year and get that critical email that you need for whatever reason, that's really not what they're doing. They're making sure it's up and running, and available to the users.
So data protection for SaaS apps is actually a new use case that I think is enormous. >> Okay, so take Office 365 as an example, is that something you can protect today, or is that kind of on the road map? >> That's something we can do today. >> So explain to our audience, why, if I am using Office 365 which is in the Cloud, isn't Microsoft going to take care of that for me, why do I need Cohesity, explain? >> Yeah, I think it really comes down to that, it's that they're really providing availability, yeah, they have some backup services, but even if they do it's not tying into your overall data management solution. And so backing up O-365 gives you access to all that data as well, so you can do algorithms on it, analytics, all those things once it's part of the bigger platform. >> And you probably have more facile recovery, which is, backup is one thing, recovery Stu. >> Is everything. >> There you go. >> It is. (laugh) >> Terry, talk to us about your customers, how about any big, you know, Cisco joint customers that you can talk about, but would love to hear some of the latest from your customers? >> Yeah, I think when we started this partnership awhile ago, what we really focused on was Cohesity on UCS, and we got some traction there. When we went on the price sheet that really changed things, because the customers are now able to buy on a single price sheet. When you talk about the large customers, it's been incredible the last three, four months, the numbers of joint customers that we've been in, and Cisco's been in, and it's enterprise customers, it's the Fortune 500 customers that we're going after. A customer that's here later today, Quantium, is a great use case. They're data analytics, they're AI, and they're providing a lot of information to customers on supply chain. And he's here later today on the CUBE, and it's a really great use case as to what they are doing with it. >> Yeah, we're excited to talk to him so let's do a little prep for him, what, tell us about Quantium, what do you know about them, so give us the bumper sticker so we're ready for the interview. >> Craig will do a much better job of it, but my understanding is they're looking at data, supply chain data, when to get customers in, when they should have product there, propensity to buy, all of those things, and they are doing all that for very large enterprise customers, and then they're using us to data protect all that they do. >> So the reason I asked that is I wanted to double click on that, because you've been stressing, Terry, that it's not just backup. It's this notion of data management. You can do analytics, you can do other things. So when you, let's generalize and let's not make it specific to Quantium, we'll talk to them later, but what specifically are customers doing beyond backup? What kind of analytics are they doing? How is it affecting their business? What kind of outcomes are they trying to drive? >> Yeah, I think it's a great question, we did something about four months ago, where we released the marketplace. So now we've gotten all this data from data protection, file shares, test-dev, cloud as we talked about. So we've got this platform with all this data on top of it, and now partners can come in and write apps on top to do all sorts of things with that data.
So think of being able to spin up a VM in our platform, do some analytics on it, looking at it for any number of things, and then destroy it, right, destroy the copy that's made, not the backup, and then be able to go to the next one, and really get deep into what data is on there, how can I use that data, how can I use that data across various applications? >> Are you seeing, I've sort of always thought the corpus, the backup corpus, could be used in a security context, not, you know, not to compete with Palo Alto Networks, but specifically to assess exposure to things like ransomware. If you see some anomalous behavior, 'cause stuff when it goes bad it goes bad quickly these days, so are you seeing those types of use cases emerging? >> Absolutely, ransomware is actually a really big use case for us right now, where customers are wanting data protection to ensure ransomware's not happening, and if they do get hit, how do we make sure to restart quickly. To give you another example, we have ClamAV, so we can spin up a VM and check it for antivirus. Right in their data protection mode, so not touching the production systems but touching the systems that are already backed up. >> I think you guys recently made an acquisition of Imanis Data, which if I recall correctly was a specialized, sort of data protection company focused on things like NoSQL and maybe Hadoop and so forth, so that's cool. We had those guys on in New York City last fall. And then, so I like that, building out the portfolio. My question is around containers, and all this cloud native stuff going on, we're in the DevNet zone so a lot of DevOps action, data protection for containers, are you, your customers and your partners, are they sort of pushing you in that direction, how are you responding? >> Yeah, I think when you talk about cloud in general, right, there's been a huge amount of VMs that are there, containers are there as well, so yeah, customers are absolutely talking about containers. Our marketplace is a container-based marketplace, so containers are absolutely a big thing for us. >> So what else can you share with us about, you know, conversations that you're having with customers and partners at the show? What are the, what's the narrative like? What are some of the big concerns, maybe, that again either customers or partners have? >> Yeah, I don't want to sound like a broken record but I think the biggest thing we hear always is the data silos, right? It's really breaking down those silos, getting rid of the old legacy silos where you can't use the data how you want to, where you can't run analytics across the data. That is the number one talk track that customers tell us. >> So how does that fit in, you know, the old buzzword of digital transformation, but we always say the difference between a business and a digital business is how they use data. And if you think about how a traditional business looks at its data, well that data's all in silos as you pointed out and there's something in the middle like a business process or a bottling plant or... >> That's right. >> manufacturing facility, but the data's all dispersed in silos, are you seeing people, at least as part of their digital transformation, leveraging you guys to put that data in at least a logical place that they can do those analytics, and maybe you could add some color to that scenario. >> Yeah, for sure, I mean, the data, I'll give you a great example. The CVD we just did with Cisco, the updated one has Edge.
So now when you're talking about plants and branch offices and those things, now we can bring that data back into the central core as well, do analytics on it, and then push it to other offices for updated information. So absolutely, it is a big use case of, it's not just looking at that core central data center. How do you get that data from your other offices, from your retail locations, from your manufacturing plants? >> Final thoughts. San Diego, good venue, you know, great weather. >> Beautiful. >> Cisco Live. >> Yeah. >> Dave: Put a bumper sticker on it. >> I'm impressed with Cisco Live. I haven't been here in several years. It's an impressive show, 26 thousand people, great, beautiful weather, great convention center. Just a great place to be right now. >> All right, and we're bringing it all to you live from the CUBE. Thank you, Terry, for coming on. Dave Villante, for Stu Miniman, Lisa Martin is also here. Day two, Cisco Live, 2019. You're watching the CUBE, we'll be right back. (upbeat techno music)

Published Date : Jun 11 2019


Dee Kumar, CNCF | KubeCon + CloudNativeCon EU 2019


 

>> Live from Barcelona, Spain, it's the Cube, covering KubeCon CloudNativeCon Europe 2019. Brought to you by Red Hat, the Cloud Native Computing Foundation, and Ecosystem Partners. >> Welcome back, this is theCube getting towards the end of two days of live wall-to-wall coverage here at KubeCon, CloudNativeCon 2019 in Barcelona. I'm Stu Miniman, my co-host for this week has been Corey Quinn and happy to have on one of our hosts for this week from the Cloud Native Computing Foundation, Dee Kumar, the Vice President of Marketing, who also helps with developer relations. Dee, welcome back to the program. >> Thanks for having me. >> And thank you for having us. We've been having a great time this week, a lot of buzz, a lot of people and obviously always a lot of enthusiasm at the show here. Thanks so much. Alright, so your team has been super busy. I've talked with a lot of them leading up to the show. >> That's right. >> Anybody that knows any show of this kind of magnitude knows we're usually pretty exhausted before we get on planes and change all the time zones. So, you know, thank you for holding strong. Give us a little bit about, you know, when we talk marketing, you have a big annual report that came out recently from 2018. Give us some of the highlights of some of the things you've been seeing. >> Yeah, sure. Like you mentioned, you're seeing all the excitement and buzz here, so this is our largest open-source developer conference, when compared to the last one we did in Copenhagen. So we have close to 8,000 attendees, so we're really excited about that. And you're absolutely right, with that comes, we're so exhausted, but we really appreciate it. I think the reason the conference has been so successful is primarily just because of the community engagement, which I highlight in the annual report. So it's a combination of our community, which is the developers, the contributors, also our end users, and the third significant portion of our ecosystem is our members. So we recently just announced that CNCF has crossed over 400 members, our end user community is growing, I think Sheryl mentioned this morning in the keynote, we have about 81 end users and this is phenomenal because at the end of the day, end users are companies who are not commercializing Cloud Native, but essentially they're using these products or technologies internally, so they are essentially the guinea pig of Cloud Native technologies and it's really important to learn from them. >> Well Dee, and actually it's interesting, you know, celebrating the five years of Kubernetes here, I happened to talk to a couple of the OGs of the community, Joe Beda, Tim Hockin and Gabe Monroy. And I made a comment to Joe, and I'm like, "Well Google started it, but they brought in the ecosystem and pulled in a lot of other vendors too, it's people." And Gabe said, he's like "yeah, I started Deis and I was one of the people >> Absolutely. >> that joined in." So, we said this community is, it's people more than it's just the collection of the logos on the slides. >> Absolutely, I completely agree. And the other thing I also want to point out is a neutral home, like CNCF, it definitely increases contributions.
And the reason I say that is, having a neutral home helps the community in terms of engaging, and what is really interesting again, going back to the annual report, is Google had a leadership role and most of the contributors were from Google, and now with having a neutral home, I think Google has done a phenomenal job to make sure that the contributors are not just limited to Google. And we're seeing all the other companies participating. We're also seeing a new little graph of independent contributors, who are essentially not associated with any companies and they've been, again, very active with their comments or their engagement overall, in terms of, not just limited to Kubernetes, but all the other CNCF projects. >> So, this is sort of a situation of being a victim of your own success to some extent, but I've mentioned a couple of times today with various other guests, that this could almost be called a conference about Kubernetes and friends, where it feels like that single project casts an awfully long shadow, when you talk to someone who's vaguely familiar with the CNCF, it's "Oh you mean the Kubernetes people?" "Cool, we're on the same page." How do you, I guess from a marketing perspective, begin to move out from under that shadow and become something that is more than a single project foundation? >> Yeah, that's a great question, and the way we are doing that is, I think, Kubernetes has become an economic powerhouse essentially, and what it has done is, it's allowed for other start-ups and other companies to come in and start creating new projects and technologies built around Kubernetes, so essentially, now, you're no longer talking about one single project. It's no longer limited to containers or orchestration, or just micro-services, which was the conversation 3 years ago at KubeCon, and today, what you will see is, it's about talking about the ecosystem. So, the way, from a marketing perspective, and it's actually the reality as well, is Kubernetes has now led to other growing projects, it's actually helped other developers come onboard, so now we are seeing a lot more code, a lot more contributions, and now, CNCF has actually become a home to 35+ projects. So when it was founded, we had about 4 projects, and now it's just grown significantly and I think Kubernetes was the anchor tenant, but now we're just talking about the ecosystem as a whole. >> Dee, I'm wondering if it might be too early for this, but do you have a way of measuring success if I'm someone that has rolled out Kubernetes and some of the associated projects? When I talked to the early Kubernetes people, it's like, Kubernetes itself is just an enabler, and it's what we can do with it and all the pieces that go with it, so I don't know that there's spectrums of how are we doing on digital transformation, and it's a little early to say that there's a trillion dollars of benefit from this environm... but, do you have any measure today, or thoughts as to how we can measure the success of everything that comes out of the... >> Yeah, so I think there was RedMonk, they published a report last year and it looks like they're in the process of updating it, but it is just phenomenal to see, just based on their report, over 50% of Fortune 100 companies have started to use Kubernetes in production, and then I would say, more than, I think, to be accurate, 71% of Fortune 100 companies are using containers, so I think, right there is a big step forward.
Also, if you look at it last year, Kubernetes was the first project to graduate, so one of the ways we also measure, in terms of the success of these projects, is the status that we have within CNCF, and that is completely community driven, so we have a project that's very early stage, it comes in as a sandbox, and then just based on the community growth, it moves onto the next stage, which is incubating, and then, it takes a big deal to graduate, and to actually go to graduation, so we often refer to those stages of the projects to Jeffery Moore, in terms of crossing the chasm. We've talked about that a lot. And again, to answer your question, in terms of how exactly you measure success is just not limited to Kubernetes. We had, this year, a few other projects graduates, we have 6 projects that have graduated within CNCF. >> How do you envision this unfolding in the next 5 years, where you continue to accept projects into the foundation? At some point, you wind up with what will only be described as a sarcastic number of logos on a slide for all of the included projects. How do you effectively get there without having the Cheesecake Factory menu problem of... the short answer is just 'yes', rather than being able to list them off coz no one can hold it all in their head anymore? >> Great question, we're still working on it. We do have a trail map that is a representation of 'where do I get started?', so it's definitely not prescriptive, but it kind of talks about the 10 steps, and it not only talks about it from a technology perspective, but it also talks about processes and people, so we do cover the DevOp, CICD cycle or pipeline. The other thing I would say is, again, we are trying to find other creative ways to move past the logos and landscape, and you're absolutely right, it's now becoming a challenge, but, you know, our members with 400+ members within CNCF. The other way to actually look at it is, back to my earlier point on ecosystems. So one of the areas that we are looking at is, 'okay, now, what next after orchestration?', which is all about Kubernetes is, now I think there's a lot of talks around security, so we're going to be looking at use cases, and also Cloud Native storage is becoming another big theme, so I would say we now have to start thinking more about solutions, solution, the terminology has always existed in the enterprise world for a long time, but it's really interesting to see that come alive on the Cloud Native site. So now we are talking about Kubernetes and then a bunch of other projects. And so now, it's like that whole journey from start to finish, what are the things that I need to be looking at and then, I think we are doing our best with CNCF, which is still a part of a playbook that we're looking to write in terms of how these projects work well together, what are some common use cases or challenges that these projects together can solve. >> So, Dee, we're here at the European show, you think back a few years ago it was a public cloud, there was very much adoption in North America, and starting to proliferate throughout the world. Alibaba is doing well in China and everything. CNCF now does 3 shows a year, you do North America, you do Europe and we've got the one coming up in China. We actually did a segment from our studio previewing the OpenStack Summit, and KubeCon show there, so maybe focus a little bit about Europe. Is there anything about this community and this environment that maybe might surprise people from your annual data? 
>> Yes, so if you look at... we have a tool called DevStart, it's open source, anyone can look at it, it's very simple to use, and based on that, we kind of monitor, what are the other countries that are active or, not just in terms of consuming, but who are actually contributing. So if you look at it, China is number 2, and therefore our strategy is to have a KubeCon in China. And then from a Euro perspective, I think the third leading country in terms of contributions would be Europe, and therefore, we have strategically figured out where do we want to host our KubeCon, and in terms of our overall strategy, we're pretty much anchoring to those 3 regions, which is North America, Europe as well as China. And, the other thing that we are also looking at is, we want to expand our growth in Europe as well, and now we have seen the excitement here at our KubeCon Barcelona, so we are looking to offer some new programs, or, I would say, new event types outside of KubeCon. Kind of you want to look at it as mini KubeCons, and so those would explore more in terms of different cities in Europe, different cities in other emerging markets as well. So that's still in the works. We're really excited to have, I would say 2 new event types that we're exploring, to really get the community to run and drive these events forward as well, outside of their participation in KubeCon because, oftentimes, I hear that a developer would love to be here, but due to other commitments, or, their not able to travel to Europe, so we really want to bring these events local to where they are, so that's essentially a plan for the next 5 years. >> It's fascinating hearing you describe this, because, everything you're saying aligns perfectly with what you'd expect from a typical company looking to wind up, building adoption, building footprints etc., Only, you're a foundation. Your fundamental goal at the end of it is user engagement, of people continuing to participate in the community, it doesn't turn into a 'and now, buy stuff', the only thing you have for sale here that I've noticed is a T-shirt, there's no... Okay, you also have other swag as well, not the important part of the story, I'm curious though, as far as, as you wind up putting all of this together, you have a corporate background yourself, was that a difficult transition to navigate, as far as, getting away from getting people to put money in towards something in the traditional sense, and more towards getting involved in a larger ecosystem and community. >> That was a big transition for me, just having worked on the classic B2B commercial software side, which is my background, and coming in here, I was just blown away with how people are volunteering their time and this is not where they're getting compensated for their time, it's purely based on passion, motivation and, when I've talked to some key community organizers or leaders who have done this for a while, one of the things that has had an impact on me is just the strong core values that the communities exhibit, and I think it's just based on that, the way they take a project and then they form a working group, and then there are special interest groups that get formed, and there is a whole process, actually, under the hood that takes a project from where Kubernetes was a few years ago, and where it is today, and I think it's just amazing to see that it's no longer corporate driven, but it's more how communities have come together, and it's also a great way to be here. Oftentimes... 
gone are the days where you try to set up a meeting, people look forward to being at KubeCon and this is where we actually get to meet face-to-face, so it's truly becoming a networking event as well, and to build these strong relationships. >> It goes even beyond just users, I mean, calling this a user conference would not... it would be doing it a bit of disservice. You have an expo hall full of companies that are more or less, in some cases, sworn enemies from one another, all coexisting peacefully, I have seen no fist-fights in the 2 days that we've been here, and it's fascinating watching a community effort get corporate decision makers and stakeholders involved in this, and it seems that everyone we've spoken to has been having a good time, everyone has been friendly, there's not that thousand yard stare where people are depressed that you see in so many other events, it's just something I've never experienced before. >> You know, that's a really amazing thing that I'm experiencing as well. And also, when we do these talks, we really make it a point to make sure that it's not a vendor pitch, and I'm not being the cop from CNCF policing everyone, and trying to tell them that, 'hey, you can't have a vendor pitch', but what I'm finding is, even vendors, just did a silverless talk with AWS, and he's a great speaker, and when he and I were working on the content, he in fact was, "you know, you're putting on that hat", and he's like, "I don't want to talk about AWS, I really want to make sure that we talk about the underlying technology, focusing on the projects, and then we can always build on top, the commercial aspect of it, and that's the job for the vendor. So, I think it's really great collaboration to see how even vendors put on the hat of saying, 'I'm not here to represent my products, or my thing', and of course they're here to source leads and stuff, but at the end of the day, the underlying common protocol that's already just established without having explicit guidelines saying, 'this is what you need to be following or doing', it's just like an implicit understanding. Everyone is here to promote the community, to work with the community, and again, I think I really want to emphasize on the point that people are very welcoming to this concept of a neutral home, and that really had helped with this implicit understanding of the communities knowing that it's not about a vendor pitch and you really want to think about a project or a technology and how to really use that project, and what are the use cases. >> It's very clear, that message has resonated well. >> Dee, thank you. We've covered a lot of ground, we want to give you the final word, anything else? We've covered the event, we've covered potential little things and the annual report. Any last words you have for us that you want people to take away? >> Not really, I think, like I said, it's the community that's doing the great work. CNCF has been the enabler to bring these communities together. We're also looking at creating a project journey it terms of how these projects come into CNCF, and how CNCF works with the communities, and how the project kind of goes through different stages. Yeah, so there are a lot of great things to come, and looking forward to it. >> Alright, well, Dee, thank you so much for all of the updates, and a big thank you, actually, to the whole CNCF team for all they've done to put this together. We really appreciate the partnership here. For Corey Quinn, I'm Stu Miniman. 
Back to wrap 2 days, live coverage, here at KubeCon, Cloud Native Con 2019, Thanks for watching the Cube. >> Thank you.

Published Date : May 22 2019


Bridget Kromhout, Microsoft | KubeCon + CloudNativeCon EU 2019


 

(upbeat techno music) >> Live from Barcelona Spain, it's theCUBE. Covering KubeCon CloudNativeCon Europe 2019. Brought to you by Red Hat, The Cloud Native Computing Foundation and Ecosystem Partners. >> Welcome back, this is The Cube's coverage of KubeCon CloudNativeCon 2019. I'm Stu Miniman with Corey Quinn as my cohost, even though he says kucon. And joining us on this segment, we're not going to debate how we pronounce certain things, but I will try to make sure that I get Bridget Kromhout correct. She is a Principal Cloud Advocate at Microsoft. Thank you for coming back to The Cube. >> Thank you for having me again. This is fun! >> First of all I do have to say, the bedazzled shirt is quite impressive. We always love the sartorial, ya know, view we get at a show like this because there are some really interesting shirts and there is one guy in a three-piece suit. But ya know-- >> There is, it's the high style, got to have that. >> Oh, absolutely. >> Bringing some class to the joint. >> Wearing a suit is my primary skill. (laughing) >> I will tell you that, yes, they sell this shirt on the Microsoft company store. And yes, it's only available in unisex fitted. Which is to say, much like Alice Goldfuss likes to put it, ladies is gender neutral. So, all of the gentlemen who say, but I have too much dad bod to wear that shirt! I say, well ya know, get your bedazzlers out. You too can make your own shirt. >> I say it's not dad bod, it's a father figure, but I digress. (laughing) >> Exactly! >> Alright, so Bridget, you're doing some speaking at the conference. You've been at this show a few times. Tell us, give us a bit of an overview of what you're doing here and your role at Microsoft these days. >> Absolutely. So, my talk is tomorrow and I think that, I'm going to go with it's a vote of confidence that they put your talk on the last day at 2:00 P.M. instead of the, oh gosh, are they trying to bury it? But no, it's, I have scheduled enough conferences myself that I know that you have to put some stuff on the last day that people want to go to, or they're just not going to come. And my talk is about, and I'm co-presenting with my colleague, Jessica Deen, and we're talking about Helm 3. Which is to say, I think a lot of times it would, with these open-source shows people say, oh, why do you have to have a lot of information about the third release of your, third major release of your project? Why? It's just an iterative release. It is, and yet there are enough significant differences that it's kind of valuable to talk about, at least the end user experience. >> Yeah, so it actually got applause in the keynote, ya know. (Bridget laughing) There are certain shows where people are hootin' and hollerin' for every different compute instance that is released and you look at it a little bit funny. But at the keynote there was a singular moment where it was the removal of Tiller, which Corey and I have been trying to get feedback from the community as to what this all means. >> It seems, from my perspective, it seemed like a very strange thing. It's, we added this, yay! We added this other thing, yay! We're taking this thing and ripping it out and throwing it right into the garbage and the crowd goes nuts. And my two thoughts are first, that probably doesn't feel great if that was the thing you spent a lot of time working on, but secondly, I'm not as steeped in the ecosystem as perhaps I should be and I don't really know what it does.
So, what does it do and why is everyone super happy to consign it to the dustbin of history? >> Right, exactly. So, first of all, I think it's 100% impossible to be an expert on every single vertical in this ecosystem. I mean, look around, KubeCon has 7,000 plus people, about a zillion vendor booths. They're all doing something that sounds slightly overlapping and it's very confusing. So, in the Helm, if you, if people want to look we can say there's a link in the show notes, but there, we can, people can go read on Helm.sh/blog. We have a seven part, I think, blog series about exactly what the history and the current release is about. But the TLDR, the too long didn't follow the link, is that Helm 1 was pretty limited in scope, Helm 2 was certainly more ambitious and it was born out of a collaboration between Google actually and a few other project contributors and Microsoft. And, the Tiller came in with the Google folks and it really served a need at that specific time. And it was, it was a server-side component. And this was an era when role-based access control in Kubernetes was well nigh nonexistent. And so there were a lot of security components that you kind of had to bolt on after the fact. And once we got to, I think it was Kubernetes 1.7 or 1.8 maybe, the security model had matured enough that instead of it being great to have this extra component, it became burdensome to try to work around the extra component. And so I think that's actually a really good example of, it's like you were saying, people get excited about adding things. People sometimes don't get excited about removing things, but I think people are excited about the work that went into removing this particular component because it ends up reducing the complexity in terms of the configuration for anyone who is using this system. >> It felt very spiritually aligned in some ways, with the announcement of OpenTelemetry, where you're taking two projects and combining them into one. >> Absolutely. >> Where it's, oh, thank goodness, one less thing that-- >> Yes! >> I have to think about or deal with. Instead of A or B I just mix them together and hopefully it's a chocolate and peanut butter moment. >> Delicious. >> One of the topics that's been pretty hot in this ecosystem for the last, I'd say two years now, has been service mesh, and talk about some complexity. And I talk to a guy and it's like, which one of these are you using? Oh I'm using all three of them and this is how I use them in my environment. So, there was an announcement spearheaded by Microsoft, the Service Mesh Interface. Give us the high level of what this is. >> So, first of all, the SMI acronym is hilarious to me because I got to tell you, as a nerdy teenager I went to math camp in the summertime, as one did, and it was named SMI. It was like, Summer Mathematics Institute! And I'm like, awesome! Now we have a work project that's named that, happy memories of lots of nerdy math. But my first Unix system that I played with, so, but what's great about that, what's great about that particular project, and you're right that this is very much aligned with, you're an enterprise. You would very much like to do enterprise-y things, like being a bank or being an airline or being an insurance company, and you super don't want to look at the very confusing CNCF Project Map and go, I think we need something in that quadrant. And then set your ships for that direction, and hopefully you'll get to what you need.
And it's especially when you said that, you mentioned that, this, it basically standardizes it, such that whichever projects you want to use, whichever of the N, and we used to joke about JavaScript framework for the week, but I'm pretty sure the Service Mesh Project of the week has outstripped it in terms of like speed, of new projects being released all the time. And like, a lot of end user companies would very much like to start doing something and have it work and if the adorable start-up that had all the stars on GitHub and the two contributors ends up, and I'm not even naming a specific one, I'm just saying like there are many projects out there that are great technically and maybe they don't actually plan on supporting your LTS. And that's fine, but if we end up with this interface such that whatever service mesh, mesh, that's a hard word. Whatever service mesh technology you choose to use, you can be confident that you can move forward and not have a horrible disaster later. >> Right, and I think that's something that a lot of developers when left to our own devices and in my particular device, the devices are pretty crappy. Where it becomes a, I want to get this thing built, and up and running and working, and then when it finally works I do a happy dance. And no one wants to see that, I promise. It becomes a very different story when, okay, how do you maintain this? How do you responsibly keep this running? And it's, well I just got it working, what do you mean maintain it? I'm done, my job is done, I'm going home now. It turns out that when you have a business that isn't being the most clever person in the room, you sort of need to have a longer term plan around that. >> Yeah, absolutely. >> And it's nice to see that level of maturation being absorbed into the ecosystem. >> I think the ecosystem may finally be ready for it. And this is, I feel like, it's easy for us to look at examples of the past, people kind of shake their heads at OpenStack as a cautionary tale or of Sprawl and whatnot. But this is a thriving, which means growing, which means changing, which means very busy ecosystem. But like you're pointing out, if your enterprises are going to adapt some of this technology, they look at it and everyone here was, ya know, eating cupcakes or whatever for the Kubernetes fifth birthday, to an enterprise just 'cause that launched in 2014, June 2014, that sounds kind of new. >> Oh absolutely. >> Like, we're still, we're still running that mainframe that is still producing business value and actually that's fine. I mean, I think this maybe is one of the great things about a company like Microsoft, is we are our customers. Like we also respect the fact that if something works you don't just yolo a new thing out into production to replace it for what reason? What is the business value of replacing it? And I think for this, that's why this, kind of Unix philosophy of the very modular pieces of this ecosystem and we were talking about Helm a little earlier, but there's also, Draft, Brigade, etc. Like the Porter, the CNET spec implementation stuff, and this Cloud Native application bundles, that's a whole mouthful. >> Yes, well no disrespect to your sparkly shirt, but chasing the shiny thing, and this is new and exciting is not necessarily a great thing. >> Right? >> I heard some of the shiny squad that were on the show floor earlier, complaining a little bit about the keynotes, that there haven't been a whole lot of new service and feature announcements. 
(Bridget laughing) And my opinion on that is feature not bug. I, it turns out most of us have jobs that aren't keeping up with every new commit to an open-source project. >> I think what you were talking about before, this idea of, I'm the developer, I yolo'd out this co-load into production, or I yolo'd this out into production. It is definitely production grade as long as everything stays on the happy path, and nothing unexpected happens. And I probably have error handling, and, yay! We had the launch party, we're drinkin' and eatin' and we're happy and we don't really care that somebody is getting paged. And, it's probably burning down. And a lot of human misery is being poured into keeping it working. I like to think that, considering that we're paying attention to our enterprise customers and their needs, they're pretty interested in things that don't just work on day one, but they work on day two and hopefully day 200 and maybe day 2000. And like, that doesn't mean that you ship something once and you're like, okay, we don't have to change it for three years. It's like, no, you ship something, then you keep iterating on it, you keep bug fixing, you keep, sure you want features, but stability is a feature. And customer value is a feature. >> Well, Bridget, I'm glad you brought that up. Last thing I want to ask you, 'cause Microsoft's a great example, as you say, as a customer, if you're an Azure customer, I don't ask you what version of Azure you're running or whether you've done the latest security patch that's in there, because Microsoft takes care of you. Now, your customers that are pulled between their two worlds is, oh, wait, I might have gotten rid of patch Tuesdays, but I still have to worry and maintain that environment. How are they dealing with, kind of, that new world and still have certain things that are going to stay the old way that they have been since the 90's or longer? >> I mean, obviously it's a very broad question and I can really only speak to the Kubernetes space, but I will say that the customers really appreciate, and this goes for all the Cloud providers, when there is something like the dramatic CVE that we had in December for example. It's like, oh, every Kubernetes cluster everywhere is horribly insecure! That's awesome! I guess, your API gateway is also an API welcome mat for everyone who wants to, do terrible things to your clusters. All of the vendors, Microsoft included, had their managed services patched very quickly. They're probably just like your Harple's of the world. If you rolled your own, you are responsible for patching, maintaining, securing your own. And this is, I feel like that's that tension. That's that continuum we always see our customers on. Like, they probably have a data center full of ya know, veece, fear and sadness, and they would very much like to have managed happiness. And that doesn't mean that they can easily pick up everything in the data center, that they have a lease on, and move it instantly. But we can work with them to make sure that, hey, say you want to run some Kubernetes stuff in your data center and you also want to have AKS. Hey, there's this open-source project that we instantiated, that we worked on with other organizations, called Virtual Kubelet. There was actually a talk happening about it I think in the last hour, so people can watch the video of that. But, we have now offered, we now have Virtual Node, our product version of it, in GA. And I think this is kind of that continuum.
It's like, yes of course, your early adopters want the open-source to play with. Your enterprises want it to be open-source so they can make sure that their security team is happy having reviewed it. But, like you're saying, they would very much like to consume a service so they can get to business value. Like they don't necessarily want to take Kelsey's wonderful Kubernetes The Hard Way tutorial and put that in production. It's like, hmm, probably not, not because they can't, these are smart people, they absolutely could do that. But then they spent their innovation tokens, as the McKinley blog post puts it, the, it's like, choose boring technology. It's not wrong. It's not that boring is the goal, it's that you want the exciting to be in the area that is producing value for your organization. Like that's where you want most of your effort to go. And so if you can use well vetted open-source that is cross industry standard, stuff like SMI that is going to help you use everything that you chose, wisely or not so wisely, and integrate it and hopefully not spend a lot of time redeveloping. If you redevelop the same applications you already had, it's like, I don't think at the end of the quarter anybody is getting their VP level up. If you waste time. So, I think that is, like, one of the things that Microsoft is so excited about with this kind of open-source stuff, is that our customers can get to value faster and everyone that we collaborate with in the other clouds and with all of these vendor partners you see on the show floor can keep the ecosystem moving forward. 'Cause I don't know about you but I feel like for a while we were all building different things. I mean, instead of, for example, managed services for something like Kubernetes, I mean, a few jobs ago I was at a start-up where we built our own custom container platform, as one did in 2014. And, we assembled it out of all the LEGOs and we built it out of I think Docker and Packer and Chef and, AWS at the time and, a bunch of janky bash, because like if someone tells you there's no janky bash underneath your home grown platform, they are lying. >> It's always a lie, always a lie. >> They're lying. There's definitely bash in there, they may or may not be checking exit codes. But like, we all were doing that for a while and we were all building container orchestration systems because we didn't have a great industry standard, awesome! We're here at KubeCon. Obviously Kubernetes is a great industry standard, but everybody that wants to chase the shiny is like, but service meshes. If I review talks for, I think I reviewed talks for KubeCon in Copenhagen, and it was like 50 or 60 almost identical service mesh talk proposals. And it's like, and then now, like so that was last year and now everyone is like serverless, and it's like, you know you still have servers. Like you don't administer them, which is great, but you still have them. I think that that hype train is going to keep happening and what we need to do is make sure that we keep it usable for what the customers are trying to accomplish. Does that make sense?

Published Date : May 22 2019


Lukas Heinrich & Ricardo Rocha, CERN | KubeCon + CloudNativeCon EU 2019


 

>> Live from Barcelona, Spain, it's theCUBE, covering KubeCon + CloudNativeCon Europe 2019. Brought to you by Red Hat, the Cloud Native Computing Foundation and Ecosystem Partners. >> Welcome back to theCUBE, here at KubeCon CloudNativeCon 2019 in Barcelona, Spain. I'm Stu Miniman. My co-host is Corey Quinn and we're thrilled to welcome to the program two gentlemen from CERN. Of course, CERN needs no introduction. We're going to talk some science, going to talk some tech. To my right here is Ricardo Rocha, who is the computer engineer, and Lukas Heinrich, who's a physicist. So Lukas, let's start with you, you know, if you were a traditional enterprise, we'd talk about your business, but talk about your projects, your applications. What piece of, you know, fantastic science is your team working on? >> All right, so I work on an experiment that is situated with the Large Hadron Collider, so it's a particle accelerator experiments where we accelerate protons, which are hydrogen nuclei, to a very high energy, so that they almost go with the speed of light. And so, we have a large tunnel underground, 100 meters underground in Geneva, so straddling the border of France and Switzerland. And there, we're accelerating two beams. One is going clockwise. The other one is going counterclockwise, and there, we collide them. And so, I work on an experiment that kind of looks at these collisions and then analyzes this data. >> Lukas, if I can, you know, when you talk to most companies, you talk about scale, you talk about latency, you talk about performance. Those have real-world implications for your world. Do you have anything you could share there? >> Yeah, so, one of the main things that we need to do, so we collide 40 million times a second these protons, and we need to analyze them in real time, because we cannot write out all the collision data to disk because we don't have enough disk space, and so we've essentially run 10,000 core real-time application to analyze this data in real-time and see what collisions are actually most interesting, and then only those get written out to disk, so this is a system that I work on called The Trigger, and yeah, that's pretty dependent on latency. >> All right, Ricardo, luckily you know, your job's easy. We say most people you need to respond, you know, to what the business needs for you and, you know, don't worry, you can't go against the laws of physics. Well, you're working on physics here, and boy those are some hefty requirements there. Talk a little bit about that dynamic and how your team has to deal with some pretty tough challenges. >> Right, so, as Lukas was saying, we have this large amount of data. The machines can generate something around the order of a petabyte a second, and then, thanks to their hardware- and software-level triggers, they will reduce this to something that is 10 gigabytes a second, and that's what my side has to handle. So, it's still a lot of data. We are collecting something like 70 petabytes a year, and we keep adding, so right now we have, the amount of storage available is on the order of 400 petabytes. We're starting to get at a pretty large scale. And then we have to analyze all of this. So we have one big data center at CERN, which is 300,000 cores, or something like this, around that, but that's not enough, so what we've done over the last 15, 20 years, we've created this large distributed computing environment around the world. 
We link to many different institutes and research labs together, and this doubles our capacity. So that's our challenge, is to make sure all the effort that the physicists put into building this large machine, that, in the end, it's not the computing that is breaking the world system. We have to keep up, yup. >> One thing that I always find fascinating is people who are dealing with real problems that push our conception of what scale starts to look like, and when you're talking about things like a petabyte a second, that's beyond the comprehension of what most of us can wind up talking about. One problem that I've seen historically with a number of different infrastructure approaches is it requires a fair level of complexity to go from this problem to this problem to this problem, and you have to wind up working through a bunch of layers of abstraction, and the end result is, and at the end of all of this we can run our blog that gets eight visits a day, and that just doesn't seem to make sense. Whereas what you're talking about, that level of complexity is more than justified. So my question for you is, as you start seeing these things evolve and looking at other best practices and guidance from folks who are doing far less data-intensive applications, are you seeing that a lot of the best practices start to fall down as you're pushing theoretical boundaries of scale? >> Right, that's actually a good point. Like, the physicists are very good at getting things done, and they don't worry that much about the process, as long as in the end it works. But there's always this kind of split between the physicists and the more computing engineer where the practices, we want to establish practices, but at the end of the day, we have a large machine that has to work, so sometimes we skip a couple of steps, but we still need, there's still quite a lot of control on like data quality and the software validation and all of this. But yeah, it's a non-traditional environment in terms of IT, I would say. It's much more fast pacing than most traditional companies. >> You mentioned you had how many cores working on these problems on site? >> So in-house, we have 300,000. >> If you were to do a full migration to the public cloud, you'd almost have to repurpose that many cores just to calculating out the bill at that point. Just, because all the different dimensions, everything winds working on at that scale becomes almost completely non-trivial. I don't often say that I'm not sure public cloud can scale to the level that someone would need to. In your case, that becomes a very real concern. >> Yeah, so that's one debate we are having now, and it's, it has a lot of advantages to have the computing in-house, and also because we pretty much use it 24/7, it's a very different type of workload. So we need a lot of resources 24/7, like even the pricing is kind of calculated differently. But the issue we have now is that the accelerator will go through a major upgrade just in five years' time, where we will increase the amount of data by 100 times. Now we are talking about 70 petabytes a year and we're very soon talking about like exabytes. So the amount of computing we'll need there is just going to explode, so we need all the options. We're looking into GPUs and machine learning to change how we do computing, and we are looking at any kind of additional resources we might get, and there the public cloud will probably play a role. 
>> Could you speak to kind of the dynamic of how something like an upgrade of that, you know, how do you work together? I can't imagine that you just say, "Well, we built it, "whatever we needed and everything, and, you know, "throw it over the wall and make sure it works." >> Right, I mean, so I work a lot on this boundary between computing and physics, and so internally, I think we also go through the same processes as a lot of companies, that we're trying to educate people on the physics side how to go through the best practices, because it's also important. So one thing I stressed also in the keynote is this idea of reproducibility and reusability of scientific software is pretty important, so we teach people to containerize their applications and then make them reusable and stuff like that, yup. >> Anything about that relationship you can expound on? >> Yeah, so like this keynote we had yesterday is a perfect example of how this is improving a lot at CERN. We were actually using data from CMS, which was one of the experiments. Lukas is a physicist in ATLAS, which is like a computing experiment, kind of. I'm in IT, and like all this containerized infrastructure kind of is getting us all together because computing is getting much easier in terms of how to share pieces of software and even infrastructure, and this helps us a lot internally also. >> So what particular about Kubernetes helps your environment? You talk for 15 years that you've been on this distributed systems build-out, so sounds like you were the hipsters when it came to some of these solutions we're working on today. >> That has been like a major change. Lukas mentioned the container part for the software reproducibility, but I have been working on the infrastructure for, I joined CERN as a student and I've been working on the distributed infrastructure for many years, and we basically had to write our own tools, like storage systems, all the batch systems, over the years, and suddenly with this public cloud explosion and open source usage, we can just go and join communities that have requirements sometimes that are higher than ours and we can focus really on the application development. If we base, if we start writing software using Kubernetes, then not only we get this flexibility of choosing different public clouds or different infrastructures, but also we don't have to care so much about the core infrastructure, all the monitoring, log collection, restarting. Kubernetes is very important for us in this respect. We kind of remove a lot of the software we were depending on for many years. >> So these days, as you look at this build-out and what you're looking, not just what you're doing today but what you're looking to build in the upcoming years, are you viewing containers as the fundamental primitive of what empowers this? Are you looking at virtual machines as that primitive? Are you looking at functions? Where exactly do you draw the abstraction layer, as you start building this architecture? >> So, yeah, traditionally we've been using virtual machines for like the last maybe 10 years almost, or, I don't know, eight years at least, and we see containerization happening very quickly, and maybe Lukas can say a bit more about the physics, how this is important on the physics side? >> Yeah, what's been, so currently I think we are looking at containers for the main abstraction because it's also we go through things like functions as a service. 
What's kind of special about scientific applications is that we don't usually just have our entire code base on one software stack, right? It's not like we would deploy Node.js application or Python stack and that's it. And so, sometimes you have a complete mix between C++, Python, Fortran, and all that stuff. So this idea that we can build the entire software stack as we want it is pretty important. So even for functions as a service where, traditionally, you had just a limited choice of runtimes, this becomes important. >> Like, from our side, the virtual machines still had a very complex setup to be able to support all this diversity of software and the containerization, just all the people have to give us is like run this building block and it's kind of a standard interface, so we only have to build the infrastructure to be able to handle these pieces. >> Well, I don't think anyone can dispute that you folks are experts in taking larger things and breaking them down into constituent components thereof. I mean, you are, quite obviously, the leading world experts on that. But was there any challenge to you as you went through that process of, I don't necessarily even want to say modernizing, but in changing your viewpoint of those primitives as you've evolved, have you seen that there were challenges in gaining buy-in throughout the organization? Was there pushback? Was it culturally painful to wind up moving away from the virtual machine approach into a containerized world? >> Right, so yeah, a bit, of course. But traditionally we, like physicists really focus on their end goal. We often say that we don't count how many cores or whatever, we care about events per second, how many events we can process per second. So, it's a kind of more open-minded community maybe than traditional IT, so we don't care so much about which technology we use at some point, as long as the job gets done. So, yeah, there's a bit of traction sometimes, but there's also a push when you can demonstrate that we get a clear benefit, then it's kind of easier to push it. >> What's a little bit special maybe also for particle physics is that it's not only CERN that is the researcher. We are an international collaboration of many, many institutes all around the world that work on the same project, which is just hosted at CERN, and so it's a very flat hierarchy and people do have the freedom to try out things and so it's not like we have a top-down mandate what technology we use. And then somebody tries something out. If it works and people see a value in it then you get adoption from it. >> The collaboration with the data volumes you're talking about as well has got to be intense. I think you're a little bit beyond the, okay, we ran the experiment, we put the data in Dropbox, go ahead and download it, you'll get that in only 18 short years. It seems like there's absolutely a challenge in that. >> That was one of the key points actually in the keynote is that, so a lot of the experiments at CERN have an open data policy where we release our data, and so that's great because we think it's important for open science, but it was always a bit of an issue, like who can actually practically analyze this data for people who don't have a data center? And so one part of the keynote was that we could demonstrate that using Kubernetes and public cloud infrastructure actually becomes possible for people who don't work at CERN to analyze this large-scale scientific data sets. 
>> Yeah, I mean maybe just for our audience, the punchline is rediscovering the Higgs boson in the public cloud. Maybe just give our audience a little bit of taste of that. >> Right, yeah, so basically what we did is, so the Higgs boson was discovered in 2012 by both ATLAS and CMS, and a part of that data, we used open data from CMS and part of that data has now been released publicly, and basically this was a 70-terabyte data set which we, thanks to our Google Cloud partners, could put onto public cloud infrastructure and then we analyzed it on a large-scale Kubernetes cluster, and-- >> The main challenge there was that, like, we publish it and we say you probably need a month to process it, but we had like 20 minutes on the keynote, so we kind of needed a bit larger infrastructure than usual to run it down to five minutes or less. In the end, it all worked out, but that was a bit of a challenge. >> How are you approaching, I guess, making this more accessible to more people? By which I mean, not just other research institutions scattered around the world, but students, individual students, sometimes in emerging economies, where they don't have access to the kinds of resources that many of us take for granted, particularly work for a prestigious research institutions? What are you doing to make this more accessible to high school kids, for example, folks who are just dipping their toes into a world they find fascinating? >> We have entire programs, outreach programs that go to high schools. I've been doing this when I was a student in Germany. We would go to high schools and we would host workshops and people would analyze a lot of this data themselves on their computers. So we would come with a USB stick that have data on them, and they could analyze it. And so part of also the open data strategy from ATLAS is to use that open data for educational purposes. And then there are also programs in emerging countries. >> Lukas and Ricardo, really appreciate you sharing the open data, open science mission that you have with our audience. Thank you so much for joining us. >> Thank you. >> Thank you. >> All right, for Corey Quinn, I'm Stu Miniman. We're in day two of two days live coverage here at KubeCon + CloudNativeCon 2019. Thank you for watching theCUBE. (upbeat music)
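A rough back-of-envelope pass over the round numbers quoted in this conversation (roughly a petabyte a second off the detectors, about 10 gigabytes a second after the triggers, around 70 petabytes recorded a year, and a keynote demo that squeezed "probably a month" of processing into a 20-minute slot). This is only a sketch in Python to make the reduction and scale-up factors concrete; the figures are the approximate ones mentioned above, not official CERN accounting.

```python
# Back-of-envelope arithmetic using the approximate figures from the interview.
PB = 1e15   # bytes per petabyte (decimal)
GB = 1e9    # bytes per gigabyte

raw_rate = 1 * PB             # ~1 PB/s generated at the detectors
post_trigger_rate = 10 * GB   # ~10 GB/s left after the hardware/software triggers
collected_per_year = 70 * PB  # ~70 PB recorded per year
total_storage = 400 * PB      # ~400 PB of storage available overall

# Reduction achieved by the trigger systems before anything is written out.
print(f"trigger reduction: ~{raw_rate / post_trigger_rate:,.0f}x")  # ~100,000x

# How long the post-trigger stream would have to run at full rate to fill a
# year's 70 PB; the accelerator does not deliver collisions around the clock,
# so this is far less than a calendar year.
full_rate_days = collected_per_year / post_trigger_rate / 86400
print(f"equivalent full-rate recording: ~{full_rate_days:.0f} days")

# Years of data the current 400 PB corresponds to at ~70 PB/year.
print(f"storage holds ~{total_storage / collected_per_year:.1f} years of data")

# Keynote demo: 'probably a month' of processing squeezed into ~20 minutes
# implies roughly this much extra parallel capacity, ignoring all overheads.
month_minutes = 30 * 24 * 60
print(f"speed-up for a 20-minute run: ~{month_minutes / 20:,.0f}x")
```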

Published Date : May 22 2019

Dejan Bosanac & Josh Berkus, Red Hat | KubeCon + CloudNativeCon EU 2019


 

>> Live from Barcelona, Spain, it's theCUBE. Covering KubeCon, CloudNativeCon, Europe 2019. Brought to you by Red Hat, the Cloud Native Computing Foundation and Ecosystem Partners. >> Welcome back to theCUBE here in Barcelona, Spain. This is KubeCon, CloudNativeCon 2019. I'm Stu Miniman, my co-host for two days of wall-to-wall coverage is Corey Quinn. Joining us on the program we have two gentlemen from Red Hat. To my right is Josh Berkus, who's the Kubernetes community manager, and sitting to his right is Dejan Bosanac, who's a senior software engineer and, as I said, both with Red Hat. Gentlemen, thanks so much for joining us. >> Well thank you. >> Thank you. >> All right. So Josh, a community manager in the Kubernetes space, so what brings you here to KubeCon, and maybe explain to us and give the clarification on the shirt, so that we can be educated to properly call this city and its residents what they should be called. >> Oh, so many things, so. I mean obviously, I'm here because the community is here, right? A very large community. We had a contributor summit on Monday. They had a couple hundred people, three hundred people at it. The important thing, when we talk about community in Kubernetes, there's the general ecosystem community and then there's the contributor community. >> Right. >> And the latter is more what I'm concerned with. Because even the contributor community by itself is quite large. As for the t-shirt, speaking of community, so we like to actually do special t-shirts for the contributor summits. I designed this one. Despite my current career, my academic background is actually in art. This is obviously a Moreau pastiche, but one of the things I actually learned by doing this was, I did a different version first, it said Barca on it, and then one of the folks from here is like, "Well that's the football team," and that when they abbreviate the city, it's actually Barna. >> It was news to me. I am today years old when I found that out. >> Yes. >> So thank you very much for that. >> Yes, that was an additional four hours of drawing for me. >> All right. Go ahead Corey. >> So a while back, I had a tweet that went out that I knew was going to be taken in two different ways, and you were one of the first people to come back on that in the second way. Everyone first thought I was being a snarky jerk. >> Yeah. Which, let's be honest, fair. >> Yeah. >> But what I said was that in five years no one is going to care about Kubernetes. >> Right. >> And your response was yeah, that's a victory condition. If you don't have to think or care about this, >> Yeah. >> that means it won >> Right. >> in a similar way that a lot of things have slipped >> Yeah. >> beneath the level of awareness. And I'm curious as to what both of you think about the idea of Kubernetes not, I'm not saying it loses in the marketplace, I don't think that that is likely at all, but at what point do people not have to think about it any more, and what does that future look like? >> Yeah, I mean one of our colleagues noticed yesterday that this conference particularly is not about Kubernetes any more. So, you hear more about all the ecosystem. A lot of projects around it. So it certainly grew up beyond Kubernetes. And so you see all the talks about service meshes and things we try to do for edge computing and things like that. So it's not just Kubernetes any more. It's a whole ecosystem of products and projects around it. I think it's a big success. >> Yeah.
And I mean I'll say, taking sort of a longer view, I can remember compiling my own Linux kernels. I can remember doing it on a weekly basis. Because you honestly had to, right? If you wanted certain devices to work you had to actually compile your own kernel. Now on my various servers and stuff that I do for testing and demos and development, I can't even tell you what kernel version I'm running. Because I don't care, right? And for core Kubernetes, like I said, if we get to that point of not needing to care about it, of only needing to care about it when we're developing something, then that looks like victory to me. >> Josh, is there anything in the core contributor team where they have milestones and say, "Hey, by the time we get to 2.0 or 3.0, you know, Kubernetes is invisible?" >> Yeah, well it's spoken of more in terms of GA and API stability >> Yeah. >> because really, if you're going to back off and you're going to say, "What is Kubernetes?" Well, Kubernetes is, what the definition of Kubernetes is, is a bag of APIs. A very large bag of APIs, we do a lot of APIs, but a bag of APIs, and the less those APIs change in the future, the closer we're getting to maturity and stability, right? Because we want people building new stuff around the APIs, not modifying the APIs themselves. >> Yeah well, to that end, last night, here at Barcelona time, a blog post came out from AWS where they set out a formalized deprecation strategy for their EKS product to keep up with the releases of Kubernetes. Now, AWS generally does not turn things off ever, which means that 500 years from now, two trunkless legs of stone in a desert will be balanced by an ELB classic. And we're never going to be rid of anything they've ever built, but if nothing else, you've impacted them to formalize a deprecation strategy that follows upstream, which is awesome. It's great to start seeing a world where you don't have to support older versions of things as your user base and your community informs you. It's nice to see providers breaking from their model to respond to what the community has done. And I can't imagine, for you, that's anything other than an unqualified success. >> All right, so, Dejan. >> Yeah? >> When we talk about dispersion of technology, you know, there are few issues that get people as excited these days as edge computing. So, tell us a little bit about what you're doing and the community's doing in the IoT and edge space. >> Yeah. So, we noticed that more and more people want to try their workloads outside of the centralized data centers, so the big term for the last year was the hybrid cloud, but it's not just hybrid cloud. People coming also from the IoT user space want to, you know, containerize their workloads, want to put the processing closer and closer to the devices that are actually producing and consuming those data, near the users. And there's a lot of use cases which should be tackled in that way. And as you all said previously, like, Kubernetes won developers' hearts and minds, so APIs are stable, everybody's using them, it will be supported for decades, so it's natural to try to bring all these tools and all these platforms that are already available to developers to try to tackle these new challenges. So that's why last year we formed the Kubernetes IoT Edge working group, trying to, you know, start with simple questions, because when people come to you and say edge, everybody thinks something different.
For somebody it's an IoT gateway, for somebody it's a full blown, you know, Kubernetes cluster at some telco provider. So that's what we're trying to figure out, all these things, and try to form a community, because as we saw previously for the IoT user space, complex problems like this are never basically solved by a single company. You need open source, you need open standards, you need a community around it so that people can pick and choose and build a solution to fit their needs. >> Yes, so as you said, right, there is that spectrum of offerings, everything from that telco down to, you know, is this going to be something sitting on a tower somewhere, or, you know, the vast proliferation of IoT, which, you know, we've spent lots of time on. So are you looking at all of these, or are you saying, "Okay, we already have a telco working group over here, and, you know, we're going to work on the IoT thing." You know, where are we? What are the answers and starting point for people today? >> Yes, so we have a single working group for now, and we try to bring in the people that are interested in this topic in general. So, one of the guys said, like, "Edge is everything that's not running in the central cloud," right, so we have a couple of interesting things happening at the moment, so the Futurewei guys have the KubeEdge project and they're presenting it at this conference. We have a couple of sessions on that. That's basically trying to tackle this device edge kind of space, the how to, you know, put Kubernetes workloads on constrained devices and over a constrained network kind of problem. And we have people coming from Rancher, which provides their own, again, resource-constrained Kubernetes deployments, and we see a lot of developments here, but it's still, I think, early days, and that's why we have a working group, which is somewhere we can build our community and work over time to shape things and find the appropriate reference architectural blueprints that people can follow in the future. >> Yeah, I think that there's been an awful lot of focus here on this show on Kubernetes, but it is KubeCon plus CloudNativeCon. I'm curious as far as what you're seeing with these conversations, something you alluded to as well is that there's now a bunch of other services that are factored in. I mean, it feels almost like this show has become, just from conversations, Kubernetes and friends; but the level of attention that's being paid to those friends is dramatically increasing. And I'm curious as to how you're seeing this evolve in the community particularly, but also with customers, and what you're seeing as this entire ecosystem continues to evolve. >> Yeah. Well, I mean part of it is out of necessity, right, as Kubernetes moved from dev and experimental into production, you don't run Kubernetes by itself, right? And some of the things with Kubernetes you can run with existing tooling, like cloud providers, that sort of thing. But other things you discover that you want new tools. For example, one of the areas that we saw expansion in to start with was the area of monitoring and telemetry, because it turns out that monitoring and telemetry that you build for a hundred servers does not work with twenty thousand pods. It's just a volume problem there.
And so then we had new projects like Heapster and Prometheus and the new products from other companies like Sysdig and that sort of thing, just looking at that space, right, in order to have that part of the tooling, because you can't be in production without monitoring and telemetry. One of my personal areas that I'm involved in is storage, right, and so we've had the Rook project here, in pretty much a year and a half actually, go from being open sourced to now being a serious alternative solution if you don't want to be dependent on cloud provider storage. >> Please tell me you're giving that an award called Rookie of the Year. [laughs] >> I do not apologize for that one. One thing that does resonate with me though is the idea that you've taken, strategically, that instead of building all of this functionality into Kubernetes and turning it into, "You'll do it this way or you're going to be off in the wilderness somewhere," it's decoupled. I love that pattern. Was that always the design from day one, or was this a contentious decision historically? >> No, it wasn't. Kubernetes started out as kind of a monolith, right, because it was like the open source version of Borg lite, right, which was built as a monolith within Google 'cause there weren't options. They had to work with Google's stuff, right, if you're looking at Borg, right, and so they're not worried about supporting all this other stuff. But from day one of Kubernetes being a project, it was a multi-company project, right, and if you look at, you know, OpenShift and OpenShift's users and OpenShift's stack, it's different from what Google uses for GKE. And, honestly, the easiest way to support sort of multiple stack layers is to decouple everything, right? And that's not how we started out, right? Cloud providers, like, one of our problems was cloud providers in-tree, storage in-tree, networking. Networking was the only thing that was separate from day one. You know, but all this stuff was in-tree, and it didn't take very long for that to get unmaintainable, right? >> Well, I mean I think one of the, I've been following you and running into you in the conference circuit for years, and one of the talks I gave for a year and a half was Heresy in the Church of Docker, where we don't know what your problem is but Docker, Docker, Docker, Docker, Docker, and I gave a list of twelve or thirteen different reasons and things that were not being handled by Docker. And now, I've sunset that talk largely because 1) no one talks about Docker and it feels a bit like punching down, but more importantly, Kubernetes has largely solved almost all of those. There are still a few exceptions here and there 'cause it turns out, "Sorry, nothing is perfect and we've not yet found containerization utopia. Surprise!" But it's really come a very long way in a very short period of time. >> Yeah, what a lot of it is is decoupling, 'cause the thing is that you can take it two ways, right? One is that potentially, as an ecosystem, Kubernetes solves almost anything. Some things like IoT are, you know, a lot more alpha state than others. And then if you actually look at just core Kubernetes, like what you would get off the kubernetes/kubernetes repo if you compiled it yourself, Kubernetes solves almost nothing. Like by itself, you can't do much with it other than test your patches. >> Right, in isolation, the big problem it solves is pretty much limited to "I want a buzzword on my resume." >> Yes. >> There needs to be more to it than that.
So, and I think that's true in general, 'cause like, you know, if you look at why did Linux become the default server OS, right? It became the default server OS because it was adaptable, right, because you would compile in your own stuff, because POSIX and the kernel module APIs make it easy for people to build their own stuff without needing to commit to the Linux kernel. >> Alright, so I'd like to get both your thoughts just on the storage piece there, because, you know, storage is a complex, highly fragmented ecosystem out there. Red Hat has many options out there, and, boy, when I saw the keynote this morning, I thought he did a really good job of laying out the options, but, boy, it's a complex, multi-fragmented stack with a lot of different options out there, and edge computing, the storage industry as a whole, without even Kubernetes, is trying to figure out how that works, so Dejan, maybe we start with you, and yeah. >> So yeah. I don't have any particular answers for you today in that area, but what I want to emphasize, what Josh said earlier, is that these APIs and this modularization that is done in Kubernetes, it's one of the big important things for edge as well, because people come there and say, "We should do this," and, should we invent things, or should we just try to reuse what's basically a very good, very well designed system? So that's a starting point, like, why do we want to start using Kubernetes for edge computing? But for the storage questions, I would hand over to Josh. >> So, your problem with storage is not anything to do with Kubernetes in particular, but the fact that, like you said, the storage sort of stack ecosystem is a mess. It's more that everything is vendor specific. Things don't work even semantically the same, let alone the same by API. And so, all we can do in the world of Kubernetes is make enabling storage for Kubernetes not any harder than it would have been to do it in some other system. >> Right, and look, the storage industry'd say, "No no. It's not a mess. It's just that there's a proliferation of applications out there. There is not one solution to fit them all, and that's why we have block, we have file, we have object, we have all these various ways of doing things." So you're saying storage is hard, but storage with Kubernetes is no harder today? We're getting to that point. >> I would say it's a little harder today. And we're working on making it not any harder. >> All right, excellent. Well, Josh and Dejan, thank you so much for the updates. >> Thank you guys. >> Always appreciative of the community contributions. Look forward to hearing more from, of course, the contributors always, and as the Edge and IoT groups mature, look forward to hearing updates in the future. Thank you. >> Cool. >> Thank you guys. >> Alright, for Corey Quinn, I'm Stu Miniman, back with lots more coverage here from KubeCon CloudNativeCon 2019 in Barcelona, Spain. Thanks for watching theCUBE.

Published Date : May 22 2019

Doug Davis, IBM | KubeCon + CloudNativeCon EU 2019


 

>> Live from Barcelona, Spain, it's theCUBE, covering KubeCon + CloudNativeCon Europe 2019. Brought to you by Red Hat, the Cloud Native Computing Foundation and Ecosystem Partners. >> Welcome back to theCUBE's live coverage of KubeCon + CloudNativeCon 2019. I'm Stu Miniman, my co-host is Corey Quinn, and happy to welcome back to the program Doug Davis, who's a senior technical staff member and PM of Knative, and happens to be employed by IBM. Thanks so much for joining. >> Thanks for inviting me. >> Alright. So, Corey, I got really excited when I saw this, because serverless is something that, you know, he's been doing for a while. I've been poking in, trying to understand all the pieces, I've done Serverlessconf a couple of times, and, you know, I guess lay out for our audience a little bit, you know, Knative. You know, I look at it as kind of a bridging solution, but, you know, we're talking, it's not the, you know, containers or serverless, you know, we understand that world, they're spectrums, and there's overlap. So maybe as a setup, you know, what is the Serverless Working Group's, you know, charter, right? >> So the Serverless Working Group is a CNCF working group. It was originally started back in mid-2017 by the Technical Oversight Committee in the CNCF. They basically wanted to know, what is serverless all about, is this a new technology, is it something we should get involved with, stuff like that. So they started up the Serverless Working Group, and our main mission was just doing some investigation. And so the output of this working group was a white paper, basically describing serverless, how it compares with the other 'aaS'es out there, what are the good use cases for when to use it, going through common architectures, basically just explaining what the heck is going on in that space. And then we also produced a landscape document, basically laying out what's out there from a proprietary perspective as well as an open source perspective. And then the third piece was, at the tail end of the white paper, a set of recommendations for the TOC or the CNCF in general: what should they do next? And it basically came down to three different things. One was education. We want to educate the community on what serverless is, when it's appropriate, stuff like that. Two was, wait, I'm sorry, I'm getting the recommendations mixed up in my head: what other serverless projects could we pull into the CNCF, you know, getting them encouraged to join to grow the community. And, third, what should we do around interoperability? Because obviously, when it comes to open source, standards, stuff like that, we want interoperability, portability, stuff like that. And one of the low-hanging fruit they identified was, well, serverless seems to be all about events, so there's something in the eventing space we can do, and we recognized, well, if we could help the processing of events as it moves from point A to point B, that might help people in terms of middleware, in terms of routing of events, filtering events, stuff like that. And so that's how the CloudEvents project got started, right? And so that's where most of the Serverless Working Group members are nowadays, the CloudEvents project, and they're basically defining a specification around cloud events, and you can kind of think of it as defining metadata to add to your current events, because we're not going to tell you, oh, here's yet another one-size-fits-all cloud event format, right?
It's: take your current events, sprinkle a little extra metadata in there just to help routing, and that's really what it's all about. >> One of the first things people say about serverless is, quoted directly from the cover of Missing the Point magazine, "serverless runs on servers." Wonderful, thank you for your valuable contribution, go away. Slightly less naive is, I think, an approach I've seen a couple of times so far at this conference when talking to people, that they think of it in terms of functions as a service, of being able to take arbitrary code and run it. I have a wristwatch I can run arbitrary code on; that's not really the point. It's, I think you're right, it's talking more about the event model and what that unlocks as your application more or less starts to become more self-aware. Are you finding that acceptance of that point is taking time to take root? >> Yeah, I think what's interesting is, when we first started looking at serverless, I think a lot of people did think serverless equals functions as a service, and that's all it was. I think what we're finding now is more people are more open to the idea of, as I think you're alluding to, merging of these worlds, because if we look at the functionality serverless offers, things like event-based, which really only means the messages coming in just happen to look like events, okay, fine. Messages come in, you auto-scale based upon, you know, load and stuff like that, scale down to zero is one of the key features. All these other things, all these features, why should you limit those to serverless? Why not a PaaS platform? Why not containers as a service? Why would you want those just for one little 'aaS' column? And so my goal with things like Knative, and I'm glad you mentioned it, is because I think Knative does try to span those, and I'm hoping it kind of merges them all together and says, look, I don't care what you call it, use this piece of technology because it does what you need to do. If you want to think of it as a PaaS, go for it, I don't care. This guy over here, he wants to think of it as a FaaS, great. It's the same piece of technology. Does the feature do what you need, yes or no? Ignore the terminology around it more than anything else.
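Doug's description a little earlier, of CloudEvents as sprinkling a little routing metadata onto events you already have, can be made concrete with a minimal sketch. The attribute names below (specversion, id, source, type, time, datacontenttype) follow the CloudEvents specification; the exact spec version string depends on the release you target, and the producer, event type and payload here are invented purely for illustration.

```python
import json
import uuid
from datetime import datetime, timezone

# Hypothetical application event: the payload stays whatever it already was.
payload = {"orderId": "1234", "total": 42.50}

# CloudEvents-style context attributes wrapped around it; values are made up.
event = {
    "specversion": "1.0",                     # depends on the spec release used
    "id": str(uuid.uuid4()),                  # unique per event
    "source": "/example/orders-service",      # who emitted it (placeholder)
    "type": "com.example.order.created",      # what happened (placeholder)
    "time": datetime.now(timezone.utc).isoformat(),
    "datacontenttype": "application/json",
    "data": payload,
}

def route(evt):
    # Middleware can route and filter on the envelope alone,
    # without ever understanding the payload.
    if evt["type"].startswith("com.example.order."):
        return "orders-queue"
    return "default-queue"

print(route(event))
print(json.dumps(event, indent=2))
```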
Foundry is Simo, doctor and stuff, but gives you all the benefits of communities. But the important thing is if for some reason you need to go around K native because it's a little too simplified or opinionated, you could still go around it to get to the complicated stuff. And it's not like you're leaving that a different world or you're entering a different world because it's the same infrastructure they could. This stuff that you deploy on K native can integrate very nicely with the stuff you deploy through vanilla communities if you have to. So it is really nice emerging these two worlds, and I'm I'm really excited by that. >> One thing that I found always strange about server list is a first. It was defined by what it's not and then quickly came to be defined almost by its constraints. If you take a look at public cloud offerings around this, most notably a ws land other there, many others it comes down well. You can only run it for experience, time or on Lee runs in certain run times, or it's something the cold starts become a problem. I think that taking a viewpoint from that perspective artificially hobbles what this might wind up on locking down the road just because these constraints move. And right now it might be a bit of a toy. I don't think it will be as it because it needs to become more capable. The big value proposition that I keep hearing around server listen I've mostly bought into has been that it's about business logic and solving the things that Air corps to your business and not even having to think about infrastructure. Where do you stand on that >> viewpoint? I completely agree. I think a lot of the limitations you see today are completely artificial I kind of understand why they're there, because the way things have progressed, But again, it's one reason I excited like a native is because a lot of those limitations aren't there. Now. Kay native doesn't have its own set of limitations. And personally, I do want to try to remove those. Like I said, I would love it if K native, aside from the service features it offers up, became these simplified incriminate his experience. So if you think about what you could do with Coronet is right, you can deploy a pod and they can run forever until the system decides to crash. For some reason, right, why not do that with a native and you can't stay with a native? Technically, I have demos that I've been running here where I set the men scale the one it lives forever, and teenager doesn't care right? And so deploying an application through K native communities. I don't care that it's the same thing to me. And so, yes, I do want to merge in those two worlds. I wantto lower those constraints as long as you keep it a simplified model and support the eighty to ninety percent of those use cases that it's actually meant to address. Leave the hard stuff for going around it a little. >> Alright, So, Doug, you know, it's often times, you know, we get caught in this bubble of arguing over, you know? You know what we call it, how the different pieces are. Yesterday you had a practitioner Summit four server list. So what? I want to hear his You know, whats the practitioners of you put What are they excited about? What are they using today and what are the things that they're asking for? Help it become, you know, Maur were usable and useful for them in the future. >> So in full disclosure, we actually kind of a quiet audience, so they weren't very vocal. 
But what little I did hear is they seemed very excited by Knative, and I think a lot of it was because we were just talking about sort of the merging of the worlds, because I do think there is still some confusion around, as you said, when to use one versus the other. And I think Knative is helping to bring those together. And I did hear some excitement around that in terms of what people actually expect from us going into the future. To be honest, they didn't actually say a whole lot there. I have my own personal opinion, and a lot of it is what I already stated in terms of merging: stop having me pick a technology or pick a terminology, right? Let me just pick the technology that gets my job done, and hopefully that one will solve a lot of my needs. But for the most part, I think it was really more about Knative than anything else yesterday. >> I think, like Linux before it, any technology, at some point, you saw this with virtualization, with cloud, with containers, with Kubernetes, and now we're starting to see it with serverless, where some of its most vocal proponents are also the most obnoxious, in that they're looking at this from a perspective of, what's your problem? I'm not even going to listen to the answer, the solution is, insert favorite technology here. So to that end, today, what workloads are not appropriate for serverless, in your mind? >> Um, so this is hardly the answer, because I have the IBM army running through my head, but what's interesting is, I do hear people talk about, serverless is good for this and not this, or Knative is good for this and not this. And I hear those things, and I'm not sure I actually buy it, right? I actually think that the only limitations that I've seen, in terms of what you should not run on something like Knative or any of the platforms, is whatever that platform actually binds you to. So, for example, on AWS they may have time limits in terms of how long you can run. If that's a problem for you, don't use it. To me, that's not an artifact of serverless, that's an artifact of that particular choice of how they implement serverless. With Knative, they don't have that problem, you can let it run forever if you want. So in terms of what workloads are good or bad, I honestly don't have a good answer for that, because I don't necessarily buy some of the stories I'm hearing. I personally think, try to run everything you can through something like Knative, and then when it fails, go someplace else. It's the same story I had when containers first came around; people would say, you know, when do I use VMs versus containers? My go-to answer was, always try containers first, your life will be a whole lot easier, and when it doesn't work, then look at the other things. Because I don't want to try to pigeonhole something like serverless or Knative and say, oh, don't even think about it for these things, because it may actually work just fine for you, right? I don't want people to believe negative hype, if that makes sense. >> And that's very fair. I tend to see most of the constraints around this as being implementation details of specific providers, and that will dictate answers to that question. I don't want to sound like I'm coming after you; that's a very thoughtful and measured response. >> Thank you, the usual response back. >> I'll give you the tough one.
The critique I had in Seattle when I looked at Knative is there's a lot of serverless options out there, but when I talk to users, the number one out there is AWS Lambda, and number two is probably Azure Functions, and as of Seattle, neither of those was fully integrated. Since then, I talked to a little startup called, I believe it's TriggerMesh, that has made some connections between Lambda and Knative. And there was an announcement a couple of weeks ago, KEDA, that's Azure and some kind of future path to get to Knative. So it feels like it's a maturity thing. And, you know, what can you tell us about, you know, the big cloud guys officially? Google's involved, IBM, Red Hat, and, you know, Oracle are involved in Knative. So where are those big cloud players, right? >> So from my perspective, what I think Knative has going for it over the others is, one, a lot of the other guys do run on Kubernetes, but I feel like they're sort of on Kubernetes as well as everything else, like some of them can run on Kubernetes, Docker, anything else, and so they're not necessarily tightly integrated and leveraging the Kubernetes features the way Knative is doing, and I think that's a little bit unique right there. But the other thing that I think Knative has going for it is the community around it. I think people were noticing, as you said, there's a lot of other players out there and it's hard to choose, and what I think Google did a great job of is sort of bringing the community together and saying, look, can we stop bickering and develop a sort of common infrastructure, like Kubernetes, that we can all then base our serverless platforms on, and I think that rallying cry to bring the community together across a common base is something a little bit unique for Knative when you compare it with the others. I think that's a big draw for people, at least from my perspective. I know it is from IBM's as well, because community is a big thing for us, obviously. >> Okay, so will there be a bridge to those other cloud players soon, is that on the roadmap? >> For that, within Knative itself, yeah, I'm not sure I can answer that one, because I'm not sure I've heard a lot of talk about bridging per se. I know that when you talk about things like getting events from other platforms and stuff, obviously, through the eventing side of Knative, we do. But from a serving perspective, I'm not sure, to be honest. >> All right. Well, Doug Davis, we're done for this one, really appreciate all the updates there, and I definitely look forward to seeing the progress that the Serverless Working Group continues to make, so thank you so much. >> Thank you for having me. >> Alright, for Corey Quinn, I'm Stu Miniman, and we'll be back with more coverage here on theCUBE. Thanks for watching.
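For reference, the "set the min scale to one and it lives forever" demo Doug mentions above maps to a per-revision autoscaling annotation on a Knative Service. The sketch below just builds that manifest as a plain Python dict to show the shape; the apiVersion and the autoscaling.knative.dev/minScale annotation key are written from memory of Knative Serving releases around this time and should be checked against whatever version is actually installed, and the service name and image are placeholders.

```python
import json

# Sketch of a Knative Service whose revisions keep at least one pod warm,
# overriding the default scale-to-zero behaviour. Field names are from memory
# of Knative Serving around this era and may differ by release.
service = {
    "apiVersion": "serving.knative.dev/v1",
    "kind": "Service",
    "metadata": {"name": "hello"},  # placeholder name
    "spec": {
        "template": {
            "metadata": {
                "annotations": {
                    # minScale "1" => never scale below one replica
                    "autoscaling.knative.dev/minScale": "1",
                },
            },
            "spec": {
                "containers": [
                    {"image": "example.com/hello:latest"},  # placeholder image
                ],
            },
        },
    },
}

# kubectl also accepts JSON manifests, so this could be saved to service.json
# and applied with: kubectl apply -f service.json
print(json.dumps(service, indent=2))
```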

Published Date : May 22 2019

Morgan McLean, Google Cloud Platform & Ben Sigelman, LightStep | KubeCon + CloudNativeCon EU 2019


 

>> Live from Barcelona, Spain it's theCUBE, covering KubeCon, CloudNativeCon, Europe 2019. Brought to you by Red Hat, the Cloud Native Computing Foundation and Ecosystem Partners. >> Welcome back. This is theCUBE's coverage of KubeCon, CloudNativeCon 2019. I'm Stu Miniman, my co-host for two days wall-to-wall coverage is Corey Quinn. Happy to welcome back to the program first Ben Sigelman, who is the co-founder and CEO of LightStep. And welcome to the program a first time Morgan McLean, who's a product manager at Google Cloud Platform. Gentlemen, thanks so much for joining us. >> Thanks for having us. >> Yeah. >> All right so, this was a last minute ad for us because you guys had some interesting news in the keynote. I think the feedback everybody's heard is there's too many projects and everything's overlapping, and how do I make a decision, but interesting piece is OpenCensus, which Morgan was doing, and OpenTracing, which Ben and LightStep were doing are now moving together for OpenTelemetry if I got it right. >> Yup. >> So, is it just everybody's holding hands and singing Kumbaya around the Kubernetes campfire, or is there something more to this? >> Well I mean, it started when the CNCF locked us in a room and told us there were too many projects. (Stu and Ben laughing) Really wouldn't let us leave. No, to be fair they did actually take us to a room and really start the ball rolling, but conversations have picked up for the last few months and personally I'm just really excited that it's gone so well. Initially if you told me six or nine months ago that this would happen, I would've been, given just the way the projects were going, both were growing very quickly, I would've been a little skeptical. But seriously, this merger's gone beyond my wildest dreams. It's awesome, both to unite the communities, it's awesome to unite the projects together. >> What has the response been from the communities on this merger? >> Very positive. >> Yeah. >> Very positive. I mean OpenTracing and OpenCensus are both projects with healthy user bases that are growing quickly and all that, but the reason people adopt them is to future-proof their own software. Because they want to adopt something that's going to be here to stay. And by having these two things out in the world that are both successful, and were overlapping in terms of their goals, I think the presence of two projects was actually really problematic for people. So, the fact that they're merging is net positive, absolutely for the end user community, also for the vendor community, it's a similar, it's almost exactly the same parallel thought process. When we met, the CNCF did broker an in-person meeting where they gave us some space and we all got together and, I don't know how many people were there, like 20 or 30 people in that room. >> They did let us leave the room though, yesterday, yeah that was nice. >> They did let us leave the room, that's true. We were not locked in there, (Morgan laughing) but they asked us in the beginning, essentially they asked everyone to state what their goals were. And almost all of us really had the same goal, which is just to try and make it easy for end users to adopt a telemetry project that they can stick with for the long haul. And so when you think of it in that respect, the merger seems completely obvious. It is true that it doesn't happen very often, and we could speculate about why that is. 
But I think in this case it was enabled by the fact that we had pretty good social relationships with OpenCensus people. I think Twitter tends to amplify negativity in the world in general, as I'm sure people know, not a controversial statement. >> News alert, wait, absolutely the negatives are, it's something in the algorithm I think. >> Yeah, yeah. >> Maybe they should fix that. >> Yeah, yeah (laughs) exactly. And it was funny, there was a lot of perceived animosity between OpenTracing and OpenCensus a year ago, nine months ago, but when you actually talk to the principals in the projects and even just the general purpose developers who are doing a huge amount of work for both projects, that wasn't a sentiment that was widely held or widely felt I think. So, it has been a very happy thing, it's a huge relief frankly, this whole thing has been a huge relief for all of us I think. >> Yeah it feels like the general ask has always been that, for tracing that doesn't suck. And that tends to be a bit of a tall order. The way that they seem to have responded to it is a credit to the maturity of the community. And I think it also speaks to a growing realization that no one wants to have a monoculture of just one option, any color you want so long as it's black. (Ben laughing) Versus there's 500 different things you can pick that all stand in that same spot, and at that point analysis paralysis kicks in. So this feels like it's a net positive for absolutely everyone involved. >> Definitely. Yeah, one of the anecdotes that Ben and I have shared throughout a lot of these interviews is there were a lot of projects that wanted to include distributed tracing in them. So various web frameworks, I think, was it Hadoop or HBase was-- >> HBase and HDFS were jointly deciding what to do about instrumentation. >> Yeah, and so they would publish an issue on GitHub and someone from OpenTracing would respond saying hey, OpenTracing does this. And they'd be like oh, that's interesting, we can go build an implementation, file an issue, and then someone from OpenCensus would respond and say, no wait, you should use OpenCensus. And with these being very similar yet incompatible APIs, these groups like HBase would sit there and be like, this isn't mature enough, I don't want to deal with this, I've got more important things to focus on right now. And rather than even picking one and ignoring the other, they just ignored tracing, right? With things moving to microservices, with Kubernetes being so popular, I mean just look at this conference. Distributed tracing is no longer this kind of nice-to-have when you're a big company, you need it to understand how your app works and understand the cause of an outage, the cause of a problem. And when you had organizations like this that were looking at tracing instrumentation saying this is a bit of a joke with two competing projects, no one was being served well. >> All right, so you talked about there were incompatible APIs, so how do we get from where we were to where we're going? >> So I can talk about that a little bit. The APIs are conceptually incredibly similar. And part of the criteria for any new language, for OpenTelemetry, is that we are able to build a software bridge to both OpenTracing and OpenCensus that will translate existing instrumentation alongside OpenTelemetry instrumentation, and emit the correct data at the end. And we've built that out in Java already and then started working on a few other languages.
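To make the bridge idea concrete, here is a minimal sketch using today's OpenTelemetry Python API, which post-dates this interview; the shim module path and the exporter and processor names are assumptions that vary by SDK version, so treat it as an illustration of the pattern rather than the project's official recipe.

    # Sketch only: legacy OpenTracing instrumentation and new OpenTelemetry
    # instrumentation feeding one pipeline that emits the data at the end.
    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter
    from opentelemetry.shim.opentracing_shim import create_tracer  # assumed module path

    provider = TracerProvider()
    provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
    trace.set_tracer_provider(provider)

    otel_tracer = trace.get_tracer("checkout-service")          # new-style instrumentation
    legacy_tracer = create_tracer(trace.get_tracer_provider())  # OpenTracing-compatible shim

    with otel_tracer.start_as_current_span("new-instrumented-step"):
        with legacy_tracer.start_active_span("legacy-instrumented-step"):
            pass  # both spans land in the same trace and are exported once

Either style of instrumentation can be adopted incrementally, which is the backwards-compatibility constraint discussed next.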
It's not a tremendously difficult thing to do if that's your goal. I've worked on this stuff, I started working on Dapper in 2004, so it's been 15 years that I've been working in this space, and I have a lot of regrets about what we did to OpenTracing. And I had this unbelievably tempting thing to start Greenfield like, let's do it right this time, and I'm suppressing every last impulse to do that. And the only goal for this project technically is backwards compatibility. >> Yeah. >> 100% backwards compatibility. There's the famous XKCD comic where you have 14 standards and someone says, we need to create a new standard that will unify across all 14 standards, and now you have 15 standards. So, we don't want to follow that pattern. And by having the leadership from OpenTracing and OpenCensus involved wholesale in this new effort, as well as having these compatibility bridges, we can avoid the fate of IPv6, of Python 3 and things like that. Where the new thing is very appealing but it's so far from the old thing that you literally can't get there incrementally. So that's, our entire design constraint is make sure that backwards compatibility works, get to one project and then we can think about the grand unifying theory of a provability-- >> Ben you are ruining the best thing about standards is that there is so many of them to choose from. (everyone laughing) >> There's still plenty more growing in other areas (laughs) just in this particular space it's smaller. >> One could argue that your approach is nonstandard in its own right. (Ben laughing) And in my own experiments with distributed tracing it seems like step one is, first you have to go back and instrument everything you've built. And step two, hey come back here, because that's a lot of work. The idea of an organization going back and reinstrumenting everything they've already instrumented the first time. >> It's unlikely. >> Unless they build things very modularly and very portably to do exactly that, it's a bit of a heavy lift. >> I agree, yeah, yeah. >> So going forward, are people who have deployed one or the other of your projects going to have to go back and do a reinstrumentation, or will they unify and continue to work as they are? >> So, I would pause at the, I don't know, I would be making up the statistic, so I shouldn't. But let's say a vast majority, I'm thinking like 95, 98% of instrumentation is actually embedded in frameworks and libraries that people depend on. So you need to get Dropwizard, and Spring, and Django, and Flask, and Kafka, things like that need to be instrumented. The application code, the instrumentation, that burden is a bit lower. We announced something called SpecialAgent at LightStep last week, separate to all of this. It's kind of a funny combination, a typical APM agent will interpose on individual function calls, which is a very complicated and heavyweight thing. This doesn't do any of that, but it takes, it basically surveys what you have in your process, it looks for OpenTracing, and in the future OpenTelemetry instrumentation that matches that, and then installs it for you. So you don't have to do any manual work, just basically gluing tab A into slot B or whatever, you don't have to do any of that stuff which is what most OpenTracing instrumentation actually looks like these days. And you can get off the ground without doing any code modifications. So, I think that direction, which is totally portable and vendor neutral as well, as a layer on top of telemetry makes a ton of sense. 
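The SpecialAgent approach described here is LightStep's own product, but the general "look at what's in the process and install matching instrumentation" pattern now exists in OpenTelemetry's contrib packages as well. A hedged sketch, assuming the Flask and requests instrumentation packages are installed; the package names are not from the interview:

    # Sketch only: no hand-written spans; the instrumentors wrap the web
    # framework and the outbound HTTP client so requests are traced without
    # touching application code.
    from flask import Flask
    from opentelemetry.instrumentation.flask import FlaskInstrumentor
    from opentelemetry.instrumentation.requests import RequestsInstrumentor

    app = Flask(__name__)
    FlaskInstrumentor().instrument_app(app)
    RequestsInstrumentor().instrument()

    @app.route("/orders")
    def orders():
        return "ok"

There is also a zero-code-change variant, an opentelemetry-instrument wrapper command that applies the same instrumentors at startup, which is closer in spirit to what SpecialAgent does for OpenTracing users.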
There are also data translation efforts that are part of OpenCensus that are being ported in to OpenTelemetry that also serve to repurpose existing sources of correlated data. So, all these things are ways to take existing software and get it into the new world without requiring any code changes or redeploys. >> The long-term goal of this has always been that because web framework and client library providers will go and build the instrumentation into those, that when you're writing your own service that you're deploying in Kubernetes or somewhere else, that by linking one of the OpenTelemetry implementations that you get all of that tracing and context propagation, everything out of the box. You as a sort of individual developer are only using the APIs to define custom metrics, custom spans, things that are specific to your business. >> So Ben, you didn't name LightStep the same as your project. But that being said, a major piece of your business is going through a change here, what does this mean for LightStep? >> That's actually not the way I see it for what it's worth. LightStep as a product, since you're giving me an opportunity to talk about it, (laughs) foolish move on your part. No, I'm just kidding. But LightStep as a product is totally omnivorous, we don't really care where the data comes from. And translating any source of data that has a correlation ID and a timestamp is a pretty trivial exercise for us. So we do support OpenTracing, we also support OpenCensus for what it's worth. We'll support OpenTelemetry, we support a bunch of weird in-house things people have already built. We don't care about that at all. The reason that we're pursuing OpenTelemetry is two-fold, one is that we do want to see high quality data coming out of projects. We said at the keynote this morning, but observability literally cannot be better than your telemetry. If your telemetry sucks, your observability will also suck. It's just definitionally true, if you go back to the definition of observability from the '60s. And so we want high quality telemetry so our product can be awesome. Also, just as an individual, I'm a nerd about this stuff and I just like it. I mean a lot of my motivation for working on this is that I personally find it gratifying. It's not really a commercial thing, I just like it. >> Do you find that, as you start talking about this more and more with companies that are becoming cloud-native rapidly, either through digital transformation or from springing fully formed from the forehead of some God, however these born in the cloud companies tend to be, that they intuitively are starting to grasp the value of tracing? Or does this wind up being a much heavier lift as you start, showing them the golden path as it were? >> It's definitely grown like I-- >> Well I think the value of tracing, you see that after you see the negative value of a really catastrophic outage. >> Yes. >> I mean I was just talking to a bank, I won't name the bank but a bank at this conference, and they were talking about their own adoption of tracing, which was pretty slow, until they had a really bad outage where they couldn't transact for an hour and they didn't know which of the 200 services was responsible for the issue. And that really put some muscle behind their tracing initiative. So, typically it's inspired by an incident like that, and then, it's a bit reactive. Sometimes it's not but either way you end up in that place eventually. 
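The point above about individual developers only writing instrumentation for business-specific steps can be sketched like this, again with today's Python API; the span, attribute, and metric names are invented, and the metrics API shown here stabilized years after this interview.

    # Sketch only: framework spans come from auto-instrumentation; the
    # developer adds just the business-level span and metric.
    from opentelemetry import metrics, trace

    tracer = trace.get_tracer("billing")
    meter = metrics.get_meter("billing")
    orders_charged = meter.create_counter("orders_charged")

    def charge(order_id: str, amount_cents: int) -> None:
        with tracer.start_as_current_span("charge-card") as span:
            span.set_attribute("order.id", order_id)
            span.set_attribute("charge.amount_cents", amount_cents)
            # ... call the payment provider here ...
            orders_charged.add(1, {"currency": "usd"})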
>> I'm a strong proponent of distributed tracing and I feel very seen by your last answer. (Ben laughing) >> But it's definitely made a big impact. If you came to conferences like this two years ago you'd have Adrian, or Yuri or someone doing a talk on distributed tracing. And they would always start by asking the 100 to 200 person audience, who here knows what distributed tracing is? And like five people would raise their hand and everyone else would be like no, that's why I'm here at the talk, I want to find out about it. And you go to ones now, or even last year, and now they have 400 people at the talk and you ask, who knows what distributed tracing is? And last year over half the people would raise their hand, now it's going to be even higher. And I think just beyond even anecdotes, clearly businesses are finding the value because they're implementing it. And you can see that through the number of companies that have an interest in OpenTracing, OpenTelemetry, OpenCensus. You can see that in the growth of startups in this space, LightStep and others. >> The other thing I like about OpenTelemetry as a name, it's a bit of a mouthful but that's, it's important for people to understand the distinction between telemetry and tracing data and actual solutions. I mean OpenTelemetry stops when the correct data is being omitted. And then what you do with that data is your own business. And I also think that people are realizing that tracing is more than just visualizing a single distributed trace. >> Yeah. >> The traces have an enormous amount of information in there about resource usage, security patterns, access patterns, large-scale performance patterns that are embedded in thousands of traces, that sort of data is making its way into products as well. And I really like that OpenTelemetry has clearly delineated that it stops with the telemetry. OpenTracing was confusing for people, where they'd want tracing and they'd adopt OpenTracing, and then be like, where's my UI? And it's like well no, it's not that kind of project. With OpenTelemetry I think we've been very clear, this is about getting >> The name is more clear yeah. >> very high quality data in a portable way with minimal effort. And then you can use that in any number of ways, and I like that distinction, I think it's important. >> Okay so, how do we make sure that the combination of these two doesn't just get watered-down to the least common denominator, or that Ben just doesn't get upset and say, forget it, I'm going to start from scratch and do it right this time? (Ben laughing) >> I'm not sure I see either of those two happening. To your comment about the least common denominator, we're starting from what I was just commenting about like two years ago, from very little prior art. Like yeah, you had projects like Zipkin, and Zipkin had its own instrumentation, but it was just for tracing, it was just for Zipkin. And you had Jaeger with its own. And so, I think we're so far away, in a few years the least common denominator will be dramatically better than what we have today. (laughs) And so at this stage, I'm not even remotely worried about that. And secondly to some vendor, I know, because Ben had just exampled this, >> Some vendor, some vendor. >> that's probably not, probably not the best one. But for vendor interference in this projects, I really don't see it. Both because of what we talked about earlier where the vendors right now want more telemetry. 
I meet with them, Ben meets with 'em, we all meet with 'em all the time, we work with them. And the biggest challenge we have is just the data we get is bad, right? Either we don't support certain platforms, we'll get traces that dead end at certain places, we don't get metrics with the same name for certain types of telemetry. And so this project is going to fix that and it's going to solve this problem for a lot of vendors who have this, frankly, a really strong economic incentive to play ball, and to contribute to it. >> Do you see that this, I guess merging of the two projects, is offering an opportunity to either of you to fix some, or revisit if not fix, some of the mistakes, as they were, of the past? I know every time I build something I look back and it was frankly terrible because that's the kind of developer I am. But are you seeing this, as someone who's probably, presumably much better at developing than I've ever been, as the opportunity to unwind some of the decisions you made earlier on, out of either ignorance or it didn't work out as well as you hoped? >> There are a couple of things about each project that we see an opportunity to correct here without doing any damage to the compatibility story. For OpenTracing it was just a bit too narrow. I mean I would talk a lot about how we want to describe the software, not the tracing system. But we kind of made a mistake in that we called it OpenTracing. Really people want, if a request comes in, they want to describe that request and then have it go to their tracing system, but also to their metric system, and to their logging stack, and to anywhere else, their security system. You should only have to instrument that once. So, OpenTracing was a bit too narrow. OpenCensus, we've talked about this a lot, built a really high quality reference implementation into the product, if OpenCensus, the product I mean. And that coupling created problems for vendors to adopt and it was a bit thick for some end users as well. So we are still keeping the reference implementation, but it's now cleanly decoupled. >> Yeah. >> So we have loose coupling, a la OpenTracing, but wider scope a la OpenCensus. And in that aspect, I think philosophically, this OpenTelemetry effort has taken the best of both worlds from these two projects that it started with. >> All right well, Ben and Morgan thank you so much for sharing. Best of luck and let us know if CNCF needs to pull you guys in a room a little bit more to help work through any of the issues. (Ben laughing) But thanks again for joining us. >> Thank you so much. >> Thanks for having us, it's been a pleasure. >> Yeah. >> All right for Corey Quinn, I'm Stu Miniman we'll be back to wrap up our day one of two days live coverage here from KubeCon, CloudNativeCon 2019, Barcelona, Spain. Thanks for watching theCUBE. (soft instrumental music)

Published Date : May 21 2019



Day One Analysis | KubeCon + CloudNativeCon EU 2019


 

>> Live, from Barcelona Spain, it's theCube! Covering, KubeCon CloudNativeCon Europe 2019: Brought to you by RedHat, the Cloud Native Computing Foundation and the Ecosystem Partners. >> Hi, and welcome back. this is theCube's coverage of KubeCon CloudNativeCon 2019 here in Barcelona, Spain. We're at the end of day one of two days of live, wall-to-wall coverage. I'm Stu Miniman, and at the end of the day, what we try to do always is do our independent analysis and say what we really think. And joining me is someone that usually has no problem telling you exactly what he thinks online. So, I've challenged Mr. Corey Quinn. Cloud economist, of the Duckbill Group. and the curator, author, Last Week in AWS. To tell us what he actually thinks. >> Well, Stu, you know what your problem is. All the best feedback starts off that way. Now, this has been a fascinating experience for me. This is the first time I've ever been to KubeCon. I didn't quite know what to expect- >> It's KubeCon, not Koob-Con. Come on. It is in GitHub, how you have to make the pronunciation correct. >> We are on theCube. We would think that we would be subject matter experts on this. >> CNCF will be cracking down on you if I don't correct you on this. >> I still maintain we're in Barcelona, Italy. But that's a whole separate argument to have with other people. >> Yes, well, most Americans are geographically challenged. And we understand you have some challenges too. >> Exactly, most Americans need to learn geography, we go to war. (chuckling) >> All right, so, Corey, I guess the first question for you is, you usually go to mostly AWS shows. Most of the customers we've talked to have been AWS customers. So is this feeling much different from the usual show you go to? >> The focus of the conversations is different, and to be clear, I'm not much of a cloud partisan myself. I deal with AWS primarily because, not for nothing, that's where my customers are. That tends to be exactly where the expensive problems tend to live. For better or worse. If that changes, so will I. >> So, you're saying yet that the other cloud providers don't have their customers big enough bills, or they just haven't figured out how you might be able to help them in the future? >> To be very honest with you. Yes, is the short answer. Right now on aggregate, my customers spend about a billion dollars a year on AWS. I don't see the same order of magnitude on other providers, but it's coming. It is very clearly coming. None of these providers are shrinking as far as size goes. It's largely a matter of time. >> Alright. But Corey, I hope at least you've understood that Kubernetes at the center for all things. And that multi-cloud is the way that we are today and will always be in the future. And we should all hold hands and sing along, that we all get along. Is that what you've learned so far? >> I think that's absolutely what I've learned so far. It comes down to religion and it's perfectly name for it. I mean, Kubernetes was the Greek God of spending money on cloud services. >> All right. But seriously. Corey, I think one of the things that I really liked is. We talk to customers and there were some interesting things at least I heard when you talked about they see huge value in what they're doing with Kubernetes. Many of them only have one cloud provider today. Yet they are choosing to lay on Kubernetes either with AWS or with another solution there. What's been your take of what you've heard about. Kind of the why and what they're doing? 
>> There've been a few different reasons on it. One that resonated with me did validate what I talked about at the beginning of the day. Which was, that by trying to position yourself to be strategically amenable to any potential provider you might want to use in the future. You are sacrificing velocity. And you're gaining agility, losing velocity to do that. Is that trade off worth it? I don't think I'm qualified to judge. I think that's a decision every business has to make on its own. My argument has always been that if that's the decision you make, do it knowingly. And I don't think we've talked to anyone who's made that unknowingly today. >> Yeah. I think that's a really good point. What is it, you know, surprised you or interest you that we've heard so far? >> I have to be honest. I have a long and storied history in open source. I was staff at the Freenode IRC network for about a decade. Which was an interesting time. And I've seen a lot of stuff, but I don't think I've ever seen two open source projects merge before. The fact that we saw that today is still swirling around in my head for better or worse. >> Yeah. And it was OpenCensus and OpenTracing coming together. Open Telemetry. So, definitely check out Ben Siegelman. and it was Morgan McLean from a Google cloud. You know, really interested in discussion. I don't think we're sharing too much when we say off camera. There were like, look, it's like, yes, they got us in a room and we worked, but we'll try not to throw punches here on the set and everything like that. We understand that look, there are people that put these things together and you have smart people that build things the way that it should be done. And these were not like two very similar projects going in the same direction, they were built with different design principles and therefore there'll be somethings that they all need to reconcile to be able to go forward. But yeah, very interesting. >> And everyone we spoke to today was very focused on what the needs of their customers, whoever they happen to be and how to meet those customers and their business requirements. There's no one that we spoke to that was sitting here saying, oh, this is the right answer because it is technically correct. The answer is we're always of the form. This is what we need to do in order to serve customers. And it's very hard to argue against that strategy. >> All right, but none of this really matters because Serverless, right Corey? >> Oh, absolutely. Serverless is the way and the light of the future and to some extent I believe that. >> But they're not doing Serverless. I'm pretty sure they're half a step behind you. Yes, it tends to be, it's easy to make go ahead and die and say, Oh, if you're not running the absolute latest bleeding edge thing, you're behind, you're backwards, etc. And I don't get that all the sense that that is reality. I think that there's, if you're building something greenfield today, you are fundamentally going to make different choices, than if you have something you're trying to carry forward. And I don't just mean carrying forward a technical sense. I mean carrying it forward in terms of process, in terms of culture, in terms of existing business units that need to modernize. People are moving in the same general direction. The question that I think is still on answered is, today, there's a perception rightly or wrongly, that Containers are slightly behind Serverless. I don't know that that necessarily holds true. 
I think that they are aligned towards the same business value. I think, judge either one of them by today's constraints in the context of longer term strategy is a mistake. I'm curious to see what happens. >> Corey, I love. So we had Jeff Brewer from Intuit and they were like look, we're doing Serverless, we're doing a lot of Containerless stuffs and I'd love it for my developer not to have to worry about. And they've had been moved down that path. So, we know one of the truisms out there is everything in IT is always additive. When you talk to them and say, oh, well I'm going into cloud wait, I still have some stuff that, running on my main frame or my eyes series. And that we'll probably be running there when I've retired. We were talking offline. It's like, well, there's been a little resurgence in COBOL. Just because it did not die after Y2K and so did these things always come back and it's always additive and the longer you've been in business as a company, the more legacy you need to be able to maintain and extend and connect to where you want to go with the future. >> It's almost a sawtooth curve. As complexity continues to rise it becomes to a point where it's untenable. There's something that comes out that abstracts that away and you're back down to a level a human being might actually be able to understand. And you take it a step further and you start to see it again and again and again, and then it collapses down. Docker and a lot of the handbuilt orchestration systems were like that. And then Kubernetes came out. Initially it was fairly simple and then things have been added to it now. And I think we're climbing that sawtooth curve again. Whether or not that maintains? Whether or not that simplifies again? I find that history rhymes particularly in tech. >> Well yeah and I always worry sometimes when you talk about the abstraction layer you got to be really careful what you're abstracting. What we see here a lot, is a lot of times it's people, how can I just consume that? I want to buy it as a service and somebody take care of that not, it hides the complexity for me but some of the complexity is still there. >> Right. So our site is now intermittently slow what do you plan to do? Its update my resume immediately cause we're never untangling that Gordian knot of an infrastructure. That's not a great answer but it is an honest one in some shops. >> I've talked to, we know that there was, for a long time people outsourced what they were doing. And we need to make sure that when you're buying something as a service that you haven't outsourced, That you understand what's important to your business, what happens when things go wrong. We had some discussion today about, networking and observability that we need to be able to go down that rabbit hole, at least turn to somebody who can. Because just because I can't touch that gear doesn't mean my next not on the line, If something goes wrong. >> You can outsource a lot of work. You can't outsource responsibility. I put slightly more succinctly, the line I've always liked was you own your own availability. If you have a provider that you've thrown a lot of these things over to and they go down, well sure you're going to have loud angry phone calls and maybe a few bucks back from an SLA credit. We your customers we're down and we're suffering. So the choices you made impact your businesses perception in the market and your customer's happiness. 
So as much as fun as it is to be able to throw things over the wall for someone else to deal with, you're still responsible. And I think that people forget that at their own peril. >> One of the things I like. I've got a long history in open source to. If there are things that aren't perfect or things that are maturing. A lot of times we're talking about them in public. Because there is a roadmap and people are working on it and we can all go to the repositories and see where people are complaining. So at a show like this, I feel like we do have some level of transparency and we can actually have realism here. What's been your experience so far? >> I think that people have been remarkably transparent about the challenges that they're facing in a way that you don't often get at a vendor show. Where you have a single vendor, you're at their show, regardless of who that might be. You're not going to be invited back if you wind up with a litany of people coming on a video show or a podcast or screaming and sobbing in the bathroom, however you want to, whatever your media is. Just have a litany of complaints the entire time or make that provider look bad. I don't sense that there's any of that pressure. And for some reason, and this is my first coop gone, so maybe this is just the way this culture it works. Everyone, regardless of who they worked for or what they're working on or what their experience has been, seems happy. I can only assume there's something in the water. >> All right. Well, I've just been informed that the CNCF had asked me to remove Corey because he refuses to say KubeCon. But, Corey. Since this might be your last time on the program, any other final words that you have for it or I will let you do something very rare and if you have any questions for me. Love on my way. >> Absolutely. What did you find today that you didn't expect to find? >> The one that jumps out for me really is two things. One, we discussed it already is the, the observability piece coming together. The other one is. You talk about that maturation of where Amazon fits in this ecosystem. And we had lovely conversation, with Abby fuller. But not just that one. We talked to the users and how they think about it. Which is what really matters is, there's so much talk about, who contributes more code and who does the most here. But look, we're talking cloud. Most of these customers are using AWS as if not the cloud, one of the clouds. I've set it on theCube many times. When you live in a hybrid and multi-cloud world and the public cloud, AWS is the far leader. There's no debating that. So they are participating here. They are doing plenty for what their customers want and they give choice and they listen to the feedback. So that was interesting to me that maturation of where that sits because when I come into the show and many times it is, it is the open source in this whole ecosystem, trying to prevent Amazon from taking over the world. And look, we want a good robust ecosystem out there. >> We absolutely do. >> While I have many friends that work for Amazon. We probably don't want to all be working for a single company down the road. >> I certainly don't. >> We like a nice robust ecosystem where there is choice out there and that keeps its (mumbles). So that maturation of where they are on has been interesting to me so far, especially from the user stand point. >> Very much so. 
I don't think that anyone wants to look back and say, wow, I'm sure glad we have only one option in this entire space that does anything useful. And then a whole bunch of could have the didn't. And for better or worse, I don't think that the future is nearly as clear cut as the past of cloud. Historically, AWS has been the 800 pound gorilla. I think that we hearing fascinating things from GCP and from Azure. I don't necessarily think that the future is preordained. I do think right now it is AWS game to lose, but I'm starting to see a lot of other players in his face start to make a lot of very interesting and arguably very correct moves. >> All right. Well, we know you as our audience have lots of places where you can turn to find your information and we are always pleased that when you turn to us to watch theCube. if you have any feedback for ourselves, Corey Quinn and myself, Stu Miniman. Reach out on Twitter. We are easy to reach on that. And we have lots of posts. So if you're like, Hey, tired of looking at this mug here. Let us know. But hopefully we're asking the questions and digging into the areas that you want and we'll help your businesses going forward. So we are at the end of day one, Two days live coverage here at KubeCon CloudNativeCon. This is the cube. You're a leader in live tech coverage. Thanks for watching. (music)

Published Date : May 21 2019



Doug Davis, IBM | KubeCon + CloudNativeCon EU 2019


 

>> Live from Barcelona, Spain, it's theCUBE, covering KubeCon, CloudNativeCon Europe 2019. Brought to you by Red Hat, the Cloud Native Computing Foundation and Ecosystem Partners. >> Welcome back to theCUBE's live coverage of KubeCon, CloudNativeCon 2019. I'm Stu Miniman, my co-host is Corey Quinn, and happy to welcome back to the program Doug Davis, who's a Senior Technical Staff Member and PM for Knative, and he happens to be employed by IBM. Thanks so much for joining us. >> Thanks for inviting me. >> Alright, so Corey got really excited when he saw this, because serverless is something that, you know, he's been doing for a while. I've been poking at it, trying to understand all the pieces, have been to serverless conferences a couple of times. And, you know, I guess lay out for our audience a little bit, you know, Knative. I look at it as kind of a bridging solution, but, you know, we're talking, it's not the, you know, containers or serverless. We understand that world, they're spectrums and there's overlap. So maybe, with that as a setup, you know, what is the Serverless Working Group's charter? >> So the Serverless Working Group is a CNCF working group. It was originally started back in mid-2017 by the Technical Oversight Committee in the CNCF. They basically wanted to know, what is serverless all about, is this a new technology, is it something they should get involved with, stuff like that. So they started up the Serverless Working Group, and our main mission was just doing some investigation. And so the output of this working group was a white paper, basically describing serverless, how it compares with the other as-a-services out there, what are the good use cases for when to use it and when not to use it, common architectures, basically just explaining what the heck is going on in that space. And then we also produced a landscape document, basically laying out what's out there from a proprietary perspective as well as an open source perspective. And then the third piece was, at the tail end of the white paper, a set of recommendations for the TOC, or the CNCF in general: what do they do next? And it basically came down to three different things. One was education: we want to educate the community on what serverless is, when it's appropriate, stuff like that. Two, what other projects should we pull into the CNCF, other serverless projects, you know, encouraging them to join to grow the community. And third, what should we do around interoperability? Because obviously, when it comes to open source and standards and stuff like that, we want interoperability, portability, stuff like that. And one of the low-hanging fruits we identified was, well, serverless seems to be all about events, so there's something in the eventing space we could do. And we recognized, well, if we could help the processing of events as it moves from point A to point B, that might help people in terms of middleware, in terms of routing of events, filtering events, stuff like that. And so that's how the CloudEvents project got started, right? And so that's where most of the Serverless Working Group members are nowadays, is the CloudEvents project, and they're basically defining a specification around CloudEvents. You can kind of think of it as defining metadata to add to your current events, because we're not going to tell you, oh, here's yet another one-size-fits-all cloud event format, right? It's: take your current events, sprinkle a little extra metadata in there just to help routing, and that's really what it's all about.
>> One of the first things people say about serverless is, quoted directly from the cover of Missing the Point magazine, serverless runs on servers. Wonderful, thank you for your valuable contribution, go away. Slightly less naive is, I think, an approach I've seen a couple of times so far at this conference when talking to people: they think of it in terms of functions as a service, of being able to take arbitrary code and run it. I have a wristwatch I can run arbitrary code on, that's not really the point. I think you're right, it's talking more about the event model and what that unlocks as your application more or less starts to become more self-aware. Are you finding that acceptance of that viewpoint is taking time to take root? >> Yeah, I think what's interesting is, when we were first looking at serverless, I think a lot of people did think serverless equals functions as a service, and that's all it was. I think what we're finding now is people are more open to the idea of, as I think you're alluding to, merging of these worlds. Because when we look at the functionality serverless offers, things like event-based, which really only means there's a message coming in, it just happens to look like an event, okay, fine. A message comes in, you auto-scale based upon, you know, load and stuff like that, scale down to zero, all these other features, why should you limit those to serverless? Why not a PaaS platform? Why not containers as a service? Why would you want those just for one little as-a-service column? And so my goal with things like Knative, and I'm glad you mentioned it, is because I think it does try to span those, and I'm hoping it kind of merges them all together and says, look, I don't care what you call it, use this piece of technology because it does what you need it to do. If you want to think of it as a PaaS, go for it, I don't care. This guy over here, he wants to think of it as a FaaS, great. It's the same piece of technology. Does the feature do what you need, yes or no? Ignore the terminology around it more than anything else. >> So I agree. Yeah, we had a good, great discussion with a user earlier, and he said, from a developer standpoint, I actually don't want to think too much about which one of these paths I go down. I want to reduce the friction for them and make it easy. So, you know, how does Knative help us move towards that, you know, ideal world? >> Right, and in line with what I said earlier, one of the things I think Knative does, aside from trying to bridge all the various as-a-service columns, is I also look at Knative as a simplification of Kubernetes. Because as much as everybody here loves Kubernetes, it is kind of complicated, right? It is not the easiest thing in the world to use, and it kind of forces you to be a Kubernetes expert, which almost goes against the direction we were headed when you think of Cloud Foundry and stuff like that, where it's like, hey, don't worry about this stuff, just give us your code, right? Kubernetes says, no, you've got to know about networking, ingress, and all these values and everything else, and it's like, I'm sorry, isn't this going the wrong way? Well, Knative tries to back up a little and say, we'll give you all the features of Kubernetes, but in a simplified platform or API experience, similar to what you can get with Cloud Foundry, similar to Docker and stuff, but it gives you all the benefits of Kubernetes. But the important thing is, if for some reason you need to go around Knative because it's a little too simplified or opinionated, you can still go around it to get to the complicated stuff. And it's not like you're leaving one world or entering a different world, because it's the same infrastructure, the same Kube stuff that you deploy on. Knative can integrate very nicely with the stuff you deploy through vanilla Kubernetes if you have to. So it is really nice merging these two worlds, and I'm really excited by that.
>> One thing that I've always found strange about serverless is, at first it was defined by what it's not, and then quickly came to be defined almost by its constraints. If you take a look at public cloud offerings around this, most notably AWS Lambda and many others, it comes down to, well, you can only run it for a limited time, or it only runs in certain runtimes, or cold starts become a problem. I think that taking a viewpoint from that perspective artificially hobbles what this might wind up unlocking down the road, just because these constraints move. And right now it might be a bit of a toy, but I don't think it will be, because it needs to become more capable. The big value proposition that I keep hearing around serverless, and I've mostly bought into, has been that it's about business logic and solving the things that are core to your business and not even having to think about infrastructure. Where do you stand on that viewpoint? >> I completely agree. I think a lot of the limitations you see today are completely artificial. I kind of understand why they're there, because of the way things have progressed. But again, that's one reason I'm excited about Knative, because a lot of those limitations aren't there. Now, Knative does have its own set of limitations, and personally, I do want to try to remove those. Like I said, I would love it if Knative, aside from the serverless features it offers up, became this simplified Kubernetes experience. So if you think about what you can do with Kubernetes, right, you can deploy a pod and it can run forever, until the system decides to crash for some reason, right? Why not do that with Knative? And you can, with Knative. Technically, I have demos that I've been running here where I set the minScale to one, it lives forever, and Knative doesn't care, right? And so deploying an application through Knative or Kubernetes, I don't care, it's the same thing to me. And so yes, I do want to merge those two worlds, I want to lower those constraints, as long as you keep it a simplified model and support the eighty to ninety percent of those use cases that it's actually meant to address. Leave the hard stuff for going around it a little. >> Alright, so Doug, you know, oftentimes we get caught in this bubble of arguing over, you know, what we call it, how the different pieces fit. Yesterday you had a Practitioner Summit for serverless. So what I want to hear is, you know, what's the practitioner viewpoint? What are they excited about, what are they using today, and what are the things that they're asking for to help it become, you know, more usable and useful for them in the future? >> So in full disclosure, we actually had kind of a quiet audience, so they weren't very vocal. But what little I did hear is they seem very excited by Knative, and I think a lot of it was because of what we were just talking about, that sort of merging of the worlds. Because I do think there is still some confusion around, as you said, when you use one versus the other, and I think Knative is helping to bring those together. And I did hear some excitement around that in terms of what people actually expect from us going into the future. To be honest, they didn't actually say a whole lot there. I had my own personal opinion, and a lot of it is what I already stated in terms of merging: stop having me pick a technology or pick a terminology, right? Let me just pick the technology that gets my job done, and hopefully that one will solve a lot of my needs. But for the most part, I think it was really more about Knative than anything else yesterday.
>> I think, like Linux before it, any technology, at some point, you saw this with virtualization, with cloud, with containers, with Kubernetes, and now we're starting to see it with serverless, where some of its most vocal proponents are also the most obnoxious, in that they're looking at this from a perspective of, what's your problem? I'm not even going to listen to the answer, the solution is insert-favorite-technology-here. So to that end, today, what workloads are not appropriate for serverless in your mind? >> Um, so this is hardly an answer, because I have the IBM answer running through my head. What's interesting is I do hear people talk about, serverless is good for this and not this, or Knative is good for this and not this. And I hear those things, and I'm not sure I actually buy it, right? I actually think that the only limitations I've seen, in terms of what you should not run on something like Knative or any of the platforms, is whatever that platform actually confines you to. So, for example, on AWS they may have a time limit in terms of how long you can run. If that's a problem for you, don't use it. To me, that's not an artifact of serverless, that's an artifact of that particular choice of how they implement serverless. With Knative, they don't have that problem, you can let it run forever if you want. So in terms of what workloads are good or bad, I honestly don't have a good answer for that, because I don't necessarily buy some of the stories I'm hearing. I personally think, try to run everything you can through something like Knative, and then when it fails, go someplace else. It's the same story we had when containers first came around. They would ask, you know, when to use VMs versus containers? My go-to answer was, always try containers first, your life will be a whole lot easier, and when it doesn't work, then look at the other things. Because I don't want to try to pigeonhole something like serverless or Knative and say, oh, don't even think about it for these things, because it may actually work just fine for you, right? I don't want people to believe negative hype, if that makes sense. >> And that's very fair. I tend to see most of the constraints around this as being implementation details of specific providers, and that will dictate the answers to that question. I don't want to sound like I'm coming after you, and that was very thoughtful and measured. >> Thank you, that's the usual response back. >> So, Doug, I'll give you the tough one, the critical question I had in Seattle.
Okay, when I looked at Knative, there are a lot of serverless options out there, but when I talk to users, the number one out there is AWS Lambda, and number two is probably Azure Functions, and as of Seattle, neither of those was fully integrated. Since then, I've talked to a little startup called, I believe, TriggerMesh, that has made some connections between Lambda and Knative. And there was an announcement a couple of weeks ago, KEDA, that's Azure, and some kind of future tie-in to Knative. So it feels like it's a maturity thing. And, you know, what can you tell us about the big cloud guys? Officially, Google's involved, IBM, Red Hat, and, you know, Oracle are involved in Knative. So where do those big cloud players fit? >> Right. So from my perspective, what I think Knative has going for it over the others is, one, a lot of the other guys do run on Kubernetes, but I feel like they run on Kubernetes the way they run on everything else; some of them can run on Kubernetes, Docker, anything else, and so they're not necessarily tightly integrated and leveraging Kubernetes' features the way Knative is doing, and I think that's a little bit unique right there. But the other thing that I think Knative has going for it is the community around it. I think what people were noticing is, as you said, there's a lot of other players out there, and it's hard for people to choose. And I think Google did a great job of sort of bringing the community together and saying, look, can we stop bickering and develop a sort of common infrastructure, like Kubernetes, that we can all then base our serverless platforms on? And I think that rallying cry to bring the community together across a common base is something a little bit unique for Knative when you compare it with the others. I think that's a big draw for people, at least from my perspective. I know it is from IBM's as well, because community is a big thing for us, obviously. >> Okay, so will there be a bridge to those other cloud players soon? Is there a roadmap for that? >> Within Knative itself? Yeah, I'm not sure I can answer that one, because I'm not sure I've heard a lot of talk about bridging per se. I know that when you talk about things like getting events from other platforms and stuff, obviously, through the eventing side of Knative, we do. From a serving perspective, I'm not sure how that will all play out, to be honest. >> All right, well, Doug Davis, we're done for this one. Really appreciate all the updates, and I definitely look forward to seeing the progress that the Serverless Working Group continues to make, so thank you so much. >> Thank you for having me. >> All right, for Corey Quinn, I'm Stu Miniman, and we'll be back with more coverage here on theCUBE. Thanks for watching.

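Two short sketches of the ideas Doug describes above. First, the CloudEvents "sprinkle a little extra metadata" point: an existing application event wrapped in the context attributes that middleware can route and filter on. Attribute names follow the later CloudEvents 1.0 spec, and the payload and values are invented for illustration.

    # Sketch only: the original event payload is untouched; the envelope adds
    # the routing metadata (type, source, id, time) the spec defines.
    import json, uuid
    from datetime import datetime, timezone

    order_event = {"orderId": "A-1001", "total": 42.50}   # the event you already have

    cloud_event = {
        "specversion": "1.0",                  # CloudEvents spec version
        "type": "com.example.order.created",   # what happened
        "source": "/shop/orders",              # where it happened
        "id": str(uuid.uuid4()),               # unique per occurrence
        "time": datetime.now(timezone.utc).isoformat(),
        "datacontenttype": "application/json",
        "data": order_event,
    }

    print(json.dumps(cloud_event, indent=2))

Second, the minScale demo he mentions, expressed against Knative Serving's later v1 API via the Kubernetes Python client; the service name and sample image are placeholders, and at the time of this interview the group/version would still have been an alpha or beta one.

    # Sketch only: a Knative Service pinned at one replica so it never scales
    # to zero, deployed the same way any other custom resource would be.
    from kubernetes import client, config

    config.load_kube_config()

    service = {
        "apiVersion": "serving.knative.dev/v1",
        "kind": "Service",
        "metadata": {"name": "hello", "namespace": "default"},
        "spec": {
            "template": {
                "metadata": {
                    # Keep at least one replica alive, as in the demo described above.
                    "annotations": {"autoscaling.knative.dev/minScale": "1"}
                },
                "spec": {"containers": [{"image": "gcr.io/knative-samples/helloworld-go"}]},
            }
        },
    }

    client.CustomObjectsApi().create_namespaced_custom_object(
        group="serving.knative.dev",
        version="v1",
        namespace="default",
        plural="services",
        body=service,
    )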
Published Date : May 21 2019


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Doug Davis | PERSON | 0.99+
Corey Quinn | PERSON | 0.99+
IBM | ORGANIZATION | 0.99+
Seattle | LOCATION | 0.99+
Corey | PERSON | 0.99+
Oracle | ORGANIZATION | 0.99+
Eva | PERSON | 0.99+
Red Hat | ORGANIZATION | 0.99+
Google | ORGANIZATION | 0.99+
third piece | QUANTITY | 0.99+
Air Corps | ORGANIZATION | 0.99+
Teo | PERSON | 0.99+
K Native | ORGANIZATION | 0.99+
eighty | QUANTITY | 0.99+
Doug | PERSON | 0.99+
eight | QUANTITY | 0.99+
IBM Army | ORGANIZATION | 0.99+
Ecosystem Partners | ORGANIZATION | 0.99+
Missing the Point | TITLE | 0.99+
Yesterday | DATE | 0.99+
KubeCon | EVENT | 0.99+
One | QUANTITY | 0.99+
first | QUANTITY | 0.99+
Cloud, Native Computing Foundation | ORGANIZATION | 0.99+
today | DATE | 0.99+
one | QUANTITY | 0.99+
fifteen | QUANTITY | 0.99+
Two | QUANTITY | 0.98+
two worlds | QUANTITY | 0.98+
Syria | LOCATION | 0.98+
third | QUANTITY | 0.98+
IBM Red Hat | ORGANIZATION | 0.98+
two service | QUANTITY | 0.98+
one reason | QUANTITY | 0.98+
Cincy | LOCATION | 0.98+
zero | QUANTITY | 0.97+
Kay | PERSON | 0.97+
ninety percent | QUANTITY | 0.96+
K native | ORGANIZATION | 0.96+
Believers Trigger Mash | ORGANIZATION | 0.96+
Kay Native | PERSON | 0.95+
One thing | QUANTITY | 0.95+
Europe | LOCATION | 0.95+
point B | OTHER | 0.95+
Cooper Netease | ORGANIZATION | 0.94+
Mohr | PERSON | 0.93+
twenty | QUANTITY | 0.93+
Kania | PERSON | 0.93+
three | QUANTITY | 0.91+
one verse | QUANTITY | 0.89+
Kedia | PERSON | 0.89+
Point A | OTHER | 0.88+
couple of weeks ago | DATE | 0.87+
Keita | PERSON | 0.83+
four | QUANTITY | 0.82+
K Native | PERSON | 0.81+
CloudNativeCon EU 2019 | EVENT | 0.79+
Kenya | ORGANIZATION | 0.79+
two thousand seventeen | QUANTITY | 0.78+
Ueda Good | PERSON | 0.78+
K native | PERSON | 0.76+
couple | QUANTITY | 0.76+
Teo K native | PERSON | 0.75+
Lambda | TITLE | 0.75+
twenty nineteen | QUANTITY | 0.75+
Cloud Foundry | ORGANIZATION | 0.75+
Lennox | PERSON | 0.74+
Coronet | ORGANIZATION | 0.73+
Felicia | PERSON | 0.71+
Cube Khan | PERSON | 0.71+
K native | ORGANIZATION | 0.7+
Network Sing Gris | ORGANIZATION | 0.67+
Netease | ORGANIZATION | 0.65+
surly | PERSON | 0.64+
Con | ORGANIZATION | 0.64+

Erin A. Boyd, Red Hat | KubeCon + CloudNativeCon EU 2019


 

>> Live, from Barcelona, Spain, it's the theCUBE, covering KUBECON and CloudNativeCon Europe 2019. Brought to you by RedHat, the Cloud Native Computing Foundation, and the Ecosystem Partners. >> Welcome back to theCUBE. I'm Stu Miniman. My co-host, Corey Quinn. 7700 here in Barcelona, Spain, for KUBECON, CLOUDNATIVECON. Happy to welcome to the program a first-time guest, Erin Boyd, who is a senior Principal Software Engineer in the office of the CEO of RedHat. Erin, thanks so much for joining us. >> Yeah, thanks for having me. >> Alright, so just a couple of weeks ago, I know I was in Boston, you probably were too, >> Yep. >> For RedHat Summit. Digging into a lot of the pieces. You focus on multi-cloud and storage. Tell us a little bit about, you know, your role, and what you're doing here at the KUBECON show. >> Sure, I'd be happy to. So for over a year now, RedHat's really been kind of leading the pack on hybrid cloud. You know, allowing customers to have more choice, you know, with both public and private cloud offerings. And, of course, OpenShift being our platform built on Kubernetes, we believe that should be the consistent API in which we have Federation. Yeah, so Erin, I got to talk to quite a few OpenShift customers at RedHat Summit. It was really how they're using that as a lever to help them really gain agility in their application deployment. But, let's start for a second, without getting too fanatic, you say hybrid cloud. What does that mean to your customers? You know, RedHat has a long legacy of, well, lives everywhere. So, public cloud, private cloud, hosting provider, all of the environments, you, RedHat, Enterprise, Linux, can live there. So in your space, what does hybrid cloud mean? >> So, hybrid cloud, I think follows a model of real. It's everywhere. So it's having OpenShift run on top of that and being able to have the application portability that you would expect. Along with the application portability, which is my focus, is having the data agility within those applications. >> Alright, how do you wind up approaching a situation where an app is now agile enough to move between providers almost seamlessly, without having it, I guess, descend down to the lowest common denominator that all providers that it's on are going to provide? I mean, at some point, doesn't that turn into treating the cloud as a place to just run your either instances or containers, and not taking advantage of, I guess, the platform level services? >> Sure, so I think that the API should expose those choices, I don't think it's a one size fits all when we talk about, you know, if you move your application maybe your data doesn't necessarily have to move. So part of the core functionality the Federation is meant to provide, which has been renamed Kubefed since Summit, is that you have the choice within that. And, you know, defining policies around the way we do this. So, perhaps your application is agile enough to span three different clouds, but due to data privacy, you want to keep your data on prem. So, Kubefed should enable you to have that choice. >> You know, so you know, help us dig down a little bit in the storage, you know, environment here, you know. >> Sure. I go back and I worked for a very large storage company that was independent before it got bought for a very large sum of money. But, we had block and file storage. And mostly, that you know, lived in a box, or in a certain application. >> Right. 
You know, in the future we always talked about there being this wonderful object storage, and it's actually designed to be, you know, we'll shard it, we'll spread it around >> Right. And it can live in lots of places. Cloud, a lot of times, has that underneath it. So have we started to, you know, cross that gap to that mythical nirvana where storage actually lives up to that distributed architecture that we're all looking for? >> Right, so with Kubernetes, the history is, we started off with only file systems. Block is something very new within the last couple of releases that I actually personally worked on. The next piece that we're doing at Red Hat is leading the charge to create CRDs for object storage. So it's defining those APIs so customers can dynamically provision and manage their object storage with that. In addition, we recently acquired a company called NooBaa that does exactly that. They're able to have that data mobility through object buckets across many clouds, doing the sharding and replication with the ability to dedupe. And that's super important because it opens up for our customers to have image streams, photos, things like that that they typically use within an enterprise, and quickly move the data and copy it as they need to. >> Yeah, so I've actually talked to the NooBaa team. I would joke with them that, couldn't they deduplicate their name, 'cause it's like NooBaa. >> (laughs) yeah. >> So you know, plenty of vowels there. But, right, storage built for the cloud world is, you know, what we're talking about there. >> Right. >> How's that different from some of the previous storage solutions that we've been dealing with? >> So I think before, we were trying to maybe make fit what didn't work. That's not to say that file and block aren't important. I mean, having local storage for a high performance application is absolutely critical. So I think we're meeting the market where it is. It's dependent on the behavior of the application, and we should be able to provide that. And for applications that primarily run in the cloud and need that flexibility, we should be offering object as a first-class citizen, and that's why our work with those CRDs is really critical. >> What is the customer need that drives this? Historically, with my own work with object stores, I tend to view that as almost exclusively accessed via HTTP end points. And at that point, it almost doesn't matter where that lives, as long as the networking and security and latency requirements are being met. What is it that's driving this as making it a first-class citizen built in to Kubernetes itself, the Rook? >> So it allows us to create the personas that we need. So it allows an administrator to administrate storage, just like they would normally with your persistent volumes, persistent volume claims and quotas. And then it abstracts the details of, for instance, including that URL in your application. We use a config map within the app so the user doesn't necessarily have access to your keys in the cloud. It also creates a user, so you're able to manage users like you would normal objects, which is a little bit different than the PV/PVC, and that's why we feel it's important to have a CRD that defines object in that sense, because it is a little bit different. >> All right, so Erin, is this Rook we're talking about then? Rook, did I understand, I think got to 1.0, just got released. >> Yeah.
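To make the persona and config-map pattern Erin describes a bit more concrete, here is a minimal sketch of requesting a bucket through an object-storage CRD from Python. The group, version, and plural names follow the ObjectBucketClaim convention associated with the Rook/lib-bucket-provisioner work, but treat them — along with the storage class and namespace — as assumptions to be swapped for whatever your cluster's provisioner actually registers.

```python
# Sketch: asking a bucket provisioner for object storage via a custom resource,
# the way an application team would, instead of handling cloud credentials directly.
# Assumptions: the "objectbucket.io/v1alpha1" group/version, the plural
# "objectbucketclaims", and the "rook-ceph-bucket" storage class are illustrative
# and may differ on your cluster.
from kubernetes import client, config

def request_bucket(name: str, namespace: str = "demo-app") -> dict:
    config.load_kube_config()  # or config.load_incluster_config() inside a pod
    claim = {
        "apiVersion": "objectbucket.io/v1alpha1",
        "kind": "ObjectBucketClaim",
        "metadata": {"name": name, "namespace": namespace},
        "spec": {
            "generateBucketName": name,          # provisioner appends a unique suffix
            "storageClassName": "rook-ceph-bucket",
        },
    }
    api = client.CustomObjectsApi()
    return api.create_namespaced_custom_object(
        group="objectbucket.io",
        version="v1alpha1",
        namespace=namespace,
        plural="objectbucketclaims",
        body=claim,
    )

# The provisioner is expected to answer with a ConfigMap (endpoint, bucket name) and a
# Secret (access keys) of the same name, which the application consumes instead of raw cloud keys.
```

The shape mirrors Erin's description: the administrator manages storage through familiar claim-style objects, while the application only ever sees the generated ConfigMap and Secret.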
>> You know, give us the update on what Rook is, how that fits with this conversation we've been having, and where we are with the maturity of it. >> Right. And Rook, as was on the keynote this morning, is a great CNCF project with a really healthy community behind it. One of the provisioners we've created as part of those object CRDs is a Rook provisioner for Ceph block, or excuse me, Ceph object. We also have an s3 provisioner. So, you know, we hope to have, just like we had external provisioners in Kubernetes, the same contribution from the community for those. >> Okay, yeah, I remember a couple of years ago at the show, fixing storage for containers in Kubernetes was something that was a little bit contentious, and there were a few different projects out there. >> Right. >> For that, you know, where are we with that? We understand that it's never one solution for every single use case. You already talked about block, file and object. >> Right. >> And how there's going to be a spectrum of options. >> Sure, and so I think there's lots of things to fix. >> Yeah. >> When you talk about that, one of the key things that Rook offered was the ability to ease the deployment of the storage and the administration of it, and, as you know, Rook has a plethora of different storage systems that it provides. And what we're really pushing at Red Hat, which I think is important, is having operators, like the OperatorHub that was released with OpenShift 4.0. Rook will be an operator in there. So what that allows is more automation and true scaling, 'cause that's where we want to get to with hybrid cloud. If you're managing 10,000 clusters, you cannot do that manually. So having Rook, having operators, and automating the storage piece underneath is really critical to make that scale happen. >> Forgive my ignorance. When you say that Rook winds up exposing, for example, an object store underneath — is that its own pile of disks on a system somewhere that it's running? Is it wrapping around an object store provided by other cloud providers? Is it something else entirely? Where do the actual drives that hold my data, when I'm using Rook's object store, live? >> So with Rook today, the object storage that it uses is Ceph object. So it exposes the ability to create the Ceph components underneath, which Rook can lay down and then expose the object piece of that. So that's the first provisioner in there, yep. >> Wonderful. >> Alright, so I guess when I think about object storage, for years it's been, well, I've got s3 compatibility, and that's kind of the big thing. >> Yep. >> Is Rook s3 compatible then? Is it giving more flexibility to users to make this the standard in a cloud native environment? Help us put a fine point on what this is and isn't. >> Yeah, that's a great question, actually, and we get asked it often. So one of the first provisioners we did, just as a proof of concept, was a generic s3 provisioner. And of course, Ceph is s3 compliant, so it also does that, but you know, there isn't a standard for object. So most providers of object are s3 compatible. We found it very easy to take the s3 provisioner we created and create the Ceph one. There wasn't much differentiation, which means it's a great pattern for anyone who wants to onboard. >> Yeah. Do you find that as s3 itself, and of course its competitors at other cloud providers, become more capable, you're starting to see differentiation? An easy example would be with some of the object storage tiers, where there's increased latency on retrievals — in some cases as little as five minutes, or as much as 12 hours. Other providers, like Google Cloud, for example, or Azure, have consistent retrieval times on their archive storage. Is that something where you're going to start seeing divergence as object storage becomes smarter, by, I guess, all of the providers as they race each other to improve their products? >> Absolutely. I think tiering is one of the facets of object that's really critical. And, you know, of course, as we spoke earlier, it's physics, and having data consistency at that very low threshold is important. So it's about using the storage for what it's worth, using the best tools, and pulling object into the ecosystem is part of that. >> Yeah, Erin, is there anything that differentiates Kubernetes storage from what people are familiar with in the past? >> I think Kubernetes storage continues to evolve. The more we learn about how people use Kubernetes, and their needs — I think we listen closely to the community and we develop against that. >> Okay, I guess the other thing is, what kind of feedback are you getting from customers? Where are we along this maturation journey? You know, my history is, I worked when we had to fix networking and storage in the virtualized environment, and it took about a decade. We're five years into Kubernetes. It feels like we've accelerated that based on what we've done in the past, but definitely, when it first started, it was, you know, let's put stateless stuff in containers and storage will be an afterthought. >> Right. >> Or something that was kind of a sidecar over here where you had your repository. >> Right. And I think that's the beauty of Kubefed, is that in order to have true hybrid cloud, and have Federation, we have to come together in consensus with both network, compute and storage. So it really brings the story full circle. >> Perfect. What do you think right now customers are having their biggest challenges with, as they start wrapping their minds around this new way of thinking? I mean, again, it's easy for a tiny start-up — it's Twitter for Pets, or something like that — to spin off in a pure cloud native way, but larger companies with this legacy concept known as a business model that might involve turning a profit generally predate cloud, and have done an awful lot of stuff in the data center. What are they seeing as currently being the limiting factors on their digital transformation? >> So with Kubernetes just being five years old, as we celebrate the birthday today, I think customers are also maturing. You know, they're entering the landscape, learning about Kubernetes, learning how to containerize and, you know, lift and shift their applications, and then they're running up against costs, right? And lock-ins and things they want to avoid. And that's really where we in the community want to provide a platform and a runway for them to have that choice. >> Alright. Erin, any customer successes that you can share with us, either about the operator or about the work specifically? >> Certainly not with Federation — we haven't released it. It will come out in OpenShift 4.2, so we don't have any customer success stories yet, but I would say definitely it's a request, and you know, we're asking customers about it, and whether they're interested. And you will find many times maybe they're not familiar with the word Federation, but they're definitely interested in that use case. >> Okay, how's the general feel? What kind of feedback are you getting from customers so far, things that you're excited about that are happening here at the show? >> I'm just excited that Kubernetes is kind of growing up, and it's becoming a true enterprise-level project that customers rely on and build their business on. >> Well, Erin Boyd, really appreciate you joining us and sharing all the updates. Look forward to the upcoming release, and we'll definitely get to follow up with you soon, to hear about those customers as they start rolling it out. >> Alright, great. Thank you. >> Alright. For Corey Quinn, I'm Stu Miniman, here at KUBECON, CLOUDNATIVECON 2019, Barcelona, Spain. Thanks for watching theCUBE (techno music)
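Since the exchange above leans on "most providers of object are s3 compatible," here is a small, hedged sketch of what that compatibility buys an application: the same client code can point at AWS S3, a Ceph object gateway, or a NooBaa endpoint just by swapping the endpoint URL and keys. The endpoint, credentials, and bucket name below are placeholders of the kind a provisioner would hand back in a ConfigMap and Secret, not real values.

```python
# Sketch: one S3 client, many S3-compatible backends (AWS S3, Ceph RGW, NooBaa, ...).
# The endpoint URL and credentials are placeholders; in a cluster they would typically
# be read from the ConfigMap/Secret generated for the bucket rather than hard-coded.
import boto3

def make_client(endpoint_url: str, access_key: str, secret_key: str):
    return boto3.client(
        "s3",
        endpoint_url=endpoint_url,          # e.g. the Ceph RGW or NooBaa service URL
        aws_access_key_id=access_key,
        aws_secret_access_key=secret_key,
    )

if __name__ == "__main__":
    s3 = make_client("http://rook-ceph-rgw.example.svc:80", "DEMO_ACCESS_KEY", "DEMO_SECRET_KEY")
    s3.create_bucket(Bucket="image-streams")
    s3.put_object(Bucket="image-streams", Key="photos/cube.jpg", Body=b"...")
    print(s3.get_object(Bucket="image-streams", Key="photos/cube.jpg")["Body"].read())
```

This is also why, as Erin notes, the generic s3 provisioner and the Ceph one ended up so similar: the application-facing protocol barely changes across backends.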

Published Date : May 21 2019

SUMMARY :

Erin Boyd of Red Hat discusses hybrid cloud and storage on Kubernetes: application portability and data agility across public and private clouds, Federation (now Kubefed) and the choices it gives around data placement and privacy, and the work to make object storage a first-class citizen through CRDs and provisioners. She covers the NooBaa acquisition, Rook reaching 1.0 and its Ceph-based object provisioner, S3 compatibility, and why operators and automation are critical for managing storage at the scale of thousands of clusters. The conversation closes with where customers are on their Kubernetes maturity journey and what is coming in OpenShift 4.2.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Erin | PERSON | 0.99+
Erin Boyd | PERSON | 0.99+
Corey Quinn | PERSON | 0.99+
Cloud Native Computing Foundation | ORGANIZATION | 0.99+
Boston | LOCATION | 0.99+
Stu Miniman | PERSON | 0.99+
five years | QUANTITY | 0.99+
five minutes | QUANTITY | 0.99+
RedHat | ORGANIZATION | 0.99+
Erin A. Boyd | PERSON | 0.99+
10,000 clusters | QUANTITY | 0.99+
KUBECON | EVENT | 0.99+
Barcelona, Spain | LOCATION | 0.99+
Ecosystem Partners | ORGANIZATION | 0.99+
today | DATE | 0.99+
both | QUANTITY | 0.98+
NooBaa | ORGANIZATION | 0.98+
KubeCon | EVENT | 0.98+
Red Hat | ORGANIZATION | 0.98+
Rook | ORGANIZATION | 0.98+
OpenShift 4.0 | TITLE | 0.98+
12 hours | QUANTITY | 0.98+
Noovaa | ORGANIZATION | 0.97+
OpenShift | TITLE | 0.97+
one | QUANTITY | 0.97+
first | QUANTITY | 0.96+
a decade | QUANTITY | 0.96+
CLOUDNATIVECON 2019 | EVENT | 0.96+
one solution | QUANTITY | 0.95+
s3 | TITLE | 0.95+
OpenShift 4.2 | TITLE | 0.95+
Twitter | ORGANIZATION | 0.95+
Kubernetes | TITLE | 0.95+
first-time | QUANTITY | 0.94+
five years old | QUANTITY | 0.94+
CloudNativeCon Europe 2019 | EVENT | 0.93+
couple of years ago | DATE | 0.91+
One | QUANTITY | 0.91+
Kubefed | ORGANIZATION | 0.91+
over a year | QUANTITY | 0.9+
first provisioners | QUANTITY | 0.9+
this morning | DATE | 0.87+
CloudNativeCon EU 2019 | EVENT | 0.85+
couple of weeks ago | DATE | 0.84+
Google | ORGANIZATION | 0.83+
Noovaa | TITLE | 0.8+
couple | QUANTITY | 0.8+
Linux | TITLE | 0.79+
CLOUDNATIVECON | EVENT | 0.79+
first provisioner | QUANTITY | 0.78+
theCUBE | ORGANIZATION | 0.78+
three | QUANTITY | 0.78+
RedHat Summit | EVENT | 0.76+
Summit | ORGANIZATION | 0.76+
single use case | QUANTITY | 0.73+
OpenShift | ORGANIZATION | 0.73+
Kubernetes | ORGANIZATION | 0.71+
Enterprise | ORGANIZATION | 0.71+

Bryan Liles, VMware & Janet Kuo, Google | KubeCon + CloudNativeCon EU 2019


 

>> Live from Barcelona, Spain, it's theCUBE, covering KubeCon + CloudNativeCon Europe 2019, brought to you by Red Hat, the Cloud Native Computing Foundation and Ecosystem Partners. >> Welcome back to Barcelona, Spain. We're here at the Fira, and seventy seven hundred people are here for KubeCon + CloudNativeCon 2019. I'm Stu Miniman. My co-host for the two days of coverage is Corey Quinn, and joining me are the two co-chairs of this CNCF event: Janet Kuo, who is a software engineer with Google, and, having done the wrap-up on stage in the keynote this morning, Bryan Liles, a senior staff engineer with VMware. Thank you both for joining us. >> Thank you. >> Thanks for having me. >> So let's start. We're celebrating five years of Kubernetes, as Dan Kohn laid out this morning. It came, of course, from Google and Borg and over a decade of experience there. So, Janet, help set the stage for us. >> Um, so I started working on Kubernetes before the 1.4 release, and I'm still a project maintainer today. And I feel so proud to see the progress of this project; it has grown exponentially. Today we already have thirty one thousand contributors, and we expect it to grow even more. >> All right. So, Bryan, you work with some of the original people that helped create Kubernetes, because you came to VMware by way of the Heptio acquisition. Seventy seven hundred people are here, as we said, so it's just about the size of the show we had in Seattle a few months ago, and we expect San Diego is going to be massive when we get there in the fall. But talk to us as the co-chair: what does it mean to put something like this together? >> Well, as a long-time open source person, having seen all these companies move around for, you know, decades now, it's nice to be a part of something that I saw from the sidelines for so, so long. I'm actually... it's kind of surreal, because I didn't do anything special to get here. I just did what I was doing, and, you know, Janet and I just wound up here together. So it's a great feeling, and the best part about it is, whenever I get off stage and I walk outside and I walk back — it's like a ten-minute walk each way — so many people are like, yeah, you really made my morning. And that's super special. >> Yeah. I mean, look, we're huge fans of open source in general and this community especially. So look, you both have full-time jobs, and you're giving your time to support this, so thank you for what you did. And we know it takes an army to put together an event and a community. Bryan, you got up on stage to talk about all the various projects. There are so many pieces here, and we only have a few minutes. Any major highlights you want to pull from the keynote? >> So the biggest — actually, I'll only highlight one: the OpenCensus–OpenTracing merge is great, not only because it's going to make a better product, but because you had two pretty good pieces of software, one from Google — actually, literally both from Google, ultimately. But they realized, hey, we have the same goals, we have similar interfaces, and instead of going through this arms race, what they did is say, look, this is what we'll do: we'll create a new project and we'll merge them. That is, you know, that is one of the best things about open source.
You know, you want to see this in a lot of places, but people are mature enough to say, hey, we're going to actually make something bigger and better for everyone. And that was my favorite update. >> Yeah, well, I tell you, and I'm doing my job well, because literally during the keynote I reached out to Ben, and Ben and Morgan are going to come on the program to talk about that merger later today. That was interesting. >> I've often been accused of having snark as a first language, and I guess in that light, something that I'm not particularly clear on — and this is not the setup for a joke — one announcement that was made on stage today was that Tiller is no longer included in the current version of Helm. And everyone clapped and applauded, and my immediate response was, first off, wow, if you were the person that wrote Tiller, that probably didn't feel so good, given everyone was clapping and happy about it. But it seems that that was big and transformative and revelatory for a lot of the audience. What is Tiller, and why is it perceived as being less than awesome? >> All right, so I will give you a disclaimer, >> please. >> The disclaimer is I do not work on the Helm project... >> Wonderful. >> ...so anything that I say should be fact-checked. >> Excellent. >> So here's the big deal. When Helm was introduced, they had this thing called Tiller, and what Tiller did was run at basically a cluster-wide level to make sure that it could coordinate software being installed into Kubernetes namespaces, or groups of how Kubernetes applications are distributed. So what happened is that that was the best vector for security problems. Basically, you had this root-level piece of software running, and people were figuring out ways to get around it. And it was a big security hole. >> So it wasn't just a component, it was an attack platform. >> It was, one hundred percent. I mean, I remember Bitnami actually wrote a blog post — you know, disclaimer, VMware just bought Bitnami. >> Yes, I insist it's pronounced "bitten, am I." But we'll get to that another time. >> This is a disclaimer — you know, they're now my coworkers. But they wrote a very good article about a year and a half ago about all the attack vectors, and then also gave a solution around that. Now you don't need that solution; what you get by default now is something much more secure. And that's the most important piece. And I think the community really loves Helm, and now they have Helm with better defaults. >> So, Janet, a lot of people at the show talk about, you know, tens of thousands of contributors to it. But that being said, there's still a lot of the world that is just getting started. That was part of the keynote, and I know you wrote something about running workloads in Kubernetes. Talk a little bit about how we're helping those that aren't yet on board to get into the community. >> So I work on SIG Apps. SIG Apps is one of the SIGs, and it owns the workloads APIs. That's why I had that blog post about running workloads in Kubernetes. So basically, you're using Kubernetes' declarative APIs to run different types of applications, and we call them workloads. So you have stateful or stateless workloads, or jobs and daemons, and you have different APIs to run those workloads in Kubernetes. And for those who are just getting started, maybe start with stateless workloads.
That's the easiest one. And then for people who are looking to contribute more, I encourage you to start with maybe small fixes, maybe some documentation or some small PRs, and build your reputation from there — start from small contributions and then work your way up. >> Yeah, so one of the things when I look out there: it's a complex ecosystem now, and there are a lot of pieces in there. A trend we see is a lot of customers looking for managed services, a lot of "I need opinions to help get me through all of these various pieces." What do you say to those people when they're coming in and there's that paradox of choice as they come looking at all the options out there? >> So I would say, start with something simple that works. And then you can always ask others for advice on what works and what doesn't work, and you can hear from their success stories or failure stories. And I think I recently saw a blog post about some people in the community collecting potential failure stories. There's also a talk about Kubernetes failure stories. So maybe you can go there and learn from all those mistakes and then how to build a better system from there. >> I love that. We have to celebrate those failures so that we hopefully can learn from them. Bryan, anything on that from your viewpoint? >> So actually, something I research is developer experience for Kubernetes. Kubernetes is this whole big thing; I look on top of it, and I'm looking from the outside in at how developers interact with Kubernetes. And what we're seeing is that there's lots of room for opportunities and more tools outside of the main Kubernetes space that will help people actually interact with it, because that's not really Kubernetes developers' responsibility. So one thing that we're doing now — and this is something we're doing at VMware that I can talk about — is that we're looking at APIs. We realize that client-go, which is the way that Kubernetes talks with its APIs, is something a lot of people are using externally. But what does it actually mean for a human to use this? A lot of my work is really around: well, that's cool for computers — now what if a human has to use it? So what we're finding is — and I'm going to talk about this in my keynote tomorrow — you know, we're on this journey, and Kubernetes is not the destination. Kubernetes is the vehicle that is getting us to the destination, and we don't even know what it is. So there are lots of spaces where we can look around to improve Kubernetes without even touching Kubernetes itself, because actually, it's pretty good and it's fairly stable in a lot of cases. But it's hard, and that's the best part. So that's lots of work for us to solve. >> From my perspective, one of the turning points in Kubernetes' success story was when it got beyond just Google folks working on it. For better or worse, Google has a certain set of coding standards, and then you bring it to the real world, where there are people who are, let's be honest, like me, where my coding standard is "I should try to write some, some days," and not everything winds up having the same constraints. Not everything has the same approach.
To some extent, it really feels like the tipping point for all of it was when you wind up getting to a position where people are bringing their real-world workloads that don't look like anything anyone would be able to write at Google and keep their job, but that still have to work with this — there wound up being a sort of blossoming effect really accelerating the project. Conversely, other large infrastructure projects, which we need not mention, when they had that tipping point of getting more people involved, they sort of imploded on themselves. I'm curious, do you have any thoughts as to why Kubernetes started thriving where other projects have failed trying to do the same things? >> I have something — you go first. >> I think the biggest thing about Kubernetes is the really strong community and the ecosystem, and also Kubernetes has the extensibility for you to build on top of Kubernetes. We've seen people building frameworks and then platforms — different open source platforms on top of Kubernetes — so other people can use other layers, higher layers of stacks on top of Kubernetes, and just use those open source pieces. So, for example, we have CRDs. It's an API that allows you to build your own customized, Kubernetes-style API. So, say, for databases, you could just create your own Kubernetes-style API and call out to your database or other stuff, and then you can combine them into your own platform. And that's very powerful, because everywhere I can just use the same API, the Kubernetes-style API, to manage almost everything, and that enables Kubernetes to be adopted in different industries, such as IoT, AI and more. >> So actually, this is perfect, because what I was going to say is that the secret of Kubernetes — we don't actually talk about this a lot, but as Joe Beda says — is that Kubernetes is a platform for creating platforms. So Kubernetes really is almost built on itself. You can extend Kubernetes like Kubernetes extends itself, with the same semantics that it lets users extend. So Janet was talking about >> becoming the software that is eating the world. Yeah, it >> literally is. So Janet talked about CRDs — custom resource definitions. It's the same mechanism that Kubernetes uses to add new features. So whenever you're using these mechanisms, you're using Kubernetes — basically the Kubernetes infrastructure — to create. So really, what this is is the toolkit for creating your solutions. That is why I say that Kubernetes is not an endpoint, it's a journey. >> So it's the cloud native system. >> Yeah, and I like the Linux analogy that people talk about. Kubernetes is like Linux. If you think about how Linux — you know, little-l linux, all the subcomponents put together — Kubernetes is like that: it's all these components coming together to create the operating system. And that's the best part about it.
>> Okay, so for the people that are not among the seventy seven hundred that are here, give them a little bit of a walk around the show and some of the nooks and crannies that they might not know. For myself, having been to a number of these — boy, there were so many half-day and full-day workshops yesterday, at least fifteen or seventeen or something like that that I saw. Obviously there are the big keynotes, the Expo Hall is sprawling — I've been to fifteen-, twenty-thousand-person shows, and this Expo Hall feels as bustling as those — as well as tons of breakout sessions. So give us some of the things that people would have been missing if they didn't come to the show. >> So just for the record, if you missed the show, you can still watch all the videos online, and you can also watch the livestream for the keynotes and so on. I personally love the different ways of customizing Kubernetes, so there's a customizing Kubernetes track. There's also the applications track, and I personally love that. And I also like the case studies, so you can go to the case studies track to see different users and end users of Kubernetes sharing their war stories. >> Yes. So there are a few things that you'll miss if you're not here in Barcelona right now. The first thing is that this convention center is huge — it's a ten-minute walk from the door to where we're sitting right now. But more seriously, one of the things you'll miss is that before the conference starts, there are a whole bunch of summits. Red Hat had a summit, and a few other people had summits yesterday, where they talk about things. There are the training sessions, which in a lot of cases aren't recorded. And then another thing is the special interest groups, the SIGs. The Kubernetes SIGs all get together and they have face-to-face discussions, and generally the ones from yesterday were not recorded. So what you're missing is the people who actually make this big machine turn. They get together face to face and, first of all, they build camaraderie, but they also get to discuss items that require high bandwidth, which you really can't do over a GitHub issue or email, or even a Slack call — you can actually get things solved. And the best thing is watching these people, and then you watch the great ideas that in, you know, three, six months to a year become really big things. So I bet yesterday something was discussed — actually, I know of some things that we discussed yesterday — that might fundamentally change how we deal with Kubernetes. So that is the value of being here. And then the third thing is, when you come to a conference like this, where there are thousands of people, there's a lot of conversation that happens between the Expo Hall and the session rooms. People are getting jobs here, people are finding new friends, and people are learning. And the fourth thing, and I'll end with this, is that I walk around looking for people who came in on the diversity scholarships, and I would not hear their stories if I did not come. So I met two people. I met a young lady from New Zealand who got the scholarship and flew here — super smart, but she's at university in New Zealand — and I get to hear her insights on life.
And then I get to share how you could be better at the same thing. I met a gentleman from Zimbabwe yesterday who is going to school, and what I hear is that there are so many smart people without opportunities. So if you're looking for opportunities, it's in these halls. There are a lot of people who either have money for you, or they have resources, whether it's a job or just, you know what, maybe there's someone you can call whenever you're stuck. So there is a lot of benefit to coming to these, if you can get here. >> Talent is evenly distributed; opportunity is not. So I think the diversity scholarship program is one of the most inspirational things I saw mentioned, out of a number of inspirational things. >> I know, it's my favorite part of the community. You know, I am super lucky that I have an employer that can afford to send me here, and I'm also lucky because I probably couldn't afford to send myself here if I wanted to. And I do as much as I can to get people here. >> Well, Bryan and Janet, thank you so much for all you did to put this on and for sharing it with our community here. I'll repeat something that I said in Seattle: there are a lot of cloud shows out there, but if you're looking for that independent cloud show that lives in this multi-cloud, hybrid cloud — whatever you want to call it — world, this is one of the best out there. And the people, absolutely. And the networking opportunities — we had Intuit on earlier, and they talked about how this is the kind of place you come and find a few people that you can hire to train the hundreds of people inside on all of the latest cloud native pieces. >> Can I say one thing, please? This is significant, and it's significant for Janet and I. In the United States, Janet is a minority and I am a minority. This is the largest open source conference series — this is the largest open source conference in Europe, and when we do it at the end of the year, whenever we do San Diego, it'll be the largest open source conference in the world. And look who's running it. You know, my new co-chair is also a minority. This is amazing, and I love that it shows that people who look like us can come up here and do these things, because, like you said, opportunity is the hard thing. Talent is everywhere, it's all over the place. And I'm glad we had a chance to do this. >> All right. Well, Bryan, Janet, thank you so much for all of that. And Corey and I will be back with more coverage after this brief break. Thank you for watching theCUBE.
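To ground the CRD discussion above — "a platform for creating platforms" — here is a minimal sketch of the kind of CustomResourceDefinition Janet and Bryan are describing. Everything under "example.com" / "MyDatabase" is an invented name used purely for illustration; a real database operator would define its own group, kind, and schema (and clusters of the 2019 era used apiextensions.k8s.io/v1beta1 rather than v1).

```python
# Sketch: a CustomResourceDefinition that teaches the Kubernetes API a new, database-shaped object.
# "example.com" / "MyDatabase" are made-up names used only for illustration.
import yaml  # PyYAML

crd = {
    "apiVersion": "apiextensions.k8s.io/v1",
    "kind": "CustomResourceDefinition",
    "metadata": {"name": "mydatabases.example.com"},
    "spec": {
        "group": "example.com",
        "scope": "Namespaced",
        "names": {"plural": "mydatabases", "singular": "mydatabase", "kind": "MyDatabase"},
        "versions": [
            {
                "name": "v1",
                "served": True,
                "storage": True,
                "schema": {
                    "openAPIV3Schema": {
                        "type": "object",
                        "properties": {
                            "spec": {
                                "type": "object",
                                "properties": {
                                    "replicas": {"type": "integer"},
                                    "version": {"type": "string"},
                                },
                            }
                        },
                    }
                },
            }
        ],
    },
}

# After `kubectl apply`, `kubectl get mydatabases` behaves with the same semantics as any
# built-in resource, which is exactly the extensibility described in the interview.
print(yaml.safe_dump(crd))
```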

Published Date : May 21 2019

SUMMARY :

KubeCon + CloudNativeCon Europe 2019 co-chairs Janet Kuo of Google and Bryan Liles of VMware join theCUBE to mark five years of Kubernetes. They discuss the growth of the contributor community, keynote highlights such as the OpenCensus and OpenTracing merge and the removal of Tiller from Helm, guidance for newcomers on the workloads APIs and making first contributions, and why CRDs and extensibility have made Kubernetes a platform for creating platforms. They close with what attendees would miss by not being at the show, from SIG face-to-face meetings to the diversity scholarship program.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Janet | PERSON | 0.99+
Brian | PERSON | 0.99+
Janet Cooper | PERSON | 0.99+
Cory | PERSON | 0.99+
Europe | LOCATION | 0.99+
Ben | PERSON | 0.99+
Seattle | LOCATION | 0.99+
New Zealand | LOCATION | 0.99+
Barcelona | LOCATION | 0.99+
Zimbabwe | LOCATION | 0.99+
Red Hat | ORGANIZATION | 0.99+
Google | ORGANIZATION | 0.99+
Burnett | PERSON | 0.99+
two people | QUANTITY | 0.99+
Bernetti | PERSON | 0.99+
Morgan | PERSON | 0.99+
two days | QUANTITY | 0.99+
three | QUANTITY | 0.99+
fifteen | QUANTITY | 0.99+
Bryan Liles | PERSON | 0.99+
Brian S O | PERSON | 0.99+
United States | LOCATION | 0.99+
Barcelona, Spain | LOCATION | 0.99+
Corey Quinn | PERSON | 0.99+
Lennox | PERSON | 0.99+
San Diego | LOCATION | 0.99+
yesterday | DATE | 0.99+
seventy seven hundred people | QUANTITY | 0.99+
KubeCon | EVENT | 0.99+
Ada | PERSON | 0.99+
One | QUANTITY | 0.99+
Coburn | PERSON | 0.99+
Jan | PERSON | 0.99+
Wasn't Helm | TITLE | 0.99+
five years | QUANTITY | 0.99+
Nami | PERSON | 0.99+
Cooper | PERSON | 0.99+
one hundred percent | QUANTITY | 0.99+
fifteen twenty thousand people | QUANTITY | 0.99+
tomorrow | DATE | 0.99+
seventeen | QUANTITY | 0.99+
seventy seven hundred | QUANTITY | 0.99+
FBI | ORGANIZATION | 0.99+
FTO | ORGANIZATION | 0.99+
thirty one thousand contributors | QUANTITY | 0.99+
ten minute | QUANTITY | 0.99+
Janet Kuo | PERSON | 0.99+
ten minute | QUANTITY | 0.98+
both | QUANTITY | 0.98+
today | DATE | 0.98+
one | QUANTITY | 0.98+
KubeCon Cloud NativeCon | EVENT | 0.98+
first language | QUANTITY | 0.98+
six months | QUANTITY | 0.98+
seventy seven hundred people | QUANTITY | 0.98+
Ecosystem Partners | ORGANIZATION | 0.98+
first | QUANTITY | 0.98+
third thing | QUANTITY | 0.97+
VMware | ORGANIZATION | 0.97+
l. Lennox | PERSON | 0.97+
apple | ORGANIZATION | 0.97+
Kou Burnett | PERSON | 0.97+
one announcement | QUANTITY | 0.97+
Siri | TITLE | 0.96+
Natty | PERSON | 0.96+
first thing | QUANTITY | 0.96+
Montana | LOCATION | 0.96+
each way | QUANTITY | 0.96+
Kubernetes | TITLE | 0.96+

Rob Szumski, Red Hat OpenShift | KubeCon + CloudNativeCon EU 2019


 

>> Live from Barcelona, Spain. It's theCUBE! Covering KubeCon, CloudNativeCon, Europe 2019. Brought to you by Red Hat, the Cloud Native Computing Foundation, and Ecosystem Partners. >> Hi, and welcome back. This is KubeCon, CloudNativeCon 2019 here in Barcelona. 7700 in attendance according to the CNCF foundation. I'm Stu Miniman and my co-host for this week is Corey Quinn. And happy to welcome back to the program, a cube-i-lom Rob Szumski, who's the Product Manager for Red Hat OpenShift. Rob, thanks so much for joining us >> Happy to be here. >> All right, so a couple of weeks ago, we had theCUBE in Boston. You know, short drive for me, didn't have to take a flight as opposed to... I'm doing okay with the jet lag here, but Red Hat Summit was there. And it was a big crowd there, and the topic we're going to talk about with you is operators. And it was something we talked about a lot, something about the ecosystem. But let's start there. For our audience that doesn't know, What is an operator? How does it fit into this whole cloud-native space in this ecosystem? >> (Corey) And where can you hire one? >> (laughs) So there's software programs first of all. And the idea of an operator is everything it takes to orchestrate one of these complex distributor applications, databases, messaging queues, machine learning services. They all are distinct components that all need to be life-cycled. And so there's operational expertise around that, and this is something that might have been in a bash script before, you have a Wiki page. It's just in your head, and so it's putting that into software so that you can stamp out mini copies of that. So the operational expertise from the experts, so you want to go to the folks that make MongoDB for Mongo, for Reddits, for CouchBase, for TensorFlow, whatever it is. Those organizations can embed that expertise, and then take your user configuration and turn that into Kubernetes. >> Okay, and is there automation in that? When I hear the description, it reminds me a little bit of robotic process automation, or RPA, which you talk about, How can I harem them? RPA is, well there's certain jobs that are rather repetitive and we can allow software to do that, so maybe that's not where it is. But help me to put it into the >> No, I think it is. >> Okay, awesome. >> When you think about it, there's a certain amount of toil involved in operating anything and then there's just mistakes that are made by humans when you're doing this. And so you would rather just automate away that toil so you can spend you human capitol on higher level tasks. So that's what operator's all about. >> (Stu) All right. Great. >> Do you find that operator's are a decent approach to taking things that historically would not have been well-suited for autoscaling, for example, because there's manual work that has to happen whenever a no-joinser leaves a swarm. Is that something operators tend to address more effectively? Or am I thinking about this slightly in the wrong direction? >> Yeah, so you can do kind of any Kubernetes event you can hook into, so if your application cares about nodes coming and leaving, for example, this is helpful for operators that are operating the infrastructure itself, which OpenShift has under the hood. But you might care about when new name spaces are created or this pod goes away or whatever it is. You can kind of hook into everything there. >> So, effectively it becomes a story around running stateful things in what was originally designed for stateless containers. 
>> Yeah, that can help you because you care about nodes going away because your storage was on it, for example. Or, now I need to re-balance that. Whatever that type of thing is it's really critical for running stateful workloads. >> Okay, maybe give us a little bit of context as to the scope of operators and any customer examples you have that could help us add a little bit of concreteness to it. >> Yeah, they're designed to run almost anything. Every common workload that you can think about on an OpenShift cluster, you've got your messaging queues. We have a product that uses an operator, AMQ Streams. It's Kafka. And we've got folks that heavily use a Prometheus operator. I think there's a quote that's been shared around about one of our customer's Ticketmaster. Everybody needed some container native monitoring and everybody could figure out Prometheus on their own. Or they could use operator. So, they were running, I think 300-some instances of Prometheus and dev and staging and this team, that team, this person just screwing around with something over here. So, instead of being experts in Prometheus, they just use the operator then they can scale out very quickly. >> That's great because one of the challenges in this ecosystem, there's so many pieces of it. We always ask, how many companies need to be expert on not just Kubernetes, but any of these pieces. How does this tie into the CNCF, all the various projects that are available? >> I think you nailed it. You have to integrate all this stuff all together and that's where the value of something like OpenShift comes at the infrastructure layer. You got to pick all your networking and storage and your DNS that you're going to use and wire all that together and upgrade that. Lifecycle it. The same thing happens at a higher level, too. You've got all these components, getting your Fluentd pods down to operating things like Istio on Service Mesh's, serviceless workloads. All this stuff needs to be configured and it's all pretty complex. It's moving so fast, nobody can be an expert. The operator's actually the expert, embedded from those teams which is really awesome. >> You said something before we got started. A little bit about a certification program for operators. What is that about? >> We think of it as the super set of our community operators. We've got the TensorFlow community, for example, curates an operator. But, for companies that want to go to market jointly with Red Hat, we have a certification program that takes any of their community content, or some of their enterprise distributions and makes sure that it's well-tested on OpenShift and can be jointly supported by OpenShift in that partner. If you come to Red Hat with a problem with a MongoDB operator, for example, we can jointly solve that problem with MongoDB and ultimately keep your workload up and keep it running. We've got that times a bunch of databases and all kinds of servers like that. You can access those directly from OpenShift which is really exciting. One-click install of a production-ready Mongo cluster. You don't need to dig through a bunch of documentation for how that works. >> All right, so Rob, are all of these specific only to OpenShift, or will they work with flavors of Kubernetes? >> Most of the operators work just against the generic Kubernetes cluster. Some of them also do hook into OpenShift to use some of our specialized security primitives and things like that. 
That's where you get a little bit more value on OpenShift, but you're just targeting Kubernetes at the end of the day. >> What do you seeing customers doing with this specifically? I guess, what user stories are you seeing that is validating that this is the right direction to go in? >> It's a number of different buckets. The first one is seeing folks running services internally. You traditionally have a DBA team that maybe runs the shared database tier and folks are bringing that the container native world from their VM's that they're used to. Using operators to help with that and so now it's self-service. You have a dedicated cluster infrastructure team that runs clusters and gives out quota. Then, you're just eating into that quota to run whatever workloads that you want in an operator format. That's kind of one bucket of it. Then, you see folks that are building operators for internal operation. They've got deep expertise on one team, but if you're running any enterprise today especially like a large scale Ecommerce shop, there's a number of different services. You've got caching tier, and load balancing tiers. You've got front-ends, you've got back-ends, you've got queues. You can build operators around each one of those, so that those teams even when they're sharing internally, you know, hey where's the latest version of your stack? Here's the operator, go to town. Run it in staging QA, all that type of stuff. Then, lastly, you see these open source communities building operators which is really cool. Something like TensorFlow, that community curates an operator to get you one consistent install, so everyone's not innovating on 30 different ways to install it and you're actually using it. You're using high level stuff with TensorFlow. >> It's interesting to lay it out. Some of these okay, well, a company is doing that because it's behind something. Others you're saying it's a community. Remind me, just Red Hat's long history of helping to give if you will, adult supervision for all of these changes that are happening in the world out there. >> It's a fast moving landscape and some tools that we have are our operator SDK are helping to tame some of that. So, you can get quickly up and running, building an operator whether you are one of those communities, you are a commercial vendor, you're one of our partners, you're one of our customers. We've got tools for everybody. >> Anything specific in the database world that's something we're seeing, that Cambrian explosion in the database world? >> Yeah, I think that folks are finally wrapping their heads around that Kubernetes is for all workloads. And, to make people feel really good about that, you need something like an operator that's got this extremely well-tested code path for what happens when these databases do fail, how do I fail it over? It wasn't just some person that went in and made this. It's the expert, the folks that are committing to MongoDB, to CouchBase, to MySQL, to Postgres. That's the really exciting thing. You're getting that expertise kind of as extension of your operations team. >> For people here at the show, are there sessions about operators? What's the general discussion here at the show for your team? >> There's a ton. Even too many to mention. There's from a bunch of different partners and communities that are curating operators, talking about best practices for managing upgrades of them. Users, all that kind of stuff. 
I'm going to be giving a keynote, kind of an update about some of stuff we've been talking about here later on this evening. It's all over the place. >> What do you think right now in the ecosystem is being most misunderstood about operators, if anything? >> I think that nothing is quite misunderstood, it's just wrapping your head around what it means to operate applications in this manner. Just like Kubernetes components, there's this desired state loop that's in there and you need to wrap your head around exactly what needs to be in that. You're declarative state is just the Kubernetes API, so you can look at desired and actual and make that happen, just like all the Kub components. So, just looking at a different way of thinking. We had a panel yesterday at the OpenShift Commons about operators and one of the questions that had some really interesting answers was, What did you understand about your software by building an operator? Cause sometimes you need to tease apart some of these things. Oh, I had hard coded configuration here, one group shared that their leader election was not actually working correctly in every single incidences and their operator forced them to dig into that and figure out why. So, I think it's a give and take that's pretty interesting when you're building one of these things. >> Do you find that customers are starting to rely on operators to effectively run their own? For example, MongoDB inside of their Kubernetes clusters, rather than depending upon a managed service offering provided by their public cloud vendor, for example. Are you starting to see people effectively reducing public cloud to baseline primitives at a place to run containers, rather than the higher level services that are starting to move up the stack? >> A number of different reasons for that too. You see this for services if you find a bug in that service, for example, you're just out of luck. You can't go introspect the versions, you can't see how those components are interacting. With an operator you have an open source stack, it's running on your cluster in your infrastructure. You can go introspect exactly what's going on. The operator has that expertise built in, so it's not like you can screw around with everything. But, you have much more insight into what's going on. Another thing you can't get with a cloud service is you can't run it locally. So, if you've got developers that are doing development on an airplane, or just want to have something local so it's running fast, you can put your whole operator stack right on your laptop. Not something you can do with a hosted service which is really cool. Most of these are opens source too, so you can go see exactly how the operator's built. It's very transparent, especially if you're going to trust this for a core part of the infrastructure. You really want to know what's going on under the hood. >> Just to double check, all this can run on OpenShift? It is agnostic to where it lives, whether public cloud or data center? >> Exactly. These are truly hybrid services, so if you're migrating your database to here, for example, over now you have a truly hybrid just targeting Kubernetes environment. You can move that in any infrastructure that you like. This is one of the things that we see OpenShift customers do. Some of them want to be cloud-to-cloud, cloud-to-on-prem, different environments on prem only, because you've got database workloads that might not be leaving or a mainframe you need to tie into, a lot of our FSI customers. 
Operators can help you there, where you can't move some of those workloads. >> Cloud-to-on-prem makes a fair bit of sense to me. One thing I'm not seeing as much of in the ecosystem is cloud-to-cloud. What are you seeing that's driving that? >> I think everybody has their own cloud that they prefer, for whatever reasons. I think it's typically not even cost. It's tooling and cultural change. And so you kind of invest in one of those. I think people are investing in technologies that might allow them to leave in the future, operators and Kubernetes being one of those important things. But that doesn't mean that they're not perfectly happy running on one cloud versus the other, running Kubernetes on top of that. >> Rob, really appreciate all the updates on operators. Thanks so much for joining us again. >> Absolutely. It's been fun. >> Good luck on the keynote. >> Thank you. >> For Corey Quinn, I'm Stu Miniman, back with more coverage, two days live, wall to wall, here at KubeCon CloudNativeCon 2019 in Barcelona, Spain. Thanks for watching.

Published Date : May 21 2019

SUMMARY :

Rob Szumski joins hosts Stu Miniman and Corey Quinn at KubeCon + CloudNativeCon EU 2019 in Barcelona to discuss Kubernetes Operators: how teams use them for self-service databases and internal services, how open source communities curate operators for projects like TensorFlow, how the Operator SDK helps partners and customers build their own, and why operators running on OpenShift or plain Kubernetes keep workloads portable across cloud and on-prem environments.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Corey Quinn | PERSON | 0.99+
Red Hat | ORGANIZATION | 0.99+
Stu Miniman | PERSON | 0.99+
Boston | LOCATION | 0.99+
Rob Szumski | PERSON | 0.99+
Barcelona | LOCATION | 0.99+
Rob | PERSON | 0.99+
Cloud Native Computing Foundation | ORGANIZATION | 0.99+
CNCF | ORGANIZATION | 0.99+
30 different ways | QUANTITY | 0.99+
yesterday | DATE | 0.99+
One-click | QUANTITY | 0.99+
two days | QUANTITY | 0.99+
MySQL | TITLE | 0.99+
KubeCon | EVENT | 0.99+
Barcelona, Spain | LOCATION | 0.99+
Ecosystem Partners | ORGANIZATION | 0.99+
Prometheus | TITLE | 0.99+
Corey | PERSON | 0.99+
one | QUANTITY | 0.99+
OpenShift | TITLE | 0.99+
MongoDB | TITLE | 0.98+
Kafka | TITLE | 0.98+
Kubernetes | TITLE | 0.98+
Red Hat Summit | EVENT | 0.98+
one team | QUANTITY | 0.98+
CloudNativeCon | EVENT | 0.98+
first one | QUANTITY | 0.96+
CloudNativeCon 2019 | EVENT | 0.96+
one cloud | QUANTITY | 0.96+
this week | DATE | 0.94+
CouchBase | TITLE | 0.94+
CloudNativeCon EU 2019 | EVENT | 0.93+
TensorFlow | TITLE | 0.91+
One thing | QUANTITY | 0.91+
Europe | LOCATION | 0.9+
2019 | EVENT | 0.89+
KubeCon CloudNativeCon 2019 | EVENT | 0.89+
today | DATE | 0.88+
couple of weeks ago | DATE | 0.87+
one group | QUANTITY | 0.87+
300 | OTHER | 0.85+
each one | QUANTITY | 0.85+
Reddits | ORGANIZATION | 0.83+
OpenShift Commons | ORGANIZATION | 0.83+
Ticketmaster | ORGANIZATION | 0.83+
this evening | DATE | 0.79+
one of | QUANTITY | 0.79+
7700 | QUANTITY | 0.79+
Postgres | TITLE | 0.77+
single incidences | QUANTITY | 0.75+
FSI | ORGANIZATION | 0.73+
double | QUANTITY | 0.69+

Bob Ward & Jeff Woolsey, Microsoft | Dell Technologies World 2019


 

(energetic music) >> Live from Las Vegas. It's theCUBE. Covering Dell Technologies World 2019. Brought to you by Dell Technologies and its Ecosystem Partners. >> Welcome back to theCUBE, the ESPN of tech. I'm your host, Rebecca Knight, along with my co-host, Stu Miniman. We are here live in Las Vegas at Dell Technologies World, the 10th anniversary of theCUBE being here at this conference. We have two guests for this segment. We have Jeff Woolsey, Principal Program Manager for Windows Server/Hybrid Cloud at Microsoft. Welcome, Jeff. >> Thank you very much. >> And Bob Ward, Principal Architect at Microsoft. Thank you both so much for coming on theCUBE. >> Thanks, glad to be here. >> It's a pleasure. Honor to be here on the 10th anniversary, by the way. >> Oh, is that right? >> Well, it's a big milestone. >> Congratulations. >> Thank you very much. >> I've never been to theCUBE. I didn't even know what it was. >> (laughs) >> Like, what is this thing? >> So it has now been a couple of days since Tatiana Dellis stood up on that stage and talked about the partnership. Now that we're sort of a few days past that announcement, what are you hearing? What's the feedback you're getting from customers? Give us some flavor there. >> Well, I've been spending some time in the Microsoft booth and, in fact, I was just chatting with a bunch of the guys that have been talking with a lot of customers as well, and we all came to the consensus that everyone's telling us the same thing. They're very excited to be able to use Azure, to be able to use VMware, to be able to use these in the Azure Cloud together. They feel like it's the best of both worlds. I already have my VMware, I'm using my Office 365, I'm interested in doing more, and now they're both collocated and I can do everything I need together. >> Yeah, it was pretty interesting for me, 'cause VMware and Microsoft have had an interesting relationship. I mean, the number one application that always lived on a VM was Microsoft stuff, from the operating system standpoint and everything, but especially in the end-user computing space, Microsoft and VMware weren't necessarily on the same page, so to see both CEOs, also both CUBE alums, up there talking about that really had most of us sit up and take notice. Congratulations on the progress. >> For me, being in the SQL Server space, it's a hugely popular workload on VMware and virtualization, as you know, so everybody's coming up to me saying, when can I start running SQL Server in this environment? So we're excited to kind of see the possibilities there. >> Customers, they live in a heterogeneous environment. Multicloud has only amplified that. It's like, I want to be able to choose my infrastructure, my Cloud, and my application of choice, and know that my vendors are going to rally around me and make this easy to use. >> This is about meeting our customers where they are, giving them the ability to do everything they need to do, and making our customers just super productive. >> Yeah, absolutely. >> So, Jeff, on some of the new specifics, give us the update as to the pieces of the puzzle and the various options that Microsoft has in this ecosystem. >> Well, a lot of these things are still coming to light, and I would tell people definitely take a look at the blog. The blog really goes in depth.
But a key part of this is, for customers that want to use their VMware, you get to provision your resources using, for example, the well-known, easy-to-use Azure infrastructure and Azure Portal, but when it's time to actually do your VMs or configure your network, you get to use all of the same tools that you're using. So your vCenter, your vSphere, all of the things that a VMware administrator knows how to do, you continue to use those. So it feels familiar. You don't feel like there's a massive change going on. And then when you want to hook this up to your Azure resources, we're making that super easy as well, through integration in the portal. And you're going to see a lot more. I think really this is just the beginning of a long road map together. >> I want to ask you about SQL 19. I know that's your value, so-- >> That's what I do, I'm the SQL guy. >> Yeah, so tell us what's new. >> Well, you know, we launched SQL 19 last year at Ignite with our preview of SQL 19. And it'll be, by the way, it'll be generally available in the second half of this calendar year. We did something really radical with SQL 19. We did something called data virtualization, with PolyBase. Imagine, as a SQL customer, connecting to SQL Server and then getting access to Oracle, MongoDB, Hadoop data sources, all sorts of different data in your environment, but you don't move the data. You just connect to SQL Server and get access to everything in your corporate environment now. We realize you're not just going to have SQL Server in your environment. You're going to have everything. But we think SQL can become like your new data hub to put that together. And then we built something called big data clusters, where we just deploy all that for you automatically. We even actually built a Hadoop cluster for you with SQL. It's kind of radical stuff for the normal database people, right? >> Bob, it's fascinating times. We know it used to be, you know, I have one database, and now when I talk to customers, no, I have a dozen databases, and my sources of data are everywhere, and it's an opportunity to leverage the data, but boy, are there some challenges. How are customers getting their arms around this? >> I mean, it's really difficult. We have a lot of people that are SQL Server customers that realize they have those other data sources in their environment, but they have a skill called T-SQL, it's a programming language. And they don't want to lose it, and they don't want to learn, like, 10 other languages, but they have to access those data sources. Let me give you an example. You've got Oracle in a Linux environment as your accounting system and you can't move it to SQL Server. No problem. Just use SQL with your T-SQL language to query that data, get the results, and join it with your structured data in SQL Server itself. So that's a radical new thing for us to do, and it's all coming in SQL 19. >> And what it helps-- what it really helps break down is, when you have all of these disparate sources and disparate databases, everything gets siloed. And one of the things I have to remind people is, when I talk to people about their data center modernization, very often they'll talk about, you know, servers and data that are 20, 30 years, even decades old, and they talk about it almost like it's baggage, it's luggage. I'm like, no, that's your company, that's your history. That data is all those customer interactions. Wouldn't it be great if you could actually take better advantage of it?
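As a rough sketch of what the PolyBase scenario above looks like from an application's point of view, the Go program below runs a single T-SQL query that joins a local SQL Server table with an external table mapped onto an Oracle source, so the Oracle data is queried in place rather than copied. The table names, columns, and connection string are hypothetical, and the external table is assumed to have been set up beforehand by a DBA with CREATE EXTERNAL DATA SOURCE and CREATE EXTERNAL TABLE.

```go
// Joining local SQL Server data with a PolyBase external table from Go.
// All identifiers and credentials below are placeholders.
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/microsoft/go-mssqldb" // registers the "sqlserver" driver
)

func main() {
	// Hypothetical connection string; substitute your own server and database.
	db, err := sql.Open("sqlserver", "sqlserver://appuser:secret@sqlhost:1433?database=sales")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// dbo.Orders lives in SQL Server; ext.AccountingLedger is a PolyBase
	// external table backed by the Oracle accounting system. To the query,
	// both look like ordinary tables.
	const q = `
        SELECT o.OrderID, o.Amount, l.GLAccount
        FROM dbo.Orders AS o
        JOIN ext.AccountingLedger AS l ON l.OrderID = o.OrderID
        WHERE o.OrderDate >= @p1;`

	rows, err := db.Query(q, "2019-01-01")
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	for rows.Next() {
		var id int64
		var amount float64
		var account string
		if err := rows.Scan(&id, &amount, &account); err != nil {
			log.Fatal(err)
		}
		fmt.Println(id, amount, account)
	}
	if err := rows.Err(); err != nil {
		log.Fatal(err)
	}
}
```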
With this new version of SQL, you can bring all of these together and then start to leverage things like ML and AI to actually better harvest and data-mine it, rather than keeping those in disparate silos that you can't access. >> How ready would you say your customers are to take advantage of AI and ML and all the other-- >> It's interesting you say that, because we actually launched the ability to run R and Python with SQL Server two years ago. And so we've got a whole new class of customers, like data scientists now, that are working together with DBAs to start to put those workloads together with SQL Server, so it's actually starting to become a really big deal for a lot of our community. >> Alright, so, Jeff, we had theCUBE at Microsoft Ignite last year, the first time we'd done a Microsoft show. As you mentioned, our 10th year here, at what used to be EMC World. It was interesting for me to dig in. There are so many different stack options, like we heard this week with Dell Technologies. With Azure, I understood things a lot from the infrastructure side. I talked to a lot of your partners, who talked to me about how many nodes and how many cores and all that stuff. But very clearly at the show, Azure Stack is an extension of Azure, and therefore the applications that live on it, how I manage that, I should think Azure first, not infrastructure first. There are other solutions that extend the infrastructure side, things like WSSD I heard a lot about. But give us the update on Azure Stack, always of interest in the Cloud, watching where that fits and some of the other adjacent pieces of the portfolio. >> So the Azure Stack is really becoming a rich portfolio now. So we launched with Azure Stack, which is, again, to give you that Cloud consistency. So you can literally write applications that you can run on premises and you can move to the Cloud. And you can do this without any code change. At the same time, a bunch of customers came to us and they said, this is really awesome, but we have other environments where we just simply need to run traditional workloads. We want to run traditional VMs and containers and stuff like that. But we really want to make it easy to connect to the Cloud. And so what we have actually launched is Azure Stack HCI. It's been out about a month, month and a half. And, in fact, here at Dell Technologies World, we actually have Azure Stack HCI solutions that are shipping, that are on the marketplace right now here at the show as well, and I was just demoing one to someone who was blown away at just how easy it is with our admin center integration to actually manage the hyper-converged cluster and very quickly and easily connect it to Azure, so that I can replicate a virtual machine to Azure with one click. So I can back up to Azure in just a couple of clicks. I can set up easy network connectivity and all of these things. And best yet, Dell just announced their integration for their servers into admin center here at Dell Technologies World. So there's a lot that we're doing together on premises as well. >> Okay, so if I understand right, is that one of what Dell calls Ready Nodes, or something in the VxFlex family? >> Yes. >> From that standpoint, the HCI market is something that, when we wrote about it when it was first coming out, it made sense that, really, the operating system and hypervisor companies would take a lead in that space.
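Picking up the earlier point about running R and Python inside SQL Server: Machine Learning Services exposes the sp_execute_external_script procedure, so a Python snippet can execute next to the data instead of the data being pulled out to a separate environment. The Go call below is a hedged sketch; the table, column, and connection details are invented, and the feature has to be installed and enabled on the instance before anything like this would run.

```go
// Calling a Python script inside SQL Server via sp_execute_external_script.
// Identifiers and credentials are placeholders for illustration only.
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/microsoft/go-mssqldb"
)

func main() {
	db, err := sql.Open("sqlserver", "sqlserver://appuser:secret@sqlhost:1433?database=sales")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// The embedded Python averages a column of the input result set and hands
	// the answer back to SQL Server as OutputDataSet.
	const q = `
        EXEC sp_execute_external_script
            @language = N'Python',
            @script = N'
import pandas as pd
OutputDataSet = pd.DataFrame({"avg_amount": [InputDataSet["Amount"].mean()]})
',
            @input_data_1 = N'SELECT Amount FROM dbo.Orders'
        WITH RESULT SETS ((avg_amount FLOAT));`

	var avg float64
	if err := db.QueryRow(q).Scan(&avg); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("average order amount: %.2f\n", avg)
}
```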
We saw VMware do it aggressively, and Microsoft had a number of different offerings, but maybe explain why this offering today versus where we were five years ago with HCI. >> Well, one of the things that we've been seeing is, as people move to the Cloud and they start to modernize their applications and their portfolio, we see two things happen. Generally, there are some apps that people say, hey, I'm obviously going to move that stuff to Azure. For example, Exchange. Office 365, Microsoft, you manage my mail for me. But then there are a bunch of apps that people say are going to stay on-prem. So, for example, in the case of SQL, SQL is actually an example of one I see going in both places. Some people want to run SQL up in the Cloud, 'cause they want to take advantage of some of the services there. And then there are people who say, I have SQL that is never, ever, ever, ever, ever going to the Cloud, because of latency or for governance and compliance. So I want to run that on modern hardware that's super fast. So these new Dell solutions have Intel Optane DC Persistent Memory and lots of cores. >> I'm excited about that stuff, man. >> Oh my gosh, yes. Optane Persistent Memory and lots of cores, lots of fast networking. So it's modern, but it's also secure. Because a lot of servers are still very old, five, seven, ten years old, and those don't have things like TPM, Secure Boot, UEFI. And so you're running on a very insecure platform. So we want people to modernize on new hardware with a new OS and platform that's secure, take advantage of the latest and greatest, and then make it easy to connect up to Azure for hybrid cloud. >> Persistent Memory's pretty exciting stuff. >> Yes. >> Actually, Dell EMC and Intel just published a paper using SQL Server to take advantage of that technology. SQL can be an I/O-bound application. You've got to have data and storage, right? So now Dell EMC partnered with SQL 19 to access Persistent Memory and bypass the I/O part of the kernel itself. And I think they achieved something like 170% faster performance versus even a fast NVMe. It's a great example of just using a new technology, but putting the code in SQL to have that intelligence to figure out how fast Persistent Memory can be for your application. >> I want to ask about the cultural implications of the Dell-Microsoft partnership, because, you know, these two companies are tech giants and really of the same generation. They're sort of the Gen Xers, in their 30s and 40s; they're not the startups, they've been around the block. So can you talk a little bit about what it's like to work so closely with Dell and sort of the similarities and maybe the differences? >> Sure. >> Well, first of all, we've been doing it for, like you said, we've been doing this for a while. So it's not like we're strangers to this. And we've always had very close collaboration in a lot of different ways. Whether it was in the client, whether it's tablets, whether it's devices, whether it's servers, whether it's networking. Now, what we're doing is upping our cloud game. Essentially what we're doing is, we're saying there is an area here in Cloud where we can both work a lot closer together and take advantage of the work that we've done traditionally at the hardware level. Let's take that engineering investment and let's do that in the Cloud together to benefit our mutual customers. >> Well, SQL Server is just a primary application that people like to run on Dell servers.
And I've been here for 26 years at Microsoft, and I've seen a lot of folks run SQL Server on Dell, but lately, when I've been talking to Dell, it's not just about running SQL on hardware, it's about solutions. I was even having discussions yesterday with Dell about taking our ML and AI services with SQL and how Dell could even package ready solutions with their offerings using our software stack, but in addition, how would you bring machine learning and SQL and AI together with a whole Dell comp-- So it's not so much about talking about the servers anymore, even though that's great; it's all about solutions, and I'm starting to see that conversation happen a lot lately. >> And it's generally not a server conversation. That's one of the reasons why Azure Stack HCI is important. Because customers-- customers don't come to me and say, Jeff, I want to buy a server. No, I want to buy a solution. I want something that's pre-configured, pre-validated, pre-certified. That's why, when I talk about Azure Stack HCI, invariably I'm going to get the question: can I build my own? Yes, you can build your own. Do I recommend it? No, I would actually recommend you take a look at our Azure Stack HCI catalog. Like I said, we've got Dell EMC solutions here, because not only is the hardware certified for Windows Server, but then we go above and beyond: we actually run a whole bunch of burn-in tests, a bunch of stress tests. We actually configure and tune these things for the best possible performance and security, so it's ready to go. Dell EMC can ship it to you and you're up and running, versus, hey, I'm trying to configure and make all this stuff work and then test it for the next few months. No, you're able to consume Cloud very quickly, connect right up, and, boom, you've got hybrid in the house. >> Exactly. >> Jeff and Bob, thank you both so much for coming on theCUBE. It was great to have you. >> Our pleasure. Thanks for having us. Enjoyed it, thank you. >> I'm Rebecca Knight for Stu Miniman. We will have more of theCUBE's live coverage of Dell Technologies World coming up in just a little bit.

Published Date : May 2 2019

SUMMARY :

Jeff Woolsey and Bob Ward of Microsoft join Rebecca Knight and Stu Miniman at Dell Technologies World 2019 to discuss the Microsoft-Dell partnership: running VMware workloads alongside Azure, SQL Server 2019's PolyBase data virtualization and big data clusters, in-database R and Python, Azure Stack and Azure Stack HCI with Windows Admin Center integration on Dell EMC hardware, and Intel Optane DC Persistent Memory performance gains for SQL Server.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Jeff | PERSON | 0.99+
Jeff Woolsey | PERSON | 0.99+
Rebecca Knight | PERSON | 0.99+
Tatiana Dellis | PERSON | 0.99+
Stu Miniman | PERSON | 0.99+
Microsoft | ORGANIZATION | 0.99+
Bob Ward | PERSON | 0.99+
Dell Technologies | ORGANIZATION | 0.99+
SQL 19 | TITLE | 0.99+
five | QUANTITY | 0.99+
Dell | ORGANIZATION | 0.99+
170% | QUANTITY | 0.99+
Las Vegas | LOCATION | 0.99+
Bob | PERSON | 0.99+
seven | QUANTITY | 0.99+
Azure Stack | TITLE | 0.99+
yesterday | DATE | 0.99+
26 years | QUANTITY | 0.99+
SQL Server | TITLE | 0.99+
Intel | ORGANIZATION | 0.99+
SQL | TITLE | 0.99+
ten years | QUANTITY | 0.99+
last year | DATE | 0.99+
two guests | QUANTITY | 0.99+
Azure Stack HCI | TITLE | 0.99+
both | QUANTITY | 0.99+
two companies | QUANTITY | 0.99+
Azure | TITLE | 0.99+
Dell Technologies World | ORGANIZATION | 0.99+
Oracle | ORGANIZATION | 0.99+
Office 365 | TITLE | 0.99+
10th year | QUANTITY | 0.99+
five years ago | DATE | 0.99+
two years ago | DATE | 0.98+
Python | TITLE | 0.98+
two things | QUANTITY | 0.98+
both places | QUANTITY | 0.98+