
Search Results for OpenShift Container Platform:

Eric Herzog, IBM & Sam Werner, IBM | CUBE Conversation, October 2020


 

(upbeat music) >> Announcer: From theCUBE Studios in Palo Alto and Boston, connecting with thought leaders all around the world. This is a CUBE conversation. >> Hey, welcome back everybody. Jeff Frick here with theCUBE, coming to you from our Palo Alto studios today for a CUBE conversation. We've got a couple of CUBE alumni veterans who've been on a lot of times. They've got some exciting announcements to tell us today, so we're excited to jump into it. So let's go. First we're joined by Eric Herzog. He's the CMO and VP worldwide storage channels for IBM Storage, and has been on theCUBE many times. Eric, great to see you. >> Great, thanks very much for having us today. >> Jeff: Absolutely. And joining him, I think all the way from North Carolina, Sam Werner, the VP and offering manager, business line executive for storage at IBM. Sam, great to see you as well. >> Great to be here, thank you. >> Absolutely. So let's jump into it. So Sam, you're in North Carolina, I think that's where the Red Hat people are. You guys have Red Hat, a lot of conversations about containers, containers are going nuts. We know containers are going nuts, and it was Docker and then Kubernetes, and really a lot of traction. Wonder if you can reflect on what you see from your point of view and how that impacts what you guys are working on. >> Yeah, you know, it's interesting. We talk, everybody hears about containers constantly. Obviously it's a hot part of digital transformation. What's interesting about it though is most of those initiatives are being driven out of business lines. I spend a lot of time with the people who do infrastructure management, particularly the storage teams, the teams that have to support all of that data in the data center. And they're struggling, to be honest with you. These initiatives are coming at them from application developers, and they're being asked to figure out how to deliver the same level of SLAs, the same level of performance, governance, security, recovery times, availability. And it's a scramble for them, to be quite honest. They're trying to figure out how to automate their storage. They're trying to figure out how to leverage the investments they've made as they go through a digital transformation. And keep in mind, a lot of these initiatives are accelerating right now because of this global pandemic we're living through. I don't know that the strategy's necessarily changed, but there's been an acceleration. So all of a sudden these storage people are kind of trying to get up to speed or being thrown right into the mix. So we're working directly with them. You'll see in some of our announcements, we're helping them, you know, get on that journey and provide the infrastructure their teams need. >> And a lot of this is driven by multicloud and hybrid cloud, which we're seeing, you know, a really aggressive move to. Before, it was kind of this rush to public cloud, and then everybody figured out, "Well maybe public cloud isn't necessarily right for everything." And it's kind of this horses for courses, if you will, with multicloud and hybrid cloud, another kind of complexity thrown into the storage mix that you guys have to deal with. >> Yeah, and that's another big challenge. Now in the early days of cloud, people were lifting and shifting applications trying to get lower capex. And they were also starting to deploy DevOps in the public cloud in order to improve agility.
And what they found is there were a lot of challenges with that, where they thought lifting and shifting an application will lower their capital costs the TCO actually went up significantly. Where they started building new applications in the cloud. They found they were becoming trapped there and they couldn't get the connectivity they needed back into their core applications. So now we're at this point where they're trying to really, transform the rest of it and they're using containers, to modernize the rest of the infrastructure and complete the digital transformation. They want to get into a hybrid cloud environment. What we found is, enterprises get two and a half X more value out of the IT when they use a hybrid multicloud infrastructure model versus an all public cloud model. So what they're trying to figure out is how to piece those different components together. So you need a software-driven storage infrastructure that gives you the flexibility, to deploy in a common way and automate in a common way, both in a public cloud but on premises and give you that flexibility. And that's what we're working on at IBM and with our colleagues at Red Hat. >> So Eric, you've been in the business a long time and you know, it's amazing as it just continues to evolve, continues to evolve this kind of unsexy thing under the covers called storage, which is so foundational. And now as data has become, you know, maybe a liability 'cause I have to buy a bunch of storage. Now it is the core asset of the company. And in fact a lot of valuations on a lot of companies is based on its value, that's data and what they can do. So clearly you've got a couple of aces in the hole you always do. So tell us what you guys are up to at IBM to take advantage of the opportunity. >> Well, what we're doing is we are launching, a number of solutions for various workloads and applications built with a strong container element. For example, a number of solutions about modern data protection cyber resiliency. In fact, we announced last year almost a year ago actually it's only a year ago last week, Sam and I were on stage, and one of our developers did a demo of us protecting data in a container environment. So now we're extending that beyond what we showed a year ago. We have other solutions that involve what we do with AI big data and analytic applications, that are in a container environment. What if I told you, instead of having to replicate and duplicate and have another set of storage right with the OpenShift Container configuration, that you could connect to an existing external exabyte class data lake. So that not only could your container apps get to it, but the existing apps, whether they'll be bare-metal or virtualized, all of them could get to the same data lake. Wow, that's a concept saving time, saving money. One pool of storage that'll work for all those environments. And now that containers are being deployed in production, that's something we're announcing as well. So we've got a lot of announcements today across the board. Most of which are container and some of which are not, for example, LTO-9, the latest high performance and high capacity tape. We're announcing some solutions around there. But the bulk of what we're announcing today, is really on what IBM is doing to continue to be the leader in container storage support. >> And it's great, 'cause you talked about a couple of very specific applications that we hear about all the time. 
One obviously on the big data and analytics side, you know, as that continues to kind of chase that goal of ultimately getting the right information to the right people at the right time so they can make the right decision. And the other piece you talked about was business continuity and data replication, and bringing people back. And one of the hot topics we've talked to a lot of people about now is kind of this shift in the security threat around ransomware, and the fact that these guys are a little bit more sophisticated and will actually go after your backup before they let you know that they're into your primary storage. So these are two really important market areas where we see continued activity from all the people that we talk to every day. You must be seeing the same thing. >> Absolutely we are indeed. You know, containers are the wave. I'm a native Californian and I'm coming to you from Silicon Valley, and you don't fight the wave, you ride it. So at IBM we're doing that. We've been the leader in container storage. We, as you know, way back when, invented the hard drive, which is the foundation of almost this entire storage industry, and we were responsible for that. So we're making sure that as containers are the coming wave, we are riding that in and doing the right things for our customers, for our channel partners that support those customers, whether they be existing customers, and obviously, with this move to containers, there are going to be some people searching for probably a new vendor. And that's something that's going to go right into our wheelhouse because of the things we're doing. And some of our capabilities, for example, with our FlashSystems, with our Spectrum Virtualize, we're actually going to be able to support CSI snapshots not only for IBM Storage, but our Spectrum Virtualize product supports over 500 different arrays, most of which aren't ours. So if you've got that old EMC VNX2 or that HPE 3PAR or a Nimble or all kinds of other storage, if you need CSI snapshot support, you can get it from IBM, with our Spectrum Virtualize software that runs on our FlashSystems, which of course cuts capex and opex in a heterogeneous environment, but gives them that advanced container support that they don't get because they're on an older product from, you know, another vendor. We're making sure that we can pull our storage and even our competitor storage into the world of containers and do it in the right way for the end user.
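For readers who want to see what that CSI snapshot support looks like from the application side, here is a minimal, illustrative sketch of a Kubernetes snapshot request. The driver, class, and claim names are placeholder assumptions, not IBM defaults; in practice the snapshot class is supplied by whichever CSI driver fronts the array.

# Illustrative only: the names below are assumptions for this sketch.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: block-snapclass
driver: example.csi.vendor.com      # hypothetical CSI driver name
deletionPolicy: Delete
---
# Request a point-in-time copy of an existing PersistentVolumeClaim.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: orders-db-snap
  namespace: prod
spec:
  volumeSnapshotClassName: block-snapclass
  source:
    persistentVolumeClaimName: orders-db-data   # the PVC being snapshotted

Once the snapshot is bound, it can be restored by creating a new claim whose dataSource references the VolumeSnapshot; that primitive is the building block behind the container-native backup workflows discussed here.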
And you know, an example you'll see in this announcement is the integration of our data protection portfolio with their container native storage. We allow you to in any environment, take a snapshot of that data. You know, this move towards modern data protection is all about a movement to doing data protection in a different way which is about leveraging snapshots, taking instant copies of data that are application aware, allowing you to reuse and mount that data for different purposes, be able to protect yourself from ransomware. Our data protection portfolio has industry leading ransomware protection and detection in it. So we'll actually detect it before it becomes a problem. We're taking that, industry leading data protection software and we are integrating it into Red Hat, Container Native Storage, giving you the ability to solve one of the biggest challenges in this digital transformation which is backing up your data. Now that you're moving towards, stateful containers and persistent storage. So that's one area we're collaborating. We're working on ensuring that our storage arrays, that Eric was talking about, that they integrate tightly with OpenShift and that they also work again with, OpenShift Container Storage, the Cloud Native Storage portfolio from, Red Hat. So we're bringing these pieces together. And on top of that, we're doing some really, interesting things with licensing. We allow you to consume the Red Hat Storage portfolio along with the IBM software-defined Storage portfolio under a single license. And you can deploy the different pieces you need, under one single license. So you get this ultimate investment protection and ability to deploy anywhere. So we're, I think we're adding a lot of value for our customers and helping them on this journey. >> Yeah Eric, I wonder if you could share your perspective on multicloud management. I know that's a big piece of what you guys are behind and it's a big piece of kind of the real world as we've kind of gotten through the hype and now we're into production, and it is a multicloud world and it is, you got to manage this stuff it's all over the place. I wonder if you could speak to kind of how that challenge you know, factors into your design decisions and how you guys are about, you know, kind of the future. >> Well we've done this in a couple of ways in things that are coming out in this launch. First of all, IBM has produced with a container-centric model, what they call the Multicloud Manager. It's the IBM Cloud Pak for multicloud management. That product is designed to manage multiple clouds not just the IBM Cloud, but Amazon, Azure, et cetera. What we've done is taken our Spectrum Protect Plus and we've integrated it into the multicloud manager. So what that means, to save time, to save money and make it easier to use, when the customer is in the multicloud manager, they can actually select Spectrum Protect Plus, launch it and then start to protect data. So that's one thing we've done in this launch. The other thing we've done is integrate the capability of IBM Spectrum Virtualize, running in a FlashSystem to also take the capability of supporting OCP, the OpenShift Container Platform in a Clustered environment. So what we can do there, is on-premise, if there really was an earthquake in Silicon Valley right now, that OpenShift is sitting on a server. The servers just got crushed by the roof when it caved in. So you want to make sure you've got disaster recovery. 
So what we can do is take that OpenShift Container Platform Cluster, we can support it with our Spectrum Virtualize software running on our FlashSystem, just like we can do heterogeneous storage that's not ours; in this case, we're doing it with Red Hat. And then what we can do is provide disaster recovery and business continuity to different cloud vendors, not just to IBM Cloud, but to several cloud vendors. We can give them the capability of replicating and protecting that Cluster to a cloud configuration. So if there really was an earthquake, they could then go to the cloud, they could recover that Red Hat Cluster to a different data center and run it on-prem. So we're not only doing the integration with the multicloud manager, which is multicloud-centric, allowing ease of use with our Spectrum Protect Plus, but in case of a really tough situation of fire in a data center, earthquake, hurricane, whatever, the Red Hat OpenShift Cluster can be replicated out to a cloud with our Spectrum Virtualize software. So in both cases, multicloud examples, because in the first one of course the multicloud manager is designed for and does support multiple clouds. In the second example, we support multiple clouds with our Spectrum Virtualize for Public Cloud software, so you can take that OpenShift Cluster, replicate it, and not just deal with one cloud vendor but with several. So showing that multicloud management is important and then leveraging that in this launch with a very strong element of container centricity. >> Right >> Yeah, I just want to add, you know, and I'm glad you brought that up Eric, this whole multicloud capability with the Spectrum Virtualize. And I could see the same for our Spectrum Scale Family, which is our storage infrastructure for AI and big data. We actually, in this announcement, have containerized the client, making it very simple to deploy in a Kubernetes Cluster. But one of the really special things about Spectrum Scale is its active file management. This allows you to build out a file system not only on-premises for your Kubernetes Cluster, but you can actually extend that to a public cloud and it automatically will extend the file system. If you were to go into a public cloud marketplace, which it's available in more than one, you can go in there and click deploy, for example, in AWS Marketplace; click deploy and it will deploy your Spectrum Scale Cluster. You've now extended your file system from on-prem into the cloud. If you need to access any of that data, you can access it and it will automatically cache it locally, and we'll manage all the file access for you.
As you know, they both have all their operational data which they've always had, but now they've got all this unstructured data that's coming in like crazy, and all data isn't created equal, as you said. And if there is an earthquake or there is a ransomware attack, you need to be smart about what you have available to bring back quickly, and maybe what's not quite so important. >> Well, I think the key thing, let me go to, you know, a couple of modern data protection terms. These are two very technical terms: one is the recovery time. How long does it take you to get that data back? And the second one is the recovery point: at what point in time are you recovering the data from? And the reason those are critical is when you look at your datasets, whether you replicate, you snap, you do a backup, the key thing you've got to figure out is what is my recovery time? How long is it going to take me? What's my recovery point? Obviously in certain industries you want to recover as rapidly as possible, and you also want to have the absolute most recent data. So then once you know what it takes you to do that, okay, from an RPO and an RTO perspective, recovery point objective, recovery time objective, once you know that, then you need to look at your datasets and look at what does it take to run the company if there really was a fire and your data center was destroyed. So you take a look at those datasets, you see what are the ones that I need to recover first to keep the company up and rolling. So let's take an example, the sales database or the support database. I would say those are pretty critical to almost any company, whether you be a high-tech company, whether you be a furniture company, whether you be a delivery company. However, there also is probably a database of assets. For example, IBM is a big company. We have buildings all over, well, guess what? We don't lease a chair or a table or a whiteboard. We buy them. Those are physical assets that the company has to pay, you know, do write downs on and all this other stuff; they need to track it. If we close a building, we need to move the desks to another building. Even if we're leasing a building now, the furniture is ours, right? So does an asset database need to be recovered instantaneously? Probably not. So we should focus on another thing. So let's take a bank. Banks are both online and brick and mortar. I happen to be a Wells Fargo person. So guess what? There's Wells Fargo banks, two of them in the city I'm in, okay? So the assets are the money in this case; I don't mean the brick and mortar of the Wells Fargo building or the desks in there, but now you're talking financial assets or their high velocity trading apps. Those things need to be recovered almost instantaneously. And that's what you need to do when you're looking at datasets: figure out what's critical to the business to keep it up and rolling, and what's the next most critical. And you do it in basically the way you would tier anything. What's the most important thing, what's the next most important thing. It doesn't matter how you approach your job, how you used to approach school, what are the classes I have to get an A in and what classes can I not get an A in; depending on what your major was, all that sort of stuff, you're setting priorities, right? And the dataset, since data is the most critical asset of any company, whether it's a Global Fortune 500 or whether it's Herzog Cigar Store, all of those assets, that data is the most valuable.
So you've got to make sure you recover what you need as rapidly as you need it. But you can't recover all of it; there's just no way to do that. So that's why you really rank the importance of the data, the same way with malware and ransomware. If you have a malware or ransomware attack, certain data you need to recover as soon as you can. So, for example, in fact there was one, Jeff, here in Silicon Valley as well. You've probably read about how the University of California, San Francisco ended up having to pay over a million dollars of ransom because of some of the data related to COVID research. The University of California, San Francisco is the health care center for the University of California in Northern California. They are working on COVID and guess what? The stuff was held for ransom. They had no choice but to pay them. And they really did pay; this was around the end of June of this year. So, okay, you don't really want to do that. >> Jeff: Right >> So you need to look at everything from malware and ransomware, the importance of the data. And that's how you figure this stuff out, whether it be in a container environment, a traditional environment or a virtualized environment. And that's why data protection is so important. And with this launch, not only are we doing the data protection we've been doing for years, but now taking it to the heart of the new wave, which is the wave of containers. >> Yeah, let me add just quickly on that, Eric. So think about those different cases you talked about. For your mission critical data, you're going to want snapshots of that data that can be recovered near instantaneously. And then, for some of your data, you might decide you want to store it out in the cloud. And with Spectrum Protect, we just announced our ability to now store data out in Google Cloud, in addition to AWS, Azure, IBM Cloud, and various on-prem object stores that we already supported. So we already provided that capability. And then we're in this announcement talking about LTO-9. And you've got to also be smart about which data do you need to keep according to regulation for long periods of time, or is it just important to archive? You're not going to beat the economics nor the safety of storing data out on tape. But like Eric said, if all of your data is out on tape and you have an event, you're not going to be able to restore it quickly enough, at least the mission critical things. And so those are the things that need to be in snapshots. And that's one of the main things we're announcing here for Kubernetes environments: the ability to quickly snapshot application-aware backups of your mission critical data in your Kubernetes environments, so it can very quickly be recovered.
I mean, you guys have a, I'm sure a product roadmap that's baked pretty far and advanced, but I wonder if you can speak to, you know, from your perspective, as COVID has accelerated digital transformation you guys are so foundational to executing that, you know, kind of what is it done in terms of what you're seeing with your customers, you know, kind of the demand and how you're seeing this kind of validation as to an accelerant to move to these better types of architectures? Let's start with you Sam. >> Yeah, you know I, and I think i said this, but I mean the strategy really hasn't changed for the enterprises, but of course it is accelerating it. And I see storage teams more quickly getting into trouble, trying to solve some of these challenges. So we're working closely with them. They're looking for more automation. They have less people in the data center on-premises. They're looking to do more automation simplify the management of the environment. We're doing a lot around Ansible to help them with that. We're accelerating our roadmaps around that sort of integration and automation. They're looking for better visibility into their environments. So we've made a lot of investments around our storage insights SaaS platform, that allows them to get complete visibility into their data center and not just in their data center. We also give them visibility to the stores they're deploying in the cloud. So we're making it easier for them to monitor and manage and automate their storage infrastructure. And then of course, if you look at everything we're doing in this announcement, it's about enabling our software and our storage infrastructure to integrate directly into these new Kubernetes, initiatives. That way as this digital transformation accelerates and application developers are demanding more and more Kubernetes capabilities. They're able to deliver the same SLAs and the same level of security and the same level of governance, that their customers expect from them, but in this new world. So that's what we're doing. If you look at our announcement, you'll see that across, across the sets of capabilities that we're delivering here. >> Eric, we'll give you the last word, and then we're going to go to Eric Cigar Shop, as soon as this is over. (laughs) >> So it's clearly all about storage made simple, in a Kubernetes environment, in a container environment, whether it's block storage, file storage, whether it be object storage and IBM's goal is to offer ever increasing sophisticated services for the enterprise at the same time, make it easier and easier to use and to consume. If you go back to the old days, the storage admins manage X amount of gigabytes, maybe terabytes. Now the same admin is managing 10 petabytes of data. So the data explosion is real across all environments, container environments, even old bare-metal. And of course the not quite so new anymore virtualized environments. The admins need to manage that more and more easily and automated point and click. Use AI based automated tiering. For example, we have with our Easy Tier technology, that automatically moves data when it's hot to the fastest tier. And when it's not as hot, it's cool, it pushes down to a slower tier, but it's all automated. You point and you click. Let's take our migration capabilities. We built it into our software. I buy a new array, I need to migrate the data. You point, you click, and we automatic transparent migration in the background on the fly without taking the servers or the storage down. 
And we always favor the application workload. So if the application workload is heavy at certain times a day, we slow the migration. At night for sake of argument, If it's a company that is not truly 24 by seven, you know, heavily 24 by seven, and at night, it slows down, we accelerate the migration. All about automation. We've done it with Ansible, here in this launch, we've done it with additional integration with other platforms. So our Spectrum Scale for example, can use the OpenShift management framework to configure and to grow our Spectrum Scale or elastic storage system clusters. We've done it, in this case with our Spectrum Protect Plus, as you saw integration into the multicloud manager. So for us, it's storage made simple, incredibly new features all the time, but at the same time we do that, make sure that it's easier and easier to use. And in some cases like with Ansible, not even the real storage people, but God forbid, that DevOps guy messes with a storage and loses that data, wow. So by, if you're using something like Ansible and that Ansible framework, we make sure that essentially the DevOps guy, the test guy, the analytics guy, basically doesn't lose the data and screw up the storage. And that's a big, big issue. So all about storage made simple, in the right way with incredible enterprise features that essentially we make easy and easy to use. We're trying to make everything essentially like your iPhone, that easy to use. That's the goal. And with a lot less storage admins in the world then there has been an incredible storage growth every single year. You'd better make it easy for the same person to manage all that storage. 'Cause it's not shrinking. It is, someone who's sitting at 50 petabytes today, is 150 petabytes the next year and five years from now, they'll be sitting on an exabyte of production data, and they're not going to hire tons of admins. It's going to be the same two or four people that were doing the work. Now they got to manage an exabyte, which is why this storage made simplest is such a strong effort for us with integration, with the Open, with the Kubernetes frameworks or done with OpenShift, heck, even what we used to do in the old days with vCenter Ops from VMware, VASA, VAAI, all those old VMware tools, we made sure tight integration, easy to use, easy to manage, but sophisticated features to go with that. Simplicity is really about how you manage storage. It's not about making your storage dumb. People want smarter and smarter storage. Do you make it smarter, but you make it just easy to use at the same time. >> Right. >> Well, great summary. And I don't think I could do a better job. So I think we'll just leave it right there. So congratulations to both of you and the teams for these announcement after a whole lot of hard work and sweat went in, over the last little while and continued success. And thanks for the, check in, always great to see you. >> Thank you. We love being on theCUBE as always. >> All right, thanks again. All right, he's Eric, he was Sam, I'm I'm Jeff, you're watching theCUBE. We'll see you next time, thanks for watching. (upbeat music)

Published Date : Nov 2 2020


Todd Wilson & Shea Phillips - Red Hat Summit 2017


 

>> Important place in that history right now is that we're-- >> Announcer: Live from Boston, Massachusetts, it's theCUBE covering Red Hat Summit 2017 brought to you by Red Hat. >> Welcome back to theCUBE's coverage of the Red Hat Summit here in beautiful Boston, Massachusetts. I'm your host Rebecca Knight. I'm joined by Todd Wilson and Shea Phillips of the BC Developers Exchange. Thanks so much for joining us today. >> Thanks for having us. >> So the BC Developer's Exchange, you described it to me before the cameras were rolling as helping the British Colombian government think differently. Talk a little, explain, unpack that a bit for our viewers. >> Sure, so it's been a journey for us. We've evolved over awhile, so we've been going for about three years now. What we wanted to do, we recognized that government had fallen behind in its technology practices and technology utilization and we were trying to participate in the tech industry that's growing in BC and we were finding that it was a pretty big gap in understanding. We didn't really speak the same language, we didn't really understand what their needs were, they didn't understand how to work with us and so we started exploring ways to connect better. So one of the things we recognized that we had on our side was technology assets of data. We have tons and tons of data that's valuable to the tech industry to use for their apps. So we first started by opening up that data and then realizing that just open data is part of the story. We need APIs so providing API access and that was just kind of part of the story. We needed to actually start collaborating on solutions. So then we brought the Province into GitHub and we're doing open source collaboration on GitHub and it's kind of morphed into a much bigger picture than we originally started with but it's been a really exciting way to work. >> And your realization that the government was a little bit behind here or you were working in a different track than the government, that's not uncommon, wouldn't you think? The government is not known for innovative practices. So did it take, did it take some persuasion on your part? >> I think that you know, it's mixed. So there are certainly factions within the government that there's a bit of pent up demand, right? So there are people who are very quick to kind of get on the train and then there are other groups who do need convincing and it's kind of a work in progress. So we're building collaboration across government all the time but we certainly didn't have trouble finding people within government and within the tech community who wanted to come along with us. >> So talk about some of the projects that you're working on to make government run better. >> Sure, so there's a couple of examples of how moving into the open source just made sense for government. One example that we've used in a sort of why GitHub makes sense for what we're doing, the Environmental Reporting Branch of the Ministry of the Environment is responsible every year for producing a report on the water quality, air quality, all the basic things that the environmentalists you know, care about and all of the different universities and academic institutions consume this report and then do their analysis on it. One of the things that was always a challenge is there was always kind of wondering, are these numbers cooked? Are you guys actually reporting on the actual findings or are you cleaning it up a little bit? 
So what the Environmental Reporting Office was able to do is they published the code on GitHub, the data in our Open Data Catalog and it was all there 100% transparent for anybody to recreate the results. So they could download the code, have it running on their laptop. They could download the data, bring it in and run the numbers. What ended up happening after a few months, they got an issue in GitHub. Somebody created an issue, said it's broken, it's not working, I can't get it to go and a little bit of investigation and they found out that the nature of the data, one of the datasets they were using had changed. So it broke the program and so the developer that was responsible for it wasn't going to fix that until next year, next time to run the report. So he said thanks for pointing out the error but you know, I'll be fixing that next year and a day or two went by and all of a sudden out of nowhere he got a pull request in GitHub. The guy who discovered the issue actually went away on the weekend and fixed the code himself and said here, I fixed it for you, it's all ready to go. And so that's sort of that whole community spirit that just starts to grow naturally when citizens can engage with government on such a personal level and work on something together and collaborate in a space that previous to that had been kind of adversarial. There wasn't a lot of trust there, there wasn't sort of that good feeling of are we getting the right information? All of a sudden to turn into a real collaborative partnership, that's the model that we want to see. >> Well I'm wondering if we could turn that example into a real metaphor for what we'd like to see overall with a more engaged citizenry who is people who want to work alongside or with government to solve these problems. >> Exactly yeah, we're all living in the same space. We're all using the same resources. You know, the government is there for the citizens and it's by the citizens, so to be able to work together and work openly is a real strength, real power play. >> So that environmental code that you just gave was a great example. Talk about some other ways that you're working with the government. >> So one example that we have is sort of in an internal sharing scenario. So previously when applications were built within gov, there wasn't an easy way for applications to be shared across different ministries or agencies. So they'd get built and they'd kind of get locked away and used for that one particular business function. What we've been able to do with GitHub and by having shared code is to have projects come along and actually borrow what's been done already and repurpose those applications and that gives them a great starting point. So there's a lot of common things that every application would have to figure out and so by having these starter kits essentially, development teams can get a leg up on taking on new projects and so that reduces the time to market and the cost ultimately and also makes things a little more consistent. >> And what about the project you did with the highways? >> Okay, so that was one where there was a collaboration on a standard for reporting of road incidents. So it's called Open 511 and so this was an international standard that was being developed. So there's various States in the US and Provinces in Canada and a couple of other international jurisdictions that collaborated on this specification for highway event APIs so that data could be shared easily. 
So the Ministry of Transportation in BC participated in that and collaborated and contributed to it but then they also exposed their data using these APIs. But then they didn't end up building anything on it, they just kind of said here, it's available to use. Go figure it out. So what we really wanted to do there is it's really not the government's job to be building all of the end product apps. We're kind of the resource store for the building blocks and then what ended up happening, an opportunity got recognized by a mobile app developer in Victoria, they saw an opportunity to take these APIs and build a little notification app so that if you put your route in, it'll ping you notifications if there's obstructions or traffic or whatever may have you and show you the webcam image that is on your route. So a really interesting solution that gov never would have built. Like we would never have built a mobile app for that. >> Do you, how do you ensure security? That's one of the biggest themes of this conference is making sure the data is in fact secure, it's what you hear over and over again as a big concern. How do you address that? >> Do you want to, oh yeah I was getting to that. So we have a data center that we run in partnership with HP and the data resides on premise in that data center. What we're using Red Hat OpenShift Container Platform is sort of all the front end facing interfaces would go through OpenShift. So when people are accessing the data, the access in controlled through gateways and however projects get set up in order to control that access. Meanwhile the data is still sitting securely in the network zone back at the mother ship. So what we've found with the OpenShift Container Platform is the developers don't necessarily need to worry about a lot of the tactical policies and network policies that are part of that security standard because that's handled by the platform. When we build OpenShift, we built it compliant to all those policies and so developers can come in to the platform, just start working and as long as they're not punching out data that has personal information out to the internet, you know of course there's things they could do wrong, but as long as they're using the platform as it was intended, they're compliant right from day one. >> In terms of recruiting and retaining talented developers and talented technologists, do you find that a challenge? I mean as we said before, you don't necessarily think of the government as this hotbed of innovation and creativity. Is it difficult to get the best and the brightest to come work for you? >> I think that was actually part of the strategy around adopting tools like containers and open source was actually to make gov more compatible with the IT market. So using the same tools that the private sector uses, so there's a more seamless transition from a recruiting perspective and people can, you know they're not sort of going back in time when they go and work with government. So that was definitely a deliberate part of the strategy. >> So it's the tools but then also the projects. Are you finding coders and engineers who are, who want to dig into these projects? >> They do but we want to work with them in a different way. So we don't necessarily want every developer to be a gov employee. That's really not the model. We would never scale properly that way. So what we've done is we've created a new procurement method. So in government, procurement is hard like it is in a lot of enterprises. 
Contracts and all of these things get complicated and take time and you have to wait maybe a few months before you actually get the resource that you need. So what we've done is shortened that timeline down as much as we can and also micro-sized the work as much as we can. So if a project is running on GitHub and they have an issue, they can post that issue and put a dollar sign associated with it from 1,000 to $10,000 and kind of do a bounty and say hey development community, we want this fixed, can you do it? So developers can engage with that. They can write a short proposal, 100 words or less of what they will do and then if they get assigned the work and we accept the pull request, we will pay them using PayPal or write them a check or however they want right on the spot. So we can go end-to-end from problem, proposal, code and solution literally in a couple of days whereas before that would have taken a few months and the engagement would have been much larger and much more expensive. >> And are you finding that that is in fact having the impact you want in terms of the workforce that you're trying to attract? >> Yeah, Shea, you want to? >> Yeah, I think there's definitely been interest in the private sector, kind of independent freelance developers are generally pretty excited about this and some of them are downright shocked to see that this is such a progressive thing that the gov has undertaken. >> Yeah, we've had comments from developers saying oh, I never knew working with gov was this easy and that's the way we like to hear it. >> And hopefully it will become easier, too. We think about the government and the technology industry not necessarily working together, particularly when it comes to this new digital world that we're living in and we hear so much about the benefits of automation but also the fact that automation is going to have a big impact on jobs. Do you think that the government and tech need to be thinking together about the effects of this and working together to make sure that we aren't seeing more displaced workers? >> Absolutely, I mean I think we're, you know no one has a crystal ball. Nobody can tell what's going to happen but if we don't start thinking proactively about some of these issues, workforce issues, we're going to be caught flat-footed and so one of the things that we've been trying to prove along is automation doesn't necessarily mean losing jobs and so we've been trying to explore what the workforce shift looks like. So what we find within the little corner of sort of DevOps automation that we're doing is it's not that we're taking jobs away from people, we're just moving them to a different part of the value stream. So they're usually moving further up the value stream closer to the business so that they're actually much more engaged with the day-to-day business of gov and less engaged just with the tech and the plumbing. So by moving automation in, we're actually connecting the business and the technology closer together. >> What are some of the future projects that you envisage working closely with the government to change the way citizens engage with government? >> Sure, we've got a couple of big projects coming up where we are looking at different models of reaching citizens in meaningful ways. So there's a sort of personalized service or some kind of citizen dashboard, however you want to phrase that. That's one of the things that's on our wish list of wouldn't it be great if. 
We also have partnerships that we're looking to explore in different areas with sort of big data and data analytics. Because government has so much rich resource data, we're looking for ways to get that out and get that available but one of the challenges is just the sheer size of it. So the big data equation and big data analytics are very interesting things for us in the future because if we can provide expertise in that area, then tech sector and industry partners can come and participate with that data and just make it better. >> Well thank you so much for joining us Todd and Shea, I appreciate your time. >> Great, thank you. >> We'll be back with more of theCUBE's coverage of the Red Hat Summit 2017 after this. (up tempo electronic tones)

Published Date : May 3 2017


Andrius Benokraitis, Red Hat - Red Hat Summit 2017


 

>> Red Hat OpenShift Container Platform >> Announcer: Live from Boston, Massachusetts, it's theCUBE covering Red Hat Summit 2017. Brought to you by Red Hat. >> Welcome back to theCUBE's coverage. I'm Rebecca Knight, your host, here with Stu Miniman. Our guest now is Andrius Benokraitis, he is the Principal Product Manager for Ansible Network Automation at Red Hat, thanks so much Andrius. >> Thanks for having me, I appreciate it. >> This is your first time on the program. >> Andrius: First time. >> We're nice, >> Really nervous, so, okay. >> We don't bite. >> Start a little bit with, you're new to the company, relatively, >> Andrius: Relatively. >> a networking guy by background, can you give us a little bit about your background. >> Sure, I mean, I actually started at Red Hat in 2003, and then did about four or five jobs there for about 11 years. And then jumped, went to a startup named Cumulus Networks for about two years. Great crew, and then, now I'm at Ansible, been there since about December, so working on the Network Automation use case for Ansible. >> Alright, so networking has a little bit of coverage here. I remember, you know, something like the OpenDaylight stuff, and actually there are a couple of Red Hatters that I interviewed at one show who ended up forming a company that got bought by Docker, so you know, there's definitely networking people, but maybe give us a broad view of where networking fits into this stuff that you're working on specifically. >> Yeah, sure thing. I think it's interesting to point out that as everything started on the compute side, and everything started to get disaggregated, the networking side has come along for the ride, per se. It's been a little bit behind. When we talk about networking a lot of people just think automatically that's the end. And we're actually trying to think a little bit lower level, so layer one, layer two, layer three, so switching, routing, firewalls, load balancers, all those things are still required in the data center. And when people started using Ansible, it started five years ago on the compute side, a lot of the people started saying, I need to run the whole rack, and I'm not a CCIE, and I don't really know what to do there, but I've been thrown in to do something, I'm a cloud admin, the new title, right. I have to run the network, so what do I do? I don't know anything about networking, I'm just trying to be good enough. Well, I know Ansible, so why don't I just treat switches like servers, and just treat them like what I know; they just have a lot more interfaces, but you just treat it that way. So a lot of the expertise came from the ground up with the opensource model and said this is the new use case. >> Well, JR Rivers, the founder of Cumulus, it's like networking will just be a Linux operating model, you know, extended to the network, which is always like, hey, sounds like a company like Red Hat should be doing that kind of stuff. >> Exactly, it's interesting to see a Bash prompt in the networking, right; it's familiar to a lot of people in the DevOps space, absolutely. >> So it's a very rapidly changing time, as we know, in this digital computing age. The theme of this conference is the power of the individual, celebrating that individual, the developer, empowering the developers to take risks, be able to fail, make changes, modify. You're not a developer, but you manage developers, you lead developers, how do you work on creating that context that Jim Whitehurst talked about today.
>> I think it starts with, the true empowerment, you have the majority of the networking platforms are still proprietary and walled off, walled off gardens, they're black boxes you can't really do much with them, but you still have the ability to SSH into them, you have familiar terms and concepts from the server side in the networking side. So as long as you have SSH in the box and you know your CLI commands to make changes, you can utilize that in part of Ansible to generate larger abstractions to use the play books in order to build out your data center, with the terms and the Lexicon of YAML, the language of Ansible, things that you already know and utilizing that and going further. >> Can you speak to us a little bit about customers, you know, what's holding them back, how are you guys moving them forward to the more agile development space? >> Our customers are mostly brownfield, they're trying to extend what they already have. They have all their gear, they have everything they have that they need but they're trying to do things better. >> I don't find greenfield customers when it comes to the network side of the house, I mean we've all got what I have and we knew that IT's always additive, so, I mean that's got to be a challenge. >> It's a huge challenge. >> Something you can help with right? >> It's a huge challenge, and I think from the network operators and network engineers, a lot of them are saying, again, they're looking at their friends on the compute side, and they can spin up VMs and provision hardware instantaneously, but why does it have to take four to six weeks to provision a VLAN or get a VLAN added to a network switch? That sounds ridiculous, so a lot of the network engineers and operators are saying, well I think I can be as agile as you, so we can actually work together, using a common framework, common language with Ansible, and we can get things done, and we can get all of this stuff I hate doing, and we don't have to do that anymore, we can worry about more important things in our network, like designing the next big thing, if you want to do BGP, design your BGP infrastructure, you want to move from a layer two to a layer three or an SDN solution. >> I love that you talk about everybody, kind of the software wave and breaking down silos, network and storage people are like, oh my God, you're taking my job away. >> Exactly, completely, no, we're not taking your job. We are augmenting what you already have. We're giving you more tools in your tool belt to do better at your job, and that's truly it, we don't have to, people can be smarter so, if you want to add a VLAN, that can be a code snippet created by the sys admin, it can be in Git, and then the network engineer can say, oh yeah, that looks good, and then I just say, submit. What we see today with some of the customers is, yeah, I want to automate, I really want to automate, and you say, great, let's automate. But then you start getting, you peel back the onion, and you start seeing that, well, how are you managing your inventory, how are you managing your endpoints. And they're like, I have a spreadsheet? And you're like, as a networking guy I guess you, (excited clamoring) >> Networking is scary for a lot, >> It's super scary, yeah. >> So how, do you break that down? >> You do what you can, you do it in small pieces, we're not trying to change the world, we're not trying to say, you're going to go 100% devops in the network. 
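To make the switches-as-servers idea concrete, here is a minimal, illustrative sketch of the kind of playbook Andrius describes for the VLAN case; the host group, the Cisco IOS platform, and the VLAN values are assumptions for the example, not details from the interview, and it presumes a reasonably recent Ansible.

# Illustrative sketch only: group name, platform, and VLAN values are assumptions.
---
- name: Ensure VLAN 120 exists on the access switches
  hosts: access_switches
  gather_facts: no
  connection: network_cli

  tasks:
    - name: Create VLAN 120 and give it a name
      ios_config:                  # cisco.ios.ios_config in collection-based Ansible
        parents: vlan 120
        lines:
          - name web-tier
        save_when: changed         # only write the config if something changed

A snippet like this can live in Git alongside the inventory, so the review-and-submit flow described above becomes a pull request plus a playbook run rather than a manually typed change on the switch CLI.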
Start small, start with something, again, that you really hate doing, something really low risk; just start small, with low-risk things. And then you can propagate that as you start getting confidence, and you start getting the knowledge, and the teams, and everyone has to be bought in, by the way. This is not something you just go in and say, go do it. You have to have everyone on board, the entire organization; it can't be just bottom up, it can't be just top down, everyone has to be on board. >> And Andrius, when I talk to people in the networking space, risk is the number one thing they're worried about. They buy on risk, they build on risk, and the problem we have with networks is there are too many things that are manual. So if I'm typing in some, you know, 16-digit hexadecimal code, >> From Notepad, manually, you're copying and pasting, >> from like a spreadsheet. Copying and pasting, or gosh, so things like that, the room for error is too high. So there are things that we need to be able to automate, so that we don't have somebody that's tired or just, wait, was that a one or an L or an I? I don't know. So we understand that it actually should be able to reduce risk, increase security, all the things that the business is telling you. >> All these network vendors have virtual instances. You can do all your testing and deployment, all your testing of your infrastructure, and you can do everything in Jenkins and have all your networking switches virtually; you can have your whole data center in a virtual environment if you want. So if you talk about lower risk, instead of just copying and pasting, and oh, was that a slash 24 or a slash 16, oops, I mean, that looked right, but it was wrong, but did it go through test? It probably didn't. And then someone's going to get paged at three in the morning, and a router's down, an edge router's down, and you're toast. So it's enabling the full DevOps cycle of continuous integration, bringing in the same concepts that you have on the compute side, testing changes in a full cycle, and then doing that.
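The "did it go through test" point maps naturally onto running the same playbooks against virtual instances of the switches before production, with a CI system such as Jenkins gating on the result. A hedged sketch, assuming a lab inventory group of virtual IOS devices and a made-up static route:

---
- name: Exercise a candidate change against virtual lab switches
  hosts: lab_switches                 # hypothetical group of virtual devices
  gather_facts: no
  connection: ansible.netcommon.network_cli

  tasks:
    - name: Apply the candidate change to the lab device
      cisco.ios.ios_config:
        lines:
          - ip route 10.20.0.0 255.255.0.0 192.0.2.1
      diff: yes

    - name: Read back the static routes
      cisco.ios.ios_command:
        commands:
          - show ip route static
      register: routes

    - name: Fail the pipeline if the route never made it in
      ansible.builtin.assert:
        that:
          - "'10.20.0.0' in routes.stdout[0]"
        fail_msg: Candidate route is missing, do not promote this change

A Jenkins job only has to invoke ansible-playbook against the virtual topology and gate the production run on its exit code, which is exactly the compute-side CI pattern carried over to the network.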
They're like, I can use the same things for everything? I say, yeah, you can. The development processes, the innovation in the community, you know, for example, if you want the Cisco ACI module, it's in GitHub, it's in Cisco's GitHub, you can just go ahead and use that. Now we're starting to migrate those things into core. So the more that we get innovation in the community, and the more we have the vendors and the partners driving it, and you're seeing that today, you know, we have F5 here, we have Cisco, we have Juniper, we have Avi, all those people, you know, they have certified platforms with Ansible, with Ansible Core, which is going to be integrated with Ansible Tower, we have full buy-in from them. They want to meet with us and say, how can we do better? How can we innovate with you to drive the next-gen data centers with our products? >> You talked about yourself as a boomerang employee, what is the value in that, and are you seeing a lot of colleagues who are bouncing around and then coming back from ... >> Absolutely, I think pre-acquisition Ansible, the vast majority of the people, I believe, were ex-Red Hatters that went to Ansible. So it's really nice to come back home and understand the people that left, that came back, to understand already what the, >> And people feel that way, it's a coming home? >> Yeah, it's a coming home, it really is. They understand, you know, they came back, they understood the values of open source and the culture. Again, I started at Red Hat in 2003, I see the great things, I see new people getting hired, and I see the same things I saw back then, 2003, 2004, with all the great things that people are doing, and the culture. You know, Jim's done a great job at keeping the culture how it is, even from way back then, when there were only 400 people when I started. >> Andrius, extend that culture. I think about the network community and open source, and you know, you talk about, there's risk there, and you know, you think about, I grew up with kind of an enterprise infrastructure mentality, it's like, don't touch it, don't play with it. We always joked, I've got everything there, really don't walk by it, and definitely, you know, some zip tie or duct tape's going to come apart. Are we getting better, is networking embracing this? >> Yes, for sure. I think the nice thing is you start seeing these communities pop up. You're starting to see network operators and engineers, and historically, if they don't know the answer, they won't go find it. They may be kind of shy, shy to ask for help, per se. >> If it wasn't on their certification, >> Exactly. >> They weren't going to do it. >> If it wasn't there, I'm not going to go. We're bringing them in, so we have, whether it's Slack instances, there are networking communities, network automation communities, just for network automation. And there's an Ansible channel on the Network to Code Slack that has almost 800 people on it. So they're coming, and now they have a place, they have a safe place to ask questions. They don't have to kind of guess or say, you know what, I'm not going to do that. And now there's a safe place for network engineers to get into the NetDevOps space.
So the data can be the actual playbooks themselves, the golden master images, so you can pull configs from switches, and you can store them and use them for continuous compliance. You can say, you know, a rogue engineer might make a change, you know, configuration drift happens. But you need to be able to make those comparisons to the other versions. So we're utilizing things like Git, so your data strategy can be in the cloud, or it can be on your side, you can run Stash locally. For that part of the operations piece, you can use that. A second piece is log aggregation, which is a big piece of Ansible. When you actually want to make sure that a change happened, that it's been successful, and you want to ensure continuous compliance, all that data has to go somewhere, right? So you can utilize Ansible Tower as an aggregator, and you can go off using integrations like Splunk and some other log aggregation connectors with Ansible Tower, to help execute your data strategy with the partners that are really driving that, the people that know data and data structures, so we can use them. >> And one of the other issues is building the confidence to make decisions with all the data, are you working on that too with your team? >> Yes, we are working on that, and that's part of the larger Tower organization, so it goes beyond networking. So whatever networking gets, everyone else gets. When we started developing Ansible Core and the community and Ansible Tower in-house, we were thinking about networking and we were thinking about Windows, there's a huge opportunity there, and, you know, we're talking about AWS in the cloud. So cloud instances, these are all endpoints that Ansible can manage, and it's not just networking, so we have to make sure that all of the pieces, all of the endpoints, can be managed directly. Everyone benefits from that. >> Andrius, thank you so much for your time, we appreciate it. >> Thanks again for having me. >> I'm Rebecca Knight for Stu Miniman, thank you very much for joining us. We'll be back after this.
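The config-pull and drift-detection workflow described in the data-strategy answer can be sketched as a small backup playbook; the backup directory, filename pattern, and inventory group are hypothetical, and the resulting files would be committed to Git (or a local Stash instance) so that a plain diff exposes configuration drift.

---
- name: Snapshot running configs for continuous compliance
  hosts: all_switches                 # hypothetical inventory group
  gather_facts: no
  connection: ansible.netcommon.network_cli

  tasks:
    - name: Back up the running configuration of each device
      cisco.ios.ios_config:
        backup: yes
        backup_options:
          dir_path: ./config_backups          # directory tracked in Git/Stash
          filename: "{{ inventory_hostname }}.cfg"

Running this from Ansible Tower also gives the job logs a home, and those logs can be forwarded to Splunk or another aggregator for the audit side of the compliance story.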

Published Date: May 3, 2017
